\section{Introduction and Main results} Consider stable solutions of the following equation \begin{equation}\label{8LE} (-\Delta)^s u=|u|^{p-1}u\;\;\;\;\hbox{in}\;\;\;\;\; \R^n, \end{equation} where $(-\Delta)^s$ is the fractional Laplacian operator for $2<s<3$. \vskip0.1in The motivation for studying such an equation originates from the classical Lane-Emden equation \begin{equation}\label{LZ=000123} -\Delta u=|u|^{p-1}u \;\;\;\;\hbox{in}\;\;\;\;\; \R^n \end{equation} and its parabolic counterpart, which have played a crucial role in the development of nonlinear PDEs in the last decades. These equations arise in astrophysics and Riemannian geometry. The pioneering works on Eq. \eqref{LZ=000123} were contributed by R. Fowler \cite{Fowler=1,Fowler=2}. A later ground-breaking result on equation \eqref{LZ=000123} is the fundamental Liouville-type theorem established by Gidas and Spruck \cite{Gidas-Sp=1981}, who proved that Eq. \eqref{LZ=000123} has no positive solution whenever $p\in (1, 2^\ast-1)$, where $2^\ast=2n/(n-2)$ if $n\geq 3$ and $2^\ast=\infty$ if $n\leq 2.$ In the critical case $p=2^\ast-1$, Eq. \eqref{LZ=000123} has a unique positive solution up to translation and rescaling, which is radial and explicitly given, see Caffarelli-Gidas-Spruck \cite{Caffarelli1989}. Since then many experts in partial differential equations have devoted themselves to the above equations for various parameters $s$ and $p$. \vskip0.1in For the nonlocal case $0 < s < 1,$ a counterpart of the classification results of Gidas and Spruck \cite{Gidas-Sp=1981} and Caffarelli-Gidas-Spruck \cite{Caffarelli1989} holds for the fractional Lane-Emden equation \eqref{8LE}, see the works of Li \cite{YLi04} and Chen-Li-Ou \cite{Chen-Li-Ou}. In these cases, the Sobolev exponent is given by $P_S(n, s)=(n+2s)/(n-2s)$ if $n>2s$, and otherwise $P_S(n, s)=\infty$. \vskip0.1in Recently, for the nonlocal case $0<s<1$, Davila, Dupaigne and Wei \cite{Wei0=1} gave a complete classification of finite Morse index solutions of \eqref{8LE}; for the nonlocal case $1<s<2$, Fazly and Wei \cite{Wei1=2} gave a complete classification of finite Morse index solutions of \eqref{8LE}. For the local cases $s=1$ and $s=2$, such a classification was proved by Farina \cite{Farina2007} and by Davila, Dupaigne, Wang and Wei \cite{Wei=2}, respectively. For the case $s=3$, the Joseph-Lundgren exponent (for the triharmonic Lane-Emden equation) is obtained and the classification is proved in \cite{LWZ2016=3}. \vskip0.1in However, when $2<s<3$, the equation \eqref{8LE} has not been considered so far. In this paper we classify the stable solutions of \eqref{8LE}. \vskip0.23in There are many ways of defining the fractional Laplacian $(-\Delta)^s$ for a positive, noninteger number $s$. Caffarelli and Silvestre \cite{Caffarelli2007} characterized the fractional Laplacian for $0<s<1$ as the Dirichlet-to-Neumann map of a function $u_e$ satisfying a degenerate elliptic equation in the upper half space with one extra spatial dimension. This idea was later generalized by Yang \cite{Yang2100} to any positive, noninteger $s$, the extended function then solving a higher order equation. See also Chang-Gonzales \cite{Chang2011} and Case-Chang \cite{Case2015} for general manifolds.
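For readers who prefer a concrete picture, the spectral definition $\widehat{(-\Delta)^s u}(\xi)=|\xi|^{2s}\,\widehat{u}(\xi)$, recalled below, can be illustrated numerically on a periodic grid. The following one-dimensional sketch assumes \texttt{numpy} is available; it is purely illustrative and plays no role in what follows.
\begin{verbatim}
# Illustration of the spectral definition (-Delta)^s u = F^{-1}(|xi|^{2s} F u)
# on a periodic 1-D grid (an assumption of this sketch, not a construction
# used in the paper).
import numpy as np

def fractional_laplacian_1d(u, L=2 * np.pi, s=2.5):
    """Apply (-Delta)^s to periodic samples of u on [0, L)."""
    n = u.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # discrete frequencies
    return np.fft.ifft(np.abs(xi) ** (2 * s) * np.fft.fft(u)).real

# Sanity check: for u(x) = cos(kx) one has (-Delta)^s u = k^{2s} cos(kx).
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
k, s = 3, 2.5
err = np.max(np.abs(fractional_laplacian_1d(np.cos(k * x), s=s)
                    - k ** (2 * s) * np.cos(k * x)))
print(err)   # expected to be of the order of machine precision
\end{verbatim}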
\vskip0.1in To introduce the fractional operator $(-\Delta)^s$ for $2<s<3$, just as in the case $1<s<2$, we can define it via the Fourier transform, \be\nonumber \widehat{(-\Delta)^s u}(\xi)=|\xi|^{2s} \widehat{u}(\xi), \ee or equivalently define this operator inductively by $(-\Delta)^s=(-\Delta)^{s-2}\circ(-\Delta)^2$. \vskip0.1in \begin{definition} We say a solution $u$ of \eqref{8LE} is stable outside a compact set if there exists $R_0>0$ such that \be\label{8stable} \int_{\R^n}\int_{\R^n}\frac{(\varphi(x)-\varphi(y))^2}{|x-y|^{n+2s}}dx dy-p\int_{\R^n}|u|^{p-1}\varphi^2dx\geq0 \ee for any $\varphi\in C_c^{\infty}(\R^n\backslash \overline{B_{R_0}})$. We say that $u$ is stable if \eqref{8stable} holds for any $\varphi\in C_c^{\infty}(\R^n)$. \end{definition} Set \be\nonumber p_s(n)= \begin{cases} \infty\;\;&\hbox{if}\;\; n\leq 2s,\\ \frac{n+2s}{n-2s}\;\;&\hbox{if}\;\; n>2s. \end{cases} \ee The first main result of the present paper is the following. \bt\label{8Liouvillec} Suppose that $n>2s$ and $2<s<\delta<3$. Let $u\in C^{2\delta}(\R^n)\cap L^1(\R^n,(1+|z|)^{n+2s}dz)$ be a solution of \eqref{8LE} which is stable outside a compact set. \begin{itemize} \item [(1)] If $1<p<p_s(n)$, or \item [(2)] if $p_s(n)< p$ and \be\label{8gamma} p\frac{\Gamma(\frac{n}{2}-\frac{s}{p-1})\Gamma(s+\frac{s}{p-1})}{\Gamma(\frac{s}{p-1})\Gamma(\frac{n-2s}{2}-\frac{s}{p-1})} >\frac{\Gamma(\frac{n+2s}{4})^2}{\Gamma(\frac{n-2s}{4})^2}, \ee then the solution $u\equiv0$. \item [(3)] If $p=p_s(n)$, then $u$ has finite energy, i.e., \be\nonumber \|u\|^2_{\dot{H}^s(\R^n)}:= \int_{\R^n}\int_{\R^n}\frac{(u(x)-u(y))^2}{|x-y|^{n+2s}}dx dy=\int_{\R^n}|u|^{p+1}<+\infty. \ee If in addition $u$ is stable, then $u\equiv0$. \end{itemize} \et \br In Theorem \ref{8Liouvillec} the condition \eqref{8gamma} is optimal. In fact, the radial singular solution $u=|x|^{-\frac{2s}{p-1}}$ is stable if \be\nonumber p\frac{\Gamma(\frac{n}{2}-\frac{s}{p-1})\Gamma(s+\frac{s}{p-1})}{\Gamma(\frac{s}{p-1})\Gamma(\frac{n-2s}{2}-\frac{s}{p-1})} \leq\frac{\Gamma(\frac{n+2s}{4})^2}{\Gamma(\frac{n-2s}{4})^2}. \ee See \cite{LWZ}. \er \br The hypothesis $(2)$ of Theorem \ref{8Liouvillec} is equivalent to \be p<p_c(n):=\begin{cases} +\infty\;\;\;\;\;\;\;\;&\hbox{if}\;\;\;\;\;\;\;\; n\leq n_0(s),\\ \frac{n+2s-2-2a_{n,s}\sqrt{n}}{n-2s-2-2a_{n,s}\sqrt{n}}&\hbox{if}\;\;\;\;\;\;\;\;n> n_0(s),\\ \end{cases} \ee where $n_0(s)$ is the largest root of $n-2s-2-2a_{n,s}\sqrt{n}=0$, see \cite{LWZ}. For more details and further sharp results about $a_{n,s}$ and $n_0(s)$, see \cite{LWZ=00}. \er \br In this remark, we further analyze the hypothesis $(2)$ in Theorem \ref{8Liouvillec}. Recall that when $s=1$ the condition \eqref{8gamma} gives an upper bound on $p$ (originating from Joseph and Lundgren \cite{Joseph1972}), namely \be p<p_c(n):=\begin{cases} \;\;\;\;\;\;\;\;\;\infty\;\;\;\;\;\;\;\;&\hbox{if}\;\;\;\;\;\;\;\; n\leq10,\\ \frac{(n-2)^2-4n+8\sqrt{n-1}}{(n-2)(n-10)}\;\;\;\;\;\;\;\;&\hbox{if}\;\;\;\;\;\;\;\; n\geq11. \end{cases} \ee For the case $s=2$, \eqref{8gamma} induces the upper bound on $p$ given by the following formula (cf. Gazzola and Grunau \cite{Gazzola2006}): \be p<p_c(n)=\begin{cases} \;\;\;\;\;\;\;\;\;\;\infty\;\;\;\;\;\;\;\;&\hbox{if}\;\;\;\;\;\;\;\; n\leq12,\\ \frac{n+2-\sqrt{n^2+4-n\sqrt{n^2-8n+32}}}{n-6-\sqrt{n^2+4-n\sqrt{n^2-8n+32}}}\;\;\;\;\;\;\;\;&\hbox{if}\;\;\;\;\;\;\;\; n\geq13.
\end{cases} \ee \vskip0.1in In the triharmonic case, the corresponding exponent, given in \cite{LWZ2016=3}, is the following: \be\nonumber p<p_{c}(n)= \begin{cases} \;\;\;\infty\;\;&\hbox{if}\;\; n\leq 14,\\ \frac{n+4-2D(n)}{n-8-2D(n)}\;\;&\hbox{if}\;\; n\geq15, \end{cases} \ee where \be\nonumber D(n):=\frac{1}{6}\Big(9n^2+96-\frac{1536+1152n^2}{D_0(n)}-\frac{3}{2}D_0(n)\Big)^{1/2}; \ee \be\nonumber D_0(n):=-(D_1(n)+36\sqrt{D_2(n)})^{1/3}; \ee \be\nonumber\aligned D_1(n):=-94976+20736n+103104n^2-10368n^3+1296n^5-3024n^4-108n^6; \endaligned\ee \be\nonumber\aligned D_2(n):&=6131712-16644096n^2+6915840n^4-690432n^6-3039232n\\ &\quad+4818944n^3-1936384n^5+251136n^7-30864n^8-4320n^9\\ &\quad+1800n^{10}-216n^{11}+9n^{12}. \endaligned\ee \er \s{Preliminary} Throughout this paper we denote $b:=5-2s$ and define the operator \be\nonumber \Delta_b w:=\Delta w+\frac{b}{y}w_y=y^{-b}\mathbf{div}(y^b\nabla w) \ee for a function $w\in W^{3,2}(\R^{n+1};y^bdxdy)$. We first quote the following result. \bt\label{ththYang} (See \cite{Yang2100}.) Assume $2<s<3$. Let $u_e\in W^{3,2}(\R^{n+1};y^bdxdy)$ satisfy the equation \be\label{8eqs} \Delta_b^3u_e=0 \ee on the upper half space, $(x,y)\in \R^n\times\R_+$ (where $y$ is the spatial direction), together with the boundary conditions \be\label{8eqs1}\aligned &u_e(x,0)=f(x),\\ &\lim_{y\rightarrow0}y^b\frac{\pa u_e}{\pa y}=0,\\ &\frac{\pa^2 u_e}{\pa y^2}\mid_{y=0}=\frac{1}{2s}\Delta_{x}u_e\mid_{y=0},\\ &\lim_{y\rightarrow0}C_{n,s}y^b\frac{\pa}{\pa y}\Delta_b^2 u_e=(-\Delta)^s f(x), \endaligned \ee where $f$ is some function in $H^s(\R^n)$. Then we have \be\label{Yangmore} \int_{\R^n}|\xi|^{2s}|\hat{u}(\xi)|^2d\xi=C_{n,s}\int_{\R^{n+1}_+} y^b|\nabla\Delta_b u_e(x,y)|^2dxdy. \ee \et \vskip0.23in Applying the above theorem to solutions of \eqref{8LE}, we conclude that the extended function $u_e(x,y)$, where $x=(x_1,...,x_n)\in\R^n$ and $y\in\R^+$, satisfies \be\label{8LEE}\begin{cases} \Delta_b^3 u_e=0\;\;\hbox{in}\;\;\R^{n+1}_+,\\ \lim_{y\rightarrow0}y^b\frac{\pa u_e}{\pa y}=0\;\;\hbox{on}\;\;\pa\R^{n+1}_+,\\ \frac{\pa^2 u_e}{\pa y^2}\mid_{y=0}=\frac{1}{2s}\Delta_{x}u_e\mid_{y=0}\;\;\hbox{on}\;\;\pa\R^{n+1}_+,\\ \lim_{y\rightarrow0}y^b\frac{\pa}{\pa y}\Delta_b^2 u_e=-C_{n,s}|u_e|^{p-1}u_e\;\;\hbox{on}\;\;\pa\R^{n+1}_+. \end{cases}\ee Moreover, \be\nonumber \int_{\R^n}|\xi|^{2s}|\hat{u}(\xi)|^2d\xi=C_{n,s}\int_{\R^{n+1}_+} y^b|\nabla\Delta_b u_e(x,y)|^2dxdy \ee and $u(x)=u_e(x,0)$. \vskip0.1in Define \be\label{8exu}\aligned &E(\la,x,u_e)=\int_{\R^{n+1}_+\cap B_1}\frac{1}{2}\theta_1^b|\nabla\Delta_b u_e^\la|^2-\frac{C_{n,s}}{p+1}\int_{\pa \R^{n+1}_+\cap B_1}|u_e^\la|^{p+1}\\ &+\sum_{0\leq i,j\leq 4,i+j\leq5}C_{i,j}^1\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^{i+j}\frac{d^i u^\la_e}{d\la^i}\frac{d^j u^\la_e}{d\la^j}\\ &+\sum_{0\leq t,q\leq 2,t+q\leq3}C_{t,q}^2\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^{t+q}\nabla_{S^n}\frac{d^t u^\la_e}{d\la^t}\nabla_{S^n}\frac{d^q u^\la_e}{d\la^q}\\ &+\sum_{0\leq l,k\leq 1,l+k\leq1}C_{l,k}^3\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^{l+k}\Delta_{S^n}\frac{d^l u^\la_e}{d\la^l}\Delta_{S^n}\frac{d^k u^\la_e}{d\la^k}\\ &+(\frac{s}{p-1}+1)\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\Delta_b u^\la_e)^2. \endaligned\ee \vskip0.1in The following is the monotonicity formula, which will play an important role.
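Before stating it, we record a small numerical illustration of hypothesis $(2)$ of Theorem \ref{8Liouvillec}: for $s=1$, condition \eqref{8gamma} should single out exactly the Joseph--Lundgren range recalled in the remarks above. The following sketch assumes \texttt{scipy} is available; it plays no role in the proofs.
\begin{verbatim}
# Numerical illustration: for s = 1 the Gamma-function condition in hypothesis (2)
# should hold exactly for p below the Joseph-Lundgren exponent p_c(n).
from scipy.special import gamma
from math import sqrt

def condition_holds(n, s, p):
    """True iff the left-hand side of the Gamma-function condition exceeds its right-hand side."""
    lhs = p * gamma(n / 2 - s / (p - 1)) * gamma(s + s / (p - 1)) \
        / (gamma(s / (p - 1)) * gamma((n - 2 * s) / 2 - s / (p - 1)))
    rhs = gamma((n + 2 * s) / 4) ** 2 / gamma((n - 2 * s) / 4) ** 2
    return lhs > rhs

n = 12
p_c = ((n - 2) ** 2 - 4 * n + 8 * sqrt(n - 1)) / ((n - 2) * (n - 10))  # Joseph-Lundgren, s = 1
print(p_c)                               # about 3.93
print(condition_holds(n, 1, p_c - 0.1))  # expected True  (p below the threshold)
print(condition_holds(n, 1, p_c + 0.1))  # expected False (p above the threshold)
\end{verbatim}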
\bt\label{8monoid} Let $u_e$ satisfy the equation \eqref{8eqs} with the boundary conditions \eqref{8eqs1}. Then \be\aligned \frac{d E(\la,x,u_e)}{d\la}&=\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\Big( 3\la^5(\frac{d^3u_e^\la}{d\la^3})^2+A_1\la^3(\frac{d^2u_e^\la}{d\la^2})^2+A_2\la(\frac{du^\la_e}{d\la})^2\Big)\\ &+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\Big( 2\la^3|\nabla_{S^n}\frac{d^2u_e^\la}{d\la^2}|^2+B_1\la|\nabla_{S^n}\frac{du_e^\la}{d\la}|^2\Big)\\ &+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la(\Delta_{S^n}\frac{du_e^\la}{d\la})^2, \endaligned \ee where $\theta_1=\frac{y}{r}$, \be\nonumber\aligned &A_1:=10\delta_1-2\delta_2-56+\alpha_0^2-2\al_0-2\beta_0-4,\\ &A_2:=-18\delta_1+6\delta_2-4\delta_3+2\delta_4+72-\al_0^2+\beta_0^2+2\al_0+2\beta_0,\\ &B_1:=8\al-4\beta-2\beta_0+4(n+b)-14, \endaligned\ee \be\nonumber\aligned &\al:=n+b-2-\frac{4s}{p-1},\quad\beta:=\frac{2s}{p-1}(3+\frac{2s}{p-1}-n-b),\\ &\al_0:=n+b-\frac{4s}{p-1},\quad\;\;\beta_0:=\frac{2s}{p-1}(1+\frac{2s}{p-1}-n-b), \endaligned\ee and \be\label{8delta}\aligned \delta_1=&2(n+b)-\frac{8s}{p-1},\\ \delta_2=&(n+b)(n+b-2)-(n+b)\frac{12s}{p-1}+\frac{12s}{p-1}(1+\frac{2s}{p-1}),\\ \delta_3=&-\frac{8s}{p-1}(1+\frac{2s}{p-1})(2+\frac{2s}{p-1})+2(n+b)\frac{6s}{p-1}(1+\frac{2s}{p-1})\\ &-(n+b)(n+b-2)(1+\frac{4s}{p-1}),\\ \delta_4=&(3+\frac{2s}{p-1})(2+\frac{2s}{p-1})(1+\frac{2s}{p-1})\frac{2s}{p-1}\\ &\quad-2(n+b)(1+\frac{2s}{p-1})(2+\frac{2s}{p-1})\frac{2s}{p-1}\\ &\quad+(n+b)(n+b-2)(2+\frac{2s}{p-1})\frac{2s}{p-1}.\\ \endaligned\ee \et We will give the proof of Theorem \ref{8monoid} in the next section. Now we would like to state a consequence of Theorem \ref{8monoid}. Recall that $E(\la,x,u_e)$, defined in \eqref{8exu}, can be divided into two parts: the integral over the ball $B_\la$ and the terms on the boundary $\pa B_\la$. We note that in our blow-down analysis the coefficients (whether positive or negative, large or small) of the boundary terms can be estimated in a unified way; therefore we may change some coefficients of the boundary terms in $E(\la,x,u_e)$. After such a change, we denote the new functional by $E^c(\la,x,u_e)$. \vskip0.1in Define \be\label{728-pm} p_m(n):=\begin{cases} +\infty\;\;\;\;\;\;\;\;&\hbox{if}\;\;\;\;\;\;\;\; n<2s+6+\sqrt{73},\\ \frac{5n+10s-\sqrt{15(n-2s)^2+120(n-2s)+370}}{5n-10s-\sqrt{15(n-2s)^2+120(n-2s)+370}}&\hbox{if}\;\;\;\;\;\;\;\;n\geq2s+6+\sqrt{73}.\\ \end{cases} \ee We have the following \bt\label{8Monotonem} Assume that $\frac{n+2s}{n-2s}<p<p_m(n)$. Then $E^c(\la,x,u_e)$ is a nondecreasing function of $\la>0$. Furthermore, \be\nonumber \frac{d E^c(\la,x,u_e)}{d\la}\geq C(n,s,p)\la^{2s\frac{p+1}{p-1}-6-n} \int_{\R^{n+1}_+\cap\pa B_\la(x_0)}y^b\Big(\frac{2s}{p-1}u_e+\la\pa_r u_e\Big)^2, \ee where $C(n,s,p)$ is a constant independent of $\la$. \et By carefully comparing the range $\frac{n+2s}{n-2s}<p<p_m(n)$ with the condition $p>\frac{n+2s}{n-2s}$ together with \eqref{8gamma}, we obtain the following monotonicity formula (see the last section of the present paper) for our blow-down analysis. \bt\label{8Monotone} Assume that $p>\frac{n+2s}{n-2s}$ and that \eqref{8gamma} holds. Then $E^c(\la,x,u_e)$ is a nondecreasing function of $\la>0$. Furthermore, \be\nonumber \frac{d E^c(\la,x,u_e)}{d\la}\geq C(n,s,p)\la^{2s\frac{p+1}{p-1}-6-n} \int_{\R^{n+1}_+\cap\pa B_\la(x_0)}y^b\Big(\frac{2s}{p-1}u_e+\la\pa_r u_e\Big)^2, \ee where $C(n,s,p)$ is a constant independent of $\la$.
\et \section{Monotonicity formula and the proof of Theorem \ref{8monoid}} The derivation of the monotonicity formula for \eqref{8LE} when $2<s<3$ is rather involved, so we divide it into several subsections. In subsection $3.1$, we derive $\frac{d}{d\la}\overline{E}(u_e,\la)$. In subsection $3.2$, we calculate $\frac{\pa^j}{\pa r^j}u_e^\la$ and $\frac{\pa^i}{\pa \la^i}u_e^\la,\;\; i,j=1,2,3,4$. In subsection $3.3$, the operator $\Delta_b^2$ and its representation will be given. In subsection $3.4$, we decompose $\frac{d}{d\la}\overline{E}(u_e^\la,1)$. Finally, combining the above subsections, we obtain the monotonicity formula and hence the proof of Theorem \ref{8monoid}. \vskip0.1in Suppose that $x_0=0$ and denote by $B_\la$ the ball centered at the origin with radius $\la$. Set \be\nonumber \overline{E}(u_e,\la):=\la^{2s\frac{p+1}{p-1}-n}\Big(\int_{\R^{n+1}_+\cap B_\la}\frac{1}{2}y^b|\nabla\Delta_b u_e|^2 -\frac{C_{n,s}}{p+1}\int_{\pa\R^{n+1}_+\cap B_\la}|u_e|^{p+1}\Big).\\ \ee \subsection{The derivation of $\frac{d}{d\la}\overline{E}(u_e,\la)$} Define \be\label{8uvw}\aligned &v_e:=\Delta_b u_e,\;\; u_e^\la(X):=\la^{\frac{2s}{p-1}}u_e(\la X),\;\; w_e(X):=\Delta_b v_e,\\ &v_e^\la(X):=\la^{\frac{2s}{p-1}+2}v_e(\la X),\;\; w_e^\la(X):=\la^{\frac{2s}{p-1}+4}w_e(\la X), \endaligned\ee where $X=(x,y)\in \R^{n+1}_+$. Therefore, \be\label{8lamda} \Delta_b u_e^\la(X)=v_e^\la(X),\;\; \Delta_b v_e^\la(X)=w_e^\la(X). \ee Hence \be\nonumber\aligned &\Delta_b w_e^\la=0,\\ &\lim_{y\rightarrow0}y^b\frac{\pa u_e^\la}{\pa y}=0,\\ &\frac{\pa^2 u_e^\la}{\pa y^2}\mid_{y=0}=\frac{1}{2s}\Delta_x u_e^\la\mid_{y=0},\\ &\lim_{y\rightarrow0}y^b\frac{\pa}{\pa y}w_e^\la=-C_{n,s}|u_e^\la|^{p-1}u_e^\la. \endaligned \ee In addition, differentiating \eqref{8lamda} with respect to $\la$ we have \be\nonumber \Delta_b \frac{du_e^{\la}}{d\la}=\frac{dv_e^{\la}}{d\la},\;\;\; \Delta_b \frac{dv_e^{\la}}{d\la}=\frac{dw_e^{\la}}{d\la}. \ee Note that \be\nonumber \overline{E}(u_e,\la)=\overline{E}(u_e^\la,1)=\int_{\R^{n+1}_+\cap B_1}\frac{1}{2}y^b|\nabla v_e^\la|^2 -\frac{C_{n,s}}{p+1}\int_{\pa\R^{n+1}_+\cap B_1}|u_e^\la|^{p+1}. \ee Taking the derivative of the energy $\overline{E}(u_e^\la,1)$ with respect to $\la$ and integrating by parts, we have: \be\label{8overE}\aligned &\frac{d\overline{E}(u_e^\la,1)}{d\la}=\int_{\R^{n+1}_+\cap B_1}y^b\nabla v_e^\la\nabla\frac{dv_e^\la}{d\la} -C_{n,s}\int_{\pa\R^{n+1}_+\cap B_1}|u_e^\la|^{p-1}u_e^\la\frac{du_e^\la}{d\la}\\ =&\int_{\pa(\R^{n+1}_+\cap B_1)}y^b\frac{\pa v_e^\la}{\pa n}\frac{dv_e^\la}{d\la}- \int_{\R^{n+1}_+\cap B_1}(y^b\Delta v_e^\la+by^{b-1}\frac{\pa v_e^\la}{\pa y})\frac{dv_e^\la}{d\la}\\ &-C_{n,s}\int_{\pa\R^{n+1}_+\cap B_1}|u_e^\la|^{p-1}u_e^\la\frac{du_e^\la}{d\la}\\ =&-\int_{\pa \R^{n+1}_+\cap B_1}y^b\frac{\pa v_e^\la}{\pa y}\frac{dv_e^\la}{d\la} +\int_{\R^{n+1}_+\cap\pa B_1}y^b\frac{\pa v_e^\la}{\pa r}\frac{dv_e^\la}{d\la}\\ &-\int_{\R^{n+1}_+\cap B_1}\Big(y^b\Delta v_e^\la\frac{dv_e^\la}{d\la}+by^{b-1}\frac{\pa v_e^\la}{\pa y}\frac{dv_e^\la}{d\la}\Big) -\int_{\pa\R^{n+1}_+\cap B_1}y^b\frac{\pa w_e^\la}{\pa y}\frac{du_e^\la}{d\la}. \endaligned\ee Now note that from the definition of $v_e^\la$, by differentiating with respect to $\la$, we get the following identity for $X\in\R^{n+1}_+$: \be\nonumber r\frac{\pa v_e^\la}{\pa r}=\la \pa_{\la} v_e^\la-(\frac{2s}{p-1}+2)v_e^\la
\ee Hence, \be\nonumber \aligned \int_{\R^{n+1}_+\cap \pa B_1}y^b\frac{\pa v_e^\la}{\pa r}&\frac{dv_e^\la}{d\la} =\int_{\R^{n+1}_+\cap \pa B_1}y^b\Big(\la\frac{dv_e^\la}{d\la}\frac{dv_e^\la}{d\la}-(\frac{2s}{p-1}+2)v_e^\la\frac{dv_e^\la}{d\la}\Big)\\ &=\la\int_{\R^{n+1}_+\cap \pa B_1}y^b(\frac{dv_e^\la}{d\la})^2-(\frac{s}{p-1}+1)\frac{d}{d\la}\int_{\R^{n+1}_+\cap \pa B_1}y^b(v_e^\la)^2.\\ \endaligned \ee Note that \be\nonumber \aligned -\int_{\R^{n+1}_+\cap B_1}y^b\Delta v_e^\la\frac{dv_e^\la}{d\la}=& \int_{\pa\R^{n+1}_+\cap B_1}y^b\frac{\pa v_e^\la}{\pa y}\frac{dv_e^\la}{d\la}-\int_{\R^{n+1}_+\cap \pa B_1}y^b\frac{\pa v_e^\la}{\pa r}\frac{dv_e^\la}{d\la}\\ &+\int_{\R^{n+1}_+\cap B_1}\nabla v_e^\la\nabla(y^b\frac{dv_e^\la}{d\la}). \endaligned \ee Integrating by parts, we have \be\nonumber\aligned &\int_{\R^{n+1}_+\cap B_1}y^b\nabla v_e^\la \nabla\frac{dv_e^\la}{d\la}\\ &=-\int_{\pa\R^{n+1}_+\cap B_1}y^b\frac{\pa v_e^\la}{\pa y}\frac{dv_e^\la}{d\la}+\int_{\R^{n+1}_+\cap \pa B_1}y^b\frac{\pa v_e^\la}{\pa r}\frac{dv_e^\la}{d\la}\\ &\quad -\int_{\R^{n+1}_+\cap B_1}\nabla\cdot(y^b\nabla v^\la_e)\frac{dv_e^\la}{d\la}\\ &=-\int_{\pa\R^{n+1}_+\cap B_1}y^b\frac{\pa v_e^\la}{\pa y}\frac{dv_e^\la}{d\la}+\int_{\R^{n+1}_+\cap \pa B_1}y^b\frac{\pa v_e^\la}{\pa r}\frac{dv_e^\la}{d\la}\\ &\quad -\int_{\R^{n+1}_+\cap B_1}y^b\Delta_b v_e^\la\Delta_b\frac{du_e^\la}{d\la}.\\ \endaligned \ee Now \be\nonumber\aligned -\int_{\R^{n+1}_+\cap B_1}&y^b\Delta_b v_e^\la\Delta_b\frac{du_e^\la}{d\la} =-\int_{\R^{n+1}_+\cap B_1}y^b\Delta_b v_e^\la(\Delta\frac{du_e^\la}{d\la}+\frac{b}{y}\frac{\pa}{\pa y}\frac{du_e^\la}{d\la})\\ =&-\int_{\pa(\R^{n+1}_+\cap B_1)}y^b\Delta_b v_e^\la\frac{\pa}{\pa n}\frac{du_e^\la}{d\la} +\int_{\R^{n+1}_+\cap B_1}\nabla(y^b\Delta_b v_e^\la)\nabla\frac{du_e^\la}{d\la}\\ &-\int_{\R^{n+1}_+\cap B_1}by^{b-1}\Delta_b v_e^\la\frac{\pa}{\pa y} \frac{du_e^\la}{d\la}\\ =&-\int_{\pa(\R^{n+1}_+\cap B_1)}y^b\Delta_b v_e^\la \frac{\pa}{\pa n}\frac{d u_e^\la}{d\la} +\int_{\R^{n+1}_+\cap B_1}y^b\nabla\Delta_b v_e^\la\nabla \frac{du_e^\la}{d\la}\\ =&-\int_{\pa(\R^{n+1}_+\cap B_1)}y^b\Delta_b v_e^\la \frac{\pa}{\pa n}\frac{d u_e^\la}{d\la} +\int_{\pa(\R^{n+1}_+\cap B_1)}y^b\frac{\pa \Delta_b v_e^\la}{\pa n}\frac{d u_e^\la}{d\la}\\ &-\int_{\R^{n+1}_+\cap B_1}y^b\Delta_b^2 v_e^\la\frac{d u_e^\la}{d\la}\\ =&-\int_{\pa(\R^{n+1}_+\cap B_1)}y^b\Delta_b v_e^\la \frac{\pa}{\pa n}\frac{d u_e^\la}{d\la} +\int_{\pa(\R^{n+1}_+\cap B_1)}y^b\frac{\pa \Delta_b v_e^\la}{\pa n}\frac{d u_e^\la}{d\la}.\\ \endaligned \ee Here we have used that $\Delta_b^2 v_e^\la=\Delta_b^3 u_e^\la=0$.
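As a quick sanity check of the scaling identity $r\frac{\pa v_e^\la}{\pa r}=\la \pa_{\la} v_e^\la-(\frac{2s}{p-1}+2)v_e^\la$ used above, its one-variable analogue (with $\kappa$ standing for $\frac{2s}{p-1}+2$) can be verified symbolically. The following sketch assumes \texttt{sympy} is available and is purely illustrative.
\begin{verbatim}
# Symbolic check (assuming sympy) of the one-variable analogue of the scaling
# identity: with v(lam; r) = lam^kappa * v0(lam r) one has
#   r d/dr v = lam d/dlam v - kappa * v.
import sympy as sp

lam, r, kappa = sp.symbols('lam r kappa', positive=True)
v0 = sp.Function('v0')
v = lam ** kappa * v0(lam * r)          # plays the role of v_e^lambda along a ray

lhs = r * sp.diff(v, r)
rhs = lam * sp.diff(v, lam) - kappa * v
print(sp.simplify(lhs - rhs))           # expected output: 0
\end{verbatim}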
Therefore, combining the above arguments, we get \be\label{8overE1}\aligned \int_{\R^{n+1}_+\cap B_1}&y^b\nabla v_e^\la \nabla\frac{dv_e^\la}{d\la}=-\int_{\pa\R^{n+1}_+\cap B_1}y^b\frac{\pa v_e^\la}{\pa y}\frac{dv_e^\la}{d\la} +\int_{\R^{n+1}_+\cap \pa B_1}y^b \frac{\pa v_e^\la}{\pa r}\frac{dv_e^\la}{d\la}\\ &+\int_{\pa\R^{n+1}_+\cap B_1}y^b\Delta_b v_e^\la\frac{\pa}{\pa y}\frac{du_e^\la}{d\la}-\int_{\R^{n+1}_+\cap \pa B_1}y^b \Delta v_e^\la \frac{\pa}{\pa r}\frac{du_e^\la}{d\la}\\ &-\int_{\pa\R^{n+1}_+\cap B_1}y^b\frac{\pa}{\pa y}\Delta_b v_e^\la\frac{du_e^\la}{d\la}+\int_{\R^{n+1}_+\cap \pa B_1}y^b\frac{\pa}{\pa r}\Delta_b v_e^\la\frac{du_e^\la}{d\la}\\ =&\int_{\R^{n+1}_+\cap \pa B_1}y^b \frac{\pa v_e^\la}{\pa r}\frac{dv_e^\la}{d\la}-\int_{\R^{n+1}_+\cap \pa B_1}y^b \Delta v_e^\la \frac{\pa}{\pa r}\frac{du_e^\la}{d\la}\\ &-C_{n,s}\int_{\pa\R^{n+1}_+\cap B_1}|u_e^\la|^{p-1}u_e^\la\frac{du_e^\la}{d\la}+\int_{\R^{n+1}_+\cap \pa B_1}y^b\frac{\pa}{\pa r}\Delta_b v_e^\la\frac{du_e^\la}{d\la}.\\ \endaligned \ee Here, we have used that $\frac{\pa \Delta u_e^\la(x,0)}{\pa y}=0$, $\frac{\pa}{\pa y}\frac{du_e^\la}{d\la}=0$ on $\pa \R^{n+1}_+$ and $\lim_{y\rightarrow0}y^b\frac{\pa}{\pa y}\Delta_b v_e^\la=-C_{n,s}|u_e^\la|^{p-1}u_e^\la$. By \eqref{8overE} and \eqref{8overE1} we obtain that \be\label{8overee}\aligned \frac{d}{d\la}\overline{E}(u_e^\la,1)=&\int_{\R^{n+1}_+\cap \pa B_1}y^b\frac{\pa v_e^\la}{\pa r}\frac{dv_e^\la}{d\la} +\int_{\R^{n+1}_+\cap \pa B_1}y^b\frac{\pa w_e^\la}{\pa r}\frac{du_e^\la}{d\la}\\ &-\int_{\R^{n+1}_+\cap \pa B_1}y^bw_e^\la\frac{\pa}{\pa r}\frac{du_e^\la}{d\la}. \endaligned\ee Recalling \eqref{8uvw} and differentiating with respect to $\la$, we have \be\nonumber\aligned \frac{du_e^\la(X)}{d\la}&=\frac{1}{\la}\big(\frac{2s}{p-1}u_e^\la(X)+r\pa_r u_e^\la(X)\big),\\ \frac{dv_e^\la(X)}{d\la}&=\frac{1}{\la}\big((\frac{2s}{p-1}+2)v_e^\la(X)+r\pa_r v_e^\la(X)\big),\\ \frac{dw_e^\la(X)}{d\la}&=\frac{1}{\la}\big((\frac{2s}{p-1}+4)w_e^\la(X)+r\pa_r w_e^\la(X)\big).\\ \endaligned \ee Differentiating the above equations with respect to $\la$ again, we get \be\nonumber \la\frac{d^2u_e^\la(X)}{d\la^2}+\frac{du_e^\la(X)}{d\la}=\frac{2s}{p-1}\frac{du_e^\la(X)}{d\la}+r\pa_r\frac{du_e^\la}{d\la}. \ee Hence, for $X\in\R^{n+1}_+\cap \pa B_1$, we have \be\nonumber\aligned \pa_r(u_e^\la(X))&=\la\frac{du_e^\la}{d\la}-\frac{2s}{p-1}u_e^\la,\\ \pa_r(\frac{du_e^\la(X)}{d\la})&=\la\frac{d^2u_e^\la(X)}{d\la^2}+(1-\frac{2s}{p-1})\frac{du_e^\la}{d\la},\\ \pa_r(v_e^\la(X))&=\la\frac{dv_e^\la}{d\la}-(\frac{2s}{p-1}+2)v_e^\la,\\ \pa_r(w_e^\la(X))&=\la\frac{d w_e^\la}{d\la}-(\frac{2s}{p-1}+4)w_e^\la.\\ \endaligned \ee Plugging these equations into \eqref{8overee}, we get that \be\label{8overlineE}\aligned \frac{d}{d\la}\overline{E}(u_e^\la,1)=&\int_{\R^{n+1}_+\cap \pa B_1}y^b\big(\la\frac{dv_e^\la}{d\la}\frac{dv_e^\la}{d\la}-(\frac{2s}{p-1}+2)v_e^\la\frac{dv_e^\la}{d\la}\big)\\ &+y^b\big(\la\frac{d w_e^\la}{d\la}\frac{du_e^\la}{d\la}-(\frac{2s}{p-1}+4)w_e^\la\frac{du_e^\la}{d\la}\big)\\ &-y^b\big(\la w_e^\la\frac{d^2 u_e^\la}{d\la^2}+(1-\frac{2s}{p-1})w_e^\la\frac{du_e^\la}{d\la}\big)\\ =&\underbrace{\int_{\R^{n+1}_+\cap \pa B_1}y^b\big[\la\frac{dv_e^\la}{d\la}\frac{dv_e^\la}{d\la}-(\frac{2s}{p-1}+2)v_e^\la\frac{dv_e^\la}{d\la}\big]}\\ &+\underbrace{y^b\big[\la \frac{d w_e^\la}{d\la}\frac{du_e^\la}{d\la}-\la w_e^\la\frac{d^2u_e^\la}{d\la^2}\big]-5y^bw_e^\la\frac{du_e^\la}{d\la}}\\ :=&\overline{E}_{d1}(u_e^\la,1)+\overline{E}_{d2}(u_e^\la,1).
\endaligned \ee \subsection{The calculations of $\frac{\pa^j}{\pa r^j}u_e^\la$ and $\frac{\pa^i}{\pa \la^i}u_e^\la,\; i,j=1,2,3,4$} Note \be\label{8ue0} \la\frac{du_e^\la}{d\la}=\frac{2s}{p-1}u_e^\la+r\frac{\pa}{\pa r}u_e^\la. \ee Differentiating \eqref{8ue0} once, twice and thrice with respect to $\la$ respectively, we have \be\label{8ue1} \la\frac{d^2u_e^\la}{d\la^2}+\frac{du_e^\la}{d\la}=\frac{2s}{p-1}\frac{du_e^\la}{d\la}+r\frac{\pa}{\pa r}\frac{du_e^\la}{d\la}, \ee \be\label{8ue2} \la\frac{d^3u_e^\la}{d\la^3}+2\frac{d^2u_e^\la}{d\la^2}=\frac{2s}{p-1}\frac{d^2u_e^\la}{d\la^2}+r\frac{\pa}{\pa r}\frac{d^2u_e^\la}{d\la^2}, \ee \be\label{8ue3} \la\frac{d^4u_e^\la}{d\la^4}+3\frac{d^3u_e^\la}{d\la^3}=\frac{2s}{p-1}\frac{d^3u_e^\la}{d\la^3}+r\frac{\pa}{\pa r}\frac{d^3u_e^\la}{d\la^3}. \ee Similarly, differentiating \eqref{8ue0} once, twice and thrice with respect to $r$ respectively, we have \be\label{8ue4} \la\frac{\pa}{\pa r}\frac{du_e^\la}{d\la}=(\frac{2s}{p-1}+1)\frac{\pa}{\pa r}u_e^\la+r\frac{\pa^2}{\pa r^2}u_e^\la, \ee \be\label{8ue5} \la\frac{\pa^2}{\pa r^2}\frac{du_e^\la}{d\la}=(\frac{2s}{p-1}+2)\frac{\pa^2}{\pa r^2}u_e^\la+r\frac{\pa^3}{\pa r^3}u_e^\la, \ee \be\label{8ue6} \la\frac{\pa^3}{\pa r^3}\frac{du_e^\la}{d\la}=(\frac{2s}{p-1}+3)\frac{\pa^3}{\pa r^3}u_e^\la+r\frac{\pa^4}{\pa r^4}u_e^\la. \ee From \eqref{8ue0}, on $\R^{n+1}_+\cap \pa B_1$, we have \be\nonumber \frac{\pa u_e^\la}{\pa r}=\la \frac{du_e^\la}{d\la}-\frac{2s}{p-1}u_e^\la. \ee Next from \eqref{8ue1}, on $\R^{n+1}_+\cap \pa B_1$, we derive that \be\nonumber \frac{\pa}{\pa r}\frac{d u_e^\la}{d\la}=\la\frac{d^2u_e^\la}{d\la^2}+(1-\frac{2s}{p-1})\frac{du_e^\la}{d\la}. \ee From \eqref{8ue4}, combined with the two equations above, on $\R^{n+1}_+\cap \pa B_1$ we get \be\label{8r31}\aligned \frac{\pa^2}{\pa r^2} u_e^\la&=\la\frac{\pa}{\pa r}\frac{du_e^\la}{d\la}-(1+\frac{2s}{p-1})\frac{\pa}{\pa r} u_e^\la\\ &=\la^2\frac{d^2 u_e^\la}{d\la^2}-\la\frac{4s}{p-1}\frac{du_e^\la}{d\la}+(1+\frac{2s}{p-1})\frac{2s}{p-1}u_e^\la. \endaligned\ee Differentiating \eqref{8ue1} with respect to $r$ and combining with \eqref{8ue1} and \eqref{8ue2}, we get \be\label{8r32}\aligned \frac{\pa^2}{\pa r^2}\frac{du_e^\la}{d\la}&=\la\frac{\pa}{\pa r}\frac{d^2 u_e^\la}{d\la^2}-\frac{2s}{p-1}\frac{\pa}{\pa r}\frac{du_e^\la}{d\la}\\ &=\la^2\frac{d^3 u_e^\la}{d\la^3}+(2-\frac{4s}{p-1})\la\frac{d^2 u_e^\la}{d\la^2}-(1-\frac{2s}{p-1})\frac{2s}{p-1}\frac{du_e^\la}{d\la}. \endaligned\ee From \eqref{8ue5}, on $\R^{n+1}_+\cap \pa B_1$, combining with \eqref{8r31} and \eqref{8r32}, we have \be\label{8ue7}\aligned \frac{\pa^3}{\pa r^3}u_e^\la=&\la\frac{\pa^2}{\pa r^2}\frac{du_e^\la}{d\la}-(2+\frac{2s}{p-1})\frac{\pa^2}{\pa r^2}u_e^\la\\ =&\la^3\frac{d^3 u_e^\la}{d\la^3}-\la^2\frac{6s}{p-1}\frac{d^2 u_e^\la}{d\la^2}+\la(\frac{6s}{p-1}+\frac{12s^2}{(p-1)^2})\frac{du_e^\la}{d\la}\\ &-(2+\frac{2s}{p-1})(1+\frac{2s}{p-1})\frac{2s}{p-1}u_e^\la. \endaligned\ee Now differentiating \eqref{8ue1} twice with respect to $r$, we get \be\nonumber \la\frac{\pa^2}{\pa r^2}\frac{d^2 u_e^\la}{d\la^2}=(\frac{2s}{p-1}+1)\frac{\pa^2}{\pa r^2}\frac{du_e^\la}{d\la}+r\frac{\pa^3}{\pa r^3}\frac{du_e^\la}{d\la}, \ee then on $\R^{n+1}_+\cap \pa B_1$, we have \be\label{8ue8} \frac{\pa^3}{\pa r^3}\frac{du_e^\la}{d\la}=\la\frac{\pa^2}{\pa r^2}\frac{d^2 u_e^\la}{d\la^2}-(\frac{2s}{p-1}+1)\frac{\pa^2}{\pa r^2}\frac{du_e^\la}{d\la}.
\ee Now differentiating \eqref{8ue2} once with respect to $r$, we get \be\nonumber \la\frac{\pa}{\pa r}\frac{d^3 u_e^\la}{d\la^3}=(\frac{2s}{p-1}-1)\frac{\pa}{\pa r}\frac{d^2 u_e^\la}{d\la^2}+r\frac{\pa^2}{\pa r^2}\frac{d^2 u_e^\la}{d\la^2}, \ee hence on $\R^{n+1}_+\cap \pa B_1$, combining with \eqref{8ue2} and \eqref{8ue3}, there holds \be\label{8ue9} \aligned \frac{\pa^2}{\pa r^2}&\frac{d^2 u_e^\la}{d\la^2}=\la\frac{\pa}{\pa r}\frac{d^3 u_e^\la}{d\la^3}+(1-\frac{2s}{p-1})\frac{\pa}{\pa r}\frac{d^2 u_e^\la}{d\la^2}\\ =&\la^2\frac{d^4 u_e^\la}{d\la^4}+\la(4-\frac{4s}{p-1})\frac{d^3 u_e^\la}{d\la^3}+(1-\frac{2s}{p-1})(2-\frac{2s}{p-1})\frac{d^2 u_e^\la}{d\la^2}. \endaligned\ee Now differentiating \eqref{8ue1} with respect to $r$, we have \be\nonumber \la\frac{\pa}{\pa r}\frac{d^2 u_e^\la}{d\la^2}=\frac{2s}{p-1}\frac{\pa}{\pa r}\frac{du_e^\la}{d\la}+r\frac{\pa^2}{\pa r^2}\frac{du_e^\la}{d\la}. \ee Combining this with \eqref{8ue1} and \eqref{8ue2}, on $\R^{n+1}_+\cap \pa B_1$ we have \be\label{8ue10}\aligned \frac{\pa^2}{\pa r^2}\frac{du_e^\la}{d\la}&=\la\frac{\pa}{\pa r}\frac{d^2 u_e^\la}{d\la^2}-\frac{2s}{p-1}\frac{\pa}{\pa r}\frac{d u_e^\la}{d\la}\\ &=\la^2\frac{d^3 u_e^\la}{d\la^3}+\la(2-\frac{4s}{p-1})\frac{d^2 u_e^\la}{d\la^2}-\frac{2s}{p-1}(1-\frac{2s}{p-1})\frac{du_e^\la}{d\la}. \endaligned\ee Now from \eqref{8ue8}, combined with \eqref{8ue9} and \eqref{8ue10}, we get \be\label{8ue11}\aligned \frac{\pa^3}{\pa r^3}\frac{du_e^\la}{d\la}=&\la^3\frac{d^4 u_e^\la}{d\la^4}+\la^2(3-\frac{6s}{p-1})\frac{d^3 u_e^\la}{d\la^3}-\la(1-\frac{2s}{p-1})\frac{6s}{p-1}\frac{d^2 u_e^\la}{d\la^2}\\ &+(1-\frac{2s}{p-1})(1+\frac{2s}{p-1})\frac{2s}{p-1}\frac{du_e^\la}{d\la}. \endaligned\ee From \eqref{8ue6}, on $\R^{n+1}_+\cap \pa B_1$, combining with \eqref{8ue11}, we then have \be\nonumber\aligned \frac{\pa^4}{\pa r^4}u_e^\la=&\la \frac{\pa^3}{\pa r^3}\frac{du_e^\la}{d\la}-(3+\frac{2s}{p-1})\frac{\pa^3}{\pa r^3}u_e^\la\\ =&\la^4\frac{d^4 u_e^\la}{d\la^4}-\la^3\frac{8s}{p-1}\frac{d^3 u_e^\la}{d\la^3}+\la^2(2+\frac{4s}{p-1})\frac{6s}{p-1}\frac{d^2 u_e^\la}{d\la^2}\\ &-\la (1+\frac{2s}{p-1})(1+\frac{s}{p-1})\frac{16s}{p-1}\frac{du_e^\la}{d\la}\\ &+(3+\frac{2s}{p-1})(2+\frac{2s}{p-1})(1+\frac{2s}{p-1})\frac{2s}{p-1}u_e^\la.\\ \endaligned\ee In summary, we have that \be\nonumber\aligned \frac{\pa^3}{\pa r^3}u_e^\la =&\la^3\frac{d^3 u_e^\la}{d\la^3}-\la^2\frac{6s}{p-1}\frac{d^2 u_e^\la}{d\la^2}+\la(\frac{6s}{p-1}+\frac{12s^2}{(p-1)^2})\frac{du_e^\la}{d\la}\\ &-(2+\frac{2s}{p-1})(1+\frac{2s}{p-1})\frac{2s}{p-1}u_e^\la \endaligned\ee and \be\nonumber \frac{\pa ^2}{\pa r^2}u_e^\la=\la^2\frac{d^2 u_e^\la}{d\la^2}-\la\frac{4s}{p-1}\frac{du_e^\la}{d\la}+(1+\frac{2s}{p-1})\frac{2s}{p-1}u_e^\la \ee \be\nonumber \frac{\pa u_e^\la}{\pa r}=\la \frac{du_e^\la}{d\la}-\frac{2s}{p-1}u_e^\la. \ee \subsection{On the operator $\Delta_b^2$ and its representation } Note that \be\nonumber\aligned \Delta_b u&=y^{-b}\nabla\cdot(y^b\nabla u)\\ &=u_{rr}+\frac{n+b}{r}u_r+\frac{1}{r^2}\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u), \endaligned\ee where $\theta_1=\frac{y}{r}$ and $r=\sqrt{|x|^2+y^2}$. Set $v:=\Delta_b u$ and $w:=\Delta_b^2 u$.
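The radial part of $\Delta_b$ displayed above can be confirmed symbolically on radial power functions $u=r^a$, which already tests the coefficient $\frac{n+b}{r}$ (the spherical part vanishes for radial functions). A minimal sketch, assuming \texttt{sympy} and taking $n=2$ horizontal variables for concreteness:
\begin{verbatim}
# Symbolic check (assuming sympy): for u = r^a with r = sqrt(|x|^2 + y^2),
#   y^{-b} div(y^b grad u) = u_rr + ((n+b)/r) u_r.
# Illustrated with n = 2 horizontal variables x1, x2; a and b stay symbolic.
import sympy as sp

x1, x2, y, a, b = sp.symbols('x1 x2 y a b', positive=True)
r = sp.sqrt(x1**2 + x2**2 + y**2)
u = r**a

grad = [sp.diff(u, z) for z in (x1, x2, y)]
delta_b = sp.simplify(sum(sp.diff(y**b * g, z) for g, z in zip(grad, (x1, x2, y))) / y**b)

n = 2
target = a * (a - 1) * r**(a - 2) + (n + b) / r * a * r**(a - 1)   # u_rr + ((n+b)/r) u_r
print(sp.simplify(delta_b - target))    # expected output: 0
\end{verbatim}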
Then \be\nonumber\aligned w=&\Delta_b v=v_{rr}+\frac{n+b}{r}v_r+\frac{1}{r^2}\theta_1^{-b} \mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}v)\\ =&\pa_{rrrr}u+\frac{2(n+b)}{r}\pa_{rrr}u+\frac{(n+b)(n+b-2)}{r^2}\pa_{rr}u-\frac{(n+b)(n+b-2)}{r^3}\pa_r u\\ &+r^{-4}\theta_1^{-b}\mathbf{div}_{S^n}\big(\theta_1^b\nabla_{S^n}(\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u))\big)\\ &+2r^{-2}\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}(u_{rr}+\frac{n+b-2}{r}u_r))\\ &-2(n+b-3)r^{-4}\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u).\\ \endaligned\ee On $\R^{n+1}_+\cap \pa B_1$, we have \be\nonumber\aligned w=&\underbrace{\pa_{rrrr}u+2(n+b)\pa_{rrr}u+(n+b)(n+b-2)\pa_{rr}u-(n+b)(n+b-2)\pa_r u}\\ &\underbrace{+\theta_1^{-b}\mathbf{div}_{S^n}\big(\theta_1^b\nabla_{S^n}(\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u))\big)}\\ &\underbrace{+2\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}(u_{rr}+\frac{n+b-2}{r}u_r))}\\ &\underbrace{-2(n+b-3)\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u)}\\ :=&I(u)+J(u)+K(u)+L(u).\\ \endaligned\ee With these notations, we can rewrite the term $\overline{E}_{d2}(u_e^\la,1)$ appearing in \eqref{8overlineE} as follows: \be\label{8ijk}\aligned &\overline{E}_{d2}(u_e^\la,1)\\ &=\int_{\R^{n+1}_+\cap \pa B_1}\theta_1^b(\la\frac{d w_e^\la}{d\la}\frac{d u_e^\la}{d\la}-\la w_e^\la\frac{d^2 u_e^\la}{d\la^2})-5\theta_1^b w_e^\la\frac{d u_e^\la}{d\la}\\ &=\underbrace{\int_{\R^{n+1}_+\cap\pa B_1}\la\theta_1^b\frac{d}{d\la}I(u_e^\la)\frac{d u_e^\la}{d\la}-\la\theta_1^b I(u_e^\la)\frac{d^2 u_e^\la}{d\la^2}-5\theta_1^bI(u_e^\la)\frac{du_e^\la}{d\la}}\\ &\quad \underbrace{+\int_{\R^{n+1}_+\cap\pa B_1}\la\theta_1^b\frac{d}{d\la}J(u_e^\la)\frac{d u_e^\la}{d\la}-\la\theta_1^b J(u_e^\la)\frac{d^2 u_e^\la}{d\la^2}-5\theta_1^bJ(u_e^\la)\frac{du_e^\la}{d\la}}\\ &\quad \underbrace{+\int_{\R^{n+1}_+\cap\pa B_1}\la\theta_1^b\frac{d}{d\la}K(u_e^\la)\frac{d u_e^\la}{d\la}-\la\theta_1^b K(u_e^\la)\frac{d^2 u_e^\la}{d\la^2}-5\theta_1^bK(u_e^\la)\frac{du_e^\la}{d\la}}\\ &\quad \underbrace{+\int_{\R^{n+1}_+\cap \pa B_1}\la\theta_1^b\frac{d}{d\la}L(u_e^\la)\frac{du_e^\la}{d\la}-\la\theta_1^bL(u_e^\la)\frac{d^2u_e^\la}{d\la^2}-5\theta_1^bL(u_e^\la)\frac{du_e^\la}{d\la}}, \endaligned\ee and we write \be\nonumber\aligned \overline{E}_{d2}(u_e^\la,1):&=\mathcal{I}+\mathcal{J}+\mathcal{K}+\mathcal{L}\\ :&=I_1+I_2+I_3+J_1+J_2+J_3+K_1+K_2+K_3+L_1+L_2+L_3, \endaligned\ee where $I_1,I_2,I_3,J_1,J_2,J_3,K_1,K_2,K_3,L_1,L_2,L_3$ correspond successively to the $12$ terms in \eqref{8ijk}.
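The purely radial part $I(u)$ isolated above can also be checked symbolically: writing $\Delta_b^{\mathrm{rad}}g:=g''+\frac{n+b}{r}g'$ for the action of $\Delta_b$ on radial functions, applying it twice should reproduce the first four terms in the expansion of $w=\Delta_b v$ above. A minimal sketch, assuming \texttt{sympy} (with $A$ standing for $n+b$):
\begin{verbatim}
# Symbolic check (assuming sympy) of the radial part of Delta_b^2:
# (d_rr + (A/r) d_r)^2 f = f'''' + (2A/r) f''' + (A(A-2)/r^2) f'' - (A(A-2)/r^3) f'.
import sympy as sp

r, A = sp.symbols('r A', positive=True)   # A stands for n + b
f = sp.Function('f')(r)

def delta_b_rad(g):
    return sp.diff(g, r, 2) + A / r * sp.diff(g, r)

lhs = delta_b_rad(delta_b_rad(f))
rhs = (sp.diff(f, r, 4) + 2 * A / r * sp.diff(f, r, 3)
       + A * (A - 2) / r**2 * sp.diff(f, r, 2) - A * (A - 2) / r**3 * sp.diff(f, r))
print(sp.simplify(lhs - rhs))    # expected output: 0
\end{verbatim}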
By the conclusions of subsection $3.2$, we have \be\label{8iuiu}\aligned I(u_e^\la)&=\pa_{rrrr} u_e^\la+2(n+b)\pa_{rrr} u_e^\la\\ &\quad +(n+b)(n+b-2)\pa_{rr} u_e^\la-(n+b)(n+b-2)\pa_r u_e^\la\\ &=\la^4\frac{d^4 u_e^\la}{d\la^4}+\la^3\big(2(n+b)-\frac{8s}{p-1}\big)\frac{d^3 u_e^\la}{d\la^3}\\ &\quad+\la^2\big[\frac{12s}{p-1}(1+\frac{2s}{p-1})-(n+b)\frac{12s}{p-1}+(n+b)(n+b-2)\big]\frac{d^2 u_e^\la}{d\la^2}\\ &\quad+\la\big[-\frac{8s}{p-1}(1+\frac{2s}{p-1})(2+\frac{2s}{p-1})+2(n+b)\frac{6s}{p-1}(1+\frac{2s}{p-1})\\ &\quad+(n+b)(n+b-2)(-\frac{4s}{p-1}-1)\big]\frac{d u_e^\la}{d\la}\\ &\quad+\big[(1+\frac{2s}{p-1})(2+\frac{2s}{p-1})(3+\frac{2s}{p-1})\frac{2s}{p-1}\\ &\quad-(n+b)(1+\frac{2s}{p-1})(2+\frac{2s}{p-1})\frac{4s}{p-1}\\ &\quad+(n+b)(n+b-2)(\frac{2s}{p-1}+2)\frac{2s}{p-1}\big]u_e^\la.\\ \endaligned\ee For convenience, we write \be\label{8iue} I(u_e^\la)=\la^4\frac{d^4 u_e^\la}{d\la^4}+\la^3 \delta_1\frac{d^3 u_e^\la}{d\la^3}+\la^2\delta_2\frac{d^2 u_e^\la}{d\la^2}+\la\delta_3\frac{du_e^\la}{d\la}+\delta_4 u_e^\la, \ee where $\delta_i$ are the corresponding coefficients of $\la^i\frac{d^i u_e^\la}{d\la^i}$ appearing in \eqref{8iuiu} for $i=1,2,3,4$. Now taking the derivative of \eqref{8iue} with respect to $\la$, we get \be\label{8diue}\aligned \frac{d}{d\la}I(u_e^\la)=&\la^4\frac{d^5 u_e^\la}{d\la^5}+\la^3 (\delta_1+4)\frac{d^4 u_e^\la}{d\la^4}+\la^2(3\delta_1+\delta_2)\frac{d^3 u_e^\la}{d\la^3}\\ &+\la(2\delta_2+\delta_3)\frac{d^2u_e^\la}{d\la^2}+(\delta_3+\delta_4)\frac{du_e^\la}{d\la} \endaligned\ee and \be\label{8urr}\aligned \pa_{rr}& u_e^\la+(n+b-2)\pa_r u_e^\la \\ =&\la^2\frac{d^2u_e^\la}{d\la^2}+\la(n+b-2-\frac{4s}{p-1})\frac{du_e^\la}{d\la}+\frac{2s}{p-1}(3+\frac{2s}{p-1}-n-b)u_e^\la\\ :=&\la^2\frac{d^2 u_e^\la}{d\la^2}+\la\alpha\frac{du_e^\la}{d\la}+\beta u_e^\la. \endaligned\ee Hence, \be\label{8urrla}\aligned \frac{d}{d\la}[\pa_{rr}& u_e^\la+(n+b-2)\pa_r u_e^\la]=\la^2\frac{d^3 u_e^\la}{d\la^3}+\la(\alpha+2)\frac{d^2 u_e^\la}{d\la^2}+(\alpha+\beta)\frac{du_e^\la}{d\la}, \endaligned\ee where $\alpha=n+b-2-\frac{4s}{p-1}$ and $\beta=\frac{2s}{p-1}(3+\frac{2s}{p-1}-n-b)$. \subsection{The computations of $I_1,I_2,I_3$ and $\mathcal{I}$} \be\label{8i1}\aligned I_1:=&\int_{\R^{n+1}_+\cap\pa B_1}\la \theta_1^b\frac{d}{d\la}I(u_e^\la)\frac{du_e^\la}{d\la}\\ =&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big(\la^5\frac{d^5 u_e^\la}{d\la^5}+\la^4(4+\delta_1)\frac{d^4 u_e^\la}{d\la^4}+\la^3(3\delta_1+\delta_2)\frac{d^3 u_e^\la}{d\la^3}\\ &+\la^2(2\delta_2+\delta_3)\frac{d^2 u_e^\la}{d\la^2}+\la(\delta_3+\delta_4)\frac{du_e^\la}{d\la}\big)\frac{du_e^\la}{d\la}\\ =&\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[\la^5\frac{d^4 u_e^\la}{d\la^4}\frac{d u_e^\la}{d\la}-\la^5\frac{d^3 u_e^\la}{d\la^3}\frac{d^2 u_e^\la}{d\la^2}+(\delta_1-1)\la^4\frac{d^3 u_e^\la}{d\la^3}\frac{d u_e^\la}{d\la}\\ &+(4-\delta_1+\delta_2)\la^3\frac{d^2 u_e^\la}{d\la^2}\frac{d u_e^\la}{d\la}+\frac{3\delta_1-\delta_2+\delta_3-12}{2}\la^2(\frac{d u_e^\la}{d\la})^2\big]\\ &+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b \big[(12-3\delta_1+\delta_2+\delta_4)\la(\frac{d u_e^\la}{d\la})^2\\ &+(\delta_1-4-\delta_2)\la^3(\frac{d^2 u_e^\la}{d\la^2})^2+\la^5(\frac{d^3 u_e^\la}{d\la^3})^2\big]\\ &+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(6-\delta_1)\la^4\frac{d^3 u_e^\la}{d\la^3}\frac{d^2 u_e^\la}{d\la^2}, \endaligned \ee where $\delta_i (i=1,2,3,4) $ are defined in \eqref{8iuiu} and \eqref{8iue}.
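The identity \eqref{8iue} with the coefficients \eqref{8delta} can be verified symbolically on power profiles $u(t)=t^m$ with $m$ symbolic, which already determines all the coefficients. A minimal sketch, assuming \texttt{sympy}, with $k$ standing for $\frac{2s}{p-1}$ and $A$ for $n+b$ (purely illustrative):
\begin{verbatim}
# Symbolic check (assuming sympy) of I(u_e^lam) evaluated at r = 1:
# it should equal lam^4 d^4/dlam^4 + delta1 lam^3 d^3/dlam^3 + delta2 lam^2 d^2/dlam^2
#                 + delta3 lam d/dlam + delta4, with the deltas of the paper
# rewritten in terms of k = 2s/(p-1) and A = n + b.
import sympy as sp

lam, r, m, k, A = sp.symbols('lam r m k A', positive=True)
u_lam = lam**k * (lam * r)**m            # u_e^lam along a ray, with profile t^m

I_rad = (sp.diff(u_lam, r, 4) + 2 * A * sp.diff(u_lam, r, 3)
         + A * (A - 2) * sp.diff(u_lam, r, 2) - A * (A - 2) * sp.diff(u_lam, r)).subs(r, 1)

d1 = 2 * A - 4 * k
d2 = A * (A - 2) - 6 * A * k + 6 * k * (1 + k)
d3 = -4 * k * (1 + k) * (2 + k) + 6 * A * k * (1 + k) - A * (A - 2) * (1 + 2 * k)
d4 = (3 + k) * (2 + k) * (1 + k) * k - 2 * A * (1 + k) * (2 + k) * k + A * (A - 2) * (2 + k) * k

w = u_lam.subs(r, 1)
rhs = (lam**4 * sp.diff(w, lam, 4) + d1 * lam**3 * sp.diff(w, lam, 3)
       + d2 * lam**2 * sp.diff(w, lam, 2) + d3 * lam * sp.diff(w, lam) + d4 * w)
print(sp.simplify(sp.expand(I_rad - rhs)))    # expected output: 0
\end{verbatim}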
Denote $f:=u_e^\la$ and $f':=\frac{du_e^\la}{d\la}$; we have used the following differential identities: \be\nonumber\aligned \la^5 f'''''f'=&\big[\la^5f''''f'-\la^5f'''f''-5\la^4f'''f'+20\la^3f''f'-30\la^2f'f'\big]'\\ &+60\la(f')^2-20\la^3(f'')^2+\la^5(f''')^2+10\la^4f'''f'', \endaligned\ee \be\nonumber\aligned \la^4f''''f'=\big[\la^4f'''f'-4\la^3f''f'+6\la^2f'f'\big]'-12\la(f')^2+4\la^3(f'')^2-\la^4f'''f'', \endaligned\ee \be\nonumber\aligned \la^3f'''f'=\big[\la^3f''f'-\frac{3\la^2}{2}f'f'\big]'+3\la(f')^2-\la^3(f'')^2, \endaligned\ee and \be\nonumber\aligned \la^2 f''f'=\big[\frac{\la^2}{2}f'f'\big]'-\la(f')^2. \endaligned\ee \be\label{8i2}\aligned &I_2:=-\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b I(u_e^\la)\frac{d^2 u_e^\la}{d\la^2}\\ &=-\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b \big(\la^4\frac{d^4 u_e^\la}{d\la^4}+\la^3 \delta_1\frac{d^3 u_e^\la}{d\la^3}+\la^2\delta_2\frac{d^2 u_e^\la}{d\la^2}+\la\delta_3\frac{du_e^\la}{d\la}+\delta_4 u_e^\la\big)\frac{d^2 u_e^\la}{d\la^2}\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[-\la^5\frac{d^3 u_e^\la}{d\la^3}\frac{d^2 u_e^\la}{d\la^2}-\delta_4\la\frac{d u_e^\la}{d\la}u_e^\la\big]\\ &\quad+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[\la^5(\frac{d^3 u_e^\la}{d\la^3})^2-\delta_2\la^3(\frac{d^2 u_e^\la}{d\la^2})^2+\delta_4\la(\frac{d u_e^\la}{d\la})^2\big]\\ &\quad +\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[(5-\delta_1)\la^4\frac{d^3 u_e^\la}{d\la^3}\frac{d^2 u_e^\la}{d\la^2}-\delta_3\la^2\frac{d^2 u_e^\la}{d\la^2}\frac{d u_e^\la}{d\la}+\delta_4\frac{d u_e^\la}{d\la}u_e^\la\big]. \endaligned\ee Here we have used that \be\nonumber\aligned -\la^5f''''f''=\big[-\la^5f'''f''\big]'+5\la^4f'''f''+\la^5(f''')^2 \endaligned\ee and \be\nonumber\aligned -\la f''f=[-\la f'f]'+f'f+\la(f')^2. \endaligned\ee \be\label{8iu3}\aligned I_3:=&-5\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b I(u_e^\la)\frac{d u_e^\la}{d\la}\\ =&-5\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[\la^4\frac{d^4 u_e^\la}{d\la^4}+\la^3 \delta_1\frac{d^3 u_e^\la}{d\la^3}+\la^2\delta_2\frac{d^2 u_e^\la}{d\la^2}+\la\delta_3\frac{du_e^\la}{d\la}+\delta_4 u_e^\la\big]\frac{d u_e^\la}{d\la}\\ =&\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[-5\la^4\frac{d^3 u_e^\la}{d\la^3}\frac{d u_e^\la}{d\la}+(20-5\delta_1)\la^3\frac{d^2 u_e^\la}{d\la^2}\frac{d u_e^\la}{d\la}\big]\\ &+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[(5\delta_1-20)\la^3(\frac{d^2 u_e^\la}{d\la^2})^2-5\delta_3\la(\frac{d u_e^\la}{d\la})^2\big]\\ &+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[5\la^4\frac{d^3 u_e^\la}{d\la^3}\frac{d^2 u_e^\la}{d\la^2}+(15\delta_1-60-5\delta_2)\la^2\frac{d^2 u_e^\la}{d\la^2}\frac{d u_e^\la}{d\la}-5\delta_4\frac{d u_e^\la}{d\la}u_e^\la\big]. \endaligned\ee Here we have used that \be\nonumber\aligned -5\la^4 f''''f'=\big[-5\la^4f'''f'+20\la^3f''f'\big]'-20\la^3(f'')^2-60\la^2f''f'+5\la^4f'''f'' \endaligned\ee and \be\nonumber\aligned -\la^3 f'''f'=\big[-\la^3f''f'\big]'+3\la^2 f''f'+\la^3 (f'')^2. \endaligned\ee Now, adding up $I_1,I_2,I_3$ and integrating by parts further, we obtain the term $\mathcal{I}$.
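Each of the elementary differential identities above can be verified symbolically; a minimal sketch for the first one, assuming \texttt{sympy} (purely illustrative):
\begin{verbatim}
# Symbolic check (assuming sympy) of the first identity:
# lam^5 f''''' f' = [ lam^5 f'''' f' - lam^5 f''' f'' - 5 lam^4 f''' f'
#                     + 20 lam^3 f'' f' - 30 lam^2 (f')^2 ]'
#                   + 60 lam (f')^2 - 20 lam^3 (f'')^2 + lam^5 (f''')^2 + 10 lam^4 f''' f''.
import sympy as sp

lam = sp.Symbol('lam', positive=True)
f = sp.Function('f')(lam)
d = lambda g, j=1: sp.diff(g, lam, j)

bracket = (lam**5 * d(f, 4) * d(f) - lam**5 * d(f, 3) * d(f, 2) - 5 * lam**4 * d(f, 3) * d(f)
           + 20 * lam**3 * d(f, 2) * d(f) - 30 * lam**2 * d(f) ** 2)
rhs = (sp.diff(bracket, lam) + 60 * lam * d(f) ** 2 - 20 * lam**3 * d(f, 2) ** 2
       + lam**5 * d(f, 3) ** 2 + 10 * lam**4 * d(f, 3) * d(f, 2))
print(sp.simplify(lam**5 * d(f, 5) * d(f) - rhs))    # expected output: 0
\end{verbatim}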
\be\label{8I}\aligned \mathcal{I}:=& I_1+I_2+I_3\\ =&\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[\la^5\frac{d^4 u_e^\la}{d\la^4}\frac{d u_e^\la}{d\la}-2\la^5\frac{d^3 u_e^\la}{d\la^3}\frac{d^2 u_e^\la}{d\la^2}\\ &+(\delta_1-6)\la^4\frac{d^3 u_e^\la}{d\la^3}\frac{d u_e^\la}{d\la}+(24-6\delta_1+\delta_2)\la^3\frac{d^2 u_e^\la}{d\la^2}\frac{d u_e^\la}{d\la}\\ &+(9\delta_1-3\delta_2-36)\la^2\frac{d u_e^\la}{d\la}\frac{d u_e^\la}{d\la}\\ &+(8-\delta_1)\la^4(\frac{d^2 u_e^\la}{d\la^2})^2-\delta_4\la\frac{d u_e^\la}{d\la}u_e^\la-2\delta_4(u_e^\la)^2\big]\\ &+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\Big( 2\la^5(\frac{d^3u_e^\la}{d\la^3})^2+(10\delta_1-2\delta_2-56)\la^3(\frac{d^2u_e^\la}{d\la^2})^2\\ &+(-18\delta_1+\delta_2-4\delta_3+2\delta_4+72)\la(\frac{du_e^\la}{d\la})^2 \Big). \endaligned\ee Since $u_e^\la(X)=\la^{\frac{2s}{p-1}}u_e(\la X)$, we have the following \be\nonumber\aligned \la^4&\frac{d^4 u_e^\la}{d\la^4}=\la^{\frac{2s}{p-1}}\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1)(\frac{2s}{p-1}-2)(\frac{2s}{p-1}-3)u_e(\la X)\\ &+\frac{8s}{p-1}(\frac{2s}{p-1}-1)(\frac{2s}{p-1}-2)r\la\pa_r u_e(\la X)\\ &+\frac{12s}{p-1}(\frac{2s}{p-1}-1)r^2\la^2\pa_{rr}u_e(\la X)\\ &+\frac{8s}{p-1}r^3\la^3\pa_{rrr}u_e(\la X)+r^4\la^4\pa_{rrrr}u_e(\la X)\big],\\ \endaligned\ee and \be\nonumber\aligned \la^3\frac{d^3 u_e^\la}{d\la^3}&=\la^{\frac{2s}{p-1}}\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1)(\frac{2s}{p-1}-2)u_e(\la X)\\ &+\frac{6s}{p-1}(\frac{2s}{p-1}-1)r\la\pa_r u_e(\la X)\\ &+\frac{6s}{p-1}r^2\la^2 \pa_{rr} u_e(\la X)+r^3\la^3\pa_{rrr} u_e(\la X)\big], \endaligned\ee \be\nonumber\aligned &\la^2\frac{d^2 u_e^\la}{d\la^2}\\ &=\la^{\frac{2s}{p-1}}\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1) u_e(\la X)+\frac{4s}{p-1}r\la\pa_r u_e(\la X)+r^2\la^2\pa_{rr}u_e(\la X)\big] \endaligned\ee and \be\nonumber\aligned \la\frac{d u_e^\la}{d\la}=\la^{\frac{2s}{p-1}}\big[\frac{2s}{p-1}u_e(\la X)+r\la\pa_r u_e(\la X)\big]. 
\endaligned\ee Hence, by scaling we have the following derivatives: \be\nonumber\aligned \frac{d}{d\la}&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^5\frac{d^4 u_e^\la}{d\la^4}\frac{d u_e^\la}{d\la}\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1)(\frac{2s}{p-1}-2)(\frac{2s}{p-1}-3)u_e\\ &\quad+\frac{8s}{p-1}(\frac{2s}{p-1}-1)(\frac{2s}{p-1}-2)\la\pa_r u_e+\frac{12s}{p-1}(\frac{2s}{p-1}-1)\la^2\pa_{rr}u_e\\ &\quad+\frac{8s}{p-1}\la^3\pa_{rrr}u_e+\la^4\pa_{rrrr}u_e\big]\big[\frac{2s}{p-1}u_e+r\la\pa_r u_e\big]; \endaligned\ee \be\label{8Iscaling}\aligned \frac{d}{d\la}&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^5\frac{d^3 u_e^\la}{d\la^3}\frac{d^2 u_e^\la}{d\la^2}\quad\quad\quad\quad\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1)(\frac{2s}{p-1}-2)u_e\\ &+\frac{6s}{p-1}(\frac{2s}{p-1}-1)\la\pa_r u_e +\frac{6s}{p-1}\la^2 \pa_{rr} u_e+\la^3\pa_{rrr} u_e\big]\\ &\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1) u_e+\frac{4s}{p-1}\la\pa_r u_e+\la^2\pa_{rr}u_e\big]; \endaligned\ee \be\nonumber\aligned \frac{d}{d\la}&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^4\frac{d^3 u_e^\la}{d\la^3}\frac{d u_e^\la}{d\la}\quad\quad\quad\quad\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1)(\frac{2s}{p-1}-2)u_e\\ &+\frac{6s}{p-1}(\frac{2s}{p-1}-1)\la\pa_r u_e +\frac{6s}{p-1}\la^2 \pa_{rr} u_e+\la^3\pa_{rrr} u_e\big]\\ &\big[\frac{2s}{p-1}u_e+\la\pa_r u_e\big]; \endaligned\ee \be\nonumber\aligned \frac{d}{d\la}&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^3\frac{d^2 u_e^\la}{d\la^2}\frac{d u_e^\la}{d\la}\quad\quad\quad\quad\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b \big[\frac{2s}{p-1}(\frac{2s}{p-1}-1) u_e\\ &+\frac{4s}{p-1}\la\pa_r u_e +\la^2\pa_{rr}u_e][\frac{2s}{p-1}u_e+\la\pa_r u_e\big] \endaligned\ee and \be\nonumber\aligned \frac{d}{d\la}&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^2\frac{d u_e^\la}{d\la}\frac{d u_e^\la}{d\la}\quad\quad\quad\quad\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b \big[\frac{2s}{p-1}u_e+\la\pa_r u_e\big]^2. \endaligned\ee Further, \be\nonumber\aligned \frac{d}{d\la}&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^4\frac{d^2 u_e^\la}{d\la^2}\frac{d^2 u_e^\la}{d\la^2}\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b \big[\frac{2s}{p-1}(\frac{2s}{p-1}-1) u_e\\ &+\frac{4s}{p-1}\la\pa_r u_e+\la^2\pa_{rr}u_e\big]^2, \endaligned\ee \be\nonumber\aligned \frac{d}{d\la}&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la\frac{d u_e^\la}{d\la}u_e^\la\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b \big[\frac{2s}{p-1}u_e+\la\pa_r u_e\big]u_e, \endaligned\ee and \be\nonumber\aligned \frac{d}{d\la}&\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b u_e^\la=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b u_e^2. 
\endaligned\ee \subsection{The computations of $J_i,K_i,L_i (i=1,2,3)$ and $\mathcal{J},\mathcal{K},\mathcal{L}$} Firstly, \be\label{8j1}\aligned J_1:=&\int_{\R^{n+1}_+\cap\pa B_1}\la\theta_1^b \frac{d}{d\la}J(u_e^\la)\frac{du_e^\la}{d\la}=\int_{\R^{n+1}_+\cap\pa B_1}\la\theta_1^b J(\frac{du_e^\la}{d\la})\frac{du_e^\la}{d\la}\\ =&\la\int_{\R^{n+1}_+\cap\pa B_1} \mathbf{div}_{S^n}\big(\theta_1^b\nabla_{S^n}(\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}\frac{du_e^\la}{d\la}))\big)\frac{du_e^\la}{d\la}\\ =&-\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\nabla_{S^n}\big(\theta_1^{-b} \mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}\frac{du_e^\la}{d\la})\big)\nabla_{S^n}\frac{du_e^\la}{d\la}\\ =&\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{-b}\big[\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}\frac{du_e^\la}{d\la})\big]^2\\ =&\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\Delta_{S^n}\frac{du_e^\la}{d\la})^2, \endaligned\ee here we have used integrate by part formula on the unit sphere $S^n$. \be\label{8j2}\aligned J_2:=&-\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b J(u_e^\la)\frac{d^2 u_e^\la}{d\la^2}\\ =&-\la\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}\big(\theta_1^b\nabla_{S^n}(\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la))\big)\frac{d^2 u_e^\la}{d\la^2}\\ =&\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\nabla_{S^n}(\theta_1^{-b}\mathbf{ div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\nabla_{S^n}\frac{d^2u_e^\la}{d\la^2}\\ =&-\la\int_{\R^{n+1}_+\cap\pa B_1} \theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la) \frac{d^2}{d\la^2}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\\ =&\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}-\la \theta_1^{-b}\big[\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\big]\frac{d}{d\la}\big[\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\big]\\ &\quad+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{-b}\mathbf{ div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\cdot \frac{d}{d\la}\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\\ &\quad+\la\int_{\R^{n+1}_+\cap\pa B_1} \theta_1^{-b}\big[\frac{d}{d\la}\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\big]^2\\ =&\frac{d}{d\la}\Big(\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\Big( -\frac{1}{2}\la\frac{d}{d\la}(\Delta_{S^n}u_e^\la)^2+\frac{1}{2}(\Delta_{S^n}u_e^\la)^2 \Big)\Big)\\ &+\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{b}\la(\Delta_{S^n}\frac{du_e^\la}{d\la})^2, \endaligned\ee here we denote that $g= \mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la),g'=\frac{d}{d\la}\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)$ and we have used that \be\nonumber -\la g g'=\big[-g g'\big]'+g g'+\la (g')^2=\big[-g g'+\frac{1}{2}g^2\big]'+\la(g')^2. \ee Further, \be\label{8j3}\aligned J_3:=&-5\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b J(u_e^\la)\frac{du_e^\la}{d\la}\\ =&-5\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}\big(\theta_1^b \nabla_{S^n}(\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u))\big)\frac{du_e^\la}{d\la}\\ =&5\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b \nabla_{S^n}\big(\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\big)\nabla_{S^n}\frac{d u_e^\la}{d\la}\\ =&-5\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{-b}\mathbf{ div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\frac{d}{d\la}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\\ =&-\frac{5}{2}\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\Delta_{S^n}u_e^\la)^2. 
\endaligned\ee Therefore, combining \eqref{8j1}, \eqref{8j2} and \eqref{8j3}, we get \be\label{8j}\aligned \mathcal{J}:=&J_1+J_2+J_3\\ =&2\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{-b}\big[\frac{d}{d\la}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\big]^2\\ &\quad-4\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\frac{d}{d\la}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\\ &\quad+\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}-\la \theta_1^{-b}\big[\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\big]\frac{d}{d\la}\big[\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\big]\\ =&2\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{-b}\big[\frac{d}{d\la}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\big]^2\\ &\quad-2\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\\ &\quad+\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}-\la \theta_1^{-b}\big[\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\big]\frac{d}{d\la}\big[\mathbf{div}_{S^n}(\theta_1^b \nabla_{S^n}u_e^\la)\big]\\ =&\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{b}\Big(-2(\Delta_{S^n}u_e^\la)^2-\frac{1}{2}\la\frac{d}{d\la}(\Delta_{S^n}u_e^\la)^2\Big)\\ &\quad+2\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^{b}(\Delta_{S^n}\frac{du_e^\la}{d\la})^2. \endaligned\ee Note that \be\label{8Jscaling}\aligned &\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b \big[\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\big]^2\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}\big(\la^2\Delta_b u_e-\la^2\pa_{rr}u_e-(n+b)\la\pa_r u_e\big)^2, \endaligned\ee and \be\nonumber\aligned &\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la \frac{d}{d\la}\big[\theta_1^{-b}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\big]^2\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-4}\frac{d}{d\la}\big(\la^2\Delta_b u_e-\la^2\pa_{rr}u_e-(n+b)\la\pa_r u_e\big)^2. \endaligned\ee Next we compute $K_1,K_2,K_3$ and $\mathcal{K}$. \be\label{8k1}\aligned &K_1\\ :=&\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\frac{d}{d\la}K(u_e^\la)\frac{du_e^\la}{d\la}\\ =&2\la\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}\big(\theta_1^b\nabla_{S^n}(\frac{d}{d\la}(\pa_{rr}+(n+b-2)\pa_r)u_e^\la)\big)\frac{du_e^\la}{d\la}\\ =&2\la\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}\big(\theta_1^b\nabla_{S^n}(\la^3\frac{d^3 u_e^\la}{d\la^3}+\la^2(\alpha+2)\frac{d^2 u_e^\la}{d\la^2}+\la(\alpha+\beta)\frac{d u_e^\la}{d\la})\big)\frac{du_e^\la}{d\la}\\ =&-2\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\nabla_{S^n}\big(\la^3\frac{d^3 u_e^\la}{d\la^3}+\la^2(\alpha+2)\frac{d^2 u_e^\la}{d\la^2}+\la(\alpha+\beta)\frac{d u_e^\la}{d\la}\big)\nabla_{S^n}\frac{du_e^\la}{d\la}\\ =&\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}-\la^3\theta_1^b\frac{d}{d\la}\big(\frac{d}{d\la}\nabla_{S^n} u_e^\la\big)^2 +(2-2\alpha)\la^2\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b (\frac{d}{d\la}\nabla_{S^n} u_e^\la)\\ \cdot &(\frac{d^2}{d\la^2}\nabla_{S^n} u_e^\la) +2\la^3\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\frac{d^2}{d\la^2}\nabla_{S^n} u_e^\la)^2\\ &-(2\alpha+2\beta)\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\frac{d}{d\la}\nabla_{S^n} u_e^\la)^2. \endaligned\ee Here we denote $h:=\nabla_{S^n} u_e^\la$, $h':=\frac{d}{d\la}\nabla_{S^n} u_e^\la$, and have used that \be\nonumber -\la^3h'h'''=\big[-\frac{\la^3}{2}\frac{d}{d\la}(h')^2\big]'+3\la^2 h'h''+\la^3(h'')^2.
\ee Next, \be\label{8k2}\aligned K_2:=&-\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b K(u_e^\la)\frac{d^2 u_e^\la}{d\la^2}\\ =&-2\la\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}\big(\la^2\frac{d^2 u_e^\la}{d\la^2}+\la\alpha\frac{d u_e^\la}{d\la}+\beta u_e^\la)\big)\frac{d^2 u_e^\la}{d\la^2}\\ =&2\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\nabla_{S^n}\big(\la^2\frac{d^2 u_e^\la}{d\la^2}+\la\alpha\frac{d u_e^\la}{d\la}+\beta u_e^\la\big)\nabla_{S^n}\frac{d^2 u_e^\la}{d\la^2}\\ =&\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[2\beta\la\nabla_{S^n}u_e^\la \frac{d}{d\la}\nabla_{S^n}u_e^\la-\beta(\nabla_{S^n}u_e^\la)^2\big]\\ &+2\la^3\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\frac{d^2}{d\la^2}\nabla_{S^n}u_e^\la)^2-2\la\beta\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\frac{d}{d\la}\nabla_{S^n}u_e^\la)^2\\ &+2\la^2\alpha\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\frac{d}{d\la}\nabla_{S^n}u_e^\la\frac{d^2}{d\la^2}\nabla_{S^n}u_e^\la. \endaligned\ee Here we have used that \be\nonumber 2\la h h'' = [2\la h h'-h^2]'-2\la(h')^2. \ee Further, \be\label{8k3}\aligned K_3:=&-5\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b K(u_e^\la)\frac{du_e^\la}{d\la}\\ =&-10\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}\big(\theta_1^b\nabla_{S^n}(\la^2\frac{d^2 u_e^\la}{d\la^2}+\la\alpha\frac{du_e^\la}{d\la}+\beta u_e^\la)\big)\frac{du_e^\la}{d\la}\\ =&10\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\nabla_{S^n}\big(\la^2\frac{d^2 u_e^\la}{d\la^2}+\la\alpha\frac{du_e^\la}{d\la}+\beta u_e^\la\big)\nabla_{S^n}\frac{du_e^\la}{d\la}\\ =&\frac{d}{d\la}\big[5\beta\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b \nabla_{S^n} u_e^\la \nabla_{S^n} u_e^\la\big]+10\la\alpha\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big(\frac{d}{d\la}\nabla_{S^n} u_e^\la\big)^2\\ &+10\la^2\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b \frac{d}{d\la}\nabla_{S^n} u_e^\la \frac{d^2}{d\la^2}\nabla_{S^n} u_e^\la. \endaligned\ee Now, combining \eqref{8k1}, \eqref{8k2} and \eqref{8k3}, we get \be\label{8k}\aligned \mathcal{K}:=&K_1+K_2+K_3\\ =&\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\Big[-\la^3\frac{d}{d\la}(\frac{d}{d\la}\nabla_{S^n} u_e^\la)^2\\ &+2\beta\la\nabla_{S^n} u_e^\la\frac{d}{d\la}\nabla_{S^n} u_e^\la+4\beta(\nabla_{S^n} u_e^\la)^2+6\la^2(\nabla_{S^n}\frac{du_e^\la}{d\la})^2\Big]\\ &+4\la^3\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\frac{d^2}{d\la^2}\nabla_{S^n} u_e^\la)^2\\ &+(8\alpha-4\beta-12)\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\frac{d}{d\la}\nabla_{S^n} u_e^\la)^2.\\ \endaligned\ee Notice that by scaling we have \be\label{8Kscaling}\aligned &\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\nabla_{S^n} u_e^\la)^2\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b\big[\la^2|\nabla u_e|^2-\la^2|\pa_r u_e|^2\big]. \endaligned\ee \be\nonumber\aligned &\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la\frac{d}{d\la}(\nabla_{S^n} u_e^\la)^2\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-4}y^b\frac{d}{d\la}\big[\la^2|\nabla u_e|^2-\la^2|\pa_r u_e|^2\big] \endaligned\ee and \be\nonumber\aligned &\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\la^3\frac{d}{d\la}(\frac{d}{d\la}\nabla_{S^n} u_e^\la)^2\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-4}y^b\frac{d}{d\la}\big[\frac{2s}{p-1}\la\nabla u_e+\la^2\nabla \pa_r u_e\big]^2. \endaligned\ee Finally, we compute $\mathcal{L}$.
\be\nonumber\aligned &L_1:=\int_{\R^{n+1}_+\cap\pa B_1}\la\theta_1^b\frac{d}{d\la}L(u_e^\la)\frac{du_e^\la}{d\la}\\ &=-2(n+b-3)\la\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}\frac{du_e^\la}{d\la})\frac{du_e^\la}{d\la}\\ &=2(n+b-3)\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b(\nabla_{S^n}\frac{du_e^\la}{d\la})^2; \endaligned\ee \be\nonumber\aligned &L_2:=\int_{\R^{n+1}_+\cap\pa B_1}-\la\theta_1^bL(u_e^\la)\frac{d^2u_e^\la}{d\la^2}\\ &=2(n+b-3)\la\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\frac{d^2 u_e^\la}{d\la^2}\\ &=-2(n+b-3)\int_{\R^{n+1}_+\cap\pa B_1}\la\theta_1^b\nabla_{S^n}u_e^\la\frac{d^2}{d\la^2}\nabla_{S^n}u_e^\la\\ &=-(n+b-3)\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\frac{d}{d\la}\big[2\la\nabla_{S^n}u_e^\la\nabla_{S^n}\frac{du_e^\la}{d\la}-(\nabla_{S^n}u_e^\la)^2\big]\\ &+2(n+b-3)\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b|\nabla_{S^n}\frac{du_e^\la}{d\la}|^2;\\ \endaligned\ee \be\nonumber\aligned &L_3:=\int_{\R^{n+1}_+\cap\pa B_1}-5\theta_1^bL(u_e^\la)\frac{du_e^\la}{d\la}\\ &=10(n+b-3)\int_{\R^{n+1}_+\cap\pa B_1}\mathbf{div}_{S^n}(\theta_1^b\nabla_{S^n}u_e^\la)\frac{du_e^\la}{d\la}\\ &=-10(n+b-3)\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\nabla_{S^n}u_e^\la\nabla_{S^n}\frac{du_e^\la}{d\la}\\ &=-5(n+b-3)\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b[\nabla_{S^n}u_e^\la]^2.\\ \endaligned\ee Hence, \be\nonumber\aligned &\mathcal{L}:=L_1+L_2+L_3\\ &=-(n+b-3)\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[\la\frac{d}{d\la}(\nabla_{S^n}u_e^\la)^2\big]\\ &\quad-4(n+b-3)\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b[\nabla_{S^n}u_e^\la]^2\\ &\quad+4(n+b-3)\la\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b|\nabla_{S^n}\frac{du_e^\la}{d\la}|^2\\ &=-(n+b-3)\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-4}y^b\frac{d}{d\la}\big[\la^2|\nabla u_e|^2-\la^2|\pa_r u_e|^2\big]\\ &\quad-4(n+b-3)\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b\big[\la^2|\nabla u_e|^2-\la^2|\pa_r u_e|^2\big]. \endaligned\ee By rescaling, we have \be\label{8Lscaling}\aligned &\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b\big[\la\frac{d}{d\la}(\nabla_{S^n}u_e^\la)^2\big]\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-4}y^b\frac{d}{d\la}\big[\la^2|\nabla u_e|^2-\la^2|\pa_r u_e|^2\big];\\ &\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_1}\theta_1^b[\nabla_{S^n}u_e^\la]^2\\ &=\frac{d}{d\la}\int_{\R^{n+1}_+\cap\pa B_\la}\la^{2s\frac{p+1}{p-1}-n-5}y^b\big[\la^2|\nabla u_e|^2-\la^2|\pa_r u_e|^2\big]. \endaligned\ee \subsection{The term $\overline{E}_{d1}$} Notice that on the boundary $\pa B_1$, \be\nonumber\aligned &v_e^\la=\Delta_b u_e^\la\\ &=\la^2\frac{d^2u_e^\la}{d\la^2}+(n+b-\frac{4s}{p-1})\la\frac{du_e^\la}{d\la}+\frac{2s}{p-1}(1+\frac{2s}{p-1}-n-b)u_e^\la+\Delta_\theta u_e^\la\\ &:=\la^2\frac{d^2u_e^\la}{d\la^2}+\alpha_0\la\frac{du_e^\la}{d\la}+\beta_0 u_e^\la+\Delta_\theta u_e^\la.
\endaligned\ee Integrate by part, it follows that \be\label{8ed1ed1}\aligned &\int_{\pa B_1}y^b\la(\frac{dv_e^\la}{d\la})^2=\int_{\pa B_1}\Big(\la^5(\frac{d^3u_e^\la}{d\la^3})^2+(\alpha_0^2-2\alpha_0-2\beta_0-4)\la^3(\frac{d^2u_e^\la}{d\la^2})^2\\ &\quad\quad\quad\quad\quad\quad\quad\quad+(-\alpha_0^2+\beta_0^2+2\alpha_0+2\beta_0)\la(\frac{du_e^\la}{d\la})^2\Big)\\ &+\int_{\pa B_1}\Big(-2\la^3(\nabla_\theta \frac{d^2u_e^\la}{d\la^2})^2+(10-2\beta_0)\la(\nabla_\theta \frac{du_e^\la}{d\la})^2\Big) +\int_{\pa B_1}\la(\Delta_\theta \frac{du_e^\la}{d\la})^2\\ &+\frac{d}{d\la}\Big(\int_{\pa B_1}\sum_{0\leq i,j\leq 2,i+j\leq2}{c^1_{i,j}}\la^{i+j}\frac{d^iu_e^\la}{d\la^i}\frac{d^ju_e^\la}{d\la^j} +\sum_{0\leq s,t\leq 2,s+t\leq2}{c^2_{s,t}}\la^{s+t}\frac{d^su_e^\la}{d\la^s}\frac{d^tu_e^\la}{d\la^t}\Big), \endaligned\ee where $c^i_{i,j},c^2_{s,t}$ depending on $a,b$ hence on $p,n$. \vskip0.2in \noindent{\bf Proof of Theorem \ref{8monoid}}. Notice that the equation \eqref{8overlineE}, combine with the estimates on $\mathcal{I},\mathcal{J},\mathcal{K},\mathcal{L}$ and \eqref{8ed1ed1}, we obtain Theorem \ref{8monoid}.\hfill $\Box$ \section{Energy estimates and Blow down analysis } In this section, we do some energy estimates for the solutions of \eqref{8LE}, which are important when we perform a blow-down analysis in the next section. \subsection{Energy estimates} \bl Let $u$ be a solution of \eqref{8LE} and $u_e$ satisfy \eqref{8LEE}, then there exists a positive constant $C$ such that \be\label{8estimate1}\aligned &\int_{\pa\R^{n+1}_+}|u_e|^{p+1}\eta^6+\int_{\R^{n+1}_+}y^b|\nabla\Delta_b u_e|^2\eta^6\\ &\leq C\big[\int_{\R^{n+1}_+}y^b|\Delta_b u_e|^2\eta^4|\nabla\eta|^2+\int_{\R^{n+1}_+}y^b|\nabla u_e|^2 \frac{|\Delta_b\eta^6|^2}{\eta^6} +\int_{\R^{n+1}_+}y^bu_e^2\frac{|\nabla\Delta_b \eta^6|^2}{\eta^6}\\ &\quad +\int_{\R^{n+1}_+}y^b|\nabla u_e|^2\eta^2|\nabla \eta|^4 +\int_{\R^{n+1}_+}y^b|\nabla^2 u_e|^2\eta^4|\nabla\eta|^2\\ &\quad +\int_{\R^{n+1}_+}y^b|\nabla u_e|^2\eta^4|\nabla^2\eta|^2\big]. \endaligned\ee \el \bp Multiply the equation \eqref{8LEE} with $y^b u_e\eta^6$, where $\eta$ is a test function, we get that \be\aligned 0=&\int_{\R^{n+1}_+}y^b u_e\eta^6\Delta_b^3 u_e=\int_{\R^{n+1}_+} u_e\eta^6 \mathbf{div}(y^b\nabla\Delta_b^2u_e)\\ =&-\int_{\pa\R^{n+1}_+} u_e\eta^6 \frac{\pa}{\pa y}\Delta_b^2 u_e-\int_{\R^{n+1}_+}y^b\nabla(u_e\eta^6)\nabla\Delta_b^2u_e\\ =&C_{n,s}\int_{\pa\R^{n+1}_+}|u_e|^{p+1}\eta^6-\int_{\pa\R^{n+1}_+}y^b\frac{\pa (u_e\eta^6)}{\pa y}\Delta_b^2 u_e+\int_{\R^{n+1}_+}y^b\Delta_b(u_e\eta^6)\Delta_b^2u_e\\ =&C_{n,s}\int_{\pa\R^{n+1}_+}|u_e|^{p+1}\eta^6+\int_{\R^{n+1}_+}y^b\Delta_b(u_e\eta^6)\Delta_b^2u_e\\ =&C_{n,s}\int_{\pa\R^{n+1}_+}|u_e|^{p+1}\eta^6-\int_{\pa\R^{n+1}_+}\Delta_b(u_e\eta^6)y^b\frac{\pa \Delta_b u_e}{\pa y}\\ &-\int_{\R^{n+1}_+}y^b\nabla(\Delta_b(u_e\eta^6))\nabla\Delta_b u_e\\ =&C_{n,s}\int_{\pa\R^{n+1}_+}|u_e|^{p+1}\eta^6-\int_{\R^{n+1}_+}y^b\nabla(\Delta_b(u_e\eta^6))\nabla\Delta_b u_e.\\ \endaligned\ee Hence, we have \be\label{8eee} C_{n,s}\int_{\pa\R^{n+1}_+}|u_e|^{p+1}\eta^6=\int_{\R^{n+1}_+}y^b\nabla(\Delta_b(u_e\eta^6))\nabla\Delta_b u_e. 
\ee Since $\Delta_b(\xi\eta)=\eta\Delta_b\xi+\xi\Delta_b\eta+2\nabla\xi\nabla\eta$, we have \be\nonumber\aligned \Delta_b(u_e\eta^6)=\eta^6\Delta_b u_e+u_e\Delta_b\eta^6+12\eta^5\nabla u_e\nabla\eta, \endaligned\ee therefore, \be\label{8zhankai}\aligned \nabla\Delta_b(u_e\eta^6)\nabla\Delta_b u_e=&6\eta^5\Delta_b u_e\nabla\eta\nabla\Delta_b u_e+\eta^6(\nabla\Delta_b u_e)^2+\Delta_b\eta^6\nabla u_e\nabla\Delta_b u_e\\ &+u_e\nabla\Delta_b \eta^6\nabla\Delta_b u_e+60\eta^4(\nabla\eta\nabla\Delta_b u_e)(\nabla u_e\nabla\eta)\\ &+12\eta^5\sum_{i,j}\pa_{ij}u_e\pa_i\eta\pa_j \Delta_b u_e+12\eta^5\sum_{i,j}\pa_i u_e\pa_{ij}\eta\pa_j\Delta_b u_e. \endaligned\ee Here $\pa_j$ ($j=1,\dots,n,n+1$) denotes the derivative with respect to $x_1,\dots,x_n,y$, respectively. The term $|\nabla\Delta_b (u_e\eta^3)|^2$ below can be expanded in a similar way. On the other hand, by the stability condition, we have \be\label{8stable1}\aligned p\int_{\R^n}|u|^{p+1}\eta^6\leq\int_{\R^n}\int_{\R^n}\frac{\Big(u(x)\eta^3(x)-u(y)\eta^3(y)\Big)^2}{|x-y|^{n+2s}}= \frac{1}{C_{n,s}}\int_{\R^{n+1}_+}y^b|\nabla\Delta_b(u_e\eta^3)|^2. \endaligned\ee (Here we use that $u_e(x,0)=u(x)$; see Theorem \ref{ththYang} and \eqref{Yangmore}.)\\ Combining \eqref{8eee}, \eqref{8zhankai} and \eqref{8stable1}, we have \be\nonumber\aligned &\int_{\R^{n+1}_+}y^b|\nabla\Delta_b u_e|^2\eta^6\\ &\leq C\varepsilon\int_{\R^{n+1}_+}y^b(\nabla\Delta_b u_e)^2\eta^6+C(\varepsilon)\big[\int_{\R^{n+1}_+}y^b|\Delta_b u_e|^2\eta^4|\nabla\eta|^2\\ &\quad +\int_{\R^{n+1}_+}y^b|\nabla u_e|^2 (\frac{|\Delta_b\eta^6|^2}{\eta^6}+\eta^4|\nabla^2\eta|^2)\\ &\quad +\int_{\R^{n+1}_+}y^bu_e^2\frac{|\nabla\Delta_b \eta^6|^2}{\eta^6} +\int_{\R^{n+1}_+}y^b|\nabla u_e|^2\eta^2|\nabla \eta|^4 +\int_{\R^{n+1}_+}y^b|\nabla^2 u_e|^2\eta^4|\nabla\eta|^2\big].\\ \endaligned\ee We now select $\varepsilon$ so small that $C\varepsilon\leq\frac{1}{2}$ and absorb the first term on the right-hand side into the left-hand side. Combining again with \eqref{8eee} and \eqref{8zhankai}, we obtain the conclusion. \ep \bc\label{8hhhh} Let $u$ be a solution of \eqref{8LE} and let $u_e$ satisfy \eqref{8LEE}. Then \be\nonumber\aligned &\int_{\pa\R^{n+1}_+\cap B_{R/2}}|u_e|^{p+1}+\int_{\R^{n+1}_+\cap B_{R/2}}y^b(\nabla\Delta_b u_e)^2\\ &\leq C\big[R^{-6}\int_{\R^{n+1}_+\cap B_R}y^b u_e^2+R^{-4}\int_{\R^{n+1}_+\cap B_R}y^b|\nabla u_e|^2\\ &+R^{-2}\int_{\R^{n+1}_+\cap B_R}y^b(|\Delta_b u_e|^2+|\nabla^2 u_e|^2)\big]. \endaligned\ee \ec \bp We take $\eta=\xi^m$ with $m>1$ in the estimate \eqref{8estimate1} and obtain \be\nonumber\aligned &\int_{\pa\R^{n+1}_+}|u_e|^{p+1}\xi^{6m}+\int_{\R^{n+1}_+}y^b|\nabla\Delta_b u_e|^2\xi^{6m}\\ &\leq C[\int_{\R^{n+1}_+}y^b(|\Delta_b u_e|^2+|\nabla^2 u_e|^2)\xi^{6m-2}|\nabla\xi|^2\\ &\quad +\int_{\R^{n+1}_+}y^b|\nabla u_e|^2\xi^{6m-4}(|\nabla^2\xi|^2+|\nabla\xi|^4)+\int_{\R^{n+1}_+}y^b u_e^2\xi^{6m-6}|\nabla^3\xi|^2]. \endaligned\ee Choosing $\xi$ such that $\xi=1$ in $B_{R/2}$, $\xi=0$ in $B_R^C$ and $|\nabla \xi |\leq\frac{C}{R}$, we obtain the desired estimates. \ep \vskip0.1in \bl\label{8lemma3} Suppose that $u$ is a solution of \eqref{8LE} which is stable outside some ball $B_{R_0}\subset\R^n$. For $\eta\in C_c^\infty(\R^n\backslash\overline{B_{R_0}})$ and $x\in\R^n$, define \be \rho(x)=\int_{\R^n}\frac{(\eta(x)-\eta(y))^2}{|x-y|^{n+2s}}dy. \ee Then \be \int_{\R^n}|u|^{p+1}\eta^2dx+\int_{\R^n}\int_{\R^n}\frac{|u(x)\eta(x)-u(y)\eta(y)|^2}{|x-y|^{n+2s}}dxdy\leq C\int_{\R^n} u^2\rho dx. \ee \el \bl\label{8lemma4} Let $m>n/2$ and $x\in\R^n$.
Set \be\label{8420} \rho(x):=\int_{\R^n}\frac{(\eta(x)-\eta(y))^2}{|x-y|^{n+2s}}dy\;\;\hbox{where}\;\;\eta(x)=(1+|x|^2)^{-m/2}. \ee Then there is a constant $C=C(n,s,m)>0$ such that \be C^{-1}(1+|x|^2)^{-n/2-s}\leq \rho(x)\leq C(1+|x|^2)^{-n/2-s}. \ee \el \bc\label{8corr1} Suppose that $m>n/2$, $\eta$ is given by \eqref{8420} and $R>R_0>1$. Define \be\label{8rhor} \rho_R(x)=\int_{\R^n}\frac{(\eta_R(x)-\eta_R(y))^2}{|x-y|^{n+2s}}dy,\;\;\hbox{where}\;\;\eta_R(x)=\eta(x/R)\psi(x/R) \ee and where $\psi\in C^{\infty}(\R^n)$ is a standard cut-off function with $0\leq\psi\leq1$, $\psi=0$ on $B_1$ and $\psi=1$ on $\R^n\setminus B_2$. Then there exists a constant $C>0$ such that \be\nonumber \rho_R(x)\leq C\eta^2(x/R)|x|^{-(n+2s)}+R^{-2s}\rho(x/R). \ee \ec \bl\label{8lemma5} Suppose that $u$ is a solution of \eqref{8LE} which is stable outside a ball $B_{R_0}$. Consider $\rho_R$ which is defined in \eqref{8rhor} for $n/2<m<n/2+s(p+1)/2$. Then there exists a constant $C>0$ such that \be\nonumber \int_{\R^n}u^2\rho_R\leq C(\int_{B_{3R_0}}u^2\rho_R+R^{n-2s\frac{p+1}{p-1}}) \ee for any $R>3R_0$. \el The proofs of Lemma \ref{8lemma3}, Corollary \ref{8corr1}, Lemma \ref{8lemma4} and Lemma \ref{8lemma5} are similar to those of Lemmas 2.1, 2.2 and 2.4 in \cite{Wei0=1}, and we omit the details here. \bl\label{8estimate111} Suppose that $p\neq\frac{n+2s}{n-2s}$. Let $u$ be a solution of \eqref{8LE} which is stable outside a ball $B_{R_0}$ and let $u_e$ satisfy \eqref{8LEE}. Then there exists a constant $C>0$ such that \be\nonumber \int_{B_R}y^b u_e^2\leq CR^{n+6-2s\frac{p+1}{p-1}},\quad \int_{B_R}y^b |\nabla u_e|^2\leq CR^{n+4-2s\frac{p+1}{p-1}}, \ee \be\nonumber \int_{B_R}y^b (|\nabla^2 u_e|^2+|\Delta_b u_e|^2)\leq CR^{n+2-2s\frac{p+1}{p-1}}. \ee \el \bp Recall the Poisson formula for the fractional extension in the case $0<s<1$ (see \cite{Caffarelli2007}); the representation formula generalizes to any positive non-integer $s$. Therefore, \be\nonumber {u}_e(x,y)=C_{n,s}\int_{\R^n}u(z)\frac{y^{2s}}{(|x-z|^2+y^2)^{\frac{n+2s}{2}}}dz. \ee Then we have \be\label{8pa0} |{u}_e(x,y)|^2\leq C\int_{\R^n}u^2(z)\frac{y^{2s}}{(|x-z|^2+y^2)^{\frac{n+2s}{2}}}dz, \ee and \be\nonumber \pa_y{u}_e(x,y)=C_{n,s}\int_{\R^n}u(z)\big[\frac{2sy^{2s-1}}{(|x-z|^2+y^2)^{\frac{n+2s}{2}}} -\frac{(n+2s)y^{2s+1}}{(|x-z|^2+y^2)^{\frac{n+2s+2}{2}}}\big]dz, \ee and \be\nonumber \pa_{x_j}{u}_e(x,y)=-C_{n,s}\int_{\R^n}u(z)\frac{(n+2s)(x_j-z_j)y^{2s}}{(|x-z|^2+y^2)^{\frac{n+2s+2}{2}}}dz, \ee for $j=1,2,...,n$. Hence by H\"older's inequality we have \be\label{8pa1} |\nabla u_e(x,y)|^2\leq C\int_{\R^n}\frac{u^2(z)y^{2s-2}}{(|x-z|^2+y^2)^{\frac{n+2s}{2}}}dz. \ee By a straightforward calculation we have \be\nonumber\aligned \pa_{x_jx_j}{u}_e(x,y)=&C_{n,s}\int_{\R^n}u(z)\big[\frac{(n+2s)(n+2s+2)(x_j-z_j)^2y^{2s}}{(|x-z|^2+y^2)^{\frac{n+2s+4}{2}}}\\ &-\frac{(n+2s)y^{2s}}{(|x-z|^2+y^2)^{\frac{n+2s+2}{2}}}\big]dz, \endaligned\ee \be\nonumber\aligned \pa_{x_jy}{u}_e(x,y)=&C_{n,s}\int_{\R^n}u(z)\big[\frac{(n+2s)(n+2s+2)(x_j-z_j)y^{2s+1}}{(|x-z|^2+y^2)^{\frac{n+2s+4}{2}}}\\ &-\frac{2s(n+2s)(x_j-z_j)y^{2s-1}}{(|x-z|^2+y^2)^{\frac{n+2s+2}{2}}}\big]dz, \endaligned\ee and \be\nonumber\aligned \pa_{yy}{u}_e(x,y)=&C_{n,s}\int_{\R^n}u(z)\big[\frac{2s(2s-1)y^{2s-2}}{(|x-z|^2+y^2)^{\frac{n+2s}{2}}}\\ &-\frac{(n+2s)(4s+1)y^{2s}}{(|x-z|^2+y^2)^{\frac{n+2s+2}{2}}} +\frac{(n+2s)(n+2s+2)y^{2s+2}}{(|x-z|^2+y^2)^{\frac{n+2s+4}{2}}}\big].
\endaligned\ee Therefore, we have \be\nonumber |\nabla^2 u_e(x,y)|+|\Delta_b u_e(x,y)|\leq C\int_{\R^n}|u(z)|\frac{y^{2s-2}}{(|x-z|^2+y^2)^{\frac{n+2s}{2}}}dz. \ee Hence, \be\label{8pa2} |\nabla^2 u_e(x,y)|^2+|\Delta_b u_e(x,y)|^2\leq C\int_{\R^n}u^2(z)\frac{y^{2s-4}}{(|x-z|^2+y^2)^{\frac{n+2s}{2}}}dz. \ee Now we turn to estimating the following integral, which provides a unified way to obtain the desired estimates. Define \be\label{8ak}\aligned A_k:=&\int_{|x|\leq R,z\in\R^n}u^2(z)\big[\int_0^R\frac{y^{2k+1}}{(|x-z|^2+y^2)^{\frac{n+2s}{2}}}dy\big]dzdx\\ =&\frac{1}{2}\int_{|x|\leq R,z\in\R^n}u^2(z)\big[\int_0^{R^2}\frac{\alpha^k}{(|x-z|^2+\alpha)^{\frac{n+2s}{2}}}d\alpha\big]dzdx\\ \leq&\frac{1}{2}\int_{|x|\leq R,z\in\R^n}u^2(z)\big[\int_0^{R^2}\frac{d\alpha}{(|x-z|^2+\alpha)^{\frac{n+2s}{2}-k}}\big]dzdx\\ =&\frac{1}{n+2s-2k-2}\int_{|x|\leq R,z\in\R^n}u^2(z)\big[(|x-z|^2)^{k-\frac{n+2s}{2}+1}\\ &-(|x-z|^2+R^2)^{k-\frac{n+2s}{2}+1}\big]dzdx,\\ \endaligned\ee where $k=0,1,2$. We split the integral into the regions $|x-z|\leq 2R$ and $|x-z|>2R$. For the case $|x-z|\leq 2R$, we see that \be\nonumber\aligned &\int_{|x|\leq R,|x-z|\leq 2R}u^2(z)\Big[(|x-z|^2)^{k-\frac{n+2s}{2}+1}-(|x-z|^2+R^2)^{k-\frac{n+2s}{2}+1}\Big]\\ &\leq\int_{|x|\leq R,|x-z|\leq 2R}u^2(z)\Big[(|x-z|^2)^{k-\frac{n+2s}{2}+1}\Big]\\ &\leq C R^{2k-2s+2}\int_{|z|\leq 3R}u^2(z)dz\\ &\leq R^{2k-2s+2}\Big(\int_{B_{3R}}|u|^{p+1}\eta_R^2\Big)^{2/(p+1)}\Big(\int_{B_{3R}}\eta_R^{-4/(p-1)} \Big)^{(p-1)/(p+1)}\\ &\leq C R^{2k-2s+2}R^{n\frac{p-1}{p+1}}\Big(\int_{B_{3R}}u^2(z)\rho_R(z)\Big)^{2/(p+1)}\\ &\leq C R^{n+2k+2-2s\frac{p+1}{p-1}}. \endaligned \ee Here we have used Lemmas \ref{8lemma3} and \ref{8lemma5}. For the case $|x-z|>2R$, by the mean value theorem, we have \be\nonumber\aligned &\int_{|x|\leq R,|x-z|> 2R}u^2(z)\Big[(|x-z|^2)^{k-\frac{n+2s}{2}+1}-(|x-z|^2+R^2)^{k-\frac{n+2s}{2}+1}\Big]\\ &\leq C R^2\int_{|x|\leq R,|x-z|> 2R}u^2(z)\Big[(|x-z|^2)^{k-\frac{n+2s}{2}}\Big]\\ &\leq C R^{n+2}\int_{|z|\geq R} u^2(z)|z|^{2k-n-2s}dz\\ &\leq C R^{n+2}\Big[\int_{|z|\geq R} |u(z)|^{p+1}\Big]^{2/(p+1)}\Big(\int_{|z|\geq R}|z|^{(2k-n-2s)\frac{p+1}{p-1}}\Big)^{(p-1)/(p+1)}\\ &\leq C R^{n+2k+2-2s\frac{p+1}{p-1}}. \endaligned \ee Here we have again used Lemma \ref{8lemma3}. Hence, we obtain that \be\label{8akak} A_k\leq C R^{n+2k+2-2s\frac{p+1}{p-1}}, \ee where $C=C(n,s,p)$ is independent of $R$. Now, combining \eqref{8pa0}, \eqref{8pa1} and \eqref{8pa2}, and recalling that $b=5-2s$, we have \be\nonumber\aligned &\int_{B_R}y^b u_e^2dxdy\leq C A_2, \quad \int_{B_R}y^b|\nabla u_e|^2dxdy\leq C A_1,\\ &\int_{B_R}y^b\big(|\nabla^2 u_e|^2+|\Delta_b u_e|^2\big)dxdy\leq C A_0. \endaligned \ee Applying \eqref{8akak}, we finish the proof. \ep Combining Corollary \ref{8hhhh} and Lemma \ref{8estimate111}, we obtain the following lemma. \bl\label{8lemmaE} Let $u$ be a solution of \eqref{8LE} which is stable outside a ball $B_{R_0}$ and let $u_e$ satisfy \eqref{8LEE}. Then there exists a positive constant $C$ such that \be\nonumber\aligned &\int_{\pa\R^{n+1}_+\cap B_R}|u_e|^{p+1}+R^{-6}\int_{\R^{n+1}_+\cap B_R}y^b |u_e|^2 +R^{-4}\int_{\R^{n+1}_+\cap B_R}y^b |\nabla u_e|^2\\ &+R^{-2}\int_{\R^{n+1}_+\cap B_R}y^b\big(|\Delta_b u_e|^2+|\nabla^2 u_e|^2\big) +\int_{\R^{n+1}_+\cap B_R}y^b|\nabla\Delta_b u_e|^2 \leq C R^{n-2s\frac{p+1}{p-1}}. \endaligned\ee \el \newpage \subsection{Blow-down analysis and the proof of Theorem \ref{8Liouvillec}} {\bf The proof of Theorem \ref{8Liouvillec}.} Suppose that $u$ is a solution of \eqref{8LE} which is stable outside the ball of radius $R_0$ and suppose that $u_e$ satisfies \eqref{8LEE}.
In the subcritical case, i.e., $1<p<p_s(n)$, Lemma \ref{8lemma3} implies that $u\in \dot{H}^s(\R^n)\cap L^{p+1}(\R^n)$. Multiplying \eqref{8LE} by $u$ and integrating, we obtain \be\label{841} \int_{\R^n}|u|^{p+1}=\|u\|^2_{\dot{H}^s(\R^n)}. \ee Multiplying \eqref{8LE} by $u^\la(x)=u(\la x)$ and integrating yields \be\nonumber \int_{\R^n}|u|^{p-1}u\, u^\la=\int_{\R^n} (-\Delta)^{s/2}u\, (-\Delta)^{s/2}u^\la=\la^s\int_{\R^n}w\, w^\la, \ee where $w=(-\Delta)^{s/2}u$ and $w^a(z):=w(az)$. Following the ideas provided in \cite{Xavier2015,Xavier2014} and using the change of variable $z=\sqrt{\la}x$, we can get the following Pohozaev identity \be\nonumber\aligned -\frac{n}{p+1}\int_{\R^n}|u|^{p+1}&=\frac{2s-n}{2}\int_{\R^n}|w|^{2} +\frac{d}{d\la}\int_{\R^n}w^{\sqrt{\la}}w^{1/\sqrt{\la}}dz \Big|_{\la=1}\\ &=\frac{2s-n}{2}\|u\|^2_{\dot{H}^s(\R^n)}. \endaligned\ee Here the last term vanishes, since $\frac{d}{d\la}\big(w(\sqrt{\la}z)\,w(z/\sqrt{\la})\big)\big|_{\la=1}=\frac{1}{2}(z\cdot\nabla w)\,w-\frac{1}{2}w\,(z\cdot\nabla w)=0$ pointwise. Hence we arrive at the Pohozaev identity \be\nonumber\aligned \frac{n}{p+1}\int_{\R^n}|u|^{p+1}=\frac{n-2s}{2}\|u\|^2_{\dot{H}^s(\R^n)}. \endaligned\ee For $p<p_s(n)$, we have $\frac{n}{p+1}\neq\frac{n-2s}{2}$, so this identity together with \eqref{841} proves that $u\equiv0$. For $p=p_s(n)$, the above means that the energy is finite. Further, since $u\in \dot{H}^s(\R^n)$, applying the stability inequality with the test function $\varphi=u\,\eta^2(\frac{x}{R})$ (where $\eta$ is a cut-off function) and letting $R\rightarrow+\infty$, we get \be\nonumber\aligned p\int_{\R^n}|u|^{p+1}\leq\|u\|^2_{\dot{H}^s(\R^n)}. \endaligned\ee Since $p>1$, this together with \eqref{841} gives $u\equiv0$. \vskip0.1in Now we consider the supercritical case $p>\frac{n+2s}{n-2s}$; we proceed in several steps. \vskip0.1in \noindent{\bf Step 1.} $\lim_{\la\rightarrow\infty} E(u_e,0,\la)<\infty$.\\ From Theorem \ref{8Monotone} we know that $E$ is nondecreasing in $\la$, so we only need to show that $E(u_e,0,\la)$ is bounded. Note that \be\nonumber E(u_e,0,\la)\leq\frac{1}{\la}\int_{\la}^{2\la} E(u_e,0,t)dt\leq\frac{1}{\la^2}\int_\la^{2\la}\int_{t}^{t+\la}E(u_e,0,\gamma)d\gamma dt. \ee From Lemma \ref{8lemmaE}, we have that \be\nonumber\aligned \frac{1}{\la^2}&\int_\la^{2\la}\int_{t}^{t+\la}\gamma^{2s\frac{p+1}{p-1}-n}\big[\int_{\R^{n+1}_+\cap B_\gamma}\frac{1}{2}y^b|\nabla\Delta_b u_e|^2dydx\\ &\quad\quad\quad\quad\quad\quad -\frac{C_{n,s}}{p+1}\int_{\pa\R^{n+1}_+\cap B_\gamma}|u_e|^{p+1}dx\big]d\gamma dt\leq C, \endaligned\ee where $C>0$ is independent of $\la$.
\be\label{8type1}\aligned \frac{1}{\la^2}&\int_\la^{2\la}\int_{t}^{t+\la}\int_{\R^{n+1}_+\cap\pa B_\gamma}\gamma^{2s\frac{p+1}{p-1}-n-5}y^b\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1)(\frac{2s}{p-1}-2)u_e\\ &+\frac{6s}{p-1}(\frac{2s}{p-1}-1)\gamma\pa_r u_e +\frac{6s}{p-1}\gamma^2 \pa_{rr} u_e+\gamma^3\pa_{rrr} u_e\big]\\ &\quad\quad\big[\frac{2s}{p-1}(\frac{2s}{p-1}-1) u_e+\frac{4s}{p-1}\gamma\pa_r u_e+\gamma^2\pa_{rr}u_e\big]\\ &\leq C\frac{1}{\la^2}\int_\la^{2\la}\int_{t}^{t+\la}t^{2s\frac{p+1}{p-1}-n-5}\int_{\R^{n+1}_+\cap\pa B_\gamma}\\ &\quad y^b\big[u_e^2+\gamma^2(\pa_r u_e)^2+\gamma^4(\pa_{rr}u_e)^2+\gamma^6(\pa_{rrr}u_e)^2\big]\\ &\leq C\frac{1}{\la^2}\int_\la^{2\la}t^{2s\frac{p+1}{p-1}-n-5}\int_{\R^{n+1}_+\cap B_{3\la}}\\ &\quad y^b\big[u_e^2+\la^2(\pa_r u_e)^2+\la^4(\pa_{rr}u_e)^2+\la^6(\pa_{rrr}u_e)^2\big]\\ &\leq C \la^{n-2s\frac{p+1}{p-1}+6}\frac{1}{\la^2}\int_\la^{2\la}t^{2s\frac{p+1}{p-1}-n-5}dt\\ &\leq C \endaligned\ee and \be\label{8type2}\aligned &\Big|\frac{1}{\la^2}\int_\la^{2\la}\int_{t}^{t+\la}\int_{\R^{n+1}_+\cap\pa B_\gamma}\gamma^{2s\frac{p+1}{p-1}-n-4}y^b\frac{d}{d\gamma}(\gamma^2\Delta_b u_e-\gamma^2\pa_{rr}u_e-(n+b)\gamma\pa_r u_e)^2\Big|\\ &\leq\frac{1}{\la^2}\int_\la^{2\la}t^{2s\frac{p+1}{p-1}-n-5}\int_{t}^{t+\la}\int_{\R^{n+1}_+\cap\pa B_\gamma}y^b\big[2\gamma^2\Delta_b u_e-2\gamma^2\pa_{rr}u_e-(n+b)\gamma\pa_r u_e\big]\\ &\quad \big[\gamma^2\Delta_b u_e-\gamma^2\pa_{rr}u_e-(n+b)\gamma\pa_r u_e\big]\\ &\leq\frac{1}{\la^2}\int_\la^{2\la}t^{2s\frac{p+1}{p-1}-n-5}\int_{\R^{n+1}_+\cap B_{3\la}}y^b[2\la^2\Delta_b u_e-2\la^2\pa_{rr}u_e-(n+b)\la\pa_r u_e]\\ &\quad\big[\la^2\Delta_b u_e-\la^2\pa_{rr}u_e-(n+b)\la\pa_r u_e\big]\\ &\leq C\frac{1}{\la^2}\int_\la^{2\la}t^{2s\frac{p+1}{p-1}-n-5}\int_{\R^{n+1}_+\cap B_{3\la}}y^b \big[\la^4(\Delta_b u_e)^2+\la^4(\pa_{rr}u_e)^2+\la^2(\pa_r u_e)^2\big]\\ &\leq C \la^{n-2s\frac{p+1}{p-1}+6}\frac{1}{\la^2}\int_\la^{2\la}t^{2s\frac{p+1}{p-1}-n-5}dt\\ &\leq C. \endaligned\ee \vskip0.1in Integrating by parts and using the scaling identities of Section 3, for example \eqref{8Iscaling}, \eqref{8Jscaling}, \eqref{8Kscaling} and \eqref{8Lscaling}, we can treat the remaining terms in a similar way as in the estimates \eqref{8type1} and \eqref{8type2}. \vskip0.12in \noindent{\bf Step 2.} There exists a sequence $\la_i\rightarrow\infty$ such that $(u_e^{\la_i})$ converges weakly to a function $u_e^\infty$ in $H^3_{loc}(\R^{n+1}_+;y^bdxdy)$; this is a direct consequence of Lemma \ref{8lemmaE}. \vskip0.12in \noindent{\bf Step 3.} {\bf The function $u_e^\infty$ is homogeneous.} Due to the scaling invariance of $E$ (i.e., $E(u_e,0,R\la)=E(u_e^{\la},0,R)$) and the monotonicity formula, for any given $R_2>R_1>0$, we see that \be\nonumber\aligned 0=&\lim_{i\rightarrow\infty}\big(E(u_e,0,R_2\la_i)-E(u_e,0,R_1\la_i)\big)\\ =&\lim_{i\rightarrow\infty}\big(E(u_e^{\la_i},0,R_2)-E(u_e^{\la_i},0,R_1)\big)\\ \geq&\liminf_{i\rightarrow\infty}\int_{(B_{R_2}\setminus B_{R_1})\cap \R^{n+1}_+} y^b r^{2s\frac{p+1}{p-1}-n-6}\big(\frac{2s}{p-1}{u_e^{\la_i}}+r\frac{\pa u_e^{\la_i}}{\pa r}\big)^2dydx\\ \geq&\int_{(B_{R_2}\setminus B_{R_1})\cap \R^{n+1}_+} y^b r^{2s\frac{p+1}{p-1}-n-6}\big(\frac{2s}{p-1}{u_e^{\infty}}+r\frac{\pa u_e^{\infty}}{\pa r}\big)^2dydx.\\ \endaligned\ee In the last inequality we have used the weak convergence of the sequence $(u_e^{\la_i})$ to the function $u_e^{\infty}$ in $H^3_{loc}(\R^{n+1}_+;y^bdxdy)$.
This implies that \be\nonumber \frac{2s}{p-1}\frac{u_e^{\infty}}{r}+\frac{\pa u_e^{\infty}}{\pa r}=0\;\;\hbox{a.e.}\;\;\hbox{in}\;\;\R^{n+1}_+. \ee Integrating this equation along rays gives $u_e^{\infty}(X)=|X|^{-\frac{2s}{p-1}}\,u_e^{\infty}\big(\frac{X}{|X|}\big)$ for a.e. $X\in\R^{n+1}_+$; that is, $u_e^\infty$ is homogeneous of degree $-\frac{2s}{p-1}$. \vskip0.12in \noindent{\bf Step 4}. { \bf $u_e^\infty=0$.} This is a direct consequence of Theorem $3.1$ in \cite{Wei1=2}. \vskip0.12in \noindent{\bf Step 5}. $(u_e^{\la_i})$ converges strongly to zero in $H^3(B_R\setminus B_\varepsilon;y^bdxdy)$ and $(u_e^{\la_i})$ converges strongly in $L^{p+1}(\pa\R^{n+1}_+\cap (B_R\setminus B_\varepsilon))$ for all $R>\varepsilon>0$. These are consequences of Lemma \ref{8lemmaE} and Theorem 1.5 in \cite{Fabes1982}. \vskip0.12in \noindent{\bf Step 6}. $u_e=0$. Note that \be\nonumber\aligned \overline{E}(u_e,\la)=&\overline{E}(u_e^\la,1)\\ =&\frac{1}{2}\int_{\R^{n+1}_+\cap B_1}y^b|\nabla\Delta_b u_e^\la|^2dxdy-\frac{C_{n,s}}{p+1}\int_{\pa\R^{n+1}_+\cap B_1}|u_e^\la|^{p+1}dx\\ =&\frac{1}{2}\int_{\R^{n+1}_+\cap B_\varepsilon}y^b|\nabla\Delta_b u_e^\la|^2dxdy-\frac{C_{n,s}}{p+1}\int_{\pa\R^{n+1}_+\cap B_\varepsilon}|u_e^\la|^{p+1}dx\\ &+\frac{1}{2}\int_{\R^{n+1}_+\cap( B_1\setminus B_\varepsilon)}y^b|\nabla\Delta_b u_e^\la|^2dxdy-\frac{C_{n,s}}{p+1}\int_{\pa\R^{n+1}_+\cap( B_1\setminus B_\varepsilon)}|u_e^\la|^{p+1}dx\\ =&\varepsilon^{n-2s\frac{p+1}{p-1}}\overline{E}(u_e,\la\varepsilon)+\frac{1}{2}\int_{\R^{n+1}_+\cap( B_1\setminus B_\varepsilon)}y^b|\nabla\Delta_b u_e^\la|^2dxdy\\ &-\frac{C_{n,s}}{p+1}\int_{\pa\R^{n+1}_+\cap( B_1\setminus B_\varepsilon)}|u_e^\la|^{p+1}dx.\\ \endaligned\ee Letting $\la\rightarrow+\infty$ and then $\varepsilon\rightarrow0$, we deduce that $\lim_{\la\rightarrow+\infty}\overline{E}(u_e,\la)=0$. Using the monotonicity of $E$, \be\aligned E(u_e,\la)&\leq\frac{1}{\la}\int_{\la}^{2\la} E(t)dt\leq \sup_{[\la,2\la]} \overline{E}+C\frac{1}{\la}\int_\la^{2\la}[E-\overline{E}]\\ &\leq\sup_{[\la,2\la]} \overline{E}+C\frac{1}{\la} \int_\la^{2\la}t^{2s\frac{p+1}{p-1}-n-5}\int_{\R^{n+1}_+\cap\pa B_t}y^b \big[(u_e)^2+t^2|\nabla u_e|^2\\ &\quad +t^4(|\Delta_b u_e|^2+|\nabla^2 u_e|^2)+t^6|\nabla\Delta_b u_e|^2\big]dt\\ &\leq\sup_{[\la,2\la]} \overline{E}+C\frac{1}{\la}\la^{2s\frac{p+1}{p-1}-n-5}\int_{\R^{n+1}_+\cap( B_{2\la}\backslash B_\la)}y^b \big[(u_e)^2+\la^2|\nabla u_e|^2\\ &\quad +\la^4(|\Delta_b u_e|^2+|\nabla^2 u_e|^2)+\la^6|\nabla\Delta_b u_e|^2\big]\\ &=\sup_{[\la,2\la]} \overline{E}+C\int_{\R^{n+1}_+\cap( B_{2}\backslash B_1)}y^b \big[(u_e^\la)^2+|\nabla u_e^\la|^2\\ &\quad +|\Delta_b u_e^\la|^2+|\nabla^2 u_e^\la|^2+|\nabla\Delta_b u_e^\la|^2\big]\\ \endaligned\ee and so $\lim_{\la\rightarrow\infty} E(u_e,\la)=0$. Since $u$ is smooth, we also have $E(u_e,0)=0$. Since $E$ is monotone, $E\equiv0$, and so $u_e$ must be homogeneous, a contradiction unless $u_e\equiv0$. \section{Algebraic analysis: The proof of Theorem \ref{8Monotonem}} Let $k:=\frac{2s}{p-1}$ and $m:=n-2s$. By a direct calculation, we obtain that \be\label{8A1A2}\aligned A_1&= -10k^2+10mk-m^2+12m+25,\\ A_2&=3k^4-6mk^3+(3m^2-12m-30)k^2+(12m^2+30m)k+9m^2+36m+27,\\ B_1&=-6k^2+6mk+12m+30. \endaligned\ee Notice that our supercritical condition $p>\frac{n+2s}{n-2s}$ is equivalent to $0<k<\frac{n-2s}{2}=\frac{m}{2}$. Next, we have the following lemma which yields the sign of $A_2$ and $B_1$. \bl\label{8A2-0} If $p>\frac{n+2s}{n-2s}$, then $A_2>0$ and $B_1>0$.
\el \bp From \eqref{8A1A2}, we derive that \be\label{8A2} A_2=3(k+1)(k+3)(k-(m+1))(k-(m+3)), \ee and the roots of $B_1=0$ are \be\nonumber \frac{1}{2}m-\frac{1}{2}\sqrt{m^2+8m+20},\quad \frac{1}{2}m+\frac{1}{2}\sqrt{m^2+8m+20}. \ee Recalling that $p>\frac{n+2s}{n-2s}$ is equivalent to $0<k<\frac{m}{2}$, we see that on this interval the factors $k+1$ and $k+3$ are positive while $k-(m+1)$ and $k-(m+3)$ are negative, so $A_2>0$; moreover, the interval $(0,\frac{m}{2})$ lies strictly between the two roots of $B_1$, so $B_1>0$. \ep To prove the monotonicity formula, we establish the following inequality: there exist real numbers $c_{i,j}$ and a positive real number $\epsilon$ such that \be\aligned \label{8keyinequality} &3\la^5(\frac{d^3u^\la}{d\la^3})^2+A_{1}\la^3\Big(\frac{d^2u^\la}{d\la^2}\Big)^2+A_{2}\la(\frac{du^\la}{d\la})^2\\ &\geq \epsilon\la(\frac{du^\la}{d\la})^2+\frac{d}{d\la} \Big(\sum_{0\leq i,j\leq2}c_{i,j}\la^{i+j}\frac{d^iu^\la}{d\la^i}\frac{d^ju^\la}{d\la^j} \Big). \endaligned\ee To deal with the remaining dimensions, we employ the second idea: we look for nonnegative constants $d_1, d_2$ and constants $c_1, c_2$ such that the following Jordan-form decomposition holds: \be\label{8quad}\aligned &3\la^5(f''')^2+A_1\la^3(f'')^2+A_2\la(f')^2=3\la(\la^2f'''+c_1\la f'')^2+d_1\la(\la f''+c_2 f')^2\\ &\;\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+d_2\la(f')^2 +\frac{d}{d\la}(\sum_{i,j}e_{i,j}\la^{i+j}f^{(i)}f^{(j)}), \endaligned\ee where the unknown constants are to be determined. \bl \label{860011} Let $p>\frac{n+2s}{n-2s}$ and let $A_1$ satisfy \be \label{8A11} A_1+12 >0. \ee Then there exist nonnegative numbers $d_1,d_2$ and real numbers $c_1,c_2,e_{i,j}$ such that the decomposition (\ref{8quad}) holds. \el \bp Since $$4\la^4f'''f''=\frac{d}{d\la}(2\la^4(f'')^2)-8\la^3(f'')^2$$ and $$2\la^2f''f'=\frac{d}{d\la}(\la^2(f')^2)-2\la(f')^2,$$ by comparing the coefficients of $\lambda^3(f'')^2$ and $\lambda (f')^2,$ we have that \be\nonumber d_1=A_1-3c_1^2+12c_1, \quad d_2=A_2-(c_2^2-2c_2)(A_1-3c_1^2+12c_1). \ee In particular, $$\max_{c_1}{d_1(c_1)}=A_1+12 \hbox{ and the critical point is } c_1=2. $$ Since $A_2>0$, we select $c_1=2$ and $c_2=0$. Then $d_1=A_1+12>0$ by \eqref{8A11} and $d_2=A_2>0$, which gives the conclusion. \ep We conclude from Lemma \ref{860011} that if $A_1+12>0$, then \eqref{8keyinequality} holds. This implies that \eqref{8keyinequality} holds when either $m<6+\sqrt{73}$ and $p>\frac{n+2s}{n-2s}$, or $m\geq 6+\sqrt{73}$ and \be \label{pee} \frac{n+2s}{n-2s}<p<\frac{5m+20s-\sqrt{15m^2+120m+370}}{5m-\sqrt{15m^2+120m+370}}. \ee \vskip0.2in Let \be\label{727=1} p_m(n):=\begin{cases} +\infty\;\;\;\;\;\;\;\;&\hbox{if}\; n<2s+6+\sqrt{73},\\ \frac{5n+10s-\sqrt{15(n-2s)^2+120(n-2s)+370}}{5n-10s-\sqrt{15(n-2s)^2+120(n-2s)+370}}&\hbox{if}\; n\geq2s+6+\sqrt{73}.\\ \end{cases} \ee Combining all the lemmas of this section, we obtain Theorem \ref{8Monotonem}. \vskip0.2in \vskip0.16in Now we proceed to prove Theorem \ref{8Monotone}. From Corollary 1.1 of \cite{LWZ}, we know that if $n>2s,s>0,p>\frac{n+2s}{n-2s}$, then there exists $n_0(s)$, where $\frac{1}{\sqrt{n}}<a_{n,s}<\frac{1}{2}\frac{n-2s}{\sqrt{n}}+\frac{1}{\sqrt{n}}$, such that the inequality \eqref{8gamma} always holds whenever $n\leq n_0(s)$, while for $n> n_0(s)$ the inequality \eqref{8gamma} holds if and only if $$p<p_2:=\frac{n+2s-2-2a_{n,s}\sqrt{n}}{n-2s-2-2a_{n,s}\sqrt{n}},$$ where $n_0(s)$ is the largest $n$ satisfying $n-2s-2-2a_{n,s}\sqrt{n}\leq0$.
In particular, $ \frac{n+2s}{n-2s}<\frac{n+2s-4}{n-2s-4}<p_2<+\infty.$ Therefore, we introduce \be\label{8pcn88} p_c(n):=\begin{cases} +\infty\;\;\;\;\;\;\;\;&\hbox{if}\;\;\;\;\;\;\;\; n\leq n_0(s),\\ \frac{n+2s-2-2a_{n,s}\sqrt{n}}{n-2s-2-2a_{n,s}\sqrt{n}}&\hbox{if}\;\;\;\;\;\;\;\;n> n_0(s).\\ \end{cases} \ee From \cite{LWZ=00}, we have the sharp estimate $n_0(s)<2s+8.998$ for $2<s<3$; hence \be\label{8n0s} n_0(s)< 2s+8.998<2s+6+\sqrt{73}\simeq2s+14.544. \ee On the other hand, using the sharp estimate $a_{n,s}<1$ from \cite{LWZ=00}, we have \be\label{8compare}\aligned \frac{5n+10s-\sqrt{15(n-2s)^2+120(n-2s)+370}}{5n-10s-\sqrt{15(n-2s)^2+120(n-2s)+370}}>\frac{n+2s-2-2a_{n,s}\sqrt{n}}{n-2s-2-2a_{n,s}\sqrt{n}} \endaligned\ee provided that $s\in(2,3)$ and that \be\label{728-mm}\aligned 225m^4-720m^3-17244m^2-29088m+7236>0, \quad\hbox{where } m=n-2s. \endaligned\ee Inequality \eqref{728-mm} holds whenever $m>11.12$, that is, $n>2s+11.12$. Combining this with \eqref{8n0s}, we obtain that $p_c(n)<p_m(n)$. Therefore we obtain Theorem \ref{8Monotone}. \hfill $\Box$ \newpage
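\br The numerical constants quoted above are easy to double-check. The following short script is an illustration added for the reader's convenience, not part of the proof; it only evaluates the formulas already displayed, namely the quartic in \eqref{728-mm} and the threshold $6+\sqrt{73}$ appearing in \eqref{727=1}:
\begin{verbatim}
import numpy as np

# Illustration only: evaluate the quantities used above.
# q(m) = 225 m^4 - 720 m^3 - 17244 m^2 - 29088 m + 7236, cf. (728-mm)
q = lambda m: 225*m**4 - 720*m**3 - 17244*m**2 - 29088*m + 7236
print(q(11.11), q(11.12))   # sign change: approx -3.8e3 -> +1.8e3
print(6 + np.sqrt(73))      # approx 14.544, the threshold in p_m(n)
\end{verbatim}
\er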
\subsection{#2}{#4}\par\hfill{\arx{#1}}\par\hfill\emph{#3}} \nc{\arXiv}[3]{\subsection{#2} {#3}, \arx{#1}\par\hfill} \nc{\DOIpaper}[5]{\subsection{#2}{#4}\par\hfill{\url{http://dx.doi.org/#1}}\par\hfill\emph{#3}} \nc{\AMSPaper}[5]{\subsection{#3}{#5}\par\hfill{\url{#1}}\par\hfill\emph{#4}\par\hfill{#2}} \nc{\nAMSPaper}[4]{\subsection{#2} {#3}, {#4}, \url{#1}} \nc{\AMS}[3]{\subsection{#1} {#2}, \url{#3}} \nc{\SPMBul}{\textbf{$\mathcal{SPM}$ Bulletin}} \nc{\BulEnd}{\par\bigskip\noindent Boaz Tsaban\\ \emph{E-mail}: tsaban@math.biu.ac.il\\ \emph{URL}: http://www.cs.biu.ac.il/\~{}tsaban} \nc{\arx}[1]{\url{http://arxiv.org/abs/#1}} \nc{\probissue}{\emph{Problem of the issue}} \title[$\mathcal{SPM}$ Bulletin \textbf{\issuenumber} (\issuemonth{} \issueyear)]{$\mathcal{SPM}$ Bulletin\\[0.5cm] Issue \issuenumber{} (\issuemonth{} \issueyear{})} \begin{document} \maketitle \section{Editor's note} In addition to the beautiful research announcements, this issue includes an announcement of a festive conference dedicated to Selection Principles, celebrating the 60th birthday of Marion Scheepers. This will be an opportunity for all active researchers in the field to exchange new methods and results, and for researchers looking for new, promising research directions to consider this topic in a thorough manner. \medskip With best regards, \by{Boaz Tsaban}{tsaban@math.biu.ac.il} \hfill \texttt{http://www.cs.biu.ac.il/\~{}tsaban} \section{New conference: Frontiers of Selection Principles} Dear Friends and Colleagues, \medskip We are glad to invite you to the conference Frontiers of Selection Principles, celebrating the 60th birthday of Marion Scheepers. This festive conference will take place at the Dewajtis Campus, Bielanski forest, Warsaw, 21.8--1.9, 2017: 21.8--25.8 tutorial, 26.8 excursion, 27.8--1.9 conference. (This is immediately after the Logic Colloquium and immediately before the Bedlewo meeting on set theoretic topology and analysis.) \emph{Selection principles} connect topology, set theory, and functional analysis and make it possible to transport and apply methods from each of these fields to the other ones. The area is now one of the most active streams of research within set theory and general topology. This conference will be fully dedicated to selection principles and their applications. It will begin with a one-week tutorial, aimed mainly at students and researchers with no prior knowledge of selection principles. This tutorial will provide an overview of the field, and a detailed introduction to its main methods. Participants with no prior knowledge attending the tutorial will thus fully benefit from the lectures of the second week, and will be able to consider possible directions for research in the field. The second conference week will consist of invited lectures, by leading experts, on results in the frontiers of selection principles. We recommend that students and participants with expertise in other disciplines attend both weeks of the conference. The expected accommodation cost is approximately 20EUR per night (full board: 30EUR). Registration fee: Early registration: 50EUR. Normal registration: 80EUR. Registration fees will help reduce the costs for some students and early-career researchers who would otherwise not be able to attend this meeting. Discount possibilities will be provided in future emails.
\medskip \noindent Tentative list of conference speakers: \begin{center} \begin{tabular}{lll} Leandro Aurichi & Liljana Babinkostova & Taras Banakh\\ Angello Bella* & Daniel Bernal Santos & Maddalena Bonanzinga*\\ Lev Bukovsky & Steven Clontz* & Samuel Da-Silva*\\ Rodrigo Dias & Rafa\l{} Filipow & David Gauld\\ Ondrej Kalenda & Ljubi\v{s}a Ko\v{c}inac* & Adam Krawczyk\\ Adam Kwela & Andrzej Nowik & Selma \"{O}zca\v{g}*\\ Yinhe Peng & Szymon Plewik & Robert Rałowski \\ Masami Sakai & Marion Scheepers & Paul Szeptycki* \\ Piotr Szewczak & Franklin Tall & Secil Tokg\"oz \\ Boaz Tsaban & Tomasz Weiss & Lyubomyr Zdomskyy\\ Shuguo Zhang & Ondrej Zindulka & Szymon Zeberski \end{tabular} \end{center} \noindent (* to be confirmed) \medskip Subject to time constraints, students and other participants may have an opportunity to contribute a short lecture on a topic of selection principles. \noindent\textbf{Special request:} Since a part of this workshop is especially accessible to students, we would appreciate your forwarding this message to your students and to other students who may be interested in attending this conference. \noindent The conference is organized by Cardinal Stefan Wyszyński University in Warsaw. \noindent\textbf{Important:} This is the only message distributed widely. Please email Piotr Szewczak (p.szewczak@wp.pl) to be included in the mailing list for details and updates. \medskip On behalf of the organizing committee, \medskip \noindent Piotr Szewczak (Cardinal Stefan Wyszynski University and Bar-Ilan University)\\ \noindent Boaz Tsaban (Bar-Ilan University)\\ \noindent Lyubomyr Zdomskyy (Kurt Gödel Research Center) \section{Long announcements} \arXivl{1607.04688} {On the definability of Menger spaces which are not $\sigma$-compact} {Franklin D. Tall and Secil Tokgoz} {Hurewicz proved completely metrizable Menger spaces are $\sigma$-compact. We extend this to \v{C}ech-complete Menger spaces and consistently to projective Menger metrizable spaces. On the other hand, it is consistent that there is a co-analytic Menger space that is not $\sigma$-compact.} \arXivl{1607.04781} {Definable versions of Menger's conjecture} {Franklin D. Tall} {Menger's conjecture that Menger spaces are $\sigma$-compact is false; it is true for analytic subspaces of Polish spaces and undecidable for more complex definable subspaces of Polish spaces. For non-metrizable spaces, analytic Menger spaces are $\sigma$-compact, but Menger continuous images of co-analytic spaces need not be. The general co-analytic case is still open, but many special cases are undecidable, in particular, Menger topological groups. We also prove that if there is a Michael space, then productively Lindelof Cech-complete spaces are $\sigma$-compact. We also give numerous characterizations of proper K-Lusin spaces. Our methods include the Axiom of Co-analytic Determinacy, non-metrizable descriptive set theory, and Arhangel'skii's work on generalized metric spaces.} \arXivl{1608.03546} {Discrete subsets in topological groups and countable extremally disconnected groups} {Evgenii Reznichenko and Ol'ga Sipacheva} {It is proved that any countable topological group in which the filter of neighborhoods of the identity element is not rapid contains a discrete set with precisely one nonisolated point and that the existence of a nondiscrete countable extremally disconnected group implies the existence of a rapid ultrafilter.} \arXivl{1608.06210} {$\sigma$-Ideals and outer measures on the real line} {S. Garc\'ia-Ferreira, A. H. Tomita, and Y. F.
Ortiz-Castillo} {A {\it weak selection} on $\mathbb{R}$ is a function $f: [\mathbb{R}]^2 \to \mathbb{R}$ such that $f(\{x,y\}) \in \{x,y\}$ for each $\{x,y\} \in [\mathbb{R}]^2$. In this article, we continue with the study (which was initiated in [ag]) of the outer measures $\lambda_f$ on the real line $\mathbb{R}$ defined by weak selections $f$. One of the main results is to show that $CH$ is equivalent to the existence of a weak selection $f$ for which: \[ \mathcal \lambda_f(A)= \begin{cases} 0 & \text{if $|A| \leq \omega$,}\\ \infty & \text{otherwise.} \end{cases} \] Some conditions are given for a $\sigma$-ideal of $\mathbb{R}$ in order to be exactly the family $\mathcal{N}_f$ of $\lambda_f$-null subsets for some weak selection $f$. It is shown that there are $2^\mathfrak{c}$ pairwise distinct ideals on $\mathbb{R}$ of the form $\mathcal{N}_f$, where $f$ is a weak selection. Also we prove that Martin Axiom implies the existence of a weak selection $f$ such that $\mathcal{N}_f$ is exactly the $\sigma$-ideal of meager subsets of $\mathbb{R}$. Finally, we shall study pairs of weak selections which are "almost equal" but they have different families of $\lambda_f$-measurable sets.} \arXivl{1609.05822} {Selectively (a)-spaces from almost disjoint families are necessarily countable under a certain parametrized weak diamond principle} {Charles J.G. Morgan and Samuel G. Da Silva} {The second author has recently shown [20] that any selectively (a) almost disjoint family must have cardinality strictly less than $2^{\aleph_0}$, so under the Continuum Hypothesis such a family is necessarily countable. However, it is also shown in the same paper that $2^{\aleph_0} < 2^{\aleph_1}$ alone does not avoid the existence of uncountable selectively (a) almost disjoint families. We show in this paper that a certain effective parametrized weak diamond principle is enough to ensure countability of the almost disjoint family in this context. We also discuss the deductive strength of this specific weak diamond principle (which is consistent with the negation of the Continuum Hypothesis, apart from other features). \par Houston Journal of Mathematics 42 (2016), 1031--1046.} \arXivl{1206.0722} {Notes on the od-Lindel\"of property} {Mathieu Baillif} {A space is od-compact (resp. od-Lindel\"of) provided any cover by open dense sets has a finite (resp. countable) subcover. We first show with simple examples that these properties behave quite poorly under finite or countable unions. We then investigate the relations between Lindel\"ofness, od-Lindel\"ofness and linear Lindel\"ofness (and similar relations with `compact'). We prove in particular that if a $T_1$ space is od-compact, then the subset of its non-isolated points is compact. If a $T_1$ space is od-Lindel\"of, we only get that the subset of its non-isolated points is linearly Lindel\"of. Though, Lindel\"ofness follows if the space is moreover locally openly Lindel\"of (i.e. each point has an open Lindel\"of neighborhood).} \arXiv{1610.04800} {Relating games of Menger, countable fan tightness, and selective separability} {Steven Clontz} {By adapting techniques of Arhangel'skii, Barman, and Dow, we may equate the existence of perfect-information, Markov, and tactical strategies between two interesting selection games. 
These results shed some light on Gruenhage's question asking whether all strategically selectively separable spaces are Markov selectively separable.} \arXivl{1611.04998} {Convergence in topological groups and the Cohen reals} {Alexander Shibakov} {We show that it is consistent to have an uncountable sequential group of intermediate sequential order while no countable such groups exist. This is proved by adding $\omega_2$ Cohen reals to a model of $\diamondsuit$.} \section{Short announcements}\label{RA} \arXiv{1607.04756} {Hereditarily normal manifolds of dimension $> 1$ may all be metrizable} {Alan Dow and Franklin D. Tall} \arXiv{1607.07188} {A long chain of P-points} {Borisa Kuzeljevic and Dilip Raghavan} \arXiv{1607.07669} {The cofinal structure of precompact and compact sets in general metric spaces} {Aviv Eshed, M. Vincenta Ferrer, Salvador Hern\'andez, Piotr Szewczak, Boaz Tsaban} \arXiv{1607.07978} {$\omega^\omega$-bases in topological and uniform spaces} {Taras Banakh} \arXiv{1608.03381} {$I$-convergence classes of sequences and nets in topological spaces} {Amar Kumar Banerjee and Apurba Banerjee} \arXiv{1608.07144} {The existence of continuous weak selections and orderability-type properties in products and filter spaces} {Koichi Motooka, Dmitri Shakhmatov, Takamitsu Yamauchi} \arXiv{1610.04506} {Weakly linearly Lindel\"of monotonically normal spaces are Lindel\"of} {I. Juh\'asz, V. V. Tkachuk, R. G. Wilson} \arXiv{1611.07267} {On the cardinality of almost discretely Lindel\"of spaces} {Santi Spadaro} \arXiv{1611.08289} {Categorical properties on the hyperspace of nontrivial convergent sequences} {S. Garcia-Ferreira, R. Rojas-Hernandez, Y. F. Ortiz-Castillo} \arXiv{1612.06651} {First countable and almost discretely Lindel\"of $T_3$ spaces have cardinality at most continuum} {Istv\'an Juh\'asz, Lajos Soukup, Zolt\'an Szentmikl\'ossy} \ed
\section{Introduction} Extreme value theory is used in understanding unlikely observations. It provides a theoretical framework that classifies distributions according to their tail behavior and explains asymptotic properties of the extreme order statistics. It also provides statistical estimators for different interesting quantities. There is a range of applications including finance, insurance, climatology and geology. The possibility to handle multivariate observations is vital for many applications. The traditional way to approach the multivariate setting is to consider the componentwise maxima of the multivariate observations and apply the theory of the one dimensional case to the marginal distributions (see \cite{deHaan} chapter 6). \citet{Ilmonen} approach the problem in a rather different way by considering a situation where the symmetry properties of the distribution provide a natural order relation and hence a natural concept of a multivariate extreme order statistic. Namely, they consider the extreme values of elliptically distributed random variables. Extreme value index encodes information about the tail behavior of univariate distributions. In the case of a positive extreme value index, a popular estimator for this parameter is the Hill estimator proposed in \cite{Hill}. A multivariate generalization of this, the separating Hill estimator, was recently introduced in \cite{Ilmonen}. Below we review the Hill estimator and its asymptotic properties. We also discuss the definition of a positive extreme value index of an elliptical distribution. In the remainder we settle a severe deficiency in the current asymptotic theory of the separating Hill estimator. The value of the separating Hill estimator depends on variables that describe the location and scatter of the underlying distribution. We show that, under mild conditions, the asymptotic properties of the estimator remain unchanged if, instead of true location and scatter, the estimator is evaluated with respect to estimated location and scatter. This is vital for applications, since in practice estimated location and scatter are what one has at one's disposal. This paper is organized as follows. In Section \ref{sec::positive} we review the relevant basics of the Hill estimator, in Section \ref{sec::known} we review the separating Hill estimator and in Section \ref{sec::estimated} we obtain its asymptotic properties under estimated location and scatter. \section{Estimation of a positive extreme value index}\label{sec::positive} The extreme value index yields the limiting distribution of the extreme order statistics subject to a sequence of appropriately chosen affine normalizations. For example, extreme quantile estimation is based on having an estimate of this parameter. The extreme value index naturally arises from the Fisher-Tippett-Gnedenko theorem (see \citet{fisher} and \citet{gnedenko}) which is of fundamental nature for extreme value theory. \begin{theorem} Let $X_n$ be a sequence of i.i.d. real-valued random variables whose cumulative distribution function is $F$. 
If there are sequences of real numbers $a_n > 0$, $b_n$ such that \begin{equation*} P\left( \frac{ \max \left\{ X_1, \dots, X_n \right\} - b_n}{a_n} \leq x \right) = F^n \left( a_n x + b_n \right) \to G(x) \end{equation*} as $n \to \infty$ for all $x\in \mathbb{R}$, then there is $\gamma \in \mathbb{R}$ such that \begin{equation*} G(x)=G_\gamma (x) = \exp \left( - \left(1+ \gamma x \right)^{- 1 / \gamma }\right) \end{equation*} for $1+ \gamma x \geq 0$ and $0$ otherwise. In the case $\gamma =0$ \begin{equation*} G_0 (x) = \exp\left( - e^{-x} \right). \end{equation*} \end{theorem} \noindent The following definitions are now natural. \begin{definition} Let $F$ be a cumulative distribution function such that there are sequences of real numbers $a_n > 0$, $b_n$ satisfying \begin{equation*} F^n \left( a_n x + b_n \right) \to G(x) = G_\gamma (x), \end{equation*} as $n \to \infty$. Then $F$ is said to be in the domain of attraction of $G_\gamma$, denoted $F \in \mathcal{D}\left( G_\gamma \right)$. \end{definition} \begin{definition} The extreme value index of a distribution function $F$ is $\gamma$ if and only if $F \in \mathcal{D}\left( G_\gamma \right)$. \end{definition} \bigskip There are several statistical estimators for $\gamma$ which are valid in different cases and have different desirable properties. Among the earliest ones is the Hill estimator proposed in \citet{Hill}. It is valid for $\gamma > 0$ and among its merits is its straightforward applicability. Since it was proposed, the Hill estimator has indeed become a widely used tool. Consider a sample $s_1, s_2, \dots, s_n \in \mathbb{R}$. Denote the order statistics by $s_{(1,n)} \geq s_{(2,n)} \geq \cdots \geq s_{(n,n)}$. Let $1 \leq k < n$. The expression for the Hill estimator is \begin{equation*} \hat{H}_{k,n} = \frac{1}{k} \sum \limits_{i=1}^k \log \left( \frac{s_{(i,n)}}{s_{(k+1,n)}} \right). \end{equation*} The restriction $\gamma > 0$ is clear since each of the logarithms takes a positive value. \bigskip There are characterizations for the distributions in each $\mathcal{D} \left( G_\gamma \right)$. For $\gamma >0$ the characteristic property is a regularly varying tail. The general definition of this property is as follows. \begin{definition} Let $f: \mathbb{R}_+ \to \mathbb{R}$ be an eventually positive function. If for some $\alpha \in \mathbb{R}$ \begin{equation*} \lim \limits_{t\to \infty}\frac{f(tx)}{f(t)} = x^{\alpha} \end{equation*} holds for all $x >0$, $f$ is said to be regularly varying, denoted $f \in RV_\alpha$. \end{definition} \noindent The number $\alpha$ is called the index of regular variation of $f$. If $\alpha = 0$, $f$ is said to be slowly varying. Regularly varying functions can be considered as a generalization of functions of the form $f(x) = Cx^{\alpha}$, $C \in \mathbb{R},$ to the case where the constant $C$ is replaced with a slowly varying function. The distributions in $\mathcal{D}\left( G_\gamma \right)$ for $\gamma >0$ are precisely those whose tails are regularly varying. \begin{theorem}\label{thm::RVandDomain} Let $F$ be a distribution function. The condition $F \in \mathcal{D} (G_\gamma)$ for $\gamma >0$ holds if and only if \begin{equation*} \lim \limits_{t \to \infty} \frac{1-F(tx)}{1-F(t)} = x^{- 1/\gamma}. \end{equation*} \end{theorem} \noindent In other words: $F \in \mathcal{D} \left(G_\gamma \right)$ for $\gamma > 0$ if and only if $1-F \in RV_{-\alpha}$, where $\alpha = 1 / \gamma$. In this context $\alpha = 1 / \gamma$ is called the tail index of the distribution. 
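\bigskip
\noindent For readers who wish to experiment, the following minimal numerical sketch is an added illustration and not part of the original exposition; the Pareto model, the sample size and the choice of $k$ are arbitrary. It computes $\hat{H}_{k,n}$ for a simulated sample whose tail index is $\alpha = 2$, i.e.\ $\gamma = 1/2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def hill(sample, k):
    # Hill estimator based on the k largest order statistics
    s = np.sort(sample)[::-1]
    return np.mean(np.log(s[:k] / s[k]))

# Pareto tail: 1 - F(x) = x^(-alpha) for x >= 1, so gamma = 1/alpha
alpha, n, k = 2.0, 100_000, 1_000
x = rng.pareto(alpha, size=n) + 1.0   # classical Pareto with x_m = 1
print(hill(x, k))                     # should be close to gamma = 0.5
\end{verbatim}
\noindent The printed estimate should be close to the true value $0.5$.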
\bigskip Asymptotic properties of the Hill estimator have been a subject to much interest. Consider the case where $F$ is a Pareto distribution i.e. $F(x) =1- C x^{-\alpha}$, $x \geq x_m(C) >0$, for some $\alpha>0$. Now the self-similarity properties of the distribution suggest that, intuitively, $\hat{H}_{k,n} \to \gamma$ whenever $k=k(n)$ grows large as $n \to \infty$. This indeed turns out to be the case and the result even holds more generally: When $C$ is replaced by a slowly varying function, i.e. when $1-F$ is regularly varying. \begin{theorem}\label{thm::HillConsistency} Assume that $F \in \mathcal{D}(G_\gamma)$ for $\gamma > 0$. Now, as $k_n \to \infty,$ $n \to \infty,$ $k_n /n \to 0$, \begin{equation*} \hat{H}_{k_n,n} \to_P \gamma. \end{equation*} \end{theorem} \noindent This result was obtained in \citet{Mason}. The natural question after settling the consistency of $\hat{H}_{k,n}$ is its limiting distribution. This requires some additional assumptions about the underlying distribution. The following class of functions that is appropriate for this purpose is discussed in \citet{2erv}. \begin{definition}\label{def::2ERV} A measurable function $f: \mathbb{R}_+ \to \mathbb{R}$ is said to be of second order extended regular variation if \begin{equation*} \lim \limits_{t \to \infty} \frac{\frac{f(tx)-f(t)}{a(t)} -\frac{x^{\gamma}-1}{\gamma}}{A \left( t \right) } = H_{\gamma, \rho}(x) = c_1 \int_1^x s^{\gamma-1} \int_1^s u^{\rho-1 } \mathrm{d}u\mathrm{d}s + c_2\int_1^x s^{\gamma+\rho-1} \mathrm{d}s \end{equation*} for some $c_1,c_2 \in \mathbb{R}$, $\gamma \in \mathbb{R}$ and $\rho \leq 0$, where $H$ is not a multiple of $\frac{x^{\gamma}-1}{\gamma}$ and the positive or negative function $A$ converges to zero as $t \to \infty$. The function $a$ is some positive auxiliary function. We denote this by $f \in 2ERV_{\gamma, \rho}$. \end{definition} \noindent Let $F$ be a distribution function. Consider the following function \begin{equation}\label{eq::U} U(y) = \inf \left\{ x \in \mathbb{R} \: \middle| \frac{1}{1-F(x)} \leq y \: \right\}. \end{equation} It can be shown that $U \in 2ERV_{\gamma, \rho}$ for $\gamma >0$ implies $F \in \mathcal{D}( G_\gamma)$. The asymptotic behavior of $\hat{H}_{k,n}$ is also neatly expressed in terms of $U$. \begin{theorem}\label{thm::univariateHillDistribution} Let $F$ be a distribution function such that the related $U$, given by \eqref{eq::U}, is in $2ERV_{\gamma, \rho}$ for some $\gamma >0$. Let $A$ be the auxiliary function of $U$ in Definition \ref{def::2ERV}. Now, as $k_n \to \infty,$ $n \to \infty,$ $k_n /n \to 0$, \begin{equation*} \sqrt{k_n} \left( \hat{H}_{k_n,n} - \gamma \right) \to \mathcal{N} \left( \frac{\lambda}{1-\rho}, \gamma^2 \right), \end{equation*} where \begin{equation*} \lambda = \lim \limits_{n \to \infty} \sqrt{k_n} A\left( \frac{n}{k_n} \right). \end{equation*} \end{theorem} \noindent It is worth a remark that if \begin{equation*} \lim \limits_{t\to \infty} t^\alpha \left( \frac{U(tx)}{U(t)} - x^\gamma \right)=0 \end{equation*} for all $\alpha >0$, the result above holds with $\lambda=0$, i.e. the estimator is then asymptotically unbiased. The general conditions for the tail of the distribution under which the sequence $k_n$ can be selected so that the Hill estimator is asymptotically normal were derived in \citet{haeusler}. The roles of different smoothness conditions for the asymptotic normality of the Hill estimator were further clarified in \citet{deHaanHillNormal}. 
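\bigskip
\noindent As a quick numerical illustration of Theorem \ref{thm::univariateHillDistribution} (again an added sketch with arbitrary simulation parameters, not part of the original text), note that for an exact Pareto distribution the bias term vanishes ($\lambda = 0$), so $\sqrt{k_n}\,(\hat{H}_{k_n,n} - \gamma)$ should be approximately centered with standard deviation close to $\gamma$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def hill(sample, k):
    s = np.sort(sample)[::-1]
    return np.mean(np.log(s[:k] / s[k]))

# Exact Pareto law, so the bias term vanishes (lambda = 0).
gamma, n, k, reps = 0.5, 20_000, 200, 1_000
z = np.array([np.sqrt(k) * (hill(rng.pareto(1.0 / gamma, n) + 1.0, k) - gamma)
              for _ in range(reps)])
print(z.mean(), z.std())   # mean near 0, standard deviation near gamma = 0.5
\end{verbatim}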
Together these results provide a satisfying picture of the basic asymptotic properties of the Hill estimator. They were also the foundation for the asymptotic properties of the separating Hill estimator derived in \citet{Ilmonen}. Other aspects of the asymptotic behavior that have been examined include asymptotic bias of the estimator (see e.g. \cite{haanPeng} and \citet{haeusler}), optimal selection of the sequence $k_n$ (see \citet{hall2}, \citet{gomes} and \citet{danielsson}) and bias correction (see \citet{gomes2} and references therein). \section{Separating Hill estimator under known location and scatter}\label{sec::known} In the univariate context, extreme values are observations that are exceptionally large or small. In the multivariate setting, the situation is less straightforward due to the lack of a canonical order relation. A possible approach explored in e.g. \citet{Sibuya}, \citet{Tiago} and \citet{deHaanResnickMulti} is to consider the componentwise maxima of the sample. However, it is questionable whether this approach gives a good definition of multivariate extremes. It is not invariant under affine transformations. In fact, a simple rotation may alter the data points that are considered as extreme observations. Under the assumption of multivariate ellipticity, the symmetry properties allow for a different approach. A random variable is said to be elliptically distributed if \begin{equation}\label{eq::elliptical} X \sim \mu + \mathcal{R} \Lambda U, \end{equation} where the location vector $\mu \in \mathbb{R}^d$, the random vector $U$ is uniformly distributed on the unit sphere $\mathbb{S}^{d-1} \subset \mathbb{R}^d$, the matrix $\Lambda \in \mathbb{R}^{d \times d}$ is such that the scatter matrix $\Sigma = \Lambda \Lambda^T \in \mathbb{R}^{d \times d}$ is a full rank positive definite matrix, and $\mathcal{R}$ is a real valued random variable. The random variable $\mathcal{R}$ is called the generating variate of the distribution. The family of elliptical distributions was introduced in \citet{kelker}. (See also \citet{fang}.) The symmetry properties of elliptical distributions motivate considering the Mahalanobis distance introduced in \citet{mahalanobis}. (See also \citet{mahalanobis2}.) \begin{definition} Let $A \in \mathbb{R}^{d\times d}$ be a full rank symmetric positive definite matrix. The metric $d_A$ given by \begin{equation*} d_A(x,y) = \left( \left\langle \, x-y \, \middle| \, A^{-1} \left( x-y\right)\, \right\rangle \right)^{\frac{1}{2}} \end{equation*} is called the Mahalanobis distance relative to $A$. \end{definition} \noindent The crucial observation here is the following: If $X$ follows an elliptical distribution with the location $\mu$ and scatter $\Sigma,$ the Mahalanobis distance $d_\Sigma(X, \mu)$ is equal in distribution to $\mathcal{R}$ as \begin{eqnarray*} d_\Sigma(X, \mu)^2 &\sim & \left\langle \, \mu+ \mathcal{R} \Lambda U -\mu \, \middle| \Sigma^{-1} \left(\mu + \mathcal{R} \Lambda U -\mu \right) \, \, \right\rangle \\ &\sim & \mathcal{R}^2 \left\langle \, U \, \middle| \Lambda^T \left( \Lambda \Lambda^T \right)^{-1} \Lambda U \, \, \right\rangle \\ &\sim& \mathcal{R}^2 \left\| U \right\|^2 \sim \mathcal{R}^2. \end{eqnarray*} Despite its simplicity, this observation rigorously describes the intimate relationship between $X$ and $\mathcal{R}$. It leads us to consider extreme observations under ellipticity to be the ones that correspond to extreme values of $\mathcal{R}$ in the univariate sense. 
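\bigskip
\noindent The computation above can also be checked by simulation. The following sketch is an added illustration with an arbitrary choice of $\mu$, $\Lambda$ and generating variate, not part of the original text; it draws from the representation \eqref{eq::elliptical} and confirms that the Mahalanobis distances $d_\Sigma(X_i,\mu)$ reproduce the generating variate $\mathcal{R}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

d, n = 3, 50_000
mu = np.array([1.0, -2.0, 0.5])
Lam = np.array([[2.0, 0.0, 0.0],
                [0.3, 1.0, 0.0],
                [-0.5, 0.2, 0.7]])
Sigma = Lam @ Lam.T

R = rng.pareto(3.0, size=n) + 1.0              # generating variate
U = rng.standard_normal((n, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)  # uniform on the unit sphere
X = mu + R[:, None] * (U @ Lam.T)

diff = X - mu
D2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
print(np.allclose(np.sqrt(D2), R))             # True: d_Sigma(X_i, mu) = R_i
\end{verbatim}
\noindent With the true $\mu$ and $\Sigma$ the identity is in fact pathwise, which is exactly the computation displayed above.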
Consequently, under ellipticity, we have a concept of an extreme value index that is both intuitively and formally sensible. \begin{definition}\label{def::ellipticalExtreme} An elliptical distribution is said to have an extreme value index $\gamma >0$ if and only if the extreme value index of its generating variate is $\gamma >0$. \end{definition} The definition above is further supported by a result from \citet{ellipticalRV}. The general approach to multivariate regular variation is the following. The distribution of a $d$-dimensional random variable $X$ is said to be regularly varying with index $\alpha>0$ if there exists a random variable $\Theta$ with values on the unit sphere $\mathbb{S}^{d-1}$ a.s. such that, for all $x>0$, as $t \to \infty$, \begin{equation*} \frac{P \left( \left\| X \right\| \geq tx , \, X / \left\| X \right\| \in \cdot \right)}{P \left( \left\| X \right\| \geq t \right)} \to_v x^{-\alpha} P \left( \Theta \in \cdot \right), \end{equation*} where $\to_v$ denotes vague convergence. It was shown in \citet{ellipticalRV} that the regular variation of an elliptical distribution (defined in the general sense) is equivalent to the regular variation of the tail of the corresponding generating variate. Recall now that by Theorem \ref{thm::RVandDomain}, a positive extreme value index is equivalent to having a regularly varying tail. \bigskip An affine invariant multivariate extension of the Hill estimator was introduced in \citet{Ilmonen}: Consider a multivariate sample $S_n=\left\{X_1, \dots, X_n \right\} \subset \mathbb{R}^d$. Let $\mu \in \mathbb{R}^d$ and let $\Sigma \in \mathbb{R}^{d\times d}$ be a full rank positive definite matrix. Let $D_{(1,n)} \geq D_{(2,n)} \geq \cdots \geq D_{(n,n)}$ be the order statistics of the Mahalanobis distances $d_\Sigma(X_i,\mu)$. The separating Hill estimator under these parameters is given by \begin{equation}\label{eq::sepHill} \hat{H}_d \left( S_n, \mu, \Sigma, k,n \right) = \frac{1}{k} \sum \limits_{i=1}^k \log \left( \frac{D_{(i,n)}}{D_{(k+1,n)}} \right). \end{equation} Under ellipticity, as observed above, if the location and scatter of the underlying distribution are $\mu$ and $\Sigma$ respectively, the Mahalanobis distance $d_\Sigma(X,\mu)$ is equal in distribution to $\mathcal{R}$. In that case the asymptotic properties of $\hat{H}_d$ under known location and scatter are straightforward to derive. This was done in \citet{Ilmonen}. However, in practice, one has to estimate the location and scatter. The asymptotic theory under known location and scatter is thus highly insufficient for ensuring the practical applicability of the estimator. In this paper we settle the matter by proving that replacing location and scatter by estimates does not affect the asymptotic behavior of the separating Hill estimator. This holds as long as the estimators for the location and scatter are $\sqrt{n}$-consistent. \section{Separating Hill estimator under estimated location and scatter}\label{sec::estimated} It is not trivial that the asymptotic properties of the separating Hill estimator are not affected by replacing the true location and scatter in the expression \eqref{eq::sepHill} with estimates. Let $d \geq 2$. Throughout this section, including Subsections \ref{sec::Lemmas} and \ref{sec::main}, we assume that $X_1, X_2, \dots : \Omega \to \mathbb{R}^d$ is a sequence of i.i.d. random variables with a $d$-dimensional elliptical distribution \eqref{eq::elliptical}.
We assume that the distribution has a positive extreme value index $\gamma$. We denote the location parameter of the distribution by $\mu$, and the positive definite scatter parameter by $\Sigma$. The notations $\hat{\mu}_n$ and $\hat{\Sigma}_n$ are used for estimators of the location and scatter of the distribution, respectively. Throughout this section, again including Subsections \ref{sec::Lemmas} and \ref{sec::main}, we write \begin{eqnarray*} R_i &=& \left\langle \, X_i - \mu \, \middle| \, \Sigma^{-1} \left( X_i - \mu \right) \, \right\rangle^{1/2}, \\ E_i^{(n)} &=& \left\langle \, X_i - \hat{\mu}_n \, \middle| \, \hat{\Sigma}^{-1}_n \left( X_i - \hat{\mu}_n \right) \, \right\rangle^{1/2}. \end{eqnarray*} That is, $R_i$ is the true Mahalanobis distance of $X_i$ from the location $\mu$ and $E_i^{(n)}$ is the $n$th estimate of it. We denote their order statistics by $R_{(1,n)} \geq R_{(2,n)} \geq \dots \geq R_{(n,n)}$ and $E_{(1,n)} \geq E_{(2,n)} \geq \dots \geq E_{(n,n)}$, and by $S_n$ we denote the set of the first $n$ observations $X_i$, $S_n = \left\{ X_1 ,\dots, X_n \right\}$. Below we will show that the difference \begin{equation*} \left| \hat{H}_d(S_n,\mu,\Sigma,k_n,n) - \hat{H}_d(S_n,\hat{\mu}_n,\hat{\Sigma}_n,k_n,n) \right| \end{equation*} becomes negligible if $k_n \to \infty,$ $n \to \infty,$ $k_n / n \to 0$ and if the estimators $\hat{\mu}_n$ and $\hat{\Sigma}_n$ are $\sqrt{n}$-consistent. Our strategy is straightforward. The difference above consists of log-ratios \begin{equation}\label{eq::logratio} \log \left( \frac{E_{(i,n)}}{R_{(i,n)}} \right), \end{equation} where $ 1 \leq i \leq k_n+1$. We use an elementary argument to bound the absolute value of these expressions uniformly by a sequence that approaches zero in probability sufficiently quickly. The desired results are then obtained by showing that this bound holds with a probability that approaches one. \subsection{Controlling the log-ratios of the order statistics}\label{sec::Lemmas} The asymptotic results for the separating Hill estimator under estimated location and scatter are obtained in two parts. We begin by proving the following lemmas that give a bound for the individual log-ratios \eqref{eq::logratio}. We will prove that the bound is valid for the pairs $\left( R_{(i,n)},E_{(i,n)} \right)$, where $1 \leq i \leq l$, if $l$ satisfies a certain condition and $n$ is large enough. \vspace{12pt} \begin{lemma}\label{Lemma::epsilons} Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables with a $d$-dimensional elliptical distribution \eqref{eq::elliptical}. Let $R_{(i,n)}$ and $E_{(i,n)}$ be defined as at the beginning of Section \ref{sec::estimated}. For all $n > 0$ and $ 1 \leq i \leq n,$ define a random variable $\varepsilon_{(i,n)}$ by $\varepsilon_{(i,n)}= E_{(i,n)}^2 - R_{(i,n)}^2.$ Let $1 \leq l \leq n$.
If $M_n < 1$ and $R_{(l,n)} > \frac{M_n}{2 \left( 1- M_n \right)},$ then \begin{equation*} \left| \varepsilon_{(l,n)} \right| \leq \delta_n \left( R_{(l,n)} \right), \end{equation*} where the polynomial $\delta_n$ is given by \begin{equation*} \delta_n \left( x \right) = M_n x^2 + M_n x + M_n, \end{equation*} and the coefficient $M_n$ in the above expression is given by \begin{eqnarray}\label{eq::Mn} M_n &=& \max \bigg\{ \lambda_{\max} \, A_n, \, \sqrt{\lambda_{\max}} \left(2 \left\| \mu \right\| A_n + B_n \right) , \\&& \, A_n \left\| \mu \right\|^2 + B_n \left\| \mu \right\| + C_n \, \bigg\}, \nonumber \end{eqnarray} where $\lambda_{\max}$ denotes the largest eigenvalue of $\Sigma$ and \begin{eqnarray*} A_n&=& \left\| \Sigma^{-1} - \hat{\Sigma}^{-1}_n \right\|, \\ B_n &=& \left( \left\| \hat{\mu}_n \right\| + \left\| \mu \right\| \right)\left\| \Sigma^{-1} - \hat{\Sigma}^{-1}_n \right\| + \left( \left\| \hat{\Sigma}^{-1}_n \right\| + \left\| \Sigma^{-1} \right\| \right) \left\| \mu - \hat{\mu}_n \right\|, \\ C_n &=& \left\| \mu \right\|^2 \left\| \Sigma^{-1} - \hat{\Sigma}^{-1}_n \right\| + \left( \left\| \mu \right\| + \left\| \hat{\mu}_n \right\| \right) \left\| \hat{ \Sigma}^{-1}_n \right\| \left\| \mu- \hat{\mu}_n \right\|. \end{eqnarray*} \end{lemma} \begin{proof} The scatter matrix $\Sigma$ is positive definite. Thus its eigenvalues $\lambda_1, \dots, \lambda_d$ are positive and a properly normalized set of its eigenvectors $\left\{ e_1 , \dots, e_d \right\}$ forms an orthonormal basis for $\mathbb{R}^d$. Write \begin{equation*} \lambda_{\max} = \max \left\{ \lambda_1, \dots, \lambda_d \right\}. \end{equation*} The following inequality holds for all $x,y \in \mathbb{R}^d$: \begin{equation}\label{eq::mahalanobisBound} \left\| x-y \right\| \leq \sqrt{\lambda_{\max}} \, d_\Sigma (x,y). \end{equation} This can be seen by writing $x$ and $y$ in the basis $\left\{ e_1, \dots, e_d \right\}$: \begin{eqnarray*} \left\| x-y \right\|^2 &=& \sum \limits_{i=1}^d \left( x_i - y_i \right)^2 \\ &=& \lambda_{\max} \, \sum \limits_{i=1}^d \frac{1}{\lambda_{\max}} \left( x_i - y_i \right)^2 \\ &\leq & \lambda_{\max} \, \sum \limits_{i=1}^d \frac{1}{\lambda_{i}} \left( x_i - y_i \right)^2 \left\langle \, e_i \, \middle|\, e_i \, \right\rangle \\ &=& \lambda_{\max} \left\langle \, \sum \limits_{j=1}^d \left( x_j - y_j \right)e_j \, \middle|\, \sum \limits_{i=1}^d \left( x_i - y_i \right) \Sigma^{-1} e_i \, \right\rangle \\ &=& \lambda_{\max} \, d_\Sigma(x,y)^2.
\end{eqnarray*} By the Cauchy--Schwarz inequality and the bound $\left\| A x\right\| \leq \left\| A \right\| \left\| x \right\|$ for the operator norm, we have that for all $1 \leq i \leq n$: \begin{eqnarray*} \left| R_i^2 - \left( E_i^{(n)} \right)^2 \right| &=& \bigg|\left\langle \, X_i - \mu \, \middle| \, \Sigma^{-1} \left( X_i - \mu \right) \, \right\rangle \\&& - \left\langle \, X_i - \hat{\mu}_n \, \middle| \, \hat{\Sigma}^{-1}_n \left( X_i - \hat{\mu}_n \right) \, \right\rangle\bigg| \\ &\leq& \left\| \Sigma^{-1} - \hat{\Sigma}^{-1}_n \right\| \left\| X_i \right\|^2 + \biggl( \left( \left\| \hat{\mu}_n \right\| + \left\| \mu \right\| \right) \left\| \Sigma^{-1} - \hat{\Sigma}^{-1}_n \right\| \\&& + \left( \left\| \hat{\Sigma}^{-1}_n \right\| + \left\| \Sigma^{-1} \right\| \right) \left\| \mu - \hat{\mu}_n \right\| \biggr) \left\| X_i \right\| \\&& + \biggl( \left\| \mu \right\|^2 \left\| \Sigma^{-1} - \hat{\Sigma}^{-1}_n \right\| \\&& + \left( \left\| \mu \right\| + \left\| \hat{\mu}_n \right\| \right) \left\| \hat{ \Sigma}^{-1}_n \right\| \left\| \mu- \hat{\mu}_n \right\| \biggr). \end{eqnarray*} Denote the coefficients of $\left\| X_i \right\|^2$ and $\left\| X_i \right\|$ by $A_n$ and $B_n$, respectively, and denote the expression inside the last large brackets by $C_n$. By the estimate $\left\| X \right\| \leq \left\| X - \mu \right\| + \left\| \mu \right\|$ the expression above has an upper bound \begin{equation*} A_n \left\| X_i - \mu \right\|^2 + \left( 2 \left\| \mu \right\| A_n + B_n \right) \left\| X_i - \mu \right\| + \left( A_n \left\| \mu \right\|^2 + B_n \left\| \mu \right\| + C_n \right). \end{equation*} We can now relate $\left\| X_i - \mu \right\|$ and $R_i$ using Equation \eqref{eq::mahalanobisBound} and obtain the following upper bound for the original expression: \begin{equation*} \lambda_{\max} \, A_n R_i^2 + \sqrt{ \lambda_{\max} } \, \left( 2 \left\| \mu \right\| A_n + B_n \right) R_i + \left( A_n \left\| \mu \right\|^2 + B_n \left\| \mu \right\| + C_n \right). \end{equation*} An upper bound for this is $\delta_n \left( R_i \right) =M_n R_i^2 + M_n R_i +M_n$, where $M_n$ is as in \eqref{eq::Mn}. Consider \begin{equation*} \varepsilon_{(i,n)} = E^{2}_{(i,n)} - R_{(i,n)}^2. \end{equation*} We bound $\left| \varepsilon_{(i,n)} \right|$ by finding an upper bound and a lower bound for the difference above. We also find the conditions under which these bounds are valid. The upper bound: Let $1 \leq l \leq n$. Since $M_n > 0,$ the function \begin{equation*} x^2 + \delta_n(x) = (1+M_n) x^2 + M_n x + M_n \end{equation*} is strictly increasing for $x > 0$. Thus, for all $R_{j} \leq R_{(l,n)},$ we have that \begin{equation*} \left( E_{j}^{(n)} \right)^2 \leq R_j^2 + \delta_n \left( R_j \right) \leq R_{(l,n)}^2 + \delta_n \left( R_{(l,n)}\right). \end{equation*} It now follows that $R_{(l,n)}^2 + \delta_n \left( R_{(l,n)}\right)$ is an upper bound for at least $n-l+1$ of the observations $\left(E_{j}^{(n)} \right)^2$, or \begin{equation*} E_{(l,n)}^2 \leq R_{(l,n)}^2 + \delta_n \left( R_{(l,n)}\right). \end{equation*} Equivalently \begin{equation*} \varepsilon_{(l,n)} \leq \delta_n \left( R_{(l,n)}\right). \end{equation*} The lower bound: By differentiating we see that the function \begin{equation*} x^2 - \delta_n \left( x \right) = (1-M_n) x^2 -M_n x - M_n \end{equation*} is strictly increasing if \begin{equation*} \frac{2 \left( 1-M_n \right)}{M_n} \, x > 1.
\end{equation*} Thus, assuming that $M_n <1$ and $R_{(l,n)} > \frac{M_n}{2 \left( 1-M_n \right)},$ we have, for all $R_j \geq R_{(l,n)},$ that \begin{equation*} \left( E_{j}^{(n)} \right)^2 \geq R_j^2 - \delta_n \left( R_j \right) \geq R_{(l,n)}^2 - \delta_n \left( R_{(l,n)}\right). \end{equation*} Thus, under the given conditions, $R_{(l,n)}^2 - \delta_n \left( R_{(l,n)}\right)$ is a lower bound for at least $l$ of the observations $\left(E_{j}^{(n)} \right)^2$, or \begin{equation*} E_{(l,n)}^2 \geq R_{(l,n)}^2 - \delta_n \left( R_{(l,n)}\right). \end{equation*} Equivalently \begin{equation*} \varepsilon_{(l,n)} \geq - \delta_n \left( R_{(l,n)}\right). \end{equation*} \end{proof} \vspace{12pt} Next we obtain a uniform bound for the large log-ratios in terms of $M_n$. \vspace{12pt} \begin{lemma}\label{Lemma::differences} Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables with a $d$-dimensional elliptical distribution \eqref{eq::elliptical}. Let $R_{(i,n)}$ and $E_{(i,n)}$ be defined as at the beginning of Section \ref{sec::estimated} and let $M_n$ be defined as in Lemma \ref{Lemma::epsilons}. Let $1 \leq l \leq n$. Assume that $M_n < 1$ and $R_{(l ,n )} \geq \frac{M_n}{2\left(1 - M_n \right)}$. Denote \begin{equation*} a_n = M_n + \frac{M_n}{R_{(l,n)}} + \frac{M_n}{R^2_{(l,n)}}. \end{equation*} If $a_n < 1$, then the following holds for all $1 \leq i \leq l$: \begin{equation*} \left| \log \left( \frac{E_{(i,n)}}{E_{(l,n)}} \right) - \log \left( \frac{R_{(i,n)}}{R_{(l,n)}} \right) \right| \leq \log \left( \frac{1}{1-a_n} \right). \end{equation*} \end{lemma} \begin{proof} By the triangle inequality \begin{eqnarray*} \left| \log \left( \frac{E_{(i,n)}}{E_{(l,n)}} \right) - \log \left( \frac{R_{(i,n)}}{R_{(l,n)}} \right) \right| &=& \frac{1}{2} \left| \log \left( \frac{E^2_{(i,n)}}{R^2_{(i,n)}} \right) + \log \left( \frac{R^2_{(l,n)}}{E^2_{(l,n)}} \right) \right| \\ &\leq& \frac{1}{2} \left| \log \left( \frac{E^2_{(i,n)}}{R^2_{(i,n)}} \right) \right| + \frac{1}{2} \left| \log \left( \frac{E^2_{(l,n)}}{R^2_{(l,n)}} \right) \right| \\ &=& \frac{1}{2} \left| \log \left( 1+ \frac{ \varepsilon_{(i,n)}}{R^2_{(i,n)}} \right) \right| \\&&+\frac{1}{2} \left| \log \left( 1+ \frac{ \varepsilon_{(l,n)}}{R^2_{(l,n)}} \right) \right|, \end{eqnarray*} where the $\varepsilon_{(i,n)}$ are as in Lemma \ref{Lemma::epsilons}. Under the assumptions $M_n < 1$ and $R_{(l,n)} > \frac{M_n}{2 \left( 1-M_n \right)},$ Lemma \ref{Lemma::epsilons} yields \begin{equation*} \left| \varepsilon_{(i,n)} \right| \leq \delta_n \left(R_{(i,n)} \right) \end{equation*} for all $1 \leq i \leq l$. Since $R_{(i,n)} \geq R_{(l,n)}$, the quantity \begin{equation*} \frac{ \delta_n \left( R_{(i,n)} \right) }{R^2_{(i,n)}} = M_n + \frac{M_n}{R_{(i,n)}} +\frac{M_n}{R^2_{(i,n)}} \end{equation*} is bounded from above by \begin{equation*} a_n=M_n + \frac{M_n}{R_{(l,n)}} +\frac{M_n}{R^2_{(l,n)}}. \end{equation*} Assuming that $a_n <1$, by monotonicity of the logarithm, the following holds for all $1 \leq i \leq l$: \begin{equation*} \left| \log \left( 1+ \frac{ \varepsilon_{(i,n)}}{R^2_{(i,n)}} \right) \right| \leq \max \left\{ \left| \log \left( 1 \pm \frac{ \delta_n \left( R_{(i,n)} \right) }{R^2_{(i,n)}} \right) \right| \right\} \leq \max \left\{ \left| \log \left( 1 \pm a_n \right) \right| \right\}. \end{equation*} The mean value theorem yields \begin{equation*} \max \left\{ \left| \log \left( 1 \pm a_n \right) \right|\right\} = \left| \log \left( 1-a_n \right) \right| = \log \left( \frac{1}{1-a_n} \right).
\end{equation*} Thus, under our assumptions, we obtain an upper bound for the original expression: \begin{eqnarray*} &&\frac{1}{2} \left| \log \left( 1+ \frac{ \varepsilon_{(i,n)}}{R^2_{(i,n)}} \right) \right| +\frac{1}{2} \left| \log \left( 1+ \frac{ \varepsilon_{(l,n)}}{R^2_{(l,n)}} \right) \right| \\ &\leq& \frac{1}{2} \log \left( \frac{1}{1-a_n} \right)+ \frac{1}{2} \log \left( \frac{1}{1-a_n} \right) \\ &=&\log \left( \frac{1}{1-a_n} \right). \end{eqnarray*} \end{proof} \subsection{Asymptotic properties of the separating Hill estimator}\label{sec::main} \vspace{12pt} \noindent Equipped with the lemmas derived in the last section, we will now return to the separating Hill estimator. The bound given in Lemma \ref{Lemma::differences} suffices to control the amount by which replacing the true location and scatter by estimates distorts the value of $\hat{H}_{d}$. The conditions under which the bound is valid are asymptotically nonrestrictive. \vspace{12pt} \begin{theorem}\label{thm::consistency} Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables with a $d$-dimensional elliptical distribution \eqref{eq::elliptical} that has a positive extreme value index $\gamma$. Assume that $\hat{\mu}_n$ and $\hat{\Sigma}_n$ are consistent estimators of the location and scatter of the distribution, respectively. Now, as $k_n \to \infty,$ $n \to \infty,$ $k_n /n \to 0$, we have that \begin{equation*} \hat{H}_{d} \left( S_n, \hat{\mu}_n, \hat{\Sigma}_n , k_n ,n \right) \to_P \gamma, \end{equation*} where $\hat{H}_d$ is as in \eqref{eq::sepHill}. \end{theorem} \begin{proof} Let $k_n \to \infty,$ $n \to \infty,$ and let $k_n /n \to 0$. Let $M_n$ be defined as in Lemma \ref{Lemma::epsilons}. By definition, consistency of $\hat{\mu}_n$ and $\hat{\Sigma}_n$ imply that $M_n \to_P 0$. Since the sequence $k_n \to \infty$ is such that $k_n / n \to 0$, we have that $R_{(k_n+1, n)} \to_P \infty$. Thus the conditions of Lemma \ref{Lemma::epsilons} \begin{equation}\label{eq::conditions} M_n < 1 \quad \text{and} \quad R_{(k_n+1,n)} \geq \frac{M_n}{2 \left( 1- M_n \right) } \end{equation} hold with a probability $p_n$ that approaches one. Define the following auxiliary sequence \begin{equation*} b_n = \begin{cases}\begin{array}{ll} \log (2), & \text{if } a_n > \frac{1}{2} \\ \log \left( \frac{1}{1-a_n} \right), & \text{if } a_n \leq \frac{1}{2} \end{array} \end{cases} , \end{equation*} where, as in Lemma \ref{Lemma::differences}, \begin{equation*} a_n = M_n + \frac{M_n}{R_{(k_n+1,n)}} + \frac{M_n}{R^2_{(k_n+1,n)}}. \end{equation*} Since $R_{(k_n+1,n)} \to_P \infty$ and $M_n \to_P 0$, we have that $a_n \to_P 0$, and by the continuous mapping theorem $b_n \to_P 0$. Denote by $q_n$ the probability of the event $a_n \leq \frac{1}{2}$. Assume that the conditions \eqref{eq::conditions} hold, and that $a_n \leq \frac{1}{2}$. Then, by Lemma \ref{Lemma::differences}, \begin{eqnarray*} &&\left| \hat{H}_{d} \left( S_n, \hat{\mu}_n, \hat{\Sigma}_n , k_n ,n \right)- \hat{H}_{d} \left( S_n, \mu, \Sigma , k_n ,n \right) \right| \\ &=&\left| \frac{1}{k_n} \sum \limits_{i=1}^{k_n} \log \left( \frac{E_{(i,n)}}{E_{(k_n+1,n)}} \right) - \frac{1}{k_n} \sum \limits_{i=1}^{k_n} \log \left( \frac{R_{(i,n)}}{R_{(k_n+1,n)}} \right) \right| \\ &\leq& \frac{1}{k_n} \sum \limits_{i=1}^{k_n} \left| \log \left( \frac{E_{(i,n)}}{E_{(k_n+1,n)}} \right) - \log \left( \frac{R_{(i,n)}}{R_{(k_n+1,n)}} \right) \right| \leq \log \left( \frac{1}{1-a_n} \right) \\&=&b_n. 
\end{eqnarray*} Since $p_n, q_n \to 1$, the probability of both conditions \eqref{eq::conditions} and $a_n \leq \frac{1}{2}$ holding simultaneously --- and consequently the probability of the sequence $b_n$ being an upper bound for the difference --- approaches one. The result now follows from $b_n \to_P 0$ and from the fact that the separating Hill estimator is consistent under known location and scatter, as was shown in \citet{Ilmonen}. \end{proof} \vspace{12pt} When it comes to the limiting distribution, the rate of convergence of $\hat{\mu}_n$ and $\hat{\Sigma}_n$ also plays a role. It turns out that $\sqrt{n}$-consistency of the estimators is sufficient. As a familiar example, the sample mean vector and the sample covariance matrix are $\sqrt{n}$-consistent if the fourth moments of the marginal distributions are finite. Let $\mathcal{R}$ be the generating variate of an elliptical distribution with an extreme value index $\gamma > 0$. Define the function $U$ associated with $\mathcal{R}$ as in \eqref{eq::U}. Assume that $U \in ERV2_{\gamma, \rho}$ and let $A$ be a corresponding auxiliary function introduced in Definition \ref{def::2ERV}. Below in Theorem \ref{thm::limiting} we simply refer to an elliptical distribution with parameters $\gamma, \rho$ and an auxiliary function $A.$ \vspace{12pt} \begin{theorem}\label{thm::limiting} Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables with a $d$-dimensional elliptical distribution with parameters $\gamma, \rho,$ and an auxiliary function $A$ (see the discussion above). Assume that $\sqrt{n}\left(\mu -\hat{\mu}_n \right)$ and $\sqrt{n}\left(\Sigma -\hat{\Sigma}_n \right)$ converge in distribution. Let $k_n \to \infty,$ $n \to \infty,$ $k_n /n \to 0$, and let \begin{equation*} \lambda = \lim \limits_{n \to \infty} A \left( \frac{n}{k_n} \right). \end{equation*} Then \begin{equation*} \sqrt{k_n} \left( \hat{H}_{d} \left( S_n, \hat{\mu}_n, \hat{\Sigma}_n , k_n ,n \right) - \gamma \right) \to_D \mathcal{N} \left( \frac{\lambda}{1- \rho}, \gamma^2 \right), \end{equation*} where $\hat{H}_d$ is as in \eqref{eq::sepHill}. \end{theorem} \begin{proof} Consider an elliptical distribution with parameters $\gamma, \rho, $ and an auxiliary function $A.$ Assume that $\left(\mu -\hat{\mu}_n \right) = O_p\left(\frac{1}{\sqrt{n}}\right)$ and that $\left(\Sigma -\hat{\Sigma}_n \right) = O_p\left(\frac{1}{\sqrt{n}}\right)$. Let $k_n \to \infty,$ $n \to \infty,$ $k_n /n \to 0$, and let \begin{equation*} \lambda = \lim \limits_{n \to \infty} A \left( \frac{n}{k_n} \right). \end{equation*} We have that \begin{eqnarray*} &&\sqrt{k_n} \left( \hat{H}_{d} \left( S_n, \hat{\mu}_n, \hat{\Sigma}_n , k_n ,n \right) - \gamma \right) \\ &=& \sqrt{k_n} \left( \hat{H}_{d} \left( S_n, \hat{\mu}_n, \hat{\Sigma}_n , k_n ,n \right) - \hat{H}_{d} \left( S_n, \mu, \Sigma , k_n ,n \right) \right) \\ &&+ \sqrt{k_n} \left( \hat{H}_{d} \left( S_n, \mu, \Sigma , k_n ,n \right) - \gamma \right). \end{eqnarray*} We now consider the first term and show that it converges to zero in probability. The claim then follows from Slutsky's theorem and the limiting normality of the separating Hill estimator under known location and scatter that was proven in \cite{Ilmonen}. Let $a_n$ and $b_n$ be as in the proof of Theorem \ref{thm::consistency}. Then with a probability that approaches one, we have that \begin{eqnarray*} \left| \sqrt{k_n} \left( \hat{H}_{d} \left( S_n, \hat{\mu}_n, \hat{\Sigma}_n , k_n ,n \right) - \hat{H}_{d} \left( S_n, \mu, \Sigma , k_n ,n \right) \right) \right| \leq \sqrt{k_n} b_n.
\end{eqnarray*} It now suffices to show that the right hand side of the inequality above converges to zero in probability. If $a_n \leq \frac{1}{2}$, then the following holds: \begin{eqnarray*} \sqrt{k_n} b_n = \sqrt{k_n} \, \left| \log \left( 1 - a_n \right) \right| &=& \sqrt{k_n} \, \left| \sum \limits_{m=1}^{\infty} \frac{\left( -1 \right)^{m+1} }{m} \left( -a_n \right)^m \right| \\ &\leq& \sqrt{k_n} a_n + \sqrt{k_n} \sum \limits_{m=2}^{\infty} \left| \frac{\left( -1 \right)^{m+1} }{m} \left( - a_n \right)^m \right| \\ &\leq& \sqrt{k_n} a_n + \sqrt{k_n} a_n \sum \limits_{m=2}^{\infty} \frac{1}{2^{m-1}m} \\ &=&\left( 1+ S \right) \sqrt{k_n} a_n, \end{eqnarray*} where $S$ is the sum of the series \begin{equation*} \sum \limits_{m=2}^{\infty} \frac{1 }{2^{m-1}m} , \end{equation*} which converges by the ratio test. Recall that \begin{equation*} \sqrt{k_n} a_n = \sqrt{k_n} M_n + \sqrt{k_n} \frac{M_n}{R_{(k_n+1,n)}} + \sqrt{k_n} \frac{M_n}{R^2_{(k_n+1,n)} }. \end{equation*} Since $\sqrt{n} \left( \hat{\Sigma}_n - \Sigma \right)$ is assumed to converge in distribution and \begin{equation*} \sqrt{n} \left( \hat{\Sigma}^{-1}_n - \Sigma^{-1} \right) = \hat{\Sigma}^{-1}_n \left( \sqrt{n} \left( \Sigma -\hat{\Sigma}_n \right) \right) \Sigma^{-1}, \end{equation*} we have, by Slutsky's theorem, that $\sqrt{n} \left( \hat{\Sigma}^{-1}_n - \Sigma^{-1} \right)$ also converges in distribution. The sequence $\sqrt{n} \, \left( \hat{\mu}_n - \mu \right)$ converges in distribution. Since convergence in distribution implies boundedness in probability, we have that \begin{eqnarray*} \sqrt{k_n} \, \left( \hat{\mu}_n - \mu \right) &=& \sqrt{\frac{k_n}{n}} \sqrt{n} \, \left( \hat{\mu}_n - \mu \right) \to_P 0 \\ \sqrt{k_n}\left( \hat{\Sigma}^{-1}_n - \Sigma^{-1} \right) &=& \sqrt{\frac{k_n}{n}} \, \sqrt{n} \left( \hat{\Sigma}^{-1}_n - \Sigma^{-1} \right) \to_P 0, \end{eqnarray*} as $k_n / n \to 0$. Each term in the maximum defining $M_n$ is a sum of products in which one factor is one of the differences above and the remaining factors are bounded in probability. Thus, by Slutsky's theorem, $\sqrt{k_n} M_n \to_P 0,$ and it follows that \begin{equation*} \left( 1+ S\right) \sqrt{k_n} a_n \to_P 0. \end{equation*} This completes the proof. \end{proof}
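\vspace{12pt} \noindent To illustrate Theorem \ref{thm::consistency} concretely, the following is a minimal simulation sketch; it is not part of the analysis above, and the function name \texttt{sep\_hill} as well as all parameter values are our own illustrative choices. We assume a multivariate $t_\nu$ sample, which is elliptical with extreme value index $\gamma = 1/\nu$, and compare the estimator $\frac{1}{k_n}\sum_{i=1}^{k_n}\log\left(E_{(i,n)}/E_{(k_n+1,n)}\right)$ computed with the true and with the estimated location and scatter. Since the estimator only uses ratios of Mahalanobis distances, replacing $\Sigma$ by a scalar multiple (such as the covariance matrix of the $t_\nu$ law) does not change its value.
\begin{verbatim}
import numpy as np

def sep_hill(X, mu, Sigma_inv, k):
    """Separating Hill estimator based on (estimated) Mahalanobis distances."""
    diffs = X - mu
    # squared Mahalanobis distances <x - mu | Sigma^{-1}(x - mu)>
    d2 = np.einsum('ij,jk,ik->i', diffs, Sigma_inv, diffs)
    r = np.sort(np.sqrt(d2))[::-1]           # descending order statistics
    return np.mean(np.log(r[:k] / r[k]))     # (1/k) sum log(R_(i,n)/R_(k+1,n))

# illustration: d-variate t_nu sample, extreme value index gamma = 1/nu
rng = np.random.default_rng(0)
d, nu, n, k = 3, 4.0, 20000, 200
mu = np.array([1.0, -2.0, 0.5])
A = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, 0.2], [0.0, 0.2, 0.5]])
Sigma = A @ A.T
Z = rng.standard_normal((n, d)) @ A.T
W = rng.chisquare(nu, size=n) / nu
X = mu + Z / np.sqrt(W)[:, None]             # elliptical t_nu sample

est_known = sep_hill(X, mu, np.linalg.inv(Sigma), k)
mu_hat, Sigma_hat = X.mean(axis=0), np.cov(X, rowvar=False)
est_estd = sep_hill(X, mu_hat, np.linalg.inv(Sigma_hat), k)
print(est_known, est_estd, 1.0 / nu)         # both close to gamma = 1/nu
\end{verbatim}
\noindent For large $n$ both numbers should be close to $\gamma = 1/\nu = 0.25$, in line with Theorem \ref{thm::consistency}. Note that the marginal fourth moments of $t_4$ are not finite, so the $\sqrt{n}$-consistency assumed in Theorem \ref{thm::limiting} would require a different scatter estimator than the sample covariance matrix.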
\section{Introduction} In the geometric theory of holomorphic functions on complex manifolds, the concept of invariant metrics plays an important role. Invariant metrics refer to Finsler or Hermitian metrics which are invariant with respect to biholomorphic mappings. Within the theory of several complex variables, the two most important examples of such objects are the Bergman and Kobayashi metrics. For a complex manifold and its invariant metric, the isometry group in general properly contains the group of holomorphic and anti-holomorphic diffeomorphisms. However, these two coincide under certain conditions, for example if one considers the Bergman metrics on $C^\infty$ strongly pseudoconvex domains in complex Euclidean space (\cite[Theorem 1.17]{Greene_Krantz_1982}). It is natural to ask under which circumstances every isometry between complex manifolds equipped with their invariant metrics is either holomorphic or anti-holomorphic. The goal of this paper is to investigate the holomorphicity of totally geodesic isometric embeddings in the case when the complex manifolds are bounded symmetric domains equipped with their Kobayashi metrics. There are a few works in this direction. Let $\Omega\subset \mathbb C^n$, $\Omega'\subset \mathbb C^m$ be bounded domains. Let $f\colon \Omega\rightarrow \Omega'$ be an isometric embedding with respect to the Kobayashi or Carath\'eodory metrics. By using Pinchuk's scaling method, Seshadri-Verma \cite{Seshadri_Verma_2006} proved that if $\Omega$ and $\Omega' $ are strongly pseudoconvex domains with $m=n$ and $f$ extends $C^1$-smoothly to the boundary, then $f$ must be biholomorphic or anti-biholomorphic. If $\Omega$ and $\Omega'$ are $C^3$-smooth strongly convex domains, then Gaussier-Seshadri \cite{Gaussier_Seshadri_2013} showed that any totally geodesic disc is either holomorphic or anti-holomorphic. Moreover, they proved that $f$ has the same property. More recently Antonakoudis \cite{Antonakoudis_2017} obtained the corresponding result when $\Omega$ and $\Omega'$ are complete disc-rigid domains, which include strongly convex bounded domains and Teichm\"uller spaces of finite dimension. One common ingredient in the proofs of these results is the existence of a unique complex geodesic containing two given points of $\Omega$, or tangent to a given vector on $\Omega$. For strongly convex domains, this is due to the work of Lempert \cite{Lempert_1981, Lempert_1982} and for Teichm\"uller spaces, it is a classical result (see \cite{Earle_Kra_Krushkal_1994}). For a bounded symmetric domain $\Omega$ with rank at least two, as Antonakoudis pointed out in \cite{Antonakoudis_2017}, there exists a totally geodesic disc $f\colon \Delta \rightarrow \Omega$ which is neither holomorphic nor anti-holomorphic. Such a disc is given by $$ f(z) = (z,\bar z)\in \Delta^2\subset \Omega, $$ where $\Delta^2\subset \Omega$ is a totally geodesic bidisc in $\Omega$ with respect to the Bergman metric. This misfortune leads us to examine how totally geodesic isometric embeddings into bounded symmetric domains behave. Motivated by the work of Tsai (\cite{Tsai_1993}) on the rigidity of proper holomorphic mappings between bounded symmetric domains, we obtain the following theorem. \begin{theorem}\label{main} Let $\Omega$ and $\Omega'$ be bounded symmetric domains and let $f\colon\Omega\to \Omega'$ be a $C^1$-smooth totally geodesic isometric embedding with respect to their Kobayashi metrics. Suppose that $\Omega$ is irreducible.
Suppose further that $${\rm rank }(\Omega)\geq {\rm rank }(\Omega').$$ Then $$ {\rm rank }(\Omega)={\rm rank }(\Omega')$$ and $f$ is either holomorphic or anti-holomorphic. \end{theorem} For reducible bounded symmetric domains, we have the following result: \begin{theorem}\label{Kobayashi isometry} Let $\Omega=\Omega_1\times\cdots\times\Omega_m$ be a bounded symmetric domain, where each $\Omega_i$ is irreducible. Then, up to permutation of irreducible factors, any $C^1$ Kobayashi isometry $F\colon\Omega\to \Omega$ is of the form $$F(z_1,\ldots,z_m)=(f_1(z_1),\ldots,f_m(z_m)), \quad z_i\in \Omega_i,$$ where each $f_i\colon\Omega_i\to \Omega_i$ is either biholomorphic or anti-biholomorphic. \end{theorem} Note that an analogue of Theorem \ref{Kobayashi isometry} for proper holomorphic maps was proved by Seo \cite{Seo_2018}. For a bounded symmetric domain $\Omega$, let $\widehat \Omega$ denote its compact dual. On $\widehat \Omega$, there is a family of minimal rational curves $C$, which are homological generators of $H_2(\widehat \Omega, \mathbb Z)$. The intersection of $C$ with $\Omega$ is called a {\it minimal disc}, and the tangent vectors of minimal discs are called {\it rank one vectors}. We will say that a holomorphic disc in a bounded symmetric domain is of rank one if it is tangential to rank one vectors at all points. The definition of the rank of a vector is given in Section 2 (Preliminaries). \begin{theorem}\label{main-2} Let $\Omega$ and $\Omega'$ be bounded symmetric domains and let $f\colon\Omega\to \Omega'$ be a $C^1$-smooth totally geodesic isometric embedding that extends continuously to the boundary with respect to their Kobayashi metrics. Suppose $\Omega$ is irreducible. Suppose further that \begin{equation}\label{vector-rank} {\rm rank }(\Omega)\geq {\rm rank}~f_*(v) \end{equation} for all $v\in T\Omega$. Then $f$ is either holomorphic or anti-holomorphic. \end{theorem} \begin{corollary} Any rank one totally geodesic Poincar\'{e} disc in a bounded symmetric domain which extends continuously to the boundary is a minimal disc. \end{corollary} We remark that there are many rank one holomorphic discs which are not minimal discs (see \cite{Griffiths_Harris_1979, Choe_Hong_2004}). Theorem \ref{main} and Theorem \ref{main-2} no longer hold if the rank conditions are dropped: for $p,q,p',q'\in \mathbb N$ with $p<p'$, $q<q'$, $p' \leq q'$, $p\leq q$, let $\Omega=\Omega^I_{p,q}$ and $\Omega' = \Omega^I_{p', q'}$ be irreducible bounded symmetric domains of type I: $$ \Omega^I_{p,q} := \left\{ Z \in M(p,q;\mathbb C) : I - \overline Z^t Z>0 \right\}. $$ Then for any anti-holomorphic map $\varphi\colon \Omega^I_{p,q} \to \Omega^I_{p'-p, q'-q} $, the map $$ Z\mapsto \left( \begin{array}{cc} Z &0\\ 0& \varphi(Z) \end{array}\right) \colon \Omega^I_{p,q}\to \Omega^I_{p', q'} $$ is a totally geodesic isometric embedding with respect to the Kobayashi metrics. \bigskip {\bf Acknowledgement} The first author was supported by the Institute for Basic Science (IBS-R032-D1-2021-a00). The second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1F1A1060175). \section{Preliminaries} \subsection{Bounded symmetric domains} In this section, we recall some facts about bounded symmetric domains. For more details, see \cite{Mok89, Mok_Tsai_1992, Mok_2016}.
A bounded domain $\Omega$ in the complex Euclidean space is called symmetric if for each $p\in \Omega$, there exists a holomorphic automorphism $I_p$ such that $I_p^2$ is the identity map of $\Omega$ which has $p$ as an isolated fixed point. Bounded symmetric domains are homogeneous complex manifolds and their Bergman metrics are K\"ahler-Einstein with negative holomorphic sectional curvatures. It is well known that all Hermitian symmetric spaces of non-compact type can be realized as convex bounded symmetric domains by the Harish-Chandra realizations. Moreover there exists a one to one correspondence between the set of Hermitian symmetric spaces of the non-compact type and the compact type. For a bounded symmetric domain $\Omega$, the corresponding Hermitian symmetric space of the compact type is called its compact dual. There exists a canonical embedding, which is called the Borel embedding, from $\Omega$ to its compact dual. In what follows $M(p,q; \mathbb C)$ denotes the set of $p\times q$ matrices with complex coefficients. The set of irreducible Hermitian symmetric spaces of non-compact type consists of four classical types and two exceptional types. We list the irreducible bounded symmetric domains which are the Harish-Chandra realizations of them as follows; $$ \Omega^I_{p,q} := \left\{ Z \in M(p,q;\mathbb C) : I - \overline Z^t Z>0 \right\}, \quad p,q \geq 1; $$ $$\Omega^{II}_n := \left\{ Z\in \Omega^I_{n,n} : Z^t = -Z \right\}, \quad n\geq 2; $$ $$ \Omega^{III}_n := \left\{ Z\in \Omega^I_{n,n} : Z^t = Z \right\}, \quad n\geq 1; $$ $$ \Omega^{IV}_n:= \left\{ (z_1, \ldots, z_n) \in \mathbb C^n : \| z\|^2 < 2, \| z\|^2 <1 + \bigg| \frac{1}{2} \sum_{j=1}^n z_j^2 \bigg|^2 \right\}, \quad n\geq 3; $$ $$ \Omega_{16}^V = \left\{z\in M_{1,2}^{\mathbb O_\mathbb C} : 1-(z|z) + (z^\#|z^\#) >0,\, 2-(z|z)>0 \right\}; $$ \begin{equation}\nonumber \begin{aligned} \Omega_{27}^{VI} &= \left\{ z\in H_3(\mathbb O_\mathbb C) : 1-(z|z) + (z^\#|z^\#) - |\det z|^2 >0,\right. \\ &\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad \left. 3-2(z|z) + (z^\#|z^\#) >0, 3-(z|z)>0 \right\}. \end{aligned} \end{equation} Here, $\mathbb O_\mathbb C$ is the complex 8-dimensional algebra of complex octonions. For the notation $M_{1,2}^{\mathbb O_{\mathbb C}}$, $H_3(\mathbb O_\mathbb C)$, $z^\#$ for the exceptional type domains, see \cite{Roos_2008}. \begin{theorem}[Polydisc Theorem]\label{polydisc theorem} Let $\Omega$ be a bounded symmetric domain and $b_\Omega$ be its Bergman metric. Let $X$ be the compact dual of $\Omega$ and $g_c$ be its K\"ahler-Einstein metric. There exists a totally geodesic complex submanifold $D$ of $(\Omega, b_\Omega)$ such that $(D, b_\Omega|_D)$ is holomorphically isometric to a Poincar\'e polydisc $(\Delta^r, \rho)$ and $$ \Omega = \bigcup_{\gamma\in K} \gamma D $$ where $K$ denotes an isotropy subgroup of $\text{Aut}(\Omega)$. Moreover, there exists a totally geodesic complex submanifold $S$ of $(X,g_c)$ containing $D$ as an open subset such that $(S, g_c|_S)$ is isometric to a polysphere $((\mathbb P^1)^r, \rho_c)$ equipped with a product Fubini-Study metric $\rho_c$. \end{theorem} The dimension of $D$ in Theorem \ref{polydisc theorem} is called the {\it rank} of $\Omega$. A projective line $C\cong \mathbb P^1$ in $X$ which is a homological generator of $H_2(X,\mathbb Z)$ is called a minimal rational curve of $X$ and it is totally geodesic on $(X, g_c)$. Via the Harish-Chandra and Borel embedding, we call the intersection of $C$ with $\Omega$ a {\it minimal disc}. 
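For the reader's convenience, we illustrate the Polydisc Theorem and the notion of minimal disc with a standard example in the type I case (this worked example is not needed in the sequel). For $\Omega^I_{p,q}$ with $p\leq q$, consider the set of matrices whose only nonzero entries are $z_1,\ldots,z_p$ in the positions $(1,1),\ldots,(p,p)$. For such a matrix $Z$ one has $$ \overline Z^t Z = \mathrm{diag}\left(|z_1|^2,\ldots,|z_p|^2,0,\ldots,0\right), $$ so the condition $I-\overline Z^t Z>0$ reduces to $|z_i|<1$ for all $i$. This set is therefore a totally geodesic polydisc $D\cong \Delta^p$ in $\Omega^I_{p,q}$, and ${\rm rank}(\Omega^I_{p,q})=p$. Setting all but one of the coordinates $z_i$ equal to $0$ yields a minimal disc through the origin.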
In particular, minimal discs $\Delta$ can be precisely expressed by $\{ (z,0,\ldots,0): |z|<1\}\subset \Delta^r \cong D$ where $D$ is the totally geodesic polydisc in Theorem \ref{polydisc theorem}, and $\{(z,0\ldots, 0)\in (\mathbb P^1)^r\cong S\}\cong \mathbb P^1\subset X$ is the minimal rational curve on $X$ such that $\mathbb P^1 \cap \Omega = \Delta$. To each totally geodesic polydisc $\Delta^k$ with $1\leq k\leq r$ in $\Omega$, there exists a bounded symmetric subdomain $(\Delta^k)^\perp$ of rank $r-k$ in $\Omega$ such that $\Delta^k\times (\Delta^k)^\perp$ can be embedded in $\Omega$ as a totally geodesic submanifold with respect to the Bergman metrics. For each $b\in (\partial \Delta)^k$, $\{b\}\times (\Delta^k)^\perp$ is a boundary component of $\Omega$, i.e. it is a maximal complex submanifold contained in $\partial \Omega$. Moreover for any boundary component $B$ of $\Omega$, there exists a totally geodesic polydisc $\Delta^k$ and a point $b\in (\partial\Delta)^k$ so that $B\cong \{b\}\times (\Delta^k)^\perp$. For each irreducible bounded symmetric domain $\Omega$, $(\Delta^k)^\perp$ is given by Table \ref{characteristic subdomains} below for each $k$ with $1\leq k\leq \text{rank}(\Omega)$ and the canonical embedding into $\Omega$. \begin{table}[ht]\caption{Characteristic subdomains } \label{characteristic subdomains} \begin{tabular}{c|c|c|c|c|c|c} $\Omega$& $\Omega_{p,q}^I (p\leq q)$& $\Omega^{II}_n$& $\Omega^{III}_n$& $\Omega^{IV}_n$ & $\Omega^V_{16}$&$\Omega_{27}^{VI}$ \\[4pt]\hline && &&&&\\[-8pt] $(\Delta^k)^\perp$ & $\Omega^I_{p-k, q-k}$ & $\Omega^{II}_{n-2k}$& $\Omega_{n-k}^{III}$ & $\Delta(k=1)$ & $\mathbb B^5(k=1)$ & $\Omega_8^{IV} (k=2)$, $\Delta(k=1)$ \end{tabular} \end{table} Here $\mathbb B^n:=\{z\in \mathbb C^n: |z|<1\}$ denotes the $n$-dimensional unit ball. Denote by $T\Omega$ the holomorphic tangent bundle of $\Omega$. For a point $x\in \Omega$, let $v\in T_x\Omega$ be a unit vector. If $v$ realizes the minimum of holomorphic sectional curvature of $b_\Omega$, then we call $v$ a rank $1$-vector (or {\it characteristic vector} in \cite{Mok89}). A vector $v\in T_x\Omega$ is a rank $1$-vector if and only if there exists a minimal disc $\Delta\subset \Omega$ such that $v\in T_x\Delta$. For any $v\in T_x\Omega$, there exists a unique totally geodesic polydisc $\Delta^k\subset \Omega$ with minimum dimension such that $v\in T_x\Delta^k$ and we will call $v$ a rank $k$-vector. Let $\mathcal C_x^k$ denote the set $$ \mathcal C_x^k := \{ [v] : v \in T_x\Omega\text{ is a rank } k\text{-vector at } x\} $$ in $\mathbb PT_x\Omega$. The $k$-th characteristic bundle over $\Omega$ is defined by $$ \mathcal C^k (\Omega) := \bigcup_{x\in \Omega} \mathcal C^k_x \subset \mathbb PT\Omega.$$ We remark that $\mathcal C^k_x$ is parallel with $\mathcal C_0^k$ for any $k$ and $x\in \Omega$ in Harish-Chandra coordinates (\cite{Mok_Tsai_1992}). We will compare two vectors $v\in T_p\Omega$ and $w\in T_q\Omega$ with different $p,q \in \Omega$ using parallel translation with respect to Harish-Chandra coordinates. For convenience, we will denote $\mathcal C^1_x$, $\mathcal C^1(\Omega)$ by $\mathcal C_x$, $\mathcal C(\Omega)$, respectively. \begin{definition} A smooth real submanifold $N\subset \Omega$ is called an {\it integral manifold} of $\mathcal {RC}^k(\Omega)$, if for any $v\in TN$, there exists $[w]\in \mathcal C^k(\Omega)$ with $w\in T\Omega$ such that $v=\text{Re}(w)$. 
We will say that a map $f\colon N\rightarrow \Omega$ is tangential to $\mathcal{RC}^k(\Omega)$ if $f(N)$ is an integral manifold of $\mathcal{RC}^k(\Omega)$. \end{definition} Let $v$ be a unit rank 1-vector at $x\in \Omega$ and write $R_v$ for the Hermitian bilinear form on $T_x\Omega$ defined by $R_v(\xi, \eta) := R(v, \bar v , \xi, \bar \eta)$ where $R$ denotes the curvature tensor of $b_\Omega$. Then the eigenspace decomposition of $T_x\Omega$ with respect to $R_v$ is given by $$ T_x \Omega = \mathbb Cv + \mathcal H_v + \mathcal N_v $$ corresponding to the eigenvalues $2$, $1$ and $0$, respectively. For $[v]\in \mathcal C_0$, let $\Delta\subset \Omega$ be a minimal disc so that $v\in T_0\Delta$. Then we have \begin{equation}\nonumber \mathcal N_v = T_0 \Delta^\perp. \end{equation} Identifying $T_{[v]} (\mathbb P T_0\Omega)$ with $T_0\Omega/\mathbb Cv$, we have $T_{[v]} \mathcal C_0 \cong (\mathbb C v + \mathcal H_v)/\mathbb C v$. Note that any vector $v\in T\Omega$ can be expressed as a linear combination of rank one vectors by the Polydisc Theorem. For a nonzero vector $v=\sum_j c_jv_j$ with rank one vectors $v_j$, define $$\mathcal N_{[v]}:=\bigcap_j\mathcal N_{[v_j]}.$$ Since only the directions of the rank one vectors $v_j$ matter, for a totally geodesic polydisc $\Delta^s\subset \Omega$, we denote $\mathcal N_{[v]}$ by $\mathcal N_{[T_x \Delta^s]}$ when $v$ is the vector realizing the maximal holomorphic sectional curvature of $\Delta^s$ with respect to the Bergman metric. For a totally geodesic polydisc $\Delta^k\subset \Omega$, we have $$T_p (\Delta^k)^\perp=\bigcap_{[v]\in \mathbb PT_p\Delta^k} \mathcal N_{[v]}.$$ We remark that $\mathcal N_{[v]}$ is parallel along a totally geodesic holomorphic disc $\Delta$ tangent to $v$. \begin{lemma}[Lemma 3.6 in \cite{Mok_Tsai_1992}]\label{2-ball} Let $u\in \mathcal H_v$ be a root vector of unit norm. Then either \begin{enumerate} \item $R_{u\bar u u\bar u}=R_{v\bar v v \bar v}$ and $(\mathbb C v +\mathbb C u)\cap \Omega\cong \mathbb B^2$ is totally geodesic in $(\Omega, b_\Omega)$, or \item $R_{u\bar u u\bar u}=\frac{1}{2}R_{v\bar v v \bar v}$ and there exists $w\in \mathcal N_v$ such that $(\mathbb C v + \mathbb C u + \mathbb C w)\cap \Omega \cong \Omega_3^{IV}$ is totally geodesic in $(\Omega, b_\Omega)$. \end{enumerate} \end{lemma} Let $\mu$, $\phi$ be the roots corresponding to $v$, $u$, respectively. If $R_{u\bar u u\bar u}=\frac{1}{2}R_{v\bar v v \bar v}$, then $\mu$, $\mu-\phi$, $\mu-2\phi$ are roots (see \cite[Lemma 3.6]{Mok_Tsai_1992}). \begin{lemma}\label{ball} \begin{enumerate} \item There are canonical totally geodesic isometric embeddings with respect to the Bergman and the Kobayashi metric $$ \nu\colon \Omega^{II}_n,\, \Omega^{III}_n\hookrightarrow \Omega^I_{n,n} \quad \text{ and }\quad \nu\colon \Omega^{IV}_{2k+1}\hookrightarrow \Omega^{IV}_{2k+2} $$ such that for any minimal disc $\Delta\subset \Omega$, $$ (\nu(\Delta) \times \nu(\Delta)^\perp)\cap \nu(\Omega) = \nu(\Delta\times\Delta^\perp) $$ for $\Omega= \Omega^{II}_n,\, \Omega^{III}_n$ or $\Omega^{IV}_{2k+1}$. \item For $\Omega = \Omega^I_{p,q}$, $\Omega^{IV}_{2k+2}$, $\Omega^V_{16}$ or $\Omega^{VI}_{27}$ and a characteristic vector $v\in T_x\Omega$, there exists a basis $\{e_1,\ldots, e_\ell\}$ of $\mathcal H_v$ such that $R(e_i,\bar e_i, e_i,\bar e_i)=2$ for any $i$. In particular $\Omega\cap \{ \mathbb C v + \mathbb C e_i\}\cong \mathbb B^2$.
\end{enumerate} \end{lemma} \begin{proof} {\bf(1):} Let $\nu$ be the trivial embedding for $ \nu\colon \Omega^{II}_n,\, \Omega^{III}_n\hookrightarrow \Omega^I_{n,n} $ and let $\nu(z_1,\ldots, z_{2k+1} ) = (z_1,\ldots, z_{2k+1}, 0)$ for $\nu\colon \Omega^{IV}_{2k+1}\hookrightarrow \Omega^{IV}_{2k+2}$. {\bf (2):} By Lemma \ref{2-ball} we only need to show that $R(\xi, \bar \xi, \xi,\bar\xi)=R(v, \bar v, v, \bar v)$ for any root vector $\xi\in \mathcal H_v$. By the curvature formula for root vectors $e_\mu$ and $ e_\varphi$, $$ R(e_\mu, \bar e_\mu, e_\varphi, \bar e_\varphi) = c([e_\mu, e_{-\mu}]; [ e_\varphi, e_{-\varphi}]) $$ where $(\cdot, \cdot)$ denotes $-B(\cdot, \bar \cdot)$ for the Killing form $B$ of $\mathfrak g^{\mathbb C}$ and a positive constant $c$. \medskip For $\Omega = \Omega^I_{p,q}$, the identity component of the automorphism group of $\Omega$ is $SU(p,q)$ and its Lie algebra is $\frak{su}(p,q)$. For a Cartan subalgebra $\frak h:= \text{diag}[a_{11}, \ldots, a_{p+q, p+q}]$, let $\{ \pm (\epsilon_i - \epsilon_j) : 1\leq i<j\leq p+q\}$ be the set of roots of $\frak{su}(p,q)\otimes{\mathbb C} = \frak{sl}(p+q, \mathbb C)$ where $\epsilon_j(h) = a_{jj}$ for $h=(a_{11},\ldots, a_{p+q, p+q})$. Let $a_{ij}$ denote the root vector of $\epsilon_i - \epsilon_j$. Then $a_{ij}$ with $1\leq i \leq p < j\leq p+q$ consist of a basis of the holomorphic tangent space of $\Omega$ at $0$. For a characteristic vector $v= a_{1 p+1}$, we have $\mathcal H_v= \text{span} \{a_{1j} : p+2\leq j\leq p+q\} \cup \{a_{i p+1} : 2\leq i\leq p\}$ and $\mathcal N_v = \text{span}\{ a_{ij} : 2\leq i\leq p, p+2\leq j\leq p+q\}$. Hence $R(a_{ij}, \bar a_{ij}, a_{ij}, \bar a_{ij})=R(v, \bar v, v, \bar v)$ for any $i,j$. \medskip For $\Omega = \Omega^{IV}_{2k+2}$, the identity component of the automorphism group of $\Omega$ is $G = SO(2k,2)$ and its Lie algebra is $\frak g = \frak{so}(2k,2)$. For a Cartan subalgebra $\frak h:= \text{diag}(a_{11}, \ldots,\\ a_{k+1, k+1}, -a_{11},\ldots, -a_{k+1, k+1})$, let $\{ \pm \epsilon_i \pm \epsilon_j : 1\leq i<j\leq k+1\}$ be the set of roots of $\frak g^{\mathbb C} = \frak{so}(2k+2, \mathbb C)$. Then the root vectors of $\{ \epsilon_i \pm \epsilon_{k+1} : 1\leq i \leq k \}$ consist of a basis of the holomorphic tangent space of $\Omega$ at $0$. For a characteristic vector $v$ which is a root vector of $\epsilon_k - \epsilon_{k+1}$, $\mathcal H_v$ is the span of root vectors corresponding to $\{\epsilon_i-\epsilon_{k+1} :1\leq i \leq k-1\} $ and $\mathcal N_v$ is the span of root vector of $\epsilon_k + \epsilon_{k+1}$. Hence $R(\xi, \bar \xi, \xi,\bar\xi)=R(v, \bar v, v, \bar v)$ for any root vector $\xi$ corresponding to $\epsilon_i-\epsilon_{k+1}$ with $1\leq i\leq k-1$. \medskip For $\Omega = \Omega^{VI}_{27}$, the identity component of the automorphism group of it is the exceptional simple Lie group $G = E_7$ and its Lie algebra is $\frak g = \frak e_7$. 
The noncompact positive roots of $\frak e_7$ is listed on \cite[page 150]{Drucker_1978}: \begin{equation}\nonumber \begin{array}{c} \lambda_t , \, \frac{1}{2}(\lambda_2 + \lambda_3) + 2\varepsilon \rho_s, \, \\ \frac{1}{2}(\lambda_1 + \lambda_3) + \varepsilon (-\rho_0 + \rho_1 + \rho_2 +\rho_3), \frac{1}{2}(\lambda_1 + \lambda_3) + \varepsilon (-\rho_0 + \rho_i - \rho_j -\rho_k), \,\\ \frac{1}{2}(\lambda_1 + \lambda_2) - \varepsilon (\rho_0 + \rho_1 + \rho_2 +\rho_3),\, \frac{1}{2}(\lambda_1 + \lambda_2) + \varepsilon (\rho_0 + \rho_i - \rho_j -\rho_k), \end{array} \end{equation} with $1\leq t\leq 3$, $\varepsilon = \pm 1$, $0\leq s\leq 3$ and $(i,j,k)\in \{ (1,2,3), ( 2,3,1), (3,1,2)\}$. For a characteristic vector $v$ which is a root vector of $\lambda_3$, $\mathcal H_v$ is the span of root vectors corresponding to {\small \begin{equation}\label{H7} \frac{1}{2}(\lambda_2 + \lambda_3) + 2\varepsilon \rho_s, \, \frac{1}{2}(\lambda_1 + \lambda_3) + \varepsilon (-\rho_0 + \rho_1 + \rho_2 +\rho_3), \, \frac{1}{2}(\lambda_1 + \lambda_3) + \varepsilon (-\rho_0 + \rho_i - \rho_j -\rho_k) \end{equation}} with $\varepsilon = \pm 1$, $0\leq s\leq 3$ and $(i,j,k)\in \{ (1,2,3), ( 2,3,1), (3,1,2)\}$. Then it is not difficult to check that $\lambda_3 - 2\phi$ is not a root (see \cite[152--154]{Drucker_1978}) for any $\phi$ which is one of \eqref{H7}. \medskip For $\Omega = \Omega^V_{16}$, $G = E_6$ which is an exceptional simple Lie group and its Lie algebra is $\frak g = \frak e_6$. There exists an injective Lie algebra homomorphism from $\frak e_6$ into $\frak e_7$ such that compact roots of $\frak e_7$, when restricted to the Cartan subalgebra of $\frak e_6$, are distinct roots of $\frak e_6$. The compact positive roots of $\frak e_7$ corresponding to noncompact positive roots of $\frak e_6$ are given by {\small \begin{equation}\nonumber \frac{1}{2}(\lambda_2-\lambda_3) + 2 \varepsilon \rho_s, \, \frac{1}{2}(\lambda_1-\lambda_3) + \varepsilon ( -\rho_1 + \rho_1 + \rho_2 + \rho_3),\, \frac{1}{2}(\lambda_1-\lambda_3) + \varepsilon ( -\rho_1 + \rho_i - \rho_j - \rho_k), \end{equation}} with $\varepsilon = \pm 1$, $0\leq s\leq 3$ and $(i,j,k)\in \{ (1,2,3), ( 2,3,1), (3,1,2)\}$. For a characteristic vector $v$ which is a root vector of $\frac{1}{2}(\lambda_2-\lambda_3) - 2\rho_0$, $\mathcal H_v$ is the span of root vectors corresponding to {\small \begin{equation}\label{H6} \frac{1}{2}(\lambda_2-\lambda_3) + 2 \varepsilon \rho_s, \, \frac{1}{2}(\lambda_1-\lambda_3) -\rho_1 + \rho_1 + \rho_2 + \rho_3,\, \frac{1}{2}(\lambda_1-\lambda_3) -\rho_1 + \rho_i - \rho_j - \rho_k, \end{equation}} with $1\leq s\leq 3$ and $(i,j,k)\in \{ (1,2,3), ( 2,3,1), (3,1,2)\}$. Hence $\frac{1}{2}(\lambda_2-\lambda_3) - 2\rho_0 - 2\phi$ is not a root (see \cite[152--154]{Drucker_1978}) for any $\phi$ which is one of \eqref{H6}. \end{proof} For each $x\in X$ define $$ \mathcal V_x := \bigcup \{\ell : \ell \text{ is a minimal rational curve on } X \text{ through } x \}, $$ and $V_x = \mathcal V_x \cap \Omega\subset \Omega$. Let $\delta\in H^2(X,\mathbb Z)\cong \mathbb Z$ be the positive generator of the second integral cohomology group of $X$. Write $c_1(X) = (p+2)\delta$. Let $q \in\partial\Omega$ which sits on the boundary of a minimal disc. Note that $V_q$ is the union of the minimal discs whose boundaries contain the point $q$. 
In \cite{Mok_2016}, it is proved that $(V_q, b_\Omega|_{V_q})$ is the image of a holomorphic isometric embedding $F\colon (\mathbb B^{p+1}, b_{\mathbb B^{p+1}}) \rightarrow (\Omega,b_\Omega)$, where $\mathbb B^{p+1}:= \{ z\in \mathbb C^{p+1}: |z|<1\}$ is the $p+1$ dimensional unit ball. \subsection{The Kobayashi metric on bounded symmetric domains} We recall a few basic facts concerning the Kobayashi pseudodistance and complex/real geodesics. For more details, see \cite{Royden_1971, Kobayashi}. Let $\Delta := \{ z\in \mathbb C : |z|<1\}$ denote the unit disc. Let $M$ be a complex manifold and $TM$ be its holomorphic tangent bundle. The infinitesimal Kobayashi-Royden pseudometric $k_M\colon TM\rightarrow \mathbb R$ on $M$ is a Finsler pseudometric defined by \begin{equation}\nonumber k_M(z;v) = \inf \left\{ |\zeta| : \exists f\in \text{Hol}(\Delta, M),\, f(0)=z, \, df(0) = v/{\zeta}\right\}, \end{equation} and for $x,\,y\in M$ the Kobayashi pseudodistance $d_M^K$ is defined by $$ d_M^K(x,y) = \inf \left\{ \int_0^1 k_M( \gamma(t), \gamma'(t) ) dt : \gamma(0) = x, \gamma(1) = y\right\}. $$ Remark that $k_M$ and $d_M^K$ enjoy the distance decreasing property with respect to holomorphic mappings, i.e. for complex manifolds $M,\, N$ and a holomorphic map $f\colon M\rightarrow N$, we have $$ k_N(f(z); df(v))\leq k_M(z;v),\quad d_N^K(f(x), f(y)) \leq d_M^K(x,y) $$ for any $(z;v)\in TM$ and $x,y\in M$. For a complex manifold $M$, $k_M$ is upper semicontinuous. If $M$ is a taut manifold, i.e. $\text{Hol}(\Delta, M)$ is a normal family, then $k_M$ is continuous on $TM$. Therefore $k_\Omega$ is continuous for any bounded symmetric domain $\Omega$. We say that $M$ is (Kobayashi) hyperbolic if and only if $k_M(z;v)>0$ whenever $v\neq 0$. It is known that any bounded domain in $\mathbb C^n$ is hyperbolic. \begin{example} \begin{enumerate} \item If $M$ is the unit ball $\mathbb B^n=\{ z\in \mathbb C^n : |z|<1\}$, then $k_{\mathbb B^n}$ coincides with the Bergman metric $b_{\mathbb B^n}$. \item If $M$ is the polydisc $\Delta^r$, then we have $$k_{\Delta^r}(p;v) = \max_{1\leq j\leq r}\{b_\Delta(p_j; v_j)\} \quad\text{ and } \quad d_{\Delta^r}^K(x,y) = \max_{1\leq j\leq r}\{d^K_\Delta(x_j, y_j)\} $$ where $p=(p_1, \ldots, p_r)$, $v=(v_1, \ldots, v_r)$, $x=(x_1,\ldots, x_r)$ and $y=(y_1,\ldots, y_r)$. \end{enumerate} \end{example} \begin{definition} Let $M,\,N$ be Kobayashi hyperbolic manifolds. \begin{enumerate} \item A map $f\colon M\rightarrow N$ is a {\it totally geodesic embedding} if $f$ is an isometry for the Kobayashi distance, i.e. $$ d^K_M(x,y) = d^K_N(f(x), f(y)) $$ for any two points $x,y\in M$. \item A {\it (real) geodesic} in $M$ is a $C^1$ locally regular curve $\gamma\colon I\rightarrow M$ such that \begin{equation}\nonumber \int_{t_1}^{t_2} k_M(\gamma(t); \gamma'(t)) dt = d^K_M ( \gamma(t_1),\gamma( t_2)) \end{equation} for all $t_1, t_2\in I$, where $I\subset \mathbb R$ is an interval. \item A holomorphic map $\varphi\colon\Delta\rightarrow M$ is a {\it complex geodesic} if $\varphi$ is an isometry for the Kobayashi distances on $\Delta$ and $M$. \end{enumerate} \end{definition} Suppose that $M$ is a convex domain in $\mathbb C^n$. Every complex geodesic $\varphi\colon \Delta\rightarrow M$ gives rise to a unit speed geodesic $\gamma\colon \mathbb R\rightarrow M$ by $\gamma(t) = \varphi(\tanh (t))$ for any $t\in \mathbb R$. Note that every complex geodesic is a proper injective map from $\Delta$ to $M$. 
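As a quick numerical sanity check of the product formula for the polydisc in the example above, the following small sketch (illustrative only; the helper names are ours and the computation is not part of the argument) verifies that the holomorphic disc $\zeta\mapsto(\zeta,\zeta/2)$ and the disc $\zeta\mapsto(\zeta,\bar\zeta)$ from the introduction are both Kobayashi isometries of $\Delta$ into $\Delta^2$.
\begin{verbatim}
import numpy as np

def d_poincare(z, w):
    """Poincare (= Kobayashi) distance on the unit disc."""
    return np.arctanh(abs((z - w) / (1 - np.conj(w) * z)))

def d_bidisc(p, q):
    """Kobayashi distance on the bidisc: max of the component distances."""
    return max(d_poincare(p[0], q[0]), d_poincare(p[1], q[1]))

rng = np.random.default_rng(1)
for _ in range(5):
    z, w = [complex(*rng.uniform(-0.7, 0.7, 2)) for _ in range(2)]
    # holomorphic complex geodesic zeta -> (zeta, zeta/2)
    iso1 = d_bidisc((z, z / 2), (w, w / 2))
    # the disc zeta -> (zeta, conj(zeta)): an isometric embedding which is
    # neither holomorphic nor anti-holomorphic as a map into the bidisc
    iso2 = d_bidisc((z, np.conj(z)), (w, np.conj(w)))
    print(np.isclose(iso1, d_poincare(z, w)), np.isclose(iso2, d_poincare(z, w)))
\end{verbatim}
\noindent The second disc exhibits exactly the phenomenon, recalled in the introduction, that motivates the rank assumptions in Theorem \ref{main}.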
If $M$ is a strongly convex domain in $\mathbb C^n$, then any pair of points of $M$ is contained in a unique complex geodesic, and this geodesic extends continuously to the boundary (\cite{Lempert_1981}). On the other hand, if $M$ is weakly convex, we know that for any two points in $M$ there exists a complex geodesic joining them but there could be many others. For example, if $M$ is a polydisc $\Delta^k$, then a holomorphic map $\varphi = (\varphi_1, \ldots, \varphi_k)\colon \Delta\rightarrow\Delta^k$ is a complex geodesic if and only if $\varphi_j$ is an automorphism of $\Delta$ for some $j$. In particular, a complex geodesic need not extend continuously to the boundary. \begin{lemma}\label{max} Let $\Omega$ be a bounded symmetric domain. For $v\in T_p \Omega$, let $\Delta^r$ be a totally geodesic Poincar\'{e} polydisc passing through $p$ such that $v\in T_p \Delta^r$ with respect to the Bergman metric. Then \begin{equation}\nonumber k_\Omega(p;v) = \max_{1\leq j\leq r} k_\Delta(p_j;v_j), \end{equation} where $p_j$ and $v_j$ are the $j$-th components of $p\in \Delta^r$ and $v\in T_p\Delta^r\cong \mathbb{ C}^r$, respectively. In particular, if $[v]\in \mathcal C_p$, then $$ k_\Omega(p;v)=b_\Omega(p;v)$$ and $\Delta^r$ is totally geodesic with respect to the Kobayashi metric. \end{lemma} \begin{proof} By the distance decreasing property of Kobayashi metrics with respect to holomorphic mappings, we have $$\max_{1\leq j\leq r} k_\Delta(p_j;v_j)=k_{\Delta^r}(p;v)\geq k_\Omega(p;v).$$ Since each $v_j$, $j=1,\ldots, r$ has rank 1, there exists a minimal disc passing through $p$ tangential to $v_j$, and hence a holomorphic projection of $\Omega$ onto this minimal disc. This implies that $k_\Omega(p;v)\geq k_\Delta(p_j;v_j)$ for each $j$. \end{proof} \section{Real and complex geodesics in bounded symmetric domains} \begin{lemma}\label{polyD} For a unit speed real geodesic $\gamma\colon I \rightarrow \Omega$, there exists a totally geodesic polydisc $\Delta^k$ of dimension $k$ such that $$ \gamma(I)\subset \Delta^k\times(\Delta^k)^\perp,$$ where the $\Delta^k$-component $ \gamma_1\colon I \rightarrow \Delta^k$ of $\gamma$ is a geodesic each of whose components is a unit speed geodesic in a disc, while the $(\Delta^k)^\perp$-component $\gamma_2\colon I\to (\Delta^k)^\perp $ is not a geodesic. Moreover, there exists a unique totally geodesic holomorphic disc $\Delta\subset \Delta^k$ that contains $\gamma_1(I)$. \end{lemma} \begin{proof} Assume $\gamma(0)=0.$ Choose the unique minimal totally geodesic polydisc $\Delta^m$ such that $\gamma'(0)\in T_{0}\Delta^m$. Let $$ \gamma'(0)=(v_1,\ldots,v_m)\in T_0 \Delta^m \text{ with } 1=|v_1|\geq \cdots\geq |v_m|>0.$$ First we will show that $$\gamma(I)\subset \Delta_0\times (\Delta_0)^\perp$$ for some minimal disc $\Delta_0\subset \Delta^m$ passing through $0$. For each $t\in I$, by the Polydisc Theorem (Theorem \ref{polydisc theorem}), we can choose the minimal totally geodesic polydisc $P_t$ such that $$0, \gamma(t)\in P_t.$$ Since $\gamma$ is $C^1$, we can choose a sufficiently small $t_0>0$ and a continuous family $\{\Delta_t,\,0<t<t_0\}$ of minimal discs passing through $0$ such that $$P_t\subset \Delta_t\times\Delta_t^\perp$$ and \begin{equation}\label{k-dist-2} t=d_\Omega^K(0, \gamma(t))=d_{\Delta}^K(0, \pi_t\circ \gamma(t)), \end{equation} where $\pi_t:\Omega\to \Delta_t$ is the projection to $\Delta_t$.
Since $\Delta_t$ is a minimal disc, we can choose a continuous family of complex geodesics $\Gamma_t:\Delta\to \Delta_t\subset \Omega$, $0<t< t_0$ such that \begin{equation*} \Gamma_t(0)=0,\quad \Gamma_t(\tanh(t))=\pi_t\circ \gamma(t). \end{equation*} Fix $t\in (0,\,t_0)$ and write $$ \Gamma_t'(0)=w_t\in T_0\Delta_t.$$ Since $\Gamma_t(\cdot)$ is a complex geodesic, we obtain $$k_\Omega(0, \Gamma_t'(0))=1,$$ i.e. $|w_t|=1.$ Choose a maximal rank one totally geodesic subdomain $L\subset\Omega$ through $0$ such that $w_t \in T_0L.$ Let $$ v_L:=(\pi_L)_*\gamma'(0)=(v_0, v_H)\in \mathbb Cw_t\oplus \mathcal H_{w_t}, $$ where $\pi_L$ is the projection to $L$. Since $ v_L$ is a rank one vector, we obtain $$ k_\Omega(0, v_L)=b_\Omega(0, v_L)=\sqrt{|v_0|^2+|v_H|^2}.$$ On the other hand by \eqref{k-dist-2}, we obtain $$ k_\Omega(0,\gamma'(0))=k_\Delta(0, (\pi_{t}\circ\gamma)'(0))=|v_0|. $$ By the distance decreasing property of Kobayashi metrics, we obtain $$ |v_0|=k_\Omega(0, \gamma'(0)) \geq k_L(0, (\pi_L)_*\gamma'(0))=\sqrt{|v_0|^2+|v_H|^2}.$$ Therefore we obtain $$v_H=0,\quad |v_0|= k_\Omega(0, v_0)=1.$$ Moreover, by Lemma \ref{ball}, without loss of generality we may choose a totally geodesic rank one subspace $L$, such that $\pi_L\circ \gamma$ is the unique geodesic in $L$ with $\pi_L\circ\gamma(0)=0$, $(\pi_L\circ \gamma)'(0)=v_0.$ Since $v_H=0$, this implies $$\pi_L\circ \gamma([0,\,t])\subset \Delta_{t}.$$ Since $L$ is arbitrary, by Lemma \ref{ball} we obtain $$ \gamma([0,\,t])\subset \Delta_{t}\times (\Delta_{t})^\perp,$$ implying that $$ \gamma([0,\,t_0])\subset \Delta_{t_0}\times (\Delta_{t_0})^\perp$$ and the $(\Delta_{t_0})^\perp$-component $\gamma_0^\perp$ of $\gamma$ is a real geodesic in $\Delta_{t_0}^\perp$ if and only if $$k_\Omega(0,(\gamma_0^\perp)'(s))=1$$ for any $s\in [0,\,t_0]$. Hence by an induction argument, we obtain $$\gamma([0,\,t_0])\subset \Delta_0^k\times (\Delta_0^k)^\perp$$ for some $\Delta_0^k\subset \Delta^m$. By iterating this process, we obtain $$\gamma(I)\subset \Delta_0^k\times (\Delta_0^k)^\perp$$ for a possibly smaller dimensional polydisc $\Delta^k_0$. The rest of the lemma is clear from the argument above. \end{proof} \begin{corollary}\label{unique minimal disc} Let $\Omega$ be a bounded symmetric domain and $\gamma\colon I\rightarrow \Omega$ be a real geodesic such that $\gamma(I)$ is an integral submanifold of $\mathcal{RC}(\Omega)$. Then $\gamma(I)$ is contained in a unique minimal disc. \end{corollary} \begin{proof} We may assume that $\gamma(0)=0\in \Omega$ and $|\gamma'(0)|=1$. Since $\gamma'(0)\in \mathcal {RC}_0$, there exists a minimal disc $\Delta_0$ such that $\gamma'(0)\in T_0\Delta_0$. Then by Lemma \ref{polyD}, we obtain \begin{equation}\nonumber \gamma(I)\subset \Delta_0\times\Delta_0^\perp \end{equation} and \begin{equation}\nonumber k_\Omega(\gamma(t), \gamma'(t))=k_\Delta(\gamma_0(t), \gamma_0'(t)) \end{equation} where we let $\gamma=(\gamma_0, \gamma_0^\perp)\in \Delta_0\times \Delta_0^\perp$. Since $\gamma'(t) \in \mathcal {RC}_{\gamma(t)}$ for any $t\in I$, we obtain $$(\gamma^\perp_0)'\equiv 0,$$ which completes the proof. \end{proof} \begin{proposition}\label{poly-K} Let $\Delta^m=\Delta_1\times\cdots\times\Delta_m$ be an $m$-dimensional polydisc and let $f:\Delta^m\to\Delta^m$ be a $C^1$ isometry with respect to the Kobayashi metric. Then up to automorphisms of $\Delta^m$, $f$ is of the form $$ f(\zeta_1,\ldots,\zeta_m)=(\Phi_1(\zeta_1),\ldots,\Phi_m(\zeta_m))$$ with each $\Phi_i$ being $\zeta_i\mapsto \zeta_i$ or $\zeta_i\mapsto \bar\zeta_i$.
\end{proposition} \begin{proof} We will use induction on $m$. If $m=1$, then it is already known. Now assume that the proposition holds for $m-1\geq 1$. Let $f=(f_1,\ldots, f_m)$ and $\gamma:\mathbb R\to \Delta_1$ be a unit speed complete geodesic. Then for any $y\in \Delta_1^\perp=\Delta_2\times\cdots\times\Delta_m$, the map $t\to (\gamma(t),y)$ is a complete geodesic in $\Delta_1\times\cdots\times\Delta_m$. Hence by Lemma~\ref{polyD}, there exists $j$ depending on $y$ such that $f_j(\gamma(\cdot), y):\mathbb R\to \Delta_j$ is a complete geodesic. Since $f$ is $C^1$, by choosing general $\gamma$ and $y$, we may assume that $f_j(\gamma(\cdot), y)$ is a complete geodesic for all $y$ in an open set $U\subset \Delta_1^\perp$. Moreover, after composing automorphisms of $\Delta^m$, we may assume that $\gamma(0)=0$, $0\in U$, $f_1(\gamma(t), 0)=\gamma(t)$ and $f_1(e^{i\theta}\gamma(\cdot), y)$ is a unit speed complete geodesic in $\Delta_1$ for all $(y, \theta)\in U\times I$ for some open interval $I\subset (-\pi,~\pi]$ containing $0$. Since $f_1(0,0)=0$ by assumption, if $\theta\in I$, then there exists $\eta$ such that $$f_1(e^{i\theta}\gamma(t),0)=e^{i\eta}\gamma(t)$$ for all $t$. Since $f_1(\gamma(t),0)=\gamma(t)$, we have \begin{equation} \begin{aligned} d_\Delta(\gamma(t), e^{i\theta} \gamma(t)) &= d_{\Delta^m} ((\gamma(t), 0), (e^{i\theta} \gamma(t), 0)) =d_{\Delta^m} (f(\gamma(t), 0), f(e^{i\theta} \gamma(t), 0))\\ &\geq d_{\Delta} (f_1(\gamma(t), 0), f_1(e^{i\theta} \gamma(t), 0)) = d_\Delta(\gamma(t), e^{i\eta}\gamma(t)) \end{aligned} \end{equation} and by a similar way $$d_\Delta(\gamma(t), e^{i\theta}\gamma(-t))\geq d_\Delta(\gamma(t), e^{i\eta}\gamma(-t)).$$ Therefore $\eta=\pm \theta$. Assume that $f_1(\cdot, 0)$ is orientation preserving near $0$. Then $\eta=\theta$ for all $\theta\in I$. Define a sector $$C:=\bigcup_{\theta\in I}e^{i\theta}\gamma(\mathbb R).$$ Then we obtain $$f_1(\zeta, 0)=\zeta$$ for all $\zeta\in C$. Similarly, we can show that for each $y\in U$, there exists an automorphism $\phi_y$ of $\Delta$ such that $$ f_1(\zeta, y)=\phi_y(\zeta) $$ for all $\zeta\in C$. On the other hand, since $f$ is totally geodesic, we obtain $$ d^K_{\Delta^{m}}(f(\zeta,x), f(\zeta,y)) =d^K_{\Delta^{m}}((\zeta,x),(\zeta, y)) =d^K_{\Delta^{m-1}}(x,y) <\infty$$ for any $x, y\in \Delta_1^\perp$. Therefore the limit set $\lim_{t\to \infty}f(e^{i\theta}\gamma(t), \Delta_1^\perp)$ is contained in the closure of a unique boundary component $C_\theta\subset\partial\Delta_1\times{\Delta_1^\perp}$ depending only on $\theta$. It implies that for all $y\in U$, \begin{equation}\nonumber \lim_{t\to \infty}\phi_y(e^{i\theta}\gamma(t))=\lim_{t\to\infty}f_1(e^{i\theta}\gamma(t), y)=\lim_{t\to\infty}f_1(e^{i\theta}\gamma(t), 0) \end{equation} for all $\theta\in I$ and therefore $$\phi_y(\zeta)=\zeta.$$ That is, on $C\times U$, $f_1(\zeta, y)$ is independent of $y$. Let $\widetilde f:=(f_2,\ldots,f_m):\Delta^m\to \Delta_2\times\cdots\times\Delta_m$. Since $$\partial_yf_1(\zeta, y)=0,\quad (\zeta, y)\in \overline{C\times U},$$ there exists an open set $W\subset \Delta_1$ containing $C$ such that for any complete geodesic $\widetilde \gamma$ in $\Delta_1^\perp$ passing through a point in $\overline U$, $\widetilde f(\zeta, \widetilde\gamma(\cdot))$, $\zeta\in W$ is a complete geodesic in $\Delta_1^\perp$ and the limit set $\lim_{t\to\infty}f(\Delta_1,\widetilde\gamma(t))$ is contained in the closure of a boundary component in $\Delta_1\times \partial\Delta_1^\perp$. 
Then by the same argument, we can choose an open cone $V\subset\Delta_1^\perp$ with vertex $0\in U$ such that on $V$, \begin{equation}\label{W} \widetilde f(\zeta, \cdot)= \widetilde f(0,\cdot) \end{equation} for all $\zeta\in W$. Hence if $y\in V\cap U$, then for any geodesic $\Gamma:\mathbb R\to W,$ $$\widetilde f(\Gamma(t),y)=\widetilde f(0, y).$$ Let $x\in \Delta_1^\perp$ and $y\in V\cap U$. Since for any geodesic $\Gamma:\mathbb R\to W,$ we have \begin{equation}\nonumber \begin{aligned} &d_{\Delta_1^\perp}(\widetilde f(\Gamma(t), x), \widetilde f(0,y))= d_{\Delta_1^\perp}(\widetilde f(\Gamma(t), x), \widetilde f(\Gamma(t),y))\\ &\quad\quad\quad\leq d_{\Delta^m}(f(\Gamma(t), x), f(\Gamma(t),y)) = d_{\Delta^m}((\Gamma(t), x), (\Gamma(t),y))<\infty, \end{aligned} \end{equation} for any $x\in \Delta_1^\perp$, $ \widetilde f(\Gamma(\cdot), x)$ is not a geodesic and therefore $f_1(\Gamma(\cdot), x)$ should be a complete geodesic in $\Delta_1$. Consider a family of complete geodesic $\Gamma_\theta:=e^{i\theta}\gamma:\mathbb R\to C$, $\theta\in I.$ Then by the same argument, we obtain that on $C\times\Delta_1^\perp$, $f_1(\zeta, x)$ is independent of $x$, i.e. $$f_1(\zeta, x)=f_1(\zeta, 0)=\zeta,\quad(\zeta,x)\in C\times\Delta_1^\perp$$ and $\widetilde f(\zeta,\cdot):\Delta_1^\perp\to\Delta_1^\perp$ is an isometry for all $\zeta\in C$. By induction argument, there exist automorphisms $\Phi_\zeta, \Psi_\zeta$ of $\Delta_1^\perp$ such that $\Psi_\zeta\circ\widetilde f(\zeta,\Phi_\zeta(\cdot))$ is a desired form. On the other hand by \eqref{W}, $\widetilde f(\zeta,\cdot)$ is equal to $\widetilde f(0,\cdot)$ on an open cone $V$. Therefore we obtain \begin{equation}\label{W2} \widetilde f(\zeta,x)=\widetilde f(0,x) \end{equation} for all $(\zeta,x)\in C\times \Delta_1^\perp$. Choose a complete geodesic $\Gamma:\mathbb R\to \Delta_1$ that passes a point in the interior of $C$. By \eqref{W2}, $\widetilde f(\Gamma(\cdot),x)$ is not a geodesic for all $x\in \Delta_1^\perp$. Therefore $f_1(\Gamma(\cdot), x)$ is a complete geodesic in $\Delta_1$ such that if $\Gamma(t)\in C$, then $$f_1(\Gamma(t),x) =f_1(\Gamma(t), 0)=\Gamma(t),$$ implying that $$f_1(\zeta,x)=\zeta$$ for all $(\zeta,x)\in \Gamma(\mathbb R)\times \Delta_1^\perp$. Since $\Gamma$ is arbitrary, we obtain $$f_1(\zeta,x)=\zeta$$ for all $(\zeta,x)\in \Delta_1\times \Delta_1^\perp$ and $\widetilde f(\zeta,\cdot):\Delta_1^\perp\to\Delta_1^\perp$ is an isometry for all $\zeta\in \Delta_1$. Let $\gamma$ be a complete geodesic in $\Delta^\perp_1$. Then $$f(\zeta, \gamma(t))=(f_1(\zeta,\gamma(t)), \widetilde f(\zeta, \gamma(t)))=(\zeta, \widetilde f(\zeta,\gamma(t))).$$ Therefore the limit set $\lim_{t\to\infty}f(\Delta_1, \gamma(t))$ should be contained in a boundary component in $\Delta_1\times\partial\Delta_1^\perp$ depending only on $\gamma$. Since $\widetilde f(\zeta,\cdot)$ is a desired form by induction, this implies that $\widetilde f$ is independent of $\zeta$, i.e. $$\widetilde f(\zeta,\cdot)=\widetilde f(0,\cdot),$$ which completes the proof. \end{proof} \begin{corollary}\label{poly-KK} Let $\Delta^m$ be an $m$-dimensional polydisc and let $f:\Delta^m\to\Delta^m$ be a $C^1$ isometry with respect to the Kobayashi metric. Then $f$ extends continuously to $\partial\Delta^m$ and if two $C^1$ isometry $f, g:\Delta^m\to \Delta^m$ coincide on an open set in the Shilov boundary of $\Delta^m$, then $f\equiv g$. 
\end{corollary} \section{Proof of theorems} \begin{lemma}\label{holo-disc} Let $\Omega$ be an irreducible bounded symmetric domain and let $F\colon\Omega\to\mathbb C^n$ be a $C^1$ map such that $F_*(v)\neq 0$ for all $v\neq 0.$ If $F$ is holomorphic or anti-holomorphic on every minimal disc, then $F$ is either holomorphic or anti-holomorphic. \end{lemma} \begin{proof} Let $F\colon\Omega\to \mathbb C^n$ be a $C^1$-map. Define $$U:=\{[v]\in \mathcal C(\Omega): \bar v F=0\}.$$ By assumption, we may assume that $U$ is a nonempty set. Then it is enough to show that $U=\mathcal C(\Omega)$. By continuity of $\bar \partial F$, $U$ is a closed set. Assume that $\partial U\neq\emptyset.$ Choose $[v]\in \partial U$. Then by continuity of $dF$, we obtain $$v F=\bar v F=0,$$ i.e. $$dF( \text{Re }v)=dF(\text{Im }v)=0.$$ Since $dF\neq 0$, this does not happen. Since $\Omega$ is irreducible, we obtain $U=\mathcal C(\Omega)$. \end{proof} \begin{lemma}\label{component_totally} Let $\Omega$ and $\Omega'$ be bounded symmetric domains and let $f\colon\Omega\to \Omega'$ be a $C^1$-smooth totally geodesic isometric embedding with respect to their Kobayashi metrics. Suppose $${\rm rank }(\Omega)\geq {\rm rank }(\Omega').$$ Then $$ {\rm rank }(\Omega)={\rm rank }(\Omega')$$ and for any totally geodesic polydisc $\Delta^r$ in $\Omega$, $f(\Delta^r)$ is contained in a maximal totally geodesic polydisc in $\Omega'$. \end{lemma} \begin{proof} Let $r$ be the rank of $\Omega$. We will use induction on $r$. If $r=1$, then it is trivial. Let $r>1$. It is enough to show it for a general maximal totally geodesic polydisc $\Delta^r\subset \Omega$ passing through $0$. Let $v\in T_0\Delta^r$ be a general vector realizing the maximal holomorphic sectional curvature. After rotation, we may assume \begin{equation}\label{v} v=(1,\ldots,1)\in T_0(\Delta_1\times \cdots\times \Delta_r). \end{equation} For each $i=1,\ldots,r$ and a point $(0,x)\in \{0\}\times \Delta_{i}^\perp\subset \Omega$, let $\gamma_i(\cdot,x)=\gamma_i^v(\cdot,x)\colon\mathbb R\to \Delta_i\times\Delta_i^\perp$ be a unit speed complete real geodesic with respect to the Bergman metric such that $$\gamma_i(0,x)=(0,x),\quad \gamma_i'(0, x)=(1,0,\ldots, 0)\in T_{(0,x)}(\Delta_i\times x).$$ Since $f$ is totally geodesic, by Lemma \ref{polyD}, there exists a unique totally geodesic polydisc $Q_i(x)=Q_i^v(x)\subset \Omega'$ passing through $f(0, x)$ such that \begin{equation}\label{del-x} f\circ \gamma_i(t, x)=(\Phi_i(t, x),\Psi_i(t, x))\in Q_i(x)\times Q_i(x)^\perp,\quad t\in \mathbb R, \end{equation} where each component of $\Phi_i(\cdot, x)$ is a geodesic of the disc and $\Psi_i(\cdot,x)$ is not a geodesic. Since $f$ is $C^1$, by continuity of the derivatives of $f$, we may assume that the dimension of $Q_i(x)=Q_i^v(x)$ is constant on some open neighborhood of $v\in T_0\Omega$. Now assume $i=1$. Since $f$ is an isometry, for each $x\in \Delta_1^\perp$, $$ d_{\Omega'}^K(f\circ \gamma_1(t, 0), f\circ\gamma_1(t,x))=d_\Omega^K(\gamma_1(t, 0),\gamma_1( t, x))=d_{\Delta_1^\perp}^K(0, x)<\infty.$$ Hence the limit set $\lim_{t\to \infty}f\circ \gamma_1(t, \Delta_1^\perp) $ should be contained in a unique boundary component $\Sigma$ of $\Omega'$. Choose a maximal totally geodesic polydisc $R(x)\subset \Omega'$ passing through $f(0, x)$ such that $R(x)\times \Sigma\subset \Omega'$ is totally geodesic. Since $\Sigma$ is independent of $x$, $R(x)$ is parallel with $R(0)$. Moreover, since $$T\Sigma\subset \mathcal N_{[\partial_t\Phi_1(t, x)]},$$ $Q_1(x)$ is contained in $R(x)$. 
Since $R(x)$ is parallel with $R(0)$, by continuity of the derivatives of $f$, we may assume that $Q_1(x)$ is parallel with $Q_1(0)$ for all $x\in \Delta_1^\perp$ sufficiently close to $0$. In particular, \begin{equation}\label{del'} f\circ \gamma_1(\mathbb R, U_1)\subset Q_1\times Q_1^\perp \end{equation} on an open set $U_1\subset \Delta_1^\perp$ containing $0$, where $Q_1=Q_1(0).$ Since $\Sigma$ and $R(0)$ are independent of $x$, there exists a possibly smaller-dimensional polydisc $Q_1\subset R(0)$ such that $$ f\circ \gamma_1(\mathbb R, \Delta_1^\perp)\subset Q_1\times Q_1^\perp.$$ We may assume that $Q_1$ is the maximal polydisc such that each component of $\Phi_1(\cdot,x)\colon\mathbb R\to Q_1$ is a unit speed complete geodesic in a disc for all $x\in \Delta_1^\perp$. Note that since $$f\circ \gamma_1=(\Phi_1,\Psi_1):\mathbb R\times\Delta_1^\perp\to Q_1\times Q_1^\perp$$ is a totally geodesic isometric embedding, $\Phi_1$ and $\Psi_1$ are distance decreasing maps. Therefore for any $x,y\in \Delta_1^\perp$, \begin{equation}\label{inequal} \begin{aligned} d^K_{\Delta_1^\perp}(x,y) &= d_\Omega^K(\gamma_1(t,x), \gamma_1(t,y)) =d^K_{\Omega'}( f\circ\gamma_1(t,x), f\circ\gamma_1(t,y))\\ &= d^K_{Q_1\times Q_1^\perp} ((\Phi_1(t,x), \Psi_1(t,x)), (\Phi_1(t,y), \Psi_1(t,y))) \\ &\geq d^K_{Q_1}(\Phi_1(t, x), \Phi_1(t,y)). \end{aligned} \end{equation} Since each component of $\Phi_1(\cdot, x)\colon \mathbb R\to Q_1$ is a unit speed complete geodesic in a disc for all $x\in \Delta_1^\perp$, \eqref{inequal} implies $$ \lim_{t\to\infty} \Phi_1(t,x) = \lim_{t\to\infty} \Phi_1(t,y) \quad \text{ and } \lim_{t\to -\infty} \Phi_1(t,x) = \lim_{t\to -\infty} \Phi_1(t,y)$$ and hence \begin{equation} \Phi_1(\mathbb R,x) = \Phi_1(\mathbb R,y), \end{equation} i.e. \begin{equation}\label{open} f\circ\gamma_1(\mathbb R, \Delta^\perp_1)\subset \Phi_1(\mathbb R,0)\times Q_1^\perp. \end{equation} We will show that $$\Psi_1(t,\cdot):\Delta_1^\perp\to Q_1^\perp$$ is a totally geodesic isometric embedding for all $t\in \mathbb R$. Assume otherwise. Then by continuity of the derivatives of $f$, there exists an open interval $I\subset \mathbb R$ such that $\Psi_1(t, \cdot)$ is not a totally geodesic map for all $t\in I\subset \mathbb R$. After shrinking $I$ if necessary, we may assume that there exist a $C^1$ complete geodesic $\gamma_0:\mathbb R\to \Delta_1^\perp$, an open set $U\subset \mathbb R$ and small $\epsilon,\eta>0$ such that $$d^K_{Q_1^\perp}(\Psi_1(t,\widetilde\gamma(s_1)), \Psi_1(t, \widetilde\gamma(s_2)))\leq (1-\eta)|s_1-s_2|, \quad t\in I,~s_1, s_2\in U$$ for all geodesics $\widetilde \gamma$ such that $\|\gamma_0-\widetilde\gamma\|_{C^1(U)}<\epsilon.$ Therefore $\Phi_1(t, \widetilde\gamma)$ should be a complete geodesic for all such geodesics $\widetilde \gamma.$ This implies that there exists an open set $\widetilde U\subset \Delta_1^\perp$ such that $\Phi_1(t,\cdot)$ is locally one to one on $\widetilde U$. On the other hand, by \eqref{open}, we obtain $$\Phi_1(\mathbb R\times \Delta_1^\perp)=\Phi_1(\mathbb R,0),$$ which is a contradiction. We have seen that $\Psi_1(t, \cdot)\colon\Delta_1^\perp\to Q_1^\perp$ is a totally geodesic isometric embedding for all $t\in \mathbb R$. Hence by the induction hypothesis, $Q_1^\perp$ is of rank $(r-1)$. Since $${\rm rank }(\Omega')=\dim Q_1+{\rm rank }(Q_1^\perp)\leq {\rm rank }(\Omega)=r,$$ $Q_1$ is a minimal disc and $${\rm rank }(\Omega')={\rm rank }(\Omega).$$ Let $P_1:=\Delta_2\times\cdots\times\Delta_r$.
Since $\Psi_1(t, \cdot):\Delta_1^\perp\to Q_1^\perp$ is a totally geodesic isometric embedding, by induction argument, there exists a $(r-1)$-dimensional totally geodesic polydisc $\widetilde P_1(t)\subset Q_1^\perp$ such that $\Psi_1(t, P_1)\subset \widetilde P_1(t)$. On the other hand, since $f$ is an isometry, for any unit speed complete geodesic $\gamma$ in $P_1$, the limit set $\lim_{t\to\infty}f\circ\gamma_1(\mathbb R, \gamma(s))$ should be contained in a unique boundary component, which implies that $$\lim_{s\to\infty}\Psi_1(a, \gamma(s))=\lim_{s\to\infty}\Psi_1(b, \gamma(s)) ,\quad a, b\in \mathbb R.$$ Since $\gamma$ is arbitrary, on $P_1$, we obtain \begin{equation}\label{Psi-id} \Psi_1(a, \cdot)=\Psi_1(b, \cdot) \end{equation} by Corollary \ref{poly-KK} with induction argument and $$f\circ \gamma_1(\mathbb R,P_1)\subset Q_1\times\widetilde P_1,$$ where $\widetilde P_1:=\widetilde P_1(0).$ Choose another vector $$v_\theta=(e^{i\theta},1,\ldots,1)\in T_0 (\Delta_1\times\cdots\times\Delta_r).$$ Since $$\gamma_i^{v_\theta}=\gamma_i^v,\quad i=2,\ldots,r,$$ by the same argument as above, we obtain $$ f\circ\gamma_1^{v_\theta}(\mathbb R\times P_1)\subset Q_1(\theta)\times \widetilde P_1(\theta)$$ for some minimal disc $Q_1(\theta)$ and $(r-1)$-dimensional totally geodesic polydisc $\widetilde P_1(\theta)$. Since $$\{0\}\times\widetilde P_1=f\circ \gamma_1(0,P_1)=f\circ\gamma_1^{v_\theta}(0,P_1)=\{0\}\times\widetilde P_1(\theta),$$ we obtain $$\widetilde P_1=\widetilde P_1(\theta)$$ for any $\theta$. Let $B_1$ be the maximal totally geodesic subdomain such that $$T_0B_1=\mathcal N_{[T_0\widetilde P_1]}.$$ Then $$T_0Q_1(\theta)\subset T_0B_1.$$ Since $\theta$ is arbitrary, we obtain $$ f(\Delta_1\times P_1)\subset B_1\times \widetilde P_1.$$ Now consider $f\circ \gamma_2$. Then by the same argument, we obtain $$f(\Delta_2\times P_2)\subset B_2\times \widetilde P_2,$$ where $P_2=\Delta_1\times\Delta_3\times\cdots\times\Delta_r$ and $\widetilde P_2$ component of $f$ is an isometry on $\{\zeta_2\}\times P_2$ and independent of $\zeta_2\in \Delta_2$. Since $$\{0\}\times \widetilde P_1\subset B_2\times \widetilde P_2,$$ there exists a minimal disc $\widetilde \Delta\subset B_2$ such that $$\{0\}\times \widetilde P_1\subset \widetilde \Delta\times \widetilde P_2.$$ Suppose $$T_0\widetilde \Delta\cap T_0\widetilde P_1= \{0\}.$$ Then $\widetilde P_1=\widetilde P_2$. On the other hand, by \eqref{Psi-id}, $\Psi_1$ is independent of $\zeta_1\in \Delta_1$, contradicting the assumption that $\widetilde P_2$ component of $f$ is an isometry on $\{\zeta_2\}\times P_2.$ Therefore we obtain $$ T_0\widetilde P_1\cap T_0 B_2\neq \{0\}.$$ Since $\Delta_1\times P_1=\Delta_2\times P_2=\Delta_1\times\cdots\times\Delta_r$, we obtain $$ f(\Delta_1\times\cdots\times\Delta_r)\subset \left(B_1\times \widetilde P_1\right)\cap \left(B_2\times \widetilde P_2\right).$$ Therefore one has $$ f(\Delta_1\times\cdots\times\Delta_r)\subset \left(B_2\cap \widetilde P_1\right)\times \widetilde P_2,$$ which completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] Let $r$ be the rank of $\Omega$. We will use induction on $r$. If $r=1$, it is well-known (cf.\cite{Antonakoudis_2017}). Assume that $r>1$. By Proposition~\ref{poly-K}, Lemma~\ref{holo-disc} and Polydisc Theorem (Theorem \ref{polydisc theorem}), it is enough to show that for a maximal totally geodesic polydisc $\Delta^r$ in $\Omega$ containing $0$, $f(\Delta^r)$ is contained in a maximal totally geodesic polydisc in $\Omega'$. 
Hence by Lemma~\ref{component_totally}, we can complete the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{Kobayashi isometry}] We will use induction on $m$. If $m=1$, then it is true by Theorem \ref{main}. Suppose $m>1$. Assume that $\Omega_1$ is an irreducible factor whose rank is maximum among $\Omega_i$'s. We may further assume that the dimension of $\Omega_1$ is maximum among all irreducible factors with the maximum rank. Since $F$ is an isometry, by Proposition~\ref{poly-K}, Lemma~\ref{holo-disc} and Lemma~\ref{component_totally}, for each $x\in \widetilde\Omega:=\Omega_2\times\cdots\times\Omega_m$, $F(\cdot, x):\Omega_1\to \Omega_1\times\cdots\times\Omega_m$ is either holomorphic or anti-holomorphic and preserves rank one vectors. Let $F=(f_1,\ldots,f_m)$. Since $\mathcal C_x(\Omega)$ is a disjoint union of $\mathcal C_{x}(\Omega_i)$ where we regard $\Omega_i$ as a canonically embedded submanifold in $\Omega$, by continuity of the derivatives of $F$, there exists unique $i$ such that if $j\neq i$, then $$v f_j(\cdot, x)= 0,\quad [v]\in \mathcal C(\Omega_1).$$ Therefore $f_i(\cdot, x)\colon\Omega_1\to \Omega_i$ is a totally geodesic isometric embedding. We may assume that $i$ is independent of $x$. Then by Theorem~\ref{main}, the rank of $\Omega_i$ is equal to the rank of $\Omega_1$ and $f_i(\cdot, x)\colon\Omega_1\to \Omega_i$ is either holomorphic or anti-holomorphic proper embedding. Since the dimension of $\Omega_1$ is maximum among all irreducible factors with maximum rank, $\Omega_i$ is biholomorphic to $\Omega_1$. Hence up to permutation, we may assume that $i=1$ and $f_1(\cdot, x)\in {\rm Aut}(\Omega_1)$ for all $x\in \widetilde\Omega$. Let $x\in \widetilde\Omega$ be fixed. Since $F$ is isometric, we obtain $$ d^K_{\Omega}(F(\zeta, x), F(\zeta, 0)) =d^K_{\Omega}((\zeta, x), (\zeta,0)) \leq d^K_{\widetilde \Omega}(x,0) < \infty $$ for all $\zeta\in \Omega_1$. Since $F$ is $C^1$, by continuity of the derivatives of $F$, we may assume that $F(\Omega_1\times\{x\})$ is parallel with $\Omega_1\times \{0\}$ for some open neighborhood of $0$ and hence we have $$\lim_{\zeta\to p}f_1(\zeta, x)=\lim_{\zeta\to p}f_1(\zeta,0)$$ for all $p\in \partial\Omega_1$. Since $f_1(\cdot,x)\in {\rm Aut}(\Omega_1)$, this implies that $$f_1(\cdot,x)=f_1(\cdot, 0)$$ for all $x\in \widetilde\Omega$, i.e. $f_1$ is independent of $x\in \widetilde\Omega$. Therefore $\widetilde f(\zeta,\cdot):=(f_2(\zeta,\cdot),\ldots,f_m(\zeta,\cdot)):\widetilde\Omega\to\widetilde\Omega$ is a totally geodesic isometry for all $\zeta\in \Omega_1$. Hence by induction argument and the argument similar to the above, we obtain $$\widetilde f(\zeta, \cdot)=\widetilde f(0,\cdot)$$ for any $\zeta\in \Omega_1$, which completes the proof. \end{proof} \begin{proposition} \label{1-disc} Let $\Omega$ be a bounded symmetric domain and let $f:\Delta\to \Omega$ be a $C^1$-smooth totally geodesic isometric embedding that extends continuously to the boundary. If $f$ is tangential to $\mathcal {RC}(\Omega)$, then $f$ is either holomorphic or anti-holomorphic. \end{proposition} \begin{proof} Choose $z\in \partial \Delta$ and consider the geodesics, say $\gamma$, whose one of end points are $z$. Note that $\Delta$ is foliated by the images of such geodesics. By Corollary \ref{unique minimal disc}, $f\circ\gamma(I)$ is contained in a unique minimal disc of $\Omega$ and hence $f(\Delta)$ is contained in $V_{f(z)}$ which is the image of a holomorphic isometric embedding $F\colon \mathbb B^{p+1}\rightarrow \Omega$ (see \cite{Mok_2016}). 
For $(x,v)\in T_x\Delta$, we have \begin{equation}\nonumber \begin{aligned} k_\Delta(x;v) = k_\Omega(f(x); df_x(v)) &\leq b_\Omega(f(x); df_x(v)) \\ &= b_{\mathbb B^{p+1}}(F^{-1}\circ f(x), d(F^{-1}\circ f)_x(v))\\ &= k_{\mathbb B^{p+1}} (F^{-1}\circ f(x), d(F^{-1}\circ f)_x(v)). \end{aligned} \end{equation} Let $\Delta_x\subset \Omega$ be the minimal disc passing through $f(x)$ tangential to $df_x(v)$. By the distance decreasing property, \begin{equation}\nonumber \begin{aligned} k_\Delta(x;v) &= k_\Omega(f(x); df_x(v)) \geq d_{\Delta_x}(f(x);df_x(v))\\ &\geq k_{F(\mathbb B^{p+1})}(f(x); df_x(v))\geq k_{\mathbb B^{p+1}} (F^{-1}\circ f(x), d(F^{-1}\circ f)_x(v)), \end{aligned} \end{equation} and as a result $$ k_\Delta(x;v) =k_{\mathbb B^{p+1}} (F^{-1}\circ f(x), d(F^{-1}\circ f)_x(v)). $$ By \cite{Antonakoudis_2017, Gaussier_Seshadri_2013}, $F^{-1}\circ f$ is either holomorphic or anti-holomorphic and hence $f$ has the same property as well. \end{proof} \begin{corollary}\label{ball to BSD} Let $f:\mathbb B^n\to \Omega$ be a $C^1$ totally geodesic isometric embedding extending continuously to the boundary such that $$[f_*(v)]\in \mathcal{RC}(\Omega),\quad $$ for any $v\neq 0\in T\mathbb B^n$. Then $f$ is either holomorphic or anti-holomorphic. Moreover, $f$ is a standard embedding. \end{corollary} \begin{proof}[Proof of Theorem \ref{main-2}] Let $r$ be the rank of $\Omega$. We will use induction on $r$. For $r=1$, it is proved in Corollary \ref{ball to BSD}. Assume that $r>1$. Let $p\in \Omega$ be a general point. We may assume $p=0$ and $f(0)=0$. We will show that $f$ maps $r$-dimensional totally geodesic polydisc to $r$-dimensional totally geodesic polydisc and therefore $$ [f_*(v)]\in \mathcal {C}_0(\Omega')$$ for all $[v]\in \mathcal {C}_0(\Omega)$. For a given rank $(r-k)$, $k<r$ boundary component $C$ of $\Omega$, choose a totally geodesic holomorphic disc $\Delta_C\subset \Omega$ passing through $0$ such that $$T C=\mathcal N_{[T_0\Delta_C]}.$$ Let $\gamma:\mathbb R\to \Delta_C$ be a complete geodesic passing through $0\in \Delta_C$ such that $$p_\gamma:=\lim_{t\to \infty}\gamma(t)\in C.$$ Since $f\circ \gamma$ is a complete geodesic, there exists a unique boundary component $D$ of $\Omega'$ such that $f(p_\gamma)$ is contained in $D$. Moreover, since $f$ is an isometry, $f(C)$ should be contained in the same boundary component $C$. Hence $D$ depends only on $C$. Let $C_1$ be another boundary component of $\Omega$ of rank $(r-k)$ such that $\overline{C}\cap\overline{C_1}$ is a rank $(r-k-1)$ boundary component. By continuity of $f$, we obtain $f(\overline{C}\cap \overline{C_1})\subset \overline{D}\cap\overline{D_1}, $ where $D_1$ is the boundary component of $\Omega'$ that contains $f(C_1)$. Furthermore, since $C\neq C_1$, the distance between any two points $p\in C, q\in C_1$ and hence the distance of any $f(p)$, $p\in C$ and $f(q)$, $q\in C_1$ is infinite. Therefore $D\neq D_1$ unless $D$ and $D_1$ are points in the Shilov boundary of $\Omega'$. In the latter case, $f$ is constant on $C$ and $C_1$. Since $\overline D\cap\overline{D_1}\neq \emptyset$, we obtain $D=D_1$, i.e. $f(C)=f(C_1).$ Suppose that there exists an open set $U$ in the rank $(r-k)$ boundary orbit of $\Omega$ such that if a rank $(r-k)$ boundary component $C$ of $\Omega$ satisfies $C\cap U\neq\emptyset$, then $f$ is constant on $C$. 
Choose two rank $(r-k)$ boundary components $C_1, C_2$ such that $C_1\cap U$ and $C_2\cap U$ are not empty and $\overline C_1\cap \overline C_2\neq\emptyset.$ Then by continuity of $f$, we obtain $f(C_1)=f(C_2)$. Since such $C_1$ and $C_2$ are general, there exists an open set $V$ in the Shilov boundary of $\Omega$ such that $f$ is constant on $V$. Let $x\in V$. Choose a totally geodesic polydisc $\Delta^{r}\subset \Omega$ such that $x\in (\partial\Delta)^{r}$. Then $V\cap (\partial\Delta)^{r}$ is open in $(\partial\Delta)^{r}$. We may assume $x=(1,\ldots,1)\in (\partial\Delta)^r$. Choose an $\epsilon_0>0$ such that $y=(e^{i\theta_1},\ldots, e^{i\theta_{r}})\in V\cap(\partial\Delta)^r$ for all $\theta_i\in (-\epsilon_0,\,\epsilon_0)$. Choose geodesics $\gamma_i$ in $\Delta$ such that $1=\gamma_i(\infty)$ and $e^{i\theta_i}=\gamma_i(-\infty)$. Then $\gamma:=(\gamma_1,\ldots,\gamma_{r})$ is a geodesic in $\Delta^{r}$ such that $x=\gamma(\infty)$ and $y=\gamma(-\infty)$ and therefore $f\circ\gamma$ is a geodesic in $\Omega'$ joining $f(x)=f\circ \gamma(\infty)$ and $f(y)=f\circ\gamma(-\infty)$. But Lemma \ref{polyD} shows that $f\circ\gamma(\infty)$ and $f\circ\gamma(-\infty)$ should be contained in different boundary components, contradicting the assumption. Therefore for general boundary component $C$ with rank$(C)>0$, $f(C)$ is not a point. Moreover, since $f$ preserves the boundary stratification, we obtain that for any general boundary component $C$, $f(C)$ is contained in a boundary component with rank greater or equal to the rank of $C$. Let $C$ be a rank $(r-1)$ characteristic subdomain of $\Omega$ and let $\Delta_C$ be a minimal disc such that $\Delta_C\times C$ is totally geodesic in $\Omega$. Assume that $(0,0)\in \Delta_C\times C.$ Choose a unit speed complete geodesic $\gamma$ in $\Delta_C\times \{0\}$ connecting $(0,0)$ and a point $(p,0)\in \partial\Delta_C\times C$. Then there exists a polydisc $Q_p$ passing through $0$ that satisfies the condition in Lemma~\ref{polyD} with respect to $f(\gamma(\cdot), 0)$. By the condition \eqref{vector-rank}, we obtain $\dim Q_p\leq r$. Moreover, similar to the proof of Lemma~\ref{component_totally}, we obtain $$f(\gamma(t), x)\subset Q_p\times Q_p^\perp,\quad x\in C.$$ Define $F=(\Phi, \Psi)\colon\mathbb R\times C\to Q_p\times Q_p^\perp$ by $$F(t, x)=f(\gamma(t), x).$$ We may assume that $Q_p$ is the maximal polydisc such that each component of $\Phi$ is a unit speed geodesic in a disc. Since $F$ is an isometry, as in the proof of Lemma~\ref{component_totally}, we obtain $$ \lim_{t\to\infty} \Phi(t,x) = \lim_{t\to\infty} \Phi(t,y) \quad \text{ and } \lim_{t\to -\infty} \Phi(t,x) = \lim_{t\to -\infty} \Phi(t,y) $$ and hence $$\Phi(\mathbb R,x) = \Phi(\mathbb R,y)\quad x, y\in C,$$ i.e. the image $\Phi(\mathbb R\times C)$ is real one-dimensional. Since $F$ is a totally geodesic isometry, this implies that $\Psi(t,\cdot):C\to Q_p^\perp$ is a totally geodesic isometry for all $t\in \mathbb R.$ Furthermore, since $\partial_t \Phi\neq 0$, by condition \eqref{vector-rank}, we obtain that for any $s>0$, $${\rm rank}~\Psi_*(s\partial/\partial t+v)\leq r-\dim Q_p\leq r-1$$ for any $v\in T_x C$. Therefore by continuity of the derivatives of $f$, we obtain $${\rm rank}~\Psi_*(v)\leq r-1$$ for any $v\in T_x C$. By induction argument, we may assume that $\Psi(t,\cdot)$ is a standard holomorphic embedding on $ C$ for all $t$ and $\Psi(t, \cdot)$ maps rank one vector to rank one vector. 
Choose another rank one complete geodesic $\widetilde \gamma\subset \Delta_C$ passing through $0$. Then by the same argument, we obtain $f(\widetilde\gamma(\mathbb R)\times C)\subset Q_{\widetilde p}\times Q_{\widetilde p}^\perp$, where $\widetilde p=\lim_{t\to 1}\widetilde \gamma(t)$. Then either $Q_p\cap Q_{\tilde p}=\{0\}$ for some $\widetilde\gamma$ or $Q_p=Q_{\tilde p}$ for all $\tilde\gamma$. In the first case, we obtain $$f(\{0\}\times C)=(\Phi(0, C), \Psi(0, C))\subset \{0\}\times \left(Q_p^\perp\cap Q_{\widetilde p}^\perp\right).$$ Therefore any rank one vector $v\in TC$ is mapped to rank one vector. In the second case, we obtain $$f(\Delta_C\times C)\subset Q_p\times Q_p^\perp.$$ Write $f=(\Phi, \Psi):\Delta_C\times C\to Q_p\times Q_p^\perp.$ Since $\Phi(\mathbb R,C)$ is real one-dimensional and $\Psi(t, \cdot)$ is a standard map on each polydisc, as in the proof of Lemma~\ref{component_totally}, we obtain that $\Psi$ is independent of $\zeta\in \Delta_C$. Therefore $\Phi(\cdot, x):\Delta_C\to Q_p$ is a totally geodesic isometry for all $x\in C$. Choose a complete geodesic $\gamma_1$ in $\Delta_C$. Then by the same argument we obtain $$ \Phi(\gamma_1(\mathbb R), x)=\Phi(\gamma_1(\mathbb R), y),\quad x, y\in C. $$ Since $\gamma_1$ is arbitrary and $\Phi(\cdot, x)$ is an isometry, $\Phi$ is independent of $x\in C$. Therefore $f$ maps any rank one vector in $TC$ to rank one vector. For any $[v]\in \mathcal C(\Omega)$, there exists rank $(r-1)$ characteristic subdomain $C$ such that $v\in TC$. Hence $f$ preserves the rank one vectors. Then by Lemma~\ref{holo-disc} and Proposition~\ref{1-disc} we can complete the proof. \end{proof}
\section{Introduction} Polynomial interpolation is a classical computational problem considered in numerical mathematics: given a set of points and values find a polynomial of a given degree that assumes given values at given points. A common interpretation is that a polynomial is fitted to some observational data. It is tacitly assumed that a data set is given \emph{exactly}. Hence if, for instance, we are fitting a polynomial to some measured observable, our data set faithfully represents measured values. However, in some applications (e.g., signal processing, speech recognition, complex quantities processing) our measurements do not faithfully represent all characteristics of a measured signal, namely, they lack a \emph{phase}. This observation has led to the emergence of a new subject of \emph{phase retrieval}, which in short can be described as a study of what can be reconstructed if measurements are restricted to magnitudes while phases are lost. More on phase retrieval can be found, e.g., in \cite{phase-overview}. Recently the problem of phase retrieval has inspired a study of problems and algorithms using \emph{absolute value information} from the point of view of \emph{information-based complexity} (IBC). While most information-based complexity research concerned problems with information given as evaluations of linear functionals, \cite{avi} consider problems and algorithms that rely on information consisting of modules of real or complex valued linear functionals. In this paper we study \emph{phaseless} polynomial interpolation understood as an algorithmic task to construct a polynomial up to a phase, based on its values that are available up to a phase. This means that for a set of absolute values of evaluations of an unknown polynomial $f$, i.e., $|f(x_1)|,\ldots,|f(x_k)|$ for some points $x_1,\ldots,x_k$, we wish to find $uf$ for some number $u$ with $|u|=1$. We analyze the relation between the degree of a polynomial $f$ and the computational effort needed to perform this \emph{phaseless interpolation}. We show that it is possible to do it in a polynomial time. It turns out that while the result for real polynomials is quite straightforward, the complex case is more subtle; for instance, we cannot rely on evaluations given at arbitrary distinct points. The paper is organized as follows. The next section describes the computational framework for the problems of polynomial identification. Section~\ref{s:real} studies phaseless identification of real polynomials, whereas Section~\ref{s:complex} investigates phaseless identification of complex polynomials. We conclude the paper in Section~\ref{s:conclusion} and indicate areas of further work. \section{Computational framework and problem statement} To fix the terminology let us recall some general notions of IBC. Since in this work we are dealing with problems that are meant to be solved exactly, we will not need much beside basic IBC notions. We are interested in \emph{identification problems}, i.e., problems of the form: $$\mor{\id{V}}{V}{V}$$ and their \emph{solutions} that consist of an \emph{information operator} $N\colon V\to\mathbf{K}^*$ and an \emph{algorithm} $\phi\colon\mathbf{K}^*\to V$ such that $\phi\circ N=id_V$. Here $V$ is a set, $\mathbf{K}$ is either the field of real numbers $\mathbf{R}$ or the field of complex numbers $\mathbf{C}$, and $\mathbf{K}^*$ is the set of all finite sequences of elements of $\mathbf{K}$. 
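As a toy illustration of this contract (our illustration, not part of the formal development), take $V=\mathbf{R}_1[x]$ with exact evaluations: an information operator may return $f(0)$ and $f(1)$, and an algorithm reconstructs the coefficients from these two values, so that $\phi\circ N=\id{V}$. In the sketch below (Python, with floating-point numbers standing in for exact reals) the names \texttt{N} and \texttt{phi} are ours.
\begin{verbatim}
import numpy as np

def N(f):
    # information operator: two exact evaluations of f in R_1[x]
    pv = np.polynomial.polynomial.polyval
    return (pv(0.0, f), pv(1.0, f))

def phi(y):
    # algorithm: rebuild the coefficients (a0, a1) from (f(0), f(1))
    a0 = y[0]
    a1 = y[1] - y[0]
    return np.array([a0, a1])

f = np.array([2.0, -1.0])           # f(x) = 2 - x
assert np.allclose(phi(N(f)), f)    # phi composed with N is the identity on R_1[x]
\end{verbatim}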
Let $\Lambda$ be some class of \emph{elementary information operations} that can act on elements of $V$, i.e., elements of $\Lambda$ are maps $V\to\mathbf{K}$. Recall that, following common IBC convention, an information operator is a mapping $N\colon V\to\mathbf{K}^*$ such that for all $v\in V$: $$N(v)=\left(L_1(v),L_2(v;y_1),\ldots,L_{n(v)}(v;y_1,\ldots,y_{n(v)-1})\right),$$ where $L_1\in\Lambda$, $y_1=L_1(v)$, and for $i\in\{2,\ldots,n(v)\}$ $$L_i\colon V\times\mathbf{K}^*\to\mathbf{K}$$ $L_i(\cdot,\bar{y})\in\Lambda$ for all $\bar{y}\in\mathbf{K}^*$, and $y_i=L_i(v;y_1,\ldots,y_{i-1})$. At every step of a construction of $N(v)$ the decision whether to evaluate another elementary information operation or stop the computation of $N(v)$ is based only on the previous evaluations. Note that the number $n(v)$ of evaluations depends on $v$ via the sequence of intermediate values of elementary information operations $y_1,\ldots,y_k$. More on information operators and their role in IBC in more general settings can be found, e.g., in \cite{tww1988} and \cite{plaskota1996noisy}. All objects that we will deal with can be described as finite sequences of real numbers as follows. We treat complex numbers as pairs of real numbers via the usual correspondence $z\leftrightarrow\tuple{\textnormal{Re}(z),\textnormal{Im}(z)}$. Similarly, we treat real polynomials as finite sequences of reals via $a_0+a_1x+\cdots+a_nx^n\leftrightarrow \tuple{a_0,a_1\ldots,a_n}$, and complex polynomials as finite sequences of reals via $a_0+a_1x+\cdots+a_nx^n\leftrightarrow\tuple{\textnormal{Re}(a_0), \textnormal{Im}(a_0),\textnormal{Re}(a_1),\textnormal{Im}(a_1),\ldots, \textnormal{Re}(a_n),\textnormal{Im}(a_n)}$. For this reason, our algorithms essentially are partial functions $\phi\colon\mathbf{R}^*\to\mathbf{R}^*$. We choose for our computational model the real Blum-Shub-Smale machine augmented with the ability to compute $\sqrt{(\cdot)}$ and $\sin(\cdot)$ and assume that an algorithm has to be computable in this model. We will also, by a slight abuse of the term, call $\sqrt{(\cdot)}$ and $\sin(\cdot)$ arithmetic operations. More on Blum-Shub-Smale model of computation over the reals can be found in \cite{bss}. We assume that all arithmetic operations on reals are allowed and are performed in a constant time. All described algorithms make use only of the possibility to store and access finite sequences of real numbers, perform arithmetic operations on them and do bounded iteration. Hence the cost of an algorithm is understood as the (maximal) number of real arithmetic operations performed during its execution. We define the computational cost of a solution $(N,\phi)$ as: $$\mbox{cost}(N,\phi)=\sup_{v\in V}\left(\mbox{cost}_i(N,v)+ \mbox{cost}_a(\phi,N(v))\right)$$ where, for $v\in V$ and $y\in\mathbf{R}^*$: \begin{align*} \mbox{cost}_i(N,v)=&\ n(v)&\\ \mbox{cost}_a(\phi,y)=&\ \mbox{the number of arithmetic operations performed}&\\ &\ \mbox{by}\ \mbox{an algorithm}\ \phi\ \mbox{on input}\ y& \end{align*} The computational complexity of an identification problem $\mor{\id{V}}{V}{V}$ is defined as: $$\mbox{comp}(\id{V})=\inf\{\mbox{cost}(N,\phi)\colon (N,\phi)\ \mbox{is a solution of}\ \id{V}\}$$ \vskip 1pc For a real or complex linear space $V$, let $\equiv$ be a binary relation on $V$ defined as: $$x\equiv y\ \ \mbox{iff}\ \ x=u y\ \mbox{for some number}\ u\ \mbox{with}\ |u|=1$$ Clearly, $\equiv$ is an equivalence relation. We will write $\overline{V}$ for $V/\equiv$. 
Suppose that $\Lambda$ is some class of elementary information operations consisting of linear functionals. We define a new class of information operations: $$|\Lambda|=\{|L|\colon L\in\Lambda\}$$ where $|L|(v)=|L(v)|$ for $v\in V$, here $|\cdot|$ is an absolute value in case of real spaces, and modulus for complex spaces. Note that elements of $|\Lambda|$ act on $\overline{V}$ in a natural way since $|L(uv)|=|L(v)|$ for all $v\in V$ and all numbers $u$ such that $|u|=1$. We consider two identification problems related to the classes $\Lambda$ and $|\Lambda|$, correspondingly: $$\id{V}\colon V\to V\ \ \mbox{and}\ \ \id{\overline{V}}\colon \overline{V}\to \overline{V}$$ The problem of identification of $v \in V$ based on values $L_1(v), L_2(v), \ldots, L_n(v)$ for some $L_i \in \Lambda$ is the exact identification with exact evaluations of $L_i$. This is the problem $id_V$ when the class $\Lambda$ is available. In the setting of phase retrieval, one instead tries to identify $v \in V$ up to a unit $u$ (i.e.~a scalar $u$ such that $|u| = 1$) based on absolute values $|L_1(v)|, |L_2(v)|, \dotsc, |L_n(v)|$. This is the problem $id_{\overline{V}}$ when the class $|\Lambda|$ is available. If $L \in \Lambda$ is linear, we may fix a single $v$ from the orbit $\{uv \colon |u| = 1\}$ by fixing $L(v)$ from the orbit $\{uL(v) \colon |u| = 1\}$. Therefore, instead of considering $\id{\overline{V}}$ with available information $|\Lambda|$ we can equivalently consider the problem $\id{V}$, i.e., the exact identification of $v$ with a single exact evaluation $L(v)$ and a sequence of evaluations $|L_1(v)|, |L_2(v)|, \dotsc, |L_n(v)|$. In the remainder we shall use the above reformulation of quotient problems. \vskip 2pc For any field $\mathbf{K}$, let: $$\pprob{\mathbf{K}}=\{a_0+a_1x+\cdots+a_nx^n\colon \ a_0,a_1,\ldots,a_n\in\mathbf{K}\}$$ be the linear space of all polynomials of degree at most $n$ ($n\in\mathbb{N}$) and coefficients from $\mathbf{K}$ , and let $\prob{\mathbf{K}}=\pprob{\mathbf{K}}/\equiv$. Note that for every $f\in\pprob{\mathbf{K}}$ and $s\in\mathbf{K}$ the evaluation of $f$ at $s$ is well defined as: $$L_{s}(f)=a_0+a_1s+\cdots+a_ns^n$$ We consider $\mathbf{K}$ being the field of real numbers, some subfield of it, or the field of complex numbers. We assume that the domain of any polynomial from $\mathbf{K}_n[x]$ is: \begin{align*} &\mathbf{R},\ \textnormal{if}\ \mathbf{K}\ \textnormal{is a subfield of}\ \mathbf{R}\\ &\mathbf{C},\ \textnormal{if}\ \mathbf{K}=\mathbf{C} \end{align*} We will consider two basic classes of elementary information operations acting on spaces of polynomials: \begin{align*} \Lambda^{\text{std}}=\ &\{f\mapsto f(x)\colon \ \mbox{for some}\ x\ \mbox{in the domain of}\ f\}&\\ |\Lambda^{\text{std}}|=\ &\{f\mapsto |f(x)|\colon \ \mbox{for some}\ x\ \mbox{in the domain of}\ f\}& \end{align*} In the sequel we will use algorithms that may rely on information operations from the class $\Lambda^{\text{std}}\cup |\Lambda^{\text{std}}|$, we will always make it explicit how many operations from each of the classes have been used by an algorithm. We will call the usage of an operation from $\Lambda^{\text{std}}$ an \emph{exact} evaluation, while the usage of an operation from $|\Lambda^{\text{std}}|$ will be called a \emph{phaseless} (or \emph{signless} in the real case) evaluation. 
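For concreteness, the following small sketch (our illustration, not part of the formal development; Python, with floating-point arithmetic standing in for the exact scalars of the model) shows one operation from each of the classes $\Lambda^{\text{std}}$ and $|\Lambda^{\text{std}}|$, together with the fact that phaseless evaluations cannot separate $f$ from $uf$ when $|u|=1$.
\begin{verbatim}
import numpy as np

def exact_eval(coeffs, x):
    # an operation from Lambda^std: the evaluation f(x)
    return np.polynomial.polynomial.polyval(x, coeffs)

def phaseless_eval(coeffs, x):
    # an operation from |Lambda^std|: the evaluation |f(x)|
    return abs(exact_eval(coeffs, x))

f = np.array([1.0, 2.0, 3.0])   # f(x) = 1 + 2x + 3x^2
u = np.exp(0.7j)                # a unit, |u| = 1
g = u * f                       # g = u*f, so f and g coincide in the quotient space
assert np.isclose(phaseless_eval(f, 1.5), phaseless_eval(g, 1.5))
\end{verbatim}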
Our aim is to investigate the computational complexity of the problem: $$\sprob{\mathbf{K}}$$ when $|\Lambda^{\text{std}}|$ is available, or equivalently of the problem: $$\spprob{\mathbf{K}}$$ if we can use any number of information operations from $|\Lambda^{\text{std}}|$ and exactly one operation from $\Lambda^{\text{std}}$. \section{Real polynomials} \label{s:real} This section deals with phaseless identification of real polynomials. We shall start with the following observation. \begin{theorem} \label{t:non-adaptreal} The computational complexity of the problem $$\sprobr$$ for the class $|\Lambda^{\textnormal{std}}|$ is polynomial in $n$. Furthermore, it has a solution $(N,\phi)$ such that $N$ is nonadaptive and $\textnormal{cost}_i(N,v)=2n+1$ for all $v\in \overline{\mathbf{R}_n[x]}$, and $\sup\{\textnormal{cost}_a(\phi,y)\colon y\in N(\overline{\mathbf{R}_n[x]})\}$ is polynomial in $n$. \end{theorem} \begin{proof} Let us consider the general form of a real polynomial: $$p(x) = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$$ where $a_0,a_1,\ldots,a_n\in\mathbf{R}$. Suppose that $0 \leq j \leq n$ is the smallest index such that $a_j$ is non-zero. The square of the absolute value of $p$ is \begin{align*} |p(x)|^2 &= (x^j(a_j + a_{j+1}x + a_{j+2}x^2 + \cdots + a_nx^{n-j}))^2\\ &= x^{2j}(A_0 + A_1x + A_2x^2 + \cdots + A_{2(n-j)}x^{2(n-j)}) \\ \end{align*} for some $A_0,A_1,\ldots,A_{2(n-j)}\in\mathbf{R}$. Because $|p(x)|^2$ is a real polynomial of degree at most $2n$ we may determine its coefficients $A_k$ by using its evaluations at (any) $2n+1$ distinct real points and performing polynomial interpolation. Furthermore, every evaluation of $|p(x)|^2$ is of the form $(L(p))^2$, where $L \in |\Lambda^{\textnormal{std}}|$. Now, observe that: \begin{align*} A_k &= \left\{ \begin{array}{ll} a_j^2 & \textnormal{if}\ k = 0 \\ 2a_ja_{j+k} + \sum_{i=1}^{k-1} a_{j+i}a_{j+k-i} & \textnormal{if}\ 1 \leq k \leq 2(n-j) \\ \end{array} \right. \end{align*} Therefore, assuming $\tuple{A_k}_{0 \leq k \leq 2(n-j)}$ are given, coefficients $a_m$ may be computed inductively: \begin{align*} a_{m} &= \left\{ \begin{array}{ll} 0 & \textnormal{if}\ m < j \\ \pm \sqrt{A_0} & \textnormal{if}\ m = j \\ \frac{A_{m-j} - \sum_{i=1}^{m-j-1} a_{j+i}a_{m-i}}{2a_j} & \textnormal{if}\ j+1 \leq m \leq n \\ \end{array} \right. \end{align*} Thus, we may recover the polynomial $p(x)$ up to a sign (i.e., the sign of $a_j$), which completes the proof. \end{proof} One may wonder if $2n+1$ information operations are needed. The answer is no, provided we are satisfied with algorithms with exponential cost that use an adaptive information. \begin{theorem} \label{t:adaptreal} The problem $$\sprobr$$ has a solution $(N,\phi)$ such that $N$ is an adaptive information operator using information operations from the class $|\Lambda^\textnormal{std}|$ and such that $\textnormal{cost}_i(N,v)=n+2$ for all $v\in \overline{\mathbf{R}_n[x]}$, and $\sup\{\textnormal{cost}_a(\phi,y)\colon y\in N(\overline{\mathbf{R}_n[x]})\}$ is exponential in $n$. \end{theorem} \begin{proof} We will equivalently demonstrate that it is possible to identify a real polynomial by $n+1$ signless evaluations and one exact evaluation. Let us consider $p \in \pprobr$ and a sequence $x_0, x_1, \ldots, x_n \in \mathbf{R}$ of distinct real numbers. Denote by $\catl{B}_k = \{-1, 1\}^k$ the $k$-th power of the one-dimensional unit circle and by $\catl{B}_k^{(2)} = \{\tuple{\overline{x}, \overline{y}} \in \catl{B}_k \times \catl{B}_k \colon \overline{x} \neq \overline{y}\}$. 
For every tuple $\overline{b} \in \catl{B}_{n+1}$ there is exactly one polynomial $w_{\overline{b}} \in \pprobr$ such that $w_{\overline{b}}(x_i) = b_i|p(x_i)|$ for all $0 \leq i \leq n$. Therefore, there are exactly $2^{n+1}$ polynomials with values, up to a sign, matching those of $p$ in each of the points $x_0,x_1,\ldots,x_n$. Moreover, if $\overline{b} \neq \overline{b'}$ then the equation $w_{\overline{b}}(x) = w_{\overline{b'}}(x)$ has at most $n$ solutions. It follows that the set $$\catl{S} = \{x \in \mathbf{R} \colon \exists_{\tuple{\overline{b}, \overline{b'}}\in \catl{B}_{n+1}^{(2)}} \; w_{\overline{b}}(x) = w_{\overline{b'}}(x)\}$$ is finite, thus does not exhaust the whole of $\mathbf{R}$. Therefore, there exists $x \in \mathbf{R}$ that differentiates any pair of polynomials $w_{\overline{b}}$ and $w_{\overline{b'}}$, and we may perform one exact evaluation at that $x$, which will uniquely identify $\overline{b}$ such that $w_{\overline{b}}=p$, and hence also $p$. Moreover, $x$ can be effectively found in exponential time by a naive algorithm that separates an interval from the set of all zeros of the polynomials $w_{\overline{b}} - w_{\overline{b'}}$ computed to \emph{any} finite precision. \end{proof} One may also observe that since $\catl{S}$ is of Lebesgue measure zero, any point randomly chosen from any non-degenerate interval will almost surely not belong to $\catl{S}$. \begin{example} Consider $p \in \mathbf{R}_n[x]$ for $n=3$. Let us choose $\{1, 2, 3, 4\}$ for the set of four evaluation points and assume that $|p(x)| = 1$ for every $x \in \{1, 2, 3, 4\}$. Figure~\ref{f:real:example} shows all possible polynomials of degree at most $3$ that satisfy the signless evaluations. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{plotIBC.png} \caption{All possible 3-polynomials $p$ with $|p(x)| = 1$ for $x \in \{1, 2, 3, 4\}$.} \label{f:real:example} \end{figure} The polynomials whose value at $1$ is positive (i.e.~equal to $1$) are: \begin{align*} p_1(x) &= 1\\ p_2(x) &=-x^3 + 8x^2 - 19x + 13\\ p_3(x) &=x^3 - 7x^2 + 14x - 7\\ p_4(x) &=-\frac{1}{3}x^3 + 2x^2 - \frac{11}{3}x + 3\\ p_5(x) &=x^2 - 5x + 5\\ p_6(x) &=-\frac{4}{3}x^3 + 10x^2 - \frac{68}{3}x + 15\\ p_7(x) &=\frac{2}{3}x^3 - 5x^2 + \frac{31}{3}x - 5\\ p_8(x) &=-\frac{1}{3}x^3 + 3x^2 - \frac{26}{3}x + 7 \end{align*} The remaining polynomials are their opposites. Observe that at $x = \pi$ no two non-opposite polynomials have the same absolute value, because $\pi$ is a transcendental number (i.e., it is not algebraic) and the coefficients of the above polynomials are rational. Therefore, we can choose $x = \pi$ for our final evaluation. \end{example} The above example suggests that some information about the coefficients of the polynomials can be used to find evaluation points in a non-adaptive way. \begin{theorem} \label{t:non-adaptcountabler} Suppose that $\mathbf{A}$ is a subfield of the field $\mathbf{R}$ of real numbers, such that the field extension $\mathbf{A}\subseteq \mathbf{R}$ is transcendental. The problem $$\sprobra$$ has a solution $(N,\phi)$ such that $N$ is a non-adaptive information operator using information operations from the class $|\Lambda^\textnormal{std}|$ and such that $\textnormal{cost}_i(N,v)=n+2$ for all $v\in \overline{\mathbf{A}_n[x]}$, and $\sup\{\textnormal{cost}_a(\phi,y)\colon y\in N(\overline{\mathbf{A}_n[x]})\}$ is exponential in $n$. \end{theorem} \begin{proof} We proceed similarly to the proof of Theorem~\ref{t:adaptreal}.
We will equivalently demonstrate that it is possible to identify a polynomial from $\mathbf{A}_n[x]$ by $n+1$ signless evaluations and one exact evaluation. However, this time all evaluation points (including the one used for the exact evaluation) can be fixed beforehand and used without any need for adaptation. Let us consider $p \in \mathbf{A}_n[x]$ and a sequence $x_0, x_1, \ldots, x_n \in \mathbf{A}$ of distinct elements of $\mathbf{A}$. For every tuple $\overline{b} \in \{-1, 1\}^{n+1}$ there is exactly one polynomial $w_{\overline{b}} \in \mathbf{A}_n[x]$ such that $w_{\overline{b}}(x_i) = b_i |p(x_i)|$ for all $0 \leq i \leq n$. This follows from two observations: \begin{itemize} \item $-1\mathbf{A} = \mathbf{A}$, since $\mathbf{A}$ is a field \item a matrix with entries from $\mathbf{A}$ has an inverse over $\mathbf{A}$ iff it has an inverse over $\mathbf{R}$ \end{itemize} Therefore, there are exactly $2^{n+1}$ polynomials from $\mathbf{A}_n[x]$ with values, up to a sign, matching those of $p$ in each of the points $x_0,x_1,\ldots,x_n$. From the assumption, the set: $$\catl{D}=\mathbf{R} \setminus \{x \in \mathbf{R} \colon \exists_{w\in\mathbf{A}_n[x]\setminus\{0\}} \; w(x) = 0\}$$ is non-empty. By definition any $x \in \catl{D}$ differentiates every pair of distinct polynomials $w,v\in\mathbf{A}_n[x]$. Thus we may perform one exact evaluation at any fixed $x\in\catl{D}$, which will uniquely identify $\overline{b}$ such that $w_{\overline{b}}=p$, and hence also $p$. \end{proof} \begin{corollary} The problem $$\sprobrq$$ can be solved using $n+1$ signless evaluations at any distinct points $x_0,x_1,\ldots,x_n\in\mathbf{Q}$, and one evaluation at any point $x\in\mathbf{R}$ such that $x$ is a transcendental number. \end{corollary} \begin{remark} Under the assumption of Theorem~\ref{t:non-adaptcountabler} one may solve $\spprob{\mathbf{A}}$ with a single exact evaluation at any transcendental (w.r.t.~$\mathbf{A}$) number $x$. This follows from the observation that the evaluation $\mor{L_x}{\pprob{\mathbf{A}}}{\mathbf{R}}$ at transcendental $x$ is an injection. On the other hand, without further assumptions about $\mathbf{A}$ there is no upper bound on the algorithmic cost of the problem (in fact, there may be no algorithm realizable on a Blum--Shub--Smale machine). \end{remark} \section{Complex polynomials} \label{s:complex} Phaseless identification of complex polynomials is much more complicated than in the case of real polynomials. One reason for this increased difficulty is that the cardinality of the unit sphere for complex numbers, i.e. $\{x \in \mathbf{C} \colon |x| = 1\}$, is the continuum, whereas the unit sphere for real numbers, i.e. $\{x \in \mathbf{R} \colon |x| = 1\}$, is just the 2-element set $\{-1, 1\}$. The next theorem exploits this difference. \begin{theorem}\label{t:nocomplex} There is no solution of the problem $$\spprobc$$ that relies on an information operator $N$ which uses at most $n$ information operations from the class $\Lambda^\textnormal{std}$ and at most $n+1$ information operations from the class $|\Lambda^\textnormal{std}|$. \end{theorem} \begin{proof} Let us denote $\catl{B}_k = \{\overline{x} \in \mathbf{C}^k \colon \forall_{0 \leq i < k} \; |x_i| = 1\}$ and $\catl{B}_k^{(2)} = \{\tuple{\overline{x}, \overline{y}} \in \catl{B}_k \times \catl{B}_k \colon \overline{x} \neq \overline{y}\}$.
Consider a sequence $x_0, x_1, \ldots, x_n \in \mathbf{C}$ of distinct complex numbers and a polynomial $f \in \mathbf{C}_n[x]$ that does not have a root at any of these numbers --- i.e., $f(x_i) \neq 0$ for all $0\leq i\leq n$. Note that for every tuple $\overline{b} \in \mathbf{C}^{n+1}$ there is exactly one polynomial $w_{\overline{b}} \in \pprobc$ such that $w_{\overline{b}}(x_i) = \overline{b}_i |f(x_i)|$ for all $0\leq i\leq n$. By linearity, for all $\overline{b},\overline{b'}\in\mathbf{C}^{n+1}$, the condition $w_{\overline{b}} = w_{\overline{b'}}$ can be rewritten as $w_{\overline{b}- \overline{b'}} = 0$ and then as $w_{\overline{c}} = 0$, where $\overline{c} = \overline{b} - \overline{b'} $. Observe that the set $$\catl{U} = \{\overline{b} - \overline{b'}\in\mathbf{C}^{n+1}\colon \tuple{\overline{b}, \overline{b'}} \in \catl{B}_{n+1}^{(2)}\}$$ is equal to the set $$\{\overline{c} \in \mathbf{C}^{n+1} \colon \overline{c} \neq 0\ \land\ |\overline{c}_i| \leq 2\ \textnormal{for all}\ 0\leq i\leq n\}$$ Therefore, if we define $$\catl{S}=\{\overline{x'} \in \mathbf{C}^{n} \colon \exists_{\overline{c} \in \catl{U}} \; w_{\overline{c}}(\overline{x'}_i) = 0 \ \textnormal{for all}\ 0\leq i\leq n-1\}$$ we see that for $\overline{x'} \in \mathbf{C}^{n}$ we have: $$\overline{x'} \in \catl{S} \Leftrightarrow \exists_{\tuple{\overline{b}, \overline{b'}}\in\catl{B}_{n+1}^{(2)}} \forall_{0 \leq i < n} \; w_{\overline{b}}(\overline{x'}_i) = w_{\overline{b'}}(\overline{x'}_i) $$ We argue that $\catl{S}=\mathbf{C}^n$. Consider any tuple $\overline{x'} \in \mathbf{C}^n$ and any non-zero polynomial $p \in \pprobc$ for which $\overline{x'}$ is a tuple of all its roots. If $M = \frac{1}{2}\max_{0\leq i\leq n}{\frac{|p(\overline{x}_i)|}{|f(\overline{x}_i)|}}$, then the polynomial $p/M$ is non-zero and satisfies $$\frac{|(p/M)(\overline{x}_i)|}{|f(\overline{x}_i)|} \leq 2\ \textnormal{for all} \ 0\leq i\leq n.$$ Therefore, there is a non-zero tuple $\overline{c}\in\mathbf{C}^{n+1}$ such that $(p/M)(\overline{x}_i) = \overline{c}_i |f(\overline{x}_i)|$ and $|\overline{c}_i| \leq 2$ for all $0\leq i\leq n$. Since $p/M \in \mathbf{C}_n[x]$, it follows that $p/M=w_{\overline{c}}$, and hence $w_{\overline{c}}(\overline{x'}_i) = 0$ for all $0\leq i\leq n-1$. This means that $\overline{x'}\in\catl{S}$, so indeed $\catl{S}=\mathbf{C}^n$. Note that $f \in \{w_{\overline{b}}\in\mathbf{C}_n[x] \colon \overline{b}\in \catl{B}_{n+1}\}$ and for every $w_{\overline{b}}$, it holds that $$|w_{\overline{b}}(x_i)|=|f(x_i)|\ \textnormal{for all}\ 0\leq i\leq n$$ Now, if $x'_0,x'_1,\ldots,x'_{n-1}\in\mathbf{C}$ is any sequence of distinct complex numbers, there are some $\overline{b},\overline{b'}\in\catl{B}_{n+1}$ such that $\overline{b} \neq \overline{b'}$ and $w_{\overline{b}}(x'_i) = w_{\overline{b'}}(x'_i)= f(x'_i)$ for all $0\leq i \leq n-1$. Since $f$ is equal to at most one of $w_{\overline{b}}$ or $w_{\overline{b'}}$, we cannot identify $f$.
\end{corollary} \begin{proof} Since the information $|\Lambda^{\textnormal{std}}|$ is weaker than the information $\Lambda^{\textnormal{std}}$, it follows from Theorem~\ref{t:nocomplex} that $\spprobc$ cannot have a solution based on an information operator using exactly one information operation from the class $\Lambda^\textnormal{std}$ and $2n$ information operations from the class $|\Lambda^\textnormal{std}|$. Consequently, $\sprobc$ cannot have a solution based on an information operator using $2n+1$ information operations from the class $|\Lambda^\textnormal{std}|$. \end{proof} Let us see how Theorem~\ref{t:nocomplex} works on a simple example. \begin{example}\label{e:nogo:reals} Let $f \in \probc$ and assume that the evaluation points are $\overline{x} = \tuple{1, 2, 3, 4}$ with their corresponding values all equal to $1$. Consider (any) triple of points, for example $\{0, 5, 6\}$, and take any non-zero polynomial whose roots are at these points, for instance: $$p(x) = \frac{1}{24}x^3 - \frac{11}{24} x^2 + \frac{5}{4} x$$ The polynomial $p$ maps the tuple $\overline{x} = \tuple{1, 2, 3, 4}$ to the tuple $\overline{c} = \tuple{\frac{5}{6}, 1, \frac{3}{4}, \frac{1}{3}}$. Therefore, $p(x_i)=c_i f(x_i)$ for all $0 \leq i < 4$ and $|c_i|\leq 2$ for $0\leq i < 4$. Let us decompose $\overline{c}$ as $\overline{c} = \overline{b} - \overline{b}'$, where $|b_i| = |b_i'| = 1$ for $0\leq i \leq 3$: \begin{align*} \overline{b} &= \tuple{\frac{5 + \sqrt{119}i}{12},\frac{1+\sqrt{3}i}{2},\frac{3+\sqrt{55}i}{8},\frac{1+\sqrt{35}i}{6}}\\ \overline{b}' &= \tuple{\frac{-5 + \sqrt{119}i}{12},\frac{-1+\sqrt{3}i}{2},\frac{-3+\sqrt{55}i}{8},\frac{-1+\sqrt{35}i}{6}} \end{align*} Figure~\ref{f:complex:example:wb} shows the interpolating polynomials $w_{\overline{b}}$ and $w_{\overline{b}'}$, whereas Figure~\ref{f:complex:example:abs} shows the plot of $|w_{\overline{b}}|$, which is equal to $|w_{\overline{b}'}|$ on $\mathbf{R}$. \begin{figure} \centering \includegraphics[width=\textwidth]{plotwbIBC.png} \caption{Plot of $w_b$ and $w_{b'}$ for $x \in [-0.1, 6.1]$.} \label{f:complex:example:wb} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{plotabsIBC.png} \caption{Plot of $|w_b|$ on $\mathbf{R}$ (which is equal to $ |w_{b'}|$ on $\mathbf{R}$).} \label{f:complex:example:abs} \end{figure} Note that the absolute values of both $w_{\overline{b}}$ and $w_{\overline{b}'}$ are all ones at $\tuple{1, 2, 3, 4}$, and the values are equal on $\tuple{0, 5, 6}$ (in fact, in this example, the absolute values are equal on the whole real line). \end{example} The above example shows that if we are not careful enough in choosing the evaluation points, we may not be able to distinguish between the polynomials in \emph{any} number of evaluations (recall Figure~\ref{f:complex:example:abs}). The next example shows that it is not sufficient to evaluate polynomials at points with different imaginary values. \begin{example}\label{e:nogo:units} Consider the following polynomials: \begin{align*} p(x) &= 2x + 1\\ q(x) &= x + 2\\ \end{align*} We have: \begin{align*} |p(e^{ix})|^2 &= |2e^{ix} + 1|^2 = (2\cos(x) + 1)^2 + 4\sin^2(x) = 4\cos(x) + 5\\ |q(e^{ix})|^2 &= |e^{ix} + 2|^2 = (\cos(x) + 2)^2 + \sin^2(x) = 4\cos(x) + 5 \\ \end{align*} Therefore, $|p(e^{ix})| = |q(e^{ix})|$, i.e.~the absolute values of $p$ and $q$ on the unit circle are equal.
\end{example} On the other hand, if we know the polynomial in advance, we may always choose $n+2$ points such that the absolute values at the chosen points identify the polynomial up to a phase. \begin{example} Let $\omega_0, \omega_1,\ldots, \omega_{n}$ be the $(n+1)$-st roots of unity. There is exactly one polynomial $w$ of degree at most $n$ such that: $w(0) = 1$ and $|w(\omega_i)| = 1$ for all $0\leq i \leq n$. Consider a polynomial $w(x) = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$. The constraint $w(0) = 1$ says that $a_0 = 1$. Now, let us take the Vandermonde matrix $T$ at the points $\omega_i$. We have to solve the equation: $T \overline{a} = \overline{u}$, where $|u_i| = 1$ and $a_0 = 1$. Because $T$ is non-singular, this is equivalent to the equation $\overline{a} = T^{-1} \overline{u}$. The first row of the inverse of the Vandermonde matrix is of the form $[\frac1{n+1}, \frac1{n+1}, \cdots, \frac1{n+1}]$. Thus we obtain the equation: $1 = \frac1{n+1} \sum_{i=0}^{n} u_i$, which is satisfied only if $u_i = 1$ for all $0 \leq i \leq n$. Therefore, $\overline{a} = T^{-1} \overline{1}$ is the unique solution. \end{example} Example~\ref{e:nogo:reals} shows that it is not sufficient to evaluate polynomials at any fixed points with the same \emph{imaginary} value, whereas Example~\ref{e:nogo:units} shows that it is not sufficient to evaluate polynomials at any fixed points with the same \emph{absolute} value. One may wonder if it is sufficient to evaluate polynomials at some fixed set of points whose \emph{absolute values} and \emph{arguments} both vary. The next theorem gives an affirmative answer to this question. \begin{theorem} The computational complexity of the problem $$\sprobc$$ for the class $|\Lambda^{\textnormal{std}}|$ is polynomial in $n$. Furthermore, it has a solution $(N,\phi)$ such that $N$ is nonadaptive and $\textnormal{cost}_i(N,v)=(2n+1)^2$ for all $v\in \overline{\mathbf{C}_n[x]}$, and $\sup\{\textnormal{cost}_a(\phi,y)\colon y\in N(\overline{\mathbf{C}_n[x]})\}$ is polynomial in $n$. \end{theorem} \begin{proof} Let us consider the general form of a complex polynomial ${p \in \pprobc}$: $$p(x) = a_0e^{\alpha_0i} + a_1e^{\alpha_1i}x + a_2e^{\alpha_2i}x^2 + \cdots + a_ne^{\alpha_ni}x^n$$ where all parameters are real and $a_i$ are non-negative. By using the circular coordinates for $x$, we get: $$p(xe^{yi}) = a_0e^{\alpha_0i} + a_1xe^{(\alpha_1 + y)i} + a_2x^2e^{(\alpha_2 + 2y)i} + \cdots + a_nx^ne^{(\alpha_n + ny)i}$$ The formula for the (square of the) absolute value of $p(xe^{yi})$ takes the following form: \begin{align*} |p(xe^{yi})|^2 &= (a_0\cos(\alpha_0) + a_{1}x\cos(\alpha_{1} + y) + \cdots + a_nx^{n}\cos(\alpha_n + ny))^2\\ &+ (a_0\sin(\alpha_0) + a_{1}x\sin(\alpha_{1} + y) + \cdots + a_nx^{n}\sin(\alpha_n + ny))^2\\ &= \sum_{m = 0}^n \sum_{k=m}^n b_{k, m} x^{2k-m}(\cos(\beta_{k, m}) \cos(my) + \sin(\beta_{k, m})\sin(my))\\ \end{align*} where: \begin{align}\label{eq:new:coeff} b_{k, m} &= \left\{ \begin{array}{ll} a_k^2 & \textnormal{if}\ m = 0 \\ 2 a_k a_{k-m} & \textnormal{otherwise} \\ \end{array} \right.\\ \beta_{k, m} &= \alpha_k - \alpha_{k-m} \end{align} Let $x_0, x_1, x_2, \cdots, x_{2n}$ be a sequence of distinct non-negative real numbers.
For every $x_i$ let us define the function $\mor{\widehat{x_i}}{[-\pi, \pi]}{\mathbf{R}}$ as follows: $$\widehat{x_i}(\alpha) = |p(x_ie^{\alpha i})|^2$$ By the above observation, $\widehat{x_i}$ is a trigonometric polynomial: $$\widehat{x_i}(\alpha) = \sum_{m=0}^{n} A_{x_i, m} \cos(m \alpha) + B_{x_i, m} \sin(m \alpha)$$ with: \begin{align*} A_{x_i, 0} &= \sum_{k=0}^n b_{k, 0}x_i^{2k}\\ A_{x_i, m} &= \sum_{k=m}^n b_{k, m} x_i^{2k-m}\cos(\beta_{k, m})\\ B_{x_i, 0} &= 0\\ B_{x_i, m} &= \sum_{k=m}^n b_{k, m} x_i^{2k-m}\sin(\beta_{k, m})\\ \end{align*} To recover the coefficients $A_{x_i, m}$ and $B_{x_i, m}$ we use a standard trick. If we set: $$c_{x_i}(\alpha) = \frac{\widehat{x_i}(\alpha) + \widehat{x_i}(-\alpha)}{2} = \sum_{m=0}^{n} A_{x_i, m} \cos(m \alpha)$$ the coefficients $A_{x_i, m}$ may be recovered by the inverse cosine transform from the values $c_{x_i}(\omega_0), c_{x_i}(\omega_1), c_{x_i}(\omega_2), \dotsc, c_{x_i}(\omega_{2n})$ where $\omega_k = \frac{k\pi}{n}$. Similarly, setting: $$s_{x_i}(\alpha) = \frac{\widehat{x_i}(\alpha) - \widehat{x_i}(-\alpha)}{2} = \sum_{m=1}^{n} B_{x_i, m} \sin(m \alpha)$$ gives the coefficients $B_{x_i, m}$ by the inverse sine transform from the values $s_{x_i}(\omega_1), s_{x_i}(\omega_2), \dotsc, s_{x_i}(\omega_{2n})$. Observe that to recover both $A_{x_i, m}$ and $B_{x_i, m}$ we need $2n+1$ evaluations of $\widehat{x_i}$. Therefore, to recover $A_{x_i, m}$ and $B_{x_i, m}$ at every $x_i$ for $0 \leq i \leq 2n$, we need $(2n+1)^2$ evaluations of $|p|$. Now, let us consider the following polynomials of degree at most $2n$: \begin{align*} A_0(x) &= \sum_{k=0}^n b_{k, 0}x^{2k}\\ A_m(x) &= \sum_{k=m}^n b_{k, m} x^{2k-m}\cos(\beta_{k, m})\\ B_0(x) &= 0\\ B_m(x) &= \sum_{k=m}^n b_{k, m} x^{2k-m}\sin(\beta_{k, m})\\ \end{align*} By definition $A_m(x_i) = A_{x_i, m}$ and $B_m(x_i) = B_{x_i, m}$, so we know the values of each $A_m, B_m$ at $2n+1$ distinct points. Therefore, we may recover the coefficients $b_{k, 0}$, $b_{k, m}\cos(\beta_{k, m})$ and $b_{k, m} \sin(\beta_{k, m})$ by polynomial interpolation. Because the $a_k$ are non-negative, they are uniquely determined by $a_k^2 = b_{k, 0}$. From Equation~\ref{eq:new:coeff} we can compute the remaining $b_{k, m}$. Observe that now we may compute the values of each pair $\tuple{\cos(\beta_{k, m}), \sin(\beta_{k, m})}$ and then: $\beta_{k, m} = \alpha_k - \alpha_{k-m}$. Fixing $\alpha_0$ fixes the rest of the angles, therefore up to $\alpha_0$ all coefficients of the polynomial $p$ are uniquely determined. \end{proof} \section{Summary and further work} \label{s:conclusion} In the present paper we have investigated the relation between the number of (exact and phaseless) polynomial evaluations and the combinatorial cost needed to process this information in order to recover an unknown polynomial. However, some questions still remain open. Note that in Theorem~\ref{t:adaptreal} we have shown that $n+2$ (phaseless) polynomial evaluations are sufficient for the (phaseless) recovery of a polynomial; however, the combinatorial cost of processing that information, being in fact the cost of a full search among all possible candidates for a solution, is exponential in~$n$. Hence the natural question arises: can we do it faster? More generally, what is the minimal number of (phaseless) evaluations of an unknown polynomial $p$ such that it is possible to perform (phaseless) identification of $p$ in polynomial time? It is of interest to investigate this both in the real and the complex case.
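To make the construction of Theorem~\ref{t:non-adaptreal} concrete, we close with a minimal illustrative sketch of the signless recovery procedure for real polynomials. It is our own illustration and not part of the formal development: floating-point interpolation with \texttt{numpy} stands in for the exact real arithmetic of the Blum--Shub--Smale model, the evaluation nodes $1,2,\ldots,2n+1$ are one admissible choice among many, and the names \texttt{signless\_recover}, \texttt{abs\_eval} and the tolerance \texttt{tol} are ours.
\begin{verbatim}
import numpy as np

def signless_recover(abs_eval, n, tol=1e-9):
    # Recover a real polynomial of degree <= n up to sign from
    # 2n+1 signless evaluations |p(x_i)| (sketch of Theorem t:non-adaptreal).
    xs = np.arange(1.0, 2 * n + 2)                 # any 2n+1 distinct points
    ys = np.array([abs_eval(x) ** 2 for x in xs])  # samples of |p(x)|^2
    # interpolate q(x) = |p(x)|^2, a polynomial of degree <= 2n
    q = np.polynomial.polynomial.polyfit(xs, ys, 2 * n)
    nonzero = [k for k, c in enumerate(q) if abs(c) > tol]
    if not nonzero:
        return np.zeros(n + 1)                     # p was the zero polynomial
    j = nonzero[0] // 2                            # q(x) = x^{2j}(A_0 + A_1 x + ...)
    A = q[2 * j:]
    a = np.zeros(n + 1)
    a[j] = np.sqrt(A[0])                           # sign convention: a_j > 0
    for m in range(j + 1, n + 1):
        s = sum(a[j + i] * a[m - i] for i in range(1, m - j))
        a[m] = (A[m - j] - s) / (2 * a[j])
    return a                                       # equals p or -p (up to rounding)

# usage: p(x) = -3x + 2x^2 + x^3, n = 3
p = np.array([0.0, -3.0, 2.0, 1.0])
recovered = signless_recover(
    lambda x: abs(np.polynomial.polynomial.polyval(x, p)), 3)
\end{verbatim}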
\vskip 1em \begin{acknowledgments} This research was supported by the National Science Centre, Poland, under projects 2018/28/C/ST6/00417 (M.~R.~Przyby{\l}ek) and 2017/25/B/ST1/00945 (P.~Siedlecki). \end{acknowledgments} \bibliographystyle{apalike}
\section{Introduction} Security on Android is enforced via permissions giving access to resources on the device. These permissions are often too coarse and their attribution is based on an all-or-nothing decision in the vast majority of Android versions in actual use. Additional security policies can be prescribed to impose a finer-grained control over resources. However, some key questions must be addressed: who writes the policies? What is the rationale behind them? An answer could be that policies are written by experts based on intuition and prior knowledge. What can we do then in the absence of expertise? Moreover, are we sure that they provide enough coverage? We present DroidGen, a tool for the systematic generation of anti-malware policies. DroidGen is fully automatic and data-driven: it takes as input two training sets of benign and malware applications and returns a policy as output. The resulting policy represents an optimal solution for a constraint satisfaction problem expressing that the number of discarded malware applications should be maximized while the number of excluded benign applications is minimized. The intuition behind this is that the solution will capture as many features as possible which are specific to malware and less common in benign applications. Our goal is to make the generated policy as general as possible to the point of allowing us to make decisions regarding new applications which are not part of the training set. In addition to being fully push-button, DroidGen is able to generate a policy that filters out 91\% of malware from a representative testing set of Android applications with a false positive rate of only 6\%. Moreover, having the policies in a declarative readable format can boost the effort of the malware analyst by providing diagnosis and pointing her to suspicious parts of the application. In what follows we present the main ingredients of DroidGen, describe their functionality and report on experimental results. \iffalse \section{Android in a Nutshell} We introduce some Android ingredients which are relevant to the presentation. \paragraph{\bf Main components.} There are four main components constituting the building blocks of an Android application. An \textsf{Activity} represents a single-screen graphical interface. A \textsf{Service} is a component that runs in the background. A \textsf{BroadcastReceiver} captures and responds to system-emitted messages. Finally, a \textsf{ContentProvider} offers an interface to manage and persist data. Applications inherit from these components and override their methods to customise their behaviour. \paragraph{\bf Life-cycle and Callbacks.} Android components have a number of predefined callbacks like \textsf{onStart}, \textsf{onResume}, etc. Both activities and services also have a life-cycle governing the order and conditions under which their callbacks are invoked by the system. Some other callbacks are invoked as a response to user interaction, such as \textsf{onClick} which is called when a button is clicked. \paragraph{\bf APIs and Permissions.} Android uses a permission mechanism to enforce and control the invocation of the framework APIs that facilitate the access to various resources on the device, such as the camera, microphone, SD card, GPS, etc. A single permission is generally associated with multiple API methods. \fi \iffalse \section{DroidGen's Main Ingredients} DroidGen proceeds in several phases: application abstraction, constraint extraction and constraint solving as illustrated in Figure~\ref{}. 
\fi \section{Application Abstraction} DroidGen proceeds in several phases: application abstraction, constraint extraction and constraint solving, see Figure~\ref{fig:droidgen}. \begin{figure} \begin{center} \includegraphics[scale=.4]{droidgen.png} \caption{Illustration of DroidGen's Main Ingredients} \label{fig:droidgen} \vspace{-.5cm} \end{center} \end{figure} Our goal is to infer policies that distinguish between good and bad behaviour. As it is not practical to have one policy per malicious application, we need to identify common behaviours of applications. Hence the first phase of our approach is the derivation of specifications (abstractions) which are general representations of applications. Given an application $A$, the corresponding high level specification $\mathsf{Spec}(A)$ consists of a set of properties $\{p_1,\ldots,p_k\}$ such that each property $p$ has the following grammar: \[ \begin{array}{rcl} p&:= & c:r\\ c&:=&\bf{\textsf{entry\_point $|$ activity $|$ service $|$ receiver}}\\ & & \textsf{$|$ onclick\_handler $|$ ontouch\_handler $|$\;} lc\\ lc&:=&\bf{\textsf{oncreate $|$ onstart $|$ onresume $|$ \ldots}}\\ r &:=& perm \;|\; api\\ \end{array} \] A property $p$ describes a context part $c$ in which a resource $r$ is used. The resource part $r$ can be either a permission \emph{perm} or an API method identifier \emph{api}, which consists of the method name, its signature and the class it belongs to. The context $c$ can be \textsf{entry\_point} referring to all entry points of the app, \textsf{activity} representing methods belonging to activities, \textsf{service} for methods belonging to service components\footnote{\textsf{activity}, \textsf{service} and \textsf{receiver} are some of the building blocks of Android applications.}, etc. We also have \textsf{onclick\_handler} and \textsf{ontouch\_handler} respectively referring to click and touch event handlers. Moreover, $c$ can be an activity life-cycle callback such as \textsf{oncreate}, \textsf{onstart}, etc.\footnote{Some components have a life-cycle governing the invocation of their callbacks.} Activity callbacks as well as the touch and click event handlers are also entry points. \iffalse For an application $A$ and a context $c$, we write $M_{\mathit{A}}(c)$ to denote the set of methods of $A$ represented by $c$. For illustration, let us consider the example in Figure~\ref{fig:recorder}. On the left hand side, we have code snippets representing a simple audio recording application named \textsf{Recorder} which inherits from an Activity component. We have $M_{\mathit{\mathsf{Recorder}}}(\textsf{entry\_point}) = \{\mathsf{onCreate}, \mathsf{onClick}\}$ as they are the only entry points of the application. \fi \iffalse A property $p$ of the form $c:r$ belongs to the specification of an application $A$ if $r$ is used within the context $c$ in $A$. In other words: \begin{equation} (c:u) \in \mathsf{Spec}(A) \equiv \exists x \in M_A(c).\; x \leadsto u \label{eq:spec_sem} \end{equation} For $u = api$, $x \leadsto api$ means that $api$ is transitively called from $x$. Similarly, $x \leadsto perm$ means that there exists an API method $a$ which is associated with $perm$ and $x \leadsto a$. Hence (\ref{eq:spec_sem}) indicates that there exists at least one method matching $c$ from which $u$ ($api$ or $perm$) is transitively called (reachable). 
\fi \iffalse \[ \begin{array}{rcl} x \leadsto (U_1 \wedge U_2) & \equiv & (x \leadsto U_1) \wedge (x \leadsto U_2) \\ \end{array} \] \fi A property $p$ of the form $c:r$ belongs to the specification of an application $A$ if $r$ ($perm$ or $api$) is used within the context $c$ in $A$. In other words: there exists at least one method matching $c$ from which $r$ is transitively called (reachable). To address such a query, we compute the transitive closure of the call graph \cite{SeghirA15}. We propagate permissions (APIs) backwards from callees to callers until we reach a fixpoint. For illustration, let us consider the example in Figure~\ref{fig:recorder}. On the left hand side, we have code snippets representing a simple audio recording application named \textsf{Recorder} which inherits from an Activity component. On the right hand side, we have the corresponding specifications in terms of APIs (Figure~\ref{fig:recorder}(a)) and in terms of permissions (Figure~\ref{fig:recorder}(b)). The method \textsf{setAudioSource}, which sets the audio source for the media recorder, is reachable (called) from the Activity life-cycle method \textsf{onCreate}, hence we have the entry \textsf{oncreate: setAudioSource} in the specification map (a). We also have the entry \textsf{oncreate}: \textsc{record\_audio} in the permission-based specification map (b) as the permission \textsc{record\_audio} is associated with the API method \textsf{setAudioSource} according to the Android framework implementation. Similarly, the API method \textsf{setOutputFile} is associated with the context \textsf{onclick} (a) as it is transitively reachable (through \textsf{startRecording}) from the click handler method \textsf{onClick}. Hence the permission \textsc{write\_external\_storage}, for writing the recording file on disk, is also associated with \textsf{onclick} (b). Both APIs and permissions are also associated with the context \textsf{activity} as they are reachable from methods which are activity members. We use results from \cite{AuZHL12} to associate APIs with the corresponding permissions. \begin{figure} \begin{center} \begin{tabular}{c@{\hspace{0.2in}}c} \begin{lstlisting} public class Recorder extends Activity implements OnClickListener{ private MediaRecorder myRecorder; ... public void onCreate(...) { myRecorder = new MediaRecorder(); // uses RECORD_AUDIO permission myRecorder.setAudioSource(...); } private void startRecording() { // uses WRITE_EXTERNAL_STORAGE myRecorder.setOutputFile(...); myRecorder.start(); } public void onClick(...) 
{ startRecording(); } } \end{lstlisting} & \scriptsize{ \begin{tabular}{c} \textsf{Spec}(Recorder) \\ \hline\hline \begin{tabular}{rl} \textsf{oncreate}:& setAudioSource \\ \textsf{onclick}:& setOutputFile \\ \textsf{activity}:& setAudioSource \\ \textsf{activity}:& setOutputFile \\ \end{tabular} \\ \hline (a) \\ \\ \textsf{Spec}(Recorder) \\ \hline\hline \begin{tabular}{rl} \textsf{oncreate}:& \textsc{record\_audio}\\ \textsf{onclick}:& \textsc{write\_external\_storage} \\ \textsf{activity}:& \textsc{record\_audio} \\ \textsf{activity}:& \textsc{write\_external\_storage} \\ \end{tabular} \\ \hline (b) \end{tabular} } \end{tabular} \end{center} \caption{Code snippets sketching a simple audio recording application together with the corresponding specifications based on APIs (a) and based on permissions (b)} \label{fig:recorder} \end{figure} \section{Specifications to Policies: an Optimisation Problem?} DroidGen tries to derive a set of rules (policy) under which a maximum number of benign applications is allowed and a maximum number of malware applications is excluded. This is an optimization problem with two conflicting objectives. Consider the following example specifications: \begin{figure}[h!] \vspace{-.5cm} \begin{center} \begin{tabular}{c@{\hspace{0.5in}}|@{\hspace{0.5in}}c} \begin{tabular}{c@{\hspace{0.1in}}c@{\hspace{0.1in}}c} $\mathsf{Spec}(benign_1)$ &=& $\{p_{a}\}$\\ \hline $\mathsf{Spec}(benign_2)$ &=& $\{p_{c}\}$\\ \hline $\mathsf{Spec}(benign_3)$ &=& $\{p_{b}, p_{e}\}$\\ \end{tabular} & \begin{tabular}{c@{\hspace{0.1in}}c@{\hspace{0.1in}}c} $\mathsf{Spec}(malware_1)$ &=& $\{p_{a}, p_{b}\}$\\ \hline $\mathsf{Spec}(malware_2)$ &=& $\{p_{a}, p_{c}\}$\\ \hline $\mathsf{Spec}(malware_3)$ &=& $\{p_{d}\}$\\ \end{tabular} \end{tabular} \end{center} \label{fig:specs} \vspace{-.7cm} \end{figure} \noindent Each application (benign or malware) is described by its specification consisting of a set of properties ($p_i$'s). As seen previously, a property $p_i$ can be for example \textsf{activity : record\_audio}, meaning that the permission \textsf{record\_audio} is used within an activity. A policy excludes an application if it contradicts one of its properties. We want to find the policy that allows the maximum number of benign applications and excludes the maximum number of malware applications. This is formulated as: \[ Max [\underbrace{I(p_a) + I(p_c) + I(p_b \wedge p_e)}_{benign} - \underbrace{(I(p_a \wedge p_b) + I(p_a \wedge p_c) + I(p_d))}_{malware}] \] where $I(x)$ is the function that returns 1 if $x$ is true and 0 otherwise. This type of optimization problem, where we have a mixture of the theories of arithmetic and logic, can be efficiently solved via an SMT solver such as Z3 \cite{MouraB08}. It gives us the solution: $p_a = 0$, $p_b = 1$, $p_c = 1$, $p_d = 0$ and $p_e = 1$. Hence, the policy will contain the two rules $\neg p_a$ and $\neg p_d$ which filter out all malware but also exclude the benign application $benign_1$. A policy is violated if one of its rules is violated. \iffalse We can handle the general case where we have an arbitrary number $k$ of permissions per rule as follows: for each specification containing the rules $c:t_1,\ldots,c:t_k$, we add a new rule $r'$ of the form $c:t'$. The intuition is that $t'$ models the conjunction $t_1\wedge\ldots\wedge t_k$. Then if the constraint solver sets $r'$ to 0 in the solution, it means that $\neg r'$ is part of the policy, which is interpreted as $c:^{or} \neg t_1 \vee \ldots \vee \neg t_k$. Hence we obtain an or-rule. 
\fi \paragraph{\bf Policy Verification and Diagnosis.} Once we have inferred a policy, we want to use it to filter out applications violating it. A policy $P = \{\neg p_1,\ldots,\neg p_k\}$ is violated by an application $A$ if $\{p_1,\ldots,p_k\} \cap \mathsf{Spec}(A) \neq \emptyset$, meaning that $A$ contradicts (violates) at least one of the rules of $P$. In case of policy violation, the violated rule, e.g. $\neg p$, can give some indication about a potential malicious behaviour. DroidGen maps the violated rule back to the code in order to give a view of the origin of the violation. For $p = (c:u)$, a sequence of method invocations $m_1,\ldots,m_k$ is generated, such that $m_1$ matches the context $c$ and $m_k$ invokes $u$. \section{Implementation and Experiments} \label{sec:experiments} DroidGen\footnote{www0.cs.ucl.ac.uk/staff/n.seghir/tools/DroidGen} is written in Python and uses Androguard\footnote{https://github.com/androguard} as a front-end for parsing and decompiling Android applications. DroidGen automatically builds abstractions for the applications, which are accepted directly in APK binary format. This process takes around 6 seconds per application. An optimization problem in terms of constraints over the computed abstractions is then generated and the Z3 SMT solver is called to solve it (a toy illustration of such an encoding is sketched below). Finally, the output of Z3 is interpreted and translated to a readable format (policy). Policy generation takes about 7 seconds and its verification takes no more than 6 seconds per app on average. We derived two kinds of policies based on a training set of 1000 malware applications from Drebin\footnote{\textsf {http://user.informatik.uni-goettingen.de/$\sim$darp/drebin/}} and 1000 benign ones obtained from Intel Security (McAfee). The first policy $P_p$ is solely based on permissions and is composed of 65 rules. The other policy $P_a$ is exclusively based on APIs and contains 152 rules. Snippets from both policies are illustrated in the appendix. We have applied the two policies to a testing set of 1000 malware applications and 1000 benign ones (different from the training sets) from the same providers. \iffalse The reason for not using the official Google store as a source for benign applications is that we are not completely sure if they are indeed benign, while labelling is provided for apps from Intel Security. \fi Results are summarised in Table~\ref{tab:results}. \vspace{-.3cm} \begin{table}[!h] \begin{center} \begin{tabular}{|c|c|c|} \hline Policy & Malware filtered out & Benign excluded\\ \hline \hline APIs ($P_a$)& 910/1000 & 59/1000\\ Permissions ($P_p$) & 758/1000 & 179/1000\\ \hline \end{tabular} \caption{Results for a permissions-based policy ($P_p$) vs. an API-based one ($P_a$)} \label{tab:results} \vspace{-1cm} \end{center} \end{table} The policy $P_a$, composed of rules over APIs, performs better than the one that uses permissions in terms of malware detection as it is able to filter out $91\%$ of malware while $P_p$ is only able to detect $76\%$. It also has a better false positive rate as it only excludes $6\%$ of benign applications, while $P_p$ excludes $18\%$. Being able to detect $91\%$ of malware is encouraging as it is comparable to the results obtained with some of the professional security tools (https://www.av-test.org/)\footnote{We refer to AV-TEST benchmarks dated September 2014 as our dataset was collected during the same period.}. 
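To make the constraint encoding concrete, the toy instance from the previous section could be handed to Z3 roughly as follows. This is a minimal sketch using the z3py \textsf{Optimize} API; it is our own illustration of the idea, not DroidGen's actual implementation, and the variable names are ours.
\begin{lstlisting}
# Toy encoding of the example above with the z3py Optimize API (illustrative only).
from z3 import Bool, Optimize, If, And

pa, pb, pc, pd, pe = Bool('pa'), Bool('pb'), Bool('pc'), Bool('pd'), Bool('pe')

# I(x) from the text: 1 if the property combination is permitted, 0 otherwise
benign  = If(pa, 1, 0) + If(pc, 1, 0) + If(And(pb, pe), 1, 0)
malware = If(And(pa, pb), 1, 0) + If(And(pa, pc), 1, 0) + If(pd, 1, 0)

opt = Optimize()
opt.maximize(benign - malware)
opt.check()
print(opt.model())  # properties assigned false become negated rules of the policy
\end{lstlisting}
Any property assigned \textsf{false} in the returned model (here $p_a$ and $p_d$) contributes a negated rule to the policy.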
Moreover, our approach is fully automatic and the actual implementation does not exploit the full expressiveness of the policy space as we only generate policies in a simple conjunctive form. We plan to further investigate the generation of policies in arbitrary propositional forms. \iffalse \paragraph{\bf Testing Malware.} To evaluate the pertinence of the synthesized policy with respect to malware, we considered the set of rules violated by malware and contrasted it with descriptions provided by anti-virus companies. In many instances the anti-malware policy was targeting the relevant aspect of the malicious behaviour and thus providing a good explanation of why the app is bad. For example, for the malware family FakeInstaller, we have found that the rules which filter most of its instances are \textsc{receiver : $\neg$send\_sms} and \textsc{onclick\_handler : $\neg$send\_sms}, which is in accordance with the description we have found\footnote{http://blogs.mcafee.com/mcafee-labs/fakeinstaller-leads-the-attack-on-android-phones}: \emph{``The user is forced to click an Agree or Next button, which sends the premium SMS messages. We have also seen versions that send the messages before the victim clicks a button''}. Similarly for the Plankton family, we have found that among the relevant rules are: \textsc{service : $\neg$access\_wifi\_state} and \textsc{service : $\neg$read\_history\_bookmarks}. They respectively correspond to \emph{``forward information to a remote server''} and \emph{``collect or modify the browser's bookmarks''} in the description\footnote{http://www.f-secure.com/v-descs/trojan\_android\_plankton.shtml}. This suggests future work about generating lightweight malware (family) signatures based on policy rules. \fi \section{Related Work} \label{sec:related} Many tools for analysing various security aspects of Android have emerged \cite{flowdroid, FahlHMSBF12, ChinFGW11, BackesGHMS13}. They either check or enforce certain security properties (policies). These policies are either hard-coded or manually provided. Our work complements such tools by providing the automatic means for inferring the properties to be checked. Hence, DroidGen can serve as a front-end for a verification tool such as EviCheck \cite{SeghirA15} to keep the user completely out of the loop. Also various machine-learning-based approaches have been proposed for malware detection \cite{YangXGYP14, ArpSHGR14, AvdiienkoKGZARB15, YangXALXE15}. While some of them outperform our method, we have not yet exploited the entire power of the policy language; we plan to allow more general forms for the rules, which could significantly improve the results. Moreover, many of the machine-learning based approaches do not provide further indications about potential malicious behaviours in an application. Our approach returns policies in a declarative readable format which is helpful in terms of code inspection and diagnosis. Some qualitative results are reported in the appendix. To the best of our knowledge, our approach is unique in being data-driven and using a constraint solver for inferring anti-malware policies. \section{Conclusion and Future Work} \label{sec:learn_pol} We have presented DroidGen, a tool for the automatic generation of anti-malware policies. It is data-driven and uses a constraint solver for policy inference. DroidGen is able to automatically infer an anti-malware policy which filters out 91\% of the tested malware. 
Having the policies in a declarative, readable format can boost the effort of the malware analyst by pointing her to suspicious parts of the application. As future work, we plan to generate more expressive policies by not restricting their form to conjunctions of rules. We also plan to generate anti-malware policies for malware families with the goal of obtaining semantics-based signatures (see appendix). \bibliographystyle{abbrv}
\section{Introduction} A connection between string theory on $AdS_{5}\times S^{5}$ and super Yang-Mills theory in four dimensions was proposed by J. Maldacena some years ago \cite{Maldacena:1998re}. More recently, this result gave rise to what is currently called the AdS/CFT conjecture. However, the name has been enlarged to include many different results. The AdS/CFT conjecture relates the renormalized gravity action induced on the boundary to the expectation value of the stress tensor of the dual CFT as: \begin{equation} \frac{1}{\sqrt{\gamma}}\frac{\delta S_{ren}}{\delta\gamma_{ij}}=\langle T_{ij}\rangle_{CFT}, \end{equation} where $\gamma_{ij}$ is the metric induced on the boundary. As stated today, the AdS/CFT conjecture actually represents a realization of holography as proposed 10 years ago by Susskind and 't Hooft \cite{Susskind:95}, \cite{'t Hooft:93}. This conjecture has been extensively checked, in part, because the conformal symmetry is strong enough to determine many generic results in a CFT without knowing the details of the particular theory. For instance, one can demonstrate that the thermodynamics of a black hole in an asymptotically (locally) AdS space reproduces the thermodynamics of a CFT. To our knowledge this is true for all the theories of gravity with a single negative cosmological constant (see for instance \cite{Aros:2000ij}). The main reason is that the thermodynamics of any CFT is almost completely determined by the conformal symmetry. Furthermore, one can prove that, under certain particular conditions, a gravitational theory can be induced on a lower dimensional surface in the bulk. The brane-worlds proposed in \cite{Randall:1999ee} are a realization of these ideas. In \cite{Carlip:2005tz}, using the same underlying idea, it is shown that the Liouville theory arises as the effective theory at the AdS asymptotic boundary in 2+1 AdS-gravity. If the AdS/CFT conjecture is to be understood as a duality relation, then a classical solution in the bulk should give rise to a quantum corrected solution at the boundary. This result was actually confirmed between $3+1$ and $2+1$ dimensions in \cite{Emparan:2002px}. An asymptotically (locally) AdS space needs to be treated carefully, otherwise one is usually led to a divergent behavior in the Lagrangian, the conserved charges and/or the variations of the Lagrangian. Therefore, to confirm many of the results of the AdS/CFT conjecture it is necessary to use some (classical) regularization processes together with a proper set of boundary conditions. The regularization of the conserved charges has been an interesting field by itself where many relevant results have been found (see for instance \cite{Aros:1999id,Henneaux:1999ct,Barnich:2001jy}). A generic method to deal with the divergent behaviors of the actions appears in \cite{Skenderis:2002wp}, where the conjecture is used to build a method to compute anomalies of CFT's. In this work part of these results will be used. In particular, in five dimensions a finite version of the Einstein-Hilbert action \cite{Balasubramanian:1999re} reads: \begin{eqnarray} I_{grav} &=& \frac{1}{16\pi G}\int_{M} d^{4}xd\rho\sqrt{g}(R-2\Lambda)-\frac{1}{8\pi G}\int_{\partial M} d^{4}x\sqrt{\gamma}K \nonumber\\ &-&\frac{3}{8\pi G}\int_{\partial M} d^{4}x\sqrt{\gamma}-\frac{1}{16\pi G}\int_{\partial M} d^{4}x\sqrt{\gamma}R[\gamma].\label{5dAction} \end{eqnarray} We would like to point out that although every term in this action is divergent, the sum of all of them is finite and well behaved. 
A discussion of a generalization of this action can be found in \cite{Olea:2006vd}. In this work we prove that an effective conformal gravity theory arises at the boundary when the action displayed in equation (\ref{5dAction}) is used as the bulk theory. We have extended to five dimensions the work previously done by Carlip \cite{Carlip:2005tz} in three dimensions. The theory obtained coincides with the bosonic part of the super conformal gravity that appears in \cite{Balasubramanian:2000pq} and in \cite{Liu:1998bu}, however, the method employed to reach this result is different. It is worth stressing that there is another approach, independent of the two already mentioned, to obtain the same result \cite{Elizalde:1994nz}. \section{The four dimensional conformal action and the anomaly} The purpose of this work is to rewrite the action that appears in equation (\ref{5dAction}) and to show that it can be understood as a four dimensional theory under diffeomorphisms that preserve the asymptotically AdS scaling of the metric. To fulfill this program, we begin with a general five dimensional asymptotically AdS metric with a Fefferman-Graham-type expansion near infinity. This yields the following line element: \begin{equation} ds^2 = l^{2}d\rho^{2}+ g_{ij}(x,\rho)dx^{i}dx^{j},\label{metric} \end{equation} where $\rho\rightarrow\infty$ defines the asymptotically (locally) AdS region. The metric $g_{ij}(x,\rho)$ admits the expansion: \[ g_{ij}(x,\rho)=e^{2\rho}g^{(0)}_{ij}(x)+g^{(2)}_{ij}(x)+e^{-2\rho}g^{(4)}_{ij}(x)-2e^{-2\rho}\rho h_{ij}(x)+\ldots \] Next, we set $l=1$, thus $\Lambda=-6$. With this expansion the Einstein equations can be solved iteratively. This yields (see reference \cite{Skenderis:2002wp}): \begin{equation} Tr(g^{(4)})=\frac{Tr(g^{(2)2})}{4},\textrm{ }\quad g^{(2)}_{ij}=-\frac{1}{2}(R^{(0)}_{ij}-\frac{1}{6}R^{(0)}g^{(0)}_{ij}),\quad Tr(h)=0, \end{equation} where traces are obtained using the metric $g^{(0)}_{ij}$. The next step considers a coordinate transformation that must leave the asymptotic form of the metric (\ref{metric}) invariant. Using the prescription displayed in \cite{Carlip:2005tz}, the transformation reads: \[ \rho\rightarrow\rho+\frac{1}{2}\varphi(x)+ e^{-2\rho} f^{(2)}(x)+\ldots\,, \] \begin{equation}\label{NewCoordinates} x^{i}\rightarrow x^{i}+e^{-2\rho}h^{(2)i}(x)+\ldots\,\, . \end{equation} Note that in this new coordinate system $\rho$ and the variables $x^{i}$ appear factorized. The boundary ($\bar{\rho}\rightarrow\infty$) is defined as: \begin{equation}\label{F(X)} \rho=\bar{\rho}+\frac{1}{2}\varphi(x)+O(e^{-n\bar{\rho}})=F(x), \end{equation} therefore the induced metric at the boundary and the unit normal respectively read: \begin{equation} \gamma_{ij}=g_{ij}+\partial_{i}F\partial_{j}F\, , \end{equation} \begin{equation} n^{a}=\frac{1}{\sqrt{1+g^{ij}\partial_{i}F\partial_{j}F}}(-1,g^{ij}\partial_{j}F). \end{equation} In this new system of coordinates (\ref{NewCoordinates}) one can write an expansion in powers of $\rho$ for the determinant $\sqrt{\gamma}$, the extrinsic curvature and the Ricci scalar near the boundary. The expansions for each one of these geometrical objects are: \begin{eqnarray} \sqrt{\gamma}&=& e^{4\rho}\sqrt{g^{(0)}}+\frac{1}{2}\sqrt{g^{(0)}}e^{2\rho}(Tr(g^{(2)})+g^{(0)ij}\partial_{i}F\partial_{j}F)+\frac{1}{2}\sqrt{g^{(0)}} \left(Tr(g^{(4)})+\frac{1}{4}Tr(g^{(2)})^{2}\right. 
\nonumber\\ &-&\frac{1}{2}Tr(g^{(2)2})-\frac{1}{4}(g^{(0)ij}\partial_{i}F\partial_{j}F)^{2}+\frac{1}{2}Tr(g^{(2)})g^{(0)ij}\partial_{i}F\partial_{j}F\nonumber\\ &-&\left. g^{(0)ai}g^{(0)bj}g^{(2)}_{ij}\partial_{a}F\partial_{b}F \right)+\ldots, \end{eqnarray} \begin{eqnarray} K &=& \frac{1}{2}Tr(\gamma\pounds_{n}g_{\|})=-4+e^{-2\rho}(g^{(0)ij}\partial_{i}F\partial_{j}F+g^{(0)ij}\nabla^{(0)}_{i}\nabla^{(0)}_{j}F+Tr(g^{(2)}))+e^{-4\rho}(2Tr(g^{(4)}) \nonumber\\ &-&Tr(g^{(2)2})-\frac{1}{2}Tr(g^{(2)})g^{(0)ij}\nabla^{(0)}_{i}\nabla^{(0)}_{j}F-\frac{1}{2}Tr(g^{(2)})g^{(0)ij}\partial_{i}F\partial_{j}F) +\ldots \,\, \mbox{and} \end{eqnarray} \begin{eqnarray} R[\gamma]&=& e^{-2\rho}(R^{(0)}-6g^{(0)ij}\nabla^{(0)}_{i}\nabla^{(0)}_{j}F-6g^{(0)ij}\partial_{i}F\partial_{j}F)+e^{-4\rho}(-g^{(2)ij}R^{(0)}_{ij}-R^{(0)ij}\partial_{i}F\partial_{j}F \nonumber\\ &+&2g^{(2)ij}\nabla^{(0)}_{i}\nabla^{(0)}_{j}F+Tr(g^{(2)})g^{(0)ij}\nabla^{(0)}_{i}\nabla^{(0)}_{j}F-2g^{(2)ij}\partial_{i}F\partial_{j}F \nonumber \\ &+&2Tr(g^{(2)})g^{(0)ij}\partial_{i}F\partial_{j}F)+\ldots \,. \end{eqnarray} The Ricci scalar is defined up to total derivatives of the order of $O(e^{-4\rho})$. All indices are raised and lowered with $g^{(0)}_{ij}$. Here we have defined \begin{equation} Tr(\gamma\pounds_{n}g_{\|})= \gamma^{ij}\partial_{i}x^{\mu}\partial_{j}x^{\nu}(\pounds_{n}g)_{\mu\nu}, \end{equation} with $\mu=0\ldots 4$ and $x^{4}=\rho$; the derivatives of the coordinates $x^{\mu}$ are given by: \begin{equation} \partial_{i}x^{\mu}=\delta_{i}^{\mu}+\partial_{i}F\delta_{4}^{\mu}. \end{equation} We want to use the expansions just described to extract the finite part of the action (\ref{5dAction}). First we integrate over $\rho$ in the five dimensional action (\ref{5dAction}):\\ \begin{eqnarray} \int_{M} d^{4}xd\rho\sqrt{g}(R-2\Lambda)_{on-shell}&=& -8\int_{\partial M} d^{4}x\int^{\rho=F} d\rho\sqrt{g}=-8\int_{\partial M} d^{4}x\int^{\rho=F}d\rho( e^{4\rho}\sqrt{g^{(0)}} \nonumber\\ +\frac{1}{2}\sqrt{g^{(0)}}e^{2\rho}(Tr(g^{(2)}))&+&\frac{1}{2}\sqrt{g^{(0)}}(Tr(g^{(4)})+\frac{1}{4}Tr(g^{(2)})^{2}-\frac{1}{2}Tr(g^{(2)2}))+\ldots) \nonumber\\ =\int_{\partial M} d^{4}x(-2e^{4F}\sqrt{g^{(0)}}&+&\frac{1}{3}\sqrt{g^{(0)}}e^{2F}(R^{(0)})-\frac{1}{4}\sqrt{g^{(0)}}(R^{(0)ij}R^{(0)}_{ij} \nonumber \\ -\frac{1}{3}R^{(0)2})F+\ldots) & & \end{eqnarray} The term proportional to $F$ contains a divergent piece that must be eliminated by adding a counterterm. This regularization procedure was introduced by Skenderis \cite{Skenderis:2002wp}. Finally, evaluating the action on-shell we get: \begin{eqnarray} I_{grav}&=&\frac{1}{16\pi G}\int_{\partial M}\sqrt{g^{(0)}}(-\frac{1}{16}(R^{(0)ij}R^{(0)}_{ij}-\frac{1}{3}R^{(0)2})+\frac{1}{64}(\partial_{i}\varphi\partial^{i}\varphi)^{2} \nonumber\\ &+&\frac{1}{16}\partial_{i}\varphi\partial^{i}\varphi\nabla^{(0)}_{j}\nabla^{(0)j}\varphi-\frac{1}{8}(R^{(0)ij}R^{(0)}_{ij}-\frac{1}{3}R^{(0)2})\varphi+\frac{1}{8}G^{(0)ij}\partial_{i}\varphi\partial_{j}\varphi) \end{eqnarray} This expression can be recognized as the action for a 4-dimensional conformal gravity plus an anomalous part. It is worth stressing that this result has been obtained before by at least two different approaches. For instance, Riegert \cite{Riegert:1984kt} arrived at the same expression by considering an action that could take into account and cancel the anomalous terms. Different approaches converging to the same conclusion give solid confidence in the result obtained. 
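To make the identification with conformal gravity explicit, it may help to recall the standard four-dimensional identity (our remark, not part of the original derivation) \begin{equation} R^{(0)ij}R^{(0)}_{ij}-\frac{1}{3}R^{(0)2}=\frac{1}{2}\left(C^{(0)}_{ijkl}C^{(0)ijkl}-E_{4}[g^{(0)}]\right), \qquad E_{4}=R_{ijkl}R^{ijkl}-4R_{ij}R^{ij}+R^{2}, \end{equation} where $C_{ijkl}$ is the Weyl tensor and $E_{4}$ the Gauss-Bonnet density. Since the integral of $E_{4}$ is topological in four dimensions, the non-anomalous part of the induced action is, up to a topological term, proportional to the square of the Weyl tensor of $g^{(0)}$, which is the defining feature of four dimensional conformal gravity.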
\section{Conclusions} In this work we have proven that a four dimensional conformal gravity can be obtained through the AdS/CFT mechanism from five dimensional Einstein gravity. We have demonstrated this explicitly using the Fefferman-Graham expansion and regularizing the action. As expected, the radial diffeomorphisms induce a Weyl transformation on the boundary metric, which in turn produces the anomalous part, as is demonstrated in \cite{Manvelyan:2001pv}. The degrees of freedom associated with radial diffeomorphisms are encoded in the dynamics of the scalar field $\varphi$. This action was obtained by Riegert \cite{Riegert:1984kt} as the local form of the action which gives a trace anomaly proportional to $R^{(0)ij}R^{(0)}_{ij}-\frac{1}{3}R^{(0)2}$ and corresponds to the local form of the anomalous part of the effective action associated with the Super Yang-Mills theory in $d=4$ (\cite{Balasubramanian:2000pq},\cite{Liu:1998bu}). Also from \cite{Bautier:2000mz} we know that this field encodes part of the degrees of freedom contained in the traceless part of $g^{(4)}$ which, along with $g^{(0)}$, contains all the degrees of freedom of the solutions for pure gravity in five dimensions. This calculation confirms the previous result obtained in \cite{Balasubramanian:2000pq} and in \cite{Liu:1998bu} by a different method for the pure gravitational sector. Our strategy appears to be more direct than the ones used in the works cited before; however, the algebra involved is more complex.\\ The induced four dimensional action we have found here can be considered as a quantum correction to the Einstein-Hilbert action in $d=4$. Mottola and Vaulin \cite{Mottola:2006ew} have considered a similar idea. They consider these terms as deviations from the classical stress tensor coming from quantum corrections. We plan to address this problem in future work. In another context, this action could be used as an ansatz for the action proposed in \cite{Kanno:2003vf} to test Kaluza-Klein corrections in the Randall-Sundrum two-brane system.
\section{Appendix} \label{A2} To simulate death times of annuitants, we notice that once a sample of the mortality intensity is obtained, the Cox process becomes an inhomogeneous Poisson process and the first jump times, which are interpreted as death times, can be simulated as follows (see e.g. \cite{BrigoMe07}): \begin{enumerate} \item Simulate the mortality intensity $\mu_x(t)$ from $t=0$ to $t=\omega-x$. \item Generate a standard exponential random variable $\xi$. For example, using an inverse transform method, we have $\xi = -\ln{(1-u)}$ where $u \sim \text{Uniform}(0,1)$. \item Set the death time $\tau$ to be the smallest $T$ such that $\xi \leq \int^T_0 \mu_x(s) \,ds$. If $\xi > \int^{\omega-x}_0 \mu_x(s) \,ds$ then set $\tau = \omega-x$. \item Repeat steps (2) and (3) to obtain another death time. \end{enumerate} The payoff of an index-based hedging instrument depends on the realised survival probability $\exp\{-\int^t_0 \mu_x(v)\,dv\}$. The payoff of a customised instrument, on the other hand, depends on the proportion of survivors, $\frac{n-N_t}{n}$, underlying an annuity portfolio where the number of deaths, $N_t$, is obtained by counting the number of simulated death times that are smaller than $t$. Note that \begin{equation} e^{-\int^t_0 \mu_x(v)\,dv} \approx \frac{n-N_t}{n} \end{equation} and the accuracy of the approximation improves when $n$ increases. \section{Conclusion} Life and pension annuities are the most important types of post-retirement products offered by annuity providers to help secure lifelong incomes for the rising number of retirees. While interest rate risk can be managed effectively in the financial markets, longevity risk is a major concern for annuity providers as there are only limited choices available to mitigate the long-term risk. Development of effective financial instruments for longevity risk in capital markets is arguably the best solution available. Two types of longevity derivatives, a longevity swap and a cap, are analysed in this paper from a pricing and hedging perspective. We apply a tractable Gaussian mortality model to capture the longevity risk, and derive explicit formulas for important quantities such as survival probabilities and prices of longevity derivatives. Hedge effectiveness and features of an index-based longevity swap and a cap used as hedging instruments are examined using a hypothetical life annuity portfolio exposed to longevity risk. Our results suggest that the market price of longevity risk $\lambda$ is a small contributor to hedge effectiveness of a longevity swap since a higher annuity price is partially offset by an increased cost of hedging when $\lambda$ is taken into account. It is shown that a longevity cap, while being able to capture the upside potential when survival probabilities are overestimated, can be more effective in reducing longevity tail risk compared to a longevity swap, provided that $\lambda$ is large enough. The term to maturity $\hat{T}$ is an important factor in determining hedge effectiveness. However, the difference in hedge effectiveness is only marginal when $\hat{T}$ increases from $30$ to $40$ years for an annuity portfolio consisting of a single cohort aged 65 initially. This is due to the fact that only a small number of policies will still be in-force after a long period of time (30 to 40 years), and index-based instruments turn out to be ineffective when idiosyncratic mortality risk becomes a larger contributor to the overall risk, compared to systematic mortality risk. 
The effect of the portfolio size $n$ on hedge effectiveness is quantified and compared with the result obtained in \cite{LiHa11} where population basis risk is taken into account. In addition, we find that the skewness of the surplus distribution of a cap-hedged portfolio is sensitive to the term to maturity and the portfolio size, and, as a result, the difference between a longevity swap and a cap when used as hedging instruments becomes more pronounced for larger $\hat{T}$ and $n$. As discussed in \cite{BiffisBl14}, developing a liquid longevity market requires reliable and well-designed financial instruments that can attract a sufficient amount of interest from both buyers and sellers. Besides a longevity swap, which is so far a common longevity hedging choice for annuity providers, option-type instruments such as longevity caps can provide hedging features that linear products cannot offer. A longevity cap is shown to have alternative hedging properties compared to a swap, and this option-type instrument would also appeal to certain classes of investors interested in receiving premiums by selling longevity insurance. Further research on the design of longevity-linked instruments from the perspectives of buyers and sellers would provide a further step towards the development of an active longevity market. \section{Introduction} Securing a comfortable living after retirement is fundamental to the majority of the working population around the world. A major risk in retirement, however, is the possibility that retirement savings will be outlived. Products that provide guaranteed lifetime income, such as life annuities, need to be offered in a cost-effective way while maintaining the long run solvency of the provider. Annuity providers and pension funds need to manage the systematic mortality risk\footnote{From an annuity provider's perspective, longevity risk modelling can lead to a (stochastic) over- or underestimation of survival probabilities for all annuitants. For this reason longevity risk is also referred to as the systematic mortality risk.}, associated with random changes in the underlying mortality intensity, in a life annuity or pension portfolio. Systematic mortality risk cannot be diversified away with increasing portfolio size, while idiosyncratic mortality risk, representing the randomness of deaths in a portfolio with fixed mortality intensity, is diversifiable. Reinsurance has been important in managing longevity risk for annuity and pension providers. However, there are concerns that reinsurers have a limited risk appetite and are reluctant to take this ``toxic'' risk (\cite{BlakeCaDoMa06}). In fact, even if they were willing to accept the risk, the reinsurance sector is not deep enough to absorb the vast scale of longevity risk currently undertaken by annuity providers and pension funds.\footnote{It is estimated that pension assets for the 13 largest major pension markets have reached nearly 30 trillion in 2012 (Global Pension Assets Study 2013, Towers Watson).} The sheer size of capital markets and an almost zero correlation between financial and demographic risks suggest that they will increasingly take a role in the risk management of longevity risk. 
The first generation of capital market solutions for longevity risk, in the form of mortality and longevity bonds (\cite{BlakeBurrows2001}, \cite{Blake2006a} and \cite{BauerBoRu10})\footnote{Of particular interest is an attempt to issue the EIB longevity bond by the European Investment Bank (EIB) in 2004, which was underwritten by BNP Paribas. This bond was not well received by investors and could not generate enough demand to be launched due to its deficiencies, as outlined in \cite{Blake2006a}.}, achieved only limited success. The second generation, involving forwards and swaps, has attracted increasing interest (\cite{Blakeetal13}). Index-based instruments aim to mitigate systematic mortality risk, have the potential to be less costly, and are designed to allow trading as standardised contracts (\cite{Blakeetal13}). Unlike the bespoke or customized hedging instruments such as reinsurance, they do not cover idiosyncratic mortality risk and give rise to basis risk (\cite{LiHa11}). Since idiosyncratic mortality risk is reduced for larger portfolios, portfolio size is an important factor that determines the hedge effectiveness of index-based instruments. Longevity derivatives with a linear payoff, including q-forwards and S-forwards, have as underlying the mortality rate and the survival rate, respectively (\cite{LLMA10a}). Their hedge effectiveness has been considered in \cite{NgaiSh11} who study the effectiveness of static hedging of longevity risk in different annuity portfolios. They consider a range of longevity-linked instruments including q-forwards, longevity bonds and longevity swaps as hedging instruments to mitigate longevity risk and demonstrate their benefits in reducing longevity risk. \cite{LiHa11} also consider hedging longevity risk with a portfolio of q-forwards. They highlight basis risk as one of the obstacles in the development of an index-based longevity market. Longevity derivatives with a nonlinear payoff structure have not received a great deal of attention to date. \cite{BoyerSt13} evaluate European and American type survivor options using simulations and \cite{WangYa13} propose and price survivor floors under an extension of the Lee-Carter model. These authors do not consider the hedge effectiveness of longevity options and longevity swaps as hedging instruments. Although dynamic hedging has been considered, because of the lack of liquid markets in longevity risk, static hedging remains the only realistic option for annuity providers. \cite{Cairns11} considers q-forwards and a discrete-time delta hedging strategy, and compares it with static hedging. The lack of analytical formulas for the prices of q-forwards and their derivatives, known as ``Greeks'', can be a significant problem in assessing hedge effectiveness since simulations within simulations are required for dynamic hedging strategies. The importance of tractable models has also been emphasised in \cite{LucianoReVi12} who also consider dynamic hedging for longevity and interest rate risk. \cite{HariWaMeNi08} apply a generalised two-factor Lee-Carter model to investigate the impact of longevity risk on the solvency of pension annuities. This paper provides a pricing analysis of longevity derivatives, as well as their hedge effectiveness. We consider static hedging. A longevity swap and a cap are chosen as linear and nonlinear products to compare and assess index-based capital market products for the management of longevity risk. 
The model used for this analysis is a continuous-time model for mortality with age-based drift and volatility, allowing tractable analytical formulae for pricing and hedging. The analysis is based on a hypothetical life annuity portfolio subject to longevity risk. The paper considers the hedging of longevity risk using a longevity swap and a longevity cap, that is, a portfolio of S-forwards and of longevity caplets respectively, based on a range of different underlying assumptions for the market price of longevity risk, the term to maturity of hedging instruments, as well as the size of the underlying annuity portfolio. The paper is organised as follows. Section 2 specifies the two-factor Gaussian mortality model, and its parameters are estimated using mortality data for Australian males. Section 3 analyses longevity derivatives, in particular, a longevity swap and a cap, from a pricing perspective. Explicit pricing formulas are derived under the proposed two-factor Gaussian mortality model. Section 4 examines various hedging features and hedge effectiveness of a longevity swap and a cap on a hypothetical life annuity portfolio exposed to longevity risk. Section 5 summarises the results and provides concluding remarks. \section{Analytical Pricing of Longevity Derivatives}\label{sec:derivativepricing} We consider longevity derivatives with different payoff structures including longevity swaps, longevity caps and longevity floors. Closed-form expressions for prices of these longevity derivatives are derived under the assumption of the two-factor Gaussian mortality model introduced in Section~\ref{sec:model}. These instruments are written on survival probabilities and their properties are analysed from a pricing perspective. \subsection{Risk-Adjusted Measure} For the purpose of no-arbitrage valuation, we require the dynamics of the factors $Y_1(t)$ and $Y_2(t)$ to be written under a risk-adjusted measure.\footnote{Since the longevity market is still in its development stage and hence incomplete, we assume that a risk-adjusted measure exists but is not unique.} To preserve the tractability of the model, we assume that the processes $\tilde{W}_1(t)$ and $\tilde{W}_2(t)$ with dynamics \begin{align} d\tilde{W}_1(t)&=dW_1(t) \\ d\tilde{W}_2(t)&=\lambda \sigma e^{\gamma x}\, Y_2(t)\,dt+dW_2(t) \label{eq:QW2} \end{align} are standard Brownian motions under a risk-adjusted measure $\mathbb{Q}$. In Eq.~\eqref{eq:QW2} $\lambda$ represents the market price of longevity risk.\footnote{For simplicity, we assume that there is no risk adjustment for the first factor $Y_1$ and $\lambda$ is age-independent.} Under $\mathbb{Q}$ we can write the factor dynamics as follows: \begin{align} dY_1(t)&=\alpha_1 Y_1(t)\,dt+\sigma_1\,d\tilde{W}_1(t) \label{eq:dY1_Q}\\ dY_2(t)&=(\alpha x +\beta-\lambda \sigma e^{\gamma x})\, Y_2(t)\,dt+\sigma_2\,d\tilde{W}_2(t). \end{align} The corresponding risk-adjusted survival probability is given by \begin{equation}\label{eq:ra_sp} \tilde{S}_{x+t}(t,T) \stackrel{\text{def}}{=} E^\mathbb{Q}_t\left(e^{-\int^T_t \mu_x(v)\,dv}\right)=e^{\frac{1}{2}\tilde{\Gamma}(t,T)-\tilde{\Theta}(t,T)} \end{equation} where $\alpha_2 = \alpha x+\beta$ is replaced by $(\alpha x + \beta -\lambda \sigma e^{\gamma x})$ in the expressions for $\tilde{\Theta}(t,T)$ and $\tilde{\Gamma}(t,T)$, see Eq.~\eqref{eq:Theta} and Eq.~\eqref{eq:Gamma}, respectively. 
Since a liquid longevity market is yet to be developed, we aim to determine a reasonable value for $\lambda$ based on the longevity bond announced by BNP Paribas and the European Investment Bank (EIB) in 2004, as proposed in \cite{CairnsBlDo06b} and applied in \cite{MeyrickeSh14}, see also \cite{WillsSh11}. The BNP/EIB longevity bond is a 25-year bond with coupon payments linked to a survivor index based on the realised mortality rates.\footnote{The issue price was determined by BNP Paribas using anticipated cash flows based on the 2002-based mortality projections provided by the UK Government Actuary's Department.} The price of the longevity bond is given by \begin{equation} V(0) = \sum^{25}_{T=1} B(0,T)\,e^{\delta\, T} E^\mathbb{P}_0\left({e^{-\int^T_0 \mu_x(v)\,dv}}\right) \end{equation} where $\delta$ is a spread, or an average risk premium per annum\footnote{The spread $\delta$ depends on the term of the bond and the initial age of the cohort being tracked (\cite{CairnsBlDo06b}), and $\delta$ is related to but distinct from $\lambda$, the market price of longevity risk.}, and the $T$-year projected survival rate is assumed to be the $T$-year survival probability for the Australian males cohort aged 65 as modelled in Section~\ref{sec:model}, see Eq.~\eqref{eq:sp}. Since the BNP/EIB bond is priced based on a yield of 20 basis points below standard EIB rates (\cite{CairnsBlDo06b}), we take the spread to be $\delta = 0.002$.\footnote{The reference cohort for the BNP/EIB longevity bond is the England and Wales males aged 65 in 2003. Since the longevity derivatives market is under-developed in Australia, we assume that the same spread of $\delta=0.002$ (as in the UK) is applicable to the Australian males cohort aged 65 in 2008. Note however that sensitivity analyses will be performed in Section \ref{sec:hypotheticalexample}.} Under a risk-adjusted measure $\mathbb{Q}(\lambda)$, the price of the longevity bond corresponds to \begin{equation} V^{\mathbb{Q}(\lambda)}(0) = \sum^{25}_{T=1} B(0,T)\,E^{\mathbb{Q}(\lambda)}_0\left({e^{-\int^T_0 \mu_x(v)\,dv}}\right). \end{equation} Fixing the interest rate to $r=4\%$, we find a model-dependent $\lambda$ such that the risk-adjusted bond price $V^{\mathbb{Q}(\lambda)}(0)$ matches the market bond price $V(0)$ as closely as possible. For example, for $\lambda=8.5$ we have $V(0) = 11.9045$ and $V^{\mathbb{Q}(\lambda)}(0)=11.9068$. For more details on the above procedure refer to \cite{MeyrickeSh14}. In the following we assume that the risk-adjusted measure $\mathbb{Q}$ is determined by a unique value of $\lambda$. \begin{figure}[h] \begin{center} \includegraphics[width=12cm, height=8cm]{SurvProb_Lambda.eps} \caption{Risk-adjusted survival probability with respect to different market price of longevity risk $\lambda$.}\label{fig:RiskAdjusted} \end{center} \end{figure} Figure~\ref{fig:RiskAdjusted} shows the risk-adjusted survival probabilities for Australian males aged 65 with respect to different values of the market price of longevity risk $\lambda$. As one observes from the figure, a larger (positive) value of $\lambda$ leads to an improvement in survival probability, while smaller values of $\lambda$ indicate a decline in survival probability under the risk-adjusted measure $\mathbb{Q}$. 
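To illustrate how such a value of $\lambda$ can be obtained numerically, the following sketch performs the matching step with a standard root finder. It is our illustration only: the helper functions \texttt{best\_estimate\_survival} and \texttt{risk\_adjusted\_survival} (implementing Eq.~\eqref{eq:sp} and Eq.~\eqref{eq:ra_sp}) and the bracketing interval are assumptions, not part of the original calibration.
\begin{verbatim}
# Sketch of the lambda-calibration: choose lambda so that the risk-adjusted
# bond price matches the spread-adjusted market price V(0).
# best_estimate_survival(T) and risk_adjusted_survival(T, lam) are assumed
# to implement Eq. (eq:sp) and Eq. (eq:ra_sp), respectively.
import numpy as np
from scipy.optimize import brentq

r, delta = 0.04, 0.002
maturities = range(1, 26)          # the 25 coupon dates of the BNP/EIB bond

def bond_price(survival, spread=0.0):
    return sum(np.exp(-r * T) * np.exp(spread * T) * survival(T)
               for T in maturities)

def calibrate_lambda(best_estimate_survival, risk_adjusted_survival):
    target = bond_price(best_estimate_survival, spread=delta)      # V(0)
    gap = lambda lam: bond_price(lambda T: risk_adjusted_survival(T, lam)) - target
    return brentq(gap, 0.0, 20.0)   # bracketing interval is an assumption
\end{verbatim}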
\subsection{Longevity Swaps}\label{subsec:swaps} A longevity swap involves counterparties swapping fixed payments for payments linked to the number of survivors in a reference population in a given time period, and can be thought of as a portfolio of S-forwards, see \cite{Dowd2003}. An S-forward, or `survivor' forward, has been developed by \cite{LLMA10}. Longevity swaps can be regarded as a stream of S-forwards with different maturity dates. One of the advantages of using S-forwards is that there is no initial capital requirement at the inception of the contract and cash flows occur only at maturity. Consider an annuity provider who has an obligation to pay an amount dependent on the number of survivors, and hence on the survival probability of a cohort at time $T$. If longevity risk is present, the survival probability is stochastic. In order to protect himself from a larger-than-expected survival probability, the provider can enter into an S-forward contract paying a fixed amount $K\in(0,1)$ and receiving an amount equal to the realised survival probability $\exp{\{-\int^{T}_0\mu_x(v)\,dv\}}$ at time $T$. In doing so, the survival probability that the provider is exposed to is certain, and corresponds to some fixed value $K$. If the contract is priced in such a way that there is no upfront cost at the inception, it must hold that \begin{align} B(0,T)\,E^\mathbb{Q}_0\left(e^{-\int^{T}_0\mu_x(v)\,dv}-K(T)\right)=0 \end{align} under the risk-adjusted measure $\mathbb{Q}$. Thus, the fixed amount can be identified as the risk-adjusted survival probability, that is, \begin{align}\label{eq:KT} K(T)=E^\mathbb{Q}_0\left(e^{-\int^{T}_0\mu_x(v)\,dv}\right). \end{align} Assuming that there is a positive market price of longevity risk, the longevity risk hedger who pays the fixed leg and receives the floating leg bears the cost for entering an S-forward.\footnote{The risk-adjusted survival probability will be larger than the ``best estimate'' $\mathbb{P}$-survival probability if a positive market price of longevity risk is demanded, see Figure~\ref{fig:RiskAdjusted}.} Following terminology in \cite{BiffisBlPiAr14}, the amount $K(T)=\tilde{S}_x(0,T)$ can be referred to as the swap rate of an S-forward with maturity $T$. In general, the mark-to-market price process $F(t)$ of an S-forward with fixed leg $K$ (not necessarily $K(T)$ as in Eq.~\eqref{eq:KT}) is given by \begin{align} F(t) &= B(t,T) E^\mathbb{Q}_t\left(e^{-\int^{T}_0\mu_x(v)\,dv}-K\right)\nonumber\\ &=B(t,T)E^\mathbb{Q}_t\left(e^{-\int^{t}_0\mu_x(v)\,dv}e^{-\int^{T}_t\mu_x(v)\,dv}-K\right)\nonumber\\ &= B(t,T)\left(\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)-K\right) \label{eq:F_t} \end{align} for $t\in [0,T]$. The quantity \begin{equation}\label{eq:survivorindex} \bar{S}_x(0,t)=e^{-\int^{t}_0\mu_x(v)\,dv}\vert_{\mathcal{F}_t} \end{equation} is the realised survival probability, or the survivor index for the cohort, which is observable given $\mathcal{F}_t$. The term $\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)$ that appears in Eq.~\eqref{eq:F_t} has a natural interpretation. Given information $\mathcal{F}_0$ at time $t=0$, this term becomes $\tilde{S}_{x}(0,T)$, which is the risk-adjusted survival probability. As time moves on and more information $\mathcal{F}_t$, with $t\in(0,T)$, is revealed, the term $\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)$ is a product of the realised survival probability of the first $t$ years, and the risk-adjusted survival probability in the next $(T-t)$ years. 
At maturity $T$, this product becomes the realised survival probability up to time $T$. In other words, one can think of $\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)$ as the $T$-year risk-adjusted survival probability with information known up to time $t$. The price process $F(t)$ in Eq.~\eqref{eq:F_t} depends on the swap rate $\tilde{S}_{x+t}(t,T)$ of an S-forward written on the same cohort that is now aged $(x+t)$ at time $t$, with time to maturity $(T-t)$. If a liquid longevity market were developed, the swap rate $\tilde{S}_{x+t}(t,T)$ could be obtained from market data. As $\bar{S}_x(0,t)$ is observable at time $t$, the mark-to-market price process of an S-forward could be considered model-independent. However, since a longevity market is still in its development stage, market swap rates are not available and a model-based risk-adjusted survival probability $\tilde{S}_{x+t}(t,T)$ has to be used instead. An analytical formula for the mark-to-market price of an S-forward can be obtained if the risk-adjusted survival probability is expressed in closed form, which is the case, for example, under the two-factor Gaussian mortality model. Since a longevity swap is constructed as a portfolio of S-forwards, the price of a longevity swap is simply the sum of the individual S-forward prices. \subsection{Longevity Caps}\label{subsec:caps} A longevity cap, which is a portfolio of longevity caplets, provides a similar hedge to a longevity swap but is an option-type instrument. Consider again the scenario described in Section~\ref{subsec:swaps} where an annuity provider aims to hedge against a larger-than-expected $T$-year survival probability of a particular cohort. As an alternative to hedging with an S-forward, the provider can enter into a long position in a longevity caplet with payoff at time $T$ corresponding to \begin{equation}\label{eq:cappayoff} \max{\left\{\left(e^{-\int^{T}_0\mu_x(v)\,dv}-K\right),0\right\}} \end{equation} where $K\in(0,1)$ is the strike price.\footnote{The payoff of a longevity caplet is similar to the payoff of the option embedded in the principal-at-risk bond described in \cite{BiffisBl14}.} If the realised survival probability is larger than $K$, the hedger receives an amount $\left(\exp{\{-\int^{T}_0\mu_x(v)\,dv\}}-K\right)$ from the longevity caplet. This payment can be regarded as a compensation for the increased payments that the provider has to make in the annuity portfolio, due to the larger-than-expected survival probability. No payment is made if the realised survival probability is smaller than or equal to $K$. In other words, the longevity caplet allows the provider to ``cap'' its longevity exposure at $K$ with no downside risk. Since a longevity caplet has a non-negative payoff, it comes at a cost. The price of a longevity caplet \begin{equation}\label{eq:caplet_expectedvalue} C\ell(t; T,K)=B(t,T)E^\mathbb{Q}_t\left(\left(e^{-\int^{T}_0\mu_x(v)\,dv}-K\right)^+\right) \end{equation} under the two-factor Gaussian mortality model is obtained in the following Proposition. 
\begin{proposition}\label{prop:caplet} Under the two-factor Gaussian mortality model (Eq.~\eqref{eq:mi}-Eq.~\eqref{eq:dY2}) the price at time $t$ of a longevity caplet $C\ell(t;T,K)$, with maturity $T$ and strike $K$, is given by \begin{equation}\label{eq:Cl_t} C\ell(t; T,K)=\bar{S}_t\,\tilde{S}_t\,B(t,T)\,\Phi\left(\sqrt{\tilde{\Gamma}(t,T)}-d\right)- KB(t,T)\Phi\left(-d\right) \end{equation} where $\bar{S}_t=\bar{S}_{x}(0,t)$ is the realised survival probability observable at time $t$, $\tilde{S}_t=\tilde{S}_{x+t}(t,T)$ is the risk-adjusted survival probability in the next $(T-t)$ years, $d=\frac{1}{\sqrt{\tilde{\Gamma}(t,T)}}\left(\ln{\{K/(\bar{S}_t\tilde{S}_t)\}}+\frac{1}{2}\tilde{\Gamma}(t,T)\right)$ and $\Phi(\cdot)$ denotes the cumulative distribution function of a standard Gaussian random variable. \end{proposition} \begin{proof} Under the risk-adjusted measure $\mathbb{Q}$, we have, from Proposition~\ref{prop:sp}, that \begin{equation} L \stackrel{\text{def}}{=} -\int^{T}_t\mu_x(v)dv \sim N(-\tilde{\Theta}(t,T),\tilde{\Gamma}(t,T)). \end{equation} Using the simplified notation $\tilde{\Theta}=\tilde{\Theta}(t,T)$, $\tilde{\Gamma}=\tilde{\Gamma}(t,T)$ we can write \begin{align*} C\ell(t; T,K) &= B(t,T)E^\mathbb{Q}_t\left((\bar{S}_t\,e^L-K)^+\right) \\ &= B(t,T)\int^\infty_{-\infty} \frac{1}{\sqrt{2\pi\tilde{\Gamma}}}e^{-\frac{1}{2}\left(\frac{\ell+\tilde{\Theta}}{\sqrt{\tilde{\Gamma}}}\right)^2}\left(\bar{S}_t\,e^\ell-K\right)^+\,d\ell \\ &= B(t,T)\int^\infty_{\frac{\ln{K/\bar{S}_t}+\tilde{\Theta}}{\sqrt{\tilde{\Gamma}}}}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\ell^2} \left(\bar{S}_t\,e^{\ell\sqrt{\tilde{\Gamma}} -\tilde{\Theta}}-K\right)\,d\ell \\ &= B(t,T)\left(\bar{S}_t\,e^{\frac{1}{2}\tilde{\Gamma}-\tilde{\Theta}}\int^\infty_{\frac{\ln{K/\bar{S}_t}+\tilde{\Theta}} {\sqrt{\tilde{\Gamma}}}} \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\left(\ell-\sqrt{\tilde{\Gamma}}\right)^2}\,d\ell- K\int^\infty_{\frac{\ln{K/\bar{S}_t}+\tilde{\Theta}}{\sqrt{\tilde{\Gamma}}}} \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\ell^2}\,d\ell\right). \end{align*} Equation~\eqref{eq:Cl_t} follows using properties of $\Phi(\cdot)$ and noticing that $\tilde{S}_t=e^{\frac{1}{2}\tilde{\Gamma} - \tilde{\Theta}}$, that is, $\tilde{\Theta}=\frac{1}{2}\tilde{\Gamma}-\ln{\tilde{S}_t}$. \end{proof} Similar to an S-forward, the price of a longevity caplet depends on the product term $\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)$. In particular, a longevity caplet is said to be out-of-the-money if $K>\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)$; at-the-money if $K=\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)$; and in-the-money if $K<\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)$. Eq.~\eqref{eq:Cl_t} is verified using the Monte Carlo simulation summarised in Table~\ref{table:compare_capletprice}, where we set $r=4\%$, $\lambda=8.5$ and $t=0$. Other parameters are as specified in Table~\ref{table:modelparas}. \begin{table}[!ht] \center \setlength{\tabcolsep}{1em} \renewcommand{\arraystretch}{1.1} \small{\caption{Pricing longevity caplet $C\ell (0;T,K)$ by the formula (Eq.~\eqref{eq:Cl_t}) and by Monte Carlo simulation of Eq.~\eqref{eq:caplet_expectedvalue}; [ , ] denotes the 95\% confidence interval.}\label{table:compare_capletprice} \begin{tabular}{l|ll} \hline \hline (T, K) & Exact & M.C. 
Simulation \\ \hline (10, 0.6) & 0.15632 & 0.15644 [0.15631, 0.15656] \\ (10, 0.7) & 0.08929 & 0.08941 [0.08928, 0.08954] \\ (10, 0.8) & 0.02261 & 0.02262 [0.02250, 0.02275] \\ (20, 0.3) & 0.08373 & 0.08388 [0.08371, 0.08406] \\ (20, 0.4) & 0.03890 & 0.03897 [0.03879, 0.03914] \\ (20, 0.5) & 0.00525 & 0.00530 [0.00522, 0.00539] \\ \hline \hline \end{tabular}} \end{table} Following the result of Proposition~\ref{prop:caplet}, the two-factor Gaussian mortality model leads to the price of a longevity caplet that is a function of the following variables: \begin{itemize} \item realised survival probability $\bar{S}_x(0,t)$ of the first $t$ years; \item risk-adjusted survival probability $\tilde{S}_{x+t}(t,T)$ in the next $T-t$ years; \item interest rate $r$; \item strike price $K$; \item time to maturity $(T-t)$; and \item standard deviation $\sqrt{\tilde{\Gamma}(t,T)}$, which is a function of the time to maturity and the model parameters. \end{itemize} Since the quantity $\exp\left\{-\int^{T}_0\mu_x(v)\,dv\right\}$ is log-normally distributed under the two-factor Gaussian mortality model, Eq.~\eqref{eq:Cl_t} resembles the Black-Scholes formula for option pricing where the underlying stock price follows a geometric Brownian motion. In our setup, the stock price at time $t$ is replaced by the $T$-year risk-adjusted survival probability $\bar{S}_x(0,t)\,\tilde{S}_{x+t}(t,T)$ with information available up to time $t$. While the stock is traded and can be modelled directly using market data, the underlying of a longevity caplet is the survival probability, which is not tradable but can be determined as an output of the dynamics of the mortality intensity. As a result, the role of the stock price volatility in the Black-Scholes formula is played by the standard deviation of the integral of the mortality intensity $\int^T_t \mu_x(v)\,dv$. Since the integral $\int^T_t \mu_x(v)\,dv$ captures the whole history of the mortality intensity $\mu_x(t)$ from $t$ to $T$ under $\mathbb{Q}$, one can interpret the standard deviation $\sqrt{\tilde{\Gamma}(t,T)}$ as the volatility of the risk-adjusted aggregated longevity risk of a cohort aged $x+t$ at time $t$, for the period from $t$ to $T$. \begin{figure}[h] \begin{center} \includegraphics[width=8cm, height=6cm]{Caplet_Price_Plot.eps}\includegraphics[width=8cm, height=6cm]{Caplet_Price_lambda_Plot.eps} \caption{Caplet price as a function of (left panel) $T$ and $K$ and (right panel) $\lambda$ where $K=0.4$ and $T=20$.}\label{fig:caplet_price_plot} \end{center} \end{figure} The left panel of Figure~\ref{fig:caplet_price_plot} shows caplet prices for a cohort aged $x=65$, using parameters as specified in Table~\ref{table:modelparas}, as a function of time to maturity $T$ and strike $K$. We set $r=0.04$, $\lambda=8.5$ and $t=0$ such that $\bar{S}_x(0,0) = 1$. A lower strike price indicates that the buyer of a caplet is willing to pay more to secure better protection against a larger-than-expected survival probability. On the other hand, as the time to maturity $T$ increases, the underlying survival probability is likely to take smaller values, which leads to a higher probability that the caplet ends up out-of-the-money at maturity for a fixed $K$, see Eq.~\eqref{eq:cappayoff}. Consequently, for a fixed $K$ the caplet price decreases with increasing $T$. The right panel of Figure~\ref{fig:caplet_price_plot} illustrates the effect of the market price of longevity risk $\lambda$ on the caplet price. The price of a caplet increases with increasing $\lambda$. A minimal numerical illustration of the pricing formula in Eq.~\eqref{eq:Cl_t} is given below.
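As an illustration, the following Python sketch (our own, not part of the original pricing framework) evaluates the closed-form price in Eq.~\eqref{eq:Cl_t} and checks it against a direct Monte Carlo estimate of Eq.~\eqref{eq:caplet_expectedvalue}, using the relation $\tilde{\Theta}(t,T)=\frac{1}{2}\tilde{\Gamma}(t,T)-\ln\tilde{S}_t$ from the proof above. The inputs $\bar{S}_t$, $\tilde{S}_t$, $\tilde{\Gamma}(t,T)$ and $r$ are illustrative placeholders rather than values implied by the estimated model parameters.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def caplet_price(S_bar, S_tilde, B, Gamma, K):
    """Closed-form longevity caplet price, Eq. (Cl_t)."""
    d = (np.log(K / (S_bar * S_tilde)) + 0.5 * Gamma) / np.sqrt(Gamma)
    return B * (S_bar * S_tilde * norm.cdf(np.sqrt(Gamma) - d) - K * norm.cdf(-d))

def caplet_price_mc(S_bar, S_tilde, B, Gamma, K, n_paths=200000, seed=0):
    """Monte Carlo check: L = -int_t^T mu dv ~ N(-Theta, Gamma),
    with Theta = Gamma/2 - log(S_tilde) so that E[exp(L)] = S_tilde."""
    rng = np.random.default_rng(seed)
    Theta = 0.5 * Gamma - np.log(S_tilde)
    L = rng.normal(-Theta, np.sqrt(Gamma), n_paths)
    payoff = np.maximum(S_bar * np.exp(L) - K, 0.0)
    return B * payoff.mean()

# Illustrative (not calibrated) inputs: t = 0, T = 10, r = 4%.
S_bar, S_tilde, Gamma, K = 1.0, 0.80, 0.02, 0.7
B = np.exp(-0.04 * 10)
print(caplet_price(S_bar, S_tilde, B, Gamma, K))
print(caplet_price_mc(S_bar, S_tilde, B, Gamma, K))
\end{verbatim}
For such inputs the two estimates agree to within Monte Carlo error, mirroring the comparison reported in Table~\ref{table:compare_capletprice}.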
As shown in Figure~\ref{fig:RiskAdjusted}, a larger value of $\lambda$ leads to an improvement in survival probability under $\mathbb{Q}$. Thus, a higher caplet price is observed since the underlying survival probability is larger (on average) under $\mathbb{Q}$ when $\lambda$ increases, see Eq.~\eqref{eq:caplet_expectedvalue}. Since a longevity cap is constructed as a portfolio of longevity caplets, it can be priced as a sum of individual caplet prices, see also Section~\ref{subsubsec:caphedged_portfolio}. \section{Mortality Model}\label{sec:model} Let $(\Omega,\mathcal{F}_t=\mathcal{G}_t \vee \mathcal{H}_t,\mathbb{P})$ be a filtered probability space where $\mathbb{P}$ is the real world probability measure. The subfiltration $\mathcal{G}_t$ contains information about the dynamics of the mortality intensity while death times of individuals are captured by $\mathcal{H}_t$. It is assumed that the interest rate $r$ is constant, so that $B(0,t) = e^{-r\,t}$ denotes the price of a $t$-year zero coupon bond, and our focus is on the modelling of stochastic mortality. \subsection{Model Specification} For financial risk management applications one requires a stochastic mortality model that is tractable and able to capture well the mortality dynamics at different ages. We work under the affine mortality intensity framework and assume the mortality intensity to be Gaussian such that analytical prices can be derived for longevity options, as described in Section~\ref{sec:derivativepricing}. Gaussian mortality models have been considered in \cite{BauerBoRu10} and \cite{BlackburnSh13} within the forward mortality framework. \cite{LucianoVi08} suggest a Gaussian mortality model where the intensity follows an Ornstein-Uhlenbeck process. In addition, \cite{JevticLuVi13} consider a continuous time cohort model where the underlying mortality dynamics is Gaussian. We consider a two-factor Gaussian mortality model for the mortality intensity process $\mu_{x+t}(t)$ of a cohort aged $x$ at time $t=0$\footnote{For simplicity of notation we replace $\mu_{x+t}(t)$ by $\mu_x(t)$.}: \begin{equation}\label{eq:mi} d\mu_x(t) = dY_1(t)+dY_2(t), \end{equation} where \begin{align} dY_1(t)&=\alpha_1 Y_1(t)\,dt+\sigma_1\,dW_1(t) \label{eq:dY1}\\ dY_2(t)&=(\alpha \,x+\beta)\,Y_2(t)\,dt+\sigma e^{\gamma x}\,dW_2(t) \label{eq:dY2} \end{align} and $dW_1dW_2=\rho\,dt$. The first factor $Y_1(t)$ is a general trend for the intensity process that is common to all ages. The second factor $Y_2(t)$ depends on the initial age through the drift and the volatility terms.\footnote{We can in fact replace $x$ by $x+t$ in Eq.~\eqref{eq:dY2}. Using $x+t$ will take into account the empirical observation that the volatility of mortality tends to increase along with age $x+t$ (Figures ~\ref{fig:mort_rate} and ~\ref{fig:diff_mort_rate}). However, for a Gaussian process the intensity will have a non-negligible probability of reaching negative values when the volatility from the second factor ($\sigma e^{\gamma(x+t)}$) becomes very high, which occurs for example when $x+t > 100$ (given $\gamma>0$). Using $x$ instead of $x+t$ will also make the result in Section~\ref{sec:derivativepricing} easier to interpret. For these reasons we assume that the second factor $Y_2(t)$ depends on the initial age $x$ only.} The initial values $Y_1(0)$ and $Y_2(0)$ of the factors are denoted by $y_1$ and $y^x_2$, respectively. A minimal simulation sketch of these dynamics is given below.
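The factor dynamics in Eq.~\eqref{eq:dY1}--Eq.~\eqref{eq:dY2} can be simulated directly. The following Python sketch uses a simple Euler scheme with correlated Brownian increments; the function name, the monthly time step and the number of paths are our choices, while the parameter values are those reported later in Table~\ref{table:modelparas} for initial age $x=65$.
\begin{verbatim}
import numpy as np

# Parameter values as estimated in Table (modelparas), for initial age x = 65.
alpha1, alpha_, beta_ = 0.0017508, 0.0000615, 0.120931
sigma1, sigma_, gamma_, rho = 0.0022465, 0.0000002, 0.129832, -0.795875
y1, y2 = 0.0021277, 0.0084923      # Y_1(0) and Y_2(0) for x = 65

def simulate_intensity(x=65, T=30.0, dt=1.0/12, n_paths=10000, seed=0):
    """Euler scheme for Eqs. (mi)-(dY2): mu_x(t) = Y_1(t) + Y_2(t)."""
    a2, s2 = alpha_ * x + beta_, sigma_ * np.exp(gamma_ * x)
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    Y1 = np.full(n_paths, y1)
    Y2 = np.full(n_paths, y2)
    mu = np.empty((n_steps + 1, n_paths))
    mu[0] = Y1 + Y2
    for i in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        Y1 = Y1 + alpha1 * Y1 * dt + sigma1 * np.sqrt(dt) * z1
        Y2 = Y2 + a2 * Y2 * dt + s2 * np.sqrt(dt) * z2
        mu[i + 1] = Y1 + Y2
    return mu      # shape (n_steps+1, n_paths)

paths = simulate_intensity()
print(np.percentile(paths[-1], [1, 50, 99]))   # spread of mu_65(30) across paths
\end{verbatim}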
The model is tractable and, for a specific choice of the parameters (when $\alpha=\gamma =0$), has been applied to short-rate modelling in \cite{BrigoMe07}. \begin{proposition}\label{prop:sp} Under the two-factor Gaussian mortality model (Eq.~\eqref{eq:mi}--Eq.~\eqref{eq:dY2}), the $(T-t)$-year expected survival probability of a person aged $x+t$ at time $t$, conditional on the filtration $\mathcal{F}_t$, is given by \begin{align}\label{eq:sp} S_{x+t}(t,T)&\stackrel{\text{def}}{=}E^\mathbb{P}_t\left(e^{-\int^T_t \mu_{x}(v)dv}\right)=e^{\frac{1}{2}\Gamma(t,T) - \Theta(t,T)}, \end{align} where, using $\alpha_2 = \alpha x + \beta$ and $\sigma_2 = \sigma e^{\gamma x}$, \begin{align} \Theta(t,T) &= \frac{(e^{\alpha_1 (T-t)}-1)}{\alpha_1}Y_1(t)+\frac{(e^{\alpha_2 (T-t)}-1)}{\alpha_2}Y_2(t) \label{eq:Theta}\hspace{+0.2cm}\mbox{and}\\ \Gamma(t,T) &=\sum^2_{k=1}\frac{\sigma^2_k}{\alpha^2_k}\left(T-t-\frac{2}{\alpha_k}e^{\alpha_k(T-t)}+\frac{1}{2\alpha_k} e^{2\alpha_k(T-t)}+\frac{3}{2\alpha_k}\right)+ \notag \\ &\frac{2\rho\sigma_1\sigma_2}{\alpha_1\alpha_2}\left(T-t- \frac{e^{\alpha_1(T-t)}-1}{\alpha_1}-\frac{e^{\alpha_2(T-t)}-1}{\alpha_2}+\frac{e^{(\alpha_1+\alpha_2)(T-t)}-1}{\alpha_1+\alpha_2}\right)\label{eq:Gamma} \end{align} are, respectively, the mean and the variance of the integral $\int^T_t \mu_{x}(v)\,dv$, which is Gaussian distributed. \end{proposition} We will use the fact that the integral $\int^T_t \mu_{x}(v)\,dv$ is Gaussian with known mean and variance to derive analytical pricing formulas for longevity options in Section~\ref{sec:derivativepricing}. \begin{proof} Solving Eq.~\eqref{eq:dY1} to obtain an integral form of $Y_1(t)$, we have \begin{equation}\label{eq:2terms} \int^T_tY_1(u)\,du=\int^T_tY_1(t)e^{\alpha_1(u-t)}du+\int^T_t\sigma_1\int^u_te^{\alpha_1(u-v)}dW_1(v)du. \end{equation} The first term in Eq.~\eqref{eq:2terms} can be simplified to \begin{equation*} \int^T_tY_1(t)e^{\alpha_1(u-t)}du=\frac{\left(e^{\alpha_1(T-t)}-1\right)}{\alpha_1}Y_1(t). \end{equation*} For the second term, we have \begin{align*} &\sigma_1 \int^T_te^{\alpha_1 u} \int^u_t e^{-\alpha_1v}dW_1(v)du=\sigma_1\int^T_t\int^u_t e^{-\alpha_1v}dW_1(v)d_u\left(\frac{1}{\alpha_1}e^{\alpha_1 u}\right) \\ &=\frac{\sigma_1}{\alpha_1}\int^T_td_u\left(e^{\alpha_1 u}\int^u_t e^{-\alpha_1 v} dW_1(v)\right) -\frac{\sigma_1}{\alpha_1}\int^T_te^{\alpha_1 u} d_u\left(\int^u_t e^{-\alpha_1 v} dW_1(v)\right) \\ &= \frac{\sigma_1}{\alpha_1}e^{\alpha_1T}\int^T_te^{-\alpha_1 u }dW_1(u)-\frac{\sigma_1}{\alpha_1} \int^T_t e^{\alpha_1 u}e^{-\alpha_1u}dW_1(u)=\frac{\sigma_1}{\alpha_1}\int^T_t\left(e^{\alpha_1(T-u)}-1\right)dW_1(u), \end{align*} where stochastic integration by parts is applied in the second equality. To obtain an integral representation for $Y_2(t)$, we follow the same steps as above, replacing $Y_1(t)$ by $Y_2(t)$ in Eq.~\eqref{eq:2terms}. It is then straightforward to notice that \begin{equation}\label{eq:mort2term} \int^T_t\mu_x(u)\,du = \int^T_t \left(Y_1(u)+Y_2(u)\right)du \end{equation} is a Gaussian random variable with mean $\Theta(t,T)$ (Eq.~\eqref{eq:Theta}) and variance $\Gamma(t,T)$ (Eq.~\eqref{eq:Gamma}). Equation \eqref{eq:sp} is obtained by applying the moment generating function of a Gaussian random variable. \end{proof} \subsection{Parameter Estimation}\label{subsec:estimation} The discretised process, where the intensity is assumed to be constant over each integer age and calendar year, is approximated by the central death rates $m(x,t)$ (\cite{WillsSh11}).
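Before turning to the estimation, we note that the quantities in Proposition~\ref{prop:sp}, and hence the model survival curve $S_x(0,T)$ that enters the least-squares fit below, can be evaluated directly. A minimal Python sketch is given here; the function names and the flat parameter interface are ours, and the factor values and parameters are to be supplied by the user (for the fit one would use $t=0$, $Y_1(0)=y_1$ and $Y_2(0)=y^x_2$).
\begin{verbatim}
import numpy as np

def theta_gamma(t, T, Y1, Y2, a1, a2, s1, s2, rho):
    """Mean Theta(t,T) and variance Gamma(t,T) of int_t^T mu_x(v) dv,
    Eqs. (Theta) and (Gamma) of Proposition (sp)."""
    h = T - t
    e1, e2 = np.exp(a1 * h), np.exp(a2 * h)
    theta = (e1 - 1.0) / a1 * Y1 + (e2 - 1.0) / a2 * Y2
    gam = 0.0
    for a_k, s_k, e_k in ((a1, s1, e1), (a2, s2, e2)):
        gam += (s_k / a_k) ** 2 * (
            h - 2.0 / a_k * e_k + e_k ** 2 / (2.0 * a_k) + 3.0 / (2.0 * a_k))
    gam += (2.0 * rho * s1 * s2 / (a1 * a2)) * (
        h - (e1 - 1.0) / a1 - (e2 - 1.0) / a2
        + (np.exp((a1 + a2) * h) - 1.0) / (a1 + a2))
    return theta, gam

def survival_probability(t, T, Y1, Y2, a1, a2, s1, s2, rho):
    """S_{x+t}(t,T) = exp(Gamma(t,T)/2 - Theta(t,T)), Eq. (sp)."""
    theta, gam = theta_gamma(t, T, Y1, Y2, a1, a2, s1, s2, rho)
    return np.exp(0.5 * gam - theta)

# Model survival curve S_x(0,T) for T = 1,...,31 (a cohort aged x):
# [survival_probability(0, T, Y1, Y2, a1, a2, s1, s2, rho) for T in range(1, 32)]
\end{verbatim}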
Figure~\ref{fig:mort_rate} displays Australian male central death rates $m(x,t)$ for years $t=1970,1971,\dots,2008$ and ages $x = 60,61,\dots,95$. Figure~\ref{fig:diff_mort_rate} shows the difference of the central death rates $\Delta m(x,t) = m(x+1,t+1)-m(x,t)$. The variability of $\Delta m(x,t)$ is evidently increasing with increasing age $x$, which leads us to anticipate that $\gamma>0$. Furthermore, for a fixed age $x$, there is a slight improvement in central death rates for more recent years, compared to the past. \begin{figure}[h] \begin{center} \includegraphics[width=12cm, height=9cm]{CentralMortalityRates_1970_08.eps} \caption{Australian male central death rates $m(x,t)$ where $t =1970,1971,\dots,2008$ and $x = 60,61,\dots,95$.}\label{fig:mort_rate} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=12cm, height=9cm]{Diff_CentralMortalityRates_1970_08.eps} \caption{Difference of the central death rates $\Delta m(x,t) = m(x+1,t+1)-m(x,t)$ where $t =1970,1971,\cdots,2007$ and $x = 60,61,\cdots,94$.}\label{fig:diff_mort_rate} \end{center} \end{figure} The parameters $\{\sigma_1,\sigma, \gamma,\rho\}$, which determine the volatility of the intensity process, are estimated as described below. As in \cite{JevticLuVi13}, we estimate the parameters using the method of least squares, thus calibrating the model to the mortality surface. However, we take advantage of the fact that, for a Gaussian model, the variance can be calculated explicitly, and we therefore capture the diffusion part of the process by matching the model variance to the variance observed in the mortality data. Specifically, the procedure is as follows: \begin{enumerate} \item Using empirical data for ages $x = 60, 65,\dots, 90$ we evaluate the sample variance of $\Delta m(x,t)$ across time, denoted by $\text{Var}(\Delta m_x)$. \item The model variance $\text{Var}(\Delta \mu_x)$ for age $x$ is given by \begin{align}\nonumber \text{Var}(\Delta \mu_x) &= \text{Var}(\sigma_1\Delta W_1 + \sigma e^{\gamma x} \Delta W_2)\\ &=\left(\sigma_1^2+2\sigma_1\sigma\rho e^{\gamma x}+\sigma^2e^{2\gamma x}\right)\Delta t. \end{align} Since the difference between the death rates is computed in yearly terms, we set $\Delta t =1$. \item The parameters $\{\sigma_1,\sigma, \gamma,\rho\}$ are then estimated by fitting the model variance $\text{Var}(\Delta \mu_x)$ to the sample variance $\text{Var}(\Delta m_x)$ for ages $x = 60, 65,\dots, 90$ using least squares estimation, that is, by minimising \begin{equation} \sum^{90}_{x=60,65\dots}\left(\text{Var}(\Delta \mu_x|\sigma_1,\sigma,\gamma,\rho)-\text{Var}(\Delta m_x)\right)^2 \end{equation} with respect to the parameters $\{\sigma_1,\sigma, \gamma,\rho\}$. \end{enumerate} The remaining parameters $\{\alpha_1,\alpha, \beta, y_1, y^{65}_2, y^{75}_2\}$ are then estimated as described below\footnote{We calibrate the model for ages 65 and 75 simultaneously to obtain reasonable values for $\alpha$ and $\beta$ since the drift of the second factor $Y_2(t)$ is age-dependent.}: \begin{enumerate} \item From the central death rates, we obtain empirical survival curves for cohorts aged 65 and 75 in 2008.
The survival curve is obtained by setting \begin{equation} \hat{S}_x(0,T)=\prod^T_{v=1}(1-m(x+v-1,0)) \end{equation} where $m(x,t)$ is the central death rate of an $x$-year-old at time $t$.\footnote{Here $t=0$ represents calendar year 2008 and we approximate the 1-year survival probability $e^{-m(x+v-1,0)}$ by $1-m(x+v-1,0)$.} \item The parameters $\{\alpha_1,\alpha, \beta, y_1, y^{65}_2, y^{75}_2\}$ are then estimated by fitting the survival curves ($S_x(0,T)$) of the model to the empirical survival curves using least squares estimation, that is, by minimising \begin{equation} \sum_{x=65,75}\sum^{T_x}_{j=1}\left(\hat{S}_x(0,j)-S_x(0,j)\right)^2 \end{equation} where $T_{65}=31$ and $T_{75}=21$, with respect to the parameters $\{\alpha_1,\alpha, \beta, y_1, y^{65}_2, y^{75}_2\}$. \end{enumerate} The estimated parameters are reported in Table~\ref{table:modelparas}. Since $\gamma>0$, the volatility of the process is higher for an older (initial) age $x$. \begin{table}[!ht] \center \setlength{\tabcolsep}{1em} \renewcommand{\arraystretch}{1.1} \small{\caption{Estimated model parameters.}\label{table:modelparas} \begin{tabular}{lllll} \hline \hline $\sigma_1$ & $\sigma$ & $\gamma$ & $\rho$ & $\alpha_1$ \\ \hline 0.0022465 & 0.0000002 & 0.129832 & -0.795875 & 0.0017508 \\ \hline \hline $\alpha$ & $\beta$ & $y_1$ & $y^{65}_2$ & $y^{75}_2$ \\ \hline 0.0000615 & 0.120931 & 0.0021277 & 0.0084923 & 0.0294695 \\ \hline \hline \end{tabular}} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=12cm]{Sim_mu_SP.eps} \caption{Percentiles of the simulated intensity processes $\mu_{65}(t)$ and $\mu_{75}(t)$ for Australian males aged $65$ (upper left panel) and $75$ (upper right panel) in 2008, with their corresponding survival probabilities (the mean and the $99\%$ confidence bands) for a $65$ years old (lower left panel) and $75$ years old (lower right panel).}\label{fig:sim_mu} \end{center} \end{figure} The upper panel of Figure~\ref{fig:sim_mu} shows the percentiles of the simulated mortality intensity for ages 65 and 75 in the left and the right panel, respectively. One observes that the volatility of the mortality intensity is higher for a 75-year-old than for a 65-year-old. Corresponding survival probabilities are displayed in the lower panel of Figure~\ref{fig:sim_mu}, together with the $99\%$ confidence bands computed pointwise. As is evident from the figures, the two-factor Gaussian model specified above, despite its simplicity, produces reasonable mortality dynamics for ages 65 and 75. \section{Managing Longevity Risk in a Hypothetical Life Annuity Portfolio} \label{sec:hypotheticalexample} Hedging features of a longevity swap and cap are examined for a hypothetical life annuity portfolio subject to longevity risk. Factors considered include the market price of longevity risk, the term to maturity of hedging instruments and the size of the underlying annuity portfolio. \subsection{Setup}\label{sec:setup} We consider a hypothetical life annuity portfolio that consists of a cohort aged $x=65$. The size of the portfolio, which corresponds to the number of policyholders, is denoted by $n$. The underlying mortality intensity for the cohort follows the two-factor Gaussian mortality model described in Section~\ref{sec:model}, and the model parameters are specified in Table~\ref{table:modelparas}. We assume that there is no loading for the annuity policy and expenses are not included.
Further, we assume a single-premium whole life annuity of $\$1$ per year, payable in arrears conditional on the survival of the annuitant to the payment dates. The fair value, or the premium, of the annuity evaluated at $t=0$ is given by \begin{equation}\label{eq:annuityprice} a_{x} = \sum^{\omega-x}_{T=1} B(0,T) \, \tilde{S}_{x}(0,T) \end{equation} where $r=4\%$ and $\omega=110$ is the maximum age allowed in the mortality model. The life annuity provider thus receives a total premium, denoted by $A$, for the whole portfolio corresponding to the sum of individual premiums: \begin{equation} A = n\,a_x. \end{equation} This is the present value of the asset held by the annuity provider at $t=0$. Since the promised annuity cashflows depend on the death times of annuitants in the portfolio, the present value of the liability is subject to randomness caused by the stochastic dynamics of the mortality intensity. The present value of the liability for each policyholder, denoted by $L_k$, is determined by the death time $\tau_k$ of the policyholder, and is given by \begin{equation} L_k = \sum^{\lfloor \tau_k \rfloor}_{T=1} B(0,T) \end{equation} for a simulated $\tau_k$, with $\lfloor q \rfloor$ denoting the largest integer not exceeding a real number $q$. The present value of the liability $L$ for the whole portfolio is obtained as a sum of individual liabilities: \begin{equation} L = \sum^{n}_{k=1}L_k. \end{equation} The algorithm for simulating death times of annuitants, which requires a single simulated path for the mortality intensity of the cohort, is summarised in Appendix~\ref{A2}. The discounted surplus distribution ($D_{\text{no}}$) of an unhedged annuity portfolio is obtained by setting \begin{equation}\label{disc_S_sample} D_{\text{no}} = A - L. \end{equation} The impact of longevity risk is captured by simulating the discounted surplus distribution where each sample is determined by the realised mortality intensity of a cohort. Since traditional pricing and risk management of life annuities relies on the diversification effect, or the law of large numbers, we consider the discounted surplus distribution per policy \begin{equation} D_{\text{no}}/n. \end{equation} Figure~\ref{fig:lawlargeno} shows the discounted surplus distribution per policy without longevity risk (i.e. when setting $\sigma_1 = \sigma =0$) with different portfolio sizes, varying from $n=2000$ to $8000$. As expected, the mean of the distribution is centred around zero as there is no loading assumed in the pricing algorithm, while the standard deviation diminishes as the number of policies increases. In the following we consider a longevity swap and a cap as hedging instruments. These are index-based instruments where the payoffs depend on the survivor index, or the realised survival probability (Eq.~\eqref{eq:survivorindex}), which is in turn determined by the realised mortality intensity. We do not consider basis risk\footnote{If basis risk is present, we need to distinguish between the mortality intensity for the population ($\mu_x^I$) and mortality intensity for the cohort ($\mu_x$) underlying the annuity portfolio, see \cite{BiffisBlPiAr14}.} but, due to the finite portfolio size, the actual proportion of survivors, $\frac{n-N_t}{n}$, where $N_t$ denotes the number of deaths experienced by a cohort during the period $[0,t]$, will in general be similar, but not identical, to the survivor index (Appendix~\ref{A2}). A minimal sketch of the premium and surplus computation for a single simulated path is given below.
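The following Python sketch outlines this computation for one realised intensity path. It is a simplified stand-in for the algorithm of Appendix~\ref{A2}, which is not reproduced here: death times are drawn by inverse transform from the realised survival curve, which is assumed to be given (for example, from a simulated path) together with the risk-adjusted survival curve used for pricing; all function names are ours.
\begin{verbatim}
import numpy as np

def annuity_premium(S_tilde, r):
    """Premium a_x = sum_T B(0,T)*S_tilde_x(0,T), Eq. (annuityprice);
    S_tilde[T-1] is the risk-adjusted survival probability to year T."""
    B = np.exp(-r * np.arange(1, len(S_tilde) + 1))
    return float(np.sum(B * S_tilde))

def unhedged_surplus_per_policy(a_x, S_bar, r, n=4000, seed=0):
    """One sample of D_no/n for a single realised intensity path.
    S_bar[T-1] is the realised survival probability exp(-int_0^T mu_x dv),
    assumed decreasing in T."""
    rng = np.random.default_rng(seed)
    horizon = len(S_bar)
    B = np.exp(-r * np.arange(1, horizon + 1))
    u = rng.uniform(size=n)
    # floor(tau_k): number of complete years survived by policyholder k
    years = (u[:, None] < S_bar[None, :]).sum(axis=1)
    cum_B = np.concatenate(([0.0], np.cumsum(B)))   # sum_{T=1}^{m} B(0,T)
    L = cum_B[years].sum()                          # total liability, sum of L_k
    A = n * a_x                                     # total premium
    return (A - L) / n
\end{verbatim}
Repeating the second function over many simulated survival curves produces the discounted surplus distribution per policy.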
As a result of this finite-size effect, the static hedge will be able to reduce systematic mortality risk, whereas the idiosyncratic mortality risk component will be retained by the annuity provider. \begin{figure}[h] \begin{center} \includegraphics[width=12cm, height=9cm]{LawLargeNo.eps} \caption{Discounted surplus distribution per policy without longevity risk with different portfolio size ($n$).}\label{fig:lawlargeno} \end{center} \end{figure} \subsubsection{A Swap-Hedged Annuity Portfolio} For an annuity portfolio hedged by an index-based longevity swap, payments from the swap \begin{equation}\label{eq:swappayoff} n\, \left( e^{-\int^{T}_0\mu_x(v)\,dv}-K(T) \right) \end{equation} at time $T \in\{1,...,\hat{T}\}$ depend on the realised mortality intensity, where $\hat{T}$ denotes the term to maturity of the longevity swap. The number of policyholders $n$ acts as the notional amount of the swap contract so that the quantity $n \exp\{-\int^{T}_0\mu_x(v)\,dv\}$ represents the number of survivors implied by the realised mortality intensity at time $T$. We fix the strike of a swap to the risk-adjusted survival probability, that is, \begin{equation} K(T)= \tilde{S}_x(0,T)=E^\mathbb{Q}_0\left(e^{-\int^{T}_0\mu_x(v)\,dv}\right) \end{equation} such that the price of a swap is zero at $t=0$, see Section~\ref{subsec:swaps}. The discounted surplus distribution of a swap-hedged annuity portfolio can be expressed as \begin{equation} D_{\text{swap}} = A - L + F_{\text{swap}} \end{equation} where \begin{equation}\label{eq:Fswap} F_{\text{swap}} = n\,\sum^{\hat{T}}_{T=1} B(0,T)\, \left( e^{-\int^{T}_0\mu_x(v)\,dv}-\tilde{S}_x(0,T) \right) \end{equation} is the (random) discounted cashflow coming from a long position in the longevity swap. The discounted surplus distribution per policy of a swap-hedged annuity portfolio is determined by $D_{\text{swap}}/n$. \subsubsection{A Cap-Hedged Annuity Portfolio}\label{subsubsec:caphedged_portfolio} For an annuity portfolio hedged by an index-based longevity cap, the cashflows \begin{equation} n\,\max{\left\{\left(e^{-\int^{T}_0\mu_x(v)\,dv}-K(T)\right),0\right\}} \end{equation} at $T \in\{1,...,\hat{T}\}$ are payments from a long position in the longevity cap. We set \begin{equation} K(T) = S_x(0,T)=E^\mathbb{P}_0\left(e^{-\int^{T}_0\mu_x(v)\,dv}\right) \end{equation} such that the strike for a longevity caplet is the ``best estimate'' survival probability given $\mathcal{F}_0$.\footnote{For a longevity swap, the risk-adjusted survival probability is used as a strike price so that the price of a longevity swap is zero at inception. In contrast, a longevity cap has a non-zero price and $S_x(0,T)$ is the most natural choice for a strike.} The discounted surplus distribution of a cap-hedged annuity portfolio is given by \begin{equation} D_{\text{cap}} = A - L + F_{\text{cap}} - C_{\text{cap}} \end{equation} where \begin{equation}\label{eq:Fcap} F_{\text{cap}} = n\,\sum^{\hat{T}}_{T=1} B(0,T)\, \max{\left\{\left(e^{-\int^{T}_0\mu_x(v)\,dv}-S_x(0,T)\right),0\right\}} \end{equation} is the (random) discounted cashflow from holding the longevity cap and \begin{equation} C_{\text{cap}} = n\, \sum^{\hat{T}}_{T=1}C\ell\left(0; T,S_x(0,T)\right) \end{equation} is the price of the longevity cap. The discounted surplus distribution per policy of a cap-hedged annuity portfolio is given by $D_{\text{cap}}/n$. \subsection{Results} Hedging results are summarised by means of summary statistics that include mean, standard deviation (std.
dev.), skewness, as well as Value-at-Risk (VaR) and Expected Shortfall (ES) of the discounted surplus distribution per policy of an unhedged, a swap-hedged and a cap-hedged annuity portfolio. Skewness is included since the payoff of a longevity cap is nonlinear and the resulting distribution of a cap-hedged annuity portfolio is not symmetric. VaR is defined as the $q$-quantile of the discounted surplus distribution per policy. ES is defined as the expected loss of the discounted surplus distribution per policy given that the loss is at or below the $q$-quantile. We fix $q = 0.01$ so that the confidence level for VaR and ES corresponds to $99\%$. We use 5,000 simulations to obtain the distribution for the discounted surplus. Hedge effectiveness is examined with respect to (w.r.t.) different assumptions on the market price of longevity risk ($\lambda$), the term to maturity of hedging instruments ($\hat{T}$) and the portfolio size ($n$). Parameters for the base case are as specified in Table~\ref{table:basecase}. \begin{table}[!ht] \center \setlength{\tabcolsep}{1em} \renewcommand{\arraystretch}{1.1} \small{\caption{Parameters for the base case.}\label{table:basecase} \begin{tabular}{lll} \hline \hline $\lambda$ & $\hat{T}$ (years) & $n$ \\ \hline 8.5 & { }{ }{ }{ }30 & 4000 \\ \hline \hline \end{tabular}} \end{table} \subsubsection{Hedging Features w.r.t. Market Price of Longevity Risk} \label{subsubsec:hedge_pricerisk} \begin{figure}[!ht] \begin{center} \includegraphics[width=15.5cm, height=12cm]{HedgePriceRisk.eps} \caption{Effect of the market price of longevity risk $\lambda$ on the discounted surplus distribution per policy.}\label{fig:hedge_pricerisk} \end{center} \end{figure} \begin{table}[h] \center\small{\caption{\label{tab:hedge_pricerisk}Hedging features of a longevity swap and cap w.r.t. market price of longevity risk $\lambda$.} \begin{tabular*}{1.0\textwidth}% {@{\extracolsep{\fill}}l|rrrrr} \hline \hline & Mean & Std.dev. & Skewness &VaR$_{0.99}$ & ES$_{0.99}$ \\ \hline &\multicolumn{5}{c}{$\lambda = 0$}\\ \hline No hedge & -0.0076 & 0.3592 & -0.2804 & -0.9202 & -1.1027 \\ Swap-hedged & -0.0089 & 0.0718 & -0.1919 & -0.1840 & -0.2231 \\ Cap-hedged & -0.0086 & 0.2054 & 1.0855 & -0.3193 & -0.3515 \\ \hline &\multicolumn{5}{c}{$\lambda = 4.5$}\\ \hline No hedge & 0.1520 & 0.3592 & -0.2804 & -0.7606 & -0.9431 \\ Swap-hedged & 0.0048 & 0.0718 & -0.1919 & -0.1703 & -0.2094 \\ Cap-hedged & 0.0682 & 0.2054 & 1.0855 & -0.2425 & -0.2746 \\ \hline &\multicolumn{5}{c}{$\lambda = 8.5$}\\ \hline No hedge & 0.2978 & 0.3592 & -0.2804 & -0.6148 & -0.7973 \\ Swap-hedged & 0.0204 & 0.0718 & -0.1919 & -0.1547 & -0.1938 \\ Cap-hedged & 0.1205 & 0.2054 & 1.0855 & -0.1903 & -0.2224 \\ \hline &\multicolumn{5}{c}{$\lambda = 12.5$}\\ \hline No hedge & 0.4475 & 0.3592 & -0.2804 & -0.4650 & -0.6476 \\ Swap-hedged & 0.0398 & 0.0718 & -0.1919 & -0.1354 & -0.1744 \\ Cap-hedged & 0.1619 & 0.2054 & 1.0855 & -0.1489 & -0.1810 \\ \hline \hline \end{tabular*} } \end{table} The market price of longevity risk $\lambda$ is one of the factors that determines prices of longevity derivatives and life annuity policies. Since payoffs of a longevity swap, a cap and a life annuity are contingent on the same underlying mortality intensity of a cohort, all these products are priced using the same $\lambda$. Figure~\ref{fig:hedge_pricerisk} and Table~\ref{tab:hedge_pricerisk} illustrate the effect of changing $\lambda$ on the distributions of an unhedged, a swap-hedged and a cap-hedged annuity portfolio.
The degree of longevity risk can be quantified by the standard deviation, the VaR and the ES of the distributions. We observe that increasing $\lambda$ leads to a shift of the distribution to the right, resulting in a higher average surplus. On the other hand, changing $\lambda$ has no impact on the standard deviation and the skewness of the distribution. For an unhedged annuity portfolio, a higher $\lambda$ leads to a higher premium for the life annuity policy since the annuity price is determined by the risk-adjusted survival probability $\tilde{S}_x(0,T)$, see Eq.~\eqref{eq:annuityprice}. In other words, an increase in the annuity price compensates the provider for the longevity risk undertaken when selling life annuity policies. There is also a trade-off between risk premium and affordability. Setting a higher premium will clearly improve the risk and return of an annuity business; it might, however, reduce the interest of potential policyholders. An empirical relationship between implied longevity and annuity prices is studied in \cite{ChMiSa14}. When the life annuity portfolio is hedged using a longevity swap, the standard deviation and the absolute values of the VaR and the ES reduce substantially. The higher return obtained by charging a larger market price of longevity risk in life annuity policies is offset by an increased price paid implicitly in the swap contract (since $\tilde{S}_x(0,T) \geq S_x(0,T)$ in Eq.~\eqref{eq:Fswap}). It turns out that, as $\lambda$ increases, the extra return earned in the annuity portfolio and the higher implicit cost of the longevity swap nearly offset each other on average. The net effect is that a swap-hedged annuity portfolio remains to a great extent unaffected by the assumption on $\lambda$, leading only to a very minor increase in the mean of the distribution. For a cap-hedged annuity portfolio, the discounted surplus distribution is positively skewed since a longevity cap allows an annuity provider to get exposure to the upside potential when policyholders live shorter than expected. Compared to an unhedged portfolio, the standard deviation and the absolute values of the VaR and the ES are also reduced, but the reduction is smaller than for a swap-hedged portfolio. When $\lambda$ increases, we observe that the mean of the distribution for a cap-hedged portfolio increases faster than for a swap-hedged portfolio but slower than for an unhedged portfolio. This can be explained by noticing that when the survival probability of a cohort is overestimated, that is, when annuitants turn out to live shorter than expected, holding a longevity cap has no effect (besides paying the price of a cap for longevity protection at the inception of the contract) while there is a cash outflow when holding a longevity swap, see Eq.~\eqref{eq:Fswap} and Eq.~\eqref{eq:Fcap}. In the longevity risk literature, the VaR and the ES are of particular importance as they are the main factors determining the capital reserve when dealing with exposure to longevity risk (\cite{MeyrickeSh14}). As shown in Table~\ref{tab:hedge_pricerisk}, the difference between a swap-hedged and a cap-hedged portfolio in terms of the VaR and the ES becomes smaller when $\lambda$ increases. In fact, for $\lambda \geq 17.5$, a longevity cap becomes more effective in reducing the tail risk of an annuity portfolio compared to a longevity swap.\footnote{Given $\lambda = 17.5$, the VaR and the ES for a swap-hedged portfolio are $-0.1051$ and $-0.1441$, respectively.
For a cap-hedged portfolio they become $-0.1038$ and $-0.1360$, respectively.} This result suggests that a longevity cap, besides being able to capture the upside potential, can be a more effective hedging instrument than a longevity swap in terms of reducing the VaR and the ES when the demanded market price of longevity risk $\lambda$ is large. \subsubsection{Hedging Features w.r.t. Term to Maturity} \begin{figure}[!ht] \begin{center} \includegraphics[width=15.5cm, height=12cm]{HedgeTermMaturity.eps} \caption{Effect of the term to maturity $\hat{T}$ of the hedging instruments on the discounted surplus distribution per policy.}\label{fig:hedge_term} \end{center} \end{figure} \begin{table}[h] \center\small{\caption{\label{tab:hedge_term} Hedging features of a longevity swap and cap w.r.t. term to maturity $\hat{T}$.} \begin{tabular*}{1.0\textwidth}% {@{\extracolsep{\fill}}l|rrrrr} \hline \hline & Mean & Std.dev. & Skewness &VaR$_{0.99}$ & ES$_{0.99}$ \\ \hline &\multicolumn{5}{c}{$\hat{T} = 10$ Years}\\ \hline No hedge & 0.2978 & 0.3592 & -0.2804 & -0.6148 & -0.7973 \\ Swap-hedged & 0.2820 & 0.2911 & -0.3871 & -0.5707 & -0.7490 \\ Cap-hedged & 0.2893 & 0.2989 & -0.2661 & -0.5801 & -0.7592 \\ \hline &\multicolumn{5}{c}{$\hat{T} = 20$ Years}\\ \hline No hedge & 0.2978 & 0.3592 & -0.2804 & -0.6148 & -0.7973 \\ Swap-hedged & 0.1740 & 0.1794 & -0.7507 & -0.3656 & -0.5061 \\ Cap-hedged & 0.2234 & 0.2310 & 0.2006 & -0.3870 & -0.5259 \\ \hline &\multicolumn{5}{c}{$\hat{T} = 30$ Years}\\ \hline No hedge & 0.2978 & 0.3592 & -0.2804 & -0.6148 & -0.7973 \\ Swap-hedged & 0.0204 & 0.0718 & -0.1919 & -0.1547 & -0.1938 \\ Cap-hedged & 0.1205 & 0.2054 & 1.0855 & -0.1903 & -0.2224 \\ \hline &\multicolumn{5}{c}{$\hat{T} = 40$ Years}\\ \hline No hedge & 0.2978 & 0.3592 & -0.2804 & -0.6148 & -0.7973 \\ Swap-hedged & -0.0091 & 0.0668 & 0.0277 & -0.1616 & -0.1869 \\ Cap-hedged & 0.0984 & 0.1999 & 1.1527 & -0.1909 & -0.2131 \\ \hline \hline \end{tabular*} } \end{table} Table~\ref{tab:hedge_term} and Figure~\ref{fig:hedge_term} summarize hedging results with respect to the term to maturity of hedging instruments. Due to the long-term nature of the contracts, the hedges are ineffective for $\hat{T} \leq 10$ years and the standard deviations are reduced only by around $17$--$19\%$ for both instruments. The lower left panel of Figure~\ref{fig:sim_mu} shows that there is little randomness around the realised survival probability for the first few years for a cohort aged 65, and consequently the hedges are insignificant when $\hat{T}$ is short. The difference in hedge effectiveness between $\hat{T}= 30$ and $\hat{T} =40$ for both instruments is also insignificant. In fact, the longevity risk underlying the annuity portfolio becomes small after $30$ years since the majority of annuitants have already died before reaching the age of 95. In our model setup the chance for a 65-year-old to live to 95 is around $6\%$ (Figure~\ref{fig:RiskAdjusted} with $\lambda=0$) and, hence, only around $4000 \times 6\% = 240$ policies will still be in force after 30 years. Much of the remaining risk is attributed to idiosyncratic mortality risk, and hedging longevity risk for a small portfolio using index-based instruments is of limited use. For a swap-hedged portfolio, the standard deviation is reduced significantly when $\hat{T} > 20$ years. The mean surplus, on the other hand, drops to nearly zero since the implicit cost of the hedge rises with the number of S-forwards forming the swap as $\hat{T}$ increases.
Similar hedging features with respect to $\hat{T}$ are observed for a longevity cap. However, the skewness of the distribution of a cap-hedged portfolio increases with increasing $\hat{T}$. This can be explained by noticing that, while a longevity cap captures the upside potential regardless of $\hat{T}$, it provides better longevity risk protection when $\hat{T}$ is larger. As a result, the distribution of a cap-hedged portfolio becomes more asymmetric when $\hat{T}$ increases. \subsubsection{Hedging Features w.r.t. Portfolio Size}\label{subsubsec:portfoliosize} \begin{figure}[!ht] \begin{center} \includegraphics[width=15.5cm, height=12cm]{HedgePortfolioSize.eps} \caption{Effect of the portfolio size $n$ on the discounted surplus distribution per policy.}\label{fig:portfoliosize} \end{center} \end{figure} \begin{table}[h] \center\small{\caption{\label{tab:hedge_portfoliosize} Hedging features of a longevity swap and cap w.r.t. different portfolio size ($n$).} \begin{tabular*}{1.0\textwidth}% {@{\extracolsep{\fill}}l|rrrrr} \hline \hline & Mean & Std.dev. & Skewness &VaR$_{0.99}$ & ES$_{0.99}$ \\ \hline &\multicolumn{5}{c}{$n = 2000$}\\ \hline No hedge & 0.2973 & 0.3646 & -0.2662 & -0.6360 & -0.8107 \\ Swap-hedged & 0.0200 & 0.0990 & -0.1615 & -0.2120 & -0.2653 \\ Cap-hedged & 0.1200 & 0.2160 & 0.9220 & -0.2432 & -0.2944 \\ \hline &\multicolumn{5}{c}{$n = 4000$}\\ \hline No hedge & 0.2978 & 0.3592 & -0.2804 & -0.6148 & -0.7973 \\ Swap-hedged & 0.0204 & 0.0718 & -0.1919 & -0.1547 & -0.1938 \\ Cap-hedged & 0.1205 & 0.2054 & 1.0855 & -0.1903 & -0.2224 \\ \hline &\multicolumn{5}{c}{$n = 6000$}\\ \hline No hedge & 0.2977 & 0.3566 & -0.2786 & -0.6363 & -0.8001 \\ Swap-hedged & 0.0204 & 0.0594 & -0.3346 & -0.1259 & -0.1660 \\ Cap-hedged & 0.1204 & 0.2016 & 1.1519 & -0.1639 & -0.2051 \\ \hline &\multicolumn{5}{c}{$n = 8000$}\\ \hline No hedge & 0.2982 & 0.3554 & -0.2920 & -0.6060 & -0.7876 \\ Swap-hedged & 0.0209 & 0.0536 & -0.5056 & -0.1190 & -0.1595 \\ Cap-hedged & 0.1209 & 0.1992 & 1.1616 & -0.1598 & -0.1991 \\ \hline \hline \end{tabular*} } \end{table} Table~\ref{tab:hedge_portfoliosize} and Figure~\ref{fig:portfoliosize} demonstrate hedging features of a longevity swap and a cap with changing portfolio size $n$. We observe a decrease in the standard deviation, as well as in the VaR and the ES (in absolute terms), when the portfolio size increases. Relative to an unhedged portfolio, the reduction in the standard deviation and the risk measures is larger for a swap-hedged portfolio than for a cap-hedged portfolio. Recall that idiosyncratic mortality risk becomes significant when $n$ is small. We quantify the effect of the portfolio size on hedge effectiveness by introducing the measure of longevity risk reduction $R$, defined in terms of the variance of the discounted surplus per policy, that is, \begin{equation} R = 1- \frac{\text{Var}(\bar{D}^*)}{\text{Var}(\bar{D})}, \end{equation} where $\text{Var}(\bar{D}^*)$ and $\text{Var}(\bar{D})$ represent the variances of the discounted surplus distribution per policy for a hedged and an unhedged annuity portfolio, respectively. The results are reported in Table~\ref{tab:R}. \begin{table}[h] \center \setlength{\tabcolsep}{1em} \renewcommand{\arraystretch}{1.1} \center\small{\caption{\label{tab:R} Longevity risk reduction $R$ of a longevity swap and cap w.r.t.
different portfolio size ($n$).}} \begin{tabular}{lllll} \hline \hline $n$ & 2000 & 4000 & 6000 & 8000 \\ \hline $R_\text{swap}$ & $92.6\%$ & $96.0\%$ & $97.2\%$ & $97.7\%$ \\ \hline $R_\text{cap}$ & $64.9\%$ & $67.3\%$ & $68.0\%$ & $68.6\%$ \\ \hline \hline \end{tabular} \end{table} \cite{LiHa11} consider hedging longevity risk using a portfolio of q-forwards and find a longevity risk reduction of $77.6\%$ and $69.6\%$ for portfolio sizes of 10,000 and 3,000, respectively. In contrast to \cite{LiHa11}, we do not consider basis risk, and using a longevity swap as a hedging instrument leads to a greater risk reduction. Overall, our results indicate that hedge effectiveness for an index-based longevity swap and a cap diminishes with decreasing $n$ since idiosyncratic mortality risk cannot be effectively diversified away for a small portfolio size. Even though a longevity cap is less effective in reducing the variance, part of the dispersion is attributable to its ability to capture the upside of the distribution when the survival probability of a cohort is overestimated. From Table~\ref{tab:hedge_portfoliosize} we also observe that the distribution becomes more positively skewed for a cap-hedged portfolio when $n$ increases, which is a consequence of having a larger exposure to longevity risk with an increasing number of policyholders in the portfolio. The summary statistics and the risk-reduction measure used in this section can be computed from the simulated per-policy surpluses as sketched below.
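For completeness, the following Python sketch computes the summary statistics and the risk-reduction measure $R$ from arrays of simulated per-policy discounted surpluses; the function names are ours, and VaR and ES are taken directly as the empirical $q$-quantile and the corresponding tail mean.
\begin{verbatim}
import numpy as np

def summary_stats(D, q=0.01):
    """Mean, std. dev., skewness, VaR_q and ES_q of an array of simulated
    per-policy discounted surpluses D (one entry per simulation)."""
    mean, std = D.mean(), D.std(ddof=1)
    skew = np.mean((D - mean) ** 3) / np.std(D) ** 3
    var_q = np.quantile(D, q)              # VaR: q-quantile of the surplus
    es_q = D[D <= var_q].mean()            # ES: mean surplus at or below the VaR
    return {"mean": mean, "std": std, "skew": skew, "VaR": var_q, "ES": es_q}

def risk_reduction(D_hedged, D_unhedged):
    """Longevity risk reduction R = 1 - Var(D*)/Var(D)."""
    return 1.0 - np.var(D_hedged, ddof=1) / np.var(D_unhedged, ddof=1)
\end{verbatim}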
\section{Introduction}\label{sec:introduction} \setcounter{equation}{0} {\it Introduction.} The Standard Model (SM) explanation for Electroweak Symmetry Breaking (EWSB) -- that one condensing Higgs boson doublet carries all burdens of elementary particle mass generation -- remains speculative. Experimental results are consistent with this simple explanation, albeit for an increasingly shrinking window of Higgs boson mass. Precision electroweak data suggest that the Higgs boson mass must be less than $\sim 200\, {\rm GeV}$~\cite{LEPEWWG}, otherwise the virtual effects of the Higgs boson cause theory predictions to be incompatible with the data. Direct searches at LEP~II indicate that the Higgs boson mass must be greater than $114\, {\rm GeV}$~\cite{:2001xx}. Despite these passive successes of the SM Higgs boson, well-known theoretical concerns involving naturalness and the hierarchy problem suggest that the simple SM explanation is not complete. An extraordinary number of interesting ideas have surfaced for a more palatable description of EWSB and elementary particle mass generation, and each of them affects Higgs boson collider observables. The effect could be small, as in supersymmetry with very heavy superpartners and a SM-like lightest Higgs boson, or large, as in strongly coupled theories of EWSB~\cite{Hill:2002ap}. In this letter, we examine an enlargement of the Higgs sector. It resembles the Standard Model in that it contains one light Higgs boson, but differs from the SM in the strengths of this particle's interactions with other fields. There is a compelling reason for doing this sort of analysis: soon data from the LHC, a \(pp\) collider with \(\sqrt{s}=14\)~TeV, will teach us a great deal more about the theoretical nuances of EWSB. We begin our analysis by introducing a basic, model-independent parameterization of the couplings of the light Higgs boson to SM particles. We then examine how modifications to these couplings can dramatically affect the collider observables most relevant to Higgs searches. In particular, we focus on the process \(gg\rightarrow h\rightarrow \gamma\gamma\), which is one of the most promising channels for the discovery of a light Higgs boson at the LHC, and demonstrate how such modifications to the Higgs couplings affect the potential for discovery. {\it Shutting off two-photon decays.} Higgs boson decays to two photons provide one of the most important detection channels for a SM Higgs boson in the mass range \(115\mathrm{~GeV}\lesssim m_h\lesssim 150\)~GeV. Here, we wish to discuss the possibility that the two-photon decay branching fraction of the Higgs boson is significantly reduced compared to that of the SM. This can occur for many reasons. In certain regions of parameter space within supersymmetry, for example, this is possible when the partial width to the dominant mode, such as $h\rightarrow b\bar b$, is significantly enhanced such that $B(h\rightarrow \gamma\gamma)= \Gamma(h\rightarrow\gamma\gamma)/\Gamma_{tot}\rightarrow 0$ because $\Gamma_{tot}$ is so large~\cite{Kane:1995ek}. However, a rather extreme shift in couplings is needed in general to suppress the $h\rightarrow \gamma\gamma$ branching fraction to an insignificant level this way. A less extreme way that nature could reduce the $h\rightarrow\gamma\gamma$ branching fraction is by simultaneously altering the various couplings that enter the effective $h\gamma\gamma$ vertex such that a cancelation occurs. We will focus primarily on this second possibility.
The SM Higgs boson observables at the LHC involve tree-level interactions of the Higgs boson with $WW$, $ZZ$, and $f\bar f$. A model-independent parameterization of these interactions suggests that we multiply each of the vertices by an $\eta$-factor of \begin{eqnarray} g^{\rm sm}_{\mathit{hWW}}\rightarrow \eta_W g^{\rm sm}_{\mathit{hWW}},~~ g^{\rm sm}_{\mathit{hZZ}}\rightarrow\eta_Z g^{\rm sm}_{\mathit{hZZ}}, ~~ g^{\rm sm}_{\mathit{h\bar f f}}\rightarrow \eta_f g^{\rm sm}_{\mathit{h\bar f f}}. \nonumber \end{eqnarray} The relevant observables also rely crucially on the Higgs boson interacting at the loop level with $gg$ and $\gamma\gamma$, and, to a lesser degree, with $\gamma Z$. These interactions can be sensitive to new particles entering at one-loop order. Experimental data presently have little bearing on the question of how large an effect the new particles can have on the effective $hgg$ or $h\gamma\gamma$ vertices, and so we will parameterize their effects through effective operators. In general, deviations of the $h\gamma\gamma$ and $hgg$ couplings arise in two ways: deviations of couplings of SM particles in the loops, and extra corrections due to exotic particles or effects contributing to the effective interactions. The former can be parameterized in terms of the $\eta$ coefficients described above; the latter can be parameterized by introducing new variables $\delta_g$ and $\delta_\gamma$. In the limit that the top quark is much heavier than the other SM fermions, the resulting operators are \begin{equation} \left( \delta_\gamma +\eta_WF_1(\tau_W)+\eta_t\frac{4}{3}F_{1/2}(\tau_t)\right) \frac{h}{v}\frac{\alpha}{8\pi}F_{\mu\nu}F^{\mu\nu} \end{equation} and \begin{equation} \left(\delta_g+\eta_tF_{1/2}(\tau_t)\right) \frac{h}{v}\frac{\alpha_s}{8\pi} G^a_{\mu\nu}G^{a\mu\nu},\end{equation} where $v\simeq 246\, {\rm GeV}$ (SM Higgs VEV), $\tau_i=4m_i^2/m_h^2$, and \begin{eqnarray} F_{1/2}(\tau)&=& -2\tau[1+(1-\tau)f(\tau)] \\ F_1(\tau)& =& 2+3\tau+3\tau(2-\tau)f(\tau) \end{eqnarray} and \begin{equation} f(\tau)=\left\{ \begin{array}{cc} {\rm arcsin}^2(1/\sqrt{\tau}) & \tau\geq 1 \\ -\frac{1}{4}\left[ \log (\eta_+/\eta_-) -i\pi\right]^2 & \tau<1 \end{array}\right. \end{equation} with $\eta_\pm =(1\pm \sqrt{1-\tau})$~\cite{Gunion:1989we}. One should note that $F_1(\tau_i)$ and $F_{1/2}(\tau_i)$ can be complex if $m_h>2m_i$, as this corresponds to internal lines going on shell. Since any Higgs boson worth its name has the property $\eta_W\neq 0$, one can express the condition under which the effective $h\gamma\gamma$ vertex vanishes as \begin{equation} \left( \frac{\eta_t}{\eta_W}\right) = -\frac{3}{4}\left(\frac{1}{F_{1/2}(\tau_t)}\left( \frac{\delta_\gamma}{\eta_W} \right) + \frac{F_1(\tau_W)}{F_{1/2}(\tau_t)}\right). \end{equation} In fig.~\ref{eta relations}, we plot the contours of $\Gamma(h\rightarrow \gamma\gamma)=0$ in the $\eta_t/\eta_W$ vs. $\delta_\gamma/\eta_W$ plane. The various lines of the plot correspond to various values of $m_h$ within the range of Higgs boson masses that are considered applicable for discovery through their $h\rightarrow\gamma\gamma$ decay channel. \begin{figure} \includegraphics[width=8.5cm]{Fig1.eps} \caption{Lines in $\eta_t/\eta_W$ vs.
$\delta_\gamma/\eta_W$ represent where $\Gamma(h\rightarrow \gamma\gamma)\rightarrow 0$ for various values of $m_h$ in the range of Higgs boson masses where the two-photon decays are thought to be important for discovery at the LHC.} \label{eta relations} \end{figure} {\it Theory discussion.} From experience in supersymmetry, exotic physics contributions to $\delta_g$ or $\delta_\gamma$ can decouple very rapidly and have small effects~\cite{Kane:1995ek}. Possible exceptions to this statement include a radion in warped geometry that has large $\delta_g$ and $\delta_\gamma$ contributions due to quantum breaking of conformal invariance~\cite{Giudice:2000av}, or non-renormalizable operators with a low-scale cutoff~\cite{Hall:1999fe}. If such a state mixes with a condensing Higgs boson, the lightest mass eigenstate can have significant $\delta_g$ and $\delta_\gamma$ contributions. Nevertheless, if we approximate that $\delta_g$ and $\delta_\gamma$ are zero, we are left with the reasonable conclusion that the top quark coupling to the Higgs boson is greatly enhanced compared to its SM value. This is to be expected in the case where the $h\rightarrow \gamma\gamma$ decay rate goes to zero, since the $W$-induced amplitude contribution is about six times larger than the top-quark induced amplitude contribution and of the opposite sign. Thus, one expects that a theory with reduction of the $W$ coupling and simultaneous increase in the top-quark coupling has the potential to reduce and even zero out the $h\rightarrow \gamma\gamma$ amplitude. Reducing $W$ couplings is easy: arrange for several states or mechanisms to contribute to EWSB. Any individual Higgs boson state will have reduced couplings to $W$ since couplings are proportional to the contribution that the Higgs boson makes to the mass of the $W$ boson. As for the coupling to the top quark, increasing its value is not only possible, but can be expected when EWSB is shared among many sectors. If the top quark couples to only one Higgs boson \(H\) that does not fully generate the $W$ masses, then the Yukawa coupling of the top quark to that Higgs has to have a larger value to generate the requisite top quark mass (i.e., $m_t/\langle H\rangle > m_t/\langle H_{sm}\rangle$). The above considerations lead us naturally to think in terms of a type I two-Higgs doublet scenario~\cite{Haber:1978jt,Gunion:1989we}. In such a scenario, both (complex) doublets, which we denote \(\Phi_{f}\) and \(\Phi_{\mathit{EW}}\), obtain VEVs and contribute to EWSB and vector boson mass generation, but only one of the Higgs bosons, \(\Phi_{f}\), gives mass to the fermions. Among the eight degrees of freedom in $\Phi_{EW}$ and $\Phi_f$, three are eaten by $W^\pm_L,Z^0_L$, two become the charged Higgses $H^\pm$, one becomes the pseudo-scalar $A^0$, and two become the scalars $h^0,H^0$, where $h^0$ is the lightest. Such a theory framework can be motivated by the dynamics of a strongly coupled sector~\cite{Hill:2002ap} which contributes to EWSB and to the vector boson masses. It is well-known that giving mass to the heavy fermions simultaneously is difficult, and so an additional scalar (or effective scalar) can be added to the theory that gives mass to the fermions (somewhat reminiscent of other approaches~\cite{Simmons:1988fu}). Of course, this second sector will necessarily contribute to vector boson masses as well (one cannot hide EWSB from the vector bosons), and a complicated mixing between the two Higgs doublets ensues.
In our study we define a mixing angle \(\beta\) between the two Higgs VEVs, \(\tan\beta=\langle\Phi_f\rangle/\langle \Phi_{EW}\rangle\). A second angle, \(\alpha\), which parameterizes the mixing between the gauge and mass eigenstates of the CP-even Higgs, is defined by the relation \begin{equation} \left( \begin{array}{c} H^0 \\ h^0 \end{array}\right) = \left( \begin{array}{cc} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha\end{array}\right) \left( \begin{array}{c} \sqrt{2} {\rm Re}(\Phi_{EW}^0) \\ \sqrt{2} {\rm Re}(\Phi_f^0) \end{array}\right) \end{equation} The deviations in the couplings to SM fermions and vector bosons of $h^0$ compared to the SM Higgs depend only on these angles: \begin{equation} \eta_f=\frac{\cos\alpha}{\sin\beta},~~~\eta_{W,Z}=\sin(\beta-\alpha). \label{eq:etas} \end{equation} The value of $\alpha$, which is computed from the full potential of the theory, is model-dependent, as are the masses of $H^\pm$ and $A^0$. The most stringent experimental constraint on these parameters comes from \(b\rightarrow s\gamma\). The experimental measurement is \(BR(b\rightarrow s\gamma)=(3.3\pm0.4)\times10^{-4}\) and in type~I Higgs doublet models~\cite{Barger:1989fj}, \begin{equation} \Gamma(b\rightarrow s\gamma)=\frac{\alpha G_{F}^2 m_b^2}{128\pi^4} \left|A_{W}+\cot^2\beta A_{H}(m_{H^{\pm}})\right|^2, \label{bsg eq} \end{equation} where \(A_{W}\) and \(A_{H}\) are the respective loop functions for the SM and charged-Higgs-induced contributions. \(A_{H}\) is a function of \(m_{H^{\pm}}\) that has the opposite sign to $A_W$. When \(H^\pm\) is light, the charged Higgs contribution to \(b\rightarrow s\gamma\) can be large and \(\sin\beta\) is either constrained to be close to one, or $\sin\beta$ is tuned to a smaller value such that $A_{W}+\cot^2\beta A_{H}\simeq -A_W$, thereby leading to an acceptable prediction. If $m_{H^\pm}\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$}$ a few TeV, $A_H$ is effectively zero for any value of $\sin\beta\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$} 1/3$. In both of these cases (light or heavy $H^\pm$) a solution exists for our purposes, and the $H^\pm$ does not substantively affect the two-photon decay rate of the Higgs boson. We specify \(\alpha\) to be small (in order to be precise in our discussion, we choose \(\alpha=0\)), which is also nicely consistent with $m_A^2,m_{H^0}^2,m_{H^\pm}^2\gg m_{h^0}^2$, so that the model contains only one light Higgs boson. An extremely small \(\alpha\) is by no means required, however, and we note that the results that carry forward will exhibit the same qualitative features when $|\alpha|\lesssim1/2$. {\it Numerical results.} In fig.~\ref{BRratio} we compute the decay branching fractions of a $140\, {\rm GeV}$ Higgs boson as a function of $\sin\beta$ relative to the SM decay branching fractions. In this and subsequent calculations, SM quantities are obtained using HDECAY~\cite{Djouadi:2000gu}, and we allow \(\sin\beta\) to vary from 1 down to \(\sim0.250\), below which point the perturbativity of the top quark Yukawa (given by \(y_{t}=\sqrt{2}m_t/(v\sin{\beta})\) in this model) becomes a concern. For low values of $\sin\beta$ the branching fraction to $b\bar b$ is enhanced significantly over that of $WW^*$. As $\sin\beta\rightarrow 1$, which recovers the SM result, $WW^*$ wins out. \begin{figure} \includegraphics[width=8.5cm]{BR140alpha0new.eps} \caption{Ratio with respect to the SM of decay branching fractions as a function of $\sin\beta$ for the light Higgs of a type I two-Higgs doublet model.
Here, we have taken \(m_h=140\)~GeV and \(\alpha=0\).} \label{BRratio} \end{figure} More important than the branching fractions, however, is the total cross-section of $pp\rightarrow h\rightarrow\gamma\gamma$, since that is what is measured at the collider. The largest contribution to the production cross-section for this observable $\sigma_h(\gamma\gamma)$ is through gluon fusion, $gg\rightarrow h\rightarrow \gamma\gamma$. The amplitude for $gg\rightarrow h$ is the same as for $h\rightarrow gg$ up to simple co-factors at leading order. Thus, we can make a good estimate of the relative size of the $pp\rightarrow h\rightarrow\gamma\gamma$ cross-section: \begin{equation} \frac{\sigma_h(\gamma\gamma)}{\sigma_h(\gamma\gamma)_{sm}} = \frac{\Gamma_h(gg)}{\Gamma_h(gg)_{sm}} \frac{\Gamma_h(\gamma\gamma)}{\Gamma_h(\gamma\gamma)_{sm}} \left( \frac{\Gamma_h({\rm tot})}{\Gamma_h({\rm tot})_{sm}}\right)^{-1} \end{equation} In fig.~\ref{logobservables} we plot $\sigma_h(\gamma\gamma)/\sigma_h(\gamma\gamma)_{sm}$ as a function of $\sin\beta$ for $m_h=140\, {\rm GeV}$ (and we retain $\alpha=0$). For $0.38\lesssim \sin\beta \lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$} 0.55$ we see that the total cross-section into two photons is more than an order of magnitude lower than the SM cross-section, thereby severely compromising the ability of the LHC to discover the light Higgs boson via that channel. However, as we suggested above, the $t\bar t h\rightarrow t\bar t b\bar b$ cross-section is enhanced by nearly an order of magnitude above the SM cross-section. Thus, it may still be possible to discover such a Higgs boson. \begin{figure} \includegraphics[width=8.5cm]{crosssec140alpha0new.eps} \caption{Ratio with respect to the SM of cross-section observables at the LHC (\(\sqrt{s}=14\)~TeV) as a function of $\sin\beta$ for the light Higgs of a type I two-Higgs doublet model, with \(m_h=140\)~GeV and \(\alpha=0\).} \label{logobservables} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c|c|c|}\hline \(m_h\) (GeV) & \(\Gamma_{\mathit{tot}}(h)\) (MeV) & \(\Gamma_{\mathit{tot}}^{\mathit{sm}}(h)\) (MeV) \\ \hline 110 & 12 & 3.0 \\ 130 & 16 & 4.9\\ 150 & 23 & 17\\ \hline \end{tabular} \end{center} \caption{In this table, we list the total width of the Higgs boson, evaluated when the \(h\gamma\gamma\) effective coupling is adjusted to zero, for a variety of choices of the Higgs mass. The SM width is also shown for comparison\label{tab:HiggsWidth}.} \end{table} To investigate this, we compute the significance of discovery, with \(\mathrm{30~fb}^{-1}\) of integrated luminosity, of the $h^0$ Higgs boson in this model by scaling the known significance values~\cite{Asai:2004ws} for discovering a SM Higgs boson in various channels using the ATLAS detector. In fig.~\ref{higgs significance} we assume that $\alpha=0$ and, as we vary the Higgs boson mass, we select the value of $\sin\beta$ such that $\Gamma(h\rightarrow \gamma\gamma)$, and hence $\sigma(gg\rightarrow h\rightarrow \gamma\gamma)$, is zero. Any value of $\sin\beta$ in the neighborhood of this point will yield similar significance of discovery values for the channels displayed. The other significance values are modified from their SM values by the scaling of the Higgs couplings by \(\eta_{W,Z}\) and \(\eta_{f}\) according to equation~(\ref{eq:etas}). A short numerical illustration of how this value of $\sin\beta$ is determined is given below.
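As a cross-check, the value of $\sin\beta$ at which the effective $h\gamma\gamma$ coupling vanishes can be obtained directly from the loop functions. The short Python sketch below does this for $m_h=140$~GeV with $\delta_\gamma=0$ and $\alpha=0$, using $\eta_W=\sin\beta$ and $\eta_t=1/\sin\beta$ from equation~(\ref{eq:etas}); the numerical values assumed for $m_W$ and $m_t$ are ours and are not taken from the text. The result, $\sin\beta\simeq 0.45$, is consistent with the value used in fig.~\ref{higgs significance}.
\begin{verbatim}
import numpy as np

def f(tau):          # f(tau) for tau >= 1 (no internal lines on shell)
    return np.arcsin(1.0 / np.sqrt(tau)) ** 2

def F_half(tau):     # spin-1/2 loop function
    return -2.0 * tau * (1.0 + (1.0 - tau) * f(tau))

def F_one(tau):      # spin-1 (W) loop function
    return 2.0 + 3.0 * tau + 3.0 * tau * (2.0 - tau) * f(tau)

# Assumed inputs (not quoted in the text): m_W = 80.4 GeV, m_t = 175 GeV.
m_h, m_W, m_t = 140.0, 80.4, 175.0
tau_W, tau_t = 4.0 * m_W**2 / m_h**2, 4.0 * m_t**2 / m_h**2

# With delta_gamma = 0, the effective h-gamma-gamma coupling vanishes when
#   eta_W * F_1(tau_W) + (4/3) * eta_t * F_{1/2}(tau_t) = 0.
ratio = -0.75 * F_one(tau_W) / F_half(tau_t)     # required eta_t / eta_W
# Type I model with alpha = 0: eta_W = sin(beta), eta_t = 1/sin(beta),
# hence eta_t/eta_W = 1/sin^2(beta).
sin_beta = 1.0 / np.sqrt(ratio)
print(round(ratio, 2), round(sin_beta, 2))       # approximately 4.8 and 0.45
\end{verbatim}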
Our scaling of the significance values is justified since the Higgs boson width remains narrow and never exceeds the invariant mass resolutions (e.g., $\Delta m_{\gamma\gamma}/m_{\gamma\gamma}\sim 1\%$~\cite{ATLAStdr}) of the relevant final states over the range of interest -- see table~\ref{tab:HiggsWidth}. Higgs boson discovery channels involving one or more couplings between the Higgs and EW gauge bosons (including all weak boson fusion processes) are suppressed by factors of \(\eta_{W,Z}\) as well as by an increase in the total width of the Higgs due to an increase in Higgs decays to \(\overline{b}b\), which dominate the total width in the mass range of interest---see table~\ref{tab:HiggsWidth}. In contrast, the \(\overline{t} th\) channel significance is dramatically increased. As a result, while the usual channels in which one would look to discover a Higgs boson in the mass range \(115\mathrm{~GeV}\lesssim m_{h}\lesssim 150\)~GeV are suppressed below the \(5\sigma\) level, there is still the opportunity to discover the Higgs boson in the $t\bar th$ channel. The searches, of course, must extend themselves to Higgs boson mass values well above what is normally thought to be the relevant range for this signature. \begin{figure} \includegraphics[width=8.5cm]{SigPlot.eps} \caption{Significance of various light Higgs boson observables at the LHC with $30\xfb^{-1}$ of integrated luminosity. This plot is made for $\alpha=0$ and $\sin\beta\simeq 0.45$ where $\Gamma(h\rightarrow\gamma\gamma)\rightarrow 0$ in the type I two-Higgs doublet model.} \label{higgs significance} \end{figure} {\it Conclusions.} Higgs decays to two photons provide one of the most important channels through which one might hope to discover a light (\(115\mathrm{~GeV}\lesssim m_{h}\lesssim 150\)~GeV) SM Higgs boson at the LHC. However, nature allows a Higgs sector whose couplings differ from their SM values in such a way that \(\Gamma(h\rightarrow\gamma\gamma)\) is suppressed into irrelevance. This possibility occurs readily in scenarios involving additional contributions to electroweak symmetry breaking from sources that do not contribute to the generation of fermion masses. The Higgs coupling to EW gauge bosons is decreased relative to the SM result as its contribution to EWSB is decreased, and its coupling to the fermions is consequently augmented, allowing for a cancelation in the coefficient of the effective \(h\gamma\gamma\) vertex. This is well illustrated in type~I two Higgs doublet models and in a variety of models where there exists a dynamical contribution to EWSB. If this is the case, one cannot rely on the usual channels (weak boson fusion, \(h\rightarrow WW^{\ast}\), and \(h\rightarrow\gamma\gamma\) itself) to discover a Higgs boson. Nevertheless, such a Higgs could still be detected through processes like \(\overline{t}th\), with \(h\) decaying to \(\overline{b}b\) or \(\overline{\tau}\tau\).
\section{Introduction} Since the experimental discovery of the fractional quantum Hall effect, a tremendous effort has been devoted to the study of topologically ordered phases that can be realized in strongly correlated quantum many-body systems. In the past decade, substantial progress has been made in the characterization of topologically ordered phases in spin or boson systems, and a unified algebraic framework called `modular tensor categories' has been identified to describe these phases \cite{Kitaev}. From a physical point of view, a modular tensor category simply describes how the anyonic excitations fuse and braid with each other. Phrasing the properties of topologically ordered phases in the rigid algebraic language of modular tensor categories has allowed theorists to make significant progress in the study of bosonic topological phases. Not only has our theoretical understanding of topologically ordered phases vastly improved; in recent years many important results have also been obtained on how to numerically identify the type of topological order realized by a particular microscopic Hamiltonian. For example, it was realized that a non-trivial topological order leaves an imprint on the entanglement entropy of a spatial region in the ground state via the `topological entanglement entropy' term \cite{KitaevPreskill,LevinWen}. A later refinement of the topological entanglement entropy showed that the complete spectrum of the reduced density matrix of a spatial region contains information about the universal edge physics of the topological phase \cite{LiHaldane}. When the system of interest is put on a torus, it will necessarily have a ground state degeneracy if a non-trivial topological order is realized \cite{WenNiu}. In Refs. \cite{ZhangGrover,Cincio}, a useful basis for the ground state subspace, the so-called Minimally-Entangled State (MES) basis, was identified and it was shown that this basis can be used to obtain the $S$-matrix, which contains information about the anyon braiding statistics, and the $T$-matrix, which gives access to the chiral central charge and the topological spins of the anyons. Later works showed that these MES also give access to the topological spins of the anyons via a quantity called the `momentum polarization' \cite{ZaletelMong,TuZhang}. In this work, we focus on fermion systems with Abelian topological order, meaning that the anyons form an Abelian group under fusion. In the algebraic language, the most important difference between bosonic and fermionic topological orders is that the latter don't satisfy the same strict requirement of modularity as bosonic systems do. We explain this in more detail in the main text, where the algebraic frameworks for both bosonic and fermionic Abelian topological orders are reviewed. A fermionic system necessarily has a fermion parity symmetry, and a useful tool in the study of fermionic topological orders is to gauge this symmetry \cite{Kitaev} (the approach of gauging a global symmetry has also proven to be very useful in the study of symmetry-protected phases \cite{LevinGu}). Importantly, since the microscopic fermion becomes a non-local gauge charge after gauging, the resulting theory is purely bosonic.
Below, two main questions regarding Abelian fermionic topological orders (AfTO) are addressed: (1) How does one uniquely identify the most general AfTO in numerics?, and (2) Does the existence of a gapped edge for the ungauged fermionic topological order imply the existence of a gapped edge for the gauged topological order and vice versa? Regarding the first question, we note that a lot of the above mentioned numerical techniques for identifying a topological order have been successfully applied to fractional quantum Hall systems and fractional Chern insulators \cite{Neupert,ShengGu,Regnault,ZaletelMong,Rezayi,Grushin}, which are of course fermionic in nature. These approaches relied crucially on the presence of a fermion number conservation symmetry, which allows for the definition of a Hall conductivity $\sigma_{xy}$. In fractional quantum Hall systems one knows the value of $\sigma_{xy}$ exactly from Galilean invariance (if there is no disorder), and in lattice systems it can be computed numerically as the many-body Chern number \cite{Niu,ShengHaldane}, or from the entanglement eigenvalues by an adiabatic flux insertion procedure \cite{Zaletel,Grushin} (see also Ref. \cite{Alexandradinata}). In this work, we outline an alternative numerical detection scheme which does not rely on computing the Hall conductance, and which can identify the most general Abelian fermionic topological order. In formulating this numerical scheme, we will make heavy use of the special structure of gauged AfTOs. Another important ingredient for our detection scheme is the momentum polarization technique \cite{ZaletelMong,TuZhang}, and we point out some properties of this numerical probe which --to the best of our knowledge-- have not been discussed previously in the literature and which are relevant for fermionic systems. As for the second question regarding the edge physics of topological phases, we will show that the bosonic theory which is obtained by gauging fermion parity in an AfTO can have a (conventional, bosonic) gapped edge with the trivial vacuum if and only if the ungauged fermionic theory can have a gapped edge as well. To show that gauging fermion parity does not change whether a system admits a gapped edge or not, we use the bulk-boundary correspondence formulated in terms of Lagrangian subgroups as introduced in Ref. \cite{Levin}. For completeness, we also mention some previous works which have studied fermionic topological orders and are relevant for the present work. Refs. \cite{GuWangWen,TianLan1,TianLan2} worked out an algebraic framework for general fermionic topological orders (both Abelian and non-Abelian), which is a generalization of the modular tensor categories for bosonic topological orders. Some ideas of these works will be used below. Refs. \cite{Belov,Cano} have studied Abelian fermionic topological orders with fermion number conservation symmetry using multi-component U$(1)$ Chern-Simons theories. The algebraic approach adopted in this work agrees with the U$(1)$ Chern-Simons approach of Refs. \cite{Belov,Cano} where the results overlap. And finally, Ref. \cite{Wang} has studied general gauged fermionic topological orders from a mathematical perspective, and conjectured that Kitaev's $16$-fold way \cite{Kitaev} has a natural generalization to the most general fermionic topological order. This being said, let us now turn to a short review of the algebraic framework behind bosonic and fermionic Abelian topological order. 
\section{Abelian bosonic topological order} In this section, we briefly review the properties of Abelian bosonic topological orders (AbTO) that are relevant for the main discussion below. For more details, the reader is referred to Ref. \cite{Kitaev}. In mathematical terms, an AbTO is equivalent to an Abelian modular tensor category. To specify such an Abelian modular tensor category, we need to provide a list $\mathcal{A}=\{a,b,c,\dots\}$ of $N$ anyon types, together with the corresponding Abelian fusion rules $a\times b = \sum_c N_{a,b}^c c$ such that $N_{a,b}^c\in \{0,1\}$, and topological spins $\theta_a$ \footnote{This is a slight abuse of terminology which is common in the literature. If we write $\theta_a=e^{2\pi i h_a}$, then it is more appropriate to call $h_a$ the topological spin. However, in this work it will be more convenient to simply refer to $\theta_a$ as the topological spin.}. Every AbTO has a unique trivial anyon, denoted as $1$, which has the properties $1\times a = a$ and $\theta_1=1$. Often, we will write the fusion of two anyons $a$ and $b$ simply as $ab = a\times b$. In principle, we also need to provide the $F$-symbols, but they will not be important here so we omit them. We denote the braiding phase associated with moving an anyon $b$ counter-clockwise around anyon $a$ as $M_{a,b}$. By definition, the braiding phases are symmetric: $M_{a,b}=M_{b,a}$. The ribbon identity allows $M_{a,b}$ to be expressed in terms of the topological spins: \begin{equation} M_{a,b}=\frac{\theta_{ab}}{\theta_a\theta_b} \end{equation} The $S$ matrix is related to the braiding phases by $S_{a,b}=M_{a,b}\mathcal{D}^{-1}$, where the total quantum dimension $\mathcal{D}$ of an Abelian modular tensor category is given by $\mathcal{D}=\sqrt{N}$. Modularity requires that $S^\dagger S=\mathds{1}$. An important connection between the bulk anyons and the boundary theory of an AbTO is given by the following relation \cite{Kitaev}: \begin{equation}\label{kit} \frac{1}{\sqrt{N}}\sum_a \theta_a = e^{2\pi i c_-/8}\, , \end{equation} where $c_-$ is the chiral central charge of the boundary theory. This implies that the bulk anyons determine the boundary chiral central charge up to a multiple of $8$. This is the best one can do, as there exists an invertible bosonic topological phase, the so-called $E_8$ state \cite{KitaevE8}, which has no anyons but a chiral edge with $c_-=8$. One can thus always stack an $E_8$ state on top of the system of interest, which does not affect the bulk anyons but changes the boundary chiral central charge by $8$. \subsection{Gapped boundaries}\label{GappedB} As was shown by Levin, an AbTO admits a gapped edge iff (1) $c_-=0$ and (2) the bulk anyons have a bosonic Lagrangian subgroup \cite{Levin}. A bosonic Lagrangian subgroup $\mathcal{L}_b$ is a subgroup of anyons which have the following properties (see also Refs. \cite{JuvenWang1,JuvenWang2}): \begin{itemize} \item[1)] Every anyon in $\mathcal{L}_b$ has trivial topological spin, \item[2)] All anyons in $\mathcal{L}_b$ braid trivially with each other, \item[3)] Every anyon which is not in $\mathcal{L}_b$ braids non-trivially with at least one anyon in the Lagrangian subgroup. \end{itemize} Using Eq.~(\ref{kit}), we will show that the existence of a Lagrangian subgroup implies that the chiral central charge is a multiple of $8$. 
It thus follows that an AbTO allows for a gapped edge iff it has a Lagrangian subgroup, and the separate requirement of zero chiral central charge is redundant, provided that we are allowed to stack $E_8$ states on top of our system. Combined with the results of Ref. \cite{Levin2}, this implies that every AbTO with a Lagrangian subgroup has a string-net representation \cite{stringnet}. To derive $c_-=0$ mod $8$ from the existence of a bosonic Lagrangian subgroup $\mathcal{L}_b$, we write the AbTO as $\mathcal{A}=\{\mathcal{L}_b,\mathcal{L}_b\times n_1, \mathcal{L}_b\times n_2,\dots\}$, where $n_i$ are a set of arbitrary anyons not in $\mathcal{L}_b$. Starting from Eq.~(\ref{kit}), we can now do the following manipulations: \begin{eqnarray} e^{2\pi ic_-/8} & = & \frac{1}{\sqrt{N}}\sum_{a\in\mathcal{A}} \theta_a \\ & = & \frac{1}{\sqrt{N}}\sum_{l \in \mathcal{L}_b}\theta_l + \frac{1}{\sqrt{N}} \sum_{n_i}\sum_{l \in \mathcal{L}_b}\theta_{n_il} \\ & = & \frac{1}{\sqrt{N}}\sum_{l \in \mathcal{L}_b}\theta_l + \frac{1}{\sqrt{N}} \sum_{n_i}\theta_{n_i} \sum_{l \in \mathcal{L}_b}\theta_{l}M_{l,n_i} \\ & = & \frac{N_{L}}{\sqrt{N}}+ \frac{1}{\sqrt{N}} \sum_{n_i}\theta_{n_i} \sum_{l \in \mathcal{L}_b}M_{l,n_i} \label{sum}\\ & = & \frac{N_{L}}{\sqrt{N}}\, ,\label{Lb} \end{eqnarray} where $N_{L}$ is the number of anyons in $\mathcal{L}_b$. In the third line we have used the ribbon identity, and in the fourth line we relied on the definition of a bosonic Lagrangian subgroup which states that $\theta_l=1$ if $l\in \mathcal{L}_b$. To see why the second term in~(\ref{sum}) is zero, note that $M_{l_1,n_i}M_{l_2,n_i}=M_{l_1l_2,n_i}$, such that $M_{l,n_i}$ for fixed $n_i$ forms a representation of $\mathcal{L}_b$. Because there is at least one $l\in\mathcal{L}_b$ for which $M_{l,n_i}\neq 1$, this representation cannot be the trivial representation. Schur's orthogonality relations then imply that the sum of $M_{l,n_i}$ over all $l\in\mathcal{L}_b$ is zero. We have thus obtained the desired result that $c_-=0$ mod $8$ if there exists a Lagrangian subgroup. As a side result, we also found that $N_{L}^2=N$, such that only AbTOs where the number of anyons is a square number can have a Lagrangian subgroup (this relation between $N_{L}$ and $N$ was also obtained previously in Ref. \cite{Levin}). \section{Abelian fermionic topological order} The main difference between a fermionic topological order (fTO) and a bTO is that a fTO has a distinguished fermion excitation $f$ with the properties $f^2=f\times f=1$ and $\theta_f=-1$, which is `transparent', i.e. it braids trivially with all other particles. This is not allowed in a bTO because the existence of such a particle is a violation of modularity. In Refs. \cite{Cano,Cheng}, it was shown that every Abelian fermionic topological order (AfTO) $\mathcal{A}_f$ can be written as \begin{equation}\label{prod} \mathcal{A}_f = \mathcal{A}_b\times \{1,f\}\, , \end{equation} where $\mathcal{A}_b$ is an AbTO. This result also follows from corollary A.19 of Ref. \cite{Drinfeld}. Note that the above factorization property does not hold for non-Abelian fTO, as is known from explicit counterexamples \cite{TianLan1,TianLan2}. It is important to keep in mind that in general, the factorization in Eq.~(\ref{prod}) is not unique. In particular, let us define the homomorphism $\beta: \mathcal{A}_b\rightarrow \mathbb{Z}_2$, where $\beta(a)$ takes values in $\{0,1\}$. The fact that $\beta$ is a homomorphism then implies that $\beta(a b) = \beta(a)+\beta(b)$ mod $2$.
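As an aside, Eq.~(\ref{kit}) and the counting identity~(\ref{Lb}) above are straightforward to check numerically for small theories. A minimal Python sketch, using the toric code (anyons $\{1,e,m,f\}$, Lagrangian subgroup $\{1,e\}$) and the semion theory (which, having $N=2$ anyons, cannot have a Lagrangian subgroup), is the following.
\begin{verbatim}
import numpy as np

def central_charge_mod8(spins):
    """c_- mod 8 from (1/sqrt(N)) sum_a theta_a = exp(2 pi i c_-/8)."""
    z = sum(spins) / np.sqrt(len(spins))
    assert np.isclose(abs(z), 1.0)   # holds for any modular theory
    return (np.angle(z) * 8 / (2 * np.pi)) % 8

# Toric code: spins of {1, e, m, f}; Lagrangian subgroup {1, e}, N_L**2 = 4 = N.
print(central_charge_mod8([1, 1, 1, -1]))   # 0.0

# Semion theory: spins of {1, s}; N = 2 is not a square, no Lagrangian subgroup.
print(central_charge_mod8([1, 1j]))         # 1.0
\end{verbatim}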
Using $\beta$, we can rewrite the factorization in Eq. \ref{prod} in a different way as $\mathcal{A}_f = \mathcal{A}_b^\beta\times\{1,f\}$, where $\mathcal{A}_b^\beta = \{a f^{\beta(a)}|a\in\mathcal{A}_b\}$. So the number of different factorizations is given by the number of homomorphisms from $\mathcal{A}_b$ to $\mathbb{Z}_2$. A fermionic system necessarily has fermion parity symmetry. This implies that we can introduce a corresponding fermion parity flux or $\pi$-flux into the system. In the absence of U$(1)$ fermion number symmetry, there is no unique way of defining such a fermion parity flux. In particular, for a given parity flux $\phi_i$ we can always obtain a different parity flux by attaching an anyon in $\mathcal{A}_f$ to it. This of course assumes that the parity flux cannot `absorb' the anyons in $\mathcal{A}_f$, i.e. that $\phi_i \times a\neq \phi_i$ for $a \in \mathcal{A}_f$. While it is possible for $\phi_i$ to absorb the transparent fermion $f$, we will show in the next section that the parity fluxes cannot absorb the anyons in $\mathcal{A}_b$. The case where the fermion parity fluxes can absorb $f$ occurs in superconducting systems where a Majorana mode binds to the parity flux, like in the weak pairing phase of spinless $p-$wave superconductors \cite{Volovik,ReadGreen}. When this happens, the parity fluxes are non-Abelian defects \cite{Ivanov}. An important property of fermion parity fluxes is that they have a well-defined topological spin. This is different from $\mathbb{Z}_2$ fluxes in bosonic systems, which have a topological spin that is only defined up to a minus sign. The reason is that bosonic systems admit trivial particles with both even and odd $\mathbb{Z}_2$ charge. So if we attach such a trivial charge one object to a $\mathbb{Z}_2$ flux, we don't change the superselection sector, but we do change its topological spin by $-1$. In fermionic systems, however, all charge one particles have topological spin $\theta_f=-1$, so by attaching them to a parity flux we don't change the topological spin. \subsection{Gapped boundaries}\label{GappedF} Similarly to the bosonic case, Levin showed that an AfTO admits a gapped edge if and only if $c_-=0$, and there exists a fermionic Lagrangian subgroup $\mathcal{L}_f$, which is defined to have the following properties \cite{Levin}: \begin{itemize} \item[1)] All anyons in $\mathcal{L}_f$ braid trivially with each other, \item[2)] Every anyon in $\mathcal{A}_b$ which is not in $\mathcal{L}_f$ braids non-trivially with at least one anyon in the Lagrangian subgroup. \end{itemize} The only difference between the definitions of $\mathcal{L}_b$ and $\mathcal{L}_f$ is that in the latter we do not require the anyons in $\mathcal{L}_f$ to have trivial topological spin. Note, however, that because $\theta_l^2=M_{l,l}=1$ for all $l\in\mathcal{L}_f$, it follows that the definition of a fermionic Lagrangian subgroup only leaves a sign ambiguity in the topological spins of the anyons in $\mathcal{L}_f$. \section{Gauging fermion parity: modular extensions of an AfTO}\label{sec:gauging} There exists a well-defined microscopic prescription to gauge the fermion parity symmetry in any fermionic lattice Hamiltonian \cite{Kogut}. After gauging, the system realizes a bosonic topological order in the bulk. This is because the gauging procedure promotes the parity fluxes to deconfined anyonic excitations, which braid non-trivially with $f$. This implies that the gauged theory is modular, and can be realized in a bosonic system. 
In mathematical terms, the gauged fTO (GfTO) is called a `modular extension' of the original fTO \cite{TianLan1,TianLan2}. In this section, we will show that for AfTO such modular extensions have a special structure. For this we consider the process where one creates an $a-\bar{a}$ anyon pair from the vacuum and braids one of them, say $a$, around a fermion parity flux. It is well-known from bosonic symmetry-enriched topological orders that braiding around a symmetry defect $g$ can permute the anyon types \cite{Bombin,Barkeshli,Tarantino,Chen}. This means that after braiding an anyon $a$ around $g$, it is possible for $a$ not to come back as itself, but as a different anyon $\pi_g(a)$. After gauging, the parity fluxes become deconfined anyonic excitations. Because braiding of anyons cannot change the anyon type, this implies that after gauging the anyons $a$ and $\pi_g(a)$ have to be identified as the same anyon \cite{Barkeshli,Tarantino,Chen}. However, we now argue that this cannot happen for a fermion parity flux, i.e. for fermion parity we always have $\pi_\phi(a)=a$. The reason is that fermionic Hilbert spaces have a superselection rule which states that every physical state needs to have well-defined fermion parity \cite{Wick}. So if fermion parity could permute anyons, then starting from an excited state with some localized anyons it would be possible to create an orthogonal state by acting with fermion parity, but this clearly violates the superselection rule. As already anticipated above, we can now also argue that fermion parity fluxes cannot absorb anyons in $\mathcal{A}_b$. To see this, assume that a parity flux $\phi_i$ could absorb an anyon $a\in\mathcal{A}_b$. The only way this can happen consistently is if all anyons $b$ with $M_{a,b}\neq 1$ get permuted when they braid with the parity flux. But as we just argued, this is impossible. Note that $f$ escapes this argument since it braids trivially with all anyons in $\mathcal{A}_f$. When gauging a $\mathbb{Z}_2$ symmetry in bosonic systems, the trivial anyon $1$ before gauging splits into a trivial anyon and a non-trivial anyon after gauging. This is because the original symmetry-enriched topological order has trivial anyons with both even and odd $\mathbb{Z}_2$ charge. Under gauging, the trivial anyons with even charge remain trivial, but the trivial anyons with odd charge become the gauge charge anyons of the gauged theory. For fermionic systems, this does not happen because a fTO has no trivial anyons with odd fermion parity. The above two arguments show that the anyons of a fTO do not get identified and do not split after gauging fermion parity, which implies that the original fTO is a subcategory of the GfTO. For an AfTO, if $\mathcal{A}_f=\mathcal{A}_b\times\{1,f\}$ is a subcategory of the GfTO, then $\mathcal{A}_b$ is obviously also a subcategory of the gauged theory. We can now use theorem 3.13 from Ref. \cite{Drinfeld}, which says the following: \\ \\ \textbf{Theorem (Ref. \cite{Drinfeld}).} Consider a MTC $\mathcal{K}$, and assume that it is a fusion subcategory of $\mathcal{C}$, i.e. $\mathcal{K}\subset\mathcal{C}$. If $\mathcal{C}$ is a MTC, then $\mathcal{C}=\mathcal{K}\times\mathcal{K}'$, where $\mathcal{K}'$ is also a MTC.
\\ \\ Because $\mathcal{A}_b$ is modular, we can directly apply the above theorem to conclude that the GfTO takes the form \begin{equation}\label{factor1} GfTO = \mathcal{A}_b \times\{1,f,\phi,f\phi\}\, , \end{equation} if the fermion parity fluxes are Abelian, and \begin{equation}\label{factor2} GfTO = \mathcal{A}_{b}\times\{1,f,\phi\}\, , \end{equation} if the fermion parity fluxes are non-Abelian. Both $\phi$ and $f\phi$ are anyons which correspond to deconfined fermion parity fluxes. In the present context, it is not hard to prove the factorization of the GfTO, as given in Eqs.~(\ref{factor1}) and (\ref{factor2}), without invoking theorem 3.13 of Ref. \cite{Drinfeld}. To see this, note that since fermion parity fluxes $\phi_i$ cannot permute anyons, the process of making an $a-\bar{a}$ pair, braiding $a$ around $\phi_i$, and subsequently annihilating the anyon pair again is a well-defined adiabatic process for every parity flux $\phi_i$ and anyon $a\in\mathcal{A}_b$. Therefore, we can associate a Berry phase to it. Because there are no trivial particles with odd fermion parity, the Berry phases depend only on the anyon type and it makes sense to write them as $e^{i\gamma_i(a)}$ and $e^{i\gamma_i(af)}$ for every flux $\phi_i$ and $a\in\mathcal{A}_b$. The Berry phases satisfy the obvious properties $e^{i\gamma_i(a b)}=e^{i\gamma_i(a)}e^{i\gamma_i(b)}$ and $e^{i\gamma_i(f)}=-1$. Because $\mathcal{A}_b$ is modular, we can use lemma 3.31 from Ref. \cite{Drinfeld} (see also Ref. \cite{Barkeshli}, page 11), which states that for every function $e^{i\gamma_i(\cdot)}:\mathcal{A}_b\rightarrow U(1)$ that satisfies $e^{i\gamma_i(a b)}=e^{i\gamma_i(a)}e^{i\gamma_i(b)}$, there exists a corresponding unique anyon $a_i\in \mathcal{A}_b$ such that \begin{equation}\label{braidingthm} e^{i\gamma_i(a)}=M_{a,a_i}\,,\;\;\forall a \in \mathcal{A}_b \end{equation} This implies that for every parity flux $\phi_i$, we can find an anyon $a_i\in\mathcal{A}_b$ such that $\phi_i \bar{a}_i$ braids trivially with all anyons in $\mathcal{A}_b$. Because the different $\phi_i$ are related by fusion with anyons in $\mathcal{A}_b$, and for every $\phi_i$ there is a unique anyon $a_i$ such that (\ref{braidingthm}) holds, we conclude that $\phi_i \bar{a}_i$ is independent of $i$. The parity flux in Eqs.~(\ref{factor1}) and (\ref{factor2}) is then simply defined as $\phi = \phi_i\bar{a}_i$ (a similar argument for the factorization of GAfTOs with fermion number symmetry was recently given in Ref. \cite{Lapa}). As mentioned previously, the factorization of an AfTO $\mathcal{A}_f=\mathcal{A}_b\times\{1,f\}$ is not unique and we can obtain an equivalent factorization $\mathcal{A}_f=\mathcal{A}_b^\beta\times\{1,f\}$ using a homomorphism $\beta:\mathcal{A}_b\rightarrow \mathbb{Z}_2$. So as a consistency check, we show that for every $\mathcal{A}_b^\beta$, there exists a corresponding factorization of the GfTO as in Eqs.~(\ref{factor1}) and (\ref{factor2}). To see this, we can use the same lemma from Ref. \cite{Drinfeld} to conclude that for every homomorphism $\beta$, there must exist an anyon $\tilde{b}\in\mathcal{A}_b$ such that \begin{equation} (-1)^{\beta(a)}=M_{a,\tilde{b}}\,,\;\; \forall a \in\mathcal{A}_b \end{equation} So we see that now the GfTO factorizes as $\mathcal{A}_b^\beta\times \{1,\tilde{b}\phi ,f, \tilde{b}\phi f\}$ or $\mathcal{A}_b^\beta\times \{1,\tilde{b}\phi ,f\}$. \subsubsection{Example: U$(1)_4\times$ $\overline{IQH}$} Let us give an example to illustrate the factorization property of GAfTO.
We consider the multi-component U$(1)$ Chern-Simons theory \begin{equation} \mathcal{L}=\frac{1}{4\pi}K_{IJ}\epsilon^{\mu\nu\lambda}a_\mu^I \partial_\nu a_\lambda^J + \frac{1}{2\pi}t_I \epsilon^{\mu\nu\lambda} A_\mu \partial_\nu a_{\lambda}^I\, , \end{equation} with $K$-matrix \begin{equation} K = \left( \begin{array}{cc} 4 & \\ & -1 \end{array}\right) \end{equation} This describes an $\mathcal{A}_f=\mathcal{A}_b\times \{1,f\}=\mathbb{Z}_4\times \{1,f\}$ fermionic topological order, where the transparent fermion $f$ corresponds to the vector $l_f = \left(0,1\right)^T$, and $\mathbb{Z}_4=\{a,a^2,a^3,a^4=1\}$ is generated by anyon $a$ corresponding to vector $l_a=\left(1,0\right)^T$. The topological spins are given by $\theta_{a^p}=e^{ip^2\pi l_a^TK^{-1}l_a}=e^{ip^2\pi/4}$ and $\theta_f=e^{i\pi l_fK^{-1}l_f}=-1$. $A_\mu$ is a probe gauge field for the global U$(1)$ particle number symmetry. If we require that the fermion has charge one, then this implies that $q_f=1=l_f^TK^{-1}t= -t_2$. So the only freedom left is the first component of the charge vector $t=(t_1, -1)^T$. This freedom determines the Hall conductance, which is given by $\sigma_{xy}=t^TK^{-1}t=t_1^2/4-1$, and the U$(1)$ charges of the anyons in $\mathbb{Z}_4$: $q_a = l_a^TK^{-1}t=t_1/4$. Let us take $t_1=1$, such that $\sigma_{xy}=-3/4$ and $q_a=1/4$. If a system has U$(1)$ fermion number symmetry, then there is a preferred way to create a parity flux by adiabatically inserting $\pi$ flux of the U$(1)$ particle number symmetry. Let us denote the fermion parity flux obtained via this adiabatic procedure as $\phi_A$. As is well-known, the topological spin of $\phi_A$ is fixed by the Hall conductance. In particular, it holds that $\theta_{\phi_A} = e^{i\pi\sigma_{xy}/4}$ \cite{Goldhaber,ChengZaletel}. If we apply this formula to our example with $t_1=1$, we learn that $\theta_{\phi_A}=e^{-i3\pi/16}$ and therefore $\theta_{\phi_A^2} = e^{-i3\pi/4}$. This implies that $\phi_A\times\phi _A= af$ or $\phi_A\times \phi_A = a^3f$, which is at odds with the proposed factorization property of the GAfTO because we cannot find a parity flux $\phi= \phi_A a^p$ such that $\phi\times\phi= f$. However, the choice $t_1=1$ is not allowed. This is because if $q_a=1/4$, then $q_{a^4}=q_1= 1$. This is not possible if $t$ is the charge vector of U$(1)$ fermion number symmetry, because the trivial anyon should always have even fermion parity. So $t_1=2t'$ has to be even. With this property correctly incorporated, we find $\theta_{\phi_A} = e^{i\pi (t'^2-1)/4}$ and $\theta_{\phi_A^2}=e^{i\pi(t'^2-1)}=(-1)^{t'+1}$. Under a shift $t'\rightarrow t'+4$, the anyon charges change as $q_a\rightarrow q_a + 2$, and $\theta_{\phi_A}$ remains invariant. So the only four remaining cases we have to consider are $t'=0,1,2,3$, corresponding to respectively $q_a = 0,1/2,1,3/2$. Let us work through the case where $t'=1$. With $t'=1$, it holds that $\phi_A\times\phi_A = a^2f$. We can now define $\phi = \phi_A a^3$, such that $\phi \times\phi = f$ and $M_{a,\phi}=M_{a,a^3}M_{a,\phi_A} = e^{i3\pi/2}e^{i\pi q_a} = 1$. Therefore, the GfTO factorizes as $\{a,a^2,a^3,1\}\times \{1,\phi_Aa^3,f,\phi_Aa^3f\}$. The factorization for other choices of $t'$ can be obtained in a similar way. \section{Numerical identification of Abelian fermionic topological order} An important question is how one can numerically identify the type of topological order realized by a particular microscopic lattice Hamiltonian.
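Before addressing this, we note that the data quoted in the U$(1)_4\times\overline{IQH}$ example above can be reproduced mechanically from the $K$-matrix. A minimal Python sketch for the allowed choice $t'=1$ (i.e. $t=(2,-1)^T$) follows; it simply evaluates the formulas already given.
\begin{verbatim}
import numpy as np

# K-matrix and charge vector of the U(1)_4 x conjugate-IQH example, with t' = 1.
K = np.array([[4, 0], [0, -1]])
Kinv = np.linalg.inv(K)
t = np.array([2, -1])
l_a, l_f = np.array([1, 0]), np.array([0, 1])

spin = lambda l: np.exp(1j * np.pi * (l @ Kinv @ l))   # theta_l = e^{i pi l^T K^-1 l}
charge = lambda l: l @ Kinv @ t                        # q_l = l^T K^-1 t

sigma_xy = t @ Kinv @ t
print(sigma_xy, charge(l_a), spin(l_f))   # 0.0, 0.5, -1 (transparent fermion)

theta_phiA = np.exp(1j * np.pi * sigma_xy / 4)         # spin of the adiabatic pi-flux
print(theta_phiA, spin(2 * l_a + l_f))    # both 1: consistent with phi_A x phi_A = a^2 f
\end{verbatim}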
In the past decade, it has become clear that the ground state wavefunctions contain a lot of (if not all) information about the anyonic excitations. The first example of a ground-state property that can be used to diagnose topological order is the topological entanglement entropy \cite{KitaevPreskill,LevinWen}, which gives access to the total quantum dimension. By looking not only at the entanglement entropy, but at the entire spectrum of the reduced density matrix corresponding to some spatial region in the ground state wavefunction, one also obtains universal information about possible gapless edge modes of the system \cite{LiHaldane}. This correspondence between the entanglement spectrum and edge spectrum has been worked out in full detail for free fermion systems in Refs. \cite{Alexandradinata,Fidkowski}. For systems on a torus, Ref. \cite{ZhangGrover} identified the ground states with a definite anyon flux through one of the holes of the torus as those which are `minimally entangled states' (MES) with respect to cuts wrapping the hole under consideration. In the MES basis, one can obtain both the $S$ and $T$ matrices by taking certain wave function overlaps \cite{ZhangGrover,ZaletelMong}. Finally, using the same MES on the cylinder, one can also find the $T$ matrix by calculating the `momentum polarization' \cite{ZaletelMong,TuZhang,Cincio,Zaletel,He,Wen}. In section \ref{sec:mompol}, we first discuss the application of the momentum polarization technique to systems with non-trivial translational symmetry fractionalization, characterized by the presence of a non-trivial background anyon in each unit cell. Next to the entanglement contribution to momentum polarization as discussed in Refs. \cite{ZaletelMong,TuZhang,Cincio,Zaletel,He,Wen}, we identify an `eigenvalue contribution' which is determined by both the background anyon and the anyonic flux which labels the MES. The interplay of the entanglement and eigenvalue contributions to the momentum polarization is shown to lead to a physically intuitive picture for the behavior of the momentum polarization under a change of entanglement cut, from which one can numerically obtain the topological spin of the background anyon. Next, we discuss the application of the momentum polarization technique to fermion systems. It is shown that in fermionic systems not only non-trivial background anyons in $\mathcal{A}_b$, but also transparent `background fermions' give rise to a non-trivial eigenvalue contribution to the momentum polarization, which results entirely from the fermionic anti-commutation relations of the microscopic fermions. This eigenvalue contribution from background fermions also fits nicely with the physically intuitive picture for the dependence of the momentum polarization on the entanglement cut. In section \ref{sec:numid} we use the results of section \ref{sec:gauging} to describe a minimal scheme to identify a general AfTO in numerics, without relying on charge conservation symmetry. In particular, we make use of the special structure of GAfTOs to show that, compared to bosonic systems, the only additional piece of information that needs to be determined is $\theta_\phi$, i.e. the topological spin of the fermion parity flux. We provide a concrete (minimal) procedure to obtain $\theta_\phi$ by calculating the momentum polarization, for which one can rely on the results of section \ref{sec:mompol}.
\subsection{Momentum polarization in the presence of background anyons and its application to fermion systems}\label{sec:mompol} \subsubsection{Bosonic systems} Before turning to fermion systems, we first discuss the concept of momentum polarization in boson or spin systems. Consider a MES on a cylinder with an anyon flux of the type $a$ through the hole of the cylinder. Let us denote this MES as $|\psi[a]\rangle$. We will call the direction along the axis of the cylinder the $x$-direction, and the direction wrapping the hole the $y$-direction. The size of the cylinder is given by $N_x\times N_y$ unit cells. We now choose a cut along the $y$-direction close to the middle of the cylinder, dividing the cylinder in two. The length of the left half is then $N_x^L$, while the length of the right half is $N_x^R$, such that $N_x^L+N_x^R=N_x$. With this cut, the translation operator in the $y$-direction can be written as a tensor product between the translation operator on the left half and the translation operator on the right half: $T_y = T_y^L\otimes T_y^R$. With these definitions in place, momentum polarization was defined in Refs. \cite{ZaletelMong,TuZhang,Cincio} as the expectation value $\langle\psi[a]|T_y^L|\psi[a]\rangle$. It was found that this expectation value scales with $N_y$ as \begin{equation}\label{mompol} \langle\psi[a]|T_y^L|\psi[a]\rangle = \exp\left(\frac{2\pi i}{N_y}\left( h_a -\frac{c_-}{24}\right) - \alpha N_y \right)\, , \end{equation} where $\theta_a = e^{2\pi i h_a}$ is the topological spin of anyon $a$, and $c_-$ is the chiral central charge. The complex number $\alpha$ is non-universal. One important assumption for the validity of Eq. (\ref{mompol}) is that there is no translational symmetry fractionalization. As we explain in more detail below, with non-trivial translational symmetry fractionalization we find that Eq. (\ref{mompol}) needs to be generalized as follows: \begin{equation}\label{mompolgeneral} \langle\psi_b[C,a]|T_y^{C,L}|\psi_b[C,a]\rangle = \exp\left(\frac{2\pi i}{N_y}\left( h_a -\frac{c_-}{24}\right) + i\Theta_{a,b}N_x^{C,L} - \alpha_C N_y \right)\, , \end{equation} where the notation for the MES $|\psi_b[C,a]\rangle$ now depends on two anyon labels $a$ and $b$, and on an integer $C$ which denotes the position of the entanglement cut. The anyon $a$ is again the anyon flux through the hole of the cylinder, measured along cut $C$, and $b$ is a `background anyon' which sits inside every unit cell \cite{Zaletel3,ChengZaletel}. The translation operator $T^{C,L}_y$ acts on the left of the cut labeled by $C$. On the right-hand side, $N_x^{C,L}$ is an integer which corresponds to the length of the left half of the cylinder, and $\alpha_C$ is a non-universal complex number also depending on the cut. Although $\alpha_C$ is non-universal, we will argue that the difference $\Delta\alpha_C = \alpha_{C+1}-\alpha_C$ between two neighboring cuts is universal. The interpretation of $\Theta_{a,b}$ will be explained in the next paragraph. To explain the general form of the momentum polarization formula Eq. (\ref{mompolgeneral}), it is useful to decompose the momentum polarization in an `eigenvalue contribution' and an `entanglement contribution'. Let us consider translationally invariant systems, such that $|\psi_b[C,a]\rangle$ is an eigenstate of $T_y$.
The $y$-momentum of the MES is then determined by the anyon flux $a$ and the background anyon $b$ as follows: \begin{equation}\label{Teig} T_y|\psi_b[C,a]\rangle = e^{ i \Theta_{a,b} N_x}|\psi_b[C,a] \rangle\, , \end{equation} where $e^{i\Theta_{a,b}} = M_{a,b}$ \cite{Zaletel3,ChengZaletel}. This background anyon has to be non-trivial in systems where a Lieb-Schultz-Mattis-Oshikawa-Hastings (LSMOH) obstruction to a gapped trivial featureless phase is present \cite{LSM,Oshikawa1,Hastings,Oshikawa2,Zaletel3}. Equation (\ref{Teig}) holds irrespective of whether the system is bosonic or fermionic, although in general the types of topological orders which can satisfy the LSMOH obstruction are different in both cases \cite{FillingC}. From Eq. (\ref{Teig}) we can immediately identify the eigenvalue contribution to the momentum polarization as $e^{i\Theta_{a,b}N_x^{C,L}}$. To identify the entanglement contribution to momentum polarization (as discussed in Refs. \cite{ZaletelMong,TuZhang,Cincio,Zaletel,He,Wen}), we write the MES as \begin{eqnarray} |\psi_b[C,a]\rangle & = & \sum_{\alpha,\beta} \Psi_{C,a,b}^{\alpha,\beta}|\alpha\rangle_L\otimes|\beta\rangle_R \\ & = & \sum_{\mu} s_{C,a,b}^\mu\, |\mu\rangle_L\otimes |\mu\rangle_R \end{eqnarray} In the first line, we have used an arbitrary basis $|\alpha\rangle_L$ ($|\beta\rangle_R$) for the left (right) half of the cylinder. In the second line, the state is decomposed in the Schmidt basis. In the Schmidt basis, the action of translation in the $y$-direction can be written as \begin{eqnarray} T_y|\psi_b[C,a]\rangle & = & \sum_\mu s_{C,a,b}^\mu\, T^{C,L}_y|\mu\rangle_L\otimes T_y^{C,R}|\mu\rangle_R \\ & = & e^{ i \Theta_{a,b} N_x}\sum_{\mu\lambda\sigma} s_{C,a,b}^\mu [U^{*C,L}_{a,b}]_{\mu\lambda} [U^{C,R}_{a,b}]_{\mu\sigma}|\lambda\rangle_L\otimes|\sigma\rangle_R\, , \end{eqnarray} where in the second line, we have re-expanded $T^{C,L/R}_y|\mu\rangle_{L/R}$ in the Schmidt basis. Note that since $T_y^{C,L}$ and $T_y^{C,R}$ are unitary, so are $U^{C,L}_{a,b}$ and $U^{C,R}_{a,b}$. We have also separated out the eigenvalue factor $e^{i\Theta_{a,b}N_x}$. The MES $|\psi_b[C,a]\rangle$ is an eigenstate of $T_y$ if \begin{equation} \left(U_{a,b}^{C,L}\right)^\dagger S_{C,a,b}U_{a,b}^{C,R} = S_{C,a,b}\, , \end{equation} where $S_{C,a,b} =$diag$(s_{C,a,b}^\mu)$. This implies that $U^{C,L}_{a,b}=U^{C,R}_{a,b}=U^C_{a,b}$, where $U^C_{a,b}$ commutes with $S_{C,a,b}$. The entanglement contribution to momentum polarization is then entirely given in terms of the Schmidt values and the unitary matrix $U^C_{a,b}$, and takes the form \begin{equation} \mathrm{tr}(S^2_{C,a,b}U^{C}_{a,b}) = \exp\left(\frac{2\pi i}{N_y}\left( h_a -\frac{c_-}{24}\right) - \alpha_C N_y \right) \, , \end{equation} where we have again used the notation $\alpha_C$ to emphasize that $\alpha_C$ depends on the choice of cut. To understand how $\alpha_C$ depends on the cut, we note that because of the background anyon $b$ per unit cell, the anyon flux through the hole of the cylinder is not the same for every cut. The anyon fluxes for two neighboring cuts differ by the total background anyon charge enclosed by the two cuts, which is $b^{N_y}$. In other words, it holds that \begin{equation} |\psi_b[C,a]\rangle = |\psi_b[C+1,ab^{N_y}]\rangle \end{equation} The ribbon identity allows us to write $\theta_{ab^{N_y}} = \theta_a\theta_{b}^{N_y^2}M_{a,b}^{N_y}$.
So given the momentum polarization for a particular cut, we can obtain the momentum polarization for the neighboring cut by shifting \begin{equation}\label{intuitive} h_a \rightarrow h_a + h_b N_y^2 + \frac{\Theta_{a,b}}{2\pi}N_y \end{equation} From this we conclude that $\alpha_C$ depends on the cut as $\alpha_{C+1}=\alpha_C +2\pi i h_b$. So, interestingly, even though $\alpha_C$ is non-universal, by calculating this coefficient for two neighboring cuts one can numerically obtain the topological spin of the background anyon $b$. Also, note that Eq. (\ref{intuitive}) implies that even though the anyon flux through the hole of the cylinder depends on the choice of entanglement cut if $b$ is non-trivial, the momentum polarization nevertheless allows one to obtain a topological spin $h_a$ which is independent of the choice of cut, because it relies on a scaling in the cylinder circumference $N_y$. \subsubsection{Fermionic systems} For fermion systems on a cylinder, there are two types of boundary conditions: periodic and anti-periodic. For each type of boundary condition, one can find a set of MES. Let us start by considering MES in the anti-periodic sector. \emph{Anti-periodic boundary conditions --} With anti-periodic boundary conditions, the MES have anyon fluxes through the hole of the cylinder which are labeled by the anyons in $\mathcal{A}_f$, and \emph{not} by the different types of parity fluxes. The latter label MES in the \emph{periodic} sector, which we discuss below. So let us write a MES in the anti-periodic sector as \begin{equation} |\psi_{f^\sigma b}[C,f^\lambda a]\rangle\, , \;\;\;\sigma,\lambda\in \{0,1\}\,, \;\;a,b \in \mathcal{A}_b\, , \end{equation} where $f^\sigma b$ again denotes the background anyon and $f^\lambda a$ the anyon flux through the hole of the cylinder measured at cut $C$. Note that both the background anyon and the flux through the hole of the cylinder are labeled with anyons in $\mathcal{A}_f=\mathcal{A}_b\times\{1,f\}$, even though the ground state degeneracy on the torus is only given by $|\mathcal{A}_b|$, i.e. the number of anyons in $\mathcal{A}_b$. In the anti-periodic sector, the translation operator along the $y$-direction is defined as \begin{eqnarray} \tilde{T}_y: & c^\dagger_{(x,y)}\rightarrow c^\dagger_{(x,y+1)}\,,\qquad y\neq N_y-1\\ & c^\dagger_{(x,N_y-1)}\rightarrow -c^\dagger_{(x,1)} \end{eqnarray} Using this twisted translation operator, the momentum polarization is defined as \begin{equation}\label{mompolAP} \langle\psi_{f^\sigma b}[C,f^\lambda a]|\tilde{T}^{C,L}_y|\psi_{f^\sigma b}[C,f^\lambda a] \rangle = \exp\left(\frac{2\pi i}{N_y}\left( h_{f^\lambda a} -\frac{c_-}{24}\right) + i\Theta_{a,b}N_x^{C,L} - \alpha_C N_y \right) \end{equation} As in bosonic systems, the MES satisfy the following property: \begin{equation} |\psi_{f^\sigma b}[C,f^\lambda a]\rangle = |\psi_{f^\sigma b}[C+1,f^{\lambda+\sigma N_y} ab^{N_y}]\rangle\, , \end{equation} which via the replacement $h_{f^\lambda a}\rightarrow h_{f^\lambda a} + N_y^2 (\sigma h_f +h_b) + \Theta_{a,b}N_y/2\pi$ implies that $\alpha_{C+1}-\alpha_C = 2\pi i (\sigma h_f +h_b) = i(\sigma \pi + 2\pi h_b)$. The dependence of $\Delta\alpha_C$ on $\sigma$ arises from the eigenvalue contribution to the momentum polarization. To see this, consider the situation where $\mathcal{A}_b=1$. In this case, $\sigma$ corresponds to the fermion parity per unit cell, which means that on the torus the fermion parity of $|\psi_{f^\sigma}[C,1]\rangle$ is given by $(-1)^{\sigma N_x N_y}$.
The momentum in the $y$-direction on the torus (with periodic boundary conditions in the $x$-direction) is then given by \begin{equation}\label{transeig1} \tilde{T}_y|\psi_{f^\sigma}[C,1]\rangle = (-1)^{\sigma N_x N_y}|\psi_{f^\sigma}[C,1]\rangle \end{equation} This property can readily be checked for band insulators, where $\sigma$ is the number of filled bands modulo $2$, and also follows from fermionic tensor network descriptions of gapped ground states \cite{fMPS,fPEPS}. From Eq. (\ref{transeig1}) we can identify the eigenvalue contribution to the momentum polarization as $e^{i\sigma\pi N_x^{C,L}N_y}$, which indeed leads to the dependence of $\alpha_C$ on the cut as described above. The last aspect of the MES in the anti-periodic sector that we need to comment on is the role of $f^\lambda$. The choice of $\lambda = 0,1$ is a non-universal property of the MES, as can be seen by noting that the value of $\lambda$ can be flipped by adding a single electron in a $k_y = \pi/N_y$ momentum state on the left of the cut, which changes the (eigenvalue contribution to the) momentum polarization accordingly by a factor of $e^{i\pi/N_y}$. This reflects the fact that the ground state degeneracy on the torus is given by $|\mathcal{A}_b|$, and not $|\mathcal{A}_f|$. \emph{Periodic boundary conditions --} On a cylinder with periodic boundary conditions, the anyonic fluxes which label the MES are given by the different fermion parity fluxes. So, with periodic boundary conditions, we can write the MES as \begin{equation} |\psi_{f^\sigma b}[C,\phi a f^\lambda]\rangle\, , \;\;\;\sigma,\lambda\in \{0,1\}\,, \;\;a,b \in \mathcal{A}_b\, , \end{equation} where, as before, $\phi$ is the fermion parity flux which braids trivially with all anyons in $\mathcal{A}_b$. The momentum polarization in the periodic sector is then given by \begin{equation}\label{mompolP} \langle\psi_{f^\sigma b}[C,\phi a f^\lambda]| T_y^{C,L}|\psi_{f^\sigma b}[C,\phi a f^\lambda]\rangle = \exp\left(\frac{2\pi i}{N_y}\left( h_{\phi a} -\frac{c_-}{24}\right) + i\Theta_{\phi a,bf^\sigma}N_x^{C,L} - \alpha_C N_y \right) \end{equation} The by now familiar property of the MES: \begin{equation} |\psi_{f^\sigma b}[C,\phi a f^\lambda]\rangle = |\psi_{f^\sigma b}[C+1,\phi ab^{N_y} f^{\lambda+\sigma N_y}]\rangle \end{equation} implies that $\alpha_C$ depends on the cut as $\alpha_{C+1}-\alpha_C = i(\sigma \pi + 2\pi h_b)$. As a consistency check, let us again take $\mathcal{A}_b = 1$ such that $\sigma$ is the fermion number per site. This means that if we define the state on the torus with periodic boundary conditions along both cycles, then it holds that \begin{equation} (-1)^{\hat{F}}|\psi_{f^\sigma }[C,\phi ]\rangle = (-1)^{\eta+\sigma N_x N_y}|\psi_{f^\sigma }[C,\phi ]\rangle \, , \end{equation} where $\hat{F}$ is the fermion number operator, and $\eta=1$ if $\phi$ is non-Abelian \cite{ReadGreen} and $\eta=0$ otherwise. With this definition, one finds that on a torus the momentum in the $y$-direction is given by \begin{equation}\label{neighboringcuts} T_y|\psi_{f^\sigma }[C,\phi ]\rangle = (-1)^{ \sigma N_x(N_y + 1)} |\psi_{f^\sigma }[C,\phi ]\rangle\, . \end{equation} Again, this property can easily be checked for band insulators, and can also be seen in the fermionic tensor network formalism \cite{fMPS,fPEPS}. Eq.
(\ref{neighboringcuts}) implies that the momentum polarizations for two neighboring cuts indeed differ by a factor $(-1)^{\sigma(N_y +1)} = e^{i\left(\Theta_{\phi,f^\sigma}+\sigma \pi N_y\right)}$, which arises entirely from the eigenvalue contribution to the momentum polarization. Finally, we note that the momentum polarization with periodic boundary conditions is independent of $f^\lambda$. This agrees with the fact that the topological spins of the fermion parity fluxes are invariant under the addition of a transparent fermion. In the appendix, we illustrate our general discussion of momentum polarization in fermionic systems by applying it to a Chern insulator and a topological $p+ip$ superconductor. In particular, we show that both in the anti-periodic and periodic sectors, the dependence of the momentum polarization on the choice of cut is indeed captured respectively by Eqs. (\ref{mompolAP}) and (\ref{mompolP}), with $\alpha_{C+1}-\alpha_C = i\pi$. \subsection{Numerically determining the topological order}\label{sec:numid} Having uncovered the special structure of GAfTO's in section \ref {sec:gauging}, and the details of momentum polarization in fermionic systems in section \ref{sec:mompol}, we now have all the necessary ingredients at our disposal to outline a minimal numerical detection scheme for the most general AfTO. First, by calculating the momentum polarizations for the different MES on the cylinder in the anti-periodic sector we obtain the topological spins of the anyons in $\mathcal{A}_b$, up to a minus sign ambiguity. Secondly, for the MES on the torus with anti-periodic boundary conditions along both cycles, we can use the formalism of Refs. \cite{ZaletelMong,ZhangGrover,Cincio} to obtain the unitary $S$-matrix of the MTC corresponding to $\mathcal{A}_b$, just as one does for bosonic systems. From the $S$-matrix, one obtains the fusion rules (i.e. the group structure) of $\mathcal{A}_b$ via the Verlinde formula \cite{Verlinde} \begin{equation} N_{a,b}^c = \sum_d \frac{S_{a,d}S_{b,d}S_{d,c}^*}{S_{0,d}}\, , \end{equation} Once the group structure of $\mathcal{A}_b$ is obtained, this can be used to partially fix the sign ambiguity in the topological spins of the anyons in $\mathcal{A}_b$, by imposing that $S_{a,b} = N^{-1/2}M_{a,b} = N^{-1/2}\theta_{ab}/(\theta_a\theta_b)$, where $N$ is the number of MES (= the number of anyons in $\mathcal{A}_b$). This fixes the topological spins up to a homomorphism $\beta$ from $\mathcal{A}_b$ to $\mathbb{Z}_2$. This is the same homomorphism as discussed above, and different $\beta$ correspond to different ways of writing $\mathcal{A}_f = \mathcal{A}_b^\beta\times \{1,f\}$, where $\mathcal{A}_b^\beta = \{af^{\beta(a)}|a\in\mathcal{A}_b\}$. At this point, one has to choose a particular homomorphism to fix the topological spins. To completely determine the topological quantum phase of the system of interest one does not only need to know $\mathcal{A}_f$, but also the complete algebraic data corresponding to the modular extension $\mathcal{A}_b\times \{1,f,\phi,f\phi\}$ (if $\phi$ is Abelian) or $\mathcal{A}_b\times\{1,f,\phi\}$ (if $\phi$ is non-Abelian). Let us first consider the case where the fermion parity fluxes are Abelian. Since $\phi$ is Abelian, it holds that either $\phi \times \phi = 1$ or $\phi\times\phi=f$. This in turn implies that $\theta_\phi^4=1$ or $\theta_\phi^4=-1$, which gives us eight different possible values for $\theta_\phi$. 
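As a simple illustration of the Verlinde step above (not tied to any particular microscopic model), the following Python sketch extracts the fusion rules of a small Abelian theory from an $S$-matrix supplied by hand; here we use the $S$-matrix of the toric code.
\begin{verbatim}
import numpy as np

# S-matrix of the toric code, in the anyon order (1, e, m, f).
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=complex)

def verlinde(S):
    """Fusion multiplicities N_{ab}^c = sum_d S_{ad} S_{bd} S*_{cd} / S_{0d}."""
    n = len(S)
    N = np.zeros((n, n, n))
    for a in range(n):
        for b in range(n):
            for c in range(n):
                N[a, b, c] = np.real(np.sum(S[a] * S[b] * np.conj(S[c]) / S[0]))
    return np.rint(N).astype(int)

print(verlinde(S)[1, 2])   # [0 0 0 1]: e x m = f, as expected
\end{verbatim}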
Once we know the topological spin of the Abelian parity flux, we have completely fixed which of the eight possible modular extensions is realized. When $\mathcal{A}_b=1$, this was shown by Kitaev as a part of his `$16$-fold way' \cite{Kitaev}. A non-Abelian parity flux satisfies $\phi\times\phi=1+f$. If $\mathcal{A}_b=1$, then Kitaev has shown that there exist exactly eight different modular extensions with non-Abelian parity fluxes \cite{Kitaev}. These eight different modular extensions correspond to the eight different Ising MTCs, and are uniquely identified by the topological spin of the fermion parity flux. In Ref. \cite{Wang}, it was conjectured that Kitaev's 16-fold way generalizes to all fTOs (Abelian and non-Abelian), i.e. that every fTO has exactly 16 different modular extensions, 8 of which have Abelian fermion parity flux and 8 which have non-Abelian fermion parity flux. When the fTO is Abelian, it was shown above that the GfTO factorizes as $\mathcal{A}_b\times\{1,f,\phi\}$, so there are indeed eight different non-Abelian modular extensions which again correspond to the eight different Ising categories. From the above discussion, we learn that to complete the numerical identification of an Abelian fermionic topological order we only need to know $\theta_\phi$. To access $\theta_\phi$, one first calculates the momentum polarizations of the MES on the cylinder with periodic boundary conditions. This provides a set of topological spins, which correspond to $\theta_{a\phi}$. Because $\phi$ braids trivially with all anyons in $\mathcal{A}_b$, these topological spins factorize as $\theta_{a\phi} = \theta_a \theta_\phi$. This implies that we can organize the topological spins in the periodic sector in an ordered vector, which is proportional to the ordered vector of topological spins in the anti-periodic sector. The proportionality constant obtained in this way is unique and corresponds to $\theta_\phi$, such that one can simply try all the possible permutations of the topological spins in the periodic sector until one finds one where the required proportionality is realized. To see why the proportionality constant is unique we need to show that we cannot permute the vector of topological spins in the periodic sector to obtain a vector that is proportional to (and different from) the original, unpermuted one. Since we are only interested in permutations that do not leave the vector invariant, there can be no element in the vector that is fixed under the permutation. This means that the permutation acts as $\theta_{a\phi} \rightarrow \theta_{ad\phi}$, where $d$ is an anyon in $\mathcal{A}_b$ which is not the trivial anyon. Because $\theta_{ad\phi} = \theta_{a\phi}\theta_d M_{a,d}$, the resulting vector is proportional to the unpermuted one if and only if $M_{a,d}$ is independent of $a$. But this is a violation of modularity, and therefore this cannot happen. This completes the procedure of how to uniquely characterize an AfTO in numerics. Before concluding the discussion on numerical identification of fermionic topological orders, let us consider what happens if one had made a different choice of homomorphism $\beta$.
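(Before doing so, we give a minimal sketch of the spin-matching step just described; the spin lists below are hypothetical stand-ins for the momentum-polarization output, not data from an actual simulation.)
\begin{verbatim}
import numpy as np
from itertools import permutations

def extract_theta_phi(spins_antiperiodic, spins_periodic, tol=1e-8):
    """Brute-force search for theta_phi: find a permutation of the periodic-sector
    spins equal to theta_phi times the anti-periodic-sector spins, elementwise."""
    ap = np.array(spins_antiperiodic)
    for perm in permutations(spins_periodic):
        ratio = np.array(perm) / ap
        if np.allclose(ratio, ratio[0], atol=tol):
            return ratio[0]
    return None

# Hypothetical data: A_b with spins {1, i} and a parity flux with theta_phi = e^{i pi/8};
# the periodic-sector list is given in an unknown (shuffled) order.
theta_phi = np.exp(1j * np.pi / 8)
print(extract_theta_phi([1, 1j], [theta_phi * 1j, theta_phi * 1]))   # e^{i pi/8}
\end{verbatim}
We now return to the question of the choice of $\beta$.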
In that case, $\mathcal{A}_b$ becomes $\mathcal{A}_b^\beta = \{af^{\beta(a)}|a\in \mathcal{A}_b\}$ and one would interpret the set of topological spins obtained from momentum polarization in the periodic sector as $\theta_{af^{\beta(a)}\tilde{\phi}} = \theta_{af^{\beta(a)}\phi \tilde{b}}$, where $\tilde{b}$ is the unique anyon that satisfies $(-1)^{\beta(a)} = M_{a,\tilde{b}}$. Using this property, one can factorize the topological spins in the periodic sector as \begin{equation} \theta_{af^{\beta(a)}\tilde{\phi}}=\theta_{af^{\beta(a)}\phi \tilde{b}} = \theta_{af^{\beta(a)}}\theta_{\phi \tilde{b}} = \theta_{af^{\beta(a)}}\theta_{\tilde{\phi}} \, , \end{equation} so we can again permute them in such a way that they become proportional --as a vector-- to the vector of topological spins $\theta_{af^{\beta(a)}}$ in the anti-periodic sector. Of course, the set of topological spins in the periodic sector obtained from momentum polarization is independent of our choice of $\beta$, as can easily be verified: \begin{eqnarray} \theta_{af^{\beta(a)}\tilde{\phi}} = \theta_{af^{\beta(a)}\phi \tilde{b}}= \theta_{a\tilde{b}\phi} \end{eqnarray} So we find that the final identification of the AfTO is independent of our choice of $\beta$, as it should be, of course. One only needs to keep in mind that when comparing two different AfTOs with the same $\mathcal{A}_f$, one should always use the same choice of $\beta$ to compare the topological spins of the fermion parity fluxes $\phi$. \section{Gauging fermion parity with boundaries} \subsection{Fermionic vs. bosonic gapped edges} In this section, we address the question of what happens at the boundary of an AfTO after fermion parity is gauged. To gain some intuition about this question, let us consider a fTO which has a gapped boundary $\mathcal{B}_0$ separating it from the trivial phase. Now imagine gauging the fermion parity everywhere in the bulk, except in a narrow strip along the edge. After gauging, the GfTO in the bulk is separated by a boundary $\mathcal{B}_1$ from a narrow strip of the original ungauged fTO, which itself is separated from the trivial phase by the gapped boundary $\mathcal{B}_0$. See figure \ref{fig:B1}(a) for an illustration. If $\mathcal{B}_1$ is also gapped, then this construction gives a gapped boundary separating the GfTO from the trivial phase. \begin{figure} \begin{center} \includegraphics[scale=0.5]{B1.pdf} \caption{A GfTO obtained by gauging a fTO with a gapped boundary $\mathcal{B}_0$ to the trivial phase. The gauging is done such that a narrow strip along the boundary is unaffected and remains in the original fTO phase. The boundary separating the GfTO from the ungauged fTO is denoted as $\mathcal{B}_1$.}\label{fig:B1} \end{center} \end{figure} To argue why we can always take $\mathcal{B}_1$ to be gapped, let us first consider the case of a bTO. In bosonic systems, the `ungauging' procedure corresponds to condensing the gauge charges \cite{Barkeshli}, which are always bosonic and braid trivially with each other. Condensing the gauge charges results in confinement of the gauge fluxes, which means that after condensation the energy of a flux pair grows linearly with the spatial separation between the fluxes. Condensation of the gauge charges thus transforms the gauge fluxes into the symmetry defects of the ungauged phase \cite{Barkeshli}.
Using the general relation between anyon condensation and gapped boundaries \cite{Levin,KitaevKong}, we can then always construct the gapped boundary $\mathcal{B}_1$ between the gauged and ungauged bTO as a domain wall where the gauge charges get condensed. In GfTOs, the $\mathbb{Z}_2$ gauge charge $\tilde{f}$ is by definition a fermion. Because $\tilde{f}$ has non-trivial topological spin, it is impossible to construct a gapped boundary where $\tilde{f}$ gets condensed. However, at a domain wall between the GfTO and the original fTO, we can condense the bound state $\tilde{f}f$, where $f$ is the transparent fermion of the fTO. Because $\tilde{f}f$ is a boson, this will result in a gapped boundary. See Refs. \cite{Aasen,Wan} for more details on this construction. At this point, we have obtained an argument that gauging fermion parity always preserves a gapped edge. Let us now connect this argument to the formalism of Lagrangian subgroups reviewed above in Secs. \ref{GappedB} and \ref{GappedF}, where it was stated that an Abelian bosonic (fermionic) TO admits a gapped edge iff it has a bosonic (fermionic) Lagrangian subgroup. Since the GfTO is modular it corresponds to a bTO, and so according to Sec. \ref{GappedB} we would expect it to have a bosonic Lagrangian subgroup if it admits a gapped boundary. However, this is far from clear from the argument presented above. In the `layered-boundary' construction the fermionic bulk gauge charge $\tilde{f}$ is bound to a transparent, microscopic fermion $f$ which only lives on the edge, and the resulting bound state is subsequently condensed. We will refer to such a gapped edge which relies on the presence of microscopic boundary fermions as a `fermionic gapped edge'. The conventional notion of a gapped edge for a bTO as used in Sec. \ref{GappedB}, however, does not permit the use of microscopic fermionic degrees of freedom on the boundary, and requires all condensed particles to be bosons. We will refer to such a gapped edge which does not contain microscopic fermion degrees of freedom as a `bosonic gapped edge'. Using this terminology, Secs. \ref{GappedB} and \ref{GappedF} then simply state that an Abelian bosonic (fermionic) TO admits a bosonic (fermionic) gapped edge iff it has a bosonic (fermionic) Lagrangian subgroup. So we see that the `layered-boundary' argument only tells us that we can construct a fermionic gapped edge for the GfTO, but not necessarily a bosonic gapped edge. In the next two sections, we will show the stronger statement that the GfTO always admits a bosonic gapped edge if the ungauged fTO admits a fermionic gapped edge. We will also show the converse implication, i.e. that an AfTO has a fermionic gapped edge if the gauged theory has a bosonic gapped edge. One of the implications of this result is that any bosonic topological order which is obtained by gauging a fermionic topological order with a gapped edge can be realized by a purely bosonic lattice Hamiltonian with a gapped edge. \subsection{Abelian parity flux} In this section, we will show that a GAfTO has a bosonic gapped edge iff the corresponding ungauged AfTO has a fermionic gapped edge, while assuming that the fermion parity flux is Abelian. The case with non-Abelian fermion parity flux will be discussed in the next section.
When the parity fluxes are Abelian it is not difficult to see that if we apply Eq.~(\ref{kit}) to the modular GAfTO, we get the following expression: \begin{equation}\label{GM1} e^{2\pi ic_-/8} = \frac{\theta_\phi}{\sqrt{N_b}}\sum_{a\in\mathcal{A}_b}\theta_a \,, \end{equation} where $N_b$ is the number of anyons in $\mathcal{A}_b$. Because gauging a discrete symmetry cannot change the chiral central charge, Eq. (\ref{GM1}) not only determines $c_-$ of the GAfTO (mod 8), but also of the original ungauged fTO \cite{TianLan1,TianLan2}. If we apply Eq. (\ref{GM1}) to a different factorization $\mathcal{A}_b^\beta\times\{1,\tilde{b}\phi,f,\tilde{b}\phi f\}$ of the GAfTO, we get \begin{eqnarray} e^{2\pi ic_-/8} & = & \frac{\theta_{\tilde{b}\phi}}{\sqrt{N_b}}\sum_{a\in\mathcal{A}_b^\beta}\theta_a \\ & = & \frac{\theta_{\phi}\theta_{\tilde{b}}}{\sqrt{N_b}} \sum_{a\in\mathcal{A}_b}\theta_a (-1)^{\beta(a)} \label{GM2} \end{eqnarray} Because $M_{a,\tilde{b}}=(-1)^{\beta(a)}$ for all $a\in\mathcal{A}_b$, it follows that $\tilde{b}^2$ braids trivially with all anyons in $\mathcal{A}_b$, and must therefore be the trivial anyon. From $\tilde{b}^2=1$, we know that $\theta_{\tilde{b}}^4=1$. So by equating Eqs. (\ref{GM1}) and (\ref{GM2}), we find that the insertion of the minus signs $(-1)^{\beta(a)}$ in the sum of the topological spins of $\mathcal{A}_b$ changes the value of that sum by a multiplicative factor which is a fourth root of unity, and equals $\theta_{\tilde{b}}^*$. Now we are equipped to show the following result: \begin{center} \emph{A GAfTO has a bosonic Lagrangian subgroup $\mathcal{L}_b$ if and only if the corresponding ungauged AfTO has a fermionic Lagrangian subgroup $\mathcal{L}_f$, and an integer chiral central charge which is a multiple of eight.} \end{center} Using the bulk-boundary connection reviewed in the previous sections, this result then implies that a GfTO has a bosonic gapped edge if and only if the ungauged fTO has a fermionic gapped edge. We first show the `if' direction and assume that the fTO has a fermionic Lagrangian subgroup $\mathcal{L}_f$ and zero chiral central charge (mod 8). To start, we observe that because the anyons in $\mathcal{L}_f$ braid trivially with each other, it follows from the ribbon identity that their topological spins form a $\mathbb{Z}_2$ valued representation of $\mathcal{L}_f$. In other words, $\theta_{l_i}=(-1)^{\alpha(l_i)}$ such that $\alpha(\cdot):\mathcal{L}_f\rightarrow \mathbb{Z}_2=\{0,1\}$ is a homomorphism. In general this homomorphism cannot be extended to a homomorphism from $\mathcal{A}_b$ to $\mathbb{Z}_2$. Using this homomorphism, we define $\mathcal{L}'_f = \{l_if^{\alpha(l_i)}|l_i\in\mathcal{L}_f\}$ such that all anyons in $\mathcal{L}_f'$ have trivial topological spin. We now want to extend $\mathcal{L}'_f$ in such a way that it becomes a bosonic Lagrangian subgroup of the GAfTO. Because all anyons in $\mathcal{L}_f'$ braid trivially with $f$ and because $f$ itself cannot be in $\mathcal{L}_b$, the extended Lagrangian subgroup $\mathcal{L}_b$ will have to contain a fermion parity flux. At this point, it is clear that we can find a bosonic Lagrangian subgroup $\mathcal{L}_b=\mathcal{L}'_f\times\{1,\phi c\}$ of the GfTO, provided that there exists an anyon $c\in \mathcal{A}_b$ such that: (1) $M_{c,l}= (-1)^{\alpha(l)}$ for all $l\in\mathcal{L}_f$, and (2) $\theta_\phi = \theta_c^*$. Here, we have used the factorization property of GAfTO to define $\phi$ as the parity flux which braids trivially with all anyons in $\mathcal{A}_b$. 
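(As an aside, relation \eqref{GM1} is easy to check numerically on a small example. The following Python sketch does so for the bosonic sector $\mathcal{A}_b=\{1,a,a^2,a^3\}$ of the U$(1)_4\times\overline{\text{IQH}}$ AfTO revisited in the example further below, assuming the standard U$(1)_4$ spins $\theta_{a^p}=e^{i\pi p^2/4}$ and the flux spin $\theta_\phi=e^{-i\pi/4}$; these concrete values are illustrative assumptions and not part of the general argument.)
\begin{verbatim}
import cmath

# Assumed data for the U(1)_4 x IQH-bar example: bosonic anyons a^p with
# spins theta_{a^p} = exp(i*pi*p^2/4), and an Abelian parity flux with
# spin theta_phi = exp(-i*pi/4) that braids trivially with all of A_b.
theta_b = [cmath.exp(1j * cmath.pi * p**2 / 4) for p in range(4)]
theta_phi = cmath.exp(-1j * cmath.pi / 4)
N_b = len(theta_b)

# Right-hand side of Eq. (GM1): theta_phi / sqrt(N_b) * sum_a theta_a.
rhs = theta_phi / cmath.sqrt(N_b) * sum(theta_b)

# The result should be a phase exp(2*pi*i*c_-/8); extract c_- modulo 8.
c_minus = round(8 * cmath.phase(rhs) / (2 * cmath.pi)) % 8
print("|rhs| =", abs(rhs), " c_- mod 8 =", c_minus)
\end{verbatim}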
Let us first show that there exists an anyon $c$ satifying property (1), i.e. $M_{c,l}=(-1)^{\alpha(l)}$ for all $l\in\mathcal{L}_f$. If the homomorphism $\alpha: \mathcal{L}_f\rightarrow \mathbb{Z}_2$ is trivial, then property (1) is also trivial and $c$ is simply the identity anyon. Let us therefore focus on the case where $\alpha$ is non-trivial and write $\mathcal{A}_b = \{\mathcal{L}_f, \mathcal{L}_f\times c_1, \mathcal{L}_f\times c_2,\dots,\mathcal{L}_f\times d_1,\mathcal{L}_f\times d_2,\dots \}$, where $c_i, d_i$ are a set of anyons not in $\mathcal{L}_f$ of which the $c_i$ satisfy criterion (1), and the $d_i$ do not. Using Eq. (\ref{GM1}), we find \begin{eqnarray} \sqrt{N_b}\theta_\phi^* e^{2\pi ic_-/8} & = & \sum_{l\in\mathcal{L}_f}\theta_l + \sum_{i}\sum_{l\in\mathcal{L}_f}\theta_{lc_i}+\sum_{i}\sum_{l\in\mathcal{L}_f}\theta_{ld_i} \\ & = & \sum_{i}\theta_{c_i}\sum_{l\in\mathcal{L}_f}(-1)^{\alpha(l)}M_{l,c_i}+\sum_{i}\theta_{d_i}\sum_{l\in\mathcal{L}_f}(-1)^{\alpha(l)}M_{l,d_i} \\ & = & N_{L_f}\sum_{i}\theta_{c_i}\label{ci}\, , \end{eqnarray} where $N_{L_f}$ is the number of anyons in $\mathcal{L}_f$. In the second and third line we have applied Schur's orthogonality relations to the 1D irreps of $\mathcal{L}_f$. From this result, we see that there must exist at least one anyon $c_i$. To show that we can find an anyon $c$ satisfying both properties $(1)$ ($M_{c,l}=(-1)^{\alpha(l)}$), and $(2)$ ($\theta_c=\theta_\phi^*$), we proceed as follows. Because the $c_ic_j$ braid trivially with all anyons in $\mathcal{L}_f$, it follows from the definition of a fermionic Lagrangian subgroup that $c_ic_j \in \mathcal{L}_f$. This implies that \begin{equation} \theta_{c_ic_j} = (-1)^{\alpha(c_ic_j)} = M_{c_i,c_ic_j} \end{equation} Applying the ribbon identity to the left-hand side of this equation, we find \begin{equation} \theta_{c_i}\theta_{c_j} M_{c_i,c_j} = M_{c_i,c_i}M_{c_i,c_j} = \theta_{c_i}^2 M_{c_i,c_j} \Rightarrow \theta_{c_i} = \theta_{c_j} \end{equation} Because the topological spins of all the $c_i$ are the same, expression (\ref{ci}) for the chiral central charge becomes \begin{equation} e^{2\pi ic_-/8} = \frac{N_{L_f}}{\sqrt{N_b}}N_c \theta_\phi \theta_c \, , \end{equation} where $N_c$ is the number of $c_i$. From the assumption that the chiral central charge $c_-$ is a multiple of eight, we find that the topological spin of the $c_i$ indeed satisfies $\theta_c = \theta_\phi^*$. Because $N_{L_f}=\sqrt{N_b}$, it also follows that $N_c=1$, i.e. the anyon $c$ satisfying $\theta_c=\theta_\phi^*$ and $M_{l,c}=\theta_l$ for all $l\in\mathcal{L}_f$ is unique. The `only if' direction is almost trivial to show. If we assume that the GAfTO = $\mathcal{A}_b\times\{1,\phi,f,\phi f\}$ has a bosonic Lagrangian subgroup, we know that there exists a group of anyons $\mathcal{M}\subset\mathcal{A}_b$ that braid trivially with each other, and have the property that every $a\in\mathcal{A}_b$ which is not in $\mathcal{M}$ braids non-trivially with at least one anyon in $\mathcal{M}$. Because the anyons in $\mathcal{M}$ have trivial mutual braiding, it follows from the ribbon identity that their topological spins form a representation of $\mathcal{M}$: $\theta_{m_1}\theta_{m_2} = \theta_{m_1m_2}$ for all $m_1,m_2\in\mathcal{M}$. From the relation $\theta_{m}^2=M_{m,m}=1$ between topological spin and self-braiding, we also know that $\theta_{m}=\pm 1$ for all $m\in\mathcal{M}$. 
But this implies that $\mathcal{M}=\mathcal{L}_f$ is a fermionic Lagrangian subgroup of $\mathcal{A}_f=\mathcal{A}_b\times\{1,f\}$. \subsubsection{Example: gauging U$(1)_4\times\overline{IQH}$} To illustrate the general result we revisit the example discussed above, i.e. the $\mathcal{A}_f=\mathbb{Z}_4\times\{1,f\}=\{a,a^2,a^3,1\}\times\{1,f\}$ AfTO. Regardless of how we choose the charge vector $t=(2t',-1)$, this AfTO always has a fermionic Lagrangian subgroup given by $\mathcal{L}_f=\{1,a^2\}$. Recall that $\theta_{a^2}=-1$, so this would not be a valid Lagrangian subgroup if the system were bosonic. We first consider the case where the charge vector is given by $t=\left(2,-1\right)^T$. This implies that $\sigma_{xy}=0$, and $q_a=1/2$. This model is known to have a gapped edge, as it is equivalent to a fermionic $\mathbb{Z}_2$ gauge theory \cite{FillingC} (i.e. a fermionic toric code \cite{GuWang}). Let us again denote the fermion parity flux obtained by adiabatic flux insertion as $\phi_{A}$. Because $\sigma_{xy}=0$, $\phi_{A}$ is a boson. Because $a^2$ has charge one (recall that $q_{a^p} = pt'/2$), it also follows that $M_{\phi_{A},a^2}=-1$. From this we see that $\mathcal{L}_b=\{1,a^2f,\phi_{A},\phi_{A}a^2f\}=\{1,a^2f\}\times\{1,\phi_A\}=\mathbb{Z}_2\times\mathbb{Z}_2$ is a bosonic Lagrangian subgroup of the GAfTO. Let us now repeat this analysis for the case where $t=\left(0,-1\right)$. This choice of charge vector implies that $\sigma_{xy}=-1$ and $q_a=0$. This fTO is just the stacking of a $\sigma_{xy}=-1$ IQH state, and a purely bosonic U$(1)_4$ topological order (because all anyons in U$(1)_4$ have trivial fermion parity charge). The parity flux obtained by adiabatic flux insertion now has topological spin $\theta_{\phi_{A}}=e^{i\pi\sigma_{xy}/4}=e^{-i\pi/4}$. The flux $\phi_A$ also braids trivially with all anyons in $\mathcal{A}_b=\{1,a,a^2,a^3\}$ because they have zero charge. It is now easy to check that $\mathcal{L}_b=\{1,a\phi_{A},a^2f,a^3\phi_{A}f\}=\mathbb{Z}_4$ is the Lagrangian subgroup of the GAfTO. So in this example, the anyon $a$ plays the role of the special anyon $c$ which occured in the general proof above. \subsection{Non-Abelian parity flux} If the fermion parity flux, and therefore also the GfTO, is non-Abelian, we have to use a generalization of Eq. (\ref{kit}) to determine the chiral central charge modulo eight from the bulk data. Writing the quantum dimensions of the anyons in the GfTO order as $d_a$, the general expression for $c_-$ becomes \cite{Kitaev}: \begin{equation}\label{nonAb} e^{2\pi i c_-/8} = \frac{1}{\mathcal{D}} \sum_a d_a^2 \theta_a\, , \end{equation} where the total quantum dimension is given by $\mathcal{D}=\sqrt{\sum_a d_a^2}$. In a GAfTO $\mathcal{A}_b\times\{1,\phi,f\}$, the only non-Abelian anyons are $\mathcal{A}_b\times \phi$, and these are therefore the only anyons which have a quantum dimension different from one. From the fusion rule $\phi\times\phi = 1+f$, it follows that $d_\phi = \sqrt{2}$. Applying Eq. (\ref{nonAb}) to a non-Abelian GAfTO, we find \begin{equation}\label{eqdouble} e^{2\pi i c_-/8} = \frac{\theta_\phi}{\sqrt{N_b}}\sum_{a\in\mathcal{A}_b} \theta_a \end{equation} This is exactly the same expression as the one we obtained for Abelian fermion parity fluxes. We can now easily argue that an AfTO with non-Abelian fermion parity fluxes can never have a gapped edge. First, we note that $N_b^{-1/2}\sum_{a\in\mathcal{A}_b}\theta_a$ is always an eighth root of unity. 
This is because Abelian topological orders have a multi-component U$(1)$ Chern-Simons description, such that the corresponding edge theories are chiral Luttinger liquids with integer chiral central charge \cite{XGWen}. On the other hand, if $\phi$ is non-Abelian, it has a topological spin $\theta_\phi = e^{2\pi i (2n+1)/16}$, where $n$ is an integer \cite{Kitaev}. So from Eq. (\ref{eqdouble}), it follows that $c_-$ is a half odd integer, which means that the edge is always chiral and cannot be gapped. \section{Conclusions} In this work we have explored the special structure of gauged Abelian fermionic topological orders, and we have exploited this structure to study both the numerical detection of such phases and the fate of the edge physics under the gauging process. We have outlined a minimal scheme to uniquely identify the AfTO realized by a microscopic lattice Hamiltonian, which does not make use of fermion number conservation symmetry. We have also shown that a gauged fTO can have a gapped bosonic edge to the vacuum if and only if the original, ungauged fTO admits a fermionic gapped edge. An obvious question is of course how to generalize these results to systems with non-Abelian anyons. The mathematical framework required to address non-Abelian fermionic topological orders is developed and discussed in Refs. \cite{GuWangWen,TianLan1,TianLan2,Wang,Aasen}, and it is substantially more involved than the simple arguments used in this work. However, the understanding of Abelian systems provides a clear, intuitive picture of the physics involved, and hopefully this will be helpful for a rigorous study of non-Abelian systems. We leave such a study for future work. \subsubsection{Acknowledgements} I am grateful to Meng Cheng for pointing out theorem 3.13 from Ref. \cite{Drinfeld} to me, and for a collaboration on a previous project which was very useful for the present paper. I also want to thank Mike Zaletel for inspiring discussions which formed the motivation for the present work, and Johannes Motruk for discussions and for pointing me to some important references. During the completion of this work I was supported by the DOE, office of Basic Energy Sciences under contract no. DE-AC02-05-CH11231.
\section{Introduction} Let $\mathbb{F}_q$ be the finite field with $q=p^s$ elements, where $p$ is an odd prime. An important class of curves over finite fields is the class of Artin-Schreier curves, which are given by the equation $y^q-y=f(x)$ for some $f(x)\in \mathbb{F}_q[x]$. These curves have been extensively studied in several contexts, e.g. \cite{COS, Coulter, FP, H, Van}. This type of curve can be generalized to several variables, i.e., to Artin-Schreier hypersurfaces of the form $y^q-y=f(X)$, with $f(X) \in \mathbb{F}_q[X]\setminus\{0\}$ and $X=(x_1,\dots,x_r)$. Information about the number of affine rational points of algebraic hypersurfaces over finite fields has many applications in coding theory, cryptography, communications and related areas, e.g. \cite{Aubry, GO,Stepanov,Van}. The first aim of this paper is to determine the number of $\mathbb{F}_{q^n}$-rational points of the Artin-Schreier curve $\mathcal C_{i}$ given by \[\mathcal C_i: y^q-y = x(x^{q^i}-x) -\lambda,\] where $i \in \mathbb N$ and $\lambda \in \mathbb{F}_{q^n}$. We denote by $N_n(\mathcal C_{i})$ the number of $\mathbb{F}_{q^n}$-rational points of $\mathcal C_{i}$. For $i\in\mathbb N$, we define $Q_i$ to be the map \begin{align*} Q_i : \mathbb{F}_{q^n} &\to \mathbb{F}_q\\ \alpha & \mapsto \mathrm{Tr}(\alpha(\alpha^{q^i}-\alpha)-\lambda), \end{align*} and $N_n(Q_i)$ denotes the number of zeroes of $Q_i$ in $\mathbb{F}_{q^n}$. Hilbert’s Theorem 90 implies that \[N_n(\mathcal C_{i}) = q \cdot N_n(Q_i).\] Therefore, determining $N_n(\mathcal C_{i})$ is equivalent to calculating $N_n(Q_i)$. Details about this fact can be found in \cite{A1,A2,OS}. The number of $\mathbb{F}_{q^n}$-rational points of Artin-Schreier curves has been studied by many authors. For instance, in \cite{Wolf1} Wolfmann determined the number of rational points of the curve defined over $\mathbb{F}_{q^k}$ given by the equation $y^q-y=ax^s+b,$ where $a\in\mathbb{F}_{q^k}^*$, $b\in\mathbb{F}_{q^k}$, $k$ is even and $s$ is a special integer. In \cite{Coulter}, Coulter determined the number of $\mathbb{F}_q$-rational points of the curve $y^{p^n}-y=ax^{p^{\alpha}+1}+L(x)$, where $a\in\mathbb{F}_q^*$, $L(x)$ is an $\mathbb{F}_{p^t}$-linearized polynomial and $t=\gcd(n,s)$ divides $d=\gcd(\alpha,s)$. In \cite{COS}, the authors determined the number of $\mathbb{F}_{q^m}$-rational points of the curve $y^{q^n}- y = \gamma x^{q^h+1} - \alpha$ in suitable cases, for positive integers $h, n, m$ with $n$ dividing $m$ and arbitrary $\gamma,\alpha\in\mathbb{F}_{q^m}$ ($\gamma\neq 0$). In \cite{BrocheroOliveira} we determined $N_n(\mathcal C_i)$ in the case where $\gcd(n,p)=1$. Using an alternative method, which consists in determining the number of solutions of the quadratic form $Q_i$ by means of appropriate permutation matrices, in this paper we compute the number $N_n(\mathcal C_i)$ without any condition on $\gcd(n,p)$ and also determine conditions for the Artin-Schreier curve $\mathcal C_i$ to be maximal or minimal with respect to the Hasse-Weil bound. The second goal of this paper is to determine the number of $\mathbb{F}_{q^n}$-rational points of the affine Artin-Schreier hypersurface $\mathcal H_r$ given by \begin{equation} \label{hr} \mathcal H_r: y^q- y = \sum_{j=1}^r a_jx_j(x_j^{q^{i_j}}-x_j)- \lambda, \end{equation} where $a_j\in\mathbb{F}_q^*$ and $0< i_j< n$ for $1\le j\le r$. We denote by $N_n(\mathcal H_r)$ the number of $\mathbb{F}_{q^n}$-rational points of the hypersurface $\mathcal H_r$. 
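Before proceeding, note that for small parameters the identity $N_n(\mathcal C_{i}) = q \cdot N_n(Q_i)$ stated above can be verified directly by brute force. The following Python sketch does this for $q=3$, $n=2$, $i=1$ and $\lambda=0$, realizing $\mathbb{F}_9$ as $\mathbb{F}_3[t]/(t^2+1)$; the choice of irreducible polynomial and of these small parameters is purely illustrative.
\begin{verbatim}
# Brute-force check of N_n(C_i) = q * N_n(Q_i) for q = 3, n = 2, i = 1, lambda = 0.
# F_9 is realized as F_3[t]/(t^2+1); an element a + b*t is stored as the pair (a, b).
p = q = 3
n, i, lam = 2, 1, (0, 0)

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def neg(u):
    return ((-u[0]) % p, (-u[1]) % p)

def mul(u, v):
    # (a + b*t)(c + d*t) = (a*c - b*d) + (a*d + b*c)*t, since t^2 = -1
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(u, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, u)
    return r

def trace(u):
    # Tr_{F_9/F_3}(u) = u + u^q always lies in the prime field
    t0, t1 = add(u, power(u, q))
    assert t1 == 0
    return t0

def rhs(x):
    # x*(x^{q^i} - x) - lambda
    return add(mul(x, add(power(x, q**i), neg(x))), neg(lam))

F9 = [(a, b) for a in range(p) for b in range(p)]

# N_n(Q_i): number of alpha in F_{q^n} with Tr(alpha*(alpha^{q^i} - alpha) - lambda) = 0
NQ = sum(1 for x in F9 if trace(rhs(x)) == 0)
# N_n(C_i): number of (x, y) in F_{q^n}^2 with y^q - y = x*(x^{q^i} - x) - lambda
NC = sum(1 for x in F9 for y in F9 if add(power(y, q), neg(y)) == rhs(x))
print("N_n(Q_i) =", NQ, " N_n(C_i) =", NC, " q*N_n(Q_i) =", q * NQ)
\end{verbatim}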
The well-known Weil bound assures us that \begin{equation}\label{cota} |N_n(\mathcal H_r)-q^{rn}| \le (q-1)\prod_{j=1}^r q^{i_j}q^{\frac{nr}2}= (q-1)q^{\frac{nr+2I}2}, \end{equation} where $I = \sum_{j=1}^r i_j$. The hypersurface $\mathcal H_r$ is called $\mathbb{F}_{q^n}$-maximal ($\mathbb{F}_{q^n}$-minimal) if $N_n(\mathcal H_r)$ attains the upper (lower) bound given in Equation \eqref{cota}. In this paper, we provide necessary and sufficient conditions to this bound be attained. In the literature, we do not have a complete description on the number of rational points for Artin-Schreier hypersurfaces, as well as when this type of hypersurface attains the bound given in \eqref{cota} or another bounds. From the results about the curve $\mathcal C_i$, we can explicitly determine the number of rational points $N_n(\mathcal H_r)$ and the conditions to obtain the maximality or minimality of this hypersurface. This paper is organized as follows. Section $2$ provides background material and preliminary results. In Section 3 we compute the number of $\mathbb{F}_{q^n}$- rational points of the Artin-Schreier curves $C_i$ and provide necessary and sufficient conditions on these curves to be maximal or minimal. In Section $4$ we determine the number of $\mathbb{F}_{q^n}$-rational points of the hypersurfaces $\mathcal H_r$ given by the equation \eqref{hr} and find explicit conditions on these hypersurfaces to be maximal or minimal. \section{Preliminary results} Throughout this paper, we denote by $\psi$ and $\tilde \psi $ the canonical additive characters of $\mathbb{F}_{q^n}$ and $\mathbb{F}_q$ respectively. The quadratic character of $\mathbb{F}_q$ is denoted by $\chi$. The trace function $\mathrm{Tr}$ of $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$ is given by \begin{align*} \mathrm{Tr}: \mathbb{F}_{q^n} \to & \, \mathbb{F}_q\\ x \mapsto & \, x+x^q+\cdots + x^{q^{n-1}}. \end{align*} We denote by $\tau$ the function $\tau = \begin{cases} 1& \text{ if } p \equiv 1 \pmod 4;\\ i & \text{ if } p \equiv 3 \pmod 4. \end{cases}$ In order to determine the number of rational points of $\mathcal C_{i}$, we associate to this curve the quadratic form $\mathrm{Tr}(x(x^{q^i}-x))$. Fixing a basis of $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$, we provide its associate matrix and the dimension of its radical. In order to do that, we recall the following definitions. \begin{definition} Let $Q:\mathbb{F}_{q^n} \to \mathbb{F}_q $ be a quadratic form. The symmetric bilinear form $B:\mathbb{F}_{q^n}\times \mathbb{F}_{q^n}\to \mathbb{F}_q$ associated to $Q$ is $$B(\alpha,\beta) = \frac{1}2\left(Q(\alpha+\beta) -Q(\alpha)-Q(\beta)\right).$$ The radical of the quadratic form $Q: \mathbb{F}_{q^n} \to \mathbb{F}_q$ is the $\mathbb{F}_q$-subspace \[\text{rad}(Q) = \{\alpha \in \mathbb{F}_{q^n} : B(\alpha, \beta) = 0 \text{ for all } \beta\in \mathbb{F}_{q^n} \}.\] Moreover $Q $ is a non-degenerate form if $\text{rad}(Q) = \{0\}$. \end{definition} Let $\mathcal B=\{v_1,\dots, v_n\}$ be a basis of $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$. The $n \times n$ matrix $A=(a_{ij})$ defined by $$a_{ij}= \begin{cases} Q(v_i),& \text{if $i=j$,}\\ \frac 12(Q(v_i+v_j)-Q(v_i)-Q(v_j)),& \text{if $i\ne j$}, \end{cases} $$ is the {\em associated matrix} of the quadratic form $Q$ over the basis $\mathcal B$. In particular, the dimension of $\text{rad}(Q)$ is equal to $n- rank (A).$ Let $Q_1: \mathbb{F}_q^m \to \mathbb{F}_q$ and $Q_2 : \mathbb{F}_q^n\to \mathbb{F}_q$ be quadratic forms where $m \ge n$. 
Let $U$ and $V$ be associated matrix of $Q_1$ and $Q_2$, respectively. We say that $Q_1$ is equivalent to $Q_2$ if there exists $M \in GL_m(\mathbb{F}_q)$ such that \[M^T UM =\left( \begin{array}{c|c} V & 0 \\ \hline 0 & 0 \end{array}\right)\in M_m(\mathbb{F}_q),\] where $ GL_m(\mathbb{F}_q)$ denotes the group of $m\times m$ invertible matrices over $\mathbb{F}_q$ and $M_m(\mathbb{F}_q)$ denotes the set of $m\times m $ matrices over $\mathbb{F}_q$. Furthermore, $Q_2 $ is a {\em reduced form} of $Q_1$ if $\text{rad}(Q_2)= \{0\}. $ The following theorem is a well-known result about the number of the solutions of quadratic forms over finite fields. \begin{theorem}[\cite{LiNi}, Theorems $6.26$ and $6.27$] \label{sol} Let $Q$ be a quadratic form over $\mathbb{F}_{q^n}$, where $q$ a power of an odd prime. Let $B_Q$ be the bilinear symmetric form associated to $Q$, $v = \dim( \text{rad}(B_Q))$ and $\tilde Q$ a reduced nondegenerate quadratic form equivalent to $Q$. Set $S_{\alpha} = |\{ x \in \mathbb{F}_{q^n}| Q(x) = \alpha\}|$ and let $\Delta$ be the determinant of the quadratic form $\tilde Q$. Then \begin{enumerate}[(i)] \item If $n+v$ is even \begin{equation} \label{casopar} S_{\alpha} = \left\{ \begin{array}{ll} q^{n-1} +Dq^{(n+v-2)/2}(q-1) \quad &\text{ if } \alpha = 0,\\ q^{n-1} - Dq^{(n+v-2)/2} \quad &\text{ if } \alpha \neq 0, \end{array}\right. \end{equation} where $D = \chi((-1)^{(n-v)/2}\Delta)$. \item If $n+v$ is odd \begin{align} \label{casoimpar} S_{\alpha} = \left\{ \begin{array}{ll} q^{n-1} \quad &\text{ if } \alpha = 0,\\ q^{n-1} + Dq^{(n+v-1)/2} \quad &\text{ if } \alpha \neq 0. \end{array}\right. \end{align} where $D = \chi((-1)^{(n-v-1)/2}\alpha\Delta)$. \end{enumerate} In particular $D \in \{-1,1\}.$ \end{theorem} The following lemma associates quadratic forms and characters sums and will be useful for our results. It can be obtained from Theorem \ref{sol} by a straightforward computation. \begin{lemma}\label{soma} Let $H$ be a $n\times n$ non null symmetric matrix over $\mathbb{F}_q$ and $l = \text{rank}(H)$. Then there exists $M \in \text{GL}_n(\mathbb{F}_q)$ such that $D = MHM^T$ is a diagonal matrix, i.e., $D = \text{diag}(a_1,a_2, \dots, a_l, 0,\dots,0)$ where $a_i \in \mathbb{F}_q^*$ for all $i=1,\dots,l$. For the quadratic form \[F: \mathbb{F}_q^n \to \mathbb{F}_q , \quad F(X) = XHX^T \quad (X = (x_1, \dots, x_n) \in \mathbb{F}_q^n), \] it follows that $$\sum_{X \in \mathbb{F}_{q^n}} \tilde\psi\big(F(X)\big)= (-1)^{l(s+1)}\tau^{ls}\chi(\delta)q^{n-l/2},$$ where $\delta = a_1 \cdots a_l$. \end{lemma} We end this section with some basics definitions and results on Gauss sums. \begin{definition} Let $\Psi$ be an additive character of $\mathbb{F}_q$ and $\Phi$ a multiplicative character of $\mathbb{F}_q^*$. The Gauss sum of $\Psi $ and $\Phi$ is defined by \[G(\Psi,\Phi) = \sum_{x \in \mathbb{F}_q^*} \Psi(x)\Phi(x).\] \end{definition} \begin{theorem}\cite[Theorem 5.15]{LiNi}\label{gaussdois} Let $\chi$ be the quadratic character of $\mathbb{F}_q^*$. Then $$G(\tilde\psi,\chi)=-(-\tau)^{s}\sqrt{q}.$$ \end{theorem} \section{The curve $y^q-y=x(x^{q^i}-x)-\lambda$} In \cite{BrocheroOliveira} we compute the number $N_n(\mathcal C_1)$ when $\gcd(n,p)=1$. In this section we employ a method that allows us to compute the number $N_n(\mathcal C_i)$ when $p$ divides $n$. 
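Before turning to that computation, Theorem \ref{sol} can be illustrated by a brute-force count of the solutions of a small explicit quadratic form. The Python sketch below does this for the nondegenerate diagonal form $Q(x_1,x_2,x_3)=x_1^2+x_2^2+x_3^2$ over $\mathbb{F}_7$, so that $n=3$, $v=0$ and $\Delta=1$; the choice of the prime and of the form is only illustrative.
\begin{verbatim}
from itertools import product

# Brute-force check of Theorem sol, case n + v odd, for Q(x) = x1^2 + x2^2 + x3^2 over F_p.
p, n, v, Delta = 7, 3, 0, 1

def chi(a):
    # quadratic character of F_p via Euler's criterion
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

# S_alpha = #{x in F_p^n : Q(x) = alpha}, counted by brute force.
counts = {alpha: 0 for alpha in range(p)}
for x in product(range(p), repeat=n):
    counts[sum(t * t for t in x) % p] += 1

# Predictions of Theorem sol, case (ii):
#   S_0 = p^{n-1},
#   S_alpha = p^{n-1} + chi((-1)^{(n-v-1)/2} * alpha * Delta) * p^{(n+v-1)/2} for alpha != 0.
for alpha in range(p):
    if alpha == 0:
        predicted = p ** (n - 1)
    else:
        D = chi((-1) ** ((n - v - 1) // 2) * alpha * Delta)
        predicted = p ** (n - 1) + D * p ** ((n + v - 1) // 2)
    print(alpha, counts[alpha], predicted)
\end{verbatim}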
Let $\mathcal B=\{\beta_1, \dots, \beta_n\}$ be a basis of $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$ and \begin{equation}\label{B} B =\begin{psmallmatrix} \beta_1 & \beta_1^q &\cdots &\beta_1^{q^{n-1}} \\ \beta_2 & \beta_2^q & \cdots & \beta_2^{q^{n-1}} \\ \vdots & \cdots & \ddots & \vdots \\ \beta_n & \beta_n^q & \cdots & \beta_n^{q^{n-1}} \end{psmallmatrix}. \end{equation} We use this matrix as a tool to associate $N_n(\mathcal C_{i})$ with the number of $\alpha \in \mathbb{F}_{q^n}$ such that \begin{equation} \label{Tra1} \mathrm{Tr} (\alpha(\alpha^{q^i} -\alpha)) = \mathrm{Tr}(\lambda). \end{equation} Recall that $N_n(Q_i)$ denotes the number of solutions of $\mathrm{Tr}(\alpha(\alpha^{q^i} -\alpha))=\mathrm{Tr}(\lambda)$ with $\alpha \in \mathbb{F}_{q^n}$. From Hilbert's Theorem $90$ we have that $$ N_n(\mathcal C_{i}) = q N_n(Q_i).$$ The following proposition associates $\mathrm{Tr}(x^{q^i+1}-x^2-\lambda)$ with a quadratic form. \begin{proposition}\label{pr2} Let $f(x) = x^{q^i+1}-x^2-\lambda$, where $\lambda \in \mathbb{F}_{q^n}$. The number of solutions of $\mathrm{Tr}(f(x))=0$ in $\mathbb{F}_{q^n}$ is equal to the number of solutions in $\mathbb{F}_q^n$ of the quadratic form \[(x_1 \,\, x_2 \,\, \cdots \,\, x_n) A \begin{pmatrix} x_1\\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \mathrm{Tr}(\lambda), \] where $A = (a_{j,l})$ is the $n\times n$ matrix defined by the relations $a_{j,l} = \frac{1}{2} \mathrm{Tr}(\beta_j^{q^i}\beta_l +\beta_l^{q^i} \beta_j -2\beta_j\beta_l).$ \end{proposition} \begin{proof} Let $x = \sum_{j=1}^{n} \beta_jx_j$. The equation $\mathrm{Tr}(f(x)) =0$ is equivalent to \begin{equation} \label{tra} \sum_{k=0}^{n-1} f(x)^{q^k} = 0. \end{equation} We have \begin{align*} \sum_{k=0}^{n-1} f(x)^{q^k} & = \sum_{k=0}^{n-1}\Bigl( \sum_{j=1}^{n} \beta_j x_j\Bigr)^{q^k} \Bigl( \sum_{l=1}^{n} (\beta_l^{q^i}-\beta_l) x_l \Bigr)^{q^k}-\mathrm{Tr}(\lambda)\\ &= \sum_{j,l=1}^{n} \Bigl( \sum_{k=0}^{n-1} \beta_j^{q^k}(\beta_l^{q^{i+k}}-\beta_l^{q^k})\Bigr) x_jx_l -\mathrm{Tr}(\lambda). \end{align*} Equation \eqref{tra} assures us that \(\displaystyle\sum_{j,l=1}^{n} \left( \sum_{k=0}^{n-1} \beta_j^{q^k}(\beta_l^{q^{i+k}}-\beta_l^{q^k})\right) x_jx_l = \mathrm{Tr}(\lambda).\) Symmetrizing this expression, we note that $$ \frac{1}{2}\displaystyle \sum_{k=0}^{n-1} \beta_j^{q^k}\beta_l^{q^k}(\beta_l^{q^{i+k}-q^k}+\beta_j^{q^{i+k}-q^k}-2)= \frac{1}2 \mathrm{Tr}\big(\beta_j^{q^i}\beta_l+\beta_l^{q^i}\beta_j-2\beta_j\beta_l \big)=a_{j,l}, $$ and the result follows. \end{proof} The matrix $A$ in Proposition \ref{pr2} can be rewritten as \[A = \frac{1}{2} (A_1+A_2-2A_3),\] where $A_1 = (\mathrm{Tr}(\beta_j^{q^i}\beta_l))_{j,l}$, $A_2 = (\mathrm{Tr}(\beta_j\beta_l^{q^i}))_{j,l}$ and $A_3 = (\mathrm{Tr}(\beta_j\beta_l))_{j,l}$. If $\mathcal P$ is the $n\times n$ cyclic permutation matrix defined by \[\mathcal P = \begin{psmallmatrix} 0&1&0& \cdots & 0\\ 0&0&1& \cdots &0 \\ \vdots& \cdots & \ddots & \cdots &\vdots\\ 0&0&0& \cdots &1\\ 1&0&0& \cdots & 0 \end{psmallmatrix},\] from the fact that $\mathcal{P}^{-1} = \mathcal P^T,$ it follows that $A_1 = B\left(\mathcal{P}^i\right)^{^{\ }_T}B^T,\, A_2 = B\left(\mathcal{P}^i\right) B^T$ and $ A_3 = BB^T$. 
Therefore $A = \frac{1}{2} BM_{n,i}B^T$ where \[M_{n,i} = \left(\mathcal{P}^i\right)^{^{\ }_T}-2 Id +\mathcal{P}^i, \] and the matrix $M_{n,i}=(m_{k,l})$ is given by \[m_{k,l} = \begin{cases} -2 & \text{ if }k=l;\\ 1 & \text{ if } k-l \equiv \pm i \pmod{n};\\ 0 & \text{ otherwise, } \end{cases}\] with the convention that we enumerate the rows and columns of the matrix from $0$ to $n-1$, so $0\le k,l \le n-1$ (when $n=2i$ the two congruences $k-l\equiv \pm i \pmod{n}$ coincide and the corresponding entries are equal to $2$). Since $B$ is invertible, in order to determine the number of solutions of the quadratic form defined by $A$ it is enough to determine the rank of $M_{n,i}$ and the determinant of a reduced matrix of $M_{n,i}$. In order to find these invariants of $M_{n,i}$, first we consider the case $i=1$. \subsection{The case $i=1$.} The following proposition determines the rank of $M_{n,1}$ and the determinant of one of its reduced matrices. \begin{proposition}\label{det} The rank of the $n\times n$ matrix $M_{n,1} = \mathcal P^T-2 Id +\mathcal P$ over $\mathbb{F}_{q}$ is given by \[\text{rank } M_{n,1} = \left\{ \begin{array}{ll} n-1 \quad \text{ if } \gcd(n,p) = 1;\\ n-2 \quad \text{ if } \gcd(n,p) = p. \end{array}\right.\] Let $M_{n,1}'$ denote the principal submatrix of $M_{n,1}$ constructed from the first rank$(M_{n,1})$ rows and columns, then $M_{n,1}'$ is a reduced matrix of $M_{n,1}$ and \[ \det M_{n,1}' = \left\{ \begin{array}{ll} (-1)^{n-1}n \quad &\text{ if } \gcd(n,p) = 1;\\ (-1)^{n-1} \quad& \text{ if } \gcd(n,p) = p. \end{array}\right. \] \end{proposition} \begin{proof} Let us denote by $M_n$ the matrix $$M_n= \begin{psmallmatrix} -2 & 1 & 0 & \ldots & 0 & 0 & 0\\ 1& -2&1& \ldots & 0 & 0& 0\\ 0&1&-2& \ldots & 0 & 0&0\\ \vdots & \vdots & \vdots &\ddots& \vdots&\ddots & \vdots \\ 0& 0& 0 & \ldots & -2&1 &0\\ 0& 0& 0 & \ldots & 1&-2 &1\\ 0 & 0 & 0 & \ldots &0&1&-2\\ \end{psmallmatrix}.$$ We note that $M_{n,1}=M_n+R_n$, where $R_n=(r_{i,j})$ with $r_{i,j} = \begin{cases} 1 & \text{ if } (i,j) \in\{(1,n),(n,1)\}\\ 0 & \text{ otherwise. } \end{cases}$ If we put $$\displaystyle U=\begin{psmallmatrix} 1 & 0 & 0 & \ldots & 0 & 0 \\ 0& 1&0& \ldots & 0 & 0\\ \vdots & \vdots & \vdots &\ddots& \vdots & \vdots \\ 0& 0& 0 & \ldots & 1&0 \\ 1 & 1 & 1 & \ldots &1&1\\ \end{psmallmatrix}$$ then $$UM_{n,1}U^T=\begin{psmallmatrix} -2 & 1 & 0 & \ldots & 0 & 0 & 0\\ 1& -2&1& \ldots & 0 & 0& 0\\ 0&1&-2& \ldots & 0 & 0&0\\ \vdots & \ddots & \vdots &\ddots&\vdots&\ddots & \vdots \\ 0& 0& 0 & \ldots & -2&1 &0\\ 0& 0& 0 & \ldots & 1&-2 &0\\ 0 & 0 & 0 & \ldots&0&0&0 \\ \end{psmallmatrix} = \left( \begin{array}{c|c} M_{n-1} & 0 \\ \hline 0 & 0 \end{array}\right).$$ We claim that $L_{n-1} = \det M_{n-1} = (-1)^{(n-1)}n$ for $n>1$. In order to prove this, we expand the determinant of $M_{n-1} $ by the first row and obtain the recursive relation \begin{equation}\label{mh} L_{n-1} = -2 L_{n-2}- L_{n-3} \text{ for all } n\ge 4. \end{equation} This implies that the sequence $\{L_n\}_{n\ge2}$ satisfies the recurrence relation \eqref{mh} with associated characteristic polynomial given by $z^2+2z+1=(z+1)^2$, which has $-1$ as a double root. Therefore $L_{n-1} = A(-1)^n + B(-1)^n n$, where $A,B \in \mathbb{F}_q$. Since $ L_2 = 3 $ and $ L_3 = -4 $, we conclude that $A=0$ and $B=-1$ and consequently $ L_{n-1} = (-1)^{(n-1)}n$, as we wanted. Furthermore, if $\gcd(n,p)=1$, it follows that $L_{n-1} = (-1)^{(n-1)} n \neq 0$ and then the rank of $M_{n-1}$ is $n-1$. This implies that the rank of $M_{n,1}$ is also $n-1$. 
In the case $\gcd(n,p)=p$, defining $V$ as $$\displaystyle V=\begin{psmallmatrix} 1 & 0 & 0 & \ldots & 0& 0 & 0 & 0 \\ 0& 1&0& \ldots & 0& 0 & 0& 0\\ \vdots & \vdots & \vdots &\ddots& \vdots& \vdots & \vdots & \vdots \\ 0& 0& 0 & \ldots &1& 0&0&0 \\ 0& 0& 0 & \ldots &0& 1&0&0 \\ 1 & 2 & 3 & \ldots&n-3 & \frac {n-4}2&-1&0\\ 0 & 0 & 0 & \ldots &0&0&0&1\\ \end{psmallmatrix},$$ we observe that $V$ is invertible and $$VUM_{n,1}U^TV^T =\left( \begin{array}{c|c} M_{n-2} & 0\quad 0 \\ \hline \begin{array}{c}0 \cdots 0\\ 0 \cdots 0\end{array}& \begin{array}{cc} n&0\\ 0&0 \end{array} \end{array}\right).$$ Therefore $L_{n-2} = (-1)^{n-2}(n-1) \neq 0$ and the rank of $M_{n,1}$ is $n-2$, from where the result follows. \end{proof} The following definition will be useful to determine $N_n(C_i)$. \begin{definition} For each $\alpha \in \mathbb{F}_q$ we define \[\varepsilon_{\alpha} = \begin{cases} q-1 & \text{ if } \alpha =0; \\ -1 & \text{ otherwise.} \\ \end{cases} \] \end{definition} By Theorem \ref{sol} and Proposition \ref{det} we have the following theorem. \begin{theorem}\label{m1} Let $\lambda \in \mathbb{F}_{q^n}$ and $n$ a positive integer. The number $N_n(\mathcal C_{1})$ of affine rational points in $\mathbb{F}_{q^n}^2$ of the curve determined by the equation $y^q-y=x^{q+1}-x^2-\lambda$ is \[N_n(\mathcal C_1)= \left\{ \begin{array}{llll} q^{n}+ \chi(2(-1)^{\frac{n}2}n\mathrm{Tr}(\lambda))q^{(n+2)/2} \quad & \text{ if } \gcd(n,p) = 1 \text{ and } n \text{ is even;}\\ q^{n}+ \varepsilon_{\mathrm{Tr}(\lambda)} \chi((-1)^{(n-1)/2}n)q^{(n+1)/2} \quad &\text{ if } \gcd(n,p) = 1 \text{ and } n \text{ is odd;}\\ q^{n}+\varepsilon_{\mathrm{Tr}(\lambda)} \chi((-1)^{n/2}) q^{(n+2)/2} \quad &\text{ if } \gcd(n,p) = p \text{ and } n\text{ is even;}\\ q^{n} +\chi(2(-1)^{\frac{n-3}2}\mathrm{Tr}(\lambda))q^{(n+3)/2}\quad &\text{ if } \gcd(n,p) = p \text{ and } n \text{ is odd.} \end{array}\right.\] \end{theorem} This theorem allows us to determine when $\mathcal C_{1}$ is minimal or maximal with respect the Hasse-Weil bound, as we show in the following corollary. \begin{theorem} Consider the curve $\mathcal C_1$ given by \[\mathcal C_1 : y^q -y = x^{q+1}-x^2-\lambda.\] Then $\mathcal C_1$ is minimal if and only if one of the following holds \begin{itemize} \item $\mathrm{Tr}(\lambda) =0$, $2p$ divides $n$ and $q\equiv 1 \pmod 4$; \item $\mathrm{Tr}(\lambda) =0$, $4p$ divides $n$ and $q\equiv 3 \pmod 4$. \end{itemize} Moreover, $\mathcal C_1$ is maximal if and only if $\mathrm{Tr}(\lambda) =0,$ $2p$ divides $n$ and $q \equiv 3 \pmod 4.$ \end{theorem} \begin{proof} The result follows from Theorem \ref{m1} and the fact that the genus of $\mathcal C_1$ is $g= \frac{q(q-1)}2$. \end{proof} \subsection{The curve $y^q-y=x(x^{q^i}-x)-\lambda$ with $i\ge1$.} \begin{proposition}\label{det2} Let $i,n$ be integers such that $0<i< n$. Set $d= \gcd(i,n)$ and $ l = \frac{n}d$. The rank of the $n\times n$ matrix $M_{n,i}$ is $n-d$ if $n=2i$ and, otherwise we have that \[\text{rank } M_{n,i} = \left\{ \begin{array}{ll} n-d\quad &\text{ if } \gcd(n,p) = 1;\\ n-2d \quad &\text{ if } \gcd(n,p) = p. 
\end{array}\right.\] In addition, the matrices \begin{equation}\label{diagonal} \tilde M_{n,i}=\left( \begin{array}{c|c|c|c} M_{l,1} & 0&0 & 0 \\ \hline 0& M_{l,1}&0&0\\ \hline \vdots & \cdots & \ddots& \vdots\\ \hline 0&0&0& M_{l,1}\\ \end{array}\right) \quad \text{ and } \quad \tilde M'_{n,i}=\left( \begin{array}{c|c|c|c} M'_{l,1} & 0&0 & 0 \\ \hline 0& M'_{l,1}&0&0\\ \hline \vdots & \cdots & \ddots& \vdots\\ \hline 0&0&0& M'_{l,1}\\ \end{array}\right) \end{equation} are an equivalent matrix and a reduced matrix of $M_{n,i}$, respectively, where $\tilde M'_{l,1}$ is as the matrix given in Proposition \ref{det}. The determinant of the matrix $\tilde M_{n,i}$ is $(-1)^{i}2^i$ if $n=2i$ and, otherwise we have that \[ \det \tilde M'_{n,i} = \left\{ \begin{array}{ll} (-1)^{n-d}l^d \quad &\text{ if } \gcd(n,p) = 1;\\ (-1)^{n-2d}(l-1)^d \quad& \text{ if } \gcd(n,p) = p. \end{array}\right. \] \end{proposition} \begin{proof} For convenience, we enumerate the rows and columns of the matrix $M_{n,i}$ from $0$ to $n-1$. Suppose that $n$ is even and $i = \frac{n}2$. In this case, the matrix $M_{n,i}$ is given by \[a_{k,l} = \begin{cases} -2 &\text{ if } k=l,\\ 2 &\text{ if } k-l \equiv 0 \pmod {\frac{n}2},\\ 0 &\text{ otherwise. } \end{cases}\] Let us denote $D_{n/2}=2Id_{n/2}$, where $Id_{n/2}$ is the $\frac{n}2\times \frac{n}2$ identity matrix. We have that \[M_{n,n/2} = \left(\begin{array}{c|c} -D_{n/2}&D_{n/2}\\ \hline D_{n/2}& -D_{n/2} \\ \end{array}\right) \text{ that is equivalent to } \left(\begin{array}{c|c} D_{n/2}&0\\ \hline 0& 0 \\ \end{array}\right).\] Therefore \[\text{ rank } M_{n,\frac{n}2} = \frac{n}2=i \quad \text{ and } \quad \text{ det } \tilde M'_{n, \frac{n}2} = (-2)^{\frac{n}2}=(-2)^i\neq 0.\] This proves the case $n=2i$. For the other cases, firstly we show that it is enough to consider the case where $i=d$. After that, we will obtain a block diagonal matrix composed by $d$ matrices of the form $M_{l,1}$, where $n=ld.$ We observe that any permutation $\rho:\mathbb Z_n \to \mathbb Z_n$ defines a natural action over $\mathbb{F}_q^n$, given by the following map $$\begin{matrix} \rho: & \mathbb{F}_q^n &\to& \mathbb{F}_q^n\\ & (v_0, \dots, v_{n-1})& \mapsto& (v_{\rho(0)}, \dots, v_{\rho(n-1)}). \end{matrix}$$ This action is associated to an invertible matrix $M_{\rho}$ such that \[M_{\rho} \left(\begin{matrix} v_0\\\vdots\\v_{n-1} \end{matrix}\right) =\left(\begin{matrix} v_{\rho(0)}\\\vdots\\v_{\rho(n-1)}. \end{matrix}\right) \] Conversely, for any permutation matrix $R$, there exists a permutation $\rho': \mathbb Z_n \to \mathbb Z_n$ such that $M_{\rho'} = R.$ We observe that $\mathcal P^i$ determines the permutation $$\begin{matrix}\pi_i:& \mathbb Z_n & \to & \mathbb Z_n \\ &z & \mapsto & z+i. \end{matrix}$$ Let us consider the map \begin{align*} \sigma: \mathbb Z_n &\to \mathbb Z_n\\ a & \mapsto ai \end{align*} where $a \in \mathbb Z_n$ is an element of $\mathbb Z_n$ satisfying that $\gcd(a,n)=1$. Since $a$ and $n$ are relatively prime, $\sigma$ is a permutation. In addition, $\sigma$ induces a matrix $M_{\sigma}$ and the matrix $M_{\sigma} \mathcal P^i M_{\sigma}^{-1}$ determines a permutation of $\mathbb Z_n$ given by \begin{align*} \sigma \circ \pi_i \circ \sigma^{-1} (z) & = \sigma(\pi_i(\sigma^{-1}(z)))\\ & = \sigma(\pi_i(a^{-1}z))\\ &=\sigma(a^{-1}z+i)\\ & = z+ai. \end{align*} We know that the congruence $ai \equiv u\pmod n$ has a solution if $d$ divides $u$. 
Then, since $d = \gcd(i,n)$, there exists $a \in\mathbb Z_n$ with $\gcd(a,n)=1$ such that $\sigma \circ \pi_i \circ \sigma^{-1}(z) = z+\gcd(i,n)$. This shows that, without loss of generality, we can replace $i$ by $d.$ For each $z\in [0,n-1]$, from the Euclidean Division Algorithm, there exist unique integers $r,s$ with $0\le s \le l-1$ and $0\le r \le d-1$ such that $z= sd+r$. Let us consider the map \begin{equation} \begin{matrix} \varphi:& \mathbb Z_n &\to & \mathbb Z_n\\ &sd+r &\mapsto& s+lr. \end{matrix} \end{equation} \textbf{Claim:} The map $\varphi$ is a permutation of the elements of $\mathbb Z_n$. Let us suppose that there exist distinct elements $z_1,z_2 \in \mathbb Z_n$ such that $\varphi(z_1) = \varphi(z_2)$. By the Euclidean Division, there exist $0\le s_1,s_2\le l-1$ and $0\le r_1,r_2\le d-1$ with $z_1=s_1d+r_1$ and $z_2 = s_2d+r_2$. Then \[\varphi(s_1d+r_1) = \varphi(s_2d+r_2) \Leftrightarrow s_1+lr_1=s_2+lr_2 \Leftrightarrow s_1-s_2=l(r_2-r_1).\] Since $0\le s_1,s_2 \le l-1$, the latter implies that $s_1=s_2$ and $r_1=r_2$, that is, $z_1=z_2$. But this contradicts the fact that $z_1\neq z_2 $. Therefore $\varphi$ is a permutation. We will use $\varphi$ to permute the rows and columns of $\mathcal P^d - 2Id + \left(\mathcal{P}^d\right)^{^{\ }_T}$ in order to obtain the block diagonal matrix $\tilde M_{n,i}$ with $d$ blocks. Recall that the matrix $\left(\mathcal{P}^d\right)^{^{\ }_T} -2 Id +\mathcal P^d$ is given by $$a_{k,j} = \begin{cases} -2 & \text{ if } k=j;\\ 1 & \text{ if } k-j\equiv \pm d \pmod{n};\\ 0 & \text{ otherwise. } \\ \end{cases}$$ We obtain that $\varphi \circ \pi_d \circ \varphi^{-1}$ defines a permutation $\theta: \mathbb Z_n\to \mathbb Z_n$ given by \begin{align*} \theta(z) = \varphi \circ \pi_d \circ \varphi^{-1}(z) & = \varphi \circ \pi_d (\varphi^{-1}(s+rl))\\ & = \varphi \circ \pi_d (sd+r)\\ & = \varphi ((s+1)d+r)\\ & = (s+1) + rl, \end{align*} where $z=s+rl$ with $0\le s \le l-1$ and $0\le r \le d-1$, and $s+1$ is taken modulo $l$. We claim that $$M_{\varphi}\left(\bigl(\mathcal{P}^d\bigr)^{^{\ }_T} -2 Id +\mathcal P^d\right)M_{\varphi}^{-1}= \tilde M_{n,d}.$$ In order to prove this, we show that conjugation by the permutation matrix $M_{\varphi}$ takes the non-null entries of $M_{n,d}$ to the non-null entries of $\tilde M_{n,d}$. In the case $k=j$, writing $k=s+rl$ with $0\le s\le l-1$ and $0\le r\le d-1$, the entry $a_{k,k}$ is sent to the position \[ (\theta(k),\theta(k)) = ((s+1)+rl,(s+1)+rl),\] so the new matrix still has $-2$ on the whole diagonal. If $a_{k,k+d} =1$ and $k=s+rl$ with $0\le s\le l-1$ and $0\le r\le d-1$, the entry $a_{k,k+d}$ is sent to the position \[ (\theta(k),\theta(k+d)) = ((s+1)+rl,(s+2)+rl),\] therefore, inside each $l\times l$ block, the new matrix has the entry $1$ exactly at the positions whose indices differ by $1$ modulo $l$. The same procedure applies to the entries $a_{k+d,k}=1$, using the fact that $a_{k+d,k}$ is the transpose entry of $a_{k,k+d}$. The other entries are null and their images are also null. Therefore, we obtain the matrix in Equation \eqref{diagonal}. Using Proposition \ref{det} and the fact that $M_{n,i}$ is equivalent to a block diagonal matrix with $d$ blocks equal to the matrix $M_{l,1}$, we determine the rank of $M_{n,i}$ and the determinant of the reduced matrix $\tilde M'_{n,i}$ of $M_{n,i}$. 
Consequently \[\text{ rank } M_{n,i} = \left\{\begin{array}{ll} (l-1)d = n-d\quad \text{ if } \gcd(n,p) = 1;\\ (l-2)d =n-2d \quad \text{ if } \gcd(n,p) = p, \end{array}\right.\] and \[ \det \tilde M_{n,i} = \left\{ \begin{array}{ll} \left((-1)^{l-1}l\right)^d = (-1)^{n-d}l^d\quad &\text{ if } \gcd(n,p) = 1;\\ \left((-1)^{l-1}\left(l-1\right)\right)^d = (-1)^{n-2d}(l-1)^d \quad& \text{ if } \gcd(n,p) = p. \end{array}\right. \] \end{proof} From Proposition \ref{pr2} we have that the matrix associated to the quadratic form $\mathrm{Tr}(cx(x^{q^i}-x))$ is \[A= \frac{c}2B(\mathcal{P}^T-2Id+\mathcal{P})B^T\] where $B$ is given in Equation \eqref{B}. Proposition \ref{det2} implies the following result. \begin{corollary}\label{b1} Let $c \in \mathbb{F}_q^*$ and let $i$ be an integer such that $0<i<n$. Set $d=\gcd(i,n)$ and $l = \frac{n}d$. The rank of the $n\times n$ matrix $A= \frac{c}2B(\left(\mathcal{P}^i\right)^{^{\ }_T} -2 Id +\mathcal P^i)B^T$ is given by \[\text{rank } A = \begin{cases} n-d & \text{ if } \gcd(n,p)=1;\\ n-2d & \text{ if } \gcd(n,p)=p. \end{cases}\] Let $A'$ be a reduced matrix of $A$. Then \[\chi(\det(A')) = \begin{cases} \chi((-2c)^{n-d}l^d) & \text{ if } \gcd(n,p)=1;\\ \chi((-2c)^{n-d}(l-1)^d) & \text{ if } \gcd(n,p)=p. \end{cases}\] \end{corollary} By Theorem \ref{sol} and Propositions \ref{pr2} and \ref{det2} we have the following theorem. \begin{theorem}\label{mm} Let $n,i$ be integers such that $0<i<n$ and put $d= \gcd(i,n)$ and $ l = \frac{n}d$. If $n=2i$, the number $N_n(\mathcal C_{i})$ of affine rational points in $\mathbb{F}_{q^n}^2$ of the curve determined by the equation $y^q-y=x^{q^i+1}-x^2-\lambda$ is \[N_n(\mathcal C_{i})= \left\{ \begin{array}{llll} q^{n}+ \chi((-1)^{(i+1)/2}\mathrm{Tr}(\lambda)) q^{(3i+1)/2}\quad & \text{ if } i \text{ is odd;}\\ q^{n}+ \varepsilon_{\mathrm{Tr}(\lambda)} \chi((-1)^{i/2}) q^{3i/2} \quad &\text{ if } i \text{ is even.}\\ \end{array}\right.\] If $n\neq 2i$, the number of affine rational points of $\mathcal C_i$ is \[N_n(\mathcal C_{i})= \left\{ \begin{array}{llll} q^{n}+ \chi(2(-1)^{(n-d+1)/2}\mathrm{Tr}(\lambda) l^d) q^{(n+d+1)/2} \quad & \text{ if } \gcd(n,p) = 1 \text{ and } n+d \text{ is odd;}\\ q^{n}+ \varepsilon_{\mathrm{Tr}(\lambda)} \chi((-1)^{(n-d)/2}l^d) q^{(n+d)/2} \quad &\text{ if } \gcd(n,p) = 1 \text{ and $n+d$ is even;}\\ q^{n}+ \chi(2(-1)^{(n+1)/2}\mathrm{Tr}(\lambda)(l-1)^d) q^{(n+2d+1)/2} \quad &\text{ if } \gcd(n,p) = p \text{ and } n \text{ is odd;}\\ q^{n}+ \varepsilon_{\mathrm{Tr}(\lambda)} \chi((-1)^{n/2}(l-1)^d)q^{(n+2d)/2} \quad &\text{ if } \gcd(n,p) = p \text{ and } n\text{ is even.}\\ \end{array}\right.\] \end{theorem} \begin{remark} The curve $\mathcal C_{i}$ has genus $g = \frac{(q-1)q^i}2$. The Hasse-Weil bound of $\mathcal C_{i}$ is given by \[|N_n(\mathcal C_{i})-q^n|\le (q-1)q^{\frac{n+2i}2}. \] \end{remark} Using Theorem \ref{mm}, we can determine the conditions when the curve $\mathcal C_i$ is maximal (or minimal) with respect the Hasse-Weil bound. \begin{theorem} Let $n,i$ be integers such that $0<i<n$, set $d= \gcd(i,n)$ and $ l = \frac{n}d$. The curve $$\mathcal C_{i}: y^q-y = x(x^{q^i}-x)-\lambda$$ is maximal if and only if $\mathrm{Tr}(\lambda)=0$, $2p$ divides $n$, $i$ divides $n$ and $(-1)^{n/2}(l-1)^d$ is a square in $\mathbb{F}_q$. The curve $\mathcal C_i$ is minimal if and only if $\mathrm{Tr}(\lambda)=0$, $2p$ divides $n$, $i$ divides $n$ and $(-1)^{n/2}(l-1)^d$ is not a square in $\mathbb{F}_q$. 
\end{theorem} \section{The number of affine rational points of the hypersurface $y^q-y = \sum_{j=1}^r a_jx_j(x_j^{q^{i_j}}-x_j)-\lambda$} Let us denote by $\mathcal H_r$ the hypersurface $$\mathcal H_r : y^q-y = \sum_{j=1}^r a_jx_j(x_j^{q^{i_j}}-x_j)-\lambda,$$ where $a_j \in \mathbb{F}_q^*$ and $0<i_j<n$ for $j\in\{1,\dots,r\}$. We know from Theorem $5.4$ in \cite{LiNi} that $$\sum_{c\in\mathbb{F}_{q^n}} \psi(uc) =\begin{cases} 0 & \text{ if } u \neq 0;\\ q^n & \text{ if } u = 0. \end{cases}$$ We can use this fact to compute the number $N_{n}(\mathcal H_r)$, \begin{align}\label{AA1} \nonumber q^n N_{n}(\mathcal H_r) & = \sum_{c \in \mathbb{F}_{q^n}} \sum_{x_1 \in \mathbb{F}_{q^n}} \cdots \sum_{x_r\in \mathbb{F}_{q^n}} \sum_{y\in \mathbb{F}_{q^n}} \psi\left(c\left( \sum_{j=1}^r a_jx_j(x_j^{q^{i_j}}-x_j)-y^q+y-\lambda\right)\right)\\ \nonumber & =q^{(r+1)n} + \sum_{c \in \mathbb{F}_{q^n}^*} \sum_{x_1 \in \mathbb{F}_{q^n}} \cdots \sum_{x_r\in \mathbb{F}_{q^n}} \psi\left(c\left( \sum_{j=1}^r a_jx_j(x_j^{q^{i_j}}-x_j)-\lambda\right)\right) \sum_{y\in \mathbb{F}_{q^n}} \psi\left(c\left( -y^q+y\right)\right) \\ \nonumber & = q^{(r+1)n} + \sum_{c \in \mathbb{F}_{q^n}^*} \psi(-c\lambda) \prod_{j=1}^r \sum_{x_j \in \mathbb{F}_{q^n}}\psi\left(c\left( a_jx_j(x_j^{q^{i_j}}-x_j)\right)\right) \sum_{y\in \mathbb{F}_{q^n}} \psi\left(c\left( -y^q+y\right)\right) \\ \nonumber & = q^{(r+1)n} + \sum_{c \in \mathbb{F}_{q^n}^*} \psi(-c\lambda) \prod_{j=1}^r \sum_{x_j \in \mathbb{F}_{q^n}}\psi\left(c\left( a_j x_j(x_j^{q^{i_j}}-x_j)\right)\right) \sum_{y\in \mathbb{F}_{q^n}} \psi\left(y\left( -c^{q^{n-1}}+c\right)\right).\\ \end{align} We observe that $$\displaystyle \sum_{y\in \mathbb{F}_{q^n}} \psi\left(y\left( -c^{q^{n-1}}+c\right)\right) = \begin{cases} q^n& \text{ if } c^{q^{n-1}}-c =0;\\ 0 & \text{otherwise}.\\ \end{cases}$$ Since, for $c \in \mathbb{F}_{q^n}$, $c^{q^{n-1}}-c =0$ if and only if $c \in \mathbb{F}_{q^{n-1}}\cap\mathbb{F}_{q^n}=\mathbb{F}_q$, we conclude that only the terms with $c \in \mathbb{F}_q$ contribute to the sum in \eqref{AA1}. Therefore, we have that \begin{align} \label{AA2} N_{n}(\mathcal H_r) = q^{rn} + \sum_{c \in \mathbb{F}_q^*} \tilde\psi(-c\mathrm{Tr}(\lambda))\prod_{j=1}^r\sum_{x_j \in \mathbb{F}_{q^n}} \psi\left(ca_j\left( x_j(x_j^{q^{i_j}}-x_j)\right)\right) . \end{align} The following theorem gives explicit formulas for $N_n(\mathcal H_r)$. \begin{theorem}\label{th1} Let $\mathcal H_r: y^q-y = \sum_{j=1}^r a_jx_j(x_j^{q^{i_j}}-x_j)-\lambda$ with $\lambda \in \mathbb{F}_{q^n}$, $a_j \in \mathbb{F}_q^*$ and $0<i_j<n$. We denote $d_j= \gcd(i_j,n)$, $D=\sum_{j=1}^r d_j$, $l_j=\frac{n}{d_j}$, $L_1 = l_1^{d_1}\cdots l_r^{d_r}$, $L_2 = (l_1-1)^{d_1}\cdots (l_r-1)^{d_r}$, $A_1= a_1\cdots a_r$ and $A_2= a_1^{d_1}\cdots a_r^{d_r}$. 
The number $N_{n}(\mathcal H_r)$ of affine rational points of $\mathcal H_r$ in $\mathbb{F}_{q^n}^{r+1}$ is \[N_{n}(\mathcal H_r) = \begin{cases} q^{rn} +\tau^{s(nr-D)}\varepsilon_{\mathrm{Tr}(\lambda)}\chi(A_1^{nr}A_2^{-1}L_1)q^{\frac{nr+D}2} & \text{ if } \gcd(n,p)=1, nr-D \text{ is even};\\ q^{rn} + \tau^{s(rn-D+1)}\chi(2\mathrm{Tr}(\lambda)A_1^{nr}A_2^{-1}L_1)q^{\frac{nr+D+1}2} & \text{ if } \gcd(n,p)=1, nr-D \text{ is odd};\\ q^{rn} + \tau^{s(rn-2D)}\varepsilon_{\mathrm{Tr}(\lambda)}\chi(L_2)q^{\frac{nr+2D}2}& \text{ if } \gcd(n,p)=p, nr \text{ is even};\\ q^{rn} +\displaystyle \tau^{s(rn-2D+1)}\chi(2\mathrm{Tr}(\lambda)a_1L_2)q^{\frac{nr+2D+1}2} & \text{ if } \gcd(n,p)=p, nr \text{ is odd}.\\ \end{cases}\] \end{theorem} \begin{proof} From Equation \eqref{AA2} we have that \[N_{n}(\mathcal H_r) = q^{rn} + \sum_{c \in \mathbb{F}_q^*} \tilde\psi(-c\mathrm{Tr}(\lambda))\prod_{j=1}^r\sum_{x_j \in \mathbb{F}_{q^n}} \psi\left(ca_j\left( x_j(x_j^{q^{i_j}}-x_j)\right)\right).\] By Lemma \ref{soma} and Proposition \ref{b1}, we obtain that $N_{n}(\mathcal H_r)= q^{rn} +N_{\mathcal H_r} $ where \begin{align*}\label{e1} \nonumber N_{\mathcal H_r} & ={\footnotesize \begin{cases} \displaystyle\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda))\prod_{j=1}^r \left((-1)^{(s+1)(n-d_j)}\tau^{s(n-d_j)} \chi\left((-2ca_j)^{(n-d_j)}l_j^{d_j}\right)q^{\frac{n+d_j}2}\right)&\text{ if } \gcd(n,p)=1;\\ \displaystyle\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda))\prod_{j=1}^r \left((-1)^{(s+1)(n-2d_j)}\tau^{s(n-2d_j)} \chi\left((-2ca_j)^{(n-2d_j)}(l_j-1)^{d_j}\right)q^{\frac{n+2d_j}2}\right) & \text{ if } \gcd(n,p)=p.\\ \end{cases}}\\ & ={\small \begin{cases} \displaystyle(-1)^{(s+1)(rn-D)}\tau^{s(nr-D)}\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda)) \chi\left((-2c)^{nr-D}A_1^{nr}A_2^{-1}L_1\right)q^{\frac{nr+D}2}&\text{ if } \gcd(n,p)=1;\\ \displaystyle(-1)^{rn(s+1)}\tau^{s(rn-2D)}\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda)) \chi\left((-2c)^{nr}A_1^{nr}A_2^{-2}L_2\right)q^{\frac{nr+2D}2} & \text{ if } \gcd(n,p)=p.\\ \end{cases}}\\ \end{align*} We split the proof into the following cases. \begin{enumerate}[1)] \item $\gcd(n,p)=1$ and $nr-D$ is even. \begin{align*} N_{\mathcal H_r}& = \displaystyle\tau^{s(nr-D)}q^{\frac{(nr+D)}2}\chi(A_1^{nr}A_2^{-1}L_1)\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda)) \\ & = \tau^{s(nr-D)}\varepsilon_{\mathrm{Tr}(\lambda)}\chi(A_1^{nr}A_2^{-1}L_1)q^{\frac{nr+D}2}. \end{align*} \item $\gcd(n,p)=1$ and $nr-D$ is odd. \begin{align*} N_{\mathcal H_r}&= \displaystyle(-1)^{(s+1)}\tau^{s(rn-D)}\chi(A_1^{nr}A_2^{-1}L_1)q^{\frac{nr+D}2}\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda)) \chi\left(-2c\right)\\ & = \begin{cases}\displaystyle (-1)^{(s+1)}\tau^{s(rn-D)}\chi(A_1^{nr}A_2^{-1}L_1)q^{\frac{nr+D}2}\sum_{c \in \mathbb{F}_q^*}\chi\left(-2c\right)& \text{ if } \mathrm{Tr}(\lambda) = 0\\ \displaystyle(-1)^{(s+1)}\tau^{s(rn-D)}q^{\frac{nr+D}2}\chi(2\mathrm{Tr}(\lambda)A_1^{nr}A_2^{-1}L_1)\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda)) \chi\left(-c\mathrm{Tr}(\lambda)\right) & \text{ if } \mathrm{Tr}(\lambda) \neq 0 \end{cases}\\ & = \begin{cases}\displaystyle 0& \text{ if } \mathrm{Tr}(\lambda) = 0\\ \displaystyle(-1)^{(s+1)}\tau^{s(rn-D)}q^{\frac{nr+D}2}\chi(2\mathrm{Tr}(\lambda)A_1^{nr}A_2^{-1}L_1)G(\tilde\psi, \chi) & \text{ if } \mathrm{Tr}(\lambda) \neq 0 \end{cases}\\ & = \tau^{s(rn-D+1)}\chi(2\mathrm{Tr}(\lambda)A_1^{nr}A_2^{-1}L_1)q^{\frac{nr+D+1}2}. 
\end{align*} \item $\gcd(n,p)=p$ and $nr$ is even. \begin{align*} N_{\mathcal H_r} &= \displaystyle\tau^{s(rn-2D)}\chi(A_1^{nr}L_2)q^{\frac{nr+2D}2}\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda)) \\ & = \tau^{s(rn-2D)}\varepsilon_{\mathrm{Tr}(\lambda)}\chi(L_2)q^{\frac{nr+2D}2}. \end{align*} \item $\gcd(n,p)=p$ and $nr$ is odd. \begin{align*} N_{\mathcal H_r} &= \displaystyle(-1)^{s+1}\tau^{s(rn-2D)}\chi(A_1L_2)q^{\frac{(nr+2D)}2}\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda)) \chi\left(-2c\right)\\ & = \begin{cases} \displaystyle (-1)^{s+1}\tau^{s(rn-2D)}\chi(A_1L_2)q^{\frac{nr+2D}2}\sum_{c \in \mathbb{F}_q^*}\chi\left(-2c\right) & \text{ if } \mathrm{Tr}(\lambda) =0 \\ \displaystyle (-1)^{s+1}\tau^{s(rn-2D)}\chi(2\mathrm{Tr}(\lambda)A_1L_2)q^{\frac{nr+2D}2}\sum_{c \in \mathbb{F}_q^*}\tilde\psi(-c\mathrm{Tr}(\lambda))\chi\left(-c\mathrm{Tr}(\lambda)\right) & \text{ if } \mathrm{Tr}(\lambda) \neq0 \end{cases}\\ &= \begin{cases} 0 & \text{ if } \mathrm{Tr}(\lambda) =0 \\ \displaystyle (-1)^{s+1}\tau^{s(rn-2D)}q^{\frac{nr+2D}2}\chi(2\mathrm{Tr}(\lambda)A_1L_2)G(\tilde \psi, \chi) & \text{ if } \mathrm{Tr}(\lambda) \neq0 \end{cases}\\ & = \displaystyle \tau^{s(rn-2D+1)}\chi(2\mathrm{Tr}(\lambda)A_1L_2)q^{\frac{nr+2D+1}2}.\\ \end{align*} \end{enumerate} \end{proof} The well-known Weil bound tells us that \[\left|N_n(\mathcal H_r) - q^{nr}\right|\le (q-1) \prod_{j=1}^r q^{i_j} q^{nr/2} = (q-1)q^{\frac{rn+2I}2},\] where $I = \sum_{j=1}^r i_j. $ By Theorem \ref{th1}, this bound can be attained if and only if we are in the case where $\gcd(n,p)=p$, $nr$ is even and $\mathrm{Tr}(\lambda)=0$. Using this fact, we obtain the following result, which tells us when the hypersurface $\mathcal H_r$ is maximal or minimal. \begin{theorem}\label{HW} Let $\mathcal H_r: y^q-y = \displaystyle\sum_{j=1}^r a_jx_j(x_j^{q^{i_j}}-x_j)-\lambda$ with $\lambda \in \mathbb{F}_{q^n}$, $a_j \in \mathbb{F}_q^*$ and $0<i_j<n$. Let $d_j = \gcd(n,i_j)$, $D=\sum_{j=1}^r d_j$, $l_j=\frac{n}{d_j}$ for $1\le j\le r$ and $L_2= (l_1-1)^{d_1} \cdots (l_r-1)^{d_r}$. The hypersurface $\mathcal H_r$ attains the upper Weil bound if and only if one of the following holds \begin{itemize} \item $\mathrm{Tr}(\lambda) =0$, $\gcd(n,p)=p$, $nr$ is even, $d_j=i_j$ for all $1\le j\le r$, $(nr-2D)s \equiv 0 \pmod 4$ and $\chi(L_2) =1;$ \item $\mathrm{Tr}(\lambda) =0$, $\gcd(n,p)=p$, $nr$ is even, $d_j=i_j$ for all $1\le j\le r$, $(nr-2D)s \equiv 2 \pmod 4$ and $\chi(L_2) =-1.$ \end{itemize} The hypersurface $\mathcal H_r$ attains the lower bound if and only if one of the following holds \begin{itemize} \item $\mathrm{Tr}(\lambda) =0$, $\gcd(n,p)=p$, $nr$ is even, $d_j=i_j$ for all $1\le j\le r$, $(nr-2D)s \equiv 2 \pmod 4$ and $\chi(L_2) =1;$ \item $\mathrm{Tr}(\lambda) =0$, $\gcd(n,p)=p$, $nr$ is even, $d_j=i_j$ for all $1\le j\le r$, $(nr-2D)s \equiv 0 \pmod 4$ and $\chi(L_2) =-1.$ \end{itemize} \end{theorem} \begin{example} Let $q=5^2$, $n=60$. We consider the Artin-Schreier hypersurface given by \[\mathcal H: y^q-y = x_1(x_1^{q^3}-x_1)+x_2(x_2^{q^4}-x_2)+x_3(x_3^{q^6}-x_3).\] Following the notation of Theorem \ref{HW}, we have that $i_1=d_1=3, i_2=d_2=4, i_3=d_3=6$, $l_1=20, l_2=15, l_3=10$ and $L_2=19^3\cdot 14^4\cdot 9^6$. Moreover, $\chi(L_2) = \chi(19)^3\chi(14)^4\chi(9)^6 = \chi(19) = 1$ and $(nr-2D)s \equiv (180-26)\cdot 2 \equiv 0 \pmod 4 $. It follows from Theorem \ref{HW} that $\mathcal H$ is $\mathbb F_{q^{60}}$-maximal. \end{example}
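The arithmetic of this example is easily rechecked with a few lines of Python; the sketch below recomputes $d_j$, $l_j$, $D$, $L_2$, the value $(nr-2D)s \bmod 4$ and the character $\chi(L_2)$, the latter via Euler's criterion, using the fact that the image of $L_2$ in $\mathbb{F}_{25}$ lies in the prime field $\mathbb{F}_5$.
\begin{verbatim}
from math import gcd

# Data of the example: q = 5^2, n = 60, r = 3, exponents i_j = 3, 4, 6.
p, s, n, r = 5, 2, 60, 3
q = p ** s
i_list = [3, 4, 6]

d_list = [gcd(i, n) for i in i_list]   # d_j = gcd(i_j, n)
l_list = [n // d for d in d_list]      # l_j = n / d_j
D = sum(d_list)

# L_2 = prod_j (l_j - 1)^{d_j}
L2 = 1
for l, d in zip(l_list, d_list):
    L2 *= (l - 1) ** d

# chi is the quadratic character of F_q = F_25; since L_2 mod 5 lies in the
# prime field, chi(L_2) = (L_2)^{(q-1)/2} can be evaluated modulo 5.
chi_L2 = 1 if pow(L2 % p, (q - 1) // 2, p) == 1 else -1
exponent_mod4 = ((n * r - 2 * D) * s) % 4

print("d_j =", d_list, " l_j =", l_list, " D =", D)
print("chi(L_2) =", chi_L2, " (nr-2D)s mod 4 =", exponent_mod4)
\end{verbatim}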
\section{Introduction \label{sec:intro}} Many of iron-based superconductors, which include pnictides and chalcogenides, have high critical temperatures $T_c>50$~K allowing to refer to them as high-$T_c$ superconductors. The basic element is always a square lattice of Fe, though in some cases with orthorhombic distortions, surrounded by As or P in pnictides and by Se, Te, or S in chalcogenides~\cite{y_kamihara_08,SadovskiiReview2008,IzyumovReview2008,MazinReview,PaglioneReview,JohnstonReview,WenReview,StewartReview}. Weakly doped pnictides are antiferromagnetic metals. Though there is no ultimately accepted microscopic mechanism of superconductivity, the most promising candidate is the spin fluctuation mechanism of Cooper pairing~\cite{HirschfeldKorshunov2011,ChubukovReview2012,Korshunov2014eng,Hirschfeld2016}. It is tightly connected with the topology of the Fermi surface comprised of several sheets, namely, with the existence of hole and electron Fermi pockets for a wide range of doping concentrations $x$. Fermi surface, as well as states near the Fermi level, are formed by the iron $d$-orbitals and consists of two hole pockets near the $\Gamma=(0,0)$ point and two electron pockets centered at $(\pi,0)$ and $(0,\pi)$ points of the two-dimensional Brillouin zone corresponding to one Fe per unit cell. Proximity of the wave vector related to the scattering between particles at electron and hole sheets to the nesting wave vector $\mathbf{Q}$ results in strong antiferromagnetic fluctuations with the maximum of the spin susceptibility near $\mathbf{Q}$ that equal to $(\pi,0)$ or $(0,\pi)$. There is a qualitative and sometimes even quantitative agreement between the Fermi surface calculated within the density functional theory (DFT) and the one measured via quantum oscillations and by the angle-resolved photoemission spectroscopy (ARPES)~\cite{Kordyuk}. Absence of the insulating state in the undoped case points toward the moderate nature of the electronic correlations in such a multiorbital system~\cite{Anisimov2008eng,Kroll2008}. Iron magnetic moment differs from one family of Fe-based materials to another with the smallest value of $\sim 0.3\mu_B$ in LaFeAsO~\cite{KlaussKorshunov2008} to $\sim 3.3\mu_B$ in K$_2$Fe$_4$Se$_5$~\cite{Gretarsson2011}. This issue was discussed as originating from the effect of correlations~\cite{Haule2009,Hansmann2010,Toschi2012}. The concept of Hund's metal was put forward~\cite{Haule2009} to emphasize the role of Hund's exchange $J$ in the physics of Fe-based materials. In particular, the irreducible vertex corrections beyond the random phase approximation (RPA) for the magnetic susceptibility were calculated~\cite{Hansmann2010,Toschi2012} and compared to the neutron scattering experiments~\cite{Liu2012}. However, the RPA approach also gives reasonable results when compared to various experiments in the normal and superconducting states~\cite{HirschfeldKorshunov2011,Korshunov2014eng} thus providing the natural starting point for studying the low-energy physics of itinerant electrons in iron-based superconductors. Different mechanisms of superconductivity result in specific symmetries and structures of the gap in iron-based materials~\cite{HirschfeldKorshunov2011}. 
In the spin fluctuation theory of pairing within the RPA and in the functional renormalization group (fRG) approach, the leading superconducting instability in a wide range of dopings is characterized by the extended $s$-wave gap having the opposite signs on hole and electron Fermi surface pockets~\cite{HirschfeldKorshunov2011,Korshunov2014eng,Mazin2008,Graser2009,Kuroki2008,Chubukov2008,MaitiKorshunovPRL2011,MaitiKorshunovPRB2011,Classen2017}. The corresponding gap structure belongs to the $A_{1g}$ representation of the tetragonal symmetry group and is called $s_{\pm}$ state. On the other hand, orbital fluctuations results in the $s_{++}$ state with the gap having the same sign on all Fermi surface sheets~\cite{Kontani}. Therefore, by determining the gap structure, one can deduce the microscopic mechanism of superconductivity. In this respect, inelastic neutron scattering plays a special role since the imaginary part of the dynamical spin susceptibility $\chi(\mathbf{q},\omega)$ measured there carries information about the gap structure in the superconducting state. That is, the sign-changing $s_\pm$ gap leads to the formation of the spin resonance peak at or near the commensurate antiferromagnetic wave vector $\mathbf{q} = \mathbf{Q}$ connecting Fermi surface sheets with different signs of gaps on them~\cite{KorshunovEreminResonance2008,Maier2008,Maier2009}. In simple models, the peak appears at frequencies $\omega_R < 2\Delta$, where $\Delta$ is the gap magnitude. At present, the well defined peak was observed in neutron scattering on all iron-based superconductors for $T < T_c$ near the wave vector $\mathbf{Q}$, see, e.g., Refs.~\onlinecite{ChristiansonBKFA,ChristiansonBFCA,QiuFeSeTe,Park,Babkevich,Inosov2010,ArgyriouKorshunov2010,LumsdenReview,Castellan2011,Dai2015}. However, by introducing an additional damping of quasiparticles and by adjusting parameters, one can attain the appearance of a peak in the spin susceptibility in the $s_{++}$ state at frequencies above $2\Delta$~\cite{Onari2010,Onari2011}. Therefore, to determine whether the observed peak is the true spin resonance one has to explore the effect of different details of the superconducting state on it and deduce some criterion. Previously, the characteristic feature of the spin resonance in the case of unequal gaps on hole and electron pockets were established~\cite{KorshunovPRB2016,KorshunovJMMM2017} -- in the presence of larger and smaller gaps, $\Delta_L$ and $\Delta_S$, the criterion is the condition for the spin resonance frequency, $\omega_R \leq \Delta_L+\Delta_S$. Comparison of data from the neutron scattering on the peak frequency and data from various techniques on gap magnitudes leads to the conclusion that in most cases the observed peak fulfills the condition and, therefore, indicates the $s_\pm$ gap structure~\cite{KorshunovPRB2016,KorshunovJMMM2017}. However, the role of the gap anisotropy in the formation of the spin resonance peak is still an open question. For example, results of ARPES~\cite{Shimojima2011} and Andreev spectroscopy~\cite{Abdel-Hafiez2014,Kuzmicheva2016,Kuzmicheva2017} demonstrate anisotropy of the larger gap as large as 30\% in pnictides. On the qualitative level, the question was discussed in Ref.~\onlinecite{Maiti2011}; however, within the very simple four-band model and without a particular recipe for comparison to the experimental data. 
Here I consider the effect of the gap anisotropy on the dynamical spin susceptibility and the spin resonance within the realistic five-orbital model from Ref.~\onlinecite{Graser2009}. Two approaches to the gap structure are used. One is phenomenological, with a model gap function parameterized to reflect the general form of the experimentally observed and theoretically obtained gaps. Due to some freedom in the choice of parameters and the ability to vary them, this approach allows us to analyze the basic effects of the gap anisotropy on the spin resonance peak. The other approach employs a self-consistent calculation of the gap function within the spin fluctuation theory of pairing. The spin resonance peak is then calculated and compared to the results obtained with the model gap. The obtained results lead to an adjustment of the condition $\omega_R \leq \Delta_L+\Delta_S$ that allows one to compare experimental data on the peak frequency and the gaps and thereby answer the question of whether the observed peak is the true spin resonance originating from the $s_\pm$ state. The paper is organized as follows. In Section~\ref{sec:model}, the model and the approaches are presented. Results for the spin susceptibility for the model gap function are given in Section~\ref{sec:resultsmodelgap} and the magnetic response for the gap calculated within the spin fluctuation theory of pairing is shown in Section~\ref{sec:resultscalcgap}. Concluding remarks and a brief analysis of the experimental data are given in Section~\ref{sec:conclusion}. \section{Model and approximations \label{sec:model}} I use here a Hamiltonian $H = H_0 + H_{int}$ consisting of the tight-binding model $H_0$~\cite{Graser2009} and an on-site Coulomb (Hubbard) multiorbital interaction $H_{int}$. Hamiltonian $H_0$ is based on the DFT band structure for LaFeAsO~\cite{Cao2008} and it includes five iron $d$-orbitals ($d_{xz}$, $d_{yz}$, $d_{xy}$, $d_{x^2-y^2}$, $d_{3z^2-r^2}$), \begin{equation} H_0 = \sum_{\mathbf{k} \sigma} \sum_{l l'} \left[ t_{l l'}(\mathbf{k}) + \epsilon_{l} \delta_{l l'} \right] d_{\mathbf{k} l \sigma}^\dagger d_{\mathbf{k} l' \sigma}, \label{eq:H0} \end{equation} where $d_{\mathbf{k} l \sigma}^\dagger$ is the creation operator for an electron with momentum $\mathbf{k}$, spin $\sigma$, and orbital index $l$. Hopping matrix elements $t_{l l'}(\mathbf{k})$ and one-electron energies $\epsilon_{l}$ are given in Ref.~\onlinecite{Graser2009}. The Fermi surface consists of two hole pockets, $\alpha_1$ and $\alpha_2$, near the $\Gamma$ point and two electron pockets, $\beta_1$ and $\beta_2$, centered at the $(\pi,0)$ and $(0,\pi)$ points of the one-Fe Brillouin zone. Here I consider the case of small electron doping with $x=0.05$. The interaction part $H_{int}$ has the following form~\cite{Graser2009,Kuroki2008,Castallani1978,Oles1983}, \begin{eqnarray} H_{int} &=& U \sum_{f, m} n_{f m \uparrow} n_{f m \downarrow} + U' \sum_{f, m < l} n_{f l} n_{f m} \nonumber\\ && + J \sum_{f, m < l} \sum_{\sigma,\sigma'} d_{f l \sigma}^\dag d_{f m \sigma'}^\dag d_{f l \sigma'} d_{f m \sigma} \nonumber\\ && + J' \sum_{f, m \neq l} d_{f l \uparrow}^\dag d_{f l \downarrow}^\dag d_{f m \downarrow} d_{f m \uparrow}, \label{eq:Hint} \end{eqnarray} where $n_{f m} = n_{f m \uparrow} + n_{f m \downarrow}$, $n_{f m \sigma} = d_{f m \sigma}^\dag d_{f m \sigma}$ is the particle number operator at site $f$, $U$ and $U'$ are intra- and interorbital Hubbard repulsions, $J$ is the Hund's exchange, and $J'$ is the pair hopping.
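For orientation, a minimal numerical sketch of the diagonalization of Eq.~(\ref{eq:H0}) is given below. The hopping function is a placeholder, not the parametrization of Ref.~\onlinecite{Graser2009} (whose matrix elements are tabulated there); the sketch only illustrates how the band energies $\varepsilon_{\mathbf{k}\mu}$ and the orbital-to-band matrix elements used throughout the text are obtained.
\begin{verbatim}
# Sketch: diagonalize the 5x5 H0(k) to get band energies eps_{k,mu} and the
# orbital-to-band matrix elements phi^mu_{k,m}.  The hopping function below
# is a placeholder, NOT the parametrization of Graser et al.; plug in the
# tabulated t_{ll'}(k) and eps_l to reproduce the actual Fermi surface.
import numpy as np

NORB = 5

def h0_matrix(k, t_of_k, eps):
    """H0(k) = t_{ll'}(k) + eps_l delta_{ll'}, made explicitly Hermitian."""
    h = t_of_k(k) + np.diag(eps)
    return 0.5 * (h + h.conj().T)

def solve_h0(k, t_of_k, eps):
    """Return (eps_{k,mu}, phi[m, mu]); columns of phi are band eigenvectors."""
    return np.linalg.eigh(h0_matrix(k, t_of_k, eps))

# --- placeholder hoppings, only to make the example runnable ---
def t_dummy(k):
    kx, ky = k
    t = np.zeros((NORB, NORB), dtype=complex)
    t[0, 0] = t[1, 1] = -0.2 * (np.cos(kx) + np.cos(ky))
    t[0, 1] = 0.05 * np.sin(kx) * np.sin(ky)
    t[1, 0] = np.conj(t[0, 1])
    return t

e_k, phi_k = solve_h0((0.3, 0.7), t_dummy, np.zeros(NORB))
print(e_k)
\end{verbatim}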
To limit the number of free parameters in the theory, let us assume the spin-rotational invariance (SRI) that adds two constraints, $U'=U-2J$ and $J'=J$. There are still two parameters to be determined, $U$ and $J$. Their values crucially depend on the orbital basis of the model. For example, constrained DFT gives $U=3.5$~eV and $J=0.8$~eV for the full set of Fe-$d$ and As-$p$ orbitals (the $p$-$d$ model for LaFeAsO), while for the model that includes only $d$-orbitals, it gives $U=0.75$~eV and $J=0.51$~eV~\cite{Anisimov2008eng}. Another approach, constrained RPA (cRPA), results in $U=2.69$~eV and $J=0.79$~eV~\cite{Aichhorn2009,Aichhorn2011} or in $U=1.97$~eV and $J=0.77$~eV~\cite{Roekeghem2016} for the full set of $d$- and $p$-orbitals with the Coulomb interaction at the $p$-orbitals excluded (the $d$-$dp$ model). The same cRPA for the $d$-only orbital set gives $U=2.2-3.3$~eV and $J=0.3-0.6$~eV~\cite{Miyake2008,Nakamura2008}. Such a dependence on the number of orbitals is due to the spatial extent of the Wannier functions that are used to construct the matrix elements of the Coulomb interactions. As a general trend, a limited number of orbitals results in smaller values of the Hubbard parameters. For the five-orbital model studied here, large values of $U$, greater than $\approx 1.5$~eV, result in a divergence of the spin susceptibility, i.e., a magnetic instability. Since the undoped LaFeAsO exhibits stripe antiferromagnetic order at low temperatures, the choice of parameters that provides closeness to the magnetic instability is reasonable. Therefore, in what follows, I set $U=1.4$~eV. As for the Hund's exchange, it is taken to be $J=0.1-0.2$~eV. The $J/U$ ratio for the lower boundary, $J/U \approx 0.07$, is comparable to the widely discussed Hund's metal proposal for Fe-based materials with $J/U=0.35/4 \approx 0.08$~\cite{Haule2009}, while for the upper boundary, $J/U \approx 0.14$ is comparable with the cRPA ratio for the $d$-only orbital set, $J/U=0.43/2.92 \approx 0.14$~\cite{Miyake2008}. Matrix elements of the transverse component of the spin susceptibility are equal to~\cite{Korshunov2014eng} \begin{eqnarray} \label{eq:chipmmu} \chi^{ll',mm'}_{(0)+-}(\mathbf{q},\Omega) &=& -T \sum_{\mathbf{p},\omega_n, \mu,\nu} \left[ \varphi^{\mu}_{\mathbf{p} m} {\varphi^*}^{\mu}_{\mathbf{p} l} G_{\mu \uparrow}(\mathbf{p},\omega_n) \right.\nonumber\\ &\times& G_{\nu \downarrow}(\mathbf{p}+\mathbf{q},\Omega+\omega_n) \varphi^{\nu}_{\mathbf{p}+\mathbf{q} l'} {\varphi^*}^{\nu}_{\mathbf{p}+\mathbf{q} m'} \nonumber\\ &+& {\varphi^*}^{\mu}_{\mathbf{p} l} {\varphi^*}^{\mu}_{-\mathbf{p} m'} F^\dag_{\mu \uparrow}(\mathbf{p},\omega_n) \nonumber\\ &\times& \left. F_{\nu \downarrow}(\mathbf{p}+\mathbf{q},\Omega-\omega_n) \varphi^{\nu}_{\mathbf{p}+\mathbf{q} l'} \varphi^{\nu}_{-\mathbf{p}-\mathbf{q} m} \right], \end{eqnarray} where $\Omega$ and $\omega_n$ are bosonic and fermionic Matsubara frequencies, $G$ and $F$ are normal and anomalous (Gor'kov) Green's functions, $\mu$ and $\nu$ are band indices, and $\varphi^{\mu}_{\mathbf{k} m}$ are matrix elements of the orbital-to-band transformation, so that $d_{\mathbf{k} m \sigma} = \sum\limits_{\mu} \varphi^{\mu}_{\mathbf{k} m} b_{\mathbf{k} \mu \sigma}$. Here, $b_{\mathbf{k} \mu \sigma}$ is the electron annihilation operator in the band representation, in which the Green's function is diagonal with respect to band indices, $G_{\mu \sigma}(\mathbf{k},\omega_n) = 1 / \left( {\mathrm{i}}\omega_n - \varepsilon_{\mathbf{k}\mu\sigma} \right)$.
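To illustrate how Eq.~(\ref{eq:chipmmu}) is evaluated numerically, a schematic normal-state sketch of the $GG$ term is given below. It is an illustration only: the Matsubara sum is performed analytically into a Lindhard-type factor, the anomalous $FF^\dag$ term needed in the superconducting state is omitted, the overall sign and prefactor conventions may differ from those used in this work, and the band solver \texttt{solve\_h0} is assumed to be the one sketched above.
\begin{verbatim}
# Schematic normal-state evaluation of the GG part of Eq. (eq:chipmmu):
# after the analytic Matsubara sum, each pair of bands (mu, nu) contributes
# a Lindhard-type factor weighted by the orbital matrix elements phi.
# The anomalous FF^dag term is omitted; signs/prefactors up to convention.
import numpy as np

NORB = 5

def fermi(e, T=0.01):
    return 1.0 / (np.exp(e / T) + 1.0)

def chi0_orbital(q, omega, kpoints, solve_h0, T=0.01, eta=1e-2):
    """Return chi0[l, l', m, m'] at wave vector q and real frequency omega.

    solve_h0(k) must return (eps[mu], phi[m, mu]) for the 5-orbital model."""
    chi0 = np.zeros((NORB, NORB, NORB, NORB), dtype=complex)
    for k in kpoints:
        e1, v1 = solve_h0(k)
        e2, v2 = solve_h0((k[0] + q[0], k[1] + q[1]))
        for mu in range(NORB):
            for nu in range(NORB):
                lind = (fermi(e1[mu], T) - fermi(e2[nu], T)) / \
                       (omega + 1j * eta + e1[mu] - e2[nu])
                # weight phi*^mu_{k l} phi^nu_{k+q l'} phi^mu_{k m} phi*^nu_{k+q m'}
                chi0 -= lind * np.einsum('l,p,m,r->lpmr',
                                         v1[:, mu].conj(), v2[:, nu],
                                         v1[:, mu], v2[:, nu].conj())
    return chi0 / len(kpoints)
\end{verbatim}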
Here I use two approaches to the superconducting state. The first one is phenomenological -- the gap function is chosen to simulate the results of calculations and experimental findings, both of which are generally similar. Parameters of the gap function are treated as free, so one can model various situations, including ones corresponding to different sets of interaction parameters. In this case, the gap function belonging to the $A_{1g}$ representation of the tetragonal symmetry group and entering the anomalous Green's function is defined as \begin{equation} \label{eq:delta} \Delta_{\mathbf{k} \mu} = \Delta_{\mu}^{0} + \Delta_{\mu}^{1} \left(\cos k_x + \cos k_y \right)/2. \end{equation} Here, the parameter $\Delta_{\mu}^{1}$ controls changes of the gap amplitude in the band $\mu$, while $\Delta_{\mu}^{0}$ controls the gap magnitude for zero amplitude (we refer to it later as the `zero-amplitude gap magnitude'). The simplest possible $s_{++}$ state takes place for $\Delta_{\mu}^{1} = 0$ and $\Delta_{\mu}^{0} = \Delta_{\mu'}^{0}$, and the simplest state of the $s_\pm$ type can be obtained by taking $\Delta_{\alpha_{1,2}}^{0} = -\Delta_{\beta_{1,2}}^{0}$. The specific feature of the FS topology in pnictides is that, due to the shift of $k_x$ or $k_y$ by $\pi$ with respect to the $(0,0)$ point, the gap on the electron pockets has a local (i.e., with respect to the pocket's center) $d$-wave symmetry~\cite{MaitiKorshunovPRB2011}. The other approach to the superconducting state is to perform the spin fluctuation calculation of the gap function. I follow the procedure from Refs.~\onlinecite{Graser2009,Kemper2010,Korshunov2014eng}: calculate the spin and charge susceptibilities in the RPA and combine them into the Cooper vertex entering the linearized gap equation. The latter is solved to obtain the eigenfunction $g(\mathbf{k})$, which is the gap function, and the eigenvalue $\lambda$; the leading instability corresponds to the largest $\lambda$. Below, all gap parameters are given in units of $\Delta_0$, taken to be 5~meV in our calculations. Since all gaps have $A_{1g}$ symmetry and should not change upon the $\pi/2$ rotation, the gaps on the electron pockets $\beta_1$ and $\beta_2$ should be the same. Thus, $\Delta_{\beta_1}^{0,1} = \Delta_{\beta_2}^{0,1}$, which we denote simply as $\Delta_\beta^{0,1}$. To calculate the spin response, the RPA is used with the local Coulomb interaction $H_{int}$. The sum of the corresponding ladder diagrams, which include the electron-hole bubble in matrix form, $\hat\chi_{(0)+-}(\mathbf{q},\omega)$, results in the following expression for the matrix of the RPA spin susceptibility~\cite{Korshunov2014eng}: \begin{eqnarray} \hat\chi_{+-}(\mathbf{q},\Omega) = \left[\hat{I} - \hat{U}_s \hat\chi_{(0)+-}(\mathbf{q},\Omega)\right]^{-1} \hat\chi_{(0)+-}(\mathbf{q},\Omega), \label{eq:chi_s_sol} \end{eqnarray} where $\hat{I}$ and $\hat{U}_s$ are the unit and interaction matrices, respectively, in the orbital basis. The explicit form of the latter is given in Ref.~\onlinecite{Graser2009}. In the next section, I present results for the physical susceptibility $\chi_{+-}(\mathbf{q},\Omega) = \frac{1}{2} \sum_{l,m} \chi^{ll,mm}_{+-}(\mathbf{q},\Omega)$, analytically continued to the real frequency axis $\omega$ (${\mathrm{i}}\Omega \to \omega + {\mathrm{i}}\delta$, $\delta \to 0+$). The mechanism of the spin resonance peak formation in the superconducting state with the sign-changing gap is quite transparent~\cite{KorshunovEreminResonance2008}.
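Before describing this mechanism in detail in the next paragraph, it may help to see it in a minimal one-band toy calculation (a sketch only: the step height of $\mathrm{Im}\chi_0$, the interaction $U$, the cutoff, and the small damping are arbitrary illustrative numbers, not the five-orbital matrices used in this work).
\begin{verbatim}
# Toy one-band illustration of the spin-resonance mechanism: a step-like
# Im chi0 at omega_c produces, via Kramers-Kronig, a log-singular Re chi0,
# and the RPA denominator 1 - U*chi0 then vanishes at some omega_R < omega_c.
import numpy as np

omega = np.linspace(0.0, 3.0, 3000)
omega_c = 1.0                              # effective gap (arb. units)
im_chi0 = 0.5 * (omega > omega_c) + 1e-3   # jump + tiny damping for visibility

# crude principal-value Kramers-Kronig transform on the grid
re_chi0 = np.zeros_like(omega)
dw = omega[1] - omega[0]
for i, w in enumerate(omega):
    integrand = 2.0 * omega * im_chi0 / (omega**2 - w**2 + 1e-12)
    integrand[i] = 0.0
    re_chi0[i] = np.sum(integrand) * dw / np.pi

U = 1.2
chi0 = re_chi0 + 1j * im_chi0
chi_rpa = chi0 / (1.0 - U * chi0)
omega_R = omega[np.argmax(chi_rpa.imag)]
print("resonance at omega_R =", omega_R, "< omega_c =", omega_c)
\end{verbatim}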
Since $\chi_{(0)+-}(\mathbf{q},\omega)$ describes particle-hole excitations, and since all excitations at frequencies less than about twice the gap magnitude are absent in the superconducting state, $\mathrm{Im}\chi_{(0)+-}(\mathbf{q},\omega)$ becomes finite only above this frequency value. The anomalous Green's functions entering Eq.~(\ref{eq:chipmmu}) give rise to the anomalous coherence factors, $\left[1 - \frac{\Delta_\mathbf{k} \Delta_{\mathbf{k}+\mathbf{q}}}{E_{\mathbf{k}} E_{\mathbf{k}+\mathbf{q}}}\right]$. If $\Delta_\mathbf{k}$ and $\Delta_{\mathbf{k}+\mathbf{q}}$ have the same sign, as is the case for the $s_{++}$ state, the coherence factors vanish, leading to a gradual increase of the spin susceptibility with increasing frequency for $\omega > \omega_c$ with $\omega_c = \min \left(|\Delta_\mathbf{k}| + |\Delta_{\mathbf{k}+\mathbf{q}}| \right)$. For the $s_\pm$ state, the vector $\mathbf{q} = \mathbf{Q}$ connects Fermi surfaces with different signs of the gap, $\mathrm{sgn} {\Delta_\mathbf{k}} \neq \mathrm{sgn} {\Delta_{\mathbf{k}+\mathbf{q}}}$, resulting in finite coherence factors that lead to a jump in the imaginary part of $\chi_{(0)+-}$ at $\omega_c$. For a certain set of interaction parameters entering the matrix $\hat{U}_s$, this results in a divergence of $\mathrm{Im}\chi_{+-}(\mathbf{Q},\omega)$~(\ref{eq:chi_s_sol}). The corresponding peak at a frequency $\omega_R \leq \omega_c$ is the true spin resonance. Since the gaps entering the expression for $\omega_c$ correspond to bands separated by the wave vector $\mathbf{q}$, we can call $\omega_c$ the indirect or effective gap. This is the reason why, in the case of unequal gaps in different bands, $\Delta_L$ and $\Delta_S$, connected by the wave vector $\mathbf{Q}$, we have $\omega_c = \Delta_L + \Delta_S$~\cite{KorshunovPRB2016}. \section{Results for the model gap function \label{sec:resultsmodelgap}} In this Section, the Coulomb parameters are chosen to be $U=1.4$~eV and $J=0.15$~eV; the rest are constrained by the SRI. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{fig_FeAs5orb_A1g_Delta_angle_spm_sExt} \includegraphics[width=1.0\columnwidth]{fig_FeAs5orb_s_pm_sExt_ImChi_set10_FSbig} \caption{(Color online) Top: Angular dependencies of gaps on hole ($\alpha_{1,2}$) and electron ($\beta_{1,2}$) Fermi surface pockets for the $s_\pm$ and the $s_{ext}$ states. Bottom: Frequency dependencies of imaginary parts of the corresponding spin susceptibilities at the wave vector $\mathbf{Q}$, as well as the normal state (non-SC) spin response. Inset: Magnitudes of gaps on the Fermi surface for the $s_{ext}$ state and the wave vector $\mathbf{Q}$.} \label{fig:spmsext} \end{center} \end{figure} Gap angular dependencies on electron and hole sheets and the corresponding frequency dependencies of the imaginary parts of the spin susceptibilities at the wave vector $\mathbf{Q}$ for two extended $s$-wave symmetries, namely, the $s_\pm$ and $s_{ext}$ states~\cite{Graser2009}, are shown in Fig.~\ref{fig:spmsext}. The former is the widely discussed, fully gapped $s_\pm$ state with a small gap angular dependence on each Fermi surface pocket, $\Delta_{\mathbf{k} \mu} = \Delta_{\mu} \cos(k_x) \cos(k_y)$ with $\Delta_{\alpha_1,\beta}=3$ and $\Delta_{\alpha_2}=1$. In this state, the spin resonance peak is formed at frequencies lower than $\Delta_{\beta}+\Delta_{\alpha_2}$~\cite{KorshunovPRB2016}, see the lower panel of Fig.~\ref{fig:spmsext}.
The $s_{ext}$ state corresponds to such a strong anisotropy on the electron pockets that the gap becomes sign-changing there and develops a nodal structure. The latter is clearly seen in the inset of Fig.~\ref{fig:spmsext}, where the gap magnitude on the Fermi surface is shown. Parameters in Eq.~(\ref{eq:delta}) were set to be: $\Delta_{\mu}^{0}=0$, $\Delta_{\alpha_1}^{1}=3$, $\Delta_{\alpha_2}^{1}=1$, $\Delta_{\beta}^{1}=30$. The spin resonance is absent in this case since only near-nodal states with a tiny gap on the electron pockets $\beta_{1,2}$ contribute to the susceptibility at the wave vector $\mathbf{Q}$, as seen in the inset in Fig.~\ref{fig:spmsext}. Therefore, the discontinuous jump in $\mathrm{Im}\chi_{(0)+-}$ required for the formation of the spin resonance appears at vanishingly small frequencies, and the RPA spin response gets only a small boost compared to the normal state, see Fig.~\ref{fig:spmsext}. This is similar to the case of the $d_{x^2-y^2}$ gap symmetry, where the spin resonance is absent at the commensurate wave vector~\cite{KorshunovEreminResonance2008}. Of course, the spin resonance may appear at an incommensurate wave vector different from $\mathbf{Q}$, see the discussion in Ref.~\onlinecite{Maier2009}. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{fig_FeAs5orb_A1g_Delta_angle_set1} \includegraphics[width=1.0\columnwidth]{fig_FeAs5orb_A1g_set1_ImChi_set10} \caption{Top: Angular dependencies of gaps for several $A_{1g}$-type states with the fixed gap anisotropy on hole pockets and varying zero-amplitude gap magnitude on electron pockets. Bottom: Corresponding frequency dependencies of $\mathrm{Im}\chi_{+-}(\mathbf{Q},\omega)$, as well as the spin response in the normal (non-SC), $s_\pm$, and $s_{ext}$ states.} \label{fig:A1gSet1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{fig_FeAs5orb_A1g_Delta_angle_set2} \includegraphics[width=1.0\columnwidth]{fig_FeAs5orb_A1g_set2_ImChi_set10} \caption{Top: Angular dependencies of gaps for several $A_{1g}$-type states with the fixed zero-amplitude gap magnitude and varying gap anisotropy on electron pockets. The case when $\Delta_{\beta}^{0}$ is shifted while $\Delta_{\beta}^{1}$ kept minimal is also shown. Bottom: Frequency dependence of $\mathrm{Im}\chi_{+-}(\mathbf{Q},\omega)$ for these states, as well as for the normal and the $s_\pm$ states.} \label{fig:A1gSet2} \end{center} \end{figure} Most superconducting solutions in the spin fluctuation theory of pairing having the $A_{1g}$ symmetry are characterized by gaps with a weak angular dependence on the hole pockets and a significant anisotropy on the electron Fermi surface sheets~\cite{MaitiKorshunovPRB2011}. To model such a situation, I set the amplitude and the anisotropy of the gaps on the hole pockets in Eq.~(\ref{eq:delta}) to be $\Delta_{\alpha_1}^{0}=1$, $\Delta_{\alpha_1}^{1}=0$, $\Delta_{\alpha_2}^{0}=-16.4$, $\Delta_{\alpha_2}^{1}=20$. This gives a constant gap on the inner hole pocket $\alpha_1$ and a weak anisotropy on the outer hole pocket $\alpha_2$. At the same time, the gap on $\alpha_1$ is approximately three times the gap on $\alpha_2$. This case is shown in Figs.~\ref{fig:A1gSet1}-\ref{fig:A1gSet2} and in~\onlinecite{PRBSuppl}. First, I fix the gap anisotropy on the electron pockets by setting $\Delta_{\beta}^{1}=16$ and vary the zero-amplitude magnitude, $\Delta_{\beta}^{0}$, of the gap there. The result is shown in Fig.~\ref{fig:A1gSet1}.
Once the average gap on the electron pockets has the same sign as on the hole pockets (the case of $\Delta_{\beta}^{0}=1$), the resonance condition, $\Delta_{\mathbf{k}} = -\Delta_{\mathbf{k}+\mathbf{Q}}$, is not fulfilled and the spin resonance is absent. For opposite signs of the gaps at the wave vector $\mathbf{Q}$, the spin resonance forms, and its frequency is the higher, the larger the absolute value of the zero-amplitude gap magnitude on the electron pockets. Second, I change the gap anisotropy on the electron pockets by varying $\Delta_{\beta}^{1}$ while the zero-amplitude gap magnitude is fixed, $\Delta_{\beta}^{0}=-2$. Results are shown in Fig.~\ref{fig:A1gSet2}. The experimentally observed gap anisotropy of 30\%~\cite{Shimojima2011,Abdel-Hafiez2014,Kuzmicheva2017} approximately corresponds to the case of $\Delta_{\beta}^{1}=4$ shown here. Evidently, a decrease of the gap anisotropy leads to an increase of the spin resonance frequency. The same figure illustrates what happens when $\Delta_{\beta}^{0}$ is shifted to higher energies for the minimal amplitude shown. As expected, the spin resonance peak shifts to higher frequencies. Obviously, the decrease of the spin resonance frequency with increasing anisotropy amplitude originates from the decrease of the effective gap at the wave vector $\mathbf{Q}$ entering the dynamical spin susceptibility. The decrease of the peak frequency is accompanied by a loss of its intensity due to the diminished spectral weight, in agreement with the results of Ref.~\onlinecite{Maiti2011}. \section{Results for the calculated gap function \label{sec:resultscalcgap}} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{fig_Delta_set18} \includegraphics[width=1.0\columnwidth]{fig_FeAs5orb_SFset18_ImChi_set10_FSbig} \caption{(Color online) Top: Angular dependencies of gaps on hole ($\alpha_{1,2}$) and electron ($\beta_{1,2}$) Fermi surface pockets calculated within the spin fluctuation pairing theory (SF gap) and obtained by fitting the parameters of Eq.~(\ref{eq:delta0123}). Bottom: Frequency dependencies of imaginary parts of the spin susceptibilities at the wave vector $\mathbf{Q}$ in the normal state (non-SC), for the model $s_\pm$ state, and for the SF gap. Magnitudes of the latter on the Fermi surface and the wave vector $\mathbf{Q}$ are shown in the inset. The SF gap was normalized by $\tilde\Delta_0=50$~meV to compare with our model results.} \label{fig:SFset18} \end{center} \end{figure} The linearized gap equation within the spin fluctuation theory of pairing was solved, and the gap function $g(\mathbf{k})$ and the corresponding eigenvalue $\lambda$ were obtained. For $U=1.4$~eV and $J=0.1$, $0.15$, and $0.2$~eV, the leading instability is the $A_{1g}$ gap that can be parameterized as \begin{eqnarray} \label{eq:delta0123} \Delta_{\mathbf{k} \mu} &=& \Delta_{\mu}^{0} + \Delta_{\mu}^{1} \left(\cos k_x + \cos k_y \right)/2 + \Delta_{\mu}^{2} \cos k_x \cos k_y \nonumber\\ &+& \Delta_{\mu}^{3} \left(\cos 2k_x + \cos 2k_y \right)/2. \end{eqnarray} Two other instabilities have $d_{x^2-y^2}$ and $d_{xy}$ gap symmetries. For $U=1.4$~eV and $J=0.1$~eV as an example, $\lambda=0.24$, $0.19$, and $0.08$ correspond to the $A_{1g}$ gap, $d_{x^2-y^2}$ gap, and $d_{xy}$ gap, respectively. An increase of $J$ does not change this hierarchy, see~\onlinecite{PRBSuppl} for details.
In general, the observed situation is typical for iron-based superconductors and was extensively discussed within the leading angular harmonics approximation (LAHA)~\cite{MaitiKorshunovPRL2011,MaitiKorshunovPRB2011}. In the following, we call the obtained $A_{1g}$ gap the SF gap. The resulting gap angular dependence for $U=1.4$~eV and $J=0.15$~eV is shown in Fig.~\ref{fig:SFset18}. It was fitted by the functional form~(\ref{eq:delta0123}) and the following parameters were obtained (only nonzero values in units of $\Delta_0$ are presented): $\Delta_{\alpha_1}^0=-23.76$, $\Delta_{\alpha_1}^3=26$, $\Delta_{\alpha_2}^0=-4.76$, $\Delta_{\alpha_2}^3=6$, $\Delta_{\beta}^0=6.99$, $\Delta_{\beta}^1=-15.5$, $\Delta_{\beta}^3=-10$. The spin response for the gap function with the aforementioned parameters is shown in Fig.~\ref{fig:SFset18}. $\mathrm{Im}\chi_{+-}$ in the $s_\pm$ state is also shown there for comparison. The spin resonance peak appears in both cases, but at lower frequencies for the SF gap because of the smaller effective gap at the wave vector $\mathbf{Q}$. Note the similarity between the spin response for the SF gap and for the model gap with $\Delta_{\beta}^0=-1.6$ and $\Delta_{\beta}^1=16$ shown in Fig.~\ref{fig:A1gSet1}. The similarity stems again from the fact that the spin response at the wave vector $\mathbf{Q}$ is governed by the effective gap at the same wave vector. Therefore, even for the different functional forms of the gaps, (\ref{eq:delta}) and (\ref{eq:delta0123}), their comparable values at $\mathbf{Q}$ lead to the similarity in $\mathrm{Im}\chi_{+-}$. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{fig_FeAs5orb_SFsets_ImChi_sets} \caption{(Color online) Frequency dependence of imaginary part of the spin susceptibility at the wave vector $\mathbf{Q}$ in the normal state (non-SC), for the SF gap, and for the model $s_\pm$ state. Susceptibilities are shown for different sets of interaction parameters.} \label{fig:SFsets} \end{center} \end{figure} Now we discuss the interaction dependence of the spin resonance peak. The Hubbard repulsion was chosen to be $U=1.4$~eV so that the system is on the verge of the magnetic instability; a slight increase of it results in a divergence of the spin susceptibility. Therefore, the spin response in this case is very pronounced. To see what happens near the point with $J=0.15$~eV, $\mathrm{Im}\chi_{+-}$ was calculated for $J=0.1$~eV and $J=0.2$~eV. Since the SF gap structure does not change much for the mentioned values of Hund's exchange, the gap parameters are fixed to be the same as for $U=1.4$~eV and $J=0.15$~eV. The results for the SF gap and for the $s_\pm$ gap are shown in Fig.~\ref{fig:SFsets}. Apparently, the peak shifts to lower frequencies and becomes higher and sharper with the increase of $J$. This trend is similar for both the SF and the model $s_\pm$ gaps. Such a behavior is due to the structure of the RPA susceptibility denominator. As was discussed above, in accordance with the Kramers-Kronig relations, the jump in $\mathrm{Im}\chi_{(0)+-}(\mathbf{Q},\omega)$ at $\omega_c$ leads to a logarithmic singularity in the real part of the susceptibility. Since the divergence condition determining the spin resonance peak is $\hat{U}_s \hat\chi_{(0)+-}(\mathbf{q},\omega) = \hat{I}$, see Eq.~(\ref{eq:chi_s_sol}), the position of the peak is determined by the interaction matrix elements $\hat{U}_s$ and the behavior of $\mathrm{Re}\chi_{(0)+-}(\mathbf{Q},\omega)$ near the logarithmic singularity.
The relation between these two quantities determines $\omega_R$. Here I vary $J$, thus effectively changing $\omega_R$. An increase of the interaction parameters decreases $\hat{U}_s^{-1}$, and the divergence can take place for smaller values of $\mathrm{Re}\chi_{(0)+-}(\mathbf{Q},\omega)$. The latter appears at lower frequencies, thus $\omega_R$ shifts towards zero. That is exactly what is seen in Fig.~\ref{fig:SFsets}. \section{Conclusions \label{sec:conclusion}} \begin{table} \caption{\label{tab} Comparison of peak energies in inelastic neutron scattering, $\omega_{INS}$, and larger and smaller gaps, $\Delta_L$ and $\Delta_S$, in various Fe-based superconductors. Values of frequencies and gaps are given in meV. Here $*$, $**$, and $\dag$ mark gaps extracted from Andreev experiments, BCS fits of $H_{c1}(T)$, and tunneling spectra, respectively; otherwise, gaps are from ARPES. Here, ``$?$'' marks the ``expected'' value of $\omega_{INS}$ (according to the value for the nearest doping) in a material for which the measurement is absent. If the peak frequency and gaps satisfy the condition $\omega_{INS} \leq \min(\Delta_L)+\min(\Delta_S)$, the frequency is written in \DLS{bold face}, and if they satisfy the condition $\omega_{INS} \leq 2\min(\Delta_L)$, \DLL{italic} is used.} \begin{tabular}{cccc} \hline \hline \centering{Material} & $T_c$ (K) & $\omega_{INS}$ & $\min(\Delta_L)$, $\min(\Delta_S)$\\ \hline BaFe$_{1.9}$Co$_{0.1}$As$_2$ & 19 & \DLS{7.3}-\DLL{9.3}~\cite{Wang2016} & 5.0, 4.0~\cite{Wang2016} \\ BaFe$_{1.866}$Co$_{0.134}$As$_2$ & 25 & \DLS{7.0-8.0}~\cite{Wang2016} & 6.5, 4.6~\cite{Wang2016} \\ BaFe$_{1.81}$Co$_{0.19}$As$_2$ & 19 & \DLS{7.5-9.5}~\cite{Wang2016} & 5.6, 4.6~\cite{Wang2016} \\ BaFe$_{1.85}$Co$_{0.15}$As$_2$ & 25 & \DLS{7.7}-\DLL{10.0}~\cite{Inosov2010,Park} & 6.0, 3.8~\cite{Terashima2009} \\ BaFe$_{1.85}$Co$_{0.15}$As$_2$ & 25.5 & \DLS{9.5?} & 5.6, 4.0~\cite{Kawahara2010} \\ BaFe$_{1.8}$Co$_{0.2}$As$_2$ & 24.5 & \DLS{9.5?} & 8.2, 3.8$*$~\cite{Tortello2010} \\ \hline Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ & 38 & \DLS{13-14}~\cite{ChristiansonBKFA,Castellan2011,Shan2012} & 10.0, 5.0~\cite{Ding2008} \\ Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ & 38 & \DLL{13-14}~\cite{ChristiansonBKFA,Castellan2011,Shan2012} & 8.0, 4.0~\cite{Wray2008} \\ Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ & 38 & \DLL{13-14}~\cite{ChristiansonBKFA,Castellan2011,Shan2012} & 8.0, 2.0~\cite{Zhang2010,Shimojima2011} \\ Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ & 38 & \DLL{13-14}~\cite{ChristiansonBKFA,Castellan2011,Shan2012} & 8.4, 3.2$\dag$~\cite{Shan2012} \\ Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ & 35 & \DLL{14.8}-15.2~\cite{Castellan2011} & 7.5, 5~\cite{Zhao2008ARPES} \\ Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ & 37.5 & \DLL{14.8-15.2}~\cite{Castellan2011} & 8.5, 1.7$**$~\cite{Ren2008} \\ Ba$_{0.65}$K$_{0.35}$Fe$_2$As$_2$ & 34 & 12.2-13.5~\cite{Castellan2011} & 5.7, 1.4$*$~\cite{Abdel-Hafiez2014} \\ Ba$_{1-x}$K$_{x}$Fe$_2$As$_2$ & 32 & \DLL{14.9-15.3}~\cite{Castellan2011} & 7.8, 1.1~\cite{Evtushinsky2009,Evtushinsky2009NJP} \\ \hline FeSe & 8 & \DLS{4}~\cite{Wang.nmat4492} & 3.5, 2.5$\dag$~\cite{Kasahara.PNAS.111.16309} \\ FeSe & 8 & \DLL{4}~\cite{Wang.nmat4492} & 2.4, 0.6$*$~\cite{Ponomarev2013} \\ \hline LiFeAs & 18 & \DLS{4}-12~\cite{Taylor.PhysRevB.83.220514} & 4.7, 2.5~\cite{Borisenko.PhysRevLett.105.067002,Borisenko.symmetry-04-00251,Umezawa.PhysRevLett.108.037002} \\ LiFeAs & 18 & \DLS{4}-12~\cite{Taylor.PhysRevB.83.220514} & 5.1, 0.9$*$~\cite{Kuzmichev2012,Kuzmichev2013} \\ LiFeAs & 18 & \DLS{4}-12~\cite{Taylor.PhysRevB.83.220514} & 5.2,
2.3$\dag$~\cite{Chi.PhysRevLett.109.087002,Hanaguri.PhysRevB.85.214505,Nag.srep27926} \\ \hline NaFe$_{0.935}$Co$_{0.045}$As & 18 & \DLS{7}~\cite{Zhang.PhysRevLett.111.207002} & 4.5, 4.0~\cite{Zhang.PhysRevLett.111.207002,Ge.PhysRevX.3.011020} \\ NaFe$_{0.935}$Co$_{0.045}$As & 18 & \DLS{6.7-6.9}~\cite{Zhang.PhysRevB.88.064504} & 6.0, 5.0~\cite{Liu.PhysRevB.84.064519} \\ NaFe$_{0.95}$Co$_{0.05}$As & 18 & \DLS{7?} & 6.0, 5.0~\cite{Liu.PhysRevB.84.064519} \\ \hline CaKFe$_4$As$_4$ & 18 & \DLS{12.5}~\cite{Iida2017} & 10.0, 6.0~\cite{Mou2016} \\ \hline \hline \end{tabular} \end{table} Within the five-orbital model for iron-based materials, I considered the question of what happens to the spin resonance when the anisotropy of the gap changes. By using both the model gap function and the one calculated via the spin fluctuation theory of pairing, it is shown that the spin resonance peak forms for most of the superconducting solutions originating from the spin fluctuation approach to the pairing and having the $A_{1g}$ symmetry, including the $s_\pm$ state. The peak frequency is the higher, the larger the zero-amplitude gap magnitude on the electron pockets. On the contrary, an increase of the anisotropy leads to a decrease of the peak frequency, which is connected with the decrease of the effective gap at the scattering wave vector $\mathbf{Q}$. As for the experimental verification of the spin resonance appearance, the condition for the spin resonance frequency $\omega_R$ in the case of the anisotropic gaps $\Delta_{L,S}$ becomes $\omega_R \leq \min(\Delta_L)+\min(\Delta_S)$. If all values entering here fulfill this condition, then the observed peak is the true spin resonance. Otherwise, a calculation involving the details of the band structure and superconducting gap is required to make a definite conclusion. I collected the available experimental data in Table~\ref{tab}. Note that the data for Co-doped materials and for the recently discovered CaKFe$_4$As$_4$ fall into the first category and, therefore, demonstrate the presence of the $s_\pm$-type gap. Other materials require more effort from both the theoretical and experimental sides to (1) extract precise values of the gaps and peak energies and (2) perform calculations for particular band and gap structures. Additional information can be gained from the temperature dependence of the resonance peak. Since the peak frequency $\omega_R$ is determined by the amplitude of the gaps, and the gaps decrease with temperature while approaching $T_c$, $\omega_R(T)$ should also scale with $\Delta_{L,S}(T)$. Simultaneous measurement of the temperature dependence of the gaps and the peak frequency is highly desirable for understanding the details of the spin resonance. \begin{acknowledgements} I would like to thank I. Eremin, S.A. Kuzmichev, and T.E. Kuzmicheva for useful discussions. This work was supported in part by Presidium of RAS Program for the Fundamental Studies No. 12, Gosbudget program No. 0356-2017-0030, and the Foundation for the Advancement of Theoretical Physics and Mathematics ``BASIS''. \end{acknowledgements}
\section{Introduction} Boolean satisfiability is a well-known NP-complete problem that serves as the basis for many computationally hard tasks. Some of these applications, such as formal software verification and data mining, are well served by solvers which are capable of finding more than one satisfying assignment of variables, or even all such solutions \cite{toda2015implementing}. Quantum annealing (QA) has been applied to the solution of weighted max 2-SAT problems \cite{santra2014max,bian2010teaching}, the construction of SAT filters \cite{douglass2015constructing}, SAT variants \cite{roy2016mapping}, and frustrated loop problems inspired by SAT \cite{hen2015probing,king2015performance}. QA hardware operates by performing many annealing (solving) cycles, returning a potential solution sample for each cycle \cite{johnson2011quantum,bian2010teaching}. We seek to discover whether quantum annealers could provide an advantage in finding multiple solutions to satisfiability problems. We first provide a construction for posing satisfiability problems to quantum annealing hardware solvers in Section \ref{sec_sat_penalty}. Section \ref{sec_mixedsat} defines our problem set, and Section \ref{sec_results} details the results of our quantum and classical solution sampling comparison. \section{Cascading-OR SAT penalty functions} \label{sec_sat_penalty} Satisfiability problems in general consist of a conjunction of disjunctive clauses. The conjunction can be achieved on quantum annealing hardware by summing terms representing individual clauses, so we will focus here on defining the disjunctive clause Hamiltonian penalty functions, which we denote $H_k$, with $k$ the number of variables in each clause. The clauses are built using a cascading-OR construction, which requires $2(k-1)$ qubits per clause. It should be noted that the resulting clauses are not directly embeddable on QA hardware graphs. No direct embeddings are proposed here, primarily due to the fact that many clauses share variables, and by the time this is taken into account in building the Hamiltonian, a heuristic embedding algorithm for the problem as a whole will still be required. The overhead of heuristic embedders for QA hardware graphs is of course an entire line of research on its own. We begin our construction with two building blocks. The first is \begin{equation} H_2(x1,x2) = -\sigma^z_{x1} - \sigma^z_{x2} + \sigma^z_{x1}\otimes\sigma^z_{x2} , \end{equation} the ground states of which represent the states for which $x1\vee x2$ evaluates to true, i.e., the states for which the $2$-SAT clause is satisfied. We also define an ``OR with output'' penalty function \begin{equation} H_{OR}(x1,x2,z) = \sigma^z_{x1} + \sigma^z_{x2} - 2\sigma^z_z + \sigma^z_{x1}\otimes\sigma^z_{x2} - 2\sigma^z_{x1}\otimes\sigma^z_{z} - 2\sigma^z_{x2}\otimes\sigma^z_{z} , \end{equation} which has as its ground states the states where $x1 \vee x2 = z$, the crucial difference being that the correct state of the OR output is included in the penalty function.
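As a quick sanity check (a minimal sketch, not part of the solver pipeline used in this work), the ground states of $H_2$, $H_{OR}$, and of the cascaded $H_3$ constructed in the next section can be enumerated by brute force, using the convention that the $\sigma^z$ eigenvalue $+1$ encodes a true variable:
\begin{verbatim}
# Brute-force check of the penalty functions, with s = +1 meaning "true".
from itertools import product

def H2(s1, s2):
    return -s1 - s2 + s1 * s2

def H_OR(s1, s2, z):
    return s1 + s2 - 2 * z + s1 * s2 - 2 * s1 * z - 2 * s2 * z

def H3(s1, s2, s3, z1):
    # cascading-OR construction: the ancilla z1 encodes (x1 OR x2)
    return H_OR(s1, s2, z1) + H2(z1, s3)

def ground_states(H, nvars):
    vals = {cfg: H(*cfg) for cfg in product((+1, -1), repeat=nvars)}
    e0 = min(vals.values())
    return {cfg for cfg, e in vals.items() if e == e0}

# Ground states of H2 are exactly the assignments with x1 OR x2 true:
assert ground_states(H2, 2) == {(1, 1), (1, -1), (-1, 1)}

# Ground states of H_OR satisfy z = x1 OR x2:
assert all(z == (1 if (s1 == 1 or s2 == 1) else -1)
           for s1, s2, z in ground_states(H_OR, 3))

# Projecting the ground states of H3 onto (x1, x2, x3) gives exactly the
# satisfying assignments of a 3-variable OR clause:
sat3 = {(s1, s2, s3) for s1, s2, s3, z1 in ground_states(H3, 4)}
assert sat3 == {cfg for cfg in product((+1, -1), repeat=3) if +1 in cfg}
print("penalty ground states match the OR clauses")
\end{verbatim}
The same enumeration applies to any $H_k$ built by the substitution rule described in the next section, though of course only for small $k$.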
By substituting the output bit $z$ from the OR function for $x1$ in $H_2$, we can easily construct $H_3$: \begin{align} H_3 &= H_{OR}(x1,x2,z1) + H_2(z1,x3)\\ &= (\sigma^z_{x1} + \sigma^z_{x2} - 2\sigma^z_{z1} + \sigma^z_{x1}\otimes\sigma^z_{x2} - 2\sigma^z_{x1}\otimes\sigma^z_{z1} - 2\sigma^z_{x2}\otimes\sigma^z_{z1}) - \sigma^z_{z1} - \sigma^z_{x3} + \sigma^z_{z1}\otimes\sigma^z_{x3} , \end{align} where the part of the equation in parenthesis represents $H_{OR}$ applied to $x1$ and $x2$, with its output $z1$ cascaded into one of the qubits in $H_2$. Note that the term $\sigma^z_{z1}$ appears twice in the equation because it is shared between $H_{OR}$ and $H_2$, so the collected term would be $-3\sigma^z_{z1}$. See Figure \ref{blocks_figure} for diagrams of the basic penalty functions we have introduced. \begin{figure} \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=0.5\textwidth]{H2.png} \caption{2-SAT penalty} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=0.5\textwidth]{HOR.png} \caption{OR with output penalty} \end{subfigure} \caption{Building blocks for cascading-OR penalty functions.} \label{blocks_figure} \end{figure} Penalty functions $H_k$ of any size can be built inductively by extending this methodology. Start with a valid $H_{k-1}$, choose any of the qubits which directly represent problem variables (shown in above equations as $xi$), and substitute the output qubit of $H_{OR}$ into its place. The size of the penalty function grows by $2$ qubits because we are putting three in the place of one. Because we can choose any problem variable qubit, we can define multiple equivalent $H_k$ penalty functions depending on which substitution we use. Figure \ref{options_figure} shows several possible constructions of $H_3$ and $H_4$ penalty functions. \begin{figure} \centering \begin{subfigure}{0.45\textwidth} \includegraphics{H3_1.png} \caption{Option 1 for 3-SAT penalty} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics{H3_2.png} \caption{Option 2 for 3-SAT penalty} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics{H4_1.png} \caption{Option 1 for 4-SAT penalty} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics{H4_2.png} \caption{Option 2 for 4-SAT penalty} \end{subfigure} \caption{Options for $H_3$ and $H_4$. The penalty function can take different forms depending on which logical variable qubits are extended by $H_{OR}$; these forms become more varied as $k$ increases and the decision tree widens.} \label{options_figure} \end{figure} \section{Mixed SAT} \label{sec_mixedsat} The test problem set for this study consists of a group of mixed SAT problems which contain clauses of diverse lengths (as compared to the standard 3-SAT, in which each clause involves three variables). The test set was tailored to be feasible on the D-Wave 2X generation of quantum annealers. We ran $123$ instances with $n=30$ on a $1098$ qubit QA chip, taking $1000$ solution samples for each instance and parameter setting at an annealing time of $20$ microseconds per sample. These instances were a subset of a larger set of mixed SAT instances which was generated randomly, then downselected based on number of solutions (less than a million) and embeddability of the resulting penalty function on hardware. \section{Results} \label{sec_results} \subsection{Timing metrics} We considered two timing metrics for the quantum annealer results. 
The first, core annealing time, is important because it captures the key physics of the computation. The quantum annealer calculates the satisfying assignment by performing an adiabatic quantum evolution from a known original hardware state to the unknown problem solution. This process is the \textit{core anneal}, and is the key difference between quantum annealers and classical hardware. By separating the core anneal time from other hardware times, we get an idea of the essential limits on this type of computation. We also present results using wallclock time for the quantum annealer. This includes, among many things, programming time, thermalization time (so the chip can cool down after programming), core annealing time, readout time, and postprocessing time. The wallclock time is how long we have to wait for results from the current generation of quantum annealers. However, there is reason to believe that many of the elements within the wallclock time will be subject to engineering improvement in the near term. \subsection{Classical comparison} We used a recently published classical ALL-SAT solver \cite{toda2015implementing} to count the solutions for each problem and establish a timing baseline. Toda and Soh programmed several ALL-SAT modifications to MiniSAT, a well-known single-solution SAT solver. We used the version that performed best in their benchmarks. The optimality of this solver for ALL-SAT problems in general and mixed SAT in particular is not established, but it is the current state of the art. All times shown for the classical solver are wallclock times. The solver timestamped each solution, so we have granular timing data as to when each solution was found. \subsection{Core annealing time} When only core annealing time is considered, the quantum annealer initially finds solutions faster than the classical solver for most instances in the problem set. In Figure \ref{fig_instance_2_anneal}, the classical and quantum curves for time to find distinct solutions are shown. The quantum annealer initially finds solutions much more quickly, but slows in the rate at which it finds new solutions, allowing the classical curve to cross under it. Figure \ref{fig_crossover} shows these crossover points for the entire instance set. We find an initial quantum advantage in the tens of solutions for most instances. It is relevant to note that the sets of solutions found by the classical and quantum solvers at the crossover point have low overlap (see Figure \ref{fig_overlap} for details). \begin{figure} \centering \includegraphics[width=\textwidth]{instance_2_curves.pdf} \caption{Rate of solution finding for classical and quantum solvers. This plot is for an instance with one of the larger initial solution finding advantages compared to the classical alternative. The shapes of the curves are consistent across all instances in the problem set.} \label{fig_instance_2_anneal} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{crossover_scatter.pdf} \caption{Crossover points between classical and quantum solution finding rates. Each point represents one problem instance and shows the location of the crossover we see for a single problem in Figure \ref{fig_instance_2_anneal}. Error bars represent the resolution of the timing data.} \label{fig_crossover} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{answer_overlap_hist.pdf} \caption{Overlap between classical and quantum solution sets at crossover.
Bars indicate the number of problem instances with a given fraction of solutions appearing in both the classical and quantum partial solution sets at the crossover point.} \label{fig_overlap} \end{figure} \subsection{Wallclock time} When all quantum processor activities are accounted for, and we compare quantum wallclock to classical wallclock time, the initial advantage disappears. Figure \ref{fig_instance_2_wallclock} shows the same instance as Figure \ref{fig_instance_2_anneal}, but with the quantum wallclock rather than core anneal time. The quantum curve is immediately at a disadvantage and never crosses the lower classical time curve. This is true for all instances in the set we studied. \begin{figure} \centering \includegraphics[width=\textwidth]{instance_2_curves_wallclock.pdf} \caption{Wallclock rate of solution finding for classical and quantum solvers. This is the same problem instance shown in Figure \ref{fig_instance_2_anneal}, with the quantum curve measured in wallclock rather than core annealing time. Any quantum advantage is lost.} \label{fig_instance_2_wallclock} \end{figure} \subsection{Solution sampling} Quantum annealers solve problems by sampling from the low energy state space of the penalty function, so it is natural to consider the application of quantum annealers as solution samplers for problems of interest. In particular, we were interested in whether quantum annealers could provide more variety in the answer set than classical solvers, particularly because the quantum penalty function can be subjected to trivial spin reversal transformations or parameter settings that leave the identity of the ground state solutions the same but change the physics of the solving process \cite{ronnow2014defining,pudenz2016parameter}. To measure the diversity of the solution set, we look at the Hamming distance between neighboring solution samples (Figure \ref{fig_hamming_neighbor}). We find that the classical solver is both more consistent in the distribution of solutions and finds solutions that are more widely separated in Hamming distance. \begin{figure} \centering \includegraphics[width=\textwidth]{instance_44_Hamming_from_neighbor_5.pdf} \caption{Hamming distance between neighboring solutions. The solutions for a typical problem instance are displayed here in the order they were returned from each solver. Each point represents the Hamming distance of the current solution from the one that came before it. Blue circles represent solutions from the classical solver. All other colors and shapes represent solutions from the quantum annealer, each type of marker indicating one trivial spin reversal transformation (SRT) of the problem's penalty function. All spin reversals on the quantum annealer behave consistently, but the distance between adjacent classical solutions is both higher and more consistent than the quantum solver can produce.} \label{fig_hamming_neighbor} \end{figure} \section{Conclusions} Quantum annealers have the potential to assist in finding multiple solutions to mixed satisfiability problems, but in their current form are not ready for the task. The advantage in time for finding an initial set of solution samples seen when only core annealing time is considered and the low overlap between the quantum and classical crossover solution sets points to a potential future role for a quantum solver as a helper to a classical solver. 
Indeed, it might be possible to run the quantum solver to crossover with the classical solver, apply a spin reversal transformation, and run the quantum solver again to find another set of solutions. However, with current quantum annealing technology, the overhead timing factors included in the wallclock time destroy any advantage.
\section{Coherent state in a QD pump under a strong magnetic field} We show that in a 2D QD formed by a time-dependent confinement potential, the evolution of a Gaussian state with spatial width $\simeq l_B$ follows the classical equation of motion of a coherent state under the strong-magnetic-field conditions that the QD has a size much larger than the magnetic length $l_B$ and that the QD confinement potential $U_\textrm{QD}$ changes slowly on the length scale of $l_B$ and on the time scale of $\omega_c^{-1}$. We derive Eq.~(1) and the expression for the wave function overlap $|\ovl{\psi_c}{\psi}|^2$ mentioned in the main text. In order to elucidate the behavior and the validity of the result, we discuss an example, the time evolution of the ground state in a time-dependent anisotropic harmonic QD confinement potential. We consider the ground state of the Hamiltonian $H_\textrm{QD}$ at the initial time $s=0$, and study its time evolution under $H_\textrm{QD}(s) = (\mb{p}- e\mb{r}\times B \hat{\mb{z}}/2)^2 / (2m_e^*) + U_\textrm{QD} (\mb{r}, s)$. Since at low energy the confinement potential $U_\textrm{QD} (s=0)$ is well approximated by an anisotropic harmonic potential, the ground state can be approximately expressed~\cite{Madhav-2} in a Gaussian form with certain harmonic frequencies $\omega_{0,x}$ and $\omega_{0,y}$, \begin{equation} \psi (\mb{r}, s=0) \approx \exp[ -\{\frac{x^2}{ l_B^2}\frac{\omega_{0,x}}{\omega_{0,x} +\omega_{0, y}}+\frac{y^2}{ l_B^2}\frac{\omega_{0,y}}{\omega_{0,x} +\omega_{0, y}}\}(1+ \frac{(\omega_{0,x} +\omega_{0, y})^2}{2\omega_c^2}) +i \frac{xy}{l_B^2} \frac{\omega_{0,x}-\omega_{0,y}}{\omega_{0,x} +\omega_{0, y}} (1+\frac{(\omega_{0,x} +\omega_{0, y})^2}{{ 4} \omega_c^2})], \label{init_state} \end{equation} up to the normalization constant. This approximation becomes more accurate as $\omega_c$ becomes much larger than $\omega_{0,x}$ and $\omega_{0,y}$; when $\omega_c \gg \omega_{0,x}, \omega_{0,y}$, $\psi (\mb{r}, s=0) \to \exp [ - \mb{r}^2 / (2 W) ]$ with $W \sim l_B$ and the overlap between the exact ground state and $\exp [ - \mb{r}^2 / (2 W) ]$ is approximately $1 - \bo (4 \omega_{0, x (y)}^4 / \omega_c^4) $. Next, we study the time evolution of the ground state $\psi (\mb{r}, s=0)$ in Eq.~\eqref{init_state} under $H_\textrm{QD}(s)$. The evolution is decomposed into two parts, one from the kinetic Hamiltonian and the other from $U_\textrm{QD}(\mb{r}, s)$, \begin{equation}\label{small_time_evol} \psi(\mb{r'}, s + \delta s ) = \int d\mb{r} \, K_B(\mb{r'}, s + \delta s; \mb{r}, s) e^{-i {U_\textrm{QD}(\mb{r}, s)} \delta s +\bo[\delta s^2]} \psi(\mb{r}, s) \end{equation} for infinitesimal $\delta s$. $K_B (\mb{r}', s; \mb{r},0) \equiv \langle \mb{r'}| e^{-i s (\mb{p}- e\mb{r}\times \mb{B}/2)^2 / (2m_e^*)} | \mb{r} \rangle$ is obtained~\cite{Papadopoulos1971-2} as \begin{equation} K_B (\mb{r}', s; \mb{r}, 0) = \left(\frac{m_e^*\omega_c}{4\pi i \sin \frac{\omega_c s}{2}}\right)^{3/2} \exp\left[\frac{i}{2 l_B^2} \left( \cot \frac{\omega_c s}{2}(\mb{r}-\mb{r'})^2 +2 (\mb{r'} \times \mb{r})_z \right) \right].
\nonumber \end{equation} To compute the time evolution further, (i) the infinitesimal $\delta s$ is considered so that the commutator between the kinetic Hamiltonian and $U_\textrm{QD}(\mb{r}, s)$ is neglected in Eq.~\eqref{small_time_evol}, and (ii) at each instant $s$, $U_\textrm{QD}(\mb{r}, s)$ is approximately expressed as \begin{eqnarray} U_\textrm{QD}(\mb{r}, s) \simeq U_\textrm{QD}(\mb{r}_0 , s) +\sum_{i=x,y} \frac{\partial}{\partial x_i} U_\textrm{QD}(\mb{r}, s)|_{\mb{r}_0} (\mb{r}-\mb{r}_0)_i +\sum_{i,j=x,y}\frac{\partial^2}{2\partial x_i \partial x_j} U_\textrm{QD}(\mb{r}, s)|_{\mb{r}_0} (\mb{r}-\mb{r}_0)_i (\mb{r}- \mb{r}_0)_j, \label{TaylorExp} \end{eqnarray} where $U_\textrm{QD}(\mb{r}_0 , s)$ is the potential at the mean position $\mb{r}_0$ of $\psi(\mb{r}, s)$ at time $s$. This Taylor expansion up to the second order is sufficient when $U_\textrm{QD}(\mb{r}, s) - U_\textrm{QD}(\mb{r}_0, s)$ is much smaller than $\hbar \omega_c$ for $|\mb{r}-\mb{r}_0| < l_B$; when the total potential $U_\textrm{QD}(\mb{r}, s) + m_e^* \omega_c^2 \mb{r}^2 / 8$ from $U_\textrm{QD}$ and the magnetic confinement is Taylor expanded around $\mb{r}_0$, the expansion up to the second order dominates over the higher-order terms. Note that the drift motion of the state evolution is described by the first two terms of Eq.~\eqref{TaylorExp}, while the anisotropic shape of the Gaussian form is determined by the last term of Eq.~\eqref{TaylorExp}. To compute the time evolution of the state, we apply Eq.~\eqref{TaylorExp} to Eq.~\eqref{small_time_evol} and perform Gaussian integrals, assuming that $U_\textrm{QD}$ changes slowly on time scales $< \omega_c^{-1}$, namely $\partial U_\textrm{QD} / \partial s \ll \omega_c U_\textrm{QD}$. The result shows that the time-evolved state remains of Gaussian form, \begin{equation}\label{finite_time_evol} \psi(\mb{r}, s) = N \exp\left[ (\mb{r}-\mb{r}_c(s))^{\intercal} { \mb{R}} (\mb{r}_c(s))^\dagger { \begin{bmatrix} A(s) & C(s)/2 \\ C(s)/2 & B(s) \end{bmatrix} } { \mb{R}} (\mb{r}_c(s)) (\mb{r}-\mb{r}_c(s)) /l_B^2 \right] e^{i \mb{p}_c(s) \cdot \mb{r} } \end{equation} where $N \equiv \sqrt[4]{4 \mr{Re}[A(s)] \mr{Re}[B(s)] -\mr{Re}[C(s)]^2}/(\sqrt{\pi}l_B)$. The matrix $\mb{R}(\mb{r}_c(s)) \equiv \begin{bmatrix} \cos \phi(\mb{r}_c(s)) & \sin \phi(\mb{r}_c(s)) \\ -\sin \phi(\mb{r}_c(s)) & \cos \phi(\mb{r}_c(s)) \end{bmatrix}$ rotates the coordinate by the angle $\phi(\mb{r}_c(s)) \equiv \frac{1}{2} \tan^{-1}[ \partial_{xy}^2 U_\textrm{QD}(\mb{r}_c(s))/\{\partial_{yy}^2 U_\textrm{QD}(\mb{r}_c(s))- \partial_{xx}^2 U_\textrm{QD}(\mb{r}_c(s))\}]$ so that the anisotropic directions (the major and minor axes) of the Gaussian form align with the rotated axes $\mb{R}(\mb{r}_c(s)) \hat{\mb{x}}$ and $ \mb{R}(\mb{r}_c(s)) \hat{\mb{y}}$. Remarkably, we notice that the mean position $\mb{r}_c(s)$ and mean momentum $\mb{p}_c(s)$ of the state at time $s$ are determined by the classical equation of motion (this is why we call $\psi$ a coherent state), \begin{equation} \label{eom_rp} \begin{aligned}{} \frac{d \mb{r}_c}{ds} &= \frac{\partial H_\textrm{QD}}{\partial \mb{p}_c} = \frac{\mb{p}_c}{m_e^*} -\frac{\omega_c}{2} \hat{\mb{z}} \times \mb{r}_c, \\ \frac{d \mb{p}_c}{ds} &= - \frac{\partial H_\textrm{QD}}{ \partial \mb{r}_c} = -\frac{m_e^* \omega_c^2}{4} \mb{r}_c + \frac{\partial}{\partial \mb{r}} U_\textrm{QD}(\mb{r}, s)|_{\mb{r}_c} - \frac{\omega_c}{2}\hat{\mb{z}} \times \mb{p}_c.
\\ \end{aligned} \end{equation} Therefore, the ground state at the initial time $s=0$ evolves in time, propagating along the $\mb{E} \times \mb{B}$ drift determined by $\partial U_\textrm{QD}(\mb{r}, s)/ \partial \mb{r} $ and the magnetic field. The shape (the width and the anisotropy) of the Gaussian wave packet is determined by $A(s)$, $B(s)$, and $C(s)$, which are governed by the differential equations \begin{equation} \label{eom_c} \begin{aligned}{} \frac{d A}{d s} &= \frac{i}{4} (-1 + 4A^2) \omega_c + \frac{1}{2} (-C + i C^2/2) \omega_c -\frac{i \kappa_+^2(s)}{\omega_c}, \\ \frac{d B}{d s} &= \frac{i}{4} (-1 + 4 B^2) \omega_c + \frac{1}{2} (C + i C^2/2) \omega_c -\frac{i \kappa_-^2(s)}{\omega_c}, \\ \frac{d C}{d s} &= \omega_c(A -B) +i\omega_c (A +B) C, \end{aligned} \end{equation} where $\kappa_{\pm}(s) \equiv [ (\partial_{xx}^2 + \partial_{yy}^2) U_\textrm{QD}|_{\mb{r}_c}/2 \pm \{((\partial_{xx}^2- \partial_{yy}^2) U_\textrm{QD}|_{\mb{r}_c} )^2/4+ {( \partial_{xy}^2 U_\textrm{QD}|_{\mb{r}_c} )^2} \}^{1/2}]^{1/2}/\sqrt{m_e^*}$. The state in Eq.~\eqref{init_state} provides the initial condition of the differential equations in Eq.~\eqref{eom_c}. The solution of Eq.~\eqref{eom_c} is \begin{equation} \begin{aligned}{} \label{sol} A(s) &= -\frac{\kappa_{+}(s)}{\kappa_{+}(s)+\kappa_{-}(s)} -\frac{ \kappa_+(s) (\kappa_+ (s) + \kappa_- (s))}{2\omega_c^2} + \bo(\frac{\kappa_\pm^4}{\omega_c^4}) + \bo(\frac{1}{\kappa_\pm \omega_c}\frac{d\kappa_{\pm}}{ds}), \\ B(s) &= -\frac{\kappa_{-}(s)}{\kappa_{+}(s)+\kappa_{-}(s)} -\frac{\kappa_-(s) (\kappa_+ (s) + \kappa_- (s))}{2\omega_c^2} + \bo(\frac{\kappa_\pm^4}{\omega_c^4}) + \bo(\frac{1}{\kappa_\pm \omega_c}\frac{d\kappa_{\pm}}{ds}), \\ C(s) &= i \frac{\kappa_{+}(s)-\kappa_{-}(s)}{\kappa_{+}(s)+\kappa_{-}(s)} +i \frac{(\kappa_{+}(s)-\kappa_{-}(s))(\kappa_+ (s) + \kappa_- (s))}{4 \omega_c^2} + \bo(\frac{\kappa_\pm^4}{\omega_c^4}) + \bo(\frac{1}{\kappa_\pm \omega_c}\frac{d\kappa_{\pm}}{ds}). \end{aligned} \end{equation} Note that the errors $\bo(\cdots)$ are small under the conditions on $U_\textrm{QD}$ mentioned above; the smoothness of $U_\textrm{QD}$ on the length scale $l_B$, $U_\textrm{QD}(\mb{r}, s) -U_\textrm{QD}(\mb{r}_0, s) \ll \hbar \omega_c$ for $|\mb{r} -\mb{r}_0| <l_B $, implies $\kappa_\pm \ll \omega_c$, while the smoothness of $U_\textrm{QD}$ on the time scale $\omega_c^{-1}$, $\partial U_\textrm{QD}/\partial s \ll \omega_c U_\textrm{QD} $, implies $\partial \kappa_{\pm}/\partial s \ll \omega_c \kappa_\pm$. Under these conditions, the second terms of Eqs.~\eqref{sol} are also negligible as they become much smaller than the first terms. Then, the solution shows that the time-evolved wave packet has an anisotropic Gaussian form with a width of $l_B [1 - (\kappa_+-\kappa_-)/(\kappa_+ +\kappa_-)]^{-1/2}$ along the major axis and $l_B [1 +(\kappa_+-\kappa_-)/(\kappa_+ +\kappa_-)]^{-1/2}$ along the minor axis. When $U_\textrm{QD}$ is an isotropic harmonic potential, the Gaussian form is also isotropic with $\kappa_+ = \kappa_-$. For general anisotropic QD confinements, the Gaussian form is anisotropic, but this effect is not significant in realistic QDs, as discussed below. Thus, $\psi (\mb{r}, s)$ is well approximated as \begin{equation} \psi_c(\mb{r}, s) =\frac{1}{\sqrt{\pi l_B^2}} \exp [-\frac{(\mb{r}-\mb{r}_c (s))^2}{2 l_B^2}+i \mb{p}_c (s) \cdot \mb{r}].
\nonumber \end{equation} In order to elucidate the behavior and the validity of the result, we discuss the time evolution of the ground state in a time-dependent anisotropic harmonic QD, $U_\textrm{QD}(\mb{r}, s) = m_e^*(\omega_{0,x}^2(s) x^2 +\omega_{0,y}^2(s) y^2)/2 + F(s) x$. Here, the force $F(s)$ describes a time-dependent shift of the center of the confinement in the $x$ direction. In this case, the error terms $\bo(\cdots)$ in Eq.~\eqref{sol} vanish and the time evolution $\psi(\mb{r}, s)$ is obtained from Eqs.~\eqref{finite_time_evol}, \eqref{eom_rp}, and \eqref{sol} as \begin{equation} \begin{aligned}{} \psi (\mb{r}, s) &= \exp \Big[ -\Big\{\frac{(x-x_c(s))^2}{ l_B^2}\frac{\omega_{0,x}(s)}{\omega_{0,x}(s) +\omega_{0, y}(s)}+\frac{(y-y_c(s))^2}{ l_B^2}\frac{\omega_{0,y}(s)}{\omega_{0,x}(s) +\omega_{0, y}(s)}\Big\} \big(1+ \frac{(\omega_{0,x}(s) +\omega_{0, y}(s))^2}{2\omega_c^2} \big) \\ &\quad\quad \quad +i \frac{(x-x_c(s)) (y-y_c(s))}{l_B^2} \frac{\omega_{0,x}(s)-\omega_{0,y}(s)}{\omega_{0,x}(s) +\omega_{0, y}(s)} \big(1+\frac{(\omega_{0,x}(s) +\omega_{0, y}(s))^2}{ 4 \omega_c^2 }\big)\Big] e^{i \mb{p}_c(s) \cdot \mb{r}}. \end{aligned} \label{EXACT} \end{equation} We note that when $\omega_{0,x(y)}(s)$ is time independent, Eq.~\eqref{EXACT} is identical to the result obtained analytically~\cite{Madhav-2} by diagonalizing the Hamiltonian. The overlap between $\psi$ and $\psi_c$ is obtained as \begin{equation} \label{ovl} |\ovl{\psi_c(s)}{\psi(s)}|^2 \approx 1 - \left(\frac{\omega_{0,x}(s) +\omega_{0,y}(s)}{2 \omega_c}\right)^4 -\frac{1}{2} \left(\frac{\omega_{0,x}(s)-\omega_{0,y}(s)}{\omega_{0,x}(s) +\omega_{0,y} (s)} \right)^2. \end{equation} The second term means that $\psi$ is well approximated by the Gaussian packet of Eq.~\eqref{finite_time_evol} in a sufficiently strong magnetic field, and the last term shows that $\psi(s)$ is well described by the isotropic Gaussian packet $\psi_c$ when $U_\textrm{QD}$ is not too anisotropic. For example, when $|\omega_{0,x}(s)-\omega_{0,y}(s)|$ is $\sim$30\% of $(\omega_{0,x}(s)+\omega_{0,y}(s))/2$ (this is the value that we find in the numerical simulation of a realistic QD pump in Fig.~1), the last term of Eq.~\eqref{ovl} is less than $10^{-2}$. \section{Time-dependent scattering theory} We here describe the scattering theory for the QD pump, which follows a Floquet theory~\cite{floquet-2}, and derive the emitted wave functions in Eqs.~(3) and (5). In the regime of $v_U \tau_d \gtrsim \Delta$, the QD pump is simplified into the scattering model in Fig.~\ref{figs1}. The coherent state propagates along a loop of coordinate $l$ which couples with the edge of the 2DEG outside the QD via the exit barrier located at $l=0$. The loop represents the trajectory of the state at $s > s_\textrm{mx}$. The state gains the potential energy $U(s)$ from $V_\textrm{G1}$. $U(s)$ increases linearly and then stays at its maximum $U_\textrm{mx}$ at $s > s_\textrm{mx}$, following $V_\textrm{G1}(s)$ in Fig.~1. The exit barrier is parameterized by scattering amplitudes $t_\mathcal{E}$, $r_\mathcal{E}$, $t'_\mathcal{E}$, and $r'_\mathcal{E}$ connecting plane waves of the QD ($\sim Y e^{i k l}$, $Y$ being the wave function in the transverse direction) and those of the 2DEG edge ($\sim Y e^{i k x}$) at the same energy $\mathcal{E}$. The 2DEG edge states belong to the lowest Landau level, since the edge states of the higher levels are located farther from the exit barrier (by a distance longer than $l_B$), hence the coupling from the QD to them is much weaker.
At time $s_0$, the coherent state has such a low energy that $t_\mathcal{E} = 0$ within its energy window, and is located at $l=0^{-}$ for simplicity. \begin{figure}[t] \includegraphics[width=0.7\textwidth]{fig1_supp.pdf} \caption{Scattering problem (a) before and (b) after the gauge transformation, in which the information of the time dependence of the potential $U$ is attached to the scattering amplitudes.} \label{figs1} \end{figure} We solve the scattering problem using a gauge transformation in which the dynamical phase generated by $U(s)$ is attached to the scattering amplitudes, as in a Floquet theory~\cite{floquet-2}. We consider a phase $\Lambda (l,s) = \Theta(l-0^{+}) \Theta(L^{-}-l) \int_{-\infty}^s U(s')ds' $, where $L$ is the total length of the loop, $\Theta (x) = 1$ for $x > 0$, and $\Theta (x) = 0$ for $x < 0$; in this Supplementary Material, we use the convention $\hbar \equiv 1$. Then the potential $U$ and the vector potential $\mb{A}$ are gauge transformed as \begin{equation} \begin{aligned}{} \Phi =U(s) \,\,\,\,\,\, &\rightarrow \,\,\,\,\,\, \Phi - \partial \Lambda/\partial s =0 \\ \mb{A} =0 \,\,\,\,\,\, &\rightarrow \,\,\,\,\,\, \mb{A} + \nabla \Lambda = [\delta(l-0^{+})-\delta(l-L^{-})] \int_{-\infty}^s U(s')ds'. \end{aligned} \end{equation} After the transformation, the loop becomes time independent, and instead the coupling at $l=0$ (at $l=0^+$ and $l=L^-$) between the loop and the 2DEG edge becomes time dependent, carrying the information of $U(s)$. To apply the gauge transformation, we decompose the loop into three regions, $l \in [L^-, L]$, $l \in [ 0, 0^+]$, and $l \in [0^+, L^-]$, and we assign state amplitudes $a_1$, $b_1$, and $c$ to these regions; cf. Fig.~\ref{figs1}(b). For example, a scattering state incoming from the 2DEG edge can be decomposed into an incident edge state of amplitude $a_{2, \mathcal{E}}$, an edge state with amplitude $b_{2, \mathcal{E}}$ outgoing from the coupling point, a loop state with amplitude $b_{1, \mathcal{E}}$ in $l \in [ 0, 0^+]$, a loop state with amplitude $a_{1, \mathcal{E}}$ in $l \in [L^-, L]$, and a loop state with amplitude $c_\mathcal{E}$ in $l \in [0^+, L^-]$. Here, $\mathcal{E}$ is the energy of the incident state. At $l=0^+$ and $l=L^-$, the information of the time-dependent $U(s)$ is attached to wave functions such that a wave function $\psi_{c, \mathcal{E}}(l,s)$ of energy $\mathcal{E}$ in $l \in [0^+, L^-]$ couples with $\psi_{b_1, \mathcal{E}}(s)$ in $l \in [0,0^+]$ and $\psi_{a_1, \mathcal{E}}(s)$ in $l \in [L^-,L]$ as \begin{align} \psi_{c, \mathcal{E}}(l=0^+, s) &= \psi_{b_1,\mathcal{E}} (s) e^{i\phi(s)}, \label{cb1} \\ \psi_{a_1, \mathcal{E}} (s) &= \psi_{c, \mathcal{E}} (l=L^-, s) e^{-i\phi(s)}, \nonumber \end{align} where $\phi (s) = \int_{-\infty}^s U(s') ds'$. Since $\psi_{c, \mathcal{E}}(l = L^-,s) = \psi_{c, \mathcal{E}} (l = 0^+, s - \tau_d)$, one finds \begin{equation} \psi_{a_1, \mathcal{E}} (s) = e^{-i\phi(s) +i\phi(s-\tau_d)} \psi_{b_1, \mathcal{E}} (s-\tau_d). \nonumber \end{equation} From the Fourier transformation of this, the relation between the amplitudes $a_1$ and $b_1$ is found as \begin{align} a_{1,\mathcal{E}} (E') &= \int dE'' g(E'-E'') b_{1, \mathcal{E}}(E'') e^{i E'' \tau_d}, \label{a1} \end{align} where $g(E) \equiv \int dt\, e^{-i\phi(t)+i\phi(t-\tau_d)} e^{iE t}$.
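To make the role of $g(E)$ concrete, the following minimal numerical sketch (in Python; it is not part of our actual calculation, and the ramp rate and circulation time are arbitrary illustrative values) shows that, for a linearly increasing $U(s)$, the kernel $g$ is sharply peaked at the energy $U(s)-U(s-\tau_d)$ gained during one circulation:
\begin{verbatim}
# Illustrative check: for a linear ramp U(s) = alpha*s, g(E) defined below
# Eq. (a1) is sharply peaked at E = alpha*tau_d (hbar = 1).
import numpy as np

alpha = 0.05                    # assumed ramp rate of U(s) (arbitrary units)
tau_d = 2.0                     # assumed circulation time

s = np.linspace(-200.0, 200.0, 40001)
ds = s[1] - s[0]
U = alpha * s
phi = np.cumsum(U) * ds         # phi(s) = int^s U ds' (constant offset cancels)
phi_delayed = np.interp(s - tau_d, s, phi)

E_grid = np.linspace(0.0, 0.3, 601)
kernel = np.exp(-1j * (phi - phi_delayed))
# g(E) = int dt e^{-i phi(t) + i phi(t - tau_d)} e^{i E t}
g = np.array([np.sum(kernel * np.exp(1j * E * s)) * ds for E in E_grid])

E_peak = E_grid[np.argmax(np.abs(g))]
print(E_peak, alpha * tau_d)    # peak sits near alpha*tau_d = 0.1
\end{verbatim}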
The amplitudes $a_1$ and $a_2$ are related to $b_1$ and $b_2$ as \begin{align} b_{1, \mathcal{E}}(E') &= \delta (E'-\mathcal{E}) t'_{\mathcal{E}} a_{2,\mathcal{E}} + e^{i\phi_\textrm{AB}} r_{E'} a_{1, \mathcal{E}}(E') \label{S1} \\ b_{2, \mathcal{E}} (E') &= \delta(E'-\mathcal{E}) r'_{\mathcal{E}} a_{2, \mathcal{E}} + t_{E'} a_{1, \mathcal{E}}(E'). \nonumber \end{align} Note that the Aharonov-Bohm phase $\phi_\textrm{AB} = 2\pi B \pi (L/2\pi)^2/(h/e) $ is attached to the reflection event in the loop. Next, we derive the Fabry-Perot type scattering state resulting from the incident state. Combining Eqs.~\eqref{a1} and \eqref{S1}, we find the recursive equations for $b_1$ and $b_2$, $b_{1, \mathcal{E}}(E') = \delta (E'-\mathcal{E}) t'_{\mathcal{E}} a_{2,\mathcal{E}} + e^{i\phi_\textrm{AB}} r_{E'} \int dE'' g(E'-E'') e^{i E'' \tau_d} b_{1, \mathcal{E}}(E'')$, $ b_{2,\mathcal{E}}(E')= \delta(E'-\mathcal{E}) r'_\mathcal{E} a_{2, \mathcal{E}} +t_{E'} \int dE'' g(E'-E'') e^{iE'' \tau_d} b_{1, \mathcal{E}}(E'')$. Their Fourier transformations are \begin{align} b_{1, \mathcal{E}}(s) &= t'_\mathcal{E} e^{-i \mathcal{E} s} a_{2, \mathcal{E}} + e^{i\phi_\textrm{AB}} \int ds' r(s') e^{-i \{\phi(s-s') -\phi(s-s'-\tau_d) \}} b_{1, \mathcal{E}}(s-s'-\tau_d ) \label{b1e22} \\ b_{2, \mathcal{E}}(s) &= r'_\mathcal{E} e^{-i \mathcal{E} s} a_{2, \mathcal{E}} + \int ds' t(s') e^{-i \{\phi(s-s') -\phi(s-s'-\tau_d) \}} b_{1, \mathcal{E}}(s-s'-\tau_d ). \nonumber \end{align} $r(s) \equiv \int dE r_E e^{-i Es}$ and $t(s) \equiv \int dE t_E e^{-i Es}$ are the Fourier transforms of $r_E$ and $t_E$, and they can be approximated as peaked functions of width $1/\Delta$ centered at $s=0$. The integral in Eq.~\eqref{b1e22} is further evaluated by using the peak structure and under the condition of $\ddot{U} / (2\dot{U}) \ll \Delta$, $\int ds' r(s') e^{-i \{\phi(s-s')-\phi(s-s'-\tau_d)\}} e^{i \mathcal{E} s'} = \int ds' r(s') \exp[-i \{ \phi(s)-\phi(s-\tau_d) - (U(s) -U(s-\tau_d))s' +\frac{\dot{U}(s)-\dot{U}(s-\tau_d)}{2} (s')^2 + \bo(s'^3) \} ] e^{i \mathcal{E} s'}\approx r_{\mathcal{E} +U(s)-U(s-\tau_d)} e^{-i \{\phi(s)-\phi(s-\tau_d)\}} $; in the first step, we Taylor expand $\phi(s-s')$ and $\phi(s-s'-\tau_d)$ at $s'=0$, considering the peak structure of $r(s')$ at $s'=0$; in the second, approximate, step, we ignore the quadratic term, applying the condition $\ddot{U} / (2\dot{U}) \ll \Delta$. Then, $b_{1, \mathcal{E}}$ in Eq.~\eqref{b1e22} is iteratively solved as \begin{equation} \label{b1} b_{1, \mathcal{E}}(s) = t'_\mathcal{E} e^{-i\mathcal{E} s} a_{2, \mathcal{E}} +\sum_{M=1}^{\infty} e^{-i\phi(s)+i\phi(s-M\tau_d)} e^{i M \mathcal{E}\tau_d} e^{iM \phi_\textrm{AB}} \left[ \Pi_{m'=1}^M r_{\mathcal{E}+ U(s-(M-m'-1)\tau_d) -U(s-M\tau_d)} \right] t'_\mathcal{E} e^{-i\mathcal{E} s} a_{2, \mathcal{E}}. \end{equation} Note that the condition $\ddot{U} / (2\dot{U}) \ll \Delta$ is satisfied in typical experiments, because the smallest time scale for the variation of $U$ is limited by the bandwidth of the signal generator. Namely, the smallest time scale is $\sim 50$ ps for a 10 GHz bandwidth, and then $\ddot{U} / (2\dot{U}) \sim 0.02$ meV, while $\Delta \sim 0.5$ meV (see the main text).
Plugging Eq.~\eqref{b1} into Eqs.~\eqref{cb1} and \eqref{b1e22}, we obtain the Fabry-Perot type expression of $c$ and $b_2$ in time domain, \begin{align} &c_\mathcal{E} ( s) = e^{i\phi(s)} t'_\mathcal{E} e^{-i\mathcal{E} s} a_{2, \mathcal{E}} +\sum_{M=1}^{\infty} e^{i\phi(s-M\tau_d)} e^{i M \mathcal{E}\tau_d} e^{iM \phi_\textrm{AB}} \left[ \Pi_{m'=1}^M r_{\mathcal{E}+ U(s-(M-m'-1)\tau_d) -U(s-M\tau_d)} \right] t'_\mathcal{E} e^{-i\mathcal{E} s} a_{2, \mathcal{E}} \label{ct} \\ &b_{2, \mathcal{E}}(s) = r'_{\mathcal{E}} e^{-i\mathcal{E} s}a_{2, \mathcal{E}} \nonumber \\ &+ \sum_{M=1}^{\infty} t_{\mathcal{E}+ U(s)-U(s-M\tau_d) } e^{-i\phi(s)+i\phi(s-M\tau_d)} e^{iM \mathcal{E} \tau_d} e^{i (M-1) \phi_\textrm{AB}} [ \Pi_{m'=1}^{M-1} r_{\mathcal{E}+ U(s-(M-m'-1)\tau_d) -U(s-M\tau_d)} ] t'_\mathcal{E} e^{-i\mathcal{E} s} a_{2, \mathcal{E}} \label{b2t} \end{align} Here each term of index $M$ in Eq.~\eqref{b2t} describes the process that the incident electron with amplitude $a_{2,\mathcal{E}}$ enters the loop at time $s-M\tau_d$, circles the loop $M$ times, and then escapes from the loop at time $s$. In the $M=1$ term we use $\prod_{m'=1}^{0}\equiv 1$ instead of 0, for brevity. Next, we determine the incident amplitudes $a_{2,\mathcal{E}}$ with which the resulting scattering state is identical to the coherent state $\psi_\textrm{coh} =(\sqrt{\pi/2} v\tau_p)^{-1/2} \exp[-l^2/(v\tau_p)^2] \exp[i\epsilon_0 l /v ]$ at time $s_0$, $\int d \mathcal{E} \psi_{c, \mathcal{E}}(l, s_0) = \psi_\textrm{coh}(l)$. We compute $a_{2,\mathcal{E}}$, using $\psi_{c, \mathcal{E}}(l, s_0) = c_{\mathcal{E}}(s_0-l/v)$ and Eq.~\eqref{ct}, and choosing the time dependence of $U(s)$ at $s \le s_0$ as $U(s\le s_0) = U(s_0)$ to simplify the calculation (note that the emitted wave function of $b_{2, \mathcal{E}}(s)$ at $s_\textrm{rd} \ge s_0$ does not rely on this specific choice): \begin{eqnarray} a_{2,\mathcal{E}} & \propto & e^{-i\phi(s_0)} e^{i \mathcal{E} s_0 } e^{-(\mathcal{E}-U(s_0)-\epsilon_0)^2 \tau_p^2/4} \frac{(t'_\mathcal{E})^*}{1-r_{\mathcal{E}} e^{-i(\mathcal{E}-U(s_0))\tau_d -i\phi_\textrm{AB}}} \label{coeff} \\ & = & e^{-i\phi(s_0)} e^{i \mathcal{E} s_0 } e^{-(\mathcal{E}-U(s_0)-\epsilon_0)^2 \tau_p^2/4} \sum_n \frac{(t'_\mathcal{E})^*}{|t_\mathcal{E}|^2/2 - i (\mathcal{E} - \mathcal{E}_n) \tau_d }, \nonumber \end{eqnarray} where $\mathcal{E}_n = 2 \pi \hbar n / \tau_d + U(s_0) -\phi_\textrm{AB}\hbar/\tau_d$ is the resonance energy of the loop at $s_0$. In the second equality, the expression is decomposed into the resonance states, since $r_\mathcal{E} \to 1$ at $s_0$. Finally, we compute the emitted state of $\psi(x,s) = \int d\mathcal{E} b_{2, \mathcal{E}}(s_\mathrm{rd})$, where the $s_\mathrm{rd} \equiv s - x/v_\mathrm{ed}$, and obtain Eq.~(3) \begin{equation} \label{gew} \begin{aligned}{} \psi(x,s) &\propto \sum_{n,m=1}^{\infty} e^{-(\mathcal{E}_n-\epsilon_0)^2 \tau_p^2/4} t_{\mathcal{E}_n+U(s_\mathrm{rd}) -U(s_0) } e^{i m\phi_\textrm{AB}} \left[ \Pi_{m'=0}^{m-1} r_{\mathcal{E}_n+U(s_\mathrm{rd}-(m-m') \tau_d) -U(s_0) } \right] \\ & \quad \quad \quad \times e^{-i (\mathcal{E}_n -U(s_0)) (s_\mathrm{rd}-s_0)} e^{im(\mathcal{E}_n-U(s_0))\tau_d } e^{-i \phi(s_\mathrm{rd}) + i \phi (s_0)} \zeta_m (s_\mathrm{rd}), \end{aligned} \end{equation} where $\zeta_{m}(s_\mathrm{rd}) = 1$ for $s_\mathrm{rd} \in [s_0+ m \tau_d, s_0 + (m+1) \tau_d )$ and $\zeta_m (s_\mathrm{rd}) = 0$ otherwise. 
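The resonance decomposition used in Eq.~\eqref{coeff} can be checked numerically. The following minimal sketch (assumed toy parameters, $\hbar=1$) confirms that $|1-r_{\mathcal E}e^{-i(\mathcal E-U(s_0))\tau_d-i\phi_\textrm{AB}}|^{-2}$ develops peaks at the resonance energies $\mathcal{E}_n$ with full width $\simeq|t_\mathcal{E}|^2/\tau_d$ when $r_\mathcal{E}\to 1$:
\begin{verbatim}
import numpy as np

tau_d = 1.0      # circulation time (assumed toy value, hbar = 1)
U_s0 = 0.3       # U(s_0)
phi_AB = 0.7     # Aharonov-Bohm phase
t_abs = 0.1      # |t_E|, weakly transmitting exit barrier
r = np.sqrt(1.0 - t_abs**2)              # |r|^2 + |t|^2 = 1

E = np.linspace(0.0, 20.0, 400001)
den = 1.0 - r * np.exp(-1j * ((E - U_s0) * tau_d + phi_AB))
weight = 1.0 / np.abs(den)**2            # resonance structure of Eq. (coeff)

E_1 = 2.0 * np.pi / tau_d + U_s0 - phi_AB / tau_d    # n = 1 resonance
window = np.abs(E - E_1) < np.pi / tau_d
E_w, w_w = E[window], weight[window]
sel = E_w[w_w > w_w.max() / 2.0]
fwhm = sel.max() - sel.min()
print(E_w[np.argmax(w_w)], E_1)          # peak position vs predicted E_n
print(fwhm, t_abs**2 / tau_d)            # full width vs |t_E|^2 / tau_d
\end{verbatim}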
Here, we performed the integral $\int d\mathcal{E} b_{2, \mathcal{E}}(s_\mathrm{rd})$, based on the fact that the Lorentzian function in the integrand approaches a delta function when $r_\mathcal{E} \to 1$. Note that Eq.~\eqref{gew} does not include the emission process with no circulation, since we choose $s_0$ such that the energy $\epsilon_0$ of the coherent state is much lower than the exit barrier height and hence $t_{\epsilon_0} \ll 1$. Next, we derive Eq.~(5) when $\tau_\mr{tun} \gg \tau_p$. This limit is achieved when $|t_{\epsilon_0 + U_\textrm{mx}}|$ is small. Then the emission occurs during the time of $U(s) = U_\textrm{mx}$, as $|t_\mathcal{E}|$ is even smaller at $s < s_\textrm{mx}$. This allows us to write the product term in Eq.~\eqref{gew} as an exponentially decaying function of $s_\mathrm{rd}$: $\Pi_{m'= 1}^{m} r_{\mathcal{E}_n+U(s_\mathrm{rd}- m'\tau_d) -U(s_0)} \approx (r_{\mathcal{E}_n+U(s_\mathrm{mx}) -U(s_0)})^{\lfloor (s_\mathrm{rd}-s_\mathrm{mx})/\tau_d \rfloor} \approx \exp[\, (\ln r_{\mathcal{E}_n+U(s_\mathrm{mx}) -U(s_0)}) \,(s_\mathrm{rd}-s_\mathrm{mx} )/\tau_d \, ] \approx \exp[\,- (s_\mathrm{rd}-s_\mathrm{mx}) |t_{\mathcal{E}_n+U(s_\mathrm{mx}) -U(s_0)}|^2 /(2\tau_d) \,]$, where $\lfloor x \rfloor$ denotes the floor of $x$; in the first approximation, the reflection amplitudes before $s_\textrm{mx}$ are replaced by 1, and in the other approximations we use the fact that $r_{\mathcal{E}_n+U(s_\textrm{mx}) -U(s_0)}$ is close to 1. The phase gain by $U$ reduces to $e^{-i\phi (s_\mathrm{rd})+i\phi(s_0)} \propto e^{-i U(s_\mathrm{mx}) s_\mathrm{rd} }$. Then Eq.~\eqref{gew} is expressed as \begin{equation}\label{ew_res} \psi(x,s) \propto \sum_{n=1}^{\infty} e^{-(\mathcal{E}_n-\epsilon_0)^2 \tau_p^2/4} t_{\mathcal{E}_n +U(s_\mathrm{mx})-U(s_0)} e^{-\frac{ |t_{\mathcal{E}_n+U(s_\mathrm{mx})-U(s_0)}|^2}{2\tau_d} (s_\mathrm{rd}-s_\mathrm{mx} )} e^{-i (\mathcal{E}_n+U(s_\mathrm{mx})-U(s_0)) s_\mathrm{rd}} \Theta(s_\mathrm{rd}-s_\mathrm{mx}), \end{equation} where $\Theta(x)$ is 1 for $x>0$ and 0 otherwise. Each $n$ term of this equation describes the emission of the $n$-th resonance state whose lifetime is $\tau_d/|t_{\mathcal{E}_n +U(s_\mathrm{mx})-U(s_0)}|^2$. Equation~(5) is derived from Eq.~\eqref{ew_res} using the fact that $t_{\mathcal{E}_n +U(s_\mathrm{mx})-U(s_0)}$ is a rapidly increasing function of $n$ in the limit of $\tau_{\mr{tun}} \gg \tau_p$. The detailed steps for the derivation are as follows. We first express the transmission amplitude as $t_{\mathcal{E}_n+U(s_\mathrm{mx})-U(s_0)} = t_{\epsilon_0+U(s_\mathrm{mx})-U(s_0)} \exp[\, \tau_\mr{tun} (\mathcal{E}_n -\epsilon_0) \,]$, using the definition of $\tau_\textrm{tun}$. Then the factor depending on $t_{\mathcal{E}_n+U(s_\mathrm{mx})-U(s_0)}$ in Eq.~\eqref{ew_res}, $t_{\mathcal{E}_n+U(s_\mathrm{mx})-U(s_0)} \exp[\, - |t_{\mathcal{E}_n+U(s_\mathrm{mx})-U(s_0)}|^2 (s_\mathrm{rd}-s_\mathrm{mx})/(2\tau_d) \, ]$, has a non-vanishing value only for the resonance levels $\mathcal{E}_n$ $\in [\epsilon_0 + ( \ln \frac{2\tau_d}{|t_{\epsilon_0+U(s_\mathrm{mx})-U(s_0)}|^2 (s_\mathrm{rd}-s_\mathrm{mx})} )/\tau_\mr{tun} - 1/\tau_{\mr{tun}}, \epsilon_0 + ( \ln \frac{2\tau_d}{|t_{\epsilon_0+U(s_\mathrm{mx})-U(s_0)}|^2 (s_\mathrm{rd}-s_\mathrm{mx})} )/\tau_\mr{tun} + 1/\tau_{\mr{tun}}]$. This means that at retarded time $s_\textrm{rd}$, the emitted wave function is determined only by the resonance levels whose lifetime is similar to $s_\textrm{rd}-s_\textrm{mx}$.
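The replacement of the product of reflection amplitudes by an exponential can be verified directly; the following minimal sketch (illustrative value of $|t|$) compares $r^N$ with $e^{-|t|^2 N/2}$ for $r=\sqrt{1-|t|^2}$:
\begin{verbatim}
import numpy as np

t_abs = 0.08                         # |t| at the emission energy (assumed small)
r = np.sqrt(1.0 - t_abs**2)          # reflection amplitude close to 1
N = np.arange(0, 2001)               # number of circulations ~ (s_rd - s_mx)/tau_d

exact = r**N
approx = np.exp(-0.5 * t_abs**2 * N)
# The relative error per circulation is O(|t|^4), so the two curves
# remain close over many circulations for |t| << 1.
print(np.max(np.abs(exact - approx)))
\end{verbatim}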
When $\tau_p \ll \tau_{\mr{tun}}$, $t_{\mathcal{E}_n+U(s_\mathrm{mx})-U(s_0)} \exp[\, - |t_{\mathcal{E}_n+U(s_\mathrm{mx})-U(s_0)}|^2 (s_\mathrm{rd}-s_\mathrm{mx})/(2\tau_d) \, ]$ has a sharp peak of width $1/\tau_\textrm{tun} \ll 1/\tau_p$; thus the Gaussian factor in Eq. \eqref{ew_res} can be approximated as $e^{-(\mathcal{E}_n-\epsilon_0)^2 \tau_p^2/4} \approx \exp[\, -\frac{\tau_p^2}{4\tau_\mr{tun}^2}(\ln \frac{|t_{\epsilon_0+U(s_\mathrm{mx})-U(s_0)}|^2 (s_\mathrm{rd}-s_\mathrm{mx})}{2\tau_d})^2 \,] $. Finally, using the identities $\int d\mathcal{E}\, e^{\mathcal{E}/2} e^{-a e^\mathcal{E}} e^{i \mathcal{E} s} = a^{- 1/2 -i s} \Gamma[ 1/2+ i s] $, $\Gamma \left[ 1/2+ i s \right] \approx \sqrt{\pi \mr{sech} (\pi s )} \approx \sqrt{\pi/2} \mr{sech}(s/2)$, and $ \int ds\, \mr{sech}(as) e^{-i \omega s}= \pi \mr{sech} (\frac{\pi}{2a}\omega)/a $, we obtain Eq.~(5), \begin{align} \label{ll} \psi(x, s) \propto & \sqrt{\frac{2\tau_d}{(s_\mathrm{rd}-s_\mathrm{mx})|t_{\epsilon_0}|^2}} e^{- \left( \frac{\tau_p}{\tau_\mr{tun}} \ln \frac{|t_{\epsilon_0}|^2 (s_\mathrm{rd}-s_\mathrm{mx})}{\tau_d}\right)^2} \nonumber \\ & \times \sum_{n=1}^{\infty} e^{-i (\mathcal{E}_{n} +U(s_\textrm{mx})-U(s_0)) (s_\mathrm{rd}-s_\mathrm{mx})} \mr{sech} \left(\pi \ln \frac{(s_\mathrm{rd}-s_\mathrm{mx}) |t_{\mathcal{E}_{n} +U(s_\mathrm{mx})-U(s_0)}|^2}{2\tau_d} \right) \Theta(s_\mathrm{rd}-s_\mathrm{mx}). \end{align}
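The modulus underlying the second identity is exact, $|\Gamma(1/2+is)|^2=\pi\,\mr{sech}(\pi s)$, which can be checked numerically (a minimal sketch using SciPy; only the modulus, not the phase, is tested):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

s = np.linspace(-4.0, 4.0, 161)
lhs = np.abs(gamma(0.5 + 1j * s))**2
rhs = np.pi / np.cosh(np.pi * s)     # |Gamma(1/2 + i s)|^2 = pi * sech(pi s)
print(np.max(np.abs(lhs - rhs)))     # agrees to rounding error
\end{verbatim}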
\section{Introduction} The outburst of V445 Pup was discovered on 30 December 2000 by Kanatsu \citep{kan00}. The outburst shows unique properties such as absence of hydrogen, unusually strong carbon emission lines as well as strong emission lines of Na, Fe, Ti, Cr, Si, Mg etc. \citep{ash03,iij08}. The spectral features resemble those of classical slow novae except absence of hydrogen and strong emission lines of carbon \citep{iij08}. The development of the light curve is very slow (3.3 mag decline in 7.7 months) with a small outburst amplitude of 6 mag. From these features, this object has been suggested to be the first example of helium novae \citep{ash03,kat03}. In our earlier work \citep{kat03} we presented a theoretical light curve model with the assumption of blackbody emission from the photosphere, which resulted in the best fit model of a very massive white dwarf (WD) ($M_{\rm WD} \gtrsim 1.3 ~M_{\odot}$) and a relatively short distance of $d \lesssim 1$ kpc. However, \citet{iij08} recently showed that there are absorption lines of Na I D1/2 at the velocities of 16.0 and 73.5 km~s$^{-1}$, suggesting that V445 Pup is located in or beyond the Orion arm and its distance is as large as $3.5 - 6.5$ kpc. \citet{wou08} also suggested a large distance of $4.9$ kpc. Moreover, color indexes during the outburst are consistent with those of free-free emission. With these new observational aspects we have revised the light curve model of V445 Pup. Helium novae were theoretically predicted by \citet{kat89} as a nova outburst caused by a helium shell flash on a white dwarf. Such helium novae have long been a theoretical object until the first helium nova V445 Pup was discovered in 2000. \citet{kat89} assumed two types of helium accretion: one is helium accretion from a helium star companion, and the other is hydrogen-rich matter accretion with a high accretion rate and, beneath the steady hydrogen burning shell, ash helium accumulates on the WD. The latter case is divided into two kinds of systems depending on whether hydrogen shell burning is steady (stable) or not. To summarize, helium accretion onto a WD occurs in three types of systems: (1) helium accretion from a helium star companion; (2) steady hydrogen accretion with a rate high enough to keep steady hydrogen shell burning such as supersoft X-ray sources, e.g., RX J0513.9-6951 \citep[e.g.,][]{hac03kb, mcg06} and CAL 83 \citep[e.g.,][]{sch06}, (3) hydrogen accretion with a relatively low rate resulting in a recurrent nova such as RS Oph \citep[e.g.,][]{hac07kl} and U Sco \citep[e.g.,][]{hkkm00}. In the present paper, we regard that V445 Pup is Case (1), because no hydrogen lines were detected \citep{iij08}. In helium novae, mass loss owing to optically thick wind is relatively weak compared to hydrogen novae, and a large part of the accreted helium burns into carbon and oxygen and accumulates on the WD \citep{kat99,kat04}. Our previous model indicated that V445 Pup contains a massive WD ($M_{\rm WD} \gtrsim 1.3 ~M_{\odot}$) and the WD mass is increasing through helium shell flashes. Therefore, V445 Pup is a strong candidate of Type Ia supernova progenitors. Although there are two known evolutional paths of the single degenerate scenario toward Type Ia supernova \citep[e.g.,][]{hkn99}, helium accreting WDs such as V445 Pup do not belong to either of them. Therefore, it may indicate the third path to Type Ia supernovae. 
If various binary parameters of V445 Pup are determined, they provide us important clues to binary evolutions to Type Ia supernovae. In the next section we introduce our multi-band photometric observations, which indicates that free-free emission dominates in optical and near infrared. In \S 3 we present our free-free emission dominated light curve model and its application to V445 Pup. Then we discuss brightness in quiescence in \S 4. Discussion and conclusions follow in \S 5 and \S 6, respectively. \section{Observation} Shortly after the discovery of V445 Pup outburst, one of us (S.K.) started multi-band photometry with a 25.4 cm telescope [focal length $=1600$~mm, CCD camera $=$ Apogee AP-7 (SITE SIA502AB of $512 \times 512$ pixels)]. $V$ and $R_{\rm c}$ magnitudes are obtained from January 4 to May 6, 2001, and $I_{\rm c}$ from January 4, 2001 to January 15, 2007, with the comparison star, TYC 6543-2917-1 ($V=8.74$, $B-V=0.304$). All of our data can be taken from the data archive of Variable Star Observers League in Japan (VSOLJ)\footnote{http://vsolj.cetus-net.org/}. \begin{figure} \epsscale{1.15} \plotone{f1.color.epsi} \caption{ $V$ and $I_{\rm c}$ light curves of V445 Pup (large open circles). The $I_{\rm c}$ data are shifted up by one magnitude in order to separate it from the $V$ light curve. Open triangles denote $V$ magnitudes (IAUC No. 7552, 7553, 7557, 7559, 7569, 7620). Small open circles and downward arrows are taken from AAVSO to show very early and later phases of the outburst. Large downward arrows show an upper limit observation of $I_{\rm c} > 16.2$ (this work), $V > 20$ and $I > 19.5$ \citep{hen01}. Straight lines indicate the average decline rates of $1.9 ~{\rm mag} / 130 $~days $= 0.0146 ~{\rm mag}~{\rm day}^{-1}$ and $3.3 ~{\rm mag} / 230$ days $= 0.0143 ~{\rm mag}~{\rm day}^{-1}$ for $I_{\rm c}$ and $V$, respectively. } \label{lightobs.only} \end{figure} Figure \ref{lightobs.only} shows our $V$ and $I_{\rm c}$ magnitudes as well as other observations taken from literature. These two light curves show very slow evolution ($\sim 0.014$ mag day$^{-1}$) followed by an oscillatory behavior in the $V$ magnitude before it quickly darkened by dust blackout on about JD 2452100, i.e., 7.5 months after the discovery \citep{hen01, wou02, ash03}. Here, we assume the decline rate of the light curves as shown in Figure \ref{lightobs.only} ignoring the later oscillatory phase (JD 2452040 -- 2452100) because we assume steady-state in the nova envelope as mentioned below and, as a result, our theoretical light curve cannot treat unsteady oscillations. \begin{figure} \epsscale{1.15} \plotone{f2.epsi} \caption{ Color indexes of $V-I_{\rm c}$ and $V-R_{\rm c}$ as well as $I_{\rm c}$ and $V$. Open circles: this work. Open triangles: Gilmore (IAUC 7559, 7569) and Gilmore \& Kilmartin (IAUC 7620). The horizontal dashed lines indicate the mean values of $V-R_{\rm c}=0.5$ and $V-I_{\rm c}=1.12$. } \label{VmIc} \end{figure} Figure \ref{VmIc} shows evolution of color indexes $V-I_{\rm c}$ and $V-R_{\rm c}$ as well as $V$ and $I_{\rm c}$ themselves. We can see that both of $V-I_{\rm c}$ and $V-R_{\rm c}$ are almost constant with time, i.e., $\approx 1.12$ and $\approx 0.5$, respectively, during our observation. This means that the three light curves of $V$, $I_{\rm c}$, and $R_{\rm c}$ are almost parallel and each light curve shape is independent of wavelength. These are the characteristic properties of free-free emission dominated light curves as explained below. 
The flux of optically thin free-free emission is inversely proportional to the square of wavelength, i.e., $F_\lambda \propto \lambda^{-2}$. This spectrum shape is unchanged with time although the electron temperature rises during nova outburst because the emission coefficient depends only very slightly on the electron temperature \citep[e.g.,][]{bro70}. Therefore, the color indexes of free-free emission dominated light curves are unchanged with time. On the other hand, the color indexes change with time if blackbody emission dominates because the photospheric temperature rises with time in nova outbursts. Figures \ref{lightobs.only} and \ref{VmIc} indicate that the emission from V445 Pup is dominated by free-free emission rather than by blackbody emission. \begin{figure} \epsscale{1.15} \plotone{f3.epsi} \caption{ Color indexes against wavelength. Our observational color indexes of V445 Pup, $V-R_{\rm c}$ and $V-I_{\rm c}$, are plotted for individual data (dots) and their mean values of 0.5 and 1.12 (open squares). The color indexes of V1500 Cyg are added for $y$ magnitude instead of $V$ (filled triangles) \citep[taken from][]{hac07kc}. Arrows indicate the effective wavelength of six bands of $V$, $R$, $I$, $J$, $H$, and $K$. Solid curves denote color indexes of free-free emission for $E(B-V)=0$ and 0.5. } \label{magnitude_calibration} \end{figure} The color index of free-free emission is calculated from $\lambda F_\lambda \propto \lambda^{-1}$. When the reddening is known, the reddened color is obtained as \begin{equation} m_V - m_\lambda = (M_V - M_\lambda)_0 + c_\lambda E(B-V), \label{color_reddening_relation} \end{equation} where $(M_V - M_\lambda)_0$ is the intrinsic color and $c_\lambda$ is the reddening coefficient (these values are tabulated in Table 4 in \citet{hac07kc} for five colors of $V-R$, $V-I$, $V-J$, $V-H$, and $V-K$). Figure \ref{magnitude_calibration} shows the color indexes relative to $V$ for two cases of reddening, i.e., $E(B-V)=0$ and 0.5. For comparison, we have added four relative colors of V1500 Cyg \citep{hac06} because its continuum flux is known to be that of free-free emission \citep{gal76} except for the first few days after the optical peak. Here we use Str\"omgren $y$ magnitude instead of $V$, because the intermediate-wide $y$ band is almost emission-line free. Three ($y-I$, $y-H$, and $y-K$) of four color indexes are consistent with those of color indexes with $E(B-V)=0.5$ \citep{fer77, hac06}. The $J$ band is strongly contaminated with strong emission lines of \ion{He}{1} \citep[e.g.][]{bla76, kol76, she76}. This is the reason why the $y-J$ color index deviates from that of free-free emission \citep*[see][for more details]{hac07kc}. Our observed color indexes are shown in Figure \ref{magnitude_calibration}. The open squares are centered at the mean values of $V-R_{\rm c}=0.5$ and $V-I_{\rm c}=1.12$, whereas dots represent individual observations. In V445 Pup, no strong emission lines dominate the continuum in $V$, $R$, and $I$ bands \citep{kam02, iij08}. Therefore we conclude that the color indexes of V445 Pup are consistent with those of free-free emission with the reddening of $E(B-V)=0.51$ \citep{iij08}. \section{Modeling of Nova Light Curves} \label{model_nova_outburst} \subsection{Basic Model} After a thermonuclear runaway sets in on the surface of a WD, the envelope expands to a giant size and the optical luminosity reaches its maximum. Optically thick winds occur and the envelope reaches a steady state. 
Using the same method and numerical techniques as in \citet{kat94h,kat03}, we have followed the evolution of a nova by connecting steady-state solutions along the sequence of decreasing envelope mass. We have solved the equations of motion, continuity, radiative diffusion, and conservation of energy, from the bottom of the helium envelope through the photosphere. The winds are accelerated deep inside the photosphere. Updated OPAL opacities are used \citep{igl96}. As one of the boundary conditions for our numerical code, we assume that photons are emitted at the photosphere as a blackbody with a photospheric temperature, $T_{\rm ph}$. \citet{kat03} calculated the visual magnitude $M_V$ on the basis of blackbody emission and constructed the light curves. In the present work, we calculate free-free emission dominated light curves. The envelope structure, wind mass loss rate, photospheric temperature, and photospheric radius of the WD envelope are essentially the same as those in the previous model. The flux of free-free emission from the optically thin region outside the photosphere dominates the continuum flux in the optical and near-infrared wavelength region and is approximated by \begin{equation} F_\nu \propto \int N_e N_i d V \propto \int_{R_{\rm ph}}^\infty {\dot M_{\rm wind}^2 \over {v_{\rm wind}^2 r^4}} r^2 dr \propto {\dot M_{\rm wind}^2 \over {v_{\rm ph}^2 R_{\rm ph}}}, \label{free-free-wind} \end{equation} where $F_\nu$ is the flux at the frequency $\nu$, $N_e$ and $N_i$ are the number densities of electrons and ions, respectively, $V$ is the volume, $R_{\rm ph}$ is the photospheric radius, $\dot M_{\rm wind}$ is the wind mass loss rate, and $v_{\rm ph}$ is the photospheric velocity \citep{hac06, hac07kc}. The proportionality constant in equation (\ref{free-free-wind}) cannot be determined a priori because we do not calculate radiative transfer outside the photosphere; this constant is instead determined by fitting with observational data \citep{hac06, hac07k, hac07kc}. \subsection{Free-free Light Curve and WD Mass} We have calculated free-free emission dominated light curves of V445 Pup for WD masses of 1.2, 1.3, 1.33, 1.35, 1.37, and $1.377 ~M_{\odot}$. The last one is the upper limit of the mass-accreting WDs \citep{nom84}. The adopted WD radius and chemical composition are listed in Table \ref{composition}. Here we assume that the chemical composition is constant throughout the envelope. \begin{deluxetable}{llll} \tabletypesize{\scriptsize} \tablecaption{Model Parameters\tablenotemark{a} \label{composition}} \tablewidth{0pt} \tablehead{ \colhead{$M_{\rm WD}$} & \colhead{$\log R_{\rm WD}$} & \colhead{$Y$} & \colhead{$X_{\rm C+O}$} \cr \colhead{($M_{\odot}$)} & \colhead{($R_{\odot}$)} & \colhead{} & \colhead{} } \startdata 1.2 & -2.193 & 0.68 & 0.3 \cr 1.3 & -2.33 & 0.48 & 0.5 \cr 1.33 & -2.417 & 0.38 & 0.6 \cr 1.35 & -2.468 & 0.38 & 0.6 \cr 1.37 & -2.535 & 0.58 & 0.4 \cr 1.37 & -2.535 & 0.38 & 0.6 \cr 1.37 & -2.535 & 0.18 & 0.8 \cr 1.377 & -2.56 & 0.38 & 0.6 \enddata \tablenotetext{a}{The heavy element content is assumed to be $Z=0.02$, which includes carbon and oxygen in the solar ratio. } \end{deluxetable} \begin{figure} \epsscale{1.15} \plotone{f4.epsi} \caption{ Free-free emission dominated light curves for various WD masses. The chemical composition is assumed to be $Y=0.38$, $X_{\rm C+O}= 0.6$, and $Z=0.02$ throughout the envelope. The WD mass is attached to each curve.
The dashed and dotted curves denote $M_{\rm WD}=1.37 ~M_\sun$ model but with different chemical composition of $X_{\rm C+O}= 0.4$ and 0.8, respectively. Small open circles denote the starting points of light curve fitting in Fig. \ref{lightcurve_fitting}: upper and lower points of $M_{\rm WD}= 1.37 ~M_\sun$ (solid line) correspond to Models 4 and 3 and those of 1.377 $M_\odot$ correspond to Models 8 and 7, respectively. A short solid line indicates the decline rate of observed $V$ data in Fig. \ref{lightobs.only} (3.3 mag / 230 days). } \label{lightcurve} \end{figure} Figure \ref{lightcurve} shows the calculated free-free light curves. More massive WDs show faster decline. This is mainly because the more massive WDs have the less massive envelope and the smaller mass envelope is quickly taken off by the wind. The figure also shows two additional models of $M_{\rm WD}= 1.37 ~M_\sun$ with different compositions of $X_{\rm C+O} =0.4$ and 0.8. These models show almost similar decline rates to that of $M_{\rm WD}= 1.37 ~M_\sun$ with $X_{\rm C+O}=0.6$. The starting point of our model light curve depends on the initial envelope mass. When an initial envelope mass is given, our nova model is located somewhere on the light curve. For a more massive ignition mass, it starts from an upper point of the light curve. Then the nova moves rightward with the decreasing envelope mass due to wind mass loss and nuclear burning. The development of helium shell flashes is much slower than that of hydrogen shell flashes mainly because of much larger envelope masses of helium shell flashes \citep{kat94h}. The short straight solid line in Figure \ref{lightcurve} represents the decline rate of $V$ light curve in Figure \ref{lightobs.only}. The length of the line indicates the observed period which is relatively short because of dust blackout 230 days after the optical maximum. We cannot compare the entire evolution period of the light curve with the observational data so that fitting leaves some ambiguity in choosing the best fit model. Even though, we may conclude that WDs with masses of $M_{\rm WD} \lesssim 1.33 ~M_\sun$ are very unlikely because their light curves are too slow to be comparable with the observation. \begin{deluxetable*}{lllllllllr} \tabletypesize{\scriptsize} \tablecaption{Parameters of Fitted Model \label{fitting_parameter}} \tablewidth{0pt} \tablehead{ \colhead{model } & \colhead{$M_{\rm WD}$} & \colhead{$Y$} & \colhead{$X_{\rm C+O}$} & \colhead{$\Delta M_{\rm He,ig}$\tablenotemark{a}} & \colhead{$\Delta M_{\rm ej}$\tablenotemark{b}} & \colhead{$\eta_{\rm He}$\tablenotemark{c}}& \colhead{Distance} & \colhead{$\dot M_{\rm He}$\tablenotemark{d}} & \colhead{$\tau_{\rm rec}$\tablenotemark{d}} \cr \colhead{No. 
} & \colhead{($M_\sun$)} & \colhead{} & \colhead{} &\colhead{($M_{\odot}$)} &\colhead{($M_{\odot}$)} & \colhead{} & \colhead{(kpc)} & \colhead{($M_\sun$~yr$^{-1}$)} & \colhead{(yr)} } \startdata 1& 1.35 & 0.38 & 0.6 &3.3E-4 &1.8E-4 & 0.45 & 6.6 & 6.E-8 & 5000 \cr 2& 1.37 & 0.58 & 0.4 & 1.6E-4 &9.2E-5 & 0.42 &5.0 &7.E-8 &2000 \cr 3& 1.37 & 0.38 & 0.6 &1.9E-4 &8.7E-5 & 0.53 & 4.8 & 6.E-8 & 3000 \cr 4& 1.37 & 0.38 & 0.6 &2.1E-4 &1.1E-4 & 0.49 & 5.5 & 5.E-8 & 4000 \cr 5& 1.37 & 0.18 & 0.8 &3.5E-4 &1.2E-4 &0.64 & 6.0 & 4.E-8 & 9000\cr 6& 1.37 & 0.18 & 0.8 &3.8E-4 &1.5E-4 &0.61 & 6.9 &4.E-8 &10000 \cr 7& 1.377 & 0.38 & 0.6 &1.8E-4 &7.4E-5 & 0.60 & 4.8 & 5.E-8 & 4000\cr 8& 1.377 & 0.38 & 0.6 &2.2E-4 &9.6E-5 & 0.56 & 5.2 & 4.E-8 & 5000 \cr \enddata \tablenotetext{a}{Envelope mass at ignition is assumed to be equal to the mass at the optical peak, which we assume as JD 2451872 (28 days before the discovery).} \tablenotetext{b}{Total mass ejected by the wind} \tablenotetext{c}{Mass accumulation efficiency} \tablenotetext{d}{estimated from Fig. \ref{dMenvMacc}} \end{deluxetable*} Figure \ref{lightcurve_fitting} represents the light curve fitting with observational data. Here, the model light curves of $I_{\rm c}$ are identical to those of $V$ but are lifted up by 1.12 mag, which is the mean value of $V-I_{\rm c}$ obtained in Figure \ref{VmIc}. It should be noted that, in this figure, these light curves are further lifted up by 1.0 mag because of $I_c - 1$. \begin{figure} \epsscale{1.15} \plotone{f5.epsi} \caption{ Our model light curves are fitted with the observation. Dotted line: Model 1. Thin dashed line: Model 2. Thin solid line: Model 3. Thick solid line: Model 4. Dash-three dotted line: Model 5. Thin dash-dotted line: Model 6. Dashed line: Model 7. Dash-dotted line: Model 8. Our model $I_{\rm c}$ light curves are identical with those of $V$ but are lifted up by 1.12 mag (and further shifted up by 1 mag for $I_{\rm c} - 1$). The ordinate on the right vertical axis represents the absolute magnitude of Model 4. For the other models, the model light curves are shifted down by 0.4 mag (Model 1), up by 0.2 mag (Model 2) up by 0.3 mag (Models 3 and 7), down by 0.2 mag (Model 5), down by 0.5 mag (Model 6), up by 0.1 mag (Model 8). The observational data are same as those in Fig. \ref{lightobs.only}. } \label{lightcurve_fitting} \end{figure} These light curves are a part of the model light curves in Figure \ref{lightcurve}. In the early phase of the outburst the light curve declines almost linearly so that fitting is not unique. We can fit any part of our model light curve if its decline rate is the same as $\approx 0.014-0.015 ~{\rm mag}~{\rm day}^{-1}$, for example, either top or middle part of the same light curve of $1.377 ~M_\sun$ WD in Figure \ref{lightcurve}. In such a case we show two possible extreme cases, i.e., the earliest starting point and the latest one by open circles, as shown in Figure \ref{lightcurve}. These model parameters are summarized in Table \ref{fitting_parameter}, where we distinguish the model by model number. Thus we have selected several light curves which show a reasonable agreement with the observation. However, we cannot choose the best fit model among them because the observed period is too short to discriminate a particular light curve from others because the difference among them appears only in a late stage. 
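To make explicit how a model light curve point follows from equation (\ref{free-free-wind}), the following minimal sketch (in Python) converts wind quantities into an apparent magnitude; the numbers below are illustrative placeholders, not values from our wind solutions, and the additive constant plays the role of the fitted proportionality constant:
\begin{verbatim}
import numpy as np

# Placeholder wind evolution along a model sequence (illustrative only):
mdot_wind = np.array([1.0e-4, 7.0e-5, 4.0e-5, 2.0e-5])   # M_sun / yr
v_ph      = np.array([1.0e3, 1.1e3, 1.2e3, 1.3e3])       # km / s
r_ph      = np.array([50.0, 30.0, 15.0, 7.0])            # R_sun

# Free-free flux of equation (free-free-wind): F_nu ~ Mdot^2 / (v_ph^2 R_ph)
flux = mdot_wind**2 / (v_ph**2 * r_ph)

# The undetermined proportionality constant becomes an additive constant C
# in magnitude, which is fixed by fitting to the observed V light curve.
C = 10.0                                   # example fit constant
m_V = C - 2.5 * np.log10(flux / flux[0])   # only the light-curve shape matters
print(m_V)
\end{verbatim}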
For relatively less massive WDs of $M_{\rm WD} \lesssim 1.33 ~M_{\odot}$, we cannot have reasonable fits with the observation for any part of the model light curves. Among relatively more massive WDs of $M_{\rm WD} \gtrsim 1.35 ~M_{\odot}$ in Table \ref{fitting_parameter}, Model 1 shows slightly slower decline. Therefore, the $1.35 ~M_{\odot}$ WD may be the lowest end for the WD mass. Thus we may conclude that V445 Pup contains a very massive WD of $M_{\rm WD} \gtrsim 1.35 ~M_{\odot}$. \subsection{Mass Accumulation Efficiency} During helium nova outbursts, a part of the helium envelope is blown off in winds, while the rest accumulates on the WD. We define the mass accumulation efficiency, $\eta_{\rm He}$, as the ratio of the envelope mass that remains on the WD after the helium nova outburst to the helium envelope mass at ignition, $\Delta M_{\rm He,ig}$ \citep{kat04}. The mass accumulation efficiency is estimated as follows. We have calculated the mass lost by winds during the outburst, $\Delta M_{\rm ej}$. Note that $\Delta M_{\rm ej}$ is the calculated total ejecta mass which is ejected during the wind phase, and not the mass ejected during the observing period which may be shorter than the wind phase. The ignition mass is approximated by the envelope mass at the optical peak, i.e., $\Delta M_{\rm He,ig} \approx \Delta M_{\rm He,peak}$. Since the exact time of the optical peak is unknown, we assume that the optical peak is reached on JD 2451872, i.e., the earliest prediscovery observation in the brightest stage reported to IAU Circular No. 7553, 28 days earlier than the discovery. The envelope mass at the optical peak is estimated from our wind solution and is listed as $\Delta M_{\rm He,ig}$ in Table \ref{fitting_parameter}. The resultant accumulation efficiency, \begin{equation} \eta_{\rm He} \equiv {{\Delta M_{\rm He,peak}-\Delta M_{\rm ej}} \over {\Delta M_{\rm He,peak}}}, \end{equation} is also listed in Table \ref{fitting_parameter}. The efficiencies are as high as $\sim 50$\%. The WD of V445 Pup is already very massive ($ M_{\rm WD} \gtrsim 1.35 ~M_{\odot}$) and its mass has increased through helium nova outbursts. Therefore, V445 Pup is a strong candidate of Type Ia supernova progenitors. \section{Quiescent Phase} Before the 2000 outburst, there was a 14.5 mag star at the position of V445 Pup (taken from the archive of VSNET\footnote{http://vsnet.kusastro.kyoto-u.ac.jp/vsnet/}), but no bright star has been observed since the dust blackout. We may regard that 14.5 mag is the preoutburst magnitude of the binary. There are two possible explanations for this quiescent phase luminosity, one is the accretion disk luminosity and the other is the luminosity of bright companion star. In the following subsections, we discuss how these possible sources contribute to the quiescent luminosity. \subsection{Accretion Disk} In some nova systems, an accretion disk mainly contributes to the brightness in their quiescent phase. If the preoutburst luminosity of V445 Pup comes from an accretion disk, its absolute magnitude is approximated by \begin{eqnarray} M_V {\rm (obs)}&=& -9.48 -{5\over 3}\log\left({M_{\rm WD}\over M_\sun} {{\dot M_{\rm acc}}\over {M_\sun ~{\rm yr}^{-1}}} \right) \cr & & -{5\over 2} \log(2\cos i), \label{accretion-disk-Mv} \end{eqnarray} where $M_{\rm WD}$ is the WD mass, $\dot M_{\rm acc}$ the mass accretion rate, and $i$ the inclination angle \citep[equation (A6) in][]{web87}. 
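A quick numerical check of equation (\ref{accretion-disk-Mv}) together with the standard distance modulus (a minimal sketch; the parameter values are those adopted in the text below):
\begin{verbatim}
import numpy as np

M_wd = 1.37                            # WD mass [M_sun]
incl = np.deg2rad(80.0)                # inclination
mdot = np.array([1e-6, 3e-7, 1e-7])    # accretion rate [M_sun/yr]

M_V = -9.48 - (5.0/3.0)*np.log10(M_wd*mdot) - 2.5*np.log10(2.0*np.cos(incl))
print(M_V)                             # ~ [1.4, 2.3, 3.1]

A_V = 1.6
for d_pc in (3000.0, 6500.0):
    m_V = M_V + A_V + 5.0*np.log10(d_pc/10.0)
    print(d_pc, m_V)                   # ~ [15.4, 16.3, 17.1] and [17.1, 18.0, 18.8]
\end{verbatim}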
Assuming that $M_{\rm WD}= 1.37 ~M_{\odot}$ and $i=80$\arcdeg \citep{wou08}, we have $M_V =1.4$, 2.3, and 3.1 for the accretion rates of $1 \times 10^{-6}$, $3 \times 10^{-7}$, and $1 \times 10^{-7} ~M_{\odot}$~yr$^{-1}$, respectively. The apparent magnitude of the disk is calculated from \begin{equation} m_V-M_V = A_V + 5 \log D_{10}, \label{distance-modulus} \end{equation} where $D_{10}$ is the distance divided by 10 pc. With the absorption of $A_V = 1.6$ and the distance of 3 kpc, the apparent magnitude is estimated to be $m_V= 15.4$, 16.3 and 17.1, for the above accretion rates, respectively. For the distance of 6.5 kpc, we obtain brightnesses of $m_V = 17.1$, 18.0, and 18.8, respectively. All of these values are much fainter than 14.5 mag. Therefore, it is very unlikely that an accretion disk mainly contributes to the preoutburst luminosity. \begin{figure} \epsscale{1.15} \plotone{f6.color.epsi} \caption{ Evolutional tracks of helium stars with masses of $M_{\rm He}= 0.6$, 0.7, 0.8, 0.9, 1.0 (thick line), 1.2, 1.4, 1.6, 1.8, 2.0 (thick line), 2.5, and $3.0 ~M_\sun$ in the H-R diagram. The 0.6 and $0.7~M_\sun$ stars evolve blueward and do not become red giants. We stopped calculation when carbon ignites at the center for 2.5 and $3.0 ~M_\sun$ stars. Open circles and squares denote stars with a 14.5 mag brightness at the distance of 6.5 kpc and 4.9 kpc, respectively, for $A_V=1.6$. } \label{hestarevolution} \end{figure} \subsection{Helium Star Companion} Another possible source of the preoutburst brightness is a helium star companion. Figure \ref{hestarevolution} shows evolutional tracks of helium stars with masses between 0.6 and $3.0 ~M_\sun$ from the helium main-sequence to the red giant stage in the H-R diagram. We use OPAL opacities and an initial chemical composition of $X=0.0$, $Y=0.98$ and $Z=0.02$. The numerical method and input physics are the same as those in \citet{sai95}. As shown in Figure \ref{hestarevolution}, low mass helium stars do not evolve to a red giant. In our new calculation, the $0.8 ~M_{\odot}$ helium star evolves to a red giant, but the 0.6 and $0.7 ~M_{\odot}$ helium stars evolve toward blueward. \citet{pac71} showed that stars of $M_{\rm He} \gtrsim 1.0 ~M_{\odot}$ evolve to a red giant while 0.5, 0.7, and $0.85 ~M_{\odot}$ do not. Our calculations are essentially the same as those of \citet{pac71} and the difference is attributed mainly to the difference between the adopted opacities. Figure \ref{hestarevolution} also shows locations of stars whose apparent magnitudes are $m_V= 14.5$ for $A_V= 1.6$. Here, the distance is assumed to be 4.9 (open squares) or 6.5 kpc (open circles). For example, a star of (temperature, luminosity) $= (\log T_{\rm ph} ~({\rm K})$, $\log L_{\rm ph}/L_\sun) =$ (4.2, 2.81), (4.4, 3.25), and (4.6, 3.75) could be observed as a 14.5 mag star for 6.5 kpc and $(\log T_{\rm ph} ~(K)$, $\log L_{\rm ph}/L_\sun) =$ (4.2, 2.57), (4.4, 3.0), and (4.6, 3.51) for 4.9 kpc. Therefore, the observed 14.5 mag is consistent with the luminosities and temperatures of slightly evolved helium stars of $M_{\rm He} \gtrsim 0.8 ~M_\sun$ if the companion is a blue star of $\log T_{\rm ph} \gtrsim 4.5$. On the other hand, if the companion is redder than $\log T_{\rm ph} \lesssim 4.4$, there is no corresponding evolution track in the low luminosity region of the H-R diagram as shown in Figure \ref{hestarevolution}. In such a case the preoutburst luminosity cannot be attributed to a helium star companion. 
Kato (2001)\footnote{http://www.kusastro.kyoto-u.ac.jp/vsnet/Mail/alert5000/msg00493.html} suggested that preoutburst color of V445 Pup was much bluer than that of symbiotic stars (which have a red giant companion) but rather close to that of cataclysmic variables (main-sequence or little bit evolved companion). This argument is consistent with our estimate of relatively high surface temperature $\log T_{\rm ph} \gtrsim 4.5$ (see Fig. \ref{hestarevolution}). \section{Discussions} \subsection{Distance} As explained in \S\S 3.1 and 3.2, we do not solve energy transfer outside the photosphere and, thus, we cannot determine the proportionality constant in equation (\ref{free-free-wind}). Therefore, we cannot determine the distance to V445 Pup directly from the comparison of theoretical absolute magnitude with the observational apparent magnitude. Instead, we can approximately estimate the distance by assuming that the free-free flux is larger than the blackbody flux during the observing period. This assumption gives a lower limit to the distance. We expect that the distance estimated thus is close to the real value. The 8th column of Table \ref{fitting_parameter} lists the distance estimated for each model. These distances are consistent with the observational estimates of $3.5 \lesssim d \lesssim 6.5$ kpc \citep{iij08} and $\sim 4.9$ kpc \citep{wou08}. \subsection{Helium Ignition Mass and Recurrence Period} \begin{figure} \epsscale{1.15} \plotone{f7.color.epsi} \caption{ The helium ignition mass, $\Delta M_{\rm He, ig}$, of helium-accreting WDs is plotted against the helium accretion rate, $\dot M_{\rm He}$. The WD mass is attached to each curve. Straight dashed lines indicates the recurrence period. The open square indicates the ignition mass of each model in Table \ref{fitting_parameter}. } \label{dMenvMacc} \end{figure} We have calculated evolution of C+O white dwarfs accreting helium at various rates until the ignition of a helium shell flash. Chemical compositions assumed are $X_{\rm C}=0.48$ and $X_{\rm O}=0.50$ for the WD core, and $X=0.0$ and $Y=0.98$ for the accreted envelope. The initial model adopted for a given core mass and accretion rate is a steady-state model in which the heating due to the accretion is balanced with the radiative energy flow. Adopting such an initial model is justified because the WD has accreted matter from the companion for a long time and experienced many shell flashes. As the helium accretion proceeds, the temperature at the bottom of the envelope gradually increases. When the temperature becomes high enough the triple-alpha reaction causes a shell flash. The mass of the helium envelope at the ignition depends on the WD mass and the accretion rate as shown in Figure~\ref{dMenvMacc}. The envelope mass required to ignite a shell flash tends to be smaller for a higher accretion rate and for a larger WD mass. Helium flashes are weaker for higher accretion rates and lower WD masses. In particular they are very weak for accretion rates higher than $\sim 10^{-6} M_{\odot}$ yr$^{-1}$. The envelope mass at the ignition depends also on the core temperature. If the core temperature of the initial WD is lower than that of the steady state model employed for our calculations, the envelope mass at the ignition would be larger. The recurrence period, $\Delta M_{\rm He,ig}/{\dot M}_{\rm He}$, is also plotted in Figure~\ref{dMenvMacc} by dashed lines. The recurrence period corresponding to the ignition mass of our fitted models are listed in Table \ref{fitting_parameter}. 
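For reference, the recurrence period is simply the ratio of the ignition mass to the accretion rate; for example, with the Model 3 entries of Table \ref{fitting_parameter}:
\begin{verbatim}
# tau_rec = Delta M_He,ig / Mdot_He (Model 3 of Table 2 as an example)
dM_ig   = 1.9e-4    # ignition mass [M_sun]
mdot_He = 6.0e-8    # helium accretion rate [M_sun/yr]
print(dM_ig / mdot_He)   # ~3200 yr, i.e. ~3000 yr as listed
\end{verbatim}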
\subsection{Mass Transfer from Helium Star Companion} We have estimated the mean accretion rate ${\dot M}_{\rm He}$ of the WD as in Table \ref{fitting_parameter}. The companion star feeds its mass to the WD via Roche lobe overflow or by winds, depending on whether or not it fills the Roche lobe. Here we examine whether this accretion rate is comparable to the mass transfer rate of Roche lobe overflow from a helium star companion. We followed the evolution of helium stars from the main sequence, assuming that the star always fills its effective Roche lobe, whose radius is simply set to $1.5 ~R_\sun$. The resultant mass loss rates are shown in Figure \ref{Hestar.dmdt} for stars with initial masses of 1.2, 1.0 and $0.8 ~M_\sun$. These results are hardly affected even if we change the Roche lobe radius. The mass loss rate decreases with the companion mass almost independently of the initial value. Except for the short initial and final stages, these rates are as large as $\dot M_{\rm He} \sim 10^{-6} ~M_\sun$~yr$^{-1}$ and much larger than the mass accretion rate estimated from our fitted models in Table \ref{fitting_parameter}. Since almost all of the mass lost by the companion accretes onto the WD in the case of Roche lobe overflow, the accretion may result in very weak shell flashes for such high rates as $10^{-6} M_\sun$~yr$^{-1}$. A strong shell flash like that of V445 Pup is realized only in the final stage of mass transfer, in which the mass transfer rate quickly drops with time. This rare case may correspond to a final stage of binary evolution after the WD has grown in mass to near the Chandrasekhar limit from a less massive one and the companion has lost a large amount of mass via Roche lobe overflow. Such information will be useful for modeling a new binary evolution path that leads to a Type Ia supernova. \begin{figure} \epsscale{1.15} \plotone{f8.epsi} \caption{ Mass loss rate of a helium star that fills its Roche lobe of radius $1.5 ~R_\sun$. The initial stellar mass is 1.2, 1.0 (dotted line), and $0.8 ~M_\sun$. Time proceeds from left to right. See the text for more detail. } \label{Hestar.dmdt} \end{figure} \subsection{Comparison with Other Observations} The mass of the dust shell can be estimated from the infrared continuum flux under the assumption that the emission originates from warm dust. From the infrared 10 $\mu$m spectrum, \citet{lyn04b} estimated the dust mass to be $2 \times 10^{-6} M_{\odot}$ for a distance of 3 kpc. This value would be increased if we adopt a larger distance instead of 3 kpc, but it is still consistent with our estimated ejecta mass $\Delta M_{\rm ej}\sim (0.7-1.8) \times 10^{-4} M_{\odot}$ in Table \ref{fitting_parameter}, because the dust mass is a small part of the ejected mass. \citet{lyn04a} reported the absence of He II or coronal lines in their near-IR spectra and suggested that the ionizing source was not hot enough on 2004 January 14 and 16 (1146 days after the optical maximum). \citet{lyn05} also reported, from their near-IR observation on 2005 November 16, that the object had faded and the thermal dust emission had virtually disappeared. They suggested that the dust had cooled significantly by that date (1818 days). This suggestion may constrain the WD mass of V445 Pup because Figure \ref{lightcurve} shows that the outburst lasts more than 1800 days for less massive WDs ($ \lesssim 1.33 ~M_\sun$).
If the WD had cooled down by the above date, we may exclude less massive WD models ($ \lesssim 1.33 ~M_\sun$) because these WDs evolve slowly and their hot surfaces emit high energy photons at least until 1800 days after the optical peak. We have shown that the WD mass of V445 Pup is increasing with time. When the WD grows up to the Chandrasekhar limit, central carbon ignition triggers a Type Ia supernova explosion if the WD consists of carbon and oxygen. We regard the star as a CO WD instead of an O-Ne-Mg WD because no indication of neon was observed in the nebular phase spectrum \citep{wou05}. Therefore, we consider that V445 Pup is a strong candidate for a Type Ia supernova progenitor. When a binary consisting of a massive WD and a helium star like V445 Pup becomes a Type Ia supernova, its spectrum may more or less show a sign of helium. The search for such helium associated with a Type Ia supernova has been reported for a dozen Type Ia supernovae \citep{mar03, mat05}, but all of them resulted in non-detections. It may be difficult to find such a system because this type of Type Ia supernova is very rare. The binary still seems to be deeply obscured by the optically thick dust shell even several years after the outburst. This blackout period is much longer than those of classical novae such as OS And ($\sim 20$ days), V705 Cas ($\sim 100$ days), and DQ Her ($\sim 100$ days). As the ejected mass estimated in Table \ref{fitting_parameter} is not much different from those of classical novae, the difference in the blackout period may be attributed to a large amount of dust in the extremely carbon-rich ejecta of a helium nova. Moreover, the observed low ejection velocity of $\sim 500$ km~s$^{-1}$ \citep{iij08} is not much larger than the escape velocity of the binary with a relatively massive total mass [e.g., $1.35 ~M_\sun + (1 - 2) ~M_\sun$]. Both of these effects lengthen the dust blackout period. When the dust obscuration clears in the future, the blackout period will provide useful information on the dust shell. \section{Conclusions} Our main results are summarized as follows: 1. We have reproduced $V$ and $I_{\rm c}$ light curves using free-free emission dominated light curves calculated on the basis of the optically thick wind theory. 2. From the light curve analysis we have estimated the WD mass to be as massive as $M_{\rm WD} \gtrsim 1.35~M_\sun$. 3. Our light curve models are now consistent with a longer distance of $3.5 \lesssim d \lesssim 6.5$ kpc \citep{iij08} and 4.9 kpc \citep{wou08}. 4. We have estimated the ejecta mass as the mass lost by optically thick winds, i.e., $\Delta M_{\rm wind} \sim 10^{-4} M_\odot$. This amounts to about half of the accreted helium matter, so that the accumulation efficiency reaches $\sim 50$\%. 5. The white dwarf is already very close to the Chandrasekhar mass, i.e., $M_{\rm WD} \gtrsim 1.35~M_\sun$, and the WD mass has increased through the helium nova outburst. Therefore, V445 Pup is a strong candidate for a Type Ia supernova progenitor. 6. We emphasize the importance of observations after the dense dust shell disappears, especially observations of the color and magnitude, orbital period, and orbital inclination angle. These are important to specify the nature of the companion. \acknowledgments M.K. and I.H. are grateful to people at the Astronomical Observatory of Padova and at the Department of Astronomy of the University of Padova, Italy, for their warm hospitality.
In particular, we thank Takashi Iijima for fruitful discussions on V445 Pup, which stimulated us to start this work. The authors thank Masaomi Tanaka for information on the helium detection in Type Ia supernovae. We also thank the anonymous referee for useful comments to improve the manuscript. Thanks are also due to the American Association of Variable Star Observers (AAVSO) for the photometric data of V445 Pup and to Taichi Kato for introducing us to the discussion in VSNET. This research has been supported in part by the Grants-in-Aid for Scientific Research (16540211, 20540227) from the Japan Society for the Promotion of Science.
\section{Introduction} Inspired by Shannon’s classic information theory \cite{shannon1948mathematical}, Weaver and Shannon proposed a more general definition of a communication system involving three different levels of problems, namely, (i) transmission of bits (the technical problem); (ii) semantic exchange of transmitted bits (the semantic problem); and (iii) the effect of semantic information exchange (the effectiveness problem). The first level of communication, which is the transmission of bits, has been well studied and realized in conventional communication systems based on Shannon’s bit-oriented technical framework. However, with the massive deployment of emerging devices, including Extended Reality (XR) and Unmanned Aerial Vehicles (UAVs), diverse tasks with stringent requirements pose critical challenges to traditional bit-oriented communications, which are already approaching the Shannon physical capacity limit. This pushes the Sixth Generation (6G) network towards a communication paradigm shift to the semantic and effectiveness levels by exploiting the context of data and its importance to the task. Initial works on ``semantic communications'' have mainly focused on identifying the content of traditional text and speech \cite{luo2022, shi2021semantic}, and on information freshness, i.e., the Age of Information (AoI) \cite{NowAoI}, as a semantic metric that captures the timeliness of the information. However, these cannot sufficiently capture the importance of the data for achieving a specific task. In \cite{kountouris2021semantics}, a joint design of information generation, transmission, and reconstruction was proposed. Although the authors of \cite{strinati20216g,popovski2020semantic} explored the benefits of including the effectiveness level, an explicit and systematic communication framework incorporating both the semantic and effectiveness levels has not been proposed yet. There is an urgent need for a unified communication framework aiming at task-oriented performance for diverse data types. Motivated by this, in this paper, we propose a generic task-oriented and semantics-aware (TOSA) communication framework, which jointly considers the semantic-level information about the data context and the effectiveness-level performance metrics that determine data importance, for different tasks with various data types. The main contributions of this paper are: \begin{enumerate} \item We first present the unique characteristics of the traditional text, speech, image, and video data types, and of the emerging $360^\circ$ video, sensor, haptic, and machine learning model data types. For each data type, we summarize the semantic information definition and extraction methods in Section~II. \item We then propose a generic TOSA communication framework for typical time-critical and non-critical tasks, where the semantic and effectiveness levels are jointly considered. Specifically, by exploiting the unique characteristics of different tasks, we present the TOSA information, its extraction and recovery methods, and the effectiveness-level performance metrics that guarantee the task requirements in Section~III. \item To demonstrate the effectiveness of our proposed TOSA communication framework, we present the TOSA solution tailored for interactive Virtual Reality (VR) data with the aim of maximizing the long-term quality of experience (QoE) within the VR interaction latency constraints, and analyze the results in Section~IV. \item Finally, we conclude the paper in Section~V.
\end{enumerate} \section{Semantic Information Extraction} In this section, we focus on analyzing the characteristics of all data types and on summarizing the semantic information definitions with the corresponding extraction methods, as shown in Table~\ref{feature_summarization}. \begin{table}[!h] \caption{Semantic information extraction of different data types} \begin{center} \begin{tabular}{|c|c|c|} \hline \textbf{Data Type} &Semantic Information &\makecell[c]{Semantic Information\\ Extraction Method}\\ \hline Text &Embedding&BERT \\ \hline Speech &Embedding&BERT \\ \hline Image &Edge, Corner, Blob, Ridge&SIFT, CNN \\ \hline Video &Temporal Correlation&\makecell[c]{CNN} \\ \hline \makecell[c]{$360^\circ$ Video }&FoV&\makecell[c]{Biological Information\\ Compression}\\ \hline Haptic Data &JND&Weber's Law\\ \hline \makecell[c]{Sensor and\\ Control Data}&Freshness& AoI\\ \hline \end{tabular} \label{feature_summarization} \end{center} \end{table} \subsection{Speech and Text} For a one-dimensional speech signal, speech-to-text conversion can first be performed by speech recognition. With the extracted text information, various approaches developed by the Natural Language Processing (NLP) community can be applied to extract embeddings as typical semantic information, which represent words, phrases, or text as low-dimensional vectors. The most famous embedding extraction method is Bidirectional Encoder Representations from Transformers (BERT) proposed by Google, which can be pre-trained and fine-tuned via one additional output layer for different text tasks. However, during the speech-to-text conversion process, the timbre and emotion conveyed in the speech may be lost. \subsection{Image and Video} An image is a two-dimensional data type, where geometric structures, including edges, corners, blobs, and ridges, can be identified as typical semantic information. Although various traditional signal processing methods, such as the Scale-Invariant Feature Transform (SIFT), have been developed to extract image geometric structures, the Convolutional Neural Network (CNN) has shown a stronger capability to extract complex geometric structures with its matrix kernels. Video is a typical three-dimensional data type, being the combination of two-dimensional images with an extra time dimension. Therefore, the temporal correlation between adjacent frames can be identified as important semantic information, where the static background can be ignored during transmission. To extract temporal correlation information from video, different CNN structures have been utilized. \subsection{$360^\circ$ Video} The $360^\circ$ rendered video is a new data type in emerging XR applications. The most important semantic information is identified as the human field-of-view (FoV), which occupies around one-third of the $360^\circ$ video and has the highest resolution requirement only at its center \cite{Liu_vr}. In this case, biological information, such as retinal foveation and ballistic saccadic eye movements, can be leveraged for semantic information extraction. Therefore, biological information compression methods have been utilized to extract the semantic information, where retinal foveation and ballistic saccadic eye movements are jointly considered to optimize the semantic information extraction process. \subsection{Haptic Data} Haptic data consists of two submodalities, namely tactile and kinesthetic information \cite{Antonakoglou2018}.
For tactile information, five major dimensions can be identified: friction, hardness perception, warmth conductivity, macroscopic roughness, and microscopic roughness. Kinesthetic information refers to the position/orientation of human body parts and the external forces/torques applied to them. To reduce the redundant raw haptic data, the Just Noticeable Difference (JND) is identified as valuable semantic information to filter out the haptic signal components that cannot be perceived by the human, where Weber's law serves as an important semantic information extraction criterion. \subsection{Sensor and Control data} Sensors are usually deployed to monitor the physical characteristics of the environment (e.g., temperature, humidity, or traffic) in a geographical area. The acquired data are transformed into status updates that are transmitted through a network to the destination nodes. These data are then processed in order to extract useful information, such as control commands or a remote reconstruction of the source, that can be further utilized to predict the evolution of the initial source. The accuracy of the reconstructed data, whether used for control commands or for predicting this evolution, is directly related to the relevance or the semantic value of the data measurements. Thus, one important aspect is the generation of traffic and how it can be shaped so that only the most important samples are kept, while redundant or less useful data are eliminated to reduce potential congestion inside the network. AoI also plays a critical role in dynamic control systems, since it has been shown that non-linear functions of AoI and the Value of Information (VoI) can improve the performance of such systems. Furthermore, early studies have shown that the semantics of information (beyond timeliness) can provide further gains by reducing the amount of information that is generated and transmitted without degrading the performance. \begin{figure*}[!h] \centerline{\includegraphics[scale=0.7]{framework_2.pdf}} \caption{TOSA communication framework for different tasks with diverse data types.} \label{TOSA_framework_fig} \end{figure*} \subsection{{Machine Learning Model}} With the massive deployment of machine learning algorithms, machine learning models have been regarded as another important data type. \begin{itemize} \item \textbf{Federated Learning (FL) Model:} The FL framework has been considered as a promising approach to preserve data privacy, where each participating device uploads its model gradients or model weights to the server and receives the global model from the server. \item \textbf{Split Learning (SL) Model:} Due to the limited computation capability of devices and their heavy computation burden, SL has been proposed to split the neural network model between the server and the devices, where the device executes the model up to the cut layer and sends the smashed data to the central server to execute the remaining layers. Then the gradient of the smashed data is transmitted back from the server to update the local model. \end{itemize} However, it is noted that an explicit semantic information definition for machine learning models has not been proposed yet.
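Before moving to the task-level framework, we give a small illustration of one of the extraction criteria above. The following minimal Python sketch implements Weber-law (JND) deadband filtering of a haptic sample stream; the Weber fraction of $10\%$ and the sample values are illustrative assumptions rather than values prescribed by the framework.
\begin{verbatim}
# Minimal sketch of JND-based (Weber deadband) filtering of a haptic signal.
# The 10% Weber fraction and the sample stream are illustrative assumptions.
def weber_deadband(samples, weber_fraction=0.10):
    """Keep only samples whose change from the last transmitted value
    exceeds the just-noticeable difference (JND)."""
    transmitted = []
    last_sent = None
    for value in samples:
        if last_sent is None or abs(value - last_sent) > weber_fraction * abs(last_sent):
            transmitted.append(value)   # perceivable change: transmit
            last_sent = value           # update the deadband reference
        # otherwise: change is below the JND, drop the sample
    return transmitted

force = [1.00, 1.02, 1.05, 1.30, 1.31, 0.90, 0.89]
print(weber_deadband(force))  # only 3 of the 7 samples are transmitted
\end{verbatim}
Such a deadband sketch only illustrates the filtering criterion; in practice the Weber fraction is chosen per tactile dimension and per user study.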
\begin{table*}[!h] \caption{Summary of the TOSA communication framework for different tasks} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Data Type} & Task & Communication Entity& Recovery& Latency Type & \makecell[c]{Effectiveness Level\\ Performance Metrics}\\ \hline \multirow{1}{*}{\makecell[l]{Speech}}&\makecell[c]{Speech Recognition} & Human-Machine &Yes/No&Non-critical &\makecell[l]{$\bullet$F-measure\\$\bullet$Accuracy\\$\bullet$BLEU\\$\bullet$Perplexity} \\ \hline \multirow{5}{*}{\makecell[l]{Image}}& \makecell[c]{Face Detection} & Machine-Machine &No&Non-critical & \makecell[l]{$\bullet$IoU\\$\bullet$mAP\\$\bullet$F-measure\\$\bullet$MAE} \\ \cline{2-5} & \makecell[c]{Road Segmentation} & Machine-Machine&Yes & Critical &\makecell[l]{$\bullet$IoU\\$\bullet$Pixel Accuracy\\$\bullet$MPA\\$\bullet$Latency}\\ \hline \multirow{5}{*}{\makecell[c]{$360^\circ$ Video}} & \makecell[c]{Display in AR}& Machine-Human& Yes& Non-Critical &\makecell[l]{$\bullet$PSNR\\$\bullet$SSIM\\$\bullet$Alignment Accuracy} \\ \cline{2-5} & \makecell[c]{Display in VR} & Machine-Human& No& Non-Critical & \makecell[l]{$\bullet$PSNR\\$\bullet$SSIM\\$\bullet$Timing Accuracy\\$\bullet$Position Accuracy}\\ \hline \makecell[c]{Haptic Data} & Grasping and Manipulation & Machine-Human&No & Critical& \makecell[l]{$\bullet$SNR\\$\bullet$SSIM}\\ \hline Sensor& Networked control systems & Machine-Machine& No& Critical& \makecell[l]{$\bullet$LQG}\\ \hline \multirow{5}{*}{\makecell[c]{ML\\Model}}& Federated Learning & Machine-Machine &\rule[0pt]{1cm}{0.1em}& Critical/Non-Critical& \makecell[l]{$\bullet$Latency\\$\bullet$Reliability\\ $\bullet$Convergence Speed\\$\bullet$Accuracy} \\ \cline{2-6} & Split Learning & Machine-Machine &\rule[0pt]{1cm}{0.1em}& Critical/Non-Critical & \makecell[l]{$\bullet$Latency\\ $\bullet$Reliability \\$\bullet$Convergence Speed \\$\bullet$Accuracy}\\ \hline \end{tabular} \label{Tasks_summarization} \end{center} \end{table*} \section{TOSA Communication Framework} \label{different_task} In this section, we propose a generic TOSA communication framework incorporating both the semantic level and the effectiveness level for typical time critical and non-critical tasks, as shown in Fig.~\ref{TOSA_framework_fig}, where the TOSA information, its extraction and recovery methods, and the effectiveness level performance metrics are presented in detail. \subsection{One-hop Task} We consider one-hop tasks with a single link transmission in this section, where each communication entity can be either a human or a machine, as summarized in Table~\ref{Tasks_summarization}. \subsubsection{{Speech Recognition}} In a speech recognition task, the human speech needs to be transmitted to the server, and the speech recognition task can be further divided into a conversation-type task (e.g., human inquiry) and a command-type task (e.g., smart home control) depending on the speech content. The conversation-type task focuses on understanding the intent, language, and sentiment to provide humans with free-flowing conversations. The command-type task focuses on parsing the specific command from the transmitted speech and then controlling the target device/robot. In the conversation-type task, the TOSA information can be keywords and emotions. The device can obtain the TOSA information by transforming the speech signal into text and extracting keywords and emotions via BERT. Then the server recovers the text via a transformer decoder.
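To make the embedding-extraction step above concrete, the following is a minimal sketch assuming the Hugging Face \texttt{transformers} library; the model name, the mean-pooling strategy, and the example utterance are illustrative choices rather than part of the proposed framework.
\begin{verbatim}
# Minimal sketch of semantic embedding extraction with a pre-trained BERT model.
# Assumes the Hugging Face `transformers` library; model choice and pooling
# are illustrative assumptions.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Return a single embedding vector summarizing the input text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the token embeddings of the last hidden layer.
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

vector = embed("turn on the living room lights")
print(vector.shape)  # 768-dimensional vector for bert-base
\end{verbatim}
In a deployed system the same pre-trained encoder would typically be fine-tuned for keyword and emotion extraction, as discussed above.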
In the command-type task, the TOSA information can be the binary command; the device can directly parse the speech signal and obtain the binary command signal for transmission, and no recovery is needed at the receiver side. The effectiveness level performance metrics include F-measure, accuracy, bilingual evaluation understudy (BLEU), and perplexity, where user satisfaction should also be considered. \subsubsection{{Face Detection and Road Segmentation}} Face detection and road segmentation are two emerging image processing tasks \cite{Zhao2019, Minaee2022}, where the captured images are required to be transmitted to the central server for processing. However, the road segmentation task in autonomous driving applications imposes stringent latency and reliability requirements due to road safety issues. This is because the vehicles need to react instantaneously to the rapidly changing environment. For the time non-critical face detection task, the TOSA information can be the face features extracted via a CNN. After the features are transmitted to the central server, regions with CNN features (R-CNN) can be applied to perform face detection. For the time critical road segmentation task, one possible solution is to identify the region of interest (ROI) features, i.e., the road, as the TOSA information, and crop the images via region proposal algorithms. Then, the central server can perform image segmentation via Mask R-CNN. Both tasks can be evaluated via effectiveness level performance metrics, including Intersection over Union (IoU), mean average precision (mAP), F-measure, and mean absolute error (MAE). It is noted that road segmentation in the autonomous driving application can be evaluated via pixel accuracy and mean pixel accuracy (MPA). However, the trade-off between accuracy and latency remains an important challenge. \subsubsection{{Display in Extended Reality}} Based on Milgram and Kishino’s Reality–Virtuality Continuum, XR can be classified into Augmented Reality (AR), Mixed Reality (MR), and VR, where MR is defined as a superset of AR. Therefore, we focus on the AR and VR display tasks in the following. In the AR display task, the central server transmits the rendered 3D model of a specific virtual object to the user. It is noted that the virtual object identification and its pose information relative to the real world are the key to achieving alignment between virtual and physical objects. Therefore, the virtual object identification and pose information can be extracted as TOSA information to reduce the data size. Then, by sharing the same 3D model library, the receiver can locally reconstruct the 3D virtual object model based on the received TOSA information. To evaluate the 3D model transmission, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) can be adopted as effectiveness level performance metrics. However, how to quantify the alignment accuracy between the virtual objects and the physical objects as a performance metric remains an open problem. In the VR display task, the central server transmits the virtual $360^\circ$ video stream to the user. To avoid the transmission of the whole $360^\circ$ video, the central server can predict the eye movements of the user and extract the corresponding FoV as TOSA information.
Apart from the PSNR and SSIM mentioned in AR, timing accuracy and position accuracy are also important effectiveness level performance metrics to avoid cybersickness, including: 1) initial delay: the time difference between the start of head motion and that of the corresponding feedback; 2) settling delay: the time difference between the stop of head motion and that of the corresponding feedback; 3) precision: the angular positioning consistency between physical movement and visual feedback in terms of degrees; and 4) sensitivity: the capability of inertial sensors to perceive subtle motions and subsequently provide feedback to users. \subsubsection{{Grasping and Manipulation}} Haptic communication has been adopted by industries to perform grasping and manipulation for efficient manufacturing and profitable production rates, where the robot transmits the haptic data to the manipulator. The shape and weight of the objects to be held are measured using cutaneous feedback derived from the fingertip contact pressure and kinesthetic feedback of finger positions, which should be transmitted within stringent latency requirements to guarantee industrial operation safety. Due to the difficulty in supporting massive haptic data under stringent latency requirements, the JND can be identified as important TOSA information to discard the haptic signal components that cannot be perceived by the manipulator. Two effectiveness level performance metrics, SNR and SSIM, have been verified to be applicable to vibrotactile quality assessment. \subsubsection{Control} In networked control systems (NCS), multiple sensors typically measure the system state of their control processes and transmit the generated data over a resource-limited shared wireless network. These data are usually queued and then transmitted over unreliable channels, which causes excessive delays and renders the information outdated or even obsolete, so that decision-making is based on less reliable information. Therefore, data freshness and data importance are extracted as TOSA information via AoI and the Value of Information (VoI), respectively, to guarantee the timing requirements \cite{TimingProcIEEE}. A typical effectiveness level performance metric to be minimized is the Linear Quadratic Gaussian (LQG) cost function; usually, the lower the LQG cost, the higher the quality of control (QoC). \subsubsection{{Machine Learning}} In the following, we focus on task-oriented communications for two distributed ML models, namely FL and SL. \paragraph{Federated Learning} For time non-critical tasks, such as NLP and image classification, the goal of FL is to guarantee a high learning accuracy without latency constraints. Traditional loss functions for NLP and image classification, such as mean square error (MSE), MAE, and cross-entropy, can be directly used as effectiveness level performance metrics. However, for time critical tasks, such as object recognition in self-driving cars, the goal of FL is to balance the trade-off between learning accuracy, communication latency, and computation latency. The effectiveness level performance metrics are loss functions with latency constraints. Time critical tasks bring communication challenges, and communication-efficient FL should be designed to decrease the model size to satisfy latency constraints via federated dropout, federated pruning, and model compression.
Federated dropout is a simple way to prevent the learning model from overfitting by randomly dropping neurons; it is only used during the training phase, which decreases communication and computation latencies and slightly improves learning accuracy. During training, the extracted task-oriented information is the model restricted to the non-dropped weights, whereas during the testing phase the whole learning model is transmitted between the server and devices. Meanwhile, federated dropout does not need model recovery. Unlike federated dropout, which only temporarily removes neurons, federated pruning permanently removes neurons in either or both of the training and testing phases. The extracted task-oriented information in federated pruning is the pruned model. The decision of which parameters to remove is made by considering the importance of each parameter. The pruning ratio should be carefully designed to guarantee learning accuracy, and extra computation latency is required to calculate the importance of the parameters. Thus, how to design federated pruning methods with low computation complexity needs to be investigated. In addition, federated pruning does not need model recovery. Model-compression schemes, such as sparsification and quantization, decrease the model size. The extracted task-oriented information is the sparse or quantized model. However, these methods slightly decrease the convergence rate and achieve a modest accuracy (about 85$\%$). Thus, how to design a model compression algorithm with high learning accuracy still needs to be investigated. In addition, the compressed FL model needs to be recovered at the receiver. \paragraph{Split Learning} In SL, the smashed data and its gradient associated with the cut layer are the extracted task-oriented information transmitted between the server and the devices, and no model recovery is required. When multiple devices exist in SL, all devices interact with the edge server in a sequential manner, resulting in high training latency. For time non-critical tasks, such as NLP and image classification, the goal of SL is to achieve high learning accuracy without latency constraints, and the effectiveness level performance metrics are the same as those of FL. However, for time critical tasks, SL cannot guarantee the low latency requirement because of its sequential training pattern. To adapt SL to time critical tasks, such as real-time object tracking, splitfed learning (SFL) \cite{thapa2022splitfed} and hybrid split and federated learning (HSFL) \cite{Xiaolan} have been proposed, which combine the primary advantages of FL and SL. The effectiveness level performance metrics of SFL and HSFL are learning accuracy and training latency. However, SFL and HSFL assume that the model is split at the same cut layer for all devices and that the server-side model is trained in a synchronous mode. Splitting all devices at the same cut layer can lead to asynchronous device-side model training and smashed data transmission. Thus, how to select an optimal split point and how to deal with this asynchronization of SL remain important challenges. Also, different split points can result in different smashed data; thus, how to merge these smashed data in the server-side model should be considered. \subsection{Chain Task} In this section, we analyze more complicated but practical chain tasks, including XR-aided teleoperation and the chain of control, where multiple entities cooperate through communication links to execute the task.
\subsubsection{XR-aided Teleoperation} XR-aided teleoperation aims to integrate 3D virtual objects/environments into remote robotic control, which can provide the manipulator with an immersive experience and high operation efficiency \cite{xr-aided}. To implement a closed-loop XR-aided teleoperation system, the wireless network is required to support mixed types of data traffic, including control and command (C\&C) transmission, haptic information feedback transmission, and rendered $360^{\circ}$ video feedback transmission. Stringent communication requirements have been proposed to support the XR-aided teleoperation use case, where over 50 $\mathrm{Mb/s}$ of bandwidth is needed to support video transmission, and reliability over $1-10^{-6}$ within millisecond latency is required to support haptic and C\&C transmission. As the XR-aided teleoperation task relies on both parallel and consecutive communication links, how to guarantee the cooperation among these communication links to execute the task is of vital importance. Specifically, the parallel visual and haptic feedback transmissions should be aligned with each other when arriving at the manipulator, and the consecutive C\&C and feedback transmissions should be within the motion-to-photon delay constraint. Violating either the alignment requirement of the parallel links or the latency constraint of the consecutive links will lead to a break in presence (BIP) and cybersickness. Therefore, both parallel alignment and consecutive latency should be quantified as effectiveness level performance metrics to guarantee the success of XR-aided teleoperation. Moreover, due to the motion-to-photon delay, the control error between the expected trajectory and the actual trajectory will accumulate over time, which may lead to task failure. Hence, how to alleviate the accumulated error remains an important challenge. \subsubsection{Chain of control} In the scenario of a swarm of (autonomous) robots that need to perform a collaborative task (or a set of tasks) within a deadline over a wireless network, an effective communication protocol that takes into account the peculiarities of such a scenario is needed. Otherwise, the volume of generated and transmitted data will be very high, eventually causing congestion in the network and large delays, and the operating control mechanisms will not be synchronized, leading to inefficient or even dangerous operation. Consider the simple case of two robots, say Robot A and Robot B, that communicate through a wireless network and are not collocated. Robot A remotely controls Robot B to execute a task; the outcome of that operation is fed back to Robot A, which performs a second operation and sends its outcome back to Robot B. All this must happen within a strict deadline. The amount of information that is generated, transmitted, processed, and sent back can be very large with the traditional information-agnostic approach. On the other hand, if we take into account the semantics of information and the purpose of communication, we change the whole information chain, from its generation point to its utilization. Therefore, defining TOSA metrics for the control loop and the communication between a swarm of (autonomous) robots is crucial, and it can significantly reduce the amount of information, leading to more efficient operation.
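Several of the tasks above rely on AoI as the timeliness component of the TOSA information. As a minimal illustration of how this metric is evaluated, the sketch below samples the AoI sawtooth $\Delta(t) = t - u(t)$, where $u(t)$ is the generation time of the freshest update received by time $t$; the update trace and the sampling step are hypothetical values used only for illustration.
\begin{verbatim}
# Minimal sketch of Age of Information (AoI) evaluated on a discrete time grid.
# The update trace (generation_time, reception_time) is hypothetical data.
def aoi_trace(updates, horizon, dt=0.1):
    """updates: list of (generation_time, reception_time) pairs.
    Returns (times, ages) sampling the AoI sawtooth Delta(t) = t - u(t)."""
    times, ages = [], []
    t = 0.0
    while t <= horizon:
        # generation time of the freshest update received by time t
        received = [g for (g, r) in updates if r <= t]
        u = max(received) if received else 0.0
        times.append(t)
        ages.append(t - u)
        t += dt
    return times, ages

updates = [(0.5, 1.0), (2.0, 2.6), (4.1, 4.3)]
times, ages = aoi_trace(updates, horizon=5.0)
print(f"average AoI = {sum(ages) / len(ages):.2f}")
\end{verbatim}
A VoI-aware scheduler would additionally weight each update by its estimated impact on the control cost, rather than by freshness alone.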
\section{Case Study} We validate the effectiveness of ML algorithms in optimizing time-critical TOSA communication in one typical case, namely, a Mobile Edge Computing (MEC)-enabled and reconfigurable intelligent surface (RIS)-assisted terahertz (THz) VR network. \begin{figure}[!h] \centering \includegraphics[width=3.5 in]{latency_qoe.pdf} \caption{Average QoE and VR interaction latency of the MEC-enabled THz VR network in each time slot via constrained deep reinforcement learning (CDRL) with viewpoint prediction, where the VR interaction latency constraint is 20 ms.} \label{basic_modules} \end{figure} The goal of wireless VR networks is to guarantee high QoE under VR interaction latency constraints. Note that the QoE captures the seamless, continuous, smooth, and uninterrupted experience of each VR user. However, traditional wireless VR networks transmit whole $360^\circ$ videos, which leads to low QoE and high VR interaction latency. In the simulation, we extract the FoV of VR users as TOSA information to decrease the transmission data size. To extract the FoV, the viewpoints of VR users are predicted via recurrent neural network (RNN) algorithms. Based on the predicted viewpoints, the corresponding FoVs of the VR video frames can be rendered and transmitted in advance. Thus, the MEC does not need to render and transmit the whole $360^\circ$ VR video frames, which can substantially decrease the VR interaction latency and improve the QoE of VR users. Fig.~\ref{basic_modules} plots the average QoE and VR interaction latency of each time slot obtained via constrained deep reinforcement learning (CDRL) with RNN-based viewpoint prediction. It is observed that the proposed ML algorithms maximize the long-term QoE of VR users under the VR interaction latency threshold. \section{Conclusion} In this article, we propose a generic task-oriented and semantics-aware (TOSA) communication framework incorporating both semantic and effectiveness levels for various tasks with diverse data types. We first identify the unique characteristics of all existing and new data types in 6G networks and summarize the semantic information with its extraction methods. To achieve task-oriented communications for various data types, we then present the corresponding TOSA information, its extraction and recovery methods, and effectiveness level performance metrics for both time critical and non-critical tasks. Importantly, our results demonstrate that our proposed TOSA communication framework can be tailored for VR data to maximize the long-term QoE within VR interaction latency constraints. The paradigm shift from the conventional Shannon bit-oriented communication design towards the TOSA communication design will foster new research on task-driven, context- and importance-aware data transmission in 6G networks.
% \ifCLASSOPTIONcaptionsoff \newpage \fi
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro} \subsection{Motivation} Let $G$ be a finite graph on the vertex set $\{1,2,\ldots,n\}$. There are many profitable ways to associate a polytope to $G$. One well-known example is the \emph{edge polytope} of $G$, obtained by taking the convex hull of the vectors $e_i+e_j$ for each edge $\{i,j\}$ in $G$, where $e_i$ denotes the $i\textsuperscript{th}$ standard basis vector in $\mathbb{R}^n$. Equivalently, the edge polytope is the convex hull of the columns of the unsigned vertex-edge incidence matrix of $G$. Many geometric, combinatorial, and algebraic properties of edge polytopes have been established over the past several decades, e.g.~\cite{HibiEhrhartEdge,hibiohsugiedgepolytope,TranZiegler,VillarrealEdgePolytopes}. Another well-known matrix associated with a graph $G$ is the Laplacian $L(G)$ (defined in Section~\ref{sec:background}). Our purpose in this paper is to study the analogue of the edge polytope obtained by taking the convex hull of the columns of $L(G)$, resulting in a lattice simplex that we call the \emph{Laplacian simplex} of $G$ and denote $T_G$. While to our knowledge the simplex $T_G$ has not been previously studied, there has been recent research regarding graph Laplacians from the perspective of polyhedral combinatorics and integer-point enumeration. For example, M. Beck and the first author investigated hyperplane arrangements defined by graph Laplacians with connections to nowhere-harmonic colorings and inside-out polytopes~\cite{BeckBraunNHColorings}. A. Padrol and J. Pfeifle explored Laplacian Eigenpolytopes~\cite{padrolpfeiflelaplacian} with a focus on the effect of graph operations on the associated polytopes. The first author, R. Davis, J. Doering, A. Harrison, J. Noll, and C. Taylor studied integer-point enumeration for polyhedral cones constrained by graph Laplacian minors~\cite{BraunLaplacianMinors}. In a recent preprint~\cite{matrixtreetheorem}, A. Dall and J. Pfeifle analyzed polyhedral decompositions of the zonotope defined as the Minkowski sum of the line segments from the origin to each column of $L(G)$ in order to give a polyhedral proof of the Matrix-Tree Theorem. Beyond the motivation of studying $T_G$ in order to develop a Laplacian analogue of the theory of edge polytopes, our primary motivation in this paper is the following conjecture (all undefined terms are defined in Section~\ref{sec:background}). \begin{conjecture}[Hibi and Ohsugi \cite{hibiohsugiconj}] \label{conj:hibiohsugi} If $\mathcal P$ is a lattice polytope that is reflexive and satisfies the integer decomposition property, then $\mathcal P$ has a unimodal Ehrhart $h^*$-vector. \end{conjecture} The cause of unimodality for $h^*$-vectors in Ehrhart theory is mysterious. Schepers and van Langenhoven~\cite{schepersvanl} have raised the question of whether or not the integer decomposition property alone is sufficient to force unimodality of the $h^*$-vector for a lattice polytope. In general, the interplay of the qualities of a lattice polytope being reflexive, satisfying the integer decomposition property, and having a unimodal $h^*$-vector is not well-understood~\cite{BraunUnimodal}. Thus, when new families of lattice polytopes are introduced, it is of interest to explore how these three properties behave for that family. 
Further, lattice simplices have been shown to be a rich source of examples and have been the subject of several recent investigations, especially in the context of Conjecture~\ref{conj:hibiohsugi}~\cite{BraunDavisFreeSum,BraunDavisSolusIDP,PayneLattice,SolusNumeral}. \subsection{Our Contributions} After reviewing necessary background in Section~\ref{sec:background}, we introduce and establish basic properties of Laplacian simplices in Section~\ref{sec:lapsimp}. We show that several graph-theoretic operations produce reflexive Laplacian simplices (Theorem~\ref{thm:bridge} and Proposition~\ref{evenreflexive}). We prove that if $G$ is a tree, odd cycle, complete graph, or the whiskering of an even cycle, then $T_G$ is reflexive (Proposition~\ref{prop:trees}, Theorem~\ref{cycle}, Proposition~\ref{evenreflexive}, and Theorem~\ref{complete}). As a result of a general investigation of the structure of $h^*$-vectors for odd cycles (Theorem~\ref{primes}), we show that if $n$ is odd then $T_{C_n}$ does not have the integer decomposition property (Corollary~\ref{cor:oddcyclenotidp}). On the other hand, we show that $T_{K_n}$ does have the integer decomposition property since it admits a regular unimodular triangulation (Corollary~\ref{cor:completeidp}). We prove that for trees, odd cycles, and complete graphs, the $h^*$-vectors of their Laplacian simplices are unimodal (Corollary~\ref{cor:treeunim}, Theorem~\ref{unimodal}, and Corollary~\ref{cor:completeunimodal}). Additionally, we provide a combinatorial interpretation of the $h^*$-vector for $T_{K_n}$ (Proposition~\ref{prop:completeh*}) and we determine that, when $n$ is an odd prime, the $h^*$-vector of $T_{C_n}$ is given by $(h_0^*,\ldots,h_{n-1}^*)=(1,\ldots,1,n^2-n+1,1,\ldots, 1)$ (Theorem~\ref{primes}). \section{Background}\label{sec:background} \subsection{Reflexive Polytopes}\label{sec:reflexive} A \emph{lattice polytope} of dimension $d$ is the convex hull of finitely many points in $\mathbb{Z}^n$, which together affinely span a $d$-dimensional hyperplane of $\mathbb{R}^n$. Two lattice polytopes are \emph{unimodularly equivalent} if there is a lattice preserving affine isomorphism mapping them onto each other. Consequently we consider lattice polytopes up to affine automorphisms of the lattice. The \emph{dual polytope} of a full dimensional polytope $\mathcal P$ which contains the origin in its interior is \[ \mathcal P^* := \{ x \in \mathbb{R}^d \mid x \cdot y \le 1 \text{ for all } y \in \mathcal P\} \, . \] Duality satisfies $(\mathcal P^*)^* = \mathcal P$. A $d$-polytope formed by the convex hull of $d+1$ vertices is called a \emph{$d$-simplex}. Reflexive polytopes are a particularly important class of polytopes first introduced in \cite{BatyrevDualPolyhedra}. \begin{definition} A lattice polytope $\mathcal P$ is called \emph{reflexive} if it contains the origin in its interior, and its dual $\mathcal P^*$ is a lattice polytope. \end{definition} Any lattice translate of a reflexive polytope is also called reflexive. The following generalization of reflexive polytopes was introduced in \cite{Nill}. A lattice point is \emph{primitive} if the line segment joining it and the origin contains no other lattice points. The local index $\ell_F$ is equal to the integral distance from the origin to the affine hyperplane spanned by $F$. 
\begin{definition} \label{def:ref} A lattice polytope $\mathcal P$ is \emph{$\ell$-reflexive} if, for some $\ell \in \mathbb{Z}_{>0}$, the following conditions hold: \begin{enumerate}[(i)] \item $\mathcal P$ contains the origin in its (strict) interior; \item The vertices of $\mathcal P$ are primitive; \item For any facet $F$ of $\mathcal P$ the local index $\ell_F = \ell$. \end{enumerate} \end{definition} We refer to $\mathcal P$ as a \emph{reflexive polytope of index $\ell$}. The reflexive polytopes of index $1$ are precisely the reflexive polytopes defined above. \subsection{Ehrhart Theory}\label{sec:ehrhart} For $t \in \mathbb{Z}_{>0}$, the \emph{$t$\textsuperscript{th} dilate} of $\mathcal P$ is given by $t \mathcal P := \{tp \mid p \in \mathcal P\}$. One technique used to recover dilates of polytopes is \emph{coning over the polytope}. Given $\mathcal P = \conv{v_1, \ldots , v_m} \subseteq \mathbb{R}^n$, we lift these vertices into $\mathbb{R}^{n+1}$ by appending $1$ as their last coordinate to define $w_1=(v_1, 1), \ldots , w_m = (v_m, 1).$ The \emph{cone over $\mathcal P$} is \[ \text{cone}(\mathcal P) = \{ \lambda_1 w_1 + \lambda_2 w_2 + \cdots + \lambda_m w_m \mid \lambda_1, \lambda_2, \ldots , \lambda_m \ge 0\} \subseteq \mathbb{R}^{n+1} \, . \] For each $t \in \mathbb{Z}_{>0}$ we recover $t\mathcal P$ by considering $\text{cone}(\mathcal P) \cap \{z_{n+1}=t\}$. To record the number of lattice points we let $L_{\mathcal P}(t) = | t \mathcal P \cap \mathbb{Z}^n |$. In \cite{Ehrhart}, Ehrhart proved that $L_{\mathcal P}(t)$, called the Ehrhart polynomial of $\mathcal P$, is a polynomial in $t$ of degree $d=\dim(\mathcal{P})$ with generating function \[ \text{Ehr}_{\mathcal P}(z) = 1 + \sum_{t\ge1} L_{\mathcal P}(t)z^t = \frac{h_d^*z^d + h_{d-1}^*z^{d-1} + \cdots + h_1^*z + h_0^*}{(1-z)^{d+1}}. \] The above is referred to as the \emph{Ehrhart series} of $\mathcal P$. We call $h^*(\mathcal P) = (h_0^*, h_1^*, \ldots , h_d^*)$ the \emph{$h^*$-vector} or \emph{$\delta$-vector} of $\mathcal P$. The \emph{Euclidean volume} of a polytope $\mathcal P$ is vol$(\mathcal P) = \frac{1}{d!}\sum_{i=0}^d h_i^*$. The \emph{normalized volume} is given by $d!\text{vol}(\mathcal P) = \sum_{i=0}^d h_i^*$. Stanley proved the $h^*$-vector of a convex lattice $d$-polytope satisfies $h_0^* =1$ and $h_i^* \in \mathbb{Z}_{\ge 0}$ \cite{Stanley1}. Note that if $\mathcal P$ and $\mathcal Q$ are lattice polytopes such that $\mathcal Q$ is the image of $\mathcal P$ under an affine unimodular transformation, then their Ehrhart series are equal. A vector $x = (x_0, x_1, \ldots , x_d)$ is \emph{unimodal} if there exists a $j \in [d]$ such that $x_i \le x_{i+1}$ for all $0 \le i < j$ and $x_k\ge x_{k+1}$ for all $j \le k < d$. A major open problem in Ehrhart theory is to determine properties of $\mathcal P$ that imply unimodality of $h^*(\mathcal P)$ \cite{BraunUnimodal}. For the case of symmetric $h^*$-vectors, Hibi established the following connection to reflexive polytopes. \begin{theorem}[Hibi \cite{Hibi1}] A $d$-dimensional lattice polytope $\mathcal P \subseteq \mathbb{R}^d$ containing the origin in its interior is reflexive if and only if $h^*(\mathcal P)$ satisfies $h_i^* = h_{d-i}^*$ for $0 \le i \le \lfloor \frac{d}{2} \rfloor.$ \end{theorem} Thus, when investigating symmetric $h^*$-vectors, reflexive polytopes (and, more generally, Gorenstein polytopes) are the correct class to work with.
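As a computational aside (not used in any argument below), the generating function identity above allows one to recover $h^*(\mathcal P)$ from the dilate counts $L_{\mathcal P}(0), \ldots, L_{\mathcal P}(d)$, since $\sum_j h_j^* z^j = (1-z)^{d+1}\,\text{Ehr}_{\mathcal P}(z)$. The short Python sketch below implements this convolution and checks it on the standard simplex, for which $L_{\mathcal P}(t) = \binom{t+d}{d}$ and $h^* = (1,0,\ldots,0)$; the function name is ours.
\begin{verbatim}
# Sketch: recover the h*-vector of a lattice d-polytope from its dilate counts
# L_P(0), ..., L_P(d) via  sum_j h*_j z^j = (1 - z)^{d+1} * sum_t L_P(t) z^t.
from math import comb

def h_star(dilate_counts):
    """dilate_counts = [L_P(0), L_P(1), ..., L_P(d)] with L_P(0) = 1."""
    d = len(dilate_counts) - 1
    return [sum((-1) ** i * comb(d + 1, i) * dilate_counts[j - i]
                for i in range(j + 1))
            for j in range(d + 1)]

# Sanity check with the standard d-simplex, where L_P(t) = binom(t + d, d):
d = 4
counts = [comb(t + d, d) for t in range(d + 1)]
print(h_star(counts))  # [1, 0, 0, 0, 0]
\end{verbatim}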
As indicated by Conjecture~\ref{conj:hibiohsugi}, the following property has been frequently correlated with unimodality, and is interesting in its own right. \begin{definition} A lattice polytope $\mathcal P \subseteq \mathbb{R}^n$ has the \emph{integer decomposition property} if, for every integer $t \in \mathbb{Z}_{>0}$ and for all $p \in t\mathcal P \cap \mathbb{Z}^n$, there exist $p_1 , \ldots , p_t \in \mathcal P \cap \mathbb{Z}^n$ such that $p = p_1 + \cdots + p_t$. We will frequently say that $\mathcal P$ is IDP when $\mathcal P$ possesses this property. \end{definition} It is well-known that if $\mathcal P$ admits a unimodular triangulation, then $\mathcal P$ is IDP; we will use this fact when analyzing complete graphs. \subsection{Lattice Simplices} Simplices play a special role in Ehrhart theory, as there is a method for computing their $h^*$-vectors that is simple to state (though not always to apply). \begin{definition} \label{def:fpp} Given a lattice simplex $\mathcal{P}\subset \mathbb{R}^{n-1}$ with vertices $\{v_i\}_{i \in [n]}$, the \emph{fundamental parallelepiped} of $\mathcal P$ is the subset of $\cone{\mathcal P}$ defined by \[ \Pi_{\mathcal P} := \left\{ \sum_{i=1}^n \lambda_i (v_i, 1) \mid 0 \le \lambda_i < 1 \right\} \, . \] Further, $|\Pi_{\mathcal{P}}\cap \mathbb{Z}^n|$ is equal to the absolute value of the determinant of the matrix whose $i$\textsuperscript{th} row is given by $(v_i,1)$. \end{definition} \begin{lemma}[see Chapter 3 of \cite{BeckRobinsCCD}]\label{lem:fpp} Given a lattice simplex $\mathcal P$, \[ h_i^*(\mathcal{P}) = \left\lvert \Pi_{\mathcal P} \cap \{ x \in \mathbb{Z}^n \mid x_{n}=i \} \right\rvert \, . \] \end{lemma} Using the notation from Definition~\ref{def:fpp}, let $A$ be the matrix whose $i$\textsuperscript{th} row is $(v_i,1)$. One approach to determine $h^*(\mathcal{P})$ in this case is to recognize that finding lattice points in $\Pi_{\mathcal{P}}$ is equivalent to finding integer vectors of the form $\lambda \cdot A$ with $0\leq \lambda_i< 1$ for all $i$. Cramer's rule implies that the $\lambda \in \mathbb{Q}^n$ that yield integer vectors will have entries of the form \[ \lambda_i = \frac{b_i}{\det{A}} < 1 \] for $b_i \in \mathbb{Z}_{\geq 0}$. In particular, if $x = \frac{1}{\det(A)} b \cdot A \in \mathbb{Z}^n$, then $b_i = \det{A(i, x)}$ where $A(i, x)$ is the matrix obtained by replacing the $i$\textsuperscript{th} row of $A$ by $x$. Since $A(i, x)$ is an integer matrix, $\det{A(i,x)} \in \mathbb{Z}$. Notice that for any $\lambda$, the last coordinate of $\lambda A$ is $\langle \lambda, \mathbbm{1} \rangle = \sum_{i=1}^n \frac{b_i}{\det{A}}.$ Thus, we have \[ \Pi_{\mathcal{P}} \cap \mathbb{Z}^n = \mathbb{Z}^n\cap \left\{ \frac{1}{\det{A}} b \cdot A \mid 0 \le b_i < \det(A), b_i\in \mathbb{Z}, \sum_{i=1}^n b_i \equiv 0 \bmod \det(A) \right\} \, . \] One profitable method for determining the lattice points in $\Pi_{\mathcal{P}}$ is to find the $\det(A)$-many lattice points in the right-hand set above, by first considering all the $b$-vectors that satisfy the given modular equation. \subsection{Graph Laplacians}\label{sec:laplacians} Let $G$ be a connected graph with vertex set $V(G) = [n] := \{1, 2, \ldots, n\}$ and edge set $E(G)$. The \emph{Laplacian matrix} $L$ of a graph $G$ is defined to be the difference of the degree matrix and the $\{0, 1\}$-adjacency matrix of the graph. Thus, $L$ has rows and columns indexed by $[n]$ with entries $a_{ii}=\deg{i}$, $a_{ij} = -1$ if $\{i,j\} \in E(G)$, and $0$ otherwise.
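As an illustrative aside (not part of the subsequent development), the Laplacian of a small graph is easy to assemble directly from this definition. The Python sketch below builds $L$ for the $5$-cycle, using $0$-indexed vertex labels for convenience, and checks the zero row and column sums, the rank $n-1$, and that a cofactor counts the $5$ spanning trees, facts recorded in the proposition that follows.
\begin{verbatim}
# Illustrative sketch: build the Laplacian L = D - A of C_5 from the definition
# and check the basic facts stated in the next proposition.
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n), dtype=int)
    for i, j in edges:
        L[i, i] += 1      # degree contribution
        L[j, j] += 1
        L[i, j] -= 1      # adjacency contribution
        L[j, i] -= 1
    return L

edges_C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
L = laplacian(5, edges_C5)
print(L.sum(axis=0), L.sum(axis=1))   # all zeros: rows and columns sum to 0
print(np.linalg.matrix_rank(L))       # 4 = n - 1
# a cofactor of L equals the number of spanning trees (Matrix-Tree Theorem):
print(round(np.linalg.det(np.delete(np.delete(L, 0, 0), 0, 1))))  # 5
\end{verbatim}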
We let $\kappa$ denote the number of spanning trees of $G$. The following facts are well-known \cite{Bapat}. \begin{prop} The Laplacian matrix $L$ of a connected graph $G$ with vertex set $[n]$ satisfies the following: \begin{enumerate}[(i)] \item $L \in \mathbb{Z}^{n \times n}$ is symmetric. \item Each row and column sum of $L$ is $0$. \item $\ker_{\mathbb{R}}{L} = \langle \mathbbm{1} \rangle$ and $\text{im}_{\mathbb{R}}{\text{ }L}=\langle \mathbbm{1} \rangle^{\perp}$. \item $\text{rk }{L} = n-1$. \item (The Matrix-Tree Theorem \cite{Kirchhoff}) Any cofactor of $L$ is equal to $\kappa$. \end{enumerate} \end{prop} In this paper we often refer to a submatrix of $L$ defined by restricting to specified rows and columns. For $S, T \subseteq [n]$, define $L(S \mid T)$ to be the matrix with rows from $L$ indexed by $[n]\setminus S$ and columns from $L$ indexed by $[n] \setminus T$. Equivalently, $L(S \mid T)$ is obtained from $L$ by the deletion of rows indexed by $S$ and columns indexed by $T$. For simplicity, we define $L(i)$ to be the matrix obtained by deleting the $i$\textsuperscript{th} column of $L$, that is, $L(i) := L( \emptyset \mid i) \in \mathbb{Z}^{n \times (n-1)}$. \section{The Laplacian Simplex of a Finite Graph}\label{sec:lapsimp} \subsection{Definition and Basic Properties} Assume that $G$ is a connected graph with Laplacian matrix $L$. Consider $L(i) \in \mathbb{Z}^{n \times (n-1)}$. It is a straightforward exercise to show the rank of $L(i)$ is $n-1$. We recognize the rows of $L(i)$ as points in $\mathbb{Z}^{n-1}$ and consider their convex hull, $\conv{L(i)^T}$, where $\conv{A}$ refers to the convex hull of the columns of the matrix $A$. Notice the rows of ${L(i)}$ form a collection of $n$ affinely independent lattice points, which makes $\conv{L(i)^T}$ an $(n-1)$-dimensional simplex. \begin{prop}\label{equivalence} The lattice simplices $\conv{L(i)^T}$ and $\conv{L(j)^T}$ are unimodularly equivalent for all $i, j \in [n]$. \end{prop} \begin{proof} Notice the matrices $L(i)$ and $L(j)$ differ by only one column when $i \ne j$. In particular we can write $L(i) \cdot U = L(j)$ where $U \in \mathbb{Z}^{(n-1) \times (n-1)}$ has columns $c_k$ for $1 \le k \le n-1$ defined to be \[ c_k = \left\{ \begin{array}{ll} e_{\ell} & \text{column $k$ in $L(j)$ is column $\ell$ in $L(i)$} \\ (-1, -1, \ldots, -1)^T & \text{column $k$ in $L(j)$ is not among columns of $L(i)$ } \end{array} \right\} \] where $e_{\ell}$ is the vector with a $1$ in the $\ell\textsuperscript{th}$ entry and $0$ else. Notice $U$ has integer entries and $\det{U} = \pm 1$, as computed by expanding along the column with all entries equal to $-1$. This shows $U$ is a unimodular matrix. Further, the unimodular matrix $U^T$ maps the vertices of $\conv{L(i)^T}$ onto the vertices of $\conv{L(j)^T}$. Thus $\conv{L(i)^T}$ and $\conv{L(j)^T}$ are unimodularly equivalent lattice polytopes. \end{proof} Given a fixed graph $G$, we choose a representative for this equivalence class of lattice simplices to be used throughout, unless otherwise noted. Let $B = \{ e_1 - e_2, e_2-e_3, \ldots, e_{n-1} - e_n\}$ be the standard basis for the orthogonal complement of the all-ones vector $\mathbbm{1} \in \mathbb{R}^n$, where $e_i \in \mathbb{R}^n$ is the standard basis vector that contains a $1$ in the $i$\textsuperscript{th} entry and $0$ else. Then $B$ is a basis of the column space of $L$. Define $L_B \in \mathbb{Z}^{n \times (n-1)}$ to be the representation of the matrix $L$ with respect to the basis $B$.
In practice, $L_B$ can be computed using the matrix multiplication $L_B = L \cdot A$ where $A$ is the upper triangular $(n \times (n-1))$ matrix with entries \begin{equation}\label{eqn:A} a_{ij}= \left\{ \begin{array}{ll} 1 & i \le j \le n-1 \\ 0 & \text{else} \end{array} \right\}. \end{equation} \begin{example} Given the cycle $C_5$ of length five, we have \[ L = \left[ \begin{array}{rrrrr} 2 & -1 & 0 & 0 & -1 \\ -1 & 2 & -1 & 0 & 0 \\ 0 & -1& 2 & -1 & 0 \\ 0 & 0 &-1 & 2 & -1 \\ -1 & 0& 0 & -1 & 2 \end{array} \right] \hspace{.2in} L_B = \left[ \begin{array}{rrrr} 2 & 1 & 1& 1 \\ -1& 1 & 0 & 0 \\ 0 & -1& 1 & 0 \\ 0 & 0 & -1 & 1 \\ -1 & -1 & -1 & -2 \end{array} \right] \, . \] \end{example} This brings us to the object of study in this paper. \begin{definition} For a connected graph $G$, the $(n-1)$-dimensional lattice simplex \[T_G:= \conv{(L_B)^T} \subseteq \mathbb{R}^{n-1}\] is called the \emph{Laplacian simplex} associated to the graph $G$. \end{definition} \begin{prop}\label{properties} Let $G$ be a connected graph on $n$ vertices. \begin{enumerate}[(i)] \item $T_G$ is a representative of the equivalence class $\{\conv{L(i)^T}\}_{i \in [n]}$. \item $T_G$ has normalized volume equal to $n \cdot \kappa$. \item $T_G$ contains the origin in its interior. \item $h_i^*(T_G) \ge 1$ for all $0 \le i \le n-1.$ \end{enumerate} \end{prop} \begin{proof} \begin{enumerate}[(i)] \item Notice we can write $L(n) \cdot A(n \mid \emptyset) = L_B$ where $A$ is the matrix defined in equation (\ref{eqn:A}). Let $U:=A(n \mid \emptyset)$. Then $U$ is the upper triangular matrix of all ones, so that $\det{U} = 1$. This implies $T_G$ is unimodularly equivalent to $\conv{L(n)^T}$. By Proposition \ref{equivalence}, the result follows. \item Since $T_G$ is a simplex, the normalized volume of $T_G$ is equal to \[ \left| \det{[L_B\mid\mathbbm{1}]} \right|= \left| \sum_{i=1}^n (-1)^{i+n} M_{in} \right| = \left| \sum_{i=1}^n C_{in} \right|, \] where $M_{i,n}$ is a minor of $[L_B \mid \mathbbm{1}]$, $C_{i,n}$ is the corresponding cofactor, and the determinant is expanded along the appended column of ones. The relation $L(n) \cdot U = L_B$ yields $L(i \mid n) \cdot U = L_B(i \mid \emptyset).$ Then for each cofactor, \begin{equation*} \begin{split} C_{i,n} &= (-1)^{i+n}\det L_B(i \mid \emptyset ) \\ &= (-1)^{i+n}\det(L(i\mid n)\cdot U) \\ &= (-1)^{i+n}\det L(i\mid n)\det U \\ &= (-1)^{i+n}\det L(i\mid n) \\ &= \bar{C}_{i,n} \\ &= \kappa \end{split} \end{equation*} where $\bar{C}_{i,n}$ is the cofactor of $L$, and the last equality is a result of the Matrix-Tree Theorem. Summing over all $i \in [n]$ yields the desired result. \item Note the sum of all rows of $L_B$ is $0$, and $L_B$ has no column with all entries equal to $0$. It follows that $(0, \ldots, 0) \in \mathbb{Z}^{n-1}$ is in the interior of $T_G$. \item Observe each column in $L_B$ sums to $0$. Consider lattice points of the form \[ p_i = \left(\frac{i}{n}, \frac{i}{n}, \ldots, \frac{i}{n}\right) \cdot [L_B \mid \mathbbm{1}] = \left(0, 0, \ldots, 0, i \right) \in \mathbb{Z}^{1 \times n} \] for each $0 \le i < n$. Then $p_i \in \Pi_{T_G} \cap \{x \in \mathbb{Z}^n \mid x_{n} = i\}$ implies $h_i^*(T_G) \ge 1$ for each $0 \le i \le n-1$. \end{enumerate} \end{proof} \begin{example} The simplex $T_{C_5}$ is obtained as the convex hull of the columns of the transpose of \[ L_B = \left[ \begin{array}{rrrr} 2 & 1 & 1& 1 \\ -1& 1 & 0 & 0 \\ 0 & -1& 1 & 0 \\ 0 & 0 & -1 & 1 \\ -1 & -1 & -1 & -2 \end{array} \right] \, .
\] The determinant of $L_B$ with a column of ones appended is easily computed to be $25$. By applying Lemma~\ref{lem:fpp} to $T_{C_5}$, it is straightforward to verify that $h^*(T_{C_5})=(1,1,21,1,1)$. \end{example} In the proof of (ii) in Proposition \ref{properties} above, we showed the minor obtained by deleting the $i$\textsuperscript{th} row of $L_B$ is equal to the minor obtained by deleting the $n$\textsuperscript{th} column and the $i$\textsuperscript{th} row of $L$, i.e., $\det{L_B(i \mid \emptyset)} = \det{L(i \mid n)}$ for every $i \in [n]$. The second minors of $L_B$ and $L$ are related in the following manner, which we will need in subsequent sections. \begin{lemma}\label{determinant} Let $i,k \in [n]$ and $j \in [n-1]$ such that $i \ne k$. Then \[ \det{L_B(i,k \mid j )} = \det{L(i,k \mid j,n)} + \det{L(i,k \mid j+1, n)}. \] In the case $j=n-1$, $\det{L_B(i,k \mid n-1 )} = \det{L(i,k \mid n-1,n)}.$ \end{lemma} \begin{proof} Recall $L_B = L \cdot A$ where $A$ is the $n \times (n-1)$ upper triangular matrix defined in equation (\ref{eqn:A}). It follows that $L_B(i, k \mid j) = L(i, k \mid \emptyset) \cdot A(j)$. Apply the Cauchy-Binet formula to compute the determinant \begin{equation*} \begin{split} \det{L_B(i,k \mid j)} &= \sum_{S \in \binom{[n]}{n-2}} \det{L(i, k \mid \emptyset)_{[n-2],S}} \det{A(j)_{S,[n-2]}} \\ &= \det{L(i, k \mid \emptyset)_{[n-2],[n] \setminus \{j,n\}}} \det{A(j)_{[n]\setminus \{j,n\},[n-2]}} \\ & \phantom{.......} + \det{L(i, k \mid \emptyset)_{[n-2],[n]\setminus \{(j+1),n\}}} \det{A(j)_{[n]\setminus \{(j+1),n\},[n-2]}} \\ &= \det{L(i,k \mid j,n)} + \det{L(i,k \mid j+1, n)}. \end{split} \end{equation*} The only nonzero terms in the sum arise from choosing $(n-2)$ linearly independent rows in $A$. Based on the structure of $A$, there are only two ways to do this unless we are in the case $j=n-1$, in which case there is exactly one way. \end{proof} The following is a special case of a general characterization of reflexive simplices using cofactor expansions. \begin{theorem}\label{characterization} For a connected graph $G$ with Laplacian matrix $L$, $T_G$ is reflexive if and only if for each $i \in [n]$, $\kappa$ divides \[ \sum_{k=1}^{n-1} C_{kj} = \sum_{k=1}^{n-1} (-1)^{k+j}M_{kj} \] for each $1 \le j \le n-1$. Here $C_{kj}$ is the $(k,j)$ cofactor and $M_{kj}$ is the $(k,j)$ minor of the matrix $L_B(i \mid \emptyset) \in \mathbb{Z}^{(n-1) \times (n-1)}$. \end{theorem} \begin{proof} We determine when $T_{G}$ is reflexive by determining when the vertices of its dual polytope are lattice points. By \cite[Theorem 2.11]{ZieglerLectures}, the hyperplane description of the dual polytope is given by $T_{G}^* = \{x \in \mathbb{R}^{n-1} \mid L_B \cdot x \le \mathbbm{1} \}.$ Each intersection of $(n-1)$ hyperplanes will yield a unique vertex of $T_{G}^*$ since any first minor of $L_B$ is nonzero. Let $\{v_1, v_2, \ldots, v_{n} \}$ be the set of vertices of $T^*_{G}$. Each $v_i$ satisfies \[ L_B(i \mid \emptyset) v_i = \mathbbm{1} \] for $i \in [n]$. Reindex the rows of $L_B(i \mid \emptyset)$ in increasing order by $[n-1]$. We can write \[ v_i = L_B(i \mid \emptyset)^{-1} \cdot \mathbbm{1} = \frac{1}{\det{L_B(i \mid \emptyset)}} C^T \cdot \mathbbm{1} \] where $C^T$ is the $(n-1) \times (n-1)$ matrix whose $(j,k)$ entry is the $(k,j)$ cofactor of $L_B( i \mid \emptyset)$, which we denote as $C_{k j}$.
Since $\det{L_B(i \mid \emptyset)} = \det{L( i \mid n)} = \pm \kappa$, each vertex is of the form \[ v_i = \frac{1}{\pm \kappa} \left( \sum_{k=1}^{n-1} C_{k1}, \sum_{k=1}^{n-1} C_{k2}, \ldots, \sum_{k=1}^{n-1} C_{k(n-1)}\right)^T, \] which is a lattice point if and only if $\kappa$ divides each coordinate. \end{proof} \begin{remark} Apply Lemma \ref{determinant} to Theorem \ref{characterization} to yield a condition on the second minors of $L$ when determining if $T_G$ is reflexive. Notice \begin{equation*} \begin{split} (C^T)_{j k} &= C_{k j} \\ &= (-1)^{k + j} \det{L_B( i, k \mid j )} \\ &= (-1)^{k + j} \left(\det{L(i, k \mid j, n)}+\det{L(i,k \mid j+1, n)} \right), \end{split} \end{equation*} which shows that for a given $v_i$, its $\ell \textsuperscript{th}$ coordinate has the form \[ \frac{1}{\pm \kappa} \sum_{k=1}^{n-1}C_{k \ell} = \frac{1}{\pm \kappa} \sum_{k=1}^{n-1} (-1)^{k + \ell} \left(\det{L(i, k \mid \ell, n)}+\det{L(i,k \mid \ell+1, n)} \right). \] \end{remark} \begin{remark} Computing alternating sums of second minors of Laplacian matrices can be challenging. Thus, we often verify reflexivity by explicitly computing the vertices of $T_G^*$ via ad hoc methods. \end{remark} \subsection{Graph Operations and Laplacian Simplices} We next introduce an operation on a graph that preserves the lattice-equivalence class of $T_G$. \begin{prop}\label{algorithm} Let $G$ be a connected graph on $n$ vertices such that the following cut is possible. Partition $V(G)$ into vertex sets $A$ and $B$ such that all edges between $A$ and $B$ are incident to a single vertex $x \in A$; label those edges $\{e_1, \ldots, e_k \}$. Additionally suppose $x$ is adjacent to a leaf $y \in A$. Form a new graph $G'$ by moving the edges $\{e_1, \ldots, e_k\}$ previously incident to $x$ to be incident to $y$. Then $G'$ has vertex set $V(G)$, and edge set $\left( E(G) \setminus \{e_1, \ldots, e_k\}\right) \cup \{\{y, v_i \}:i=1,\ldots,k\}$ where $e_i = \{x, v_i\} \in E(G)$. In this case, $T_{G} \cong T_{G'}$. \end{prop} \begin{proof} Label the vertices of $G$ with $[n]$. Observe $G'$ has the same labels since $V(G) = V(G')$. We refer to each vertex by its label for simplicity. Let $N_G(i)$ be the set of neighbors of vertex $i$ in $G$, that is, $N_G(i) := \{ j \in V(G) \mid \{i, j\} \in E(G) \}$. Let $L$ be the Laplacian matrix of $G$ and $L'$ be the Laplacian matrix of $G'$. We describe row operations that take each row $r_i \in L$ to row $r'_i \in L'$. For each $i \in V(G)$, $1 \le i \le n$, we have the following cases. Consider $i \in A$ such that $i \ne x, y$. Then $N_G(i) = N_{G'}(i)$, so we set $r'_i=r_i$ since the $i\textsuperscript{th}$ row is the same in $L$ and $L'$. Then $r'_i \in L'$. Consider $i \in B \setminus N_{G}(x)$. Again, $N_G(i) = N_{G'}(i)$, so we set $r'_i = r_i$ and have $r'_i \in L'$. Consider $i \in B \cap N_{G}(x)$. The degree of $i$ is constant in $G$ and $G'$, but $\{i, x\} \in E(G)$ becomes $\{i, y\} \in E(G')$ in the described algorithm. Set $r'_i = r_i - r_y$ to reflect the change in incident edges of $i$ from $G$ to $G'$. Since $y \in V(G)$ is a leaf, $r'_i$ now has a $0$ in the $x\textsuperscript{th}$ coordinate, a $-1$ in the $y\textsuperscript{th}$ coordinate, and all remaining coordinates are unchanged. Then $r'_i \in L'.$ Consider $i = x$. Set $r'_x = r_x + \sum_{j \in B} r_j$. Observe $N_G(x) \setminus N_{G'}(x) = \{v_1, \ldots, v_k \}$.
Then adding $\sum_{\ell =1}^{k} r_{v_\ell}$ decreases the $x\textsuperscript{th}$ coordinate of $r_x$ by $k$, which gives the new degree of vertex $x \in V(G')$. Adding the other rows does not contribute to the $x\textsuperscript{th}$ coordinate of $r'_x$ since those vertices are not adjacent to $x \in V(G)$; however, we must add all rows corresponding to $j \in B$ to obtain a $0$ in all coordinates indexed by $j \in B$. Notice the coordinates indexed by the vertices in $A$ remain fixed. Then $r'_x \in L'$. Finally consider $i=y$. Set $r'_y = (k+1)r_y -\sum_{j \in B} r_j$. The $y\textsuperscript{th}$ coordinate of $r'_y$ is $k+1$, which is the degree of $y$ in $V(G')$. Observe $N_{G'}(y) \setminus N_G(y) = \{v_1, \ldots, v_k\}.$ Then subtracting $\sum_{\ell =1}^k r_{v_\ell}$ from $(k+1)r_y$ ensures the $x\textsuperscript{th}$ coordinate of $r'_y$ is $-1$. We subtract all rows corresponding to $j \in B$ from $(k+1)r_y$ to obtain a $-1$ in all coordinates of $r'_y$ indexed by $\{v_\ell \}_{\ell=1}^k$. Then $r'_y \in L'$. It is straightforward to verify that the collection of row operations described above is a unimodular transformation of the Laplacian matrix and thus can be represented by multiplication by a unimodular matrix $U \in \mathbb{Z}^{n \times n}$ such that $U\cdot L = L'$. It follows that $U\cdot L(n) = L'(n)$. Thus $\conv{L(n)^T}= \conv{L'(n)^T}$, and we have shown $T_G \cong T_{G'}$. \end{proof} \begin{example}\label{5} In the figure below, the graph on the left is the wedge of $K_5$ and $C_5$ with a leaf, and the graph on the right is the bridge of $K_5$ and $C_5$ with the appropriate labels. \definecolor{zzttqq}{rgb}{0.6,0.2,0.} \begin{center} \begin{tikzpicture}[scale=.5][line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(7.5,0.5) rectangle (16.2,8.); \fill[color=zzttqq] (10.,-4.) -- (10.,-6.) -- (11.902113032590307,-6.618033988749895) -- (13.077683537175254,-5.) -- (11.902113032590307,-3.381966011250105) -- cycle; \fill(4.854972311947651,-2.95708556678168) -- (4.854972311947651,-4.95708556678168) -- (6.757085344537957,-5.575119555531575) -- (7.932655849122904,-3.957085566781681) -- (6.757085344537959,-2.3390515780317855) -- cycle; \fill[color=white](12.,4.) -- (12.,2.) -- (13.902113032590307,1.381966011250105) -- (15.077683537175254,3.)
-- (13.902113032590307,4.618033988749895) -- cycle; \draw (10.,-4.)-- (10.,-6.); \draw (10.,-6.)-- (11.902113032590307,-6.618033988749895); \draw (11.902113032590307,-6.618033988749895)-- (13.077683537175254,-5.); \draw (13.077683537175254,-5.)-- (11.902113032590307,-3.381966011250105); \draw (11.902113032590307,-3.381966011250105)-- (10.,-4.); \draw (4.854972311947651,-2.95708556678168)-- (4.854972311947651,-4.95708556678168); \draw (4.854972311947651,-4.95708556678168)-- (6.757085344537957,-5.575119555531575); \draw (6.757085344537957,-5.575119555531575)-- (7.932655849122904,-3.957085566781681); \draw (7.932655849122904,-3.957085566781681)-- (6.757085344537959,-2.3390515780317855); \draw (6.757085344537959,-2.3390515780317855)-- (4.854972311947651,-2.95708556678168); \draw (12.,4.)-- (12.,2.); \draw (12.,2.)-- (13.902113032590307,1.381966011250105); \draw (13.902113032590307,1.381966011250105)-- (15.077683537175254,3.); \draw (15.077683537175254,3.)-- (13.902113032590307,4.618033988749895); \draw (13.902113032590307,4.618033988749895)-- (12.,4.); \draw (10.,2.)-- (12.,4.); \draw (7.932655849122904,-3.957085566781681)-- (10.,-4.); \draw (13.902113032590307,4.618033988749895)-- (12.,2.); \draw (15.077683537175254,3.)-- (12.,4.); \draw (13.902113032590307,4.618033988749895)-- (13.902113032590307,1.381966011250105); \draw (15.077683537175254,3.)-- (12.,2.); \draw (13.902113032590307,1.381966011250105)-- (12.,4.); \draw (10.,-4.)-- (13.077683537175254,-5.); \draw (10.,-4.)-- (11.902113032590307,-6.618033988749895); \draw (11.902113032590307,-3.381966011250105)-- (10.,-6.); \draw (11.902113032590307,-3.381966011250105)-- (11.902113032590307,-6.618033988749895); \draw (10.,-6.)-- (13.077683537175254,-5.); \draw (10.097886967409693,6.618033988749895)-- (8.922316462824746,5.); \draw (8.922316462824746,5.)-- (10.097886967409693,3.381966011250105); \draw (10.097886967409693,3.381966011250105)-- (12.,4.); \draw (12.,6.)-- (10.097886967409693,6.618033988749895); \draw (12.,4.)-- (12.,6.); \begin{scriptsize} \draw [fill=black] (10.,-4.) circle (2.5pt); \draw[color=black] (9.79819918683186,-3.087469617134429) node {10}; \draw [fill=black] (10.,-6.) circle (2.5pt); \draw[color=black] (9.29899178703002,-5.9011840523811605) node {8}; \draw [fill=black] (11.902113032590307,-6.618033988749895) circle (2.5pt); \draw[color=black] (12.47576614940536,-6.491156433965153) node {7}; \draw [fill=black] (13.077683537175254,-5.) circle (2.5pt); \draw[color=black] (13.655710912573344,-4.902769252777482) node {6}; \draw [fill=black] (11.902113032590307,-3.381966011250105) circle (2.5pt); \draw[color=black] (12.47576614940536,-2.8151746717879713) node {5}; \draw [fill=black] (4.854972311947651,-2.95708556678168) circle (2.5pt); \draw[color=black] (4.170770316338401,-2.8605571626790476) node {2}; \draw [fill=black] (4.854972311947651,-4.95708556678168) circle (2.5pt); \draw[color=black] (4.080005334556248,-4.948151743668558) node {3}; \draw[color=black] (7.256779696931588,-1.7259948904021396) node {1}; \draw [fill=black] (6.757085344537957,-5.575119555531575) circle (2.5pt); \draw[color=black] (6.076834933763605,-5.810419070599008) node {4}; \draw [fill=black] (7.932655849122904,-3.957085566781681) circle (2.5pt); \draw[color=black] (8.255194496535267,-3.132852108025505) node {9}; \draw [fill=black] (6.757085344537959,-2.3390515780317855) circle (2.5pt); \draw [fill=black] (12.,4.) circle (2.5pt); \draw[color=black] (12.339618676732131,4.854466288803926) node {$9$}; \draw [fill=black] (12.,2.) 
circle (2.5pt); \draw[color=black] (11.431968858910606,1.7684569082107366) node {$8$}; \draw [fill=black] (13.902113032590307,1.381966011250105) circle (2.5pt); \draw[color=black] (14.56336073039487,1.405396981082126) node {$7$}; \draw [fill=black] (15.077683537175254,3.) circle (2.5pt); \draw[color=black] (15.561775529998547,3.674521525635942) node {$6$}; \draw [fill=black] (13.902113032590307,4.618033988749895) circle (2.5pt); \draw[color=black] (14.20030080326626,5.444438670387918) node {$5$}; \draw [fill=black] (10.097886967409693,6.618033988749895) circle (2.5pt); \draw[color=black] (10.433554059306928,7.441268269595276) node {$2$}; \draw [fill=black] (8.922316462824746,5.) circle (2.5pt); \draw[color=black] (8.436724460099573,5.762116106625452) node {$3$}; \draw [fill=black] (10.097886967409693,3.381966011250105) circle (2.5pt); \draw[color=black] (9.43513925970325,3.4929915620716363) node {$4$}; \draw [fill=black] (10.,2.) circle (2.5pt); \draw[color=black] (9.026696841683563,2.2222818171214995) node {$10$}; \draw [fill=black] (12.,6.) circle (2.5pt); \draw[color=black] (12.248853694949979,6.8512958880112835) node {$1$}; \end{scriptsize} \end{tikzpicture} \begin{tikzpicture}[scale=.5][line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(3.5,-8.) rectangle (15.,-1.); \fill[color=white] (10.,-4.) -- (10.,-6.) -- (11.902113032590307,-6.618033988749895) -- (13.077683537175254,-5.) -- (11.902113032590307,-3.381966011250105) -- cycle; \fill[color=white](4.854972311947651,-2.95708556678168) -- (4.854972311947651,-4.95708556678168) -- (6.757085344537957,-5.575119555531575) -- (7.932655849122904,-3.957085566781681) -- (6.757085344537959,-2.3390515780317855) -- cycle; \fill[color=white] (12.,4.) -- (12.,2.) -- (13.902113032590307,1.381966011250105) -- (15.077683537175254,3.) 
-- (13.902113032590307,4.618033988749895) -- cycle; \draw (10.,-4.)-- (10.,-6.); \draw (10.,-6.)-- (11.902113032590307,-6.618033988749895); \draw (11.902113032590307,-6.618033988749895)-- (13.077683537175254,-5.); \draw (13.077683537175254,-5.)-- (11.902113032590307,-3.381966011250105); \draw (11.902113032590307,-3.381966011250105)-- (10.,-4.); \draw (4.854972311947651,-2.95708556678168)-- (4.854972311947651,-4.95708556678168); \draw (4.854972311947651,-4.95708556678168)-- (6.757085344537957,-5.575119555531575); \draw (6.757085344537957,-5.575119555531575)-- (7.932655849122904,-3.957085566781681); \draw (7.932655849122904,-3.957085566781681)-- (6.757085344537959,-2.3390515780317855); \draw (6.757085344537959,-2.3390515780317855)-- (4.854972311947651,-2.95708556678168); \draw (12.,4.)-- (12.,2.); \draw (12.,2.)-- (13.902113032590307,1.381966011250105); \draw (13.902113032590307,1.381966011250105)-- (15.077683537175254,3.); \draw (15.077683537175254,3.)-- (13.902113032590307,4.618033988749895); \draw (13.902113032590307,4.618033988749895)-- (12.,4.); \draw (10.,2.)-- (12.,4.); \draw (7.932655849122904,-3.957085566781681)-- (10.,-4.); \draw (13.902113032590307,4.618033988749895)-- (12.,2.); \draw (15.077683537175254,3.)-- (12.,4.); \draw (13.902113032590307,4.618033988749895)-- (13.902113032590307,1.381966011250105); \draw (15.077683537175254,3.)-- (12.,2.); \draw (13.902113032590307,1.381966011250105)-- (12.,4.); \draw (10.,-4.)-- (13.077683537175254,-5.); \draw (10.,-4.)-- (11.902113032590307,-6.618033988749895); \draw (11.902113032590307,-3.381966011250105)-- (10.,-6.); \draw (11.902113032590307,-3.381966011250105)-- (11.902113032590307,-6.618033988749895); \draw (10.,-6.)-- (13.077683537175254,-5.); \draw (10.097886967409693,6.618033988749895)-- (8.922316462824746,5.); \draw (8.922316462824746,5.)-- (10.097886967409693,3.381966011250105); \draw (10.097886967409693,3.381966011250105)-- (12.,4.); \draw (12.,6.)-- (10.097886967409693,6.618033988749895); \draw (12.,4.)-- (12.,6.); \begin{scriptsize} \draw [fill=black] (10.,-4.) circle (2.5pt); \draw[color=black] (9.79819918683186,-3.087469617134429) node {10}; \draw [fill=black] (10.,-6.) circle (2.5pt); \draw[color=black] (9.29899178703002,-5.9011840523811605) node {8}; \draw [fill=black] (11.902113032590307,-6.618033988749895) circle (2.5pt); \draw[color=black] (12.47576614940536,-6.491156433965153) node {7}; \draw [fill=black] (13.077683537175254,-5.) circle (2.5pt); \draw[color=black] (13.655710912573344,-4.902769252777482) node {6}; \draw [fill=black] (11.902113032590307,-3.381966011250105) circle (2.5pt); \draw[color=black] (12.47576614940536,-2.8151746717879713) node {5}; \draw [fill=black] (4.854972311947651,-2.95708556678168) circle (2.5pt); \draw[color=black] (4.170770316338401,-2.8605571626790476) node {2}; \draw [fill=black] (4.854972311947651,-4.95708556678168) circle (2.5pt); \draw[color=black] (4.080005334556248,-4.948151743668558) node {3}; \draw[color=black] (7.256779696931588,-1.7259948904021396) node {1}; \draw [fill=black] (6.757085344537957,-5.575119555531575) circle (2.5pt); \draw[color=black] (6.076834933763605,-5.810419070599008) node {4}; \draw [fill=black] (7.932655849122904,-3.957085566781681) circle (2.5pt); \draw[color=black] (8.255194496535267,-3.132852108025505) node {9}; \draw [fill=black] (6.757085344537959,-2.3390515780317855) circle (2.5pt); \draw [fill=black] (12.,4.) circle (2.5pt); \draw[color=black] (12.339618676732131,4.854466288803926) node {$9$}; \draw [fill=black] (12.,2.) 
circle (2.5pt); \draw[color=black] (11.431968858910606,1.7684569082107366) node {$8$}; \draw [fill=black] (13.902113032590307,1.381966011250105) circle (2.5pt); \draw[color=black] (14.56336073039487,1.405396981082126) node {$7$}; \draw [fill=black] (15.077683537175254,3.) circle (2.5pt); \draw[color=black] (15.561775529998547,3.674521525635942) node {$6$}; \draw [fill=black] (13.902113032590307,4.618033988749895) circle (2.5pt); \draw[color=black] (14.20030080326626,5.444438670387918) node {$5$}; \draw [fill=black] (10.097886967409693,6.618033988749895) circle (2.5pt); \draw[color=black] (10.433554059306928,7.441268269595276) node {$2$}; \draw [fill=black] (8.922316462824746,5.) circle (2.5pt); \draw[color=black] (8.436724460099573,5.762116106625452) node {$3$}; \draw [fill=black] (10.097886967409693,3.381966011250105) circle (2.5pt); \draw[color=black] (9.43513925970325,3.4929915620716363) node {$4$}; \draw [fill=black] (10.,2.) circle (2.5pt); \draw[color=black] (9.026696841683563,2.2222818171214995) node {$10$}; \draw [fill=black] (12.,6.) circle (2.5pt); \draw[color=black] (12.248853694949979,6.8512958880112835) node {$1$}; \end{scriptsize} \end{tikzpicture} \end{center} In the graph on the left, let $A=\{1,2,3,4,9,10\}$, let $x=9$ and let $y=10$. It is straightforward to verify that with this assignment, the graphs above are related via Proposition \ref{algorithm}, and thus their respective Laplacian simplices are lattice equivalent. \end{example} \begin{remark} It is not obvious which graph operations, aside from the transformations detailed in the proof of Proposition~\ref{algorithm} and those found in Proposition~\ref{tail}, will result in unimodularly equivalent Laplacian simplices. It would be interesting to investigate this phenomenon further. \end{remark} We next provide in Theorem~\ref{thm:bridge} an operation on graphs that preserves reflexivity of Laplacian simplices. We will require the following lemma. \begin{lemma}\label{keylemma} Let $A \in \mathbb{Z}^{k \times k}$. If $\left(\det{A}\right)$ divides $mC_{k i}$ for each $i \in [k]$, where $C_{k i}$ is the cofactor of $A$, and $Ax = \mathbbm{1}$ has an integer solution $x \in \mathbb{Z}^{k}$, then $Aw = [1, \ldots, 1, 1+m]^T$ has an integer solution $w \in \mathbb{Z}^k$. \end{lemma} \begin{proof} Notice we can write \[ Aw = A(x+y) = Ax + Ay= \begin{bmatrix} 1 \\ \vdots \\ 1 \\ 1+m \end{bmatrix} = \begin{bmatrix} 1 \\ \vdots \\ 1 \\ 1 \\ \end{bmatrix} + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ m \end{bmatrix}. \] Solving the system $Ay = [0, \ldots, 0, m]^T$ yields \[ y = A^{-1} \cdot \begin{bmatrix} 0 \\ \vdots \\ 0 \\ m \\ \end{bmatrix} = \frac{1}{\det{A}} C^T \cdot \begin{bmatrix} 0 \\ \vdots \\ 0 \\ m \\ \end{bmatrix} = \frac{m}{\det{A}} \begin{bmatrix} C_{k 1} \\ C_{k 2} \\ \vdots \\ C_{k k} \\ \end{bmatrix} \] in which $C_{k i}$ is the cofactor of $A$. The above is an integer for each $i \in [k]$ by assumption. Set $w_j = x_j + y_j \in \mathbb{Z}$, and the result follows. \end{proof} We apply Lemma \ref{keylemma} when considering a connected graph $G$ on $m=n$ vertices with $A = L_B(i \mid \emptyset)$ for any $i \in [n]$. Here $\det{L_B(i \mid \emptyset)}= \pm \kappa$. Observe in this case the condition $Ax = \mathbbm{1}$ for all $i \in [n]$ is equivalent to $T_G$ being a reflexive Laplacian simplex. \begin{theorem}\label{thm:bridge} Let $G$ and $G'$ be graphs with vertex set $[n]$ such that $T_G$ and $T_{G'}$ are reflexive. 
Suppose $\kappa_G$ divides $nM_{ij}$ and $\kappa_{G'}$ divides $nM_{ij}'$ for all $i,j \in [n-1]$, where $M_{ij} = \det{L_B(i,n \mid j)}$ with $L$ as the Laplacian matrix of $G$, and $M_{ij}'$ is defined similarly. Let $H$ be the graph formed by $G$ and $G'$ with $V(H)= V(G) \uplus V(G')$ and $E(H)= E(G) \uplus E(G') \uplus \{i,i'\}$ where $i \in V(G)$ and $i' \in V(G')$. Then $T_{H}$ is reflexive. \end{theorem} \begin{proof} To show $T_H$ is reflexive, we show $T_H^*$ is a lattice simplex. Label the vertices of $H$ such that $V(G) = [n]$, $V(G')=[2n] \setminus [n]$. Let $L_B$, $L_B(G)$, and $L_B(G')$ be the Laplacian matrices with basis $B$ of the graphs $H$, $G$ and $G'$, respectively. Then $L_B$ is of the form \[ \left[ \begin{array}{ccc|r|ccc} & & & 0 & & & \\ & L_B(G) & & \vdots & & 0 & \\ & & & 0 & & & \\ & & & 1 & & &\\ \cline{1-3} \cline{5-7} & & & -1 & & &\\ & 0 & & 0 & & L_B(G') &\\ & & & \vdots & & & \\ & & & 0 & & & \\ \end{array} \right]. \] For $1 \le i \le 2n$, the vertex $v_i$ of $T_H^*$ is the solution to $L_B(i \mid \emptyset)v_i = \mathbbm{1}$. We consider two cases: $i \in [n-1]$ and $i = n$. The cases $i =n+1$ and $i \in [2n] \setminus [n+1]$ follow without loss of generality. First suppose $i \in [n-1]$. Then $L_B(i \mid \emptyset) v_i = \mathbbm{1}$ can be solved the following way. Multiply each side of the equation on the left by the $(2n-1) \times (2n-1)$ unimodular matrix \[ \left[ \begin{array}{ccc|cc|cccc} & & & 0 & 0 & & & &\\ & I_{n-2} & & \vdots & \vdots & & 0 & & \\ & & & 0 & 0 & & & &\\ \cline{1-3} & & & 1 & 1 & 1 & \cdots &\cdots & 1 \\ & & & 0 & 1 & 1 & \cdots & \cdots & 1 \\ \cline{6-9} & 0 & & 0 & 0 & & & &\\ & & & \vdots & \vdots & & I_{n-1} & &\\ & & & 0 & 0 & & & &\\ \end{array}\right] \] to obtain \begin{equation*} \begin{split} \left[ \begin{array}{ccc|c|ccc} & & & 0 & & & \\ & L_B(G)(i \mid \emptyset) & & \vdots & & 0 & \\ & & & \vdots & & & \\ & & & 0 & & &\\ \cline{1-3} & & & -1 & 0 & \cdots & 0 \\ \cline{5-7} & & & 0 & & &\\ & 0 & & \vdots & & L_B(G')(1 \mid \emptyset) & \\ & & & \vdots & & & \\ & & & 0 & & & \\ \end{array}\right] v_i &= \begin{bmatrix} 1 \\ \vdots \\ 1 \\ n+1 \\ n \\ \hline 1 \\ \vdots \\ \vdots \\ 1 \\ \end{bmatrix}. \end{split} \end{equation*} We write $(v_i)_k$ to denote the $k\textsuperscript{th}$ coordinate of $v_i$. Then $(v_i)_k \in \mathbb{Z}$ for all $k \in [n-1]$ by Lemma \ref{keylemma}. Observe from the above multiplication $(v_i)_n=-n$. Finally, $(v_i)_k \in \mathbb{Z}$ for all $k$, $n+1 \le k \le 2n-1$, as a consequence of $T_{G'}$ being reflexive, i.e., $T_{G'}^*$ is a lattice polytope. Now suppose $i = n$. Replace $I_{n-2}$ with $I_{n-1}$ and follow the same argument as above. Then $(v_i)_n=-n$, and it follows all other coordinates of $v_i$ are integers since $T_G^*$ and $T_{G'}^*$ are lattice polytopes. \end{proof} \begin{remark} The bridge graph construction described in Theorem~\ref{thm:bridge} can be obtained by applying Proposition~\ref{algorithm} to the wedge of two graphs $G$ and $G'$ with a leaf attached to the wedge point. Proposition \ref{algorithm} shows the Laplacian simplex associated to the wedge of $G$ and $G'$ is lattice equivalent to the Laplacian simplex associated to the bridge of $G$ and $G'$. Thus, the wedge of $G$ and $G'$ is reflexive if $G$ and $G'$ satisfy the conditions in Theorem~\ref{thm:bridge}. \end{remark} The following proposition shows that Theorem \ref{thm:bridge} applies to graphs such as the one given in Example~\ref{5}. 
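Before stating it, we note that the divisibility hypothesis of Theorem~\ref{thm:bridge} can be checked mechanically for a given pair of graphs. The following sketch (in Python, assuming NumPy is available; the function names are ours and purely illustrative) tests the sufficient form of the condition used in the proof of Lemma~\ref{lem:completebridge} below, namely that $\kappa$ divides $n\det L(i,n\mid j,n)$ for all $i,j\in[n-1]$, for $C_5$ and $K_5$.
\begin{verbatim}
# Sketch: check that kappa divides n*det(L(i,n | j,n)) for all i,j in [n-1],
# the sufficient form of the hypothesis of Theorem thm:bridge used in the
# proof of Lemma lem:completebridge.  (Assumes NumPy; names are illustrative.)
import numpy as np
from itertools import product

def cycle_laplacian(n):
    L = 2 * np.eye(n, dtype=int)
    for i in range(n):
        L[i, (i + 1) % n] -= 1
        L[i, (i - 1) % n] -= 1
    return L

def complete_laplacian(n):
    return n * np.eye(n, dtype=int) - np.ones((n, n), dtype=int)

def bridge_condition(L, kappa):
    n = L.shape[0]
    for i, j in product(range(n - 1), repeat=2):
        minor = np.delete(np.delete(L, [i, n - 1], axis=0), [j, n - 1], axis=1)
        M = round(np.linalg.det(minor))   # an integer for an integer matrix
        if (n * M) % kappa != 0:
            return False
    return True

n = 5
print(bridge_condition(cycle_laplacian(n), kappa=n))                # True
print(bridge_condition(complete_laplacian(n), kappa=n ** (n - 2)))  # True
\end{verbatim}
The same check can be run for any candidate pair of graphs before invoking Theorem~\ref{thm:bridge}.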
\begin{prop} If $G=C_{2k+1}$ and $G'=K_{2k+1}$, then the Laplacian simplex of the bridge graph between these is reflexive. \end{prop} \begin{proof} For cyclic graphs on $n$ vertices, the number of spanning trees is $n$. This and Lemma~\ref{lem:completebridge} show that both cyclic graphs and complete graphs satisfy the condition $\kappa$ divides $|V(G)|\cdot M_{ij}$, as described in Lemma \ref{keylemma}. We show in later sections that $T_{K_n}$ and $T_{C_{2k+1}}$ are reflexive Laplacian simplices. \end{proof} \begin{lemma}\label{lem:completebridge} For all $n \ge 1$, $G=K_n$ satisfies the conditions of Lemma \ref{keylemma}; that is, for each $i \in [n-1]$, $\kappa$ divides $nM_{ij}$ for each $1 \le j \le n-1$. Here $M_{ij} = \det{L_B(i, n \mid j)}$. \end{lemma} \begin{proof} It is sufficient to show for each $1 \le i,j \le n-1$, $\kappa$ divides $nM_{ij}$ where $M_{ij} = \det{L(i, n \mid j, n)}$. By Lemma \ref{determinant}, this implies the result. For $G=K_n$, recall Cayley's formula yields $\kappa = n^{n-2}$. Then we must show $n^{n-3}$ divides $M_{ij}$. There are two cases to consider. Suppose $i=j$. Then using row operations on $L(i,n \mid i,n) \in \mathbb{Z}^{(n-2) \times (n-2)}$ which preserve the determinant, we have \begin{equation*} \begin{split} M_{ii} &= \det{\left[ \begin{array}{rrrrr} (n-1) & -1 & \cdots & \cdots & -1 \\ -1 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots &\ddots & -1 \\ -1 & \cdots & \cdots & -1 & (n-1) \\ \end{array} \right]} \\ &= \det{ \begin{bmatrix} 2 & 2 & \cdots & \cdots & 2 \\ -1 & (n-1) & -1 & \cdots & -1 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & -1 \\ -1 & \cdots & \cdots & -1 &(n-1) \\ \end{bmatrix}} \\ &= 2\det{ \begin{bmatrix} 1 & 1 & \cdots & \cdots & 1 \\ -1 & (n-1) & -1 & \cdots & -1 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & -1 \\ -1 & \cdots & \cdots & -1 & (n-1) \\ \end{bmatrix}} \\ &= 2\det{ \begin{bmatrix} 1 & 1 & \cdots & \cdots & 1 \\ 0 & n & 0 & \cdots & 0 \\ \vdots & 0 & n & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & n \\ \end{bmatrix}} \\ &= 2n^{n-3}. \end{split} \end{equation*} In the case $i \ne j$, $L(i,n \mid j, n) \in \mathbb{Z}^{(n-2) \times (n-2)}$ contains exactly one row and one column with all entries of $-1$. Without loss of generality we have \begin{equation*} \begin{split} M_{ij} &= \det{\begin{bmatrix} -1 & -1 & \cdots & \cdots & -1 \\ -1 & (n-1) & -1 & \cdots & \vdots \\ \vdots & -1 & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & -1 \\ -1 & \cdots & \cdots & -1 & (n-1) \\ \end{bmatrix}} \\ &= \det{\begin{bmatrix} -1 & -1 & \cdots & \cdots & -1 \\ 0 & n & 0 & \cdots & 0 \\ \vdots & 0 & n & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & n \\ \end{bmatrix}} \\ &= -n^{n-3}. \end{split} \end{equation*} \end{proof} \section{Trees} Consider the case where $G=P_k$, a path on $k$ vertices. Label the vertices along the path with the elements of $[k]$ in increasing order.
Then $L$ and consequently $L_B$ have the form \[ L = \left[ \begin{array}{rrrrrr} 1 & -1 & 0 & \cdots & \cdots & 0 \\ -1 & 2 & -1& 0 & & \vdots \\ 0 & -1 & 2 & -1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & \ddots & 2 & -1 \\ 0 & \cdots & \cdots & 0 & -1 & 1 \end{array} \right] \hspace{.2in} L_B = \left[ \begin{array}{rrrrrr} 1 & 0 & \cdots & \cdots & 0 \\ -1 & 1 & 0 & & \vdots \\ 0 & -1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & -1 & 1 \\ 0 & \cdots & \cdots & 0 & -1 \end{array} \right] \] Observe that multiplication by the lower triangular matrix of all ones yields \[ L_B \cdot \begin{bmatrix} 1 & 0 & \cdots & \cdots & 0 \\ \vdots & \ddots & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & \vdots \\ \vdots & & & \ddots & 0 \\ 1 & \cdots & \cdots & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & \ddots & & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & 1 \\ -1 & \cdots & \cdots & -1 & -1 \end{bmatrix} \] Since the lower triangular matrix is an element in ${\text GL}_{k-1}(\mathbb{Z})$, it follows that $T_{P_k}$ is lattice equivalent to \[ S_{k-1}(1) := \conv{ e_1, e_2, \cdots , e_{k-1}, - \sum_{i=1}^{k-1} e_i} \, . \] We leave it as an exercise for the reader to show that $S_{k-1}(1)$ is the unique reflexive $(k-1)$-polytope of minimal volume. This extends to all trees as follows. \begin{prop}\label{prop:trees} Let $G$ be a tree on $n$ vertices. Then $T_G$ is unimodularly equivalent to $S_{n-1}(1)$. \end{prop} \begin{proof} Let $G$ be a tree on $n$ vertices. Then $T_G$ is a simplex that contains the origin in its strict interior and has normalized volume equal to $n$, since $G$ has only one spanning tree. Consider the triangulation of $T_G$ that consists of creating a pyramid at the origin over each facet. Then \[ \text{vol}(T_G) = \sum_{\text{facets } F} \text{vol}\left(\conv{\{0\} \cup F}\right) = n \, . \] There are $n$ facets, and each pyramid has normalized volume at least $1$, so each must have normalized volume exactly $1$. Applying a unimodular transformation, we can assume that $n-1$ of the vertices of $T_G$ are the standard basis vectors of $\mathbb{R}^{n-1}$ and that the remaining vertex is a single integer vector in the strictly negative orthant (so that the origin is in the interior of $T_G$). Because the normalized volume of the pyramid over each facet is equal to $1$, it follows that the final vertex is $-\mathbbm{1}$. \end{proof} \begin{corollary}\label{cor:treeunim} The $h^*$-vector of the Laplacian simplex for any tree is $(1,1,\ldots,1)$, hence is unimodal. \end{corollary} \begin{corollary}\label{tree transform} Let $G$ be a tree on $n$ vertices with Laplacian matrix $L_B$. Then there exists $U \in {\text GL}_{n-1}(\mathbb{Z})$ such that \[ L_B \cdot U = \left[ \begin{array}{rrrr} 1 & 0 & \cdots & 0 \\ 0 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & 1 \\ -1 & \cdots & \cdots & -1 \end{array} \right] \] \end{corollary} The next proposition asserts that attaching an arbitrary tree with $k$ vertices to a graph on $n$ vertices yields a lattice isomorphism between the resulting Laplacian simplex and the Laplacian simplex obtained by attaching any other tree with $k$ vertices at the same root. \begin{prop}\label{tail} Let $G$ be a connected graph on $n$ vertices, and let $v$ be a vertex of $G$. Let $G'$ be the graph obtained from $G$ by attaching $k$ vertices such that $G'$ restricted to the vertex set $ \{ v \} \cup [k]$ forms a tree, call it $T$.
The edges of $G'$ are the edges from $G$ along with any edges among the vertices $\{v\} \cup [k]$. Let $P$ be the graph obtained from $G$ by attaching $k$ vertices such that $P$ restricted to the vertex set $\{ v\} \cup [k]$ forms a path. Then $T_{G'} \cong T_{P}.$ \end{prop} \begin{proof} The reduced Laplacian matrix associated to $T_{G'}$ is the following $(n+k) \times (n+k-1)$ matrix: \[ \left[ \begin{array}{ccc|ccc} & & & & & \\ & L_B(G) & & & 0 & \\ & & & & & \\ & & & & & \\ \cline{4-6} & & & & & \\ \cline{1-3} & & & & & \\ & 0 & & & L_B(T) & \\ & & & & & \\ \end{array} \right] \] Here $L_B(T) \in \mathbb{Z}^{(k+1) \times k}$ is the Laplacian matrix for $T$, the tree on $(k+1)$ vertices. Let $U \in {\text GL}_{k}(\mathbb{Z})$ be the matrix such that $L_B(T) \cdot U$ gives the matrix with vertex set $S_{k}(1)$ as in Corollary~\ref{tree transform}. Then we have \[ \left[ \begin{array}{ccc|ccc} & & & & & \\ & L_B(G) & & & 0 & \\ & & & & & \\ & & & & & \\ \cline{4-6} & & & & & \\ \cline{1-3} & & & & & \\ & 0 & & & L_B(T) & \\ & & & & & \\ \end{array} \right] \cdot \left[ \begin{array}{ccc|ccc} & & & & & \\ & I_{n-1} & & & 0 & \\ & & & & & \\ \cline{1-6} & & & & & \\ & & & & & \\ & 0 & & & U & \\ & & & & & \\ \end{array} \right] = \left[ \begin{array}{ccc|ccc} & & & & & \\ & L_B(G) & & & 0 & \\ & & & & & \\ & & & & & \\ \cline{4-6} & & & & & \\ \cline{1-3} & & & & & \\ & 0 & & & L_B(P) & \\ & & & & & \\ \end{array} \right] \] For any set of $k$ vertices we attach to a vertex $v \in V(G)$ to obtain a tree on the vertex set $\{v\} \cup [k]$, we get a corresponding unimodular matrix $U$ such that the above multiplication holds. The determinant of the $(n-1+k) \times (n-1+k)$ transformation matrix is equal to the determinant of $U$, which is $\pm 1$. Then $T_{G'}$ is lattice equivalent to $T_P$ for any such $G'$. \end{proof} \begin{remark} It follows from Theorem~\ref{thm:bridge} that bridging a tree to a graph $G$ with $T_G$ reflexive and $L(G)$ satisfying the appropriate division condition on minors will result in a new reflexive Laplacian simplex. Further, Proposition~\ref{tail} shows that the equivalence class of the resulting reflexive simplex is independent of the choice of tree used in the attachment. \end{remark} \section{Cycles} Let $C_n$ denote the cycle with $n$ vertices. In this section, we show that odd cycles are reflexive and have unimodal $h^*$-vectors, but fail to be IDP. We show that whiskering even cycles results in reflexive Laplacian simplices. Finally, we determine the $h^*$-vectors for $T_{C_n}$ when $n$ is an odd prime. \subsection{Reflexivity and Whiskering} \begin{theorem}\label{cycle} For $n \ge 3$, the simplex $T_{C_n}$ is reflexive if and only if $n$ is odd. For $k \ge 2$, the simplex $T_{C_{2k}}$ is $2$-reflexive. \end{theorem} \begin{proof} Let $C_n$ be a cycle with vertex set $[n]$ and vertices labeled cyclically. Then $L$ and consequently $L_B$ have the form (when rows and columns are suitably labeled) \[ L = \left[ \begin{array}{rrrrrr} 2 & -1& 0 & \cdots & 0 & -1 \\ -1& 2 & -1 & \ddots & & 0 \\ 0 & -1& 2 & -1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & & \ddots &-1 & 2 & -1 \\ -1 & 0 & \cdots & 0 & -1 & 2 \end{array}\right] \hspace{.2in} L_B = \left[ \begin{array}{rrrrr} 2 & 1 & \cdots & \cdots & 1 \\ -1& 1 & 0 & \cdots & 0 \\ 0 & -1& 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -1 & 1 \\ -1 & -1 & \cdots & -1 & -2 \end{array}\right] \, . 
\] To show that $T_{C_n}$ is reflexive, we show $T_{C_n}^*= \{x \mid L_B x \le \mathbbm{1} \}$ is a lattice polytope. Each intersection of $(n-1)$ of these facet hyperplanes will yield a unique vertex of $T_{C_n}^*$, since the rank of $L_B$ is $n-1$. For each $i \in [n]$, let $v_i \in \mathbb{R}^{n-1}$ be the vertex that satisfies $L_B(i \mid \emptyset) \cdot v_i = \mathbbm{1}$. Solving the appropriate system of linear equations yields \begin{equation*} \begin{split} v_1 &= \left( \frac{1-n}{2}, \frac{3-n}{2}, \frac{5-n}{2}, \cdots , \frac{n-5}{2}, \frac{n-3}{2} \right) = \left( \frac{(2j-1)-n}{2} \right)_{j=1}^{n-1} \\ v_i &= \left( \left( \frac{(2j+1)+n-2i}{2} \right)_{j=1}^{i-1}, \left(\frac{(2j+1) -n - 2i}{2} \right)_{j=i}^{n-1} \right), \text{ for } 2 \le i \le n-1 \\ v_n &= \left( \frac{3-n}{2} , \frac{5-n}{2}, \frac{7-n}{2}, \cdots , \frac{n-3}{2}, \frac{n-1}{2} \right)= \left(\frac{(2j+1)-n}{2} \right)_{j=1}^{n-1} \\ \end{split} \end{equation*} These are the vertices of $T_{C_n}^*$. Note that $v_i \in \mathbb{Z}^{n-1}$ for every $i$ if and only if $n$ is odd. Then $T_{C_n}$ is reflexive if and only if $n$ is odd. For the even case, observe the coordinates of each vertex of $2\cdot T_{C_{2k}}^*$ are relatively prime. Then each of these vertices is primitive. Thus, for $n=2k$ each coordinate of each vertex of $T_{C_{2k}}^*$ is a multiple of $\frac{1}{2}$, which allows us to write \[ T_{C_{2k}} = \left\{ x \mid \frac{1}{2} \tilde{A}x \le \mathbbm{1} \right\}= \left\{x \mid \tilde{A}x \le 2 \cdot \mathbbm{1} \right\} \] where $\tilde{A} \in \mathbb{Z}^{n \times (n-1)}$. The facets of $T_{C_{2k}}$ have supporting hyperplanes $\langle r_i, x \rangle = 2$ where $r_i$ is the $i$\textsuperscript{th} row of $\tilde{A}$. Thus $T_{C_{2k}}$ is a $2$-reflexive Laplacian simplex. \end{proof} \begin{example} Below are the dual polytopes to $T_{C_n}$ for small $n$. \begin{itemize} \item $T_{C_3}^* = \conv{(-1,0), (1,-1), (0,1)}$ \item $T_{C_4}^* = \conv{ (-\frac{3}{2}, - \frac{1}{2}, \frac{1}{2}), (\frac{3}{2}, -\frac{3}{2}, -\frac{1}{2}), (\frac{1}{2}, \frac{3}{2}, - \frac{3}{2}), (-\frac{1}{2}, \frac{1}{2}, \frac{3}{2})}$ \item $T_{C_5}^* = \conv{(-2,-1,0,1), (2,-2,-1,0), (1,2,-2,-1) , (0,1,2,-2), (-1,0,1,2)}$ \end{itemize} \end{example} Although $T_{C_{2k}}$ is not reflexive, we show next that whiskering $C_{2k}$ results in a graph $W(C_{2k})$ such that $T_{W(C_{2k})}$ is reflexive. The technique of whiskering graphs has been studied previously in the context of Cohen-Macaulay edge ideals, see \cite[Theorem 4.4]{DochtermannEngstromEdgeIdeals} and \cite{VillarrealCMGraphs}. \begin{definition}\label{whisker} To add a \emph{whisker} at a vertex $x \in V(G)$, one adds a new vertex $y$ and the edge connecting $x$ and $y$. Let $W(G)$ denote the graph obtained by whiskering all vertices in $G$. We call $W(G)$ the \emph{whiskered graph of $G$}. If $V(G) = \{ x_1, \ldots, x_n\}$ and $E(G) = E$, then $V(W(G)) = V(G) \cup \{y_1, \ldots, y_n\}$ and $E(W(G))=E \cup \{ \{x_1, y_1\}, \ldots, \{x_n, y_n \} \}$. \end{definition} \begin{prop}\label{evenreflexive} $T_{W(C_{n})}$ is reflexive for even integers $n \ge 2$. \end{prop} \begin{proof} $W(C_n)$ is a graph with vertex set $[2n]$ and $2n$ edges. Label the vertices of the cycle with $[n]$ in a cyclic manner. Label the vertices of each whisker with $i$ and $n+i$ where $i\in [n]$. The Laplacian matrix has the following form.
\[ L=\left[\begin{array}{ ccc | ccc} & & & & & \\ & L + I_n& & &-I_n & \\ & & & & & \\ \hline & & & & & \\ & -I_n& & & I_n& \\ & & & & & \\ \end{array}\right] \] Consequently if $A$ is the $n \times (n-1)$ matrix given by Equation~\eqref{eqn:A}, then \[ L_B=\left[\begin{array}{ ccc | cccc} & & & & & & \\ & L_B(C_n) + A& & & A^T & & \\ & & & & & & \\ & & & 1 & \cdots & \cdots & 1\\ \hline & & & & & & \\ & -A & & & -A^T& & \\ & & & & & & \\ & & & -1 &\cdots & \cdots &-1 \\ \end{array}\right]. \] We show $T_{W(C_n)}$ is reflexive by showing $T_{W(C_n)}^*$ is a lattice polytope. Each vertex of the dual is a solution to $L_B(i \mid \emptyset) v_i = \mathbbm{1}$. We consider the following cases. \textbf{Case:} $1 \le i \le n$. Multiply both sides of $L_B(i \mid \emptyset) v_i = \mathbbm{1}$ by the $(2n-1) \times (2n-1)$ upper diagonal matrix with the following entries. \[ x_{\ell k}= \begin{cases} 1, & \text{if $\ell = k$} \\ 1, & \text{if $\ell < k$ and $\{v_\ell, v_k\}$ is a whisker} \\ -1, & \text{if $n < \ell = k-1$} \end{cases} \] In this matrix each of the first $n-1$ rows will have exactly two non-zero entries of value $1$, which corresponds to adding the two rows of $L_B(i \mid \emptyset)$ that are indexed by the labels of a whisker in the graph. The last $n$ rows will have an entry of $1$ along the diagonal and an entry of $-1$ on the superdiagonal, which corresponds to subtracting consecutive rows in $L_B(i \mid \emptyset)$ to achieve cancellation. We obtain the following system of linear equations. \[ \left[\begin{array}{ ccc | cccc} & & & & & & \\ & L_B(C_n)(i \mid \emptyset) & & & & 0 & \\ & & & & & & \\ & & & & & & \\ \hline & & & 0 & & & \\ & -I_{n-1} & & \vdots & & I_{n-1} & \\ & & & 0 & & & \\ 0& \cdots &0 & -1 &\cdots & \cdots &-1 \\ \end{array}\right] v_i = \begin{bmatrix} 2 \\ \vdots \\ 2 \\ 0 \\ \vdots \\ 0 \\ 1 \\ \end{bmatrix} \] Let $(v_i^*)_j$ denote the $j\textsuperscript{th}$ coordinate of the vertex $v_i \in \mathbb{Q}^{n-1}$ of $T_{C_n}^*$ described in Proposition \ref{cycle}. Then the vertex $v_i$ of $T_{W(C_n)}^*$ has the following form. \[ (v_i)_j = \begin{cases} 2(v_i^*)_j, & \text{if $1 \le j \le n-1$} \\ -1 - \sum_{k=1}^{n-1} 2(v_i^*)_k, & \text{if $j=n$} \\ 2(v_i^*)_{j-n}, & \text{if $n+1 \le j \le 2n-1$} \\ \end{cases} \] Since $2(v_i^*)_j \in \mathbb{Z}$ by Proposition \ref{cycle} for $1 \le j \le n-1$, then $v_i \in \mathbb{Z}^{2n-1}$. \\ \textbf{Case:} $n+2 \le i \le 2n$. The strategy is to multiply the equality $L_B( i \mid \emptyset) v_i = \mathbbm{1}$ by the matrix that performs the following row operations. Let $r_m \in \mathbb{Z}^{2n-1}$ denote the $m\textsuperscript{th}$ row of $L_B( i \mid \emptyset)$. For each whisker with vertex labels $\{m, n+m\}$, replace $r_m$ with $r_m + r_{n+m}$ for $m \in [n]$. Row $i-n$ will not have a row to add because the index of its whisker is the index of the deleted row. Since each column in $L_B$ sums to $0$, the negative sum of all the rows of $L_B(i \mid \emptyset)$ is equal to the row removed. We recover the missing row by replacing $r_{i-n}$ with $-\sum_{k=1}^{2n-1} r_k$ for $r_k \in L_B(i \mid \emptyset)$. Then as in the previous case, we want to replace row $r_k$ with $r_k - r_{k+1}$ for $n+1 \le k \le 2n-2$. Here $r_{i-n}$ plays the role of the deleted $r_i$. We obtain a similar system of linear equations found in the first case. The vertex $v_i$ of $T_{W(C_n)}^*$ has the following form. 
\[ (v_i)_j = \begin{cases} 2(v_i^*)_j, & \text{if $1 \le j \le n-1$} \\ -1 - \sum_{k=1}^{n-1} 2(v_i^*)_k, & \text{if $j = n$} \\ 2(v_i^*)_{j-n}, & \text{if $n+1 \le j \le 2n-1$ and $j \ne i-1, i$} \\ 2(v_i^*)_{j-n} + 2n, & \text{if $j=i-1$} \\ 2(v_i^*)_{j-n} - 2n, & \text{if $j=i$} \\ \end{cases} \] Observe in the case $i=2n$, the last equality is not applicable since $j \in [2n-1]$. Then $v_i \in \mathbb{Z}^{2n-1}$. \\ \textbf{Case:} $i=n+1$. Here $(v_i)_{i-1} = (v_i)_{n} = -(2n-1) - \sum_{k=1}^{n-1} 2(v_i^*)_k \in \mathbb{Z}$ and the other coordinates are as described above. Then $v_i \in \mathbb{Z}^{2n-1}$. \end{proof} We extend Proposition \ref{evenreflexive} to a more general result, that whiskering a graph whose Laplacian simplex is $2$-reflexive results in a graph whose Laplacian simplex is reflexive. Although even cycles are the only known graph type to result in $2$-reflexive Laplacian simplices, we include the following result. \begin{prop} If $G$ is a connected graph on $n$ vertices such that $T_G$ is $2$-reflexive, then $T_{W(G)}$ is reflexive for all $n \ge 2$. \end{prop} \begin{proof} If $T_G$ is $2$-reflexive, then each vertex $v_i$ of $T_G^*$ satisfies $2 v_i \in \mathbb{Z}^{n-1}$ for each $1 \le i \le n$. As in the proof of Proposition \ref{evenreflexive}, we can find descriptions of the vertices of $T_{W(G)}^*$ in terms of the coordinates from vertices of $T_G^*$ to show they are lattice points. The result follows. \end{proof} Given a graph $G$ such that $T_G$ is reflexive, we have already seen that attaching a tree on $|V(G)|$ vertices to obtain a new graph $G'$ on $2\cdot |V(G)|$ vertices results in the reflexive Laplacian simplex $T_{G'}$. Whiskering a graph also preserves the reflexivity of $T_G$, as seen in the following result. \begin{prop} If $G$ is a connected graph on $n$ vertices such that $T_G$ is reflexive, then $T_{W(G)}$ is reflexive for all $n \ge 1$. \end{prop} \begin{proof} If $T_G$ is reflexive, then vertices of $T_G^*$ are integer and satisfy $L_B(i \mid \emptyset)v_i = \mathbbm{1}$ for all $1 \le i \le n$. Observe $2v_i \in \mathbb{Z}^{n-1}$ satisfies $L_B(i \mid \emptyset) 2v_i = 2\cdot \mathbbm{1}$. Following the proof technique in Proposition \ref{evenreflexive}, we can find descriptions of the vertices of $T_{W(G)}^*$ in terms of the coordinates from vertices of $T_G^*$ to show they are lattice points. \end{proof} \subsection{$h^*$-Unimodality} For odd $n$, our proof of the following theorem can be interpreted as establishing the existence of a weak Lefschetz element in the quotient of the semigroup algebra associated to $\cone{T_{C_n}}$ by the system of parameters corresponding to the ray generators of the cone. This proof approach is not universally applicable, as there are examples of reflexive IDP simplices with unimodal $h^*$-vectors for which this proof method fails~\cite{BraunDavisFreeSum}. \begin{theorem}\label{unimodal} For odd $n$, $h^*(T_{C_n})$ is unimodal. \end{theorem} \begin{proof} Recall from Lemma~\ref{lem:fpp} that $h^*_i(T_{C_n})$ is the number of lattice points in $\Pi_{T_{C_n}}$ at height $i$. Theorem~\ref{cycle} shows $h^*(T_{C_n})$ is symmetric. Our goal is to prove that for $i< \lfloor n/2\rfloor$ we have $h_i^*\leq h^*_{i+1}$. Combined with the symmetry, this will show that $h^*(T_{C_n})$ is unimodal. While $\kappa=n$ for $C_n$, we will freely use both $\kappa$ and $n$ to denote this quantity, as it is often helpful to distinguish between the number of spanning trees and the number of vertices.
Lattice points in the fundamental parallelepiped of $T_{C_n}$ can be described as follows: \[ \mathbb{Z}^n \cap \left\{\frac{1}{\kappa n} b \cdot [L_B \mid \mathbbm{1}] \mid 0 \le b_i < \kappa n, b_i\in \mathbb{Z}_{\geq 0}, \sum_{i=1}^n b_i \equiv 0 \bmod \kappa n \right\}. \] We will use the modular equation above extensively in our analysis. Denote the height of a lattice point in $\Pi_{T_{C_n}}$ by \[ h(b):= \dfrac{\sum_{i=1}^n b_i}{n \kappa} \in \mathbb{Z}_{\ge 0} \, . \] We first show that every lattice point in $\Pi_{T_{C_n}}$ arising from $b$ satisfies \[ \dfrac{(k-j+1)(b_1-b_n)}{\kappa n} + \dfrac{b_j - b_{k+1}}{\kappa n} \in \mathbb{Z} \] for each $1 \le j < k \le n-1$. Since the lattice point lies in $\Pi_{T_{C_n}}$, we have the following constraint equations: \[ \frac{b_1 - b_n + b_i - b_{i+1}}{\kappa n} \in \mathbb{Z} \] for each $1 \le i \le n-1$. Summing any consecutive set of these equations where $1 \le j \le k \le n-1$ yields \[ \sum_{i=j}^k \left( \dfrac{b_1-b_n}{\kappa n} + \dfrac{b_i - b_{i+1}}{\kappa n} \right) \in \mathbb{Z} \, . \] The result follows. Thus, each vector $b$ corresponding to an integer point in $\Pi_{T_{C_n}}$ satisfies $\kappa \mid (b_1 - b_n)$, which follows from setting $j=1$ and $k=n-1$. We next claim that every lattice point in $\Pi_{T_{C_n}}$ arises from $b \in \mathbb{Z}^n$ such that $b_i \equiv b_j \mod(\kappa)$ for each $1 \le i, j \le n$. To prove this, set $\frac{b_1-b_n}{\kappa}= B \in \mathbb{Z}.$ Then for each $1 \le i \le n-1$, our constraint equation becomes $\frac{B}{n} + \frac{b_i-b_{i+1}}{\kappa n} = C$ for some $C \in \mathbb{Z}$. Then $\frac{b_i - b_{i+1}}{\kappa} = Cn - B \in \mathbb{Z}$ holds for each $i$. The result follows. \textbf{First Major Claim:} For $n$ odd, any lattice point in $\Pi_{T_{C_n}}$ arises from $b \in \mathbb{Z}^n$ such that $b_i \equiv 0 \mod(\kappa)$ for each $1 \le i \le n$. To prove this, let $b_i = m_i \kappa + \alpha$ such that $0 \le m_i < \kappa$ and $0 \le \alpha < \kappa$. Constraint equations yield \[ \frac{b_1-b_n + b_i - b_{i+1}}{\kappa n} = \frac{m_1-m_n+m_i-m_{i+1}}{n} \in \mathbb{Z} \] using $\kappa = n$. Multiplying the numerator of the $i$\textsuperscript{th} of these expressions by $i$ and summing over $1 \le i \le n-1$ yields \[ \sum_{i=1}^{n-1} i(m_1 - m_n + m_i - m_{i+1}) = \frac{n(n-1)}{2}m_1 +\sum_{i=1}^{n-1} m_i - (n-1)m_n - \frac{n(n-1)}{2}m_n, \] which is divisible by $n$. Call the resulting sum $An$ for some $A \in \mathbb{Z}$. Finally, notice the last constraint equation (corresponding to $h(b)$) can be written \begin{equation*} \begin{split} \frac{\sum_{i=1}^n b_i}{\kappa n} &= \frac{\sum_{i=1}^n m_i + \alpha}{n} \\ &= \frac{m_n + An - \frac{n(n-1)}{2}m_1 + (n-1)m_n + \frac{n(n-1)}{2}m_n + \alpha}{n} \in \mathbb{Z}. \end{split} \end{equation*} Then $n$ odd implies $n$ divides $\frac{n(n-1)}{2}$ so that $n$ divides $\alpha$. Since $0 \le \alpha < n$, then $\alpha = 0$ as desired. \textbf{Second Major Claim:} Consider $T_{C_n}$ for odd $n$. If $p \in \Pi_{T_{C_n}} \cap \mathbb{Z}^n$ arises from $b$ with $h(b) < \frac{n-1}{2}$, then $ p + (0, \cdots, 0, 1)^T \in \Pi_{T_{C_n}} \cap \mathbb{Z}^n.$ To establish this, it suffices to prove that for every $p = \frac{1}{n^2}b \cdot [L_B \mid \mathbbm{1}] \in \Pi_{T_{C_n}} \cap \mathbb{Z}^n$ such that $h(b)< \frac{n-1}{2}$, we have $b_i < n(n-1)$ for each $i$. This would imply \[ p + (0, \cdots, 0, 1)^T = \frac{1}{n^2} (b+n \mathbbm{1}) \cdot [L_B \mid \mathbbm{1}] \in \Pi_{T_{C_n}} \cap \mathbb{Z}^n \, , \] providing an injection from the lattice points in $\Pi_{T_{C_n}}$ at height $i$ to those at height $i+1$.
Constraint equations yield, using the same notation as in the proof of our first major claim, that \[ -m_{j-1} + 2m_j - m_{j+1} \in n\mathbb{Z} \] for each $1 \le j \le n$, with indices taken modulo $n$. Note that this comes from subtracting the two integers \[ \frac{m_1+m_j- m_{j+1} - m_n}{n} - \frac{m_1+m_{j-1} - m_j - m_n}{n} = \frac{2m_j - (m_{j-1} + m_{j+1})}{n} \in \mathbb{Z} \] for each $2 \le j \le n-1$, as well as \[ \frac{2m_1 - m_2 -m_n}{n}, \frac{-(m_1 + m_{n-1} - 2m_n)}{n} \in \mathbb{Z} \, . \] For a contradiction, suppose there exists a $j$ such that $b_j = n(n-1)$. Then $m_j = n-1$. Constraints on the other variables $m_i$ imply \[ 0 \le \frac{2(n-1)-(m_{j-1}+m_{j+1})}{n} \le 1 \implies 2(n-1)-(m_{j-1} + m_{j+1}) = 0 \text{ or } n. \] \noindent Case 1: If the above is $0$, then \[ 2(n-1) = m_{j-1} + m_{j+1} \implies m_{j-1} = m_{j+1} = n-1. \] Applying these substitutions to the other constraint equations yields $m_i = n-1$ for all $1 \le i \le n$. Then \[ h(b) = \frac{\sum_{i=1}^n m_i}{n} = \dfrac{n(n-1)}{n} = n-1 > \dfrac{n-1}{2}, \] which is a contradiction. \noindent Case 2: If the above is $n$, then $n-2=m_{j-1}+m_{j+1}$. Adding subsequent constraint equations yields \begin{equation*} \begin{split} \left( -m_j + 2m_{j-1} - m_{j-2}\right) + \left(-m_j + 2m_{j+1} - m_{j+2} \right) &= -2m_j + 2(m_{j-1} + m_{j+1}) - (m_{j-2} + m_{j+2}) \\ &= -2(n-1) + 2(n-2) - (m_{j-2} + m_{j+2}) \\ &= -2 -(m_{j-2} + m_{j+2}) \end{split} \end{equation*} \noindent Since the above is in $n\mathbb{Z}$, it is equal to either $-2n$ or $-n$. \noindent Case 2a: If the above is equal to $-2n$, then $m_{j-2}=m_{j+2} = n-1$. Then \[ -m_{j-3} + 2m_{j-2} - m_{j-1} = -m_{j-3} + m_{j+1} \in n \mathbb{Z} \implies m_{j-3} = m_{j+1}. \] A similar argument shows $m_{j+3} = m_{j-1}$. Continuing in this way shows $m_{j \pm k} = m_{j \mp 1}$ for remaining $m_i$. Then for each of the $\frac{n-3}{2}$ pairs, $m_{j-k} + m_{j+k}=n-2$ where $k \in \{ 1, \hat{2}, 3, \cdots, \frac{n-1}{2} \}$. But then \begin{equation*} \begin{split} h(b) &= \frac{\sum_{i=1}^n m_i}{n} \\ &= \frac{n-1 + 2(n-1) + \frac{n-3}{2}(n-2)}{n} \\ &= \frac{n+1}{2}, \end{split} \end{equation*} which is a contradiction. \noindent Case 2b: If the above is equal to $-n$, then $m_{j-2}+m_{j+2}=n-2$. Adding subsequent constraint equations as above yields $n-2 -(m_{j-3}+m_{j+3})$. Since this quantity is in $n\mathbb{Z}$ and lies between $-n$ and $n-2$, it is equal to either $-n$ or $0$. \noindent Case 2b(i): If the above is equal to $-n$, then $m_{j-3}=m_{j+3}=n-1$. Following the same argument as Case 2a leads to the contradiction, $h(b) = \dfrac{n+1}{2}$.\\ \noindent Case 2b(ii): If the above is equal to $0$, then $m_{j-3} + m_{j+3} = n-2$. Continuing in this manner yields $m_{j-k}+m_{j+k} = n-2$ for all $k \in \{ 1, 2, \cdots, \frac{n-1}{2} \}$. But then \[ h(b) = \frac{n-1 + \frac{(n-1)}{2}(n-2)}{n} = \frac{n-1}{2}, \] which is a contradiction. This concludes the proof of our second major claim. The second claim implies that for $i< \lfloor n/2 \rfloor$, we have $h_i^*\leq h_{i+1}^*$. Thus, our proof is complete. \end{proof} \subsection{Structure of $h^*$-vectors} We next classify the lattice points in the fundamental parallelepiped for $T_{C_n}$ by considering the matrix $[L_B \mid \mathbbm{1}]$ over the ring $\mathbb{Z}/\kappa \mathbb{Z}$. Let \[ [\widetilde{L} \mid \mathbbm{1}] := [L_B \mid \mathbbm{1}] \bmod \kappa \, . \] Recall that for a cycle we have $n=\kappa$.
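The computations in this subsection are easy to check by brute force for small odd $n$. The following sketch (plain Python; the matrix $[L_B \mid \mathbbm{1}]$ is the one written out in the proof of Lemma~\ref{kernel} below, and the variable names are ours) enumerates the mod-$\kappa$ kernel for $n=5$, confirms that it coincides with the span of $\mathbbm{1}$ and $(0,1,\ldots,n-1)$, and tallies the heights of the corresponding lattice points, anticipating Theorems~\ref{cyclefpp} and~\ref{primes}.
\begin{verbatim}
# Sketch: brute-force check of Lemma kernel and of the height count behind
# Theorems cyclefpp and primes for the cycle C_5.  (Plain Python.)
from itertools import product
from collections import Counter

n = 5                                   # odd, so kappa = n
# [L_B | 1] for C_n, as written out in the proof of Lemma kernel
LB1 = [[2] + [1] * (n - 2) + [1]]
for i in range(1, n - 1):
    row = [0] * (n - 1)
    row[i - 1], row[i] = -1, 1
    LB1.append(row + [1])
LB1.append([-1] * (n - 2) + [-2] + [1])

def in_kernel(x):                       # x * [L_B | 1] == 0 over Z/nZ
    return all(sum(x[i] * LB1[i][j] for i in range(n)) % n == 0
               for j in range(n))

kernel = {x for x in product(range(n), repeat=n) if in_kernel(x)}
span = {tuple((a + b * j) % n for j in range(n))
        for a in range(n) for b in range(n)}
print(kernel == span, len(kernel))      # True 25, i.e. n^2 kernel elements

# heights of the corresponding lattice points of the fundamental parallelepiped
heights = Counter(sum(x) // n for x in kernel)
print([heights[i] for i in range(n)])   # [1, 1, 21, 1, 1]
\end{verbatim}
Running the same enumeration with $n=7$ returns $(1,1,1,43,1,1,1)$, again in line with Theorem~\ref{primes}.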
\begin{lemma}\label{kernel} For $C_{n}$ with odd $n$ and corresponding reduced Laplacian matrix $[L_B \mid \mathbbm{1}]$, we have \[ \ker_{\mathbb{Z}/\kappa \mathbb{Z}}{[\widetilde{L} \mid \mathbbm{1}]} = \{ x \in \left(\mathbb{Z}/\kappa \mathbb{Z}\right)^n \mid x[L_B\mid \mathbbm{1}] \equiv \mathbf{0} \mod{\kappa} \} = \langle \mathbbm{1}^n, (0, 1, \cdots, n-1) \rangle . \] \end{lemma} \begin{proof} Consider the submatrix of $[L_B\mid \mathbbm{1}]$ with the first and $n$\textsuperscript{th} rows and columns deleted. The matrix $[L_B \mid \mathbbm{1}](1,n \mid 1,n)$ is the lower triangular matrix of the following form: \[ \left[ \begin{array}{rrrrr} 1 & 0& 0 & \cdots & 0 \\ -1& 1 & 0 & & \vdots \\ 0 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -1 & 1 \end{array} \right] \] Then $\det{[L_B \mid \mathbbm{1}](1,n \mid 1,n)} = 1$ implies there are $n-2$ linearly independent columns, hence $\text{rk}_{\mathbb{Z}/\kappa \mathbb{Z}} [L_B \mid \mathbbm{1}] \ge n-2$. Since the entries in each of the first $n-1$ columns of $[L_B\mid \mathbbm{1}]$ sum to $0$ and the entries of the last column sum to $n$, we have \[\mathbbm{1} \cdot [L_B\mid \mathbbm{1}] = (0, 0, \ldots, 0, n) \equiv \bf{0} \mod \kappa,\] which implies $\mathbbm{1} \in \ker_{\mathbb{Z}/\kappa \mathbb{Z}}{[L_B\mid \mathbbm{1}]}$. Consider \[ (0, 1, \ldots, n-1) \cdot \left[\begin{array}{rrrrrrr} 2 & 1& 1 & \cdots & \cdots & 1 & 1\\ -1& 1 & 0& \cdots & \cdots & 0 & 1 \\ 0 & -1& 1 & \ddots & & \vdots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots& \vdots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & 0 & \vdots \\ 0 & \cdots & \cdots & 0 & -1 & 1 & 1 \\ -1 & -1 & -1 & \cdots & -1 & -2 &1 \end{array}\right] = \left(-n, \ldots, -n, \frac{n(n-1)}{2}\right) \equiv \bf{0} \mod \kappa. \] This shows $(0, 1, \ldots , n-1) \in \ker_{\mathbb{Z}/\kappa \mathbb{Z}}{[L_B\mid \mathbbm{1}]}$. Since these two vectors are linearly independent elements of the kernel, we have $\text{rk}_{\mathbb{Z}/\kappa \mathbb{Z}} [L_B \mid \mathbbm{1}] \leq n-2$. Thus, the kernel is two-dimensional and we have found a basis. \end{proof} \begin{theorem}\label{cyclefpp} For odd $n \ge 3$, lattice points in $\Pi_{T_{C_n}}$ are of the form \[\frac{(\alpha \mathbbm{1} + \beta (0, 1, \ldots, n-1)) \mod{\kappa}}{\kappa} \cdot [L_B \mid \mathbbm{1}] \] for all $\alpha, \beta \in \mathbb{Z} / \kappa \mathbb{Z}$. Thus, $h^*_i(T_{C_n})$ is equal to the cardinality of \[ \left\{ \frac{(\alpha \mathbbm{1} + \beta (0, 1, \ldots, n-1)) \mod \kappa}{\kappa} \cdot [L_B \mid \mathbbm{1}] \mid 0\leq \alpha, \beta\leq\kappa-1, \frac{1}{\kappa} \sum_{j=0}^{n-1} (\alpha + j \beta \mod \kappa ) = i \right\}. \] \end{theorem} \begin{proof} Since $|\Pi_{T_{C_n}} \cap \mathbb{Z}^n | = \sum_{i=0}^{n-1} h^*_i(T_{C_n}) = n \kappa = n^2$, there are $n^2$ lattice points in the fundamental parallelepiped. Similarly, there are $n^2$ possible linear combinations of $\mathbbm{1}$ and $(0,1,2,\ldots,n-1)$ in $\mathbb{Z}/\kappa \mathbb{Z}$. We show that each such linear combination yields a lattice point. Recall the sum of the coordinates down each of the first $n-1$ columns of $[L_B \mid \mathbbm{1}]$ is $0$. Since \[ (\alpha \mathbbm{1} + \beta (0, 1, \ldots, n-1)) \cdot [L_B \mid \mathbbm{1}] \equiv \mathbf{0} \mod \kappa \] by Lemma \ref{kernel}, it follows that \[(\alpha \mathbbm{1} + \beta (0, 1, \ldots, n-1) \mod \kappa) \cdot [L_B \mid \mathbbm{1}] \equiv \mathbf{0} \mod \kappa. \] Then $\dfrac{(\alpha \mathbbm{1} + \beta (0, 1, \ldots, n-1)) \mod{\kappa}}{\kappa} \cdot [L_B \mid \mathbbm{1}]$ is a lattice point.
Since we are reducing the numerators of the entries in the vector of coefficients modulo $\kappa$ prior to dividing by $\kappa$, it follows that each entry in the coefficient vector is greater than or equal to $0$ and strictly less than $1$, and hence the resulting lattice point is an element of $\Pi_{T_{C_n}}$. \end{proof} \begin{theorem}\label{primes} Consider $C_n$ where $n \ge 3$ is odd. Let $n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$ be the prime factorization of $n$ where $p_1 > p_2 > \cdots > p_k$. Then \[ h^*(T_{C_n}) = (1, \ldots, 1, h^*_m, h^*_{m+1}, \ldots, h^*_{\frac{n-1}{2}}, \ldots , h^*_{n-m-1}, h^*_{n-m}, 1, \ldots, 1) \] where $m=\frac{1}{2}(n - p_1^{a_1} \cdots p_k^{a_k-1})$ and $h_m > 1$. Further, if $\mathbb{Z}_n^*$ denotes the group of units of $\mathbb{Z}_n$, we have that $h^*_{(n-1)/2}\geq n\cdot |\mathbb{Z}_n^*|+1$. In particular, if $n$ is prime, we have \[ h^*(T_{C_n}) = (1, \ldots, 1, n^2 - n + 1 , 1, \ldots, 1) \] \end{theorem} \begin{proof} Keeping in mind that $n=\kappa$ for $C_n$, denote the height of the lattice point \[ \frac{(\alpha \mathbbm{1} + \beta (0, 1, \ldots, n-1)) \bmod n}{n}\cdot [L_B \mid \mathbbm{1}] \] in the fundamental parallelepiped by \[ h(\alpha,\beta):=\frac{1}{n} \sum_{j=0}^{n-1} ((\alpha + j \beta) \bmod n )\, . \] Each $\alpha \in \mathbb{Z} / n \mathbb{Z}$ paired with $\beta = 0$ produces a lattice point at a unique height in $\Pi_{T_{C_n}}$, and thus each $h_i^*\geq 1$. Let $\mathbb{Z}_n^*$ denote the group of units of $\mathbb{Z}_n$. If $\beta\in\mathbb{Z}_n^*$, then $\beta (0, 1, \ldots, n-1)\bmod n$ yields a vector that is a permutation of $(0, 1, \ldots, n-1)$, and thus for any $\alpha$ we have the height of the resulting lattice point is $(n-1)/2$, proving that $h^*_{(n-1)/2}\geq n\cdot |\mathbb{Z}_n^*|+1$. Thus, when $n$ is an odd prime, it follows that \[ h^*(T_{C_n}) = (1, \ldots, 1, n^2 - n + 1 , 1, \ldots, 1)\, . \] Now, suppose that $\gcd(\beta,n)=\prod p_i^{b_i}\neq 1$. Then the order of $\beta$ in $\mathbb{Z}_n$ is $\prod p_i^{a_i-b_i}$, and (after some reductions in summands modulo $n$) \[ h(\alpha,\beta)=\frac{1}{n}\cdot \prod p_i^{b_i}\cdot \left(\sum_{j=0}^{\prod p_i^{a_i-b_i}-1}\left((\alpha+j\prod p_i^{b_i})\bmod n\right) \right) \, . \] Thus, we see that for a fixed $\beta$, the height is minimized (not uniquely) when $\alpha=0$. In this case, we have \begin{align*} h(0,\beta) & =\frac{1}{n}\cdot \prod p_i^{b_i}\cdot \left(\sum_{j=0}^{\prod p_i^{a_i-b_i}-1}\left(j\prod p_i^{b_i}\bmod n\right) \right) \\ & = \frac{1}{n}\cdot \prod p_i^{b_i}\cdot \prod p_i^{b_i}\cdot\left(\sum_{j=0}^{\prod p_i^{a_i-b_i}-1}j \right) \\ & = \frac{n-\prod p_i^{b_i}}{2} \, . \end{align*} This value is minimized when $\prod p_i^{b_i}=p_1^{a_1} \cdots p_k^{a_k-1}$, and this height is attained more than once by setting $\beta=p_1^{a_1} \cdots p_k^{a_k-1}$ and $\alpha=0,1,2,\ldots,p_1^{a_1} \cdots p_k^{a_k-1}-1$. \end{proof} \begin{corollary} \label{cor:oddcyclenotidp} $T_{C_n}$ is not IDP for odd $n \ge 3$. \end{corollary} \begin{proof} Theorem~\ref{primes} yields $h^*_1(T_{C_n}) = 1$ for odd $n \ge 3$. It is known \cite[Corollary 3.16]{BeckRobinsCCD} that for an integral convex $d$-polytope $\mathcal P$, $h^*_1(\mathcal P) = |\mathcal{P}\cap \mathbb{Z}^n| - (d+1)$. In this case, \[ |T_{C_n}\cap \mathbb{Z}^n| = h^*_1(T_{C_n}) + (n-1) + 1 = n+1 \] is the number of lattice points in $T_{C_n}$. In particular, the lattice points consist of the $n$ vertices of $T_{C_n}$ and the origin. 
Then $\Pi_{T_{C_n}} \cap \{x\mid x_{n}=1\} \cap \mathbb{Z}^n = (0,0, \ldots, 0, 1)$. If $T_{C_n}$ is IDP, then every lattice point in $\Pi_{T_{C_n}}$ is of the form $(0, \ldots, 0, 1) + \cdots + (0, \ldots, 0, 1),$ which is not true by Proposition \ref{cyclefpp}. The result follows. \end{proof} \section{Complete Graphs} The simplex $T_{K_n}$ is a generalized permutohedron, where a \emph{permutohedron} $P_n(x_1, \ldots ,x_n)$ for $x_i \in \mathbb{R}$ is the convex hull of the $n!$ points obtained from $(x_1, \ldots , x_n)$ by permutations of the coordinates. For $K_n$, the Laplacian matrix has diagonal entries equal to $n-1$ and all other entries equal to $-1$. Then $\conv{L(n)^T} = P_n(n-1, -1, \ldots, -1) \cong P_n(n, 1, \ldots , 1)$. Many properties of generalized permutahedra are known \cite{Postnikov}. While some of the findings in this section follow from these general results, for the sake of completeness we will prove all results in this section from first principles. \subsection{Reflexivity, Triangulations, and $h^*$-Unimodality} \begin{theorem}\label{complete} The simplices $T_{K_n}$ are reflexive for $n \ge 1$. \end{theorem} \begin{proof} Observe $L_B$ is an $n \times (n-1)$ integer matrix of the form \[ L_B= \begin{bmatrix} (n-1) & (n-2) & (n-3) & \cdots & \cdots & 1 \\ -1 & (n-2) & (n-3) & \cdots & \cdots & 1 \\ -1 & -2 & (n-3) & \cdots & \cdots & \vdots\\ -1 & -2 & -3 & (n-4) & \cdots & \vdots \\ \vdots & \vdots & \vdots & -4 & \ddots & \vdots \\ \vdots & \vdots & \vdots & \vdots & & 1 \\ -1 & -2 & -3 & \cdots & \cdots & -(n-1) \end{bmatrix} \] To prove $T_{K_n}$ is reflexive, we show $T_{K_n} = \{ x \in \mathbb{R}^{n-1} \mid A x \le \mathbbm{1} \}$ for some $A \in \mathbb{Z}^{n \times (n-1)}$. We claim that $A$ has the following form: \[ A = \left[ \begin{array}{rrrrr} -1 & 0 & 0 & \cdots & 0 \\ 1 & -1 & 0 & & \vdots \\ 0 & 1 & -1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & 1 & -1 \\ 0 & \cdots & \cdots & 0 & 1 \end{array} \right] \in \{0, \pm 1\}^{n \times (n-1)}. \] Let $r_i$ be the $i$\textsuperscript{th} row of $L_B$. Observe that $A(i \mid \emptyset) r_i = \mathbbm{1}$ for each $1 \le i \le n$. Then $\{r_i\}_{i=1}^n$ is a set of intersection points of defining hyperplanes of $T_{K_n}$ taken $(n-1)$ at a time. Notice $\text{rk }{A} = n-1$, and further, each matrix $A(i \mid \emptyset)$ has full rank. This implies $\{r_i \}_{i=1}^n$ is the set of unique intersection points. Thus $\{x \mid Ax \le \mathbbm{1} \}=\conv{r_1, r_2, \cdots, r_n} = T_{K_n}$ shows that $T_{K_n}$ is reflexive. \end{proof} \begin{prop}\label{triangulation} The simplex $T_{K_n}$ has a regular unimodular triangulation. \end{prop} \begin{proof} Since the matrix of the facet normals is a signed vertex-edge incidence matrix for a path, it is totally unimodular. Thus, it follows from \cite[Theorem 2.4]{regulartriangulations} that $T_{K_n}$ has a regular unimodular triangulation. \end{proof} \begin{corollary} \label{cor:completeidp} The simplex $T_{K_n}$ is IDP. \end{corollary} \begin{proof} If $T_{K_n}$ admits a unimodular triangulation, it follows that $T_{K_n}$ is IDP because $\text{cone}(T_{K_n})$ is a union of unimodular cones with lattice-point generators of degree $1$. \end{proof} Theorem~\ref{complete} implies that $h^*(T_{K_n})$ is symmetric. The following theorem implies that if $\mathcal P$ is reflexive and admits a regular unimodular triangulation, then $h_{\mathcal P}^*$ is unimodal. 
\begin{theorem}[Athanasiadis \cite{athanasiadisstable}]\label{unimodal2} Let $\mathcal P$ be a $d$-dimensional lattice polytope with $h^*_{\mathcal P} = (h_0^*, h_1^*, \ldots , h_d^*)$. If $\mathcal P$ admits a regular unimodular triangulation, then $h_i^* \ge h_{d-i+1}^*$ for $1 \le i \le \lfloor (d+1)/2 \rfloor$, \[ h_{\lfloor (d+1)/2 \rfloor}^* \ge \cdots \ge h_{d-1}^* \ge h_d^* \] and \[ h_i^* \le \binom{h_1^* + i -1}{i} \] for $0 \le i \le d$. \end{theorem} \begin{corollary}\label{cor:completeunimodal} For each $n \ge 2$, $h^*(T_{K_n})$ is unimodal. \end{corollary} \subsection{$h^*(T_{K_n})$ and Weak Compositions} The following is a classification of all lattice points in $\cone{T_{K_n}}$. \begin{theorem}\label{K_n cone} The lattice points at height $h$ in $\cone{T_{K_n}}$ are in bijection with weak compositions of $h\cdot n$ of length $n$, where the height of the lattice point in the cone is given by the last coordinate of the lattice point. \end{theorem} \begin{proof} Recall the $t$\textsuperscript{th} dilate of the polytope $T_{K_n} \subset \mathbb{R}^n$ is given by \[ \cone{T_{K_n}} \cap \{z\mid z_n = t\} = \{ \mathbf{\lambda} \cdot [L_B\mid \mathbbm{1}] \mid \lambda \in \mathbb{R}_{\ge 0}^n, \sum_{i=1}^n \lambda_i = t \},\] since the last coordinate of the lattice point is given by $\sum_{i=1}^n \lambda_i$. Notice each lattice point in $\cone{T_{K_n}}$ corresponds uniquely to a lattice point in $tT_{K_n}$ where $t$ is the last coordinate of the point. Then the lattice points of $tT_{K_n}$ are all $x = \lambda \cdot [L_B \mid \mathbbm{1}] \in \mathbb{Z}^n$ where $0 \le \lambda_i = \dfrac{b_i}{\kappa n}$ for $b_i \in \mathbb{Z}_{\ge 0}$ and $\sum_{i=1}^n \lambda_i = t.$ Define the map \begin{equation*} \begin{split} \Phi: \{ \text{length $n$ weak compositions of $tn$} \} &\to \{ \text{lattice points of $tT_{K_n}$} \} \\ c &\mapsto \dfrac{1}{n} c \cdot [L_B \mid \mathbbm{1}] \\ \end{split} \end{equation*} To show $\Phi(c)$ is a lattice point, consider \[\Phi(c) = \dfrac{1}{n}[c_1, c_2, \cdots, c_n] \cdot \begin{bmatrix} (n-1) & (n-2) & (n-3) & \cdots & 1 & 1 \\ -1 & (n-2) & (n-3) & \cdots & 1 & 1\\ -1 & -2 & (n-3) & \cdots & 1 & 1\\ -1 & -2 & -3 & \cdots & 1 & 1\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ -1 & -2 & -3 & \cdots & -(n-1) & 1 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ \vdots \\ x_n \end{bmatrix} \, . \] Since $c$ is a weak composition of $tn$, $0 \le \dfrac{c_i}{n} \le t$ for all $i$ and $\frac{1}{n} \sum_{i=1}^n c_i = t$. Multiplying the above expression yields $x_i = \left(\sum_{j=1}^i c_j \right) -it$ for all $1 \le i \le n-1$ and $x_n = t$. This implies $x \in \mathbb{Z}^n$, which shows $x$ is a lattice point in $tT_{K_n}$. To show $\Phi$ is a bijection, we consider the inverse \begin{equation*} \begin{split} \Phi^{-1}: \{ \text{lattice points of $tT_{K_n}$} \} &\to \{ \text{length $n$ weak compositions of $tn$} \} \\ x &\mapsto n x \cdot [L_B \mid \mathbbm{1}]^{-1} \\ \end{split} \end{equation*} It can be shown that \[ [L_B\mid \mathbbm{1}]^{-1} = \dfrac{1}{n} \begin{bmatrix} 1 & -1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & -1 & \ddots & & \vdots \\ \vdots & 0 & 1 & -1 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & 1 & -1 \\ 1 & \cdots & \cdots & \cdots & 1 & 1 \end{bmatrix} \, . 
\] Thus \begin{equation*} \begin{split} c &= n x \cdot [L_B \mid \mathbbm{1}]^{-1} \\ &= (x_1 + x_n, -x_1 + x_2 + x_n, -x_2 + x_3 + x_n, \ldots , -x_{n-2} + x_{n-1} + x_n, -x_{n-1} + x_n) \\ &= (x_1 + t , -x_1 + x_2 + t, -x_2 + x_3 + t, \ldots, -x_{n-2} + x_{n-1} + t, -x_{n-1} + t ) \, . \end{split} \end{equation*} It remains to show that $c$ is a weak composition of $tn$. First note that $\sum_{i =1}^n c_i = \sum_{i =1}^n t = tn$. Next we show each $c_i \ge 0$. This is equivalent to $x_1 \ge -t, -x_{n-1} \ge -t,$ and $-x_i + x_{i+1} \ge -t$ for all $1 \le i \le n-2$. Recall from the hyperplane description of $tT_{K_n}$ that $x$ is a lattice point if it satisfies \[ \begin{bmatrix} -1 & 0 & \cdots & \cdots & 0 \\ 1 & -1 & \ddots & & \vdots \\ 0 & 1 & -1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & 1 & -1 \\ 0 & \cdots & \cdots & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_{n-1} \end{bmatrix} \le \begin{bmatrix} t \\ t \\ t \\ \vdots \\ \vdots \\ t \end{bmatrix} \] These inequalities show $c$ is a weak composition of $tn$ of length $n$. Note $\Phi \circ \Phi^{-1} (x) = x$ and $\Phi^{-1} \circ \Phi(c) = c$. Thus $\Phi$ is a bijection. \end{proof} \begin{corollary} The Ehrhart polynomial of $T_{K_n}$ is $L_{T_{K_n}}(t) = \binom{tn+n-1}{n-1}.$ \end{corollary} \begin{proof} The number of weak compositions of $tn$ of length $n$ is $\binom{tn+n-1}{n-1}$. Then the result follows directly from Theorem~\ref{K_n cone}. \end{proof} We next restrict $\Phi$ to obtain a classification of the lattice points in the fundamental parallelepiped, $\Pi_{T_{K_n}}$. \begin{corollary}\label{bijection} The lattice points of $\Pi_{T_{K_n}}$ at height $h$ are in bijection with weak compositions of $hn$ of length $n$ with each part of size strictly less than $n$. \end{corollary} \begin{proof} Every $x \in \Pi_{T_{K_n}} \cap \mathbb{Z}^n $ is of the form $x = \dfrac{1}{\kappa n} b \cdot [L_B \mid \mathbbm{1}]$ such that $0 \le \dfrac{b_i}{\kappa n} < 1$ for each $i \in [n]$, i.e., $0 \le \dfrac{b_i}{\kappa} < n$. Each coordinate of the lattice point has the form $x_i = \left( \sum_{j=1}^i \dfrac{b_j}{\kappa} \right) - ih$, which is an integer. It follows by induction on $j$ that $\kappa$ divides $b_j$ for each $1 \le j \le n$. Then it follows from $\dfrac{1}{\kappa} \sum_{i=1}^n b_i= hn$ that $\left(\dfrac{1}{\kappa} b\right)$ is a weak composition of $hn$ of length $n$ with parts no greater than $n-1$. Conversely, with each $c \in \{\text{length $n$ weak compositions of $hn$ with parts of size less than $n$} \}$, associate $\kappa c = b$. This $b$ will generate a lattice point in the fundamental parallelepiped. The result follows. \end{proof} \begin{prop} \label{prop:completeh*} For each $n \ge 2$, the $h^*$-vector of $T_{K_n}$ is given by \[ h^*(T_{K_n}) = (1, m_1, \ldots, m_{n-1}) \] where $m_i$ is the number of weak compositions of $in$ of length $n$ with parts of size less than $n$. \end{prop} \begin{proof} From Lemma~\ref{lem:fpp}, $h_i^*$ enumerates $|\{\Pi_{T_{K_n}} \cap \{ x_n = i\} \cap \mathbb{Z}^n \}|$. By Corollary \ref{bijection}, the result follows. \end{proof} \bibliographystyle{plain}
\section{surface passivation} In the wurtzite structure, each atom has 4 nearest neighbor (nn) dissimilar atoms and hence 4 bonds, each hosting 2 electrons. Using boron nitride (BN) as an example, by the ECM, each B atom (with three valence electrons) contributes 3/4 = 0.75 electrons to each bond, whereas each N atom (with five valence electrons) contributes 5/4 = 1.25 electrons to each bond. On the (0001) surfaces, one of the four bonds is cut, leaving one dangling bond (DB) on each surface atom. Here, each surface B atom would like to give away its 0.75 electrons, leaving it with an empty DB, while each surface N atom would like to take 0.75 electrons to result in a doubly-occupied DB (i.e., a lone pair). One can achieve this by having a $2 \times 2$ surface reconstruction with one cation vacancy per $2 \times 2$ cell for the (0001) surface, or one anion vacancy per $2 \times 2$ cell for the $(000\overline{1})$ surface, as shown in Fig. S1(b), in which case charge transfer happens locally at each individual surface. Alternatively, one can add a pseudo hydrogen (pseudo-H) atom with 1.25 electrons to each surface B atom, or a pseudo-H atom with 0.75 electrons to each surface N atom, to saturate their DBs, as shown in Fig. S1(a). Passivation is a local perturbation, as adding a charge-neutral object to the surface, such as the pseudo-H atom, does not lead to a long-distance charge transfer across the slab. Hence, the resulting polarization, and the integrated polarization charge, should not depend on the choice of the passivation. Indeed, both passivation schemes, as well as a mixing of the two, yield identical results in our calculation. \begin{figure}[tbp] \includegraphics[width=0.9\columnwidth]{SI.png} \caption{\label{fig:S1} Comparison of two passivation schemes for boron nitride (BN). (a) Pseudo hydrogen (associated with a nuclear and electronic charge of 5/4 or 3/4) is used to satisfy electron counting and saturate surface dangling bonds, leaving each surface semiconducting. (b) A more physical approach to surface passivation in which 1/4 of the surface atoms (either B or N) are replaced with vacancies, leading to semiconducting surfaces which self-compensate, i.e., containing equal numbers of empty B dangling bonds and filled N lone pairs. } \end{figure} \end{document}
\section{Introduction}\label{Intro} The central limit theorem (CLT) for partial sums $S_n=\sum_{j=1}^{n}X_j$ of stationary real-valued random variables $\{X_j\}$, exhibiting some type of ``weak dependence", is one of the main topics in probability theory, stating that $(S_n-{\mathbb E}[S_n])/\sqrt{Var(S_n)}$ converges in distributions towards a standard normal random variable. The almost sure invariance principle (ASIP) is a stronger result stating that there is a coupling between $\{S_n\}$ and a standard Brownian motion $(W_t)_{t\ge 0}$ such that \[ \left|S_n-{\mathbb E}[S_n]-W_{V_n}\right|=o(V_n^\frac12),\,\,\text{almost surely}. \] Both the CLT and the ASIP have corresponding versions for vector-valued sequences. The ASIP yields, for instance, the functional central limit theorem and the law of iterated logarithm (see \cite{PS}). While such results are well established for stationary sequences (see, for instance, \cite{PS}, \cite{BP}, \cite{Shao}, \cite{Rio}, \cite{PelASIP} and \cite{GO} and references therein), in the non-stationary case much less is known, especially when the variance (or the covariance matrix) of $S_n$ grows sub-linearly in $n$. For instance, in \cite{WZ} a vector-valued ASIP was obtained under conditions guaranteeing that the covariance matrix grows linearly fast. Similar results were obtained for random dynamical systems in \cite{DFGTV1} and \cite{DH}, and the ASIP for elliptic Markov chains in random dynamical environment can be obtained similarly. For these models the variance (or the covariance matrix) of the underlying partial sums $S_n$ grows linearly fast in $n$, while in \cite{Hyd} a real-valued ASIP was obtained for time-dependent dynamical systems under the assumption that $\text{Var}(S_n)$ grows faster than $n^{\frac12}$. In this short paper we prove the ASIP for non-stationary, uniformly bounded, real or vector valued exponentially fast ${\alpha}$-mixing\footnote{namely, strong mixing.} sequences of random variables.\footnote{We will also assume that $\liminf_{n\to\infty}\phi(n)<\frac12$, were $\phi(\cdot)$ are the, so-called, $\phi$-mixing coefficients.} Under a certain assumption which always holds true for real-valued sequences we obtain the ASIP rates $o(V_n^{\frac14+{\delta}})$ for any ${\delta}>0$, where $V_n=\text{Var}(S_n)$ or $V_n=\inf_{|u|=1}\text{Cov}(S_n)u\cdot u$. Then, in the vector-valued case, we will show that this assumption holds true for several classes of contracting Markov chains. The results rely on a recent modification of \cite[Theorem 1.3]{GO}, together with a block-partition argument, which in some sense reduces the problem to the case when the variance or the covariance matrix of $S_n$ grow linearly fast in $n$. \section{Preliminaries and main results} Let $X_1,X_2,...$ be a sequence of zero-mean uniformly bounded $d$-dimensional random vectors defined on a probability space $({\Omega},{\mathcal F},{\mathbb P})$. Let ${\mathcal F}_j$ denote the ${\sigma}$-algebra generated by $X_1,...,X_j$ and let ${\mathcal F}_{j,\infty}$ denote the ${\sigma}$-algebra generated by $X_k$ for $k\geq j$. 
We recall that the ${\alpha}$ and $\phi$ mixing coefficients of the sequence are given by \begin{equation}\label{al def} {\alpha}(k)=\sup\left\{\left|{\mathbb P}(A\cap B)-{\mathbb P}(A){\mathbb P}(B)\right|: A\in{\mathcal F}_j,\, B\in{\mathcal F}_{j+k,\infty},\, j\in{\mathbb N}\right\} \end{equation} and \begin{equation}\label{phi def} \phi(k)=\sup\left\{\left|{\mathbb P}(B|A)-{\mathbb P}(B)\right|: A\in{\mathcal F}_j,\, B\in{\mathcal F}_{j+k,\infty},\, j\in{\mathbb N},\,\,{\mathbb P}(A)>0\right\}. \end{equation} Then both ${\alpha}(\cdot)$ and $\phi(\cdot)$ measure the ``amount of dependence'' of the sequence $\{X_j\}$, in the sense that the $X_j$'s are independent if and only if both ${\alpha}$ and $\phi$ are identically $0$. We also note that ${\alpha}(k)\leq\phi(k)$ (which can be seen from their definitions). We will assume here that there are $C>0$, ${\delta}\in(0,1)$ and $n_0\in{\mathbb N}$ so that \begin{equation}\label{al mix} {\alpha}(n)\leq C{\delta}^n \end{equation} and \begin{equation}\label{phi half} \phi(n_0)<\frac12. \end{equation} These are the mixing (weak-dependence) assumptions discussed in Section \ref{Intro}. For each $n\in\mathbb N$ set \[ S_n=\sum_{k=1}^n\left(X_k-\mathbb E[X_k]\right) \] and put $V_n=\text{Cov}(S_n)$ (which is a $d\times d$ matrix). Let us also set $$ S_{n,m}=\sum_{j=n}^{m}X_j,\,\,V_{n,m}=\text{Cov}(S_{n,m}),\,s_n=\min_{|u|=1}(V_n u\cdot u). $$ Then in the scalar case $d=1$ we have $s_n=V_n$. We consider here the following condition. \begin{assumption}\label{AssVV} There are constants $C_1,C_2\geq1$ so that for any $n$ and $m$ with $\|S_{n,m}\|_{L^2}\geq C_1$ the ratio between the largest and smallest eigenvalues of the covariance matrix of $S_{n,m}$ does not exceed $C_2$, namely $$ \max_{|u|=1}(V_{n,m} u\cdot u)\leq C_2\min_{|u|=1}(V_{n,m} u\cdot u). $$ \end{assumption} This assumption trivially holds true for real-valued sequences, and in Section \ref{SecVV} we will show that it holds for certain classes of additive vector-valued functionals $X_j=f_j(\xi_j)$ of inhomogeneous ``sufficiently contracting'' Markov chains $\{\xi_j\}$. Note also that $$ V_{n,m} u\cdot u=\text{Var}(S_{n,m}\cdot u) $$ and so Assumption \ref{AssVV} gives us a certain type of uniform control of these variances\footnote{However, $s_n$ can still have an arbitrarily slow growth rate.}. Our main result here is the following: \begin{theorem}\label{Main Thm} Let Assumption \ref{AssVV} hold. Suppose that (\ref{al mix}) and \eqref{phi half} hold true and that $\lim_{n\to\infty}s_n=\infty$. Then for any ${\delta}>0$ there exists a coupling between $\{X_k\}$ and a sequence of independent zero-mean Gaussian random vectors $Z_1,Z_2,\ldots$ such that, almost surely, \begin{equation}\label{Rate} \left|S_n-\sum_{j=1}^{n}Z_j\right|=o(s_n^{1/4+{\delta}}). \end{equation} Moreover, there is a constant $C=C_\delta>0$ so that for all $n\geq1$ and every unit vector $u\in\mathbb R^d$, \begin{equation}\label{Var est1} \left\|S_n\cdot u\right\|_{L^2}^2-Cs_n^{1/2+\delta}\leq \left\|\sum_{j=1}^n Z_j\cdot u\right\|_{L^2}^2\leq \left\|S_n\cdot u \right\|_{L^2}^2+Cs_n^{1/2+\delta}. \end{equation} \end{theorem} \begin{remark} \, \vskip0.2cm \textit{(i)} In the scalar case $d=1$, \eqref{Var est1} yields that the difference between the variances is $O(V_n^{\frac 12+\delta})$.
Thus, using \eqref{Var est1} together with~\cite[Theorem 3.2 A]{HR}, we conclude that in the scalar case, for any $\delta>0$ there is a coupling of $\{X_n\}$ with a standard Brownian motion $\{W(t):\,t\geq0\}$ such that \begin{equation}\label{ZZZ} \left|\sum_{j=1}^{n}X_j-W(V_n)\right|=o(V_n^{\frac14+\delta}),\quad\text{a.s.} \end{equation} A corresponding result in the vector-valued case does not seem to hold true because in the non-stationary setup the structure of the covariance matrix $V_n$ does not stabilize as $n\to\infty$, which makes it less likely to obtain an approximation by a single Gaussian process like a standard $d$-dimensional Brownian motion. \vskip0.3cm \textit{(ii)} For stationary sequences $\{X_n\}$, in \cite[Theorem 1.4]{Shao} Shao showed that if $\phi(n)\ll \ln^{-r} n$ and ${\mathbb E}[|X_n|^{2+{\delta}}]<\infty$ for some ${\delta}>0$ and $r>(2+{\delta})/(2+2{\delta})$, then there exists a coupling with a standard Brownian motion so that the left hand side of (\ref{Rate}) is of order $o(V_n^{1/2}\ln^{-{\theta}} V_n)$ for any $0<{\theta}<(r(1+{\delta}))/(2(2+2{\delta}))-\frac14$. In comparison with \cite{Shao}, we get the ASIP in the non-stationary case, but only for uniformly bounded exponentially fast ${\alpha}$-mixing sequences such that $\liminf_{n}\phi(n)<\frac12$. \vskip0.3cm \textit{(iii)} We would like to stress that even in the scalar case $d=1$ no growth rate on the variance (such as $V_n\geq n^{\varepsilon}$) is required in Theorem \ref{Main Thm}. This is in contrast, for instance, with \cite{Hyd} where the authors assume that $V_n\geq n^{\frac12+{\delta}}$, and \cite{GO} and \cite{WZ} where the authors assumed linear growth. Note that in the latter papers vector-valued variables were considered. \vskip0.3cm \textit{(iv)} Many papers about the ASIP rely on martingale approximation (e.g. \cite{Hyd} and \cite{WZ}). However, to the best of our knowledge the best rate in the vector-valued case that can be achieved using martingales (in the stationary case) is $o(n^{1/3}(\log n)^{1+{\varepsilon}})=o\big(|V_n|^{1/3}(\log|V_n|)^{1+{\varepsilon}}\big)$ (see \cite{CDF}), and so an attempt to use existing results for martingales seems to yield a weaker rate than the ones obtained in the above theorem. \end{remark} \section{Proof of Theorem \ref{Main Thm}.}\label{Sec3} \subsection{Linearizing the covariance matrix} The first step in the proof is to make a certain reduction to the case when $s_n=\min_{|u|=1}(V_n u\cdot u)$ grows linearly fast in $n$. \begin{proposition}\label{VarPrp} There are constants $A_1,A_2>0$ and disjoint sets $I_j=\{a_j,a_j+1,...,b_j\}$ whose union covers ${\mathbb N}$ so that $b_{j-1}<a_j$ for all $j>2$ and for each $j$ and every unit vector $u$ we have \begin{equation}\label{A} A_1\leq \left\|\sum_{k\in I_j}X_k\cdot u \right\|_2\leq \max_{m\in I_j}\left\|\sum_{k=a_j}^{m}X_k\cdot u \right\|_2\leq A_2. \end{equation} Moreover, let $k_n=\max\{k: b_k\leq n\}$ and set $\Theta_j=\sum_{k\in I_j}X_k$. (i) There is a constant $c>0$ so that \begin{equation}\label{Block Cntrl} \max_{j}\max_{a\in I_j}\left\|\sum_{k=a}^{b_j}X_k\right\|_{L^2}\leq c. \end{equation} (ii) There are constants $R_1,R_2>0$ such that for any sufficiently large $n$ and all unit vectors $u$ we have \begin{equation}\label{k n vn.1} R_1k_n\leq \text{Var}(S_n\cdot u)=\text{Cov}(S_n)u\cdot u\leq R_2k_n.
\end{equation} (iii) For every ${\varepsilon}>0$ we have \begin{equation}\label{Last} \left|S_n-\sum_{j=1}^{k_n}\Theta_j\right|=o(s_n^{{\varepsilon}}),\,\,{\mathbb P}-\text{a.s.} \end{equation} \end{proposition} We will break the proof of Proposition \ref{VarPrp} into several steps. Let us fix some unit vector $u_0$, and let $\xi_j=X_j\cdot u_0$. For every finite $M\subset{\mathbb N}$ set \[ S(M)=\sum_{j\in M}X_j\cdot u_0. \] The first result we need is the following: \begin{lemma}\label{L1} For every $p>2$ there is a constant $C_p>0$ so that if $M_1$ and $M_2$ are two finite subsets of ${\mathbb N}$ such that $\min M_2-\max M_1\geq r$ then \begin{equation}\label{p cov} \left|\text{Cov}(S(M_1),S(M_2))\right|\leq C_p(1+\|S(M_1)\|_{L^2})(1+\|S(M_2)\|_{L^2})\left({\alpha}(r)\right)^{1-2/p}. \end{equation} \end{lemma} \begin{proof} By applying \cite[Corollary A.2]{Hall} we get that \begin{equation}\label{p cov0} \left|\text{Cov}(S(M_1),S(M_2))\right|\leq 8\|S(M_1)\|_{L^p}\|S(M_2)\|_{L^p}\left({\alpha}(r)\right)^{1-2/p}. \end{equation} On the other hand, since \eqref{phi half} holds, by applying \cite[Theorem 6.17]{PelBook} and taking into account that $X_j$ are uniformly bounded we get that $$ \|S(M_i)\|_{L^p}\leq A_p(1+\|S(M_i)\|_{L^2}) $$ where $A_p>0$ is a constant that depends only on $p$ (and $n_0$ from \eqref{phi half}). Combining the two displays yields \eqref{p cov} with $C_p=8A_p^2$. \end{proof} We need next the following result. \begin{lemma}\label{Lemma 2} Let us fix some $p>2$. Let $r$ be such that $\sum_{m=1}^{\infty}\left({\alpha}(rm)\right)^{1-2/p}<\frac{1}{32 C_p}$, where $C_p$ is the constant from Lemma \ref{L1}. Then for any integer $k\geq1$ and $r$-separated ``blocks'' $M_1<M_2<\cdots<M_k$ (i.e. $m_i\leq m_j-r$ for any $i<j$ and $m_i\in M_i$, $m_j\in M_j$) so that $\|S(M_i)\|_{L^2}\geq 1$ we have \[ \frac12\sum_{i=1}^k\text{Var}(S(M_i))\leq \text{Var}(S(M_1\cup M_2\cup\cdots\cup M_k))\leq \frac{3}2\sum_{i=1}^k\text{Var}(S(M_i)). \] \end{lemma} \begin{proof} Let $k\in{\mathbb N}$ and $M_1<M_2<...<M_k$ be $r$-separated sets. Then $$ \text{Var}(S(M_1\cup M_2\cup\cdots\cup M_k))=\sum_{i=1}^k\|S(M_i)\|_{L^2}^2+2\sum_{1\leq i<j\leq k}\text{Cov}(S(M_i),S(M_j)). $$ Next, set $\gamma(k)=\big({\alpha}(k)\big)^{1-2/p}$. Then by \eqref{p cov}, and taking into account that $\|S(M_i)\|_2\geq1$, \begin{equation}\label{SimTo} 2\sum_{1\leq i<j\leq k}|\text{Cov}(S(M_i),S(M_j))|\leq 16C_p\sum_{1\leq i<j\leq k}\gamma(r(j-i))\|S(M_i)\|_{L^2} \|S(M_j)\|_{L^2} \end{equation} $$\leq 8C_p\sum_{1\leq i<j\leq k}\gamma(r(j-i))(\|S(M_i)\|_{L^2}^2+\|S(M_j)\|_{L^2}^2) = 8C_p\sum_{j=1}^{k}\|S(M_j)\|_{L^2}^2\sum_{i=1}^{j-1}\gamma(r(j-i))+$$ $$8C_p\sum_{i=1}^{k-1}\|S(M_i)\|_{L^2}^2\sum_{j=i+1}^{k}\gamma(r(j-i))\leq \left(16C_p\sum_{m\geq 1}\gamma(rm)\right)\sum_{j=1}^{k}\|S(M_j)\|_{L^2}^2. $$ The proof is completed using that $16C_p\sum_{m\geq 1}\gamma(rm)<\frac12$. \end{proof} The next step is the following result. \begin{lemma}\label{Cor1} Fix some $p>2$. Let $M_1<M_2<M_3<...<M_k$ be finite $r$-separated blocks, where $r$ comes from Lemma \ref{Lemma 2}. For each $j$ let $I_j=M_j\cup\left(\max M_j+\{1,\dots,r\}\right)$. Suppose that $\text{Var}(S(M_j))\in[A,2A]$ for each $j$, for some $A>1$. For every $k$ set $M^{(k)}=M_1\cup M_2\cup\cdots\cup M_k$ and $I^{(k)}=I_{1}\cup I_{2}\cup\cdots\cup I_{k}$.
Then, \begin{equation}\label{CorEq} \left|\frac{\text{Var}(S(M^{(k)}))}{\text{Var}(S(I^{(k)}))}-1\right|\leq \frac{2Q(A)}{A} \end{equation} where\footnote{Note that $Q(A)/A\to 0$ as $A\to\infty$.} with $L=\sup_j\|X_j\|_\infty$ and $Q_0=2C_p(1+rL)(1+L)\sum_{m\geq1}\left({\alpha}(m)\right)^{2-2/p}$ we have, $$ Q(A)=Q(A,r,p)=Q_0+2\sqrt{3AQ_0} $$ \end{lemma} \begin{proof} Let $X=S(M^{(k)})$ and $Y=S(I^{(k)})-X$. Then $$ \text{Var}(X+Y)=\text{Var}(X)+\text{Var}(Y)+2\text{Cov}(X,Y) $$ and so \begin{equation}\label{One} \left|\text{Var}(X+Y)-\text{Var}(X)\right|\leq \text{Var}(Y)+2\left(\text{Var}(X)\text{Var}(Y)\right)^{1/2}. \end{equation} Now, by Lemma \ref{Lemma 2}, \begin{equation}\label{Two} \frac{Ak}{2}\leq \frac12\sum_{j=1}^k\text{Var}(S(M_j))\leq \text{Var}(X)\leq \frac 32\sum_{j=1}^k\text{Var}(S(M_j))\leq 3Ak. \end{equation} On the other hand, let $D_j=I_j\setminus M_j$. Then $$ \text{Var}(Y)\leq\sum_{j=1}^k|\text{Cov}(D_j,Y)|. $$ Write $D_i=\{d_i+1,...,d_i+r\}$. Then $$ |\text{Cov}(D_j,Y)|\leq\sum_{s\leq d_j}|\text{Cov}(D_j,X_s)|+\sum_{s>d_j+r}|\text{Cov}(D_j,X_s)|. $$ Since $\|D_j\|_p\leq rL$ and $\|X_s\|_p\leq L$ for every $p>1$, using \eqref{p cov} to bound the two sums on the above right hand side we derive that $$ |\text{Cov}(D_j,Y)|\leq 2C_p(1+rL)(1+L)\sum_{m\geq1}\left({\alpha}(m)\right)^{2-2/p}=Q_0. $$ Thus, $$ \text{Var}(Y)\leq Q_0k. $$ Using \eqref{One} and \eqref{Two} we conclude that $$ \left|\text{Var}(X+Y)-\text{Var}(X)\right|\leq \left(Q_0+2\sqrt{3AQ_0}\right)k=Qk. $$ The proof of the lemma is completed by dividing the above left hand side by $\text{Var}(X)$ and using \eqref{Two}. \end{proof} \begin{proof}[Proof of Proposition \ref{VarPrp}] Let us fix some unit vector $u_0$, and some $p>2$. Set $\xi_{j}=X_j\cdot u_0$. Then $S(M)=\sum_{j\in M}\xi_j$. Let $r$ be as in Lemma \ref{Lemma 2}, let $A$ be large enough so that $A\geq 4Q(A)+1$ (then the right hand side of (\ref{CorEq}) does not exceed $1/2$). Set ${\mathcal S}_n=\sum_{j=1}^n\xi_j$. Then $\text{Var}({\mathcal S}_n)$ diverges because of Assumption \ref{AssVV} and our assumption that $s_n\to\infty$. Thus, we can construct intervals $M_1,M_2,...$ in the integers so that \begin{enumerate} \item $M_j$ is to the left of $M_{j+1}$, and $\min M_{j+1}-\max M_j=r+1$; \vskip0.2cm \item $\sqrt A\leq \|S(M_j)\|_{L^2}\leq\sqrt A+L$. \end{enumerate} Indeed, given that $M_j=\{a_j,...,b_j\}$ was constructed we define $M_{j+1}=\{b_j+r+1,...,b_{j+1}\}$, where $b_{j+1}$ is the first index $b\geq b_j+r+1$ so that $\|S(M_{j+1})\|_{L^2}\geq \sqrt A$. Let us define $I_j=M_j+\{1,2....,r\}$. Then by Lemma \ref{Cor1}, we see that \eqref{A}, \eqref{k n vn.1} hold true with the specific unit vector $u=u_0$, and \eqref{Block Cntrl} holds with $\xi_j=X_j\cdot u_0$ instead of $X_j$. Thus, by Assumption \ref{AssVV}, if $A$ is large enough then we see that \eqref{A} and \eqref{k n vn.1} hold true (possibly with other constants) for an arbitrary unit vector, and \eqref{Block Cntrl} holds with $\xi_j=X_j\cdot u$ (possibly with a different constant) for an arbitrary unit vector $u$. By taking the supremum over $u$, we obtain \eqref{Block Cntrl}. In order to prove \eqref{Last}, for each $q\geq1$ write $I_j=\{a_j,...,b_j\}$ and set \[ {\mathcal D}_q=\max_{b_{q}\leq n< a_{q+1}}|S_n-S_{b_q}|. \] Then with $\Theta_j=\sum_{k\in I_j}X_k$ and $k_n=\max\{k: b_k\leq n\}$, \begin{equation}\label{Fin} \left|S_n-\sum_{j=1}^{k_n}\Theta_j\right|\leq {\mathcal D}_{k_n}. 
\end{equation} Using \cite[Theorem 6.17]{PelBook} and \eqref{Block Cntrl} we see that that for any $p>2$ there is a constant $c_p$ so that for every $q\in{\mathbb N}$ we have \[ \|{\mathcal D}_q\|_{L^p}\leq c_p. \] Thus, by the Markov inequality for any ${\varepsilon}>0$ and $p>2$ we have \[ P(|{\mathcal D}_q|\geq q^{\varepsilon})=P(|{\mathcal D}_q|^p\geq q^{{\varepsilon} p})\leq c_p^pq^{-{\varepsilon} p}. \] Taking $p>1/{\varepsilon}$ we get from the Borel-Cantelli Lemma that \[ |{\mathcal D}_q|=O(q^{\varepsilon}),\,\text{a.s.} \] The desired estimate (\ref{Last}) follows now by plugging in $q=k_n$ in the above and using (\ref{Fin}) and (\ref{k n vn.1}). \end{proof} \section{Proof of the ASIP} The proof of Theorem \ref{Main Thm} relies on applying \cite[Theorem 2.1]{DH} with an arbitrary $p>2$. The latter theorem is a modification of Theorem \cite[Theorem 1.3]{GO} suited for more general non-stationary sequences of random vectors. The standing assumption in both theorems can be described as follows. Let $(A_1, A_2, \ldots )$ be an ${\mathbb R}^d$-valued process on some probability space $(\Omega, \mathcal F, \mathbb P)$. Then there exists $\varepsilon_0>0$ and $C,c>0$ such that for any $n,m\in {\mathbb N}$, $a_1<a_2< \ldots <a_{n+m+k}$, $k\in {\mathbb N}$ and $t_1,\ldots ,t_{n+m}\in\mathbb R^d$ with $|t_j|\leq\varepsilon_0$, we have that \begin{eqnarray}\label{(H)} \Big|\mathbb E\big(e^{i\sum_{j=1}^nt_j\cdot(\sum_{\ell=a_j}^{a_{j+1}-1}A_\ell)+i\sum_{j=n+1}^{n+m}t_j\cdot(\sum_{\ell=a_j+k}^{a_{j+1}+k-1}A_\ell)}\big)\\ -\mathbb E\big(e^{i\sum_{j=1}^nt_j\cdot(\sum_{\ell=a_j}^{a_{j+1}-1}A_\ell)}\big)\cdot\mathbb E\big(e^{i\sum_{j=n+1}^{n+m}t_j\cdot(\sum_{\ell=a_j+k}^{a_{j+1}+k-1}A_\ell)}\big)\Big|\nonumber\\\leq C(1+\max|a_{j+1}-a_j|)^{C(n+m)}e^{-ck}.\nonumber \end{eqnarray} The first part of the proof is to show that $A_j=\Theta_j=\sum_{k\in I_j}X_k$ satisfies \eqref{(H)}, which follows directly from the exponential ${\alpha}$-mixing rates \eqref{al mix}. Next, let us verify the rest of the conditions of \cite[Theorem 2.1]{DH}. Set $${\mathcal A}_n=\sum_{j=1}^n A_j.$$ Then, by Proposition \ref{VarPrp} we have $$ \min_{|u|=1}\text{Cov}({\mathcal A}_n)u\cdot u\geq Cn $$ where $C>0$ is a constant. This shows that the first additional condition is satisfied. To show that $A_j$ are uniformly bounded in $L^p$, notice that by \cite[Theorem 6.17]{PelBook} applied with $\xi_j=X_j\cdot u$ for an arbitrary unit vector $u$ we get that for every $p>2$ we have $$ \sup_n\|A_n\|_{L^p}<\infty. $$ The last condition we need to verify is that \begin{equation}\label{Need Last} \left|\text{Cov}(A_n\cdot u, A_{n+k}\cdot u)\right|\leq C{\delta}^k \end{equation} for some $C>0$ and ${\delta}\in(0,1)$ and all unit vectors $u$. By \cite[Corollary A.2]{Hall} we have $$ \left|\text{Cov}(A_n\cdot u, A_{n+k}\cdot u)\right|\leq \|A_n\cdot u\|_{L^p}\|A_{n+k}\cdot u\|_{L^p}\left({\alpha}(k)\right)^{1-2/p}$$ and so by taking $p>2$ we get \eqref{Need Last} (recall \eqref{al mix}). Next, by applying \cite[Theorem 2.1]{DH} with the sequence $A_j=\Theta_j=\sum_{k\in I_j}X_k$ we conclude that there is a coupling between $(A_j)_j$ and a sequence $Z_1,Z_2,...$ of independent centered Gaussian random vectors so that for any ${\varepsilon}>0$, \begin{equation}\label{asip.0} \left|\sum_{i=1}^{k}A_i-\sum_{j=1}^k Z_j\right|=o(k^{\frac 14+{\varepsilon}}),\,\,\text{a.s.} \end{equation} and that all the properties specified in Theorem \ref{Main Thm} hold true for the new sequence $A_j=\Theta_j$. 
Now Theorem \ref{Main Thm} follows by plugging in $k=k_n$, using \eqref{k n vn.1}, and then approximating $S_n$ with ${\mathcal A}_{k_n}=\sum_{j=1}^{k_n}\Theta_j$, relying on \eqref{Fin}. \section{Verification of the additional conditions in the non-scalar case: Markov chains}\label{SecVV} Assumption \ref{AssVV} trivially holds true for real-valued random variables. In this section we discuss natural conditions for Assumption \ref{AssVV} to hold for certain additive functionals of contracting Markov chains. \subsection{Dobrushin's contracting chains} Let us recall the definition of Dobrushin's contraction coefficients (\cite{Dub}). If $Q(x,\cdot)$ is a regular family of Markovian transition operators between two spaces $\mathcal X$ and $\mathcal Y$, then \[ \pi(Q)=\sup\left\{|Q(x_1,E)-Q(x_2,E)|:\,x_1,x_2\in{\mathcal X}, E\in{\mathcal B}({\mathcal Y})\right\} \] where ${\mathcal B}(\mathcal Y)$ is the underlying $\sigma$-algebra on $\mathcal Y$. Let $\{\xi_j\}$ be a Markov chain. Let $Q_j(x,\Gamma)={\mathbb P}(\xi_{j+1}\in\Gamma|\xi_j=x)$ and suppose that $$ \delta:=\sup_{j}\pi(Q_j)<1. $$ Then by \cite{VarSeth} the chain $\{\xi_j\}$ is exponentially fast $\phi$-mixing. Let us now take a sequence $f_j$ of bounded functions and set $X_j=f_j(\xi_j)$. Then by \cite{VarSeth}, there are positive constants $A=A_{\delta}$ and $B=B_{\delta}$ so that for each unit vector $u$, $$ A\sum_{j=n}^m\text{Var}(X_j\cdot u)\leq \text{Var}(S_{n,m}\cdot u)\leq B\sum_{j=n}^m\text{Var}(X_j\cdot u). $$ We thus get the following result. \begin{proposition} Assumption \ref{AssVV} (and hence Theorem \ref{Main Thm}) holds true if ${\delta}<1$ and there is a constant $C>1$ so that the ratio between the largest and smallest eigenvalues of the matrix $\text{Cov}(f_j(\xi_j))$ is bounded by $C$ for all $j$. \end{proposition} \subsubsection{Uniformly elliptic chains} In this section we consider a less general class of Markov chains $\{\xi_j\}$, but more general functionals. Let $\{\xi_j\}$ be a Markov chain with transition densities \[ {\mathbb P}(\xi_{j+1}\in A|\xi_{j}=x)=\int_{A}p_j(x,y)\,d\mu_{j+1}(y) \] where $\mu_i$ is a measure on the state space $\mathcal X_i$ of $\xi_i$ and $A\subset\mathcal X_{j+1}$ is a measurable set. We assume that there exists $\varepsilon_0>0$ so that for any $i$ we have $\sup_{x,y}p_i(x,y)\leq 1/\varepsilon_0$, and the second-step transition densities of $\xi_{i+2}$ given $\xi_i$ are bounded from below by $\varepsilon_0$ (this is the uniform ellipticity condition): \[ \inf_{i\geq1}\inf_{x,z}\int p_i(x,y)p_{i+1}(y,z)d\mu_{i+1}(y)\geq \varepsilon_0. \] Then the resulting Markov chain $\{\xi_j\}$ is exponentially fast $\phi$-mixing (see \cite[Proposition 1.22]{DS}). Next, we take a uniformly bounded sequence of measurable functions $f_j:\mathcal X_j\times\mathcal X_{j+1}\to{\mathbb R}^d$ and set $X_j=f_j(\xi_j,\xi_{j+1})$. Let us fix some unit vector $u$. Then, by applying \cite[Theorem 2.1]{DS} with the real-valued functions $f_j\cdot u$ (which are uniformly bounded in both $j$ and $u$) we see that there are non-negative numbers $u_i(f;u)=u_i(f_{i-2}\cdot u,f_{i-1}\cdot u,f_i\cdot u)$ and constants $A,B,C,D>0$ which depend only on $\varepsilon_0$ and $K:=\sup_j\sup|f_j|$ so that for any $m,n$, $m-n\geq 3$, \begin{equation}\label{Var2} A\sum_{j=n+3}^{m} u_j^2(f;u)-B\leq\text{Var}(S_{n,m}\cdot u)\leq C\sum_{j=n+3}^{m} u_j^2(f;u)+D \end{equation} where we recall that $S_{n,m}=\sum_{j=n}^{m}X_j$.
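For finite state spaces, the two quantities introduced in this section can be computed directly: the supremum over events in the definition of $\pi(Q)$ equals the largest total-variation distance between two rows of the transition matrix, and the uniform ellipticity condition reduces to a lower bound on the entries of the two-step transition matrix. The following Python sketch (illustrative only, not part of the original argument; the names are ours) makes this explicit for a toy inhomogeneous chain.
\begin{verbatim}
import numpy as np

def dobrushin_coefficient(Q):
    """pi(Q) = sup over states x1, x2 and events E of |Q(x1,E) - Q(x2,E)|.
    For finitely many states this is the largest total-variation distance
    between two rows, i.e. (1/2) * max ||Q(x1,.) - Q(x2,.)||_1."""
    Q = np.asarray(Q, dtype=float)
    m = Q.shape[0]
    return max(0.5 * np.abs(Q[i] - Q[j]).sum()
               for i in range(m) for j in range(m))

def two_step_ellipticity(P_i, P_next):
    """Smallest entry of the two-step transition matrix; uniform ellipticity
    asks that this is bounded below by some eps_0 > 0, uniformly in i."""
    return float((np.asarray(P_i) @ np.asarray(P_next)).min())

# A toy inhomogeneous chain on two states.
P1 = np.array([[0.7, 0.3], [0.4, 0.6]])
P2 = np.array([[0.5, 0.5], [0.2, 0.8]])
print(dobrushin_coefficient(P1))      # 0.3 < 1: a Dobrushin-contracting step
print(two_step_ellipticity(P1, P2))   # a positive lower bound for this pair
\end{verbatim}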
The numbers $u_i(f;u)$ are given in \cite[Definition 1.14]{DS}: $u_i^2(f;u)=(u_i(f;u))^2$ is the variance of the balance (in the terminology of \cite{DS}) function $\Gamma_i=\Gamma_{i,f\cdot u}$ given by \begin{eqnarray*} {\Gamma}_i(x_{i-2},x_{i-1},x_i,y_{i-2},y_{i-1},y_i)=f_{i-2}(x_{i-2},x_{i-1})\cdot u+f_{i-1}(x_{i-1},x_i)\cdot u+f_{i}(x_i,y_{i+1})\cdot u\\-f_{i-2}(x_{i-2},y_{i-1})\cdot u-f_{i-1}(y_{i-1},y_i)\cdot u-f_{i}(y_i,y_{i+1})\cdot u \end{eqnarray*} corresponding to the hexagon generated by $(x_{i-1},x_{i},x_{i+1};y_{i-1},y_{i},y_{i+1})$, with respect to the probability measure on the space of hexagons at positions $i$ which was introduced in \cite[Section 1.3]{DS}. We thus have the following result. \begin{proposition} Assumption \ref{AssVV}(and hence Theorem \ref{Main Thm}) holds true if there is a constant $C>1$ so that the ratio between the largest and smallest eigenvalues of the matrix $(u_i^2(f,e_j))_{i,j}$ is bounded by $C$ (here $e_j$ is the $j$-th standard unit vector). \end{proposition} \subsubsection{Weaker results for uniformly contracting Markov chains} Let $\{\xi_j\}$ be a Markov chain. Let us consider the transition operators $Q_j$ given by $Q_jg(x)=\mathbb E[g(\xi_{j+1})|\xi_j=x]$. For each $j\geq1$ let $\rho_j$ be the $L^2$-operator norm of the restriction of $Q_j$ to the space of zero-mean square-integrable functions $g(\xi_{i+1})$ (see \cite{Pel}). We assume here that \[ \sup_{j}\rho_j:=\rho<1. \] In these circumstances the Markov chain $\{\xi_j\}$ is exponentially fast $\rho$-mixing (see \cite{Pel}). Note also that by \cite[Lemma 4.1]{VarSeth} we have, \[ \rho_j\leq\sqrt{\pi(Q_j)} \] and so this is a more general family of Markov chains than the one considered in the previous sections. Let $f_j:\mathcal X_j\to{\mathbb R}$ be a sequence of uniformly bounded functions and set $X_j=f_j(\xi_j)$. We prove here the following result. \begin{theorem}\label{Thm1} Suppose that $s_n=\min_{|u|=1}(V_n u\cdot u)\geq n^{\delta_0}$ for some $\delta_0>0$. Assume also that there exists $C\geq 1$ so that for each $j$ the ratio between the largest and smallest eigenvalues of the matrix $\text{Cov}(X_j)$ is bounded by $C$ for any $j\geq1$. Then there is a coupling of $\{X_j\}$ with a sequence of independent centered Gaussian vectors $\{Z_j\}$ with the properties described in Theorem \ref{Main Thm}. \end{theorem} Relying on \eqref{Var1} below, the condition $s_n\geq n^{\delta_0}$ can be guaranteed by imposing appropriate restrictions on $\min_{|u|=1}(\text{Cov}(X_j)u\cdot u)$ \begin{proof} First, by \cite[Proposition 13]{Pel}, for every unit vector $u$ we have \begin{equation}\label{Var1} C_1\sum_{j=n}^{m}\text{Var}(X_j\cdot u)\leq\text{Var}(S_{n,m}\cdot u)\leq C_2\sum_{j=n}^{m}\text{Var}(X_j\cdot u) \end{equation} Thus, Assumption \ref{AssVV} holds true when there is a constant $C>0$ so that the ratio between the largest and smallest eigenvalues of the matrix $\text{Cov}(X_j)$ is bounded by $C$ for any $j\geq1$. \vskip0.1cm The proof of Theorem \ref{Thm1} proceeds now similarly to the proof of Theorem \ref{Main Thm}, with the following exception: we cannot use \cite[Theorem 6.17]{PelBook} in order to obtain \eqref{Fin}, since it requires \eqref{phi half}. In order to overcome this difficulty, consider first the scalar case $d=1$. 
Then, along the lines of the proof of \cite[Lemma 2.7]{DS}, Dolgopyat and Sarig have shown that for every exponentially fast $\rho$-mixing sequence $\{X_j\}$ which is uniformly bounded by some $K$, for all even $p\geq2$ there exist constants $E_{p,K}>0$ and $V_{p,K}>0$, depending only on $p$ and $K$, so that for all $n$ and $m$ with $\sum_{j=m}^{m+n-1}\text{Var}(X_j)\geq V_{p,K}$, we have \begin{equation}\label{Lp Bounds1} \|S_{m,n}\|_{L^p}\leq E_{p,K}\left(\sum_{j=m}^{m+n-1}\text{Var}(X_j)\right)^{1/2}. \end{equation} Therefore, there are constants $C_{p,K}$ so that for all $n,m\in{\mathbb N}$, \begin{equation}\label{Lp Bounds2} \|S_{m,n}\|_{L^p}\leq C_{p,K}\left(1+\sum_{j=m}^{m+n-1}\text{Var}(X_j)\right)^{1/2}. \end{equation} By (\ref{Var1}) we have that \[ \sum_{j=m}^{m+n-1}\text{Var}(f_j(X_j))\leq \frac{1+\rho}{1-\rho}\text{Var}(S_{n,m}) \] and so there is a constant $R_p>0$ so that for all $n,m$ we have \begin{equation}\label{Est} \|S_{n,m}\|_{L^p}\leq R_p(1+\|S_{n,m}\|_{L^2}). \end{equation} By replacing $X_j$ with $X_j\cdot u$ for an arbitrary unit vector $u$ and then taking the supremum over $u$, we see that \eqref{Est} holds true also in the vector-valued case $d>1$. Finally, let us show that \eqref{Fin} holds true. By the Markov inequality, with $\mathcal B_n=\sum_{j=1}^{k_n}\Theta_j$, for any ${\varepsilon}>0$ and $q>0$ we have $$ {\mathbb P}(|S_n-\mathcal B_n|\geq n^{{\varepsilon}})={\mathbb P}(|S_n-\mathcal B_n|^q\geq n^{{\varepsilon} q})\leq n^{-{\varepsilon} q}\|S_n-\mathcal B_n\|_{L^q}^q\leq R_{q,K}(1+c)n^{-{\varepsilon} q} $$ where in the last inequality we have also used \eqref{Est} and that $\|S_n-{\mathcal B}_n\|_{L^2}\leq c$ is bounded in $n$. Taking $q>1/{\varepsilon}$ and applying the Borel-Cantelli lemma we get that $$ |S_n-\mathcal B_n|=o(n^{\varepsilon})=o(s_n^{\frac{\varepsilon}{ \delta_0}}),\,\text{a.s.} $$ Since $\varepsilon$ is arbitrary small we get that for every $\varepsilon>0$ we have $$ |S_n-{\mathcal B}_n|=o(s_n^\varepsilon),\,\,\text{a.s.} $$ Now the proof of Theorem \ref{Thm1} is completed similarly to the end of the proof of Theorem \ref{Main Thm}. \end{proof} \begin{acknowledgment} The original rates obtained in previous versions of this paper were $o(n^{\delta})+o(V_n^{1/4+{\delta}})$, for any ${\delta}>0$. I would like to thank D. Dolgopyat for several discussions which helped improving these rates to the current rates $o(V_n^{1/4+{\delta}})$ in the ${\alpha}$-mixing case. \end{acknowledgment}
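As a small illustration of the block construction used in the proof of Proposition \ref{VarPrp}, the following Python sketch (ours, and purely schematic) grows each block until the $L^2$ norm of the block sum reaches $\sqrt A$ and then skips a gap of $r$ indices; in practice the norms would have to be estimated, e.g. by Monte Carlo simulation.
\begin{verbatim}
import math

def greedy_blocks(block_l2_norm, n, A, r):
    """Greedy block construction (sketch): grow M_j = {a_j, ..., b_j} until
    block_l2_norm(a_j, b_j) >= sqrt(A), then skip r indices before the next
    block, so that min M_{j+1} - max M_j = r + 1."""
    blocks, a = [], 1
    while a <= n:
        b = a
        while b <= n and block_l2_norm(a, b) < math.sqrt(A):
            b += 1
        if b > n:                 # the remaining indices do not fill a block
            break
        blocks.append((a, b))
        a = b + r + 1
    return blocks

# Toy input: ||X_a + ... + X_b||_2 behaving like sqrt(0.5 * (b - a + 1)).
norm = lambda a, b: (0.5 * (b - a + 1)) ** 0.5
print(greedy_blocks(norm, n=100, A=4.0, r=2))
\end{verbatim}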
\section{Foreword} The aim of this article is to present an example of interaction between number theory and {\it experimental} physics. I insist on the experimental aspect, since it is not unusual to find applications of number theory in theoretical physics, or more generally in mathematical physics (see \cite{wald}). The point here is therefore to have at hand an experimental device which allows one to visualize this interaction directly.\\ The mathematics underlying this interaction is elementary (essentially, it revolves around continued fractions). I have nevertheless decided to give a complete presentation of it, for at least two reasons:\\ \begin{itemize} \item The first is that the approach we are going to follow is not the standard approach to the subject, and suggests new interpretations and new notions.\\ \item The second is that this text is intended for a much wider community than mathematicians, a community which is not especially familiar with the mathematical notions used. \end{itemize} \section{Introduction} Communication systems require the design of electronic circuits which convert, modulate and detect frequencies. For instance, the propagation of radio waves is more efficient at high frequencies. One therefore seeks to transform the signal so that the initial information is carried by a high-frequency signal. This procedure also has the advantage of reducing the size of the antennas needed for reception (see \cite{smith}, p.487). The {\it open loop} is the most widespread basic electronic component used to perform frequency changes. It is based on a {\it mixer}, which theoretically ``takes'' the product of two signals.\\ Recently, a series of experiments designed to study {\it 1/f noise}, carried out by Michel Planat at the LPMO, has led to a renewed understanding of the mixer and of the open loop. This renewal is due in part to the high precision of the measurements of the frequency and amplitude spectra of the output signal.\\ The main novelties in the analysis of the mixer and of the open loop are the following: \begin{itemize} \item The frequency spectrum is governed by a {\it diophantine} analysis. \item There exists a minimal {\it resolution} (in time and in space), intrinsic to the system, which structures the space of frequencies via the diophantine analysis of the previous point. \end{itemize} The main result of this article is an abstract theorem which makes it possible to predict the structure of the experimental frequency spectrum.\\ Points 1 and 2 require the introduction of {\it resolution spaces} (called {\it arithmetical} resolution spaces in \cite{CrDe}). They take into account the diophantine aspect and the constraints of minimal resolution in space and time.\\ The need for information about the diophantine approximation of real numbers leads naturally to {\it continued fractions}. We give an original presentation of them via the two elementary operations $x\mapsto x+1$ and $x\mapsto 1/x$. In particular, we obtain a representation of the {\it Farey tree} which is, to our knowledge, new.
This choice of construction and representation of the numbers is dictated by the need for a translation of the resolution constraints which is as simple as possible.\\ The resolution constraint in {\it space} is interpreted as the existence of an {\it integer}, denoted $a_{max}$, beyond which numbers are identified with infinity. One thereby introduces a natural {\it scale} structure on the preceding set, by bringing out {\it locking} (or {\it lock-in}) zones around the rational numbers, {\it transition} zones towards the locking zones and, finally, {\it instability} zones, corresponding to irrationals. Concretely, one recovers the set of continued fractions {\it with partial quotients bounded} by $a_{max}$. We give an original construction of this set which involves a natural {\it dynamical system} on the set of continued fractions and leads to a {\it dynamics of numbers}. This dynamics only becomes apparent when $a_{max}$ is finite. We thus show that, as soon as a resolution constraint is fixed, there is a natural {\it hierarchy} of the numbers, a hierarchy which disappears if one looks at all of $\hbox{\tenbb R}$.\\ The constraint in {\it time} translates into the existence of a bound $n_{max}$ on the length of the continued fractions. In other words, the system cannot ``descend'' indefinitely into the continued fraction expansion of a number. We then introduce a notion of {\it fuzzy} zone, representing places where the {\it analysis} performed by the system gives no information, the numbers to be analysed having a continued fraction expansion which is too long. In other words, the system certainly does something, but it is {\it impossible} to know what.\\ The amplitude spectrum does not let itself be captured so easily. For the moment there is no analogue of the structure theorem. \section{Experimental frequency spectrum} We present the model of mixer and low-pass filter which will be used in the rest of the article. We describe the experimental frequency spectrum obtained. We then formulate our approach to the frequency spectrum and the main hypothesis of this work, namely that the open loop ``performs'' diophantine approximation of the frequencies of the signal. \subsection{The open loop} The {\it open loop}, or {\it superheterodyne} circuit, discovered by Armstrong and Schottky in 1924, makes it possible to study a signal, called the {\it reference signal} and denoted $s_0 (t)$, by means of a known signal denoted $s_1 (t)$. The frequency of the reference oscillator is denoted $f_0 (t)$. The known signal is produced by a so-called local oscillator of frequency $f_1$.\\ The open loop consists of a {\it mixer}, which is supposed to multiply the two signals, and of a so-called low-pass filter with cut-off frequency $f_c$, which is supposed to cut the frequencies above $f_c$. We therefore have the following device: \begin{center} \includegraphics[width=0.5\textwidth]{melang.eps} {The open loop} \end{center} What does this circuit do?\\ If one assumes that the mixer really takes the product of the two signals, we obtain at the output of the mixer a signal of the form \begin{equation} s(t)=\displaystyle {a_0 (t) a_1 (t) \over 2} \left ( \cos ((f_0 (t)+f_1 )t) +\cos ((f_0 (t)-f_1 (t))t) \right ) .
\end{equation} Suppose that $f_0 (t)+f_1 >f_c$ for all $t$; applying the low-pass filter we then obtain a signal of the form \begin{equation} s(t)=\displaystyle {a_0 (t) a_1 (t) \over 2} \cos ((f_0 (t)-f_1 (t))t) . \end{equation} One sees that the action of the ideal mixer is linear in the frequencies and nonlinear in the amplitudes.\\ Unfortunately, a real mixer has a much more complicated behaviour. Moreover, modelling mixers is far from easy, even if, for example, one knows exactly all the electronic components of which the mixer is made. We refer to (\cite{lee}, chapter 12) for more details. The mixer generally produces a so-called intermodulation spectrum (\cite{lee}, p.314), i.e. the set of integer combinations of $f_0$ and $f_1$: \begin{equation} pf_0 -qf_1 ,\ \ p,q\in \hbox{\tenbb Z} . \end{equation} The structure of the frequency spectrum obtained at the output of the open loop reflects the existence of these modulations. Since modelling the components is difficult, only a direct approach remains in order to try to predict and explain the structure of the frequency spectrum. \subsection{Experimental results} The frequency spectrum has the following form:\\ \begin{center} \includegraphics[width=0.5\textwidth]{freq08.eps} {Frequency spectrum} \end{center} The main features of its structure are: \begin{itemize} \item The frequencies are located in basins around rational frequencies $p/q$, called {\it locking zones}. The edges of these zones are denoted $\nu^- (p/q)$ and $\nu^+ (p/q)$. \item The basins are not {\it symmetric}. Indeed, one observes that \begin{equation} p/q -\nu^- (p/q) \not= \nu^+ (p/q) -p/q . \end{equation} \end{itemize} In what follows, we develop a {\it quantitative} theory which accounts for these two facts in a precise way. \section{Theoretical frequency spectrum} \subsection{Formalization and the diophantine hypothesis} The main difference between the ideal mixer and the ``real'' mixer is the appearance of harmonics of the form \begin{equation} f_{p,q} (t)=pf_1 -qf_0 (t), \end{equation} with $(p,q)\in \hbox{\tenbb Z}^2 \setminus \{ (0,0)\}$.\\ The action of the low-pass filter with cut-off frequency $f_c$ leads to keeping only the harmonics satisfying the relation \begin{equation} \mid f_{p,q} (t) \mid < f_c . \end{equation} Let \begin{equation} \nu (t)={f_0 (t) \over f_1} \end{equation} denote the {\it normalized} frequency.\\ The frequency spectrum is therefore governed by an equation of the type \begin{equation} \label{inifreq} \left | \nu (t) -\displaystyle {p\over q} \right | \leq \displaystyle {f_c \over q} . \end{equation} Understanding the frequency spectrum means determining the normalized frequencies allowed by relation (\ref{inifreq}).\\ For the time being we shall forget the temporal aspect (hence the dynamics of $\nu (t)$) and concentrate on the static aspect of the frequency spectrum.\\ The first idea is that this equation suffices, by itself, to reconstruct the frequency spectrum; this is in itself somewhat bold, since it assumes that the frequency spectrum is independent of the amplitudes.\\ Without any hypothesis on the nature of the approximations of $\nu$, one expects to find a basin around each rational, bounded by two segments of slope $\pm q$.
This is indeed the case. Unfortunately, this approach does not account for the {\it asymmetry} of the basins observed experimentally.\\ If equation (\ref{inifreq}) contains the essential part of the information on the nature of the frequency spectrum, this means that {\it the approximation of $\nu$ is not trivial}. It therefore remains to determine the {\it nature} of the approximation performed by the detector.\\ The hypothesis we shall make is that the detector has a {\it diophantine} behaviour, i.e. that the approximations of a frequency $\nu$ are performed by {\it convergents}. In other words, the condition $$ \left | \nu (t) -\displaystyle {p_i\over q_i} \right | \leq \displaystyle {f_c \over f_0 q_i} \leq {1\over \nu_{i+1} q_i^2 }, \eqno{(D)} $$ must hold, where $\nu=[\nu_0 ,\nu_1 ,\dots ]$ denotes the continued fraction expansion of $\nu$, and $p_i/q_i =[\nu_0 ,\dots ,\nu_i ]$ is the $i$-th convergent of $\nu$. \subsection{Main results} This condition has several consequences, which will then have to be checked experimentally:\\ i) The frequencies $\nu$ observed at the output of the detector are strongly constrained by (D). Let $p/q$ be a fixed rational; then the associated $\nu$ have a continued fraction expansion satisfying \begin{equation} \label{amax} 1\leq \nu_{i+1} <\displaystyle {f_1 \over f_c q} . \end{equation} ii) Condition (\ref{amax}) imposes a maximal threshold on $q$, namely \begin{equation} q\leq \left [ \displaystyle {f_1 \over f_c } \right ] . \end{equation} Since the denominator $q_i$ of a convergent increases with $i$, this imposes a maximal depth in the continued fraction expansion of $\nu$.\\ We summarize the preceding discussion in the following theorem, which, for a given rational $p/q$, describes the set of {\it admissible} real numbers under the constraint (\ref{inifreq}) and the diophantine hypothesis. \begin{thm} \label{spectre static} Let $p/q$ be an irreducible fraction. Denote by $S(p/q)$ the set of real numbers satisfying (\ref{inifreq}) under the diophantine hypothesis (D). Then:\\ - The set $S(p/q)$ is nonempty if and only if $q\leq \left [ f_1 /f_c \right ]$, where $[x]$ denotes the integer part of $x$.\\ - Let $[a_1 ,\dots ,a_n]$ be the continued fraction expansion of $p/q$. The set $S(p/q)$ is the set of real numbers $x\in \hbox{\tenbb R}$ with continued fraction expansion $[x_1,\dots ,x_k ,\dots ]$ such that \begin{equation} x_i =a_i \ \mbox{\rm for}\ i=1,\dots, n, \end{equation} and \begin{equation} x_{n+1} \leq f_1 /f_c q . \end{equation} \end{thm} We call the set \begin{equation} S_{f_1 /f_c} =\left \{ \nu \in \hbox{\tenbb R} \ \mbox{\rm satisfying}\ (D) \right \} \end{equation} the {\it theoretical frequency spectrum}. Theorem \ref{spectre static} makes the structure of $S_{f_1 /f_c}$ precise. It is a theorem of a predictive nature, since it allows one to reconstruct the frequency spectrum from the data of $f_1$ and $f_c$. In order to compare the predictions of our theory with the experimental spectrum ${\mathcal S}_{f_1 /f_c}$, we prove the following result: \begin{lem} \label{bordaccro} Let $p/q$ be an irreducible fraction with $q\leq \left [ f_1 /f_c \right ]$, and let $[a_0 ,a_1 ,\dots ,a_n ]$ be its continued fraction expansion.
The boundary of the locking zone is given by \begin{equation} \label{formeaccro} \left . \begin{array}{lll} \nu^{\sigma } & = & [a_0 ,\dots ,a_n ,a] , \\ \nu^{-\sigma } & = & [a_0 ,\dots ,a_n -1 ,1,a ] , \end{array} \right . \end{equation} with $a=[ f_1 /f_c q ]$ and $\sigma =+$ if $n$ is even and $\sigma =-$ if $n$ is odd. \end{lem} The proof is given in Section \ref{zoneaccro}.\\ We now have the analytic characterization of the main geometric features of the theoretical frequency spectrum. \subsection{Experimental confirmation} The validity of the diophantine hypothesis (D) can be tested via Theorem \ref{spectre static} and Lemma \ref{bordaccro}. \begin{itemize} \item Michel Planat and Serge Dos Santos \cite{serge} have shown that the edges of the locking zones are indeed of the form (\ref{formeaccro}).\\ \item The diophantine hypothesis predicts, inside a given locking zone, the existence of an upper bound $a_{\rm max}$ for the partial quotients. This bound appears in the expression of the edges of the locking zone. Michel Planat and Jean-Philippe Marillet \cite{mar} have shown that $a_{\rm max}$ is indeed of the form $f_1 /f_c q$. \end{itemize} \subsection{Exploring the diophantine condition} To better understand the nature of the diophantine hypothesis (D), we proceed in two steps: \begin{itemize} \item The first consists in understanding the effect of a constraint, which I shall call a {\it resolution} constraint, on the partial quotients of a continued fraction, as follows from point i) above. In other words, we shall look at the structure of the continued fractions with partial quotients bounded by a fixed $a_{\rm max}\in \hbox{\tenbb N}$. This study, carried out in the next section, is very instructive about the essential difference between working with the real numbers and working under a resolution constraint. In particular, we shall see that the existence of a resolution in fact induces a {\it hierarchy} of the numbers and gives rise to a {\it dynamics of numbers}.\\ \item The second step takes into account the dynamical aspect of the frequency spectrum. Detection inside a given basin exhibits frequency jumps. We give here a theoretical justification of the existence of these jumps, in which the diophantine hypothesis plays an essential role. \end{itemize} \section{Resolution spaces: geometric aspects} This section gives a geometric construction of the set of continued fractions with bounded partial quotients, exhibiting a tree structure. This construction is probably not new, but we have not found a reference which displays in a simple way the structures we need. We refer to the book of G.H. Hardy and E.M. Wright (\cite{HW}, p. 164-169) for the standard presentation. \subsection{Geometry of continued fractions} \subsubsection{Representation of irreducible fractions} Let $p/q$ be an {\it irreducible} fraction in $\hbox{\tenbb Q}$. We associate with it the point $(q,p)\in \hbox{\tenbb Z}^2$, or equivalently the line of $\hbox{\tenbb Z}^2$ through $0$ and $(q,p)$, of slope $p/q$ and equation $px-qy=0$.
We thus obtain a bijection between $\hbox{\tenbb Q}\bigcup \{ \infty \}$ and $P^1 (\hbox{\tenbb Z}^2 )$, the set of vector lines of $\hbox{\tenbb Z}^2$, defined as the set of points of $\hbox{\tenbb Z}^2$ modulo the equivalence $(q,p )\sim (q' ,p')$ if and only if there exists an integer $\lambda\in \hbox{\tenbb Z}$ such that $(q,p)=\lambda (q',p')$ or $(q',p')=\lambda (q,p)$. \begin{figure} \label{irred} \centering \includegraphics[width=1\textwidth]{irred01.eps} \caption{Prime points of $Z^2$ ($\gcd(p,q)=1$).} \end{figure} Each line $D$ of $P^1 (\hbox{\tenbb Z}^2 )$ is isomorphic to $\hbox{\tenbb Z}$, and is generated by one of the two points $(q,p)$, $(-q,-p)$ of $D$ satisfying $<q,p>=1$. These points are called {\it prime} points of $\hbox{\tenbb Z}^2$. We denote by $\mathcal P$ the set of prime points of $\hbox{\tenbb Z}^2$. \vskip .3cm The ring of $2\times 2$ $\hbox{\tenbb Z}$-matrices, denoted $M_2 (\hbox{\tenbb Z} )$, acts naturally on $\hbox{\tenbb Z}^2$ : \begin{equation} \left . \begin{array}{llll} \forall A=\left ( \begin{array}{lll} a & b \\ c & d \end{array} \right ) \in M_2 (\hbox{\tenbb Z} ) , & \hbox{\tenbb Z}^2 & \stackrel{A}{\rightarrow} & \hbox{\tenbb Z}^2 , \\ & (q,p) & \mapsto & (aq+bp,cq+dp) . \end{array} \right . \end{equation} This action induces an action on $\hbox{\tenbb Q}$ via {\it M\"obius transformations}: \begin{equation} \left . \begin{array}{llll} A\in M_2 (\hbox{\tenbb Z} ) , & \hbox{\tenbb Q} & \stackrel{A}{\rightarrow} & \hbox{\tenbb Q} , \\ & z=p/q & \mapsto & (c+dz)/(a+bz) . \end{array} \right . \end{equation} The matrix $A$ preserves $\mathcal P$ if and only if $\mid \mbox{\rm det} (A)\mid =1$, i.e. $A$ is invertible in $M_2 (\hbox{\tenbb Z} )$. We therefore consider the action of $GL_2 (\hbox{\tenbb Z} )$, the set of invertible matrices of $M_2 (\hbox{\tenbb Z} )$, on $\hbox{\tenbb Q}$, via M\"obius transformations. \subsubsection{Continued fractions and $F_2^+$} We refer to Khintchine \cite{Kh} for more details. \vskip .6cm Let $(a_0 ,\dots ,a_n )$ be a finite sequence of integers with $a_n \not= 0$. We denote by $[a_0 ,\dots ,a_n ]$ the finite continued fraction \begin{equation} a_0 +\displaystyle {1\over a_1 +\displaystyle {1\over a_2 + \displaystyle {1\over \dots +\displaystyle {1\over a_n}}}} . \end{equation} We keep the same notation for a sequence of infinite length. Assume that $a_i >0$ for all $i>0$; then every {\it irrational} number has a {\it unique} representation as an infinite continued fraction. On the other hand, the equality \begin{equation} \label{nonun} [a_0 ,\dots ,a_n ] =[a_0 ,\dots ,a_n -1 ,1] , \end{equation} shows that a rational number has two such representations. One deduces two ways of making the representation of a rational number unique: \vskip .3cm i - every rational number has a unique continued fraction expansion of even (resp. odd) length. \vskip .3cm ii - every rational number has a unique continued fraction expansion ending with an integer $>1$. \vskip .3cm The first representation is adapted to the introduction of the {\it modular group}. The second removes the virtual extensions of the continued fraction via (\ref{nonun}). It is well suited to the construction of the resolution space.
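To make the two normalizations above concrete, here is a short Python sketch (ours, not in the original text) which computes the expansion of a positive rational ending with a partial quotient $>1$, produces the other writing of the same rational via (\ref{nonun}), and checks that both represent the same number.
\begin{verbatim}
from fractions import Fraction

def continued_fraction(p, q):
    """Partial quotients of p/q (p, q > 0), ending with a quotient > 1."""
    a = []
    while q:
        a.append(p // q)
        p, q = q, p % q
    if len(a) > 1 and a[-1] == 1:     # normalisation (ii): [.., b, 1] -> [.., b+1]
        a.pop()
        a[-1] += 1
    return a

def extend(a):
    """The other writing of the same rational, via [.., b] = [.., b-1, 1]."""
    return a[:-1] + [a[-1] - 1, 1]

def value(a):
    x = Fraction(a[-1])
    for b in reversed(a[:-1]):
        x = b + 1 / x
    return x

cf = continued_fraction(7, 5)            # [1, 2, 2]
print(cf, extend(cf))                    # [1, 2, 2]  [1, 2, 1, 1]
print(value(cf), value(extend(cf)))      # 7/5 twice
\end{verbatim}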
\vskip .3cm From the continued fraction algorithm, all continued fractions can be constructed via the elementary maps of {\it translation}, denoted $T$, and {\it inversion}, denoted $S$: \begin{equation} T\ :\ \left . \begin{array}{lll} \hbox{\tenbb R} & \rightarrow & \hbox{\tenbb R} ,\\ x & \mapsto x+1 , \end{array} \right . \ \ \ \mbox{\rm and} \ \ \ S\ :\ \left . \begin{array}{lll} \hbox{\tenbb R} & \rightarrow & \hbox{\tenbb R} ,\\ x & \mapsto 1/x . \end{array} \right . \end{equation} One can restrict the action of $T$ (resp. $S$) to $\hbox{\tenbb Q}$. In that case we obtain two homographies, represented in $M_2 (\hbox{\tenbb Z} )$ by the matrices \begin{equation} T= \left ( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array} \right ) , \ \ \ \mbox{\rm and}\ \ \ S= \left ( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array} \right ) . \end{equation} A continued fraction $[a_0 ,\dots ,a_n ]$ can therefore be written $T^{a_0} S T^{a_1} S \dots S T^{a_n } (1,0).$ To obtain a unique representation, we choose representations of even length. We introduce the matrix $J =STS$, given by \begin{equation} J= \left ( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array} \right ) , \end{equation} which corresponds to the transformation $x\mapsto \displaystyle {x\over x+1}$ on $\hbox{\tenbb Q}$.\\ We have: \begin{thm} Every rational number $[a_0 ,\dots ,a_{2n} ]$ admits a unique representation of the form \begin{equation} T^{a_0} J^{a_1} \dots J^{a_{2n-1}} T^{a_{2n}} (1,0) . \end{equation} \end{thm} The matrices $T$ and $J$ are {\it unimodular} (of determinant $1$). They generate the {\it modular group} $PSL_2 (\hbox{\tenbb Z} )$, which is isomorphic to the free group $F_2$ of rank 2, with $T$ and $J$ as free generators. We denote by $F_2^+$ the semigroup of words written with positive powers of $T$ and $J$. We have \begin{cor} The map \begin{equation} \left . \begin{array}{lll} \hbox{\tenbb Q} & \rightarrow & F_2^+ , \\ \left [ a_0 ,\dots ,a_{2n} \right ] & \mapsto & T^{a_0} J^{a_1} \dots J^{a_{2n-1}} T^{a_{2n}} , \end{array} \right . \end{equation} is a bijection. The semigroup $F_2^+$ acts on the left on $\hbox{\tenbb Q}$. \end{cor} The proof follows from the preceding theorem. \subsection{The Farey tree} \subsubsection{Terminology for trees} We refer to the book of Serre (\cite{Se2}, $\S$.2.2, p.28) for more details. Recall that a {\it tree} is a connected, nonempty graph without circuits. We adopt the following convention for representing a tree by a drawing: a point corresponds to a {\it vertex} of the tree, and a line joining two marked points corresponds to an {\it edge}. If the tree is oriented, then, given an edge $\{ P,Q\}$, the vertex $P$ is called the {\it origin} of $\{ P,Q\}$ and $Q$ the {\it terminal} vertex of $\{ P,Q\}$. These two vertices are the {\it endpoints} of $\{ P,Q\}$. Given a vertex $P$ of an oriented tree $\Gamma$, the {\it sons} of $P$ are the terminal vertices of the edges having $P$ as origin. The {\it father} of $P$ is the origin of the edge having $P$ as terminal vertex. \subsubsection{The Farey tree} Usually, the Farey tree is represented via the action of the modular group on the Poincar\'e half-plane (or, equivalently, on the Poincar\'e disc). We give here a representation of it in $\hbox{\tenbb Z}^2$, which is more convenient for what follows.
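As a quick numerical check of the word representation in the theorem above (a sketch of ours, not part of the original text), one can multiply out the matrices of the word $T^{a_0}J^{a_1}\cdots T^{a_{2n}}$ and apply the result to the base point $(1,0)$.
\begin{verbatim}
import numpy as np

T = np.array([[1, 0], [1, 1]])   # (q, p) -> (q, p + q),  i.e.  x -> x + 1
J = np.array([[1, 1], [0, 1]])   # (q, p) -> (q + p, p),  i.e.  x -> x / (x + 1)

def word_matrix(partial_quotients):
    """Matrix of the word T^{a_0} J^{a_1} T^{a_2} ... for a_0, ..., a_{2n}."""
    M = np.eye(2, dtype=int)
    for k, a in enumerate(partial_quotients):
        L = T if k % 2 == 0 else J
        M = M @ np.linalg.matrix_power(L, a)
    return M

# [3, 2, 1, 1, 1] is an expansion of 27/8 whose last index is even, as in the
# theorem; applying the word to (q, p) = (1, 0) should return the point (8, 27).
q, p = word_matrix([3, 2, 1, 1, 1]) @ np.array([1, 0])
print(q, p)   # expected: 8 27
\end{verbatim}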
\vskip .3cm We denote by $L_{\infty}$ the line through $(1,0)$ generated by the action of $T$ on the segment $[(1,0),(1,1)]$. It has slope $\infty$. Similarly, we denote by $L_0$ the line through $(0,1)$ generated by the action of $J$ on the segment $[(0,1),(1,1)]$. It has slope $0$. \vskip .3cm The action of $F_2^+$ on $\hbox{\tenbb Z}^2$ induces an action of $F_2^+$ on $L_0$ and $L_{\infty}$. We obtain the following figure: \begin{figure}[h] \label{PFTree} \centering \includegraphics[width=1\textwidth]{FPT01.eps} \caption{The Farey tree} \end{figure} We denote by $\mathcal T$ the set thus obtained. We have: \begin{thm} The set $\mathcal T$ is a tree (or rather, the geometric realization of a tree) whose vertices are the irreducible points $(q,p)\in\hbox{\tenbb Z}^2$. \end{thm} This result is classical (at least in the Poincar\'e half-plane, see (\cite{Se2},$\S$.4.2,p.52-53)). \vskip .3cm We now make precise the relation between the continued fraction expansion of a vertex of $\mathcal T$ and those of its sons and of its father. To this end, we introduce the notions of {\it branches} and {\it twigs} of the tree $\mathcal T$. \begin{defi} A branch of $\mathcal T$ is the image under a word of $F_2^+$ of the line $L_0$ or $L_{\infty}$. Let $B$ be a branch of $\mathcal T$; a twig of $B$ at $P$ is a branch distinct from $B$ whose origin is the vertex $P$. \end{defi} A consequence of the preceding theorem is: \begin{cor} Let $M=(q,p)\in \hbox{\tenbb Z}^2$, $(q,p)\not= (1,1)$, be a vertex of $\mathcal T$; then $M$ belongs to two distinct branches $B_M^m$ and $B_M^f$, called the mother branch and the daughter branch. The mother branch admits the daughter branch as a twig at $M$. \end{cor} The characterization of the tree $\mathcal T$ in terms of continued fractions can now be stated as follows: \begin{thm} Let $M=(q,p)$ be a vertex of $\mathcal T$ such that $p/q=[a_0 ,\dots a_{2n} ]$. Denote by $B_M^m$ and $B_M^f$ its mother and daughter branches respectively. Then: i - the origin of the mother branch is $$ \left . \begin{array}{ll} \left [ a_0 ,\dots ,a_{2n-1} ,1 \right ] & \mbox{\rm if}\ \ a_{2n} >1 , \\ \left [ a_0 ,\dots ,a_{2n-2} +1 \right ] & \mbox{\rm if}\ \ a_{2n} =1 . \end{array} \right . $$ ii - the slope of the mother branch is $$ \left . \begin{array}{ll} \left [ 0,a_0 ,\dots ,a_{2n-1} -1,1 \right ] & \mbox{\rm if}\ \ a_{2n} >1 , \\ \left [ 0,a_0 ,\dots ,a_{2n-2} \right ] & \mbox{\rm if}\ \ a_{2n} =1 . \end{array} \right . $$ iii - the slope of the daughter branch is $$ \left . \begin{array}{ll} \left [ a_0 ,\dots ,a_{2n} -1 \right ]\ & \mbox{\rm if}\ \ a_{2n} >1 , \\ \left [ a_0 ,\dots ,a_{2n-1} \right ] & \mbox{\rm if}\ \ a_{2n} =1 . \end{array} \right . $$ \end{thm} \begin{proof} It relies on the iterative construction of the Farey tree. \vskip .3cm i) The mother branch of $M$ is the image under a word $w \in F_2^+$ of the line $L_0$ or $L_{\infty}$. Assume that $B_M^m$ is the image of $L_{\infty}$ under $w$ (the case of $L_0$ is proved in the same way). The origin of $B_M^m$ is then $w(1,0)$. If $M \not= w(1,0)$, there is an integer $k>0$ such that \begin{equation} \label{egalm} M=w\displaystyle T^k (1,0)=T^{a_0 }\dots \dots T^{a_{2n}} (1,0) , \end{equation} where $w\displaystyle T^k$ is the word obtained by concatenating $w$ and $\displaystyle T^k$.
Since the word $w$ does not end with a $T^l$, $l>0$, we deduce from (\ref{egalm}) and the uniqueness of the representation of $M$ that $k=\displaystyle a_{2n}$ and $w=T^{a_0} \dots J^{a_{2n-1}} T$ when $a_{2n} >1$, so that the origin of the mother branch in this case is $w(1,0)=[a_0 ,\dots ,a_{2n-1} ,1]$. If $a_{2n} =1$, we write $[a_0 ,\dots ,a_{2n-1} ,1]=[a_0 ,\dots ,a_{2n-1} +1]$, hence $M=w\displaystyle T^{a_{2n-1}} (1,0)$ with $w=T^{a_0 } J^{a_1} \dots T^{a_{2n-2} +1}$, and the origin of the mother branch is given by $w(1,0)=[a_0 ,\dots ,a_{2n-2} +1]$. \vskip .3cm ii - It suffices to note that $L_0$ (resp. $L_{\infty}$) is parallel to the line through $(0,0)$ and $(1,0)$ (resp. $(0,0)$ and $(0,1)$). For every word $w\in F_2^+$ we have $w.(0,0)=(0,0)$, since $w$ is a linear map. Using i), the slope of the mother branch of $M=w.(1,0)$, $w=T^{a_0 }\dots \dots T^{a_{2n}}$, is therefore given by $T^0 J^{a_1} T^{a_2} \dots J^{a_{2n-1}} T^{1} (1,0)$ if $a_{2n} >1$ and by $T^0 J^{a_1} \dots T^{a_{2n-2}} (1,0)$ if $a_{2n} =1$. \vskip .3cm iii - The proof is analogous to ii), viewing the daughter branch as a mother branch with origin $[a_0 ,\dots ,a_{2n}]$. \end{proof} \subsection{Resolution sets} The frequency spectrum would be given by the preceding set if no resolution constraint existed, i.e. in an ideal system (in the mathematical sense). Experimental results and physics impose the existence of a minimal resolution. In this section we interpret this constraint and describe its effect on the frequency spectrum. \subsubsection{The resolution constraint} The intuitive notion of {\it resolution} must be translated into a simple condition on the set of continued fractions.\\ \begin{itemize} \item {\it Resolution hypothesis (numbers)}. Let $a>0$ be an integer. Every real number $x\geq a$ is identified with $\infty$. \end{itemize} Note that this resolution hypothesis {\it at infinity} implies, via the action of the map $S$, a resolution condition {\it at zero}: indeed, all real numbers $0\leq x \leq 1/a$ are identified with $0$. \vskip .3cm We denote by ${\mathcal R}_a$ the set of real numbers obtained in this way. \vskip .3cm The hypothesis translates into a condition on the {\it admissible} words of $F_2^+$. \begin{itemize} \item {\it Resolution hypothesis (words)}. The only admissible words of $F_2^+$ are those containing only powers $T^i$ with $i<a$. \end{itemize} We therefore deduce the following theorem: \begin{thm} The resolution set ${\mathcal R}_a$ is the set of continued fractions with bounded partial quotients. \end{thm} This theorem by itself does not say much about the set ${\mathcal R}_a$. We make its geometric and dynamical structure precise in the next section. \section{Resolution spaces: dynamical aspects} \subsection{The resolution dynamical system} We now work in $\bar{\hbox{\tenbb R}}^+ =\hbox{\tenbb R}^+ \cup \{ \infty \}$. For every $a\in \mathbb{N}^*$ we introduce a natural map on $\bar{\hbox{\tenbb R}}$, called the {\it resolution map}. \\ For $a\in \hbox{\tenbb N}^*$, we denote by $F_2^+ (a)$ the set of words of $F_2^+$ containing no subword $T^k$ or $J^k$ with $k\geq a$.
\begin{defi} Let $a\in \hbox{\tenbb N}^*$ and let $w$ be a finite word of $F_2^+$, $w=w_1 \dots w_n$. We define the map from $F_2^+$ to $F_2^+$ which associates with $w$ the word $w_a$ obtained by replacing the first $T^i$ or $J^i$ with $i\geq a$ by $\infty$ or $0$ respectively. We denote this map by $R_a$. \end{defi} The map $R_a$ defines a dynamical system on $F_2^+$. The maximal invariant set of $R_a$ is $F_2^+ (a)$. The translation to numbers is made via the map \begin{equation} \left . \begin{array}{llll} r_a \ :\ & \bar{\hbox{\tenbb R}}^+ & \rightarrow & \bar{\hbox{\tenbb R}}^+ , \\ & x=w(1,0) & \mapsto & x_a =R_a (w) (1,0) . \end{array} \right . \end{equation} The map $r_a$ defines a dynamical system on $\bar{\hbox{\tenbb R}}^+$. To my knowledge this dynamical system is new. Its graph for $a=3$ is given by: \begin{figure}[h] \label{esc201} \centering \includegraphics[width=1\textwidth]{esc201.eps} \caption{Graph of the map $r_3$.} \end{figure} The maximal invariant set of $r_a$ is ${\mathcal R}_a$.\\ This dynamical point of view on the set of continued fractions with bounded partial quotients makes it possible to define a {\it natural} dynamics of numbers. We first describe the geometric structure of ${\mathcal R}_a$. \subsection{The resolution tree} Before stating the structure theorem for ${\mathcal R}_a$, we can construct the set ${\mathcal R}_a$ for a fixed $a$ by the same procedure as for the Farey tree. For instance, in the case $a=3$ we obtain: \begin{figure}[h] \label{tructree01} \centering \includegraphics[width=1\textwidth]{trunctree01.ps.eps} \caption{The resolution tree ${\mathcal R}_3$.} \end{figure} The main effect of the resolution constraint is to open up the zones of the plane associated with a given rational; moreover, the numbers contained in such a zone are sent by $r_a$ to the rational corresponding to the node. \section{Dynamical construction and structure theorem} The preceding construction on $\mathbb{Z}^2$ gives a geometric picture which is not well suited to a direct comparison with the experimental frequency spectrum. The set ${\mathcal R}_a$ can be visualized by plotting the graph of the function $e_a :\mathbb{R} \rightarrow \mathbb{R}$ defined by \begin{equation} x\longmapsto \mid x-r_a (x) \mid , \end{equation} which gives the approximation error.\\ We obtain the following figure: \begin{figure}[h] \label{lap01} \centering \includegraphics[width=1\textwidth]{err01.eps} \caption{Graph of the approximation error for $a=3$} \end{figure} One can also carry out the construction ``by hand'' in an iterative way, by transporting the first structure that appears, namely the accumulation zone in the neighbourhood of zero, and by following what this structure becomes under the operations $x\mapsto x+1$ and $x\mapsto 1/x$. The plot of the approximation-error function then appears by itself. It also shows how the first locking zone appears in the neighbourhood of $1$, by transport of the accumulation zone at $0$ via the translation and the inversion fixing the point $1$.
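\vskip .3cm Under one natural reading of the preceding definition (a number is expanded in its canonical continued fraction and the first partial quotient $\geq a$, together with everything after it, is dropped), the map $r_a$ and the approximation error $e_a$ above can be sketched in a few lines of Python. This sketch is only an illustrative reading, written by us, not a full construction:
\begin{verbatim}
from fractions import Fraction

def cf_expand(x):
    """Canonical continued-fraction expansion of a non-negative rational x."""
    x, cf = Fraction(x), []
    while True:
        a = int(x)               # integer part
        cf.append(a)
        if x == a:
            return cf
        x = 1 / (x - a)

def cf_value(cf):
    x = Fraction(cf[-1])
    for a in reversed(cf[:-1]):
        x = a + 1 / x
    return x

def r(a, x):
    """Resolution map r_a: truncate the expansion of x before the first
    partial quotient >= a (an empty expansion stands for infinity)."""
    cf = cf_expand(x)
    for i, q in enumerate(cf):
        if q >= a:
            return cf_value(cf[:i]) if i > 0 else float("inf")
    return cf_value(cf)

def e(a, x):
    """Approximation error e_a(x) = |x - r_a(x)| (for x with finite image)."""
    return abs(Fraction(x) - r(a, x))

# Example used further below: a = 3 and [0,1,2,1,3] = 11/15.
x = Fraction(11, 15)
print(r(3, x), r(3, r(3, x)), e(3, x))   # 3/4  1  1/60
\end{verbatim}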
This construction has the advantage of being simple and telling.\\ We have the following structure theorem: \begin{thm} Let $a\in \hbox{\tenbb N}^*$. The resolution set ${\mathcal R}_a$ decomposes into: \vskip .3cm i - attracting rationals: such a rational $p/q$ defines a locking interval $I^a_{p/q} = [\nu^- (p/q) ,\nu^+ (p/q)]$ such that $r_a (x)=p/q$ for all $x\in I^a_{p/q}$. \vskip .3cm ii - transient rationals: such a rational $p/q$ defines a right (resp. left) transit interval $I^+_{p/q} =[p/q , \nu^+ (p/q)]$ (resp. $I_{p/q}^- =[\nu^- (p/q) ,p/q]$) such that $r_a (x)=p/q$ for all $x\in I^+_{p/q}$ (resp. $x\in I^-_{p/q}$). \vskip .3cm iii - blocking irrationals: they are obtained as accumulations of blocking zones. \vskip .3cm iv - transient irrationals: they are obtained as accumulations of transit zones. \vskip .3cm v - mixed irrationals: such an irrational $\xi$ is obtained as an accumulation of transit and blocking zones. \end{thm} This theorem is merely a restatement of the fact that the resolution set ${\mathcal R}_a$ is a tree. It can also be seen directly from the iterative construction of ${\mathcal R}_a$. \subsection{On the locking zones} \label{zoneaccro} In this section we work in a given resolution set ${\mathcal R}_a$, $a\in \hbox{\tenbb N}^*$.\\ Let $x=p/q$ be a blocking rational. We have \begin{lem} \label{func} For every blocking rational $x=p/q\in {\mathcal R}_a$, \begin{equation} \left . \begin{array}{lll} i\ \ -\ \nu^{\sigma} (1+x) & = & 1+\nu^{\sigma} (x) , \ \ \ \sigma =\pm, \\ ii\ -\ \nu^{\sigma } (1/x) & = & \displaystyle {1\over \nu^{-\sigma } (x)} . \end{array} \right . \end{equation} \end{lem} These relations keep a meaning for every number $x\in \hbox{\tenbb R}^*$, which will allow us to stop specifying whether we are working with a blocking rational. \begin{lem} Let $p/q =[a_0 ,\dots ,a_n ] \in {\mathcal R}_a$. Then \begin{equation} \left . \begin{array}{lll} \nu^{\sigma } & = & [a_0 ,\dots ,a_n ,a] , \\ \nu^{-\sigma } & = & [a_0 ,\dots ,a_n -1 ,1,a ] , \end{array} \right . \end{equation} with $\sigma =+$ if $n$ is even and $\sigma =-$ if $n$ is odd. \end{lem} \begin{proof} We give the proof for $\nu^+$; the argument for $\nu^-$ is analogous. We have $$\nu^+ ([a_0 ,\dots ,a_n ]) =\nu^+ (a_0 +\displaystyle {1\over [a_1 ,\dots ,a_n ]} ) =a_0 + \nu^+ (\displaystyle {1\over [a_1 ,\dots ,a_n ]} ) ,$$ by equality i) of Lemma \ref{func}. Moreover, $$\nu^+ (\displaystyle {1\over [a_1 ,\dots ,a_n ]} )=\displaystyle {1\over \nu^- ([a_1 ,\dots ,a_n ] )} ,$$ by ii). A simple induction therefore gives $$\nu^+ ([a_0 ,\dots ,a_n ])=[a_0 ,\dots ,a_{n-1} ,\nu^{\sigma } (a_n ) ],$$ with $\sigma =+$ if $n$ is odd and $\sigma =-$ otherwise. Since for every integer $0< m <a$ we have $\nu^+ (m)=m+\displaystyle {1\over a}$ and $\nu^- (m) =m-1+\displaystyle {1\over 1+\displaystyle {1\over a}}$, and moreover $\nu^+ (0)=1/a$ and $\nu^- (a)=a-1+\displaystyle {1\over 1+\displaystyle {1\over a}}$, the lemma follows. \end{proof} This result is the most striking one with respect to the experimental data. The boundary values of the locking zones predicted by this lemma agree almost perfectly with those obtained experimentally (see \cite{serge} and \cite{mar}). \subsection{Basin of attraction of a rational} Let $a\in\mathbb{N}^*$ and let $p/q$ be a given rational of ${\mathcal R}_a$.
The basin of attraction of $p/q$, denoted ${\mathcal A}(p/q)$, is defined as \begin{equation} {\mathcal A} (p/q)=\left \{ x\in \mathbb{R},\ \exists k\in\mathbb{N},\ r_a^k (x)=p/q \right \} , \end{equation} where $r_a^k =r_a \circ \dots \circ r_a$ ($k$ times).\\ These basins consist of the locking zone itself together with the adjoining transit zones.\\ Before characterizing the boundary of the basin of attraction, let us look at an example where $r_a$ acts non-trivially:\\ Let $a=3$ and $p/q=[0,1,2,1,3]$. Then $r_3 (p/q)=[0,1,2,1]=[0,1,3]$ and $r_3^2 (p/q)=[0,1]$.\\ This is an example of a dynamics of approximations under the map $r_3$. The phenomenon is due to the existence of rationals whose continued fraction has the form \begin{equation} [a_1 ,\dots ,a_n ,a-1,1,a] . \end{equation} For these numbers the action of $r_a$ does not immediately give the right approximation. Indeed, \begin{equation} r_a ([a_1 ,\dots ,a_n ,a-1,1,a] )=[a_1 ,\dots ,a_n ,a] , \end{equation} hence \begin{equation} r_a^2 ([a_1 ,\dots ,a_n ,a-1,1,a] )=[a_1 ,\dots ,a_n ] . \end{equation} The dynamical evolution of the approximation of $[a_1 ,\dots ,a_n ,a-1,1,a]$ stops if and only if $a_n <a$. The phenomenon above is the origin of the terminology of {\it transient} rationals in the structure theorem.\\ The following lemma gives a simple characterization of the boundary of the basin of attraction of a rational: \begin{lem} Let $a\in \mathbb{N}^*$ and let $[a_1 ,\dots ,a_n]$ be a given rational of ${\mathcal R}_a$. The boundary points of its basin of attraction are quadratic irrationals. Precisely, they are \begin{equation} [a_1 ,\dots ,a_n ,a-1 ,1,\dots , a-1 ,1,\dots ] \ \mbox{\rm and}\ [a_1 ,\dots ,a_n -1,1, a-1 ,1,\dots , a-1 ,1,\dots ] . \end{equation} \end{lem} \begin{proof} The proof rests on the iterative construction of the boundary of the locking zone at $1$; these boundaries are then transported to the chosen rational. We carry out the construction for the right boundary of the basin of attraction, the left boundary offering no further difficulty.\\ Given a transit zone to the right of $1$, the next one is obtained by applying the following operations: $x\mapsto 1/x$, $x\mapsto x+a-1$, $x\mapsto 1/x$ and $x\mapsto x+1$. In other words, one iterates the map \begin{equation} t_a (x)=\displaystyle 1+{x\over 1+x(a-1)} . \end{equation} The fixed points of this map are quadratic irrationals. The form of the map $t_a$, read on continued fractions, shows that these irrationals are obtained by appending to the continued fractions of the boundary of the locking zones an infinite sequence of $a-1,1$. \end{proof} Other types of irrational numbers can be studied, obtained for instance as accumulations of blocking zones; we refer to (\cite{CrDe},p.317-318) for an example. Nevertheless, these results are difficult to test and to interpret at the experimental and physical level. \section{A dynamical approach to the frequency spectrum} The preceding analysis allows a global reconstruction of the frequency spectrum, but it says nothing about the way the frequencies move in time during the detection of the signal. Recent experiments by Michel Planat and Jean-Philippe Marillet \cite{mar} have revealed the existence of frequency jumps in the neighbourhood of the resonances.
This section gives a theoretical basis for this phenomenon, founded on the diophantine hypothesis. \subsection{Dynamics of continued fractions} For every $\nu\in \mathbb{R}\setminus \mathbb{Q}$, we denote by $p_i /q_i$ its $i$-th convergent. For each value of $i$ we consider the locking zone attached to the rational $p_i /q_i$; the size of this zone is governed by $q_i$. To understand the dynamics of continued fractions under the diophantine hypothesis, one must study the evolution of the $q_i$ as $i$ grows. \subsection{Stability exponents} The variations of $q_i$ as $i$ grows can be quantified. For every $i\geq 1$ there exist a unique real number $\tau_i \geq 1$ and $\gamma_i >0$ such that \begin{equation} q_{i+1} =\gamma_i q_i^{\tau_i}, \end{equation} with $1\leq \gamma_i <q_i$. The exponent $\tau_i$ can be viewed as an exponent characterizing the stability of $q_i$ as $i$ grows.\\ Diophantine analysis provides useful information on this exponent: \begin{lem} Let $\nu\in \mathbb{R}$. Its stability exponents are uniformly bounded if and only if $\nu$ is a diophantine number. \end{lem} This lemma follows from Siegel's theorem \cite{sie}: a real number $\nu$ is diophantine if and only if there exist $\gamma >0$ and $\tau \geq 1$ such that $q_{i+1} \leq \gamma q_i^{\tau}$.\\ One can also study the evolution of the $q_i$ via the {\it Brjuno function} \cite{br}:\\ the Brjuno function, denoted ${\mathcal B}$, is defined for every $\nu\in \mathbb{R}\setminus \mathbb{Q}$ by \begin{equation} {\mathcal B} (\nu )=\displaystyle\sum_{i\geq 0} \displaystyle {\log q_{i+1} \over q_i } , \end{equation} where $p_i/q_i$ is the $i$-th convergent of $\nu$.\\ We refer to the work of S. Marmi, P. Moussa and J-C. Yoccoz \cite{mmy} for a detailed study of the properties of this function. \subsection{Instability in the neighbourhood of resonances} Let us begin with an experimental fact: for $f_0 =1.00000007$ MHz and $f_1 =0.599975$ MHz, one has \begin{equation} \displaystyle {f_1 \over f_0} =0.599974958...=[0,1,1,2,1596,1,10,\dots ] . \end{equation} One observes that the detector performs jumps around the following values of the beat frequency: \begin{equation} f=135,\ 261,\ 386\ {\rm Hz} . \end{equation} How can this phenomenon be understood?\\ For every $a\in\mathbb{N}^*$, let $\nu (a)$ denote the number $[0,1,1,2,a]$ and let $p(a)/q(a)$ be its expression as an irreducible fraction. If $f(a)$ denotes the frequency defined by \begin{equation} f(a)=\mid p(a) f_0 -q(a) f_1 \mid , \end{equation} then for $a=1593,\ 1594$ and $1595$ one obtains the beat frequencies $135$, $261$ and $386$ respectively.\\ In other words, the observed frequency jumps correspond to fluctuations of the partial quotients, in particular of the truncation parameter.\\ This situation can be understood as follows: when the denominator of the $i$-th convergent becomes unstable (i.e. when there is a sudden increase of a partial quotient in the continued-fraction expansion), the locking zones become very thin. The system therefore becomes sensitive to perturbations. \section{Reality or artefact?} The theory we have proposed does not explain why the system performs diophantine approximation.
It seems to me that if a clear reason exists, it is to be found on the side of microscopic physics and of a finer understanding of the physics of the mixers.\\ One may also question whether the hierarchy effects we have observed are really due to the physical system and thus ultimately reveal properties of nature. This suspicion comes from the fact that we do not have access to raw data. Indeed, between the experiment proper and the data there is a computer performing the acquisition and processing of the data (there is a counting phase on the signal). Nothing guarantees that the way this counting is performed, and hence the whole data processing, is not ultimately biased. This situation is unavoidable and in fact enters into any measurement procedure for a physical system.\\ I do not believe it is possible to settle the matter for the moment. The problem seems to me of the same nature as that of deciding whether the real world is a continuum or is discrete. I refer to the discussion by E. Schr\"odinger (\cite{scr},p.41-59) for more details.\\ What is certain is that many problems of physics involve, in one way or another, resolutions, i.e. limits to our measurement of reality. In some cases this limitation has little consequence, as in the study of many macroscopic phenomena. The ever greater precision of measurements, notably in the case of oscillators, brings us, it seems to me, into contact with the finest structure of reality. One then encounters phenomena that are new yet of universal scope. I refer once more to the text of E. Schr\"odinger (\cite{scr},p.49-59) where, in a few pages, he gives a construction very close in spirit to the resolution spaces in order to exhibit the difficulties attached to the hypothesis of a continuous nature.
\section{Introduction} \begin{figure}[t] \centering \begin{subfigure}{0.33\linewidth} \begin{minipage}[t]{0.33\linewidth} \includegraphics[width=1.0in]{1/11.jpg}\vspace{0.1cm} \includegraphics[width=1.0in]{1/14.png} \end{minipage} \caption*{Input} \end{subfigure}\hspace{-1mm} \begin{subfigure}{0.33\linewidth} \begin{minipage}[t]{0.33\linewidth} \includegraphics[width=1.0in]{1/12.png}\vspace{0.1cm} \includegraphics[width=1.0in]{1/15.png} \end{minipage} \caption*{MIRNet} \end{subfigure}\hspace{-1mm} \begin{subfigure}{0.33\linewidth} \begin{minipage}[t]{0.33\linewidth} \includegraphics[width=1.0in]{1/13.png}\vspace{0.1cm} \includegraphics[width=1.0in]{1/16.png} \end{minipage} \caption*{Ours} \end{subfigure} \caption{Visual comparison with the supervised low-light image enhancement method MIRNet~\cite{han2020mirnet}. The proposed method improves contrast, suppresses noise, and at the same time reduces color bias.} \vspace{-0.5cm} \end{figure} Low-light image enhancement is a fundamental task in low-level vision and an important pre-processing step in many other vision tasks. Images captured under unsuitable lighting are either too dark or too bright. The art of recovering the original clean image from its corrupted measurements is studied under the image restoration task. It is an ill-posed inverse problem, since many possible solutions exist. While enhancing brightness, we also need to tackle color distortion, amplified noise, the loss of detail and texture information, and blurred edges. Traditional methods usually address low-light image enhancement with histogram equalization (HE)-based approaches~\cite{pizer1987adaptive,pizer1990contrast,2007Brightness}, Retinex theory~\cite{1978The,jobson1997properties,Jobson1997A}, gamma correction~\cite{rahman2016adaptive}, etc. However, these methods require manually set parameters, generalize poorly, and often produce visible noise on real low-light images. With the rapid development of deep learning, numerous advanced approaches~\cite{2017LLNet,2018LLCNN} have been developed and have achieved impressive success.
By involving a noise-robust autoencoder-based strategy and the ReLU activation function in a deep architecture, LLNet~\cite{2017LLNet} was proposed to enlighten images with minimal pixel-level saturation, and it achieves a much higher peak signal-to-noise ratio (PSNR) than conventional state-of-the-art approaches~\cite{guo2016lime}. In pursuit of highly accurate restoration results, several follow-up works decompose the low-light input into reflectance and illumination maps~\cite{wei2018deep,zhang2019kindling}. To exploit multi-scale information, some of the most advanced end-to-end methods~\cite{2020Attention,han2020mirnet} apply U-net~\cite{ronneberger2015u} as their basic structure and add dense residual blocks at each level. Although these methods obtain competitive performance on benchmark datasets, their heavy inference time hinders their application. To seek a better trade-off between image denoising performance and the consumption of computational resources, the Densely Self-guided Wavelet Network (DSWN)~\cite{2020Densely} was proposed for image denoising with a top-down guidance strategy. DSWN generates multi-resolution inputs with the discrete wavelet transform (DWT) and the inverse discrete wavelet transform (IDWT) before any convolutional operation. Large-scale contextual information extracted at low resolution is gradually propagated into the higher-resolution sub-networks to guide the feature extraction processes at these scales. Using such a structure, DSWN achieves better denoising performance than U-net with less runtime and GPU memory. Inspired by DSWN, we propose an Attention based Broadly Self-guided Network (ABSGN) (shown in Figure 2), which improves on the performance of MIRNet (as shown in Figure 1) and requires less runtime than the state-of-the-art dense networks based on the U-net structure. In ABSGN, we design a Global Spatial Attention (GSA) block (shown in Figure 3) to better capture global information at the lowest resolution level. We then embed dilated convolutions into a densely connected block to enlarge the receptive field using the self-guided strategy~\cite{gu2019self}; we call this the Multi-level Guided Dense Block (MGDB). To achieve better performance, we adopt more MGDB blocks with dense connections at the full resolution level. We combine the DWT with a 3x3 convolution layer for the downsampling process and, correspondingly, the IDWT with a 3x3 convolution layer for the upsampling process. Such a design better extracts the information of multi-scale feature maps. In addition, wavelets have been applied to denoising in traditional methods~\cite{sardy2001robust}; utilizing the wavelet transform to incorporate multi-scale information gives the network time-frequency analysis capabilities. Such a structure helps our network deal with the low-light image enhancement task at different exposures. Our main contributions are concluded as follows: \begin{itemize} \item To the best of our knowledge, we are the first to introduce the self-guided network and the multi-level wavelet transform into low-light image enhancement, which accelerates inference and avoids down-sampling information loss. \item We embed dilated convolutions into the dense block to enlarge the receptive field using the self-guided strategy, which achieves a higher PSNR and preserves more details.
\item We propose a new global attention module called GSA, which has better global feature extraction capability and further improves image restoration performance. \item We design a broadly self-guided wavelet network which outperforms conventional methods and is more efficient than state-of-the-art low-light image enhancement networks with dense blocks. \end{itemize} \section{Related works} In this section, we briefly introduce some works related to our research. First, we review deep learning based low-light image enhancement, and then we discuss previous works on the attention mechanism and on feature extraction. \subsection{Deep neural networks for low-light image enhancement} In recent years, research has shown that deep learning technologies outperform traditional methods on low-light image enhancement by extracting more suitable image features~\cite{2018GLADNet}, including supervised learning, unsupervised learning and zero-shot learning. Most of the early works train the networks with supervised learning. To the best of our knowledge, K.~G.~Lore et al.~\cite{2017LLNet} first designed a deep neural network with stacked sparse denoising autoencoders to perform low-light image enhancement with minimal pixel-level saturation. Using a feedforward CNN with different Gaussian convolution kernels, MSR-Net~\cite{shen2017msr} simulates the pipeline of Multi-Scale Retinex (MSR) to directly learn an end-to-end mapping between dark and bright images. GLADNet~\cite{2018GLADNet} calculates global illumination and is globally illumination-aware, which better handles a wide range of light levels. By introducing Retinex theory to explicitly decompose the image into reflectance and illumination, RetinexNet~\cite{wei2018deep} enhances the lightness over the illumination map. The related work KinD~\cite{zhang2019kindling} additionally introduces degradation removal in the reflectance to improve the quality of the restored image. KinD++ is the improved version of KinD, which effectively improves the quality of the reflectance map by using a multi-scale illumination attention (MISA) module. Considering both the Retinex model and GANs, RDGAN~\cite{2019RDGAN} further improves the restored image quality by embedding a GAN network. Besides, by involving the strategy of generative adversarial networks (GANs) in deep architectures, EnlightenGAN~\cite{2021EnlightenGAN} was proposed to handle the lack of low/normal-light pairs; it learns a one-to-many relation from the low-light to the normal-light image space without paired datasets. To overcome the lack of paired training data, Zhang et al.~\cite{zhang2021self} presented a self-supervised low-light image enhancement method, which fully exploits Retinex theory to decompose the low-light image and enhances the reflectance map, which is viewed as the final restored image. In addition, Zero-DCE~\cite{2020Zero} estimates pixel-wise and high-order curves for dynamic range adjustment of a low-light image in an unsupervised way, using meticulously designed non-reference loss functions for training. Recently, Syed Waqas Zamir et al.~\cite{han2020mirnet} proposed a Multiscale Residual Block (MRB) that uses both residual learning and an attention unit as its basic structure, maximizing feature reuse and achieving a significant improvement in the performance of low-light image enhancement.
\subsection{Attention mechanism} The attention mechanism has been studied in a wide variety of computer vision problems; it tries to mimic the human visual system, which makes substantial use of contextual information when understanding RGB images. This involves discarding unwanted regions of the image while focusing on the more important parts that contain features rich for the vision task. Based on such an idea, SENet~\cite{hu2018squeeze} learns feature weights according to the loss through the network, so that effective feature maps receive large weights and ineffective ones small weights; training the model with such a strategy achieves better results. Further, Woo et al.~\cite{2018CBAM} proposed the Channel Attention (CBA) and Spatial Attention (SPA) modules to attend to different components. In this paper we propose a novel global attention CNN model, in which we use spatial attention to improve the attention to different areas of the feature maps. Since the feature map of each channel contributes differently to the following network, we also embed CBA into our MGDB to extract more useful information for low-light image enhancement. The structure of channel attention is illustrated in Figure 2. \subsection{Feature extraction} PixelShuffle and the wavelet transform have been proposed to replace pooling and interpolation in order to avoid information loss. The multi-level wavelet transform is adopted by DSWN~\cite{2020Densely} to achieve a larger receptive field and avoid down-sampling information loss by embedding the wavelet transform into the CNN architecture. DSWN gains more power to model both spatial context and inter-subband dependency by embedding the DWT and IDWT into the CNN. In this paper, our proposed network adopts the same method as DSWN to extract multi-scale information, with a totally different architecture from DSWN. As is well known, stacking multiple convolutional layers can effectively extract high-dimensional information from the feature map. However, this leads to a substantial increase in network parameters, requiring a massive amount of training data to prevent overfitting. In addition, the size of the convolution kernel determines the receptive field of the feature extraction: the larger the kernel, the larger the receptive field, but also the heavier the computation and the slower the runtime. With its densely connected residual block, DRNet~\cite{2018Image} mitigates the problems of overfitting, vanishing gradients, and training instability when training very deep and wide networks. Moreover, it improves the propagation and reuse of features by creating direct connections from earlier layers to subsequent layers. By injecting holes into the standard convolution kernel, dilated convolutions support an exponential expansion of the receptive field without loss of resolution or coverage: the number of parameters associated with each layer is identical, while the receptive field grows exponentially. Utilizing dilated convolutions and the self-guided learning strategy, MLGRB~\cite{WANG2019206} extracts the spatial contextual conformation to further enhance the feature representation ability. Inspired by MLGRB, we adopt a similar architecture to further acquire multi-scale information in different resolution spaces.
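As a rough illustration of this idea, a dense block in which each layer uses a different dilation factor can be sketched in PyTorch as follows. This is a hypothetical layout written only for exposition; it is not the exact MGDB used in our network, and the channel sizes and dilation factors are arbitrary choices:
\begin{verbatim}
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    """Illustrative dense block with dilated convolutions: each 3x3 convolution
    uses a different dilation factor, and every layer sees the concatenation of
    the block input and all earlier outputs."""
    def __init__(self, channels=32, growth=16, dilations=(4, 3, 2, 1)):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.PReLU(),
            ))
            in_ch += growth
        # 1x1 convolution fuses the densely connected features back to `channels`
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1)) + x   # residual connection

# quick shape check
y = DilatedDenseBlock()(torch.randn(1, 32, 64, 64))
print(y.shape)   # torch.Size([1, 32, 64, 64])
\end{verbatim}
Ordering the dilation factors from large to small lets features computed with a wide receptive field feed the layers with a narrower one, loosely mirroring the guidance idea discussed above.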
\begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{test11.png} \caption{An illustration of our proposed network} \end{figure*} \section{Attention based Broadly Self-guided Network} In this section, we first introduce the overall network structure and then present the details of ABSGN. \subsection{Overall structure of ABSGN} Our proposed network uses a top-down self-guidance architecture to better exploit multi-scale image information. Information extracted at low resolution is gradually propagated into the higher-resolution sub-networks to guide their feature extraction processes. Firstly, we pass the input image through a Conv+PReLU layer to obtain a main feature map with 32 channels. Then, instead of PixelShuffle and PixelUnshuffle, the DWT and IDWT are used to generate multi-scale inputs. ABSGN uses the wavelet transform to map the main feature map to three smaller scales. At the lowest resolution level, we employ a Global Spatial Attention (GSA) block, including a spatial attention module~\cite{2018CBAM}, to be aware of the global context and color information. Apart from the lowest resolution level, we add one or two Multi-level Guided Dense Blocks (MGDB) at each level, as shown in Figure 2. At the full resolution level, we adopt dense connections consisting of two MGDBs to improve the reuse of the main feature map. In the rest of this paper, we briefly discuss the advantages of GSA and MGDB and provide comparative experiments to support our conclusions. In addition, we find that batch normalization is harmful to the denoising performance, so we use only one normalization layer in this network. \subsection{Detailed structure of ABSGN} Different from DSWN, we first acquire the main feature map with 32 channels from the input image by a Conv+PReLU layer, the aim of which is to enrich the representation by increasing the number of channels. We use the DWT followed by a Conv+PReLU layer to perform the downsampling process; the Conv+PReLU layer decreases the number of channels, which not only reduces computation but also fine-tunes the down-sampled feature map. Performing the above downsampling operation three times in sequence yields feature maps of different sizes. The top level of ABSGN works on the smallest spatial resolution to extract large-scale information. As shown in Figure 3, the top sub-network uses the GSA block to capture global information; it contains two Conv+PReLU layers together with AdaptiveAvgPool2d, AdaptiveMaxPool2d, interpolation (the Resize block in Figure 3) and a Spatial Attention (SPA) module. In particular, given an input feature map $X$ with a size of $H\times W\times C$, AdaptiveAvgPool2d and AdaptiveMaxPool2d are employed to extract the representative information. Their average is used as the global information, producing a feature map with a size of $1\times 1\times C$. Then, an interpolation function is utilized to upscale this global feature map, which is processed by a Conv+PReLU to shrink the number of channels, yielding a global feature map with a size of $H\times W\times C_1$. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{module.png} \caption{Diagram of Global Spatial Attention (GSA) block} \end{figure} We then apply the SPA block to increase the attention to different areas of the global feature map. The SPA block simultaneously applies max-pooling and average-pooling along the channel dimension and then concatenates the two resulting maps to generate feature descriptors, the purpose of which is to highlight the informative areas.
The feature descriptors then generate a spatial attention map through a convolution layer; finally, the map, normalized by a sigmoid activation, multiplies the input to obtain the optimized global feature map as output. Finally, a concatenation and a Conv+PReLU layer are employed to combine the input feature map (encoding local information) and the optimized global feature map (encoding global information) to produce an output feature map of size $H\times W\times C$. Corresponding to the downsampling process above, we use the IDWT and a Conv+PReLU layer to perform the upsampling process. At the middle two levels, convolution layers with 1x1 kernels are used to merge the information extracted at different resolutions. The middle sub-networks use the MGDB to fully extract the feature information; the MGDB embeds dilated convolutions with different dilation factors into a dense block. As shown in Figure 2, we apply the self-guided strategy so that the dilated convolution with the greater dilation factor guides the dilated convolution with the smaller one. Before the final output of each block, we add a channel attention module~\cite{2018CBAM} to enhance the attention to different channels. At the full resolution level, we add more MGDBs to reuse the main feature map, which enhances the feature extraction capability of ABSGN. After merging the information from all the scales, we use two Conv+PReLU layers to acquire the final output as the restored image. By adding a gradient loss, our network achieves better retention of details without reducing PSNR. Inspired by a joint loss function~\cite{2015Loss}, our network uses an L1 loss plus an SSIM loss for training. The total loss is as follows: \begin{equation} \mit L_{ABSGN} = \gamma\mit L_{SSIM}(I,\hat{I})+(1-\gamma)\mit L_{l1}(I,\hat{I}) \end{equation} where $\gamma \in [0,1]$ is the weight balancing the two terms. Here we choose $\gamma=0.16$. \section{Experiment} In this section, we first introduce the training details and provide experimental results on different datasets. Then we compare ABSGN with several state-of-the-art low-light image enhancement approaches.
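For concreteness, a compact sketch of the spatial-attention gating step described in the previous section is given here. This is a minimal PyTorch version written only for illustration; the module name and the 7x7 kernel size are our own choices and do not come from our released implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class SpatialGate(nn.Module):
    """Minimal spatial-attention gate: channel-wise max and average maps are
    concatenated, turned into a single attention map by a convolution, and
    used to re-weight the input feature map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)            # B x 1 x H x W
        max_map = x.max(dim=1, keepdim=True).values      # B x 1 x H x W
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                   # re-weighted features

out = SpatialGate()(torch.randn(1, 32, 64, 64))
print(out.shape)   # torch.Size([1, 32, 64, 64])
\end{verbatim}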
\begin{figure*}[t] \centering \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/41.png} \caption*{(1) Input} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/42.png} \caption*{(2) LIME} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/43.png} \caption*{(3) GLADNet} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/44.png} \caption*{(4) EnlightenGAN} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/45.png} \caption*{(5) KinD++} \end{subfigure} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/46.png} \caption*{(6) Zero-DCE} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/47.png} \caption*{(7) RUAS} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/48.png} \caption*{(8) MIRNet} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/49.png} \caption*{(9) Ours} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{4/410.png} \caption*{(10) Reference} \end{subfigure} \caption{Restoring results of the conventional methods and the proposed method on LOL dataset(lighter)} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/51.png} \caption*{(1) Input} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/52.png} \caption*{(2) LIME} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/53.png} \caption*{(3) GLADNet} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/54.png} \caption*{(4) EnlightenGAN} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/55.png} \caption*{(5) KinD++} \end{subfigure} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/56.png} \caption*{(6) Zero-DCE} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/57.png} \caption*{(7) RUAS} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/58.png} \caption*{(8) MIRNet} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/59.png} \caption*{(9) Ours} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.2\linewidth} \centering \includegraphics[width=1.3in]{5/510.png} \caption*{(10) Reference} \end{subfigure} \caption{Restoring results of the conventional methods and the proposed method on LOL dataset(Darker)} \end{figure*} \subsection{Experimental Setting} Low-Light (LOL) dataset~\cite{wei2018deep} is a publicly available dark-light paired images dataset in the real sense. The low-light images are collected by changing exposure time and ISO. It contains 500 images in total, The resolution of each of these images is 400 × 600. we use 485 images of them for training, and the rest for evaluation as suggested by Most state-of-the-art low-light image enhancement solutions select the LOL as their training dataset~\cite{2018GLADNet,zhang2019kindling}. 
To train our ABSGN model, we use the LOL dataset as the training and validation set for the low-light image enhancement task. The LOL dataset consists of two parts: a real-world dataset and a synthetic dataset. Because the synthetic part, in which low-light images are synthesized from normal-light images, cannot simulate the degradation of real-world images very well, we discard it and only use the real-world dataset. Our comparison datasets also include LIME~\cite{guo2016lime}, MEF~\cite{2015Perceptual} and DICM~\cite{2013Contrast}, which are used by some recent low-light image enhancement networks~\cite{zhang2019kindling}. Since these datasets lack reference images, we use no-reference metrics, namely the No-reference Image Quality and Uncertainty Evaluator (UNIQUE)~\cite{zhang2021uncertainty} and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE)~\cite{2012No}. In addition, we also choose another commonly used dataset, MIT-Adobe FiveK, for training and evaluation. MIT-Adobe FiveK~\cite{2011Learning} contains 5000 images of various indoor and outdoor scenes captured with DSLR cameras in different lighting conditions. The tonal attributes of all images are manually adjusted by five trained photographers (labelled experts A to E); we choose the enhanced images of expert C as the ground truth. The first 4500 images are used for training and the last 500 for testing. When training our model, we randomly crop 256 × 256 patches from the training images. The input patches are randomly flipped and rotated for data augmentation. The parameters of the network are Kaiming-initialized~\cite{2015Delving}. We train the whole network for 300 epochs overall. The learning rate is initialized to $1\times10^{-4}$ for the first 200 epochs and reduced to $5\times10^{-5}$ for the next 50 epochs; we fine-tune our model in the last 50 epochs with a $1\times10^{-5}$ learning rate. For optimization, we use the Adam optimizer~\cite{2014Adam} with $\beta_1 = 0.5$, $\beta_2 = 0.999$ and a batch size of 5. We use an L1 loss and an SSIM loss for the total loss function: the L1 loss is a PSNR-oriented optimization in the training process~\cite{2015Loss}, while the SSIM loss preserves the overall structure. We adopt PSNR, SSIM~\cite{2004Image}, LPIPS~\cite{zhang2018unreasonable} and FSIM~\cite{zhang2011fsim} as the quantitative metrics to measure the performance of our method. We build our model with the PyTorch framework on a platform with an Nvidia TITAN XP GPU and an Intel(R) Xeon(R) E5-2620 v4 2.10GHz CPU. \subsection{Evaluation on the LOL Dataset} We compare our proposed network with several state-of-the-art low-light image enhancement solutions: MSRCR~\cite{Jobson1997A}, LIME, NPE, JED~\cite{2018Joint}, MBLLEN~\cite{lv2018mbllen}, RetinexNet, GLADNet, RDGAN, KinD++, Zero-DCE, EnlightenGAN and MIRNet. We use the measures above for the objective and subjective comparison.
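For reference, the joint loss of Eq. (1) and the optimizer settings described above can be summarized by the following sketch. It is given for illustration only: it assumes the third-party pytorch\_msssim package for the SSIM term, and the model shown is a placeholder rather than the real network:
\begin{verbatim}
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim   # third-party SSIM implementation (assumed available)

def absgn_loss(pred, target, gamma=0.16):
    """Joint loss of Eq. (1): gamma * L_SSIM + (1 - gamma) * L_1."""
    l1 = F.l1_loss(pred, target)
    l_ssim = 1.0 - ssim(pred, target, data_range=1.0, size_average=True)
    return gamma * l_ssim + (1.0 - gamma) * l1

# Adam with beta_1 = 0.5, beta_2 = 0.999; learning rate 1e-4 for 200 epochs,
# 5e-5 for the next 50 epochs, and 1e-5 for the last 50 epochs.
model = torch.nn.Conv2d(3, 3, 3, padding=1)          # placeholder for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.999))

def lr_factor(epoch):
    if epoch < 200:
        return 1.0    # 1e-4
    if epoch < 250:
        return 0.5    # 5e-5
    return 0.1        # 1e-5

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)
\end{verbatim}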
\begin{table}[t]\small \centering \caption{Quantitative comparison of several metrics between our method and state-of-the-art methods on the LOL dataset.} \begin{tabular}{c|cccc} \hline \textbf{Method} & \textbf{SSIM}$\uparrow$ & \textbf{PSNR}$\uparrow$ & \textbf{LPIPS}$\downarrow$ &\textbf{FSIM}$\uparrow$ \\ \hline Input & 0.1914 & 7.77 & 0.4173 &0.7190 \\\hline MSRCR & 0.4615 & 13.17 & 0.4404 &0.8450 \\\hline LIME & 0.4449 & 16.76 & 0.4183 &0.8549 \\\hline NPE & 0.4839 & 16.97 & 0.4156 &0.8964 \\\hline JED & 0.6509 & 13.69 & 0.3549 &0.8812 \\\hline MBLLEN & 0.7247 & 17.86 & 0.3672 &0.9262 \\\hline RetinexNet & 0.4249 & 16.77 & 0.4670 &0.8642 \\\hline GLADNet & 0.6820 & 19.72 & 0.3994 &0.9329 \\\hline RDGAN & 0.6357 & 15.94 & 0.3985 &0.9276 \\\hline KinD++ & 0.8203 & 21.30 & 0.1614 &0.9424 \\\hline Zero-DCE & 0.5623 & 14.87 & 0.3852 &0.9276 \\\hline EnlightenGAN & 0.6515 & 17.48 & 0.3903 &0.9226 \\\hline MIRNet & 0.8321 & 24.14 & 0.0846 &0.9547 \\\hline \textbf{ours} & { \textbf{0.8680}} & { \textbf{24.59}} & {\textbf{0.0772}} & {\textbf{0.9659}} \\ \hline \end{tabular} \end{table} \begin{table}[ht]\scriptsize \caption{$UNIQUE\uparrow/BRISQUE\uparrow$ comparison on LIME, MEF and DICM} \centering \renewcommand{\arraystretch}{2.0} \begin{tabular}{c|ccc|c} \hline \textbf{Method} & \textbf{LIME} & \textbf{MEF} & \textbf{DICM} &\textbf{Average}\\ [3pt]\hline Dark & 0.826/21.81 & 0.738/23.56 &0.795/21.57 &0.786/22.31 \\[3pt] PIE & 0.791/22.72 & 0.752/11.02 &0.791/21.72 &0.778/18.49 \\[3pt] LIME & 0.774/20.44 & 0.722/15.25 &0.758/23.48 &0.751/19.72 \\[3pt] MBLLEN & 0.768/30.26 & 0.717/37.44 &0.787/32.44 &0.757/33.38 \\[3pt] RetinexNet & 0.794/31.47 & 0.755/20.08 &0.770/29.53 &0.773/27.03 \\[3pt] KinD & 0.766/39.29 & 0.747/31.36 &0.776/32.71 &0.763/34.45 \\[3pt] Zero-DCE & 0.811/21.40 & 0.762/16.84 &0.777/27.35 &0.783/21.86 \\[3pt] MIRNet & 0.814/33.73 & 0.768/21.45 &0.812/33.71 &0.798/29.63 \\[3pt]\hline \textbf{ours} & { \textbf{0.827 / 32.23}} & { \textbf{0.784 / 38.52}} & { \textbf{0.804 / 33.23}} & { \textbf{0.806 / 34.66}}\\[3pt] \hline \end{tabular} \vspace{-2mm} \end{table} As shown in Table 1, our method achieves the best performance in PSNR, SSIM, LPIPS and FSIM, with MIRNet being the second-best method. Although MIRNet adopts a more complicated structure and has more than ten times the number of parameters, ABSGN still achieves better PSNR and SSIM than MIRNet on average. Figure 4 and Figure 5 show example results on the LOL dataset at different light levels; our ABSGN shows the best PSNR and SSIM at both light levels. They also show detailed results of several low-light image enhancement methods, where KinD++ is a classical decomposition-based network and MIRNet has excellent feature extraction capabilities. All three low-light image enhancement networks achieve an obvious improvement over the low-light inputs. ABSGN is better than KinD++ and MIRNet in some details, such as the texture of the book in Figure 4 and the color of the embroidery in Figure 5. Our proposed method handles different light levels and preserves more details at the same time.
\begin{figure*}[t] \centering \begin{subfigure}{0.24\linewidth} \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=4cm,height=3cm]{11/71.JPG}\vspace{0.1cm} \includegraphics[width=4cm,height=3cm]{7/75.png}\vspace{0.1cm} \includegraphics[width=4cm,height=3cm]{7/79.png} \end{minipage} \caption*{Input} \end{subfigure}\hspace{-0.1cm} \begin{subfigure}{0.24\linewidth} \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=4cm,height=3cm]{11/72.png}\vspace{0.1cm} \includegraphics[width=4cm,height=3cm]{7/76.png}\vspace{0.1cm} \includegraphics[width=4cm,height=3cm]{7/710.png} \end{minipage} \caption*{KinD++} \end{subfigure}\hspace{-0.1cm} \begin{subfigure}{0.24\linewidth} \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=4cm,height=3cm]{11/73.png}\vspace{0.1cm} \includegraphics[width=4cm,height=3cm]{7/77.png}\vspace{0.1cm} \includegraphics[width=4cm,height=3cm]{7/711.png} \end{minipage} \caption*{MIRNet} \end{subfigure}\hspace{-0.1cm} \begin{subfigure}{0.24\linewidth} \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=4cm,height=3cm]{11/74.png}\vspace{0.1cm} \includegraphics[width=4cm,height=3cm]{7/78.png}\vspace{0.1cm} \includegraphics[width=4cm,height=3cm]{7/712.png} \end{minipage} \caption*{Ours} \end{subfigure} \caption{Visual Comparision on DICM,LIME,MEF from top to bottom. from left to right:Dark,KInD++,MIRNet,ours } \vspace{-0.2cm} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}{0.33\linewidth} \centering \includegraphics[width=1.5in]{6/61.png} \caption*{(1) Input} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[width=1.5in]{6/62.png} \caption*{(2) HDRNet~\cite{gharbi2017deep}} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[width=1.5in]{6/63.png} \caption*{(3) DeepUPE~\cite{2019Underexposed}} \end{subfigure} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[width=1.5in]{6/64.png} \caption*{(4) MIRNet} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[width=1.5in]{6/65.png} \caption*{(5) ABSGN(ours)} \end{subfigure}\hspace{-2mm} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[width=1.5in]{6/66.png} \caption*{(6) References} \end{subfigure} \caption{Restoring results of the conventional methods and the proposed method on MIT-Adobe FiveK dataset} \end{figure*} Table 2 shows the comparision on LIME, MEF and DICM datasets.Our model has the best average UNIQUE and the best average BRISQUE. From figure 6, we can also conclude a similar conclusion to the LIME, MEF and DICM datasets. At a darker region, Restoring networks tend to smooth the noise too much, because the network is difficult to distinguish true details from the dark region. ABSGN can better preserve details at dark regions,such as soils and floors. \subsection{Evaluation on MIT-Adobe FiveK dataset} We report PSNR/SSIM values of our method and several other techniques in Table 3 for the MIT-Adobe FiveK datasets,respectively. It can be seen that our Network achieves significant improvements over previous approaches.Notably,when compared to the recent best methods, ABSGN obtains 1.52 dB performance gain over over MIRNet on the Adobe-Fivek dataset. 
\begin{table}[t]\footnotesize \caption{Image enhancement comparison between our method and state-of-the-art methods on the MIT-Adobe FiveK dataset.} \centering \renewcommand{\arraystretch}{2.0} \begin{tabular}{cccccc} \hline \textbf{Method} & \textbf{HDRNet} & \textbf{DPE} & \textbf{DeepUPE} &\textbf{MIRNet} &\textbf{ours}\\ \hline PSNR & 21.96 &22.15 &23.04 &23.73 &25.25 \\ SSIM & 0.866 &0.850 &0.893 &0.925 &0.931 \\\hline \end{tabular} \vspace{-3mm} \end{table} We also show visual results in Figure 7. Compared to other techniques, our method generates enhanced images that are natural and vivid in appearance and have better global and local contrast. \subsection{Ablation study} In this part, to demonstrate the effectiveness of the proposed modules and the necessity of the introduced methods, we carry out comparative experiments. Besides, we conduct another comparative experiment to emphasize the advantage and practical value of ABSGN. \subsubsection{Contribution of our proposed modules} This ablation study answers the question of why we do not simply adopt a Densely Connected Residual (DCR) block to obtain local information and a Global Attention Aware (GIA) module, as in GIANet~\cite{meng2020gia}, to obtain global information, like many end-to-end networks, and whether adopting such a conventional global attention module would yield a better result. As mentioned in Sec.~3.1, the MGDB adopts the self-guided strategy to introduce dilated convolutions, which gives it a larger receptive field and better local feature extraction capacity, while the GSA module is introduced in our proposed method to avoid the loss of global information caused by long stacks of convolution layers. In Table 4, we show the ablation study of these two modules. We use the DCR block to replace our MGDB to test its effect. In addition, we replace all the dilated convolutions with ordinary convolutions, which shows the necessity of introducing dilated convolutions (DC). The results suggest that the MGDB captures more local information and yields better SSIM/PSNR. Similarly, we use the SPA module and the GIA module, respectively, as substitutes for our GSA module to demonstrate the advantage of our proposed module. As shown in Table 4, compared to the SPA module and the GIA module, our module has a clear advantage; these modules cannot replace our GSA module for extracting the global information. \begin{table}[t]\small \caption{Comparison experiment using different modules on the LOL dataset.} \centering \setlength{\tabcolsep}{2mm} \begin{tabular}{c|ccccc} \hline \textbf{Method} & \textbf{SPA} & \textbf{GIA} & \textbf{DCR} & \textbf{w/o DC} & \textbf{ours} \\ [4pt]\hline PSNR(dB) & 22.30 & 23.42 & 22.67 & 23.16 &24.59 \\ SSIM & 0.8456 & 0.8515 & 0.8483 & 0.8546 &0.8675 \\\hline \end{tabular} \end{table} \subsubsection{Advantage of ABSGN} We compare ABSGN with other state-of-the-art deep learning low-light image enhancement methods in terms of the number of parameters, the inference time and UQI~\cite{wang2002universal} on the LOL dataset. As shown in Table 5, our model has the best UQI, meaning that our results carry the information closest to the reference images.
\begin{table}[t] \caption{Runtime cost and performance comparison of our method and other state-of-the-art deep learning methods on the LOL dataset.} \centering \begin{tabular}{c|cc|c} \toprule Deep learning method & Params & Time cost & UQI$\uparrow$\\ \midrule MBLLEN & 1.95M & 80ms & 0.8261 \\ RetinexNet & 9.2M & 20ms & 0.9110 \\ GLAD & 11M & 25ms & 0.9204 \\ RDGAN & 4.2M & 30ms & 0.8296 \\ Zero-DCE & 0.97M & 2ms & 0.7205 \\ EnlightenGAN & 33M & 20ms & 0.8499 \\ KinD++ & 35.7M & 280ms & 0.9482 \\ MIRNet & 365M & 340ms & 0.9556 \\ Our model & 33M & 14ms & 0.9589 \\ \bottomrule \end{tabular} \label{tab:example} \end{table}
\begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{scatter_best1.jpg} \caption{Runtime cost and SSIM comparison of our method and other state-of-the-art deep learning methods on the LOL dataset.} \end{figure}
Although our model is somewhat heavy in parameters, MIRNet is still ten times larger than ours. Moreover, the inference speed of our model is very fast. As shown in Figure 8, we tested all of the deep learning models on the same platform; the closer a method is to the upper-left corner of the figure, the faster the model and the higher its SSIM. As shown in the last column of Table 5, our model takes only 14 ms to process a real-world low-light image with a resolution of $600\times400$ from the LOL dataset, and its speed ranks second among all of the compared methods. Although Zero-DCE has a very fast inference speed, it can only handle mildly dark images; its biggest problem is that it struggles to restore dark images, especially night scenes, which limits its applicability and application prospects. In order to reduce the depth of the network and make full use of the system's ability to parallelize, trading network depth for width, our model applies a self-guidance strategy to process the feature maps at different resolutions in parallel. This is why our model can run at such a high speed while keeping an excellent restoration capacity.
\section{Conclusion} In this paper, we proposed an Attention-based Broadly Self-guided Network (ABSGN) for low-light image enhancement. ABSGN restores low-light images in a top-down manner. We use a wavelet transform and a conv+PReLU layer to generate input variants at different spatial resolutions. At the lowest resolution, we design a GSA module to fully collect global information. Then we embed an MGDB into each of the two middle low-resolution levels to fully extract local information. At the full-resolution level, we employ more MGDBs and dense connections to further reuse the information of the main feature map from the input image. The proposed ABSGN was validated on image restoration and real-world low-light image enhancement benchmarks and is able to generate higher-quality results than the compared state-of-the-art methods. Furthermore, ABSGN has an excellent inference speed, giving it good practical value and application prospects.
{\small \bibliographystyle{ieee_fullname}
\section{Introduction} \subsection{Canonical partition functions for log-Coulomb gases}\label{1_1} Let $X$ be a topological space endowed with a metric $d$ and a finite positive Borel measure $\mu$ satisfying $\mu^N\{(x_1,\dots,x_N)\in X^N:x_i=x_j\text{ for some $i\neq j$}\}=0$ for every $N\geq 1$. A \emph{log-Coulomb gas} in $X$ is a statistical model described as follows: Consider $N\geq 1$ particles with fixed charge values $\mathfrak{q}_1,\dots,\mathfrak{q}_N\in\R$ and corresponding variable locations $x_1,\dots,x_N\in X$. Whether the charge values are distinct or not, we assume particles are distinguished by the labels $1,2,\dots,N$, so that unique configurations of the system correspond to unique tuples $(x_1,\dots,x_N)\in X^N$. Each tuple is called a \emph{microstate} and has an \emph{energy} defined by $$E(x_1,\dots,x_N):=\begin{cases}-\sum_{1\leq i<j\leq N}\mathfrak{q}_i\mathfrak{q_j}\log d(x_i,x_j)&\text{if $x_i\neq x_j$ for all $i<j$},\\\infty&\text{otherwise}.\end{cases}$$ Note that $E^{-1}(\infty)$ has measure 0 in $X^N$ by our choice of $\mu$, and that $E$ is identically zero if $N=1$. We assume the system is in thermal equilibrium with a heat reservoir at some \emph{inverse temperature} $\beta>0$, so that its microstates form a canonical ensemble distributed according to the density $e^{-\beta E(x_1,\dots,x_N)}$. The \emph{canonical partition function} $\beta\mapsto Z_N(X,\beta)$ is defined as the total mass of this density, namely \begin{equation}\label{canonical} Z_N(X,\beta):=\int_{X^N}e^{-\beta E(x_1,\dots,x_N)}\,d\mu^N=\int_{X^N}\prod_{1\leq i<j\leq N}d(x_i,x_j)^{\mathfrak{q}_i\mathfrak{q}_j\beta}\,d\mu^N~. \end{equation} Given $(X,d,\mu)$ and $\mathfrak{q}_1,\dots,\mathfrak{q}_N$, the explicit formula for $Z_N(X,\beta)$ is of primary interest, as it yields fundamental relationships between the observable parameters of the system and its temperature. For instance, the system's dimensionless free energy, mean energy, and energy fluctuation (variance) are respectively given by $-\log Z_N(X,\beta)$, $-\partial/\partial\beta\log Z_N(X,\beta)$, and $\partial^2/\partial\beta^2\log Z_N(X,\beta)$, all of which are functions of $\beta$ (and hence of temperature). Below are two examples in which the formula for $Z_N(X,\beta)$ is known. \begin{example}\label{arch} If $X=\R$ with the standard metric $d$, the standard Gaussian measure $\mu$, and charge values $\mathfrak{q}_1=\dots=\mathfrak{q}_N=1$, then \emph{Mehta's integral formula} \cite{FW} states that $Z_N(\R,\beta)$ converges absolutely for all complex $\beta$ with $\re(\beta)>-2/N$, and in this case $$Z_N(\R,\beta)=\prod_{j=1}^N\frac{\Gamma(1+j\beta/2)}{\Gamma(1+\beta/2)}~.$$ At the special values $\beta\in\{1,2,4\}$, the probability density $\frac{1}{Z_N(\R,\beta)}e^{-\beta E(x_1,\dots,x_N)}$ coincides with the joint density of the $N$ eigenvalues $x_1,\dots,x_N$ (counted with multiplicity) of the $N\times N$ Gaussian orthogonal $(\beta=1)$, unitary $(\beta=2)$, and symplectic $(\beta=4)$ matrix ensembles. 
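As a sanity check, the product formula can be compared with a crude Monte Carlo estimate of the integral for small $N$; the sketch below is purely illustrative (the naive estimator converges slowly, and nothing here is specific to the random matrix interpretation).
\begin{verbatim}
# Monte Carlo check of Mehta's integral formula for small N and beta.
# Purely illustrative: the naive estimator has large variance.
import numpy as np
from math import gamma
from itertools import combinations

def mehta_product(N, beta):
    return np.prod([gamma(1 + j * beta / 2) / gamma(1 + beta / 2)
                    for j in range(1, N + 1)])

def mehta_monte_carlo(N, beta, samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((samples, N))   # mu^N = standard Gaussian measure
    vals = np.ones(samples)
    for i, j in combinations(range(N), 2):
        vals *= np.abs(x[:, i] - x[:, j]) ** beta
    return vals.mean()

N, beta = 3, 2.0
print(mehta_product(N, beta), mehta_monte_carlo(N, beta))  # both close to 12
\end{verbatim}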
\end{example} \begin{example}\label{Z_3} If $X=\Z_p$ with $d(x,y)=|x-y|_p$, the Haar probability measure $\mu$, and $N=3$ charges with $\mathfrak{q}_1=1$, $\mathfrak{q}_2=2$, and $\mathfrak{q}_3=3$, then a general theorem in \cite{Web20} implies that $Z_3(\Z_p,\beta)$ converges absolutely for all complex $\beta$ with $\re(\beta)>-1/6$, and in this case $$Z_3(\Z_p,\beta)=\frac{p^{11\beta}}{p^{2+11\beta}-1}\cdot\left((p-1)(p-2)+(p-1)^2\left[\frac{1}{p^{1+2\beta}-1}+\frac{1}{p^{1+3\beta}-1}+\frac{1}{p^{1+6\beta}-1}\right]\right)~.$$ \end{example} Note that $\beta\mapsto Z_N(X,\beta)$ always extends to a complex domain containing the line $\re(\beta)=0$. To simultaneously treat all possible choices of $\mathfrak{q}_i$, we extend further to a subset of $\C^{N(N-1)/2}$ as follows: \begin{definition}\label{gen_Z} For any $N\geq 0$ and $(X,d,\mu)$ as above, we write $\bm{s}$ for a complex tuple $(s_{ij})_{1\leq i<j\leq N}$ (the empty tuple if $N=0$ or $N=1$) and define $\mathcal{Z}_0(X,\bm{s}):=1$ and $$\mathcal{Z}_N(X,\bm{s}):=\int_{X^N}\prod_{1\leq i<j\leq N}d(x_i,x_j)^{s_{ij}}\,d\mu^N\qquad\text{for $N\geq 1$.}$$ \end{definition} Once the formula and domain for $\bm{s}\mapsto\mathcal{Z}_N(X,\bm{s})$ are known, then for any choice of $\mathfrak{q}_1,\dots,\mathfrak{q}_N\in\R$, the formula and domain for $\beta\mapsto Z_N(X,\beta)$ follow by specializing $\mathcal{Z}_N(X,\bm{s})$ to ${s_{ij}=\mathfrak{q}_i\mathfrak{q}_j\beta}$. Thus, given $(X,d,\mu)$, the main problem is to determine the formula and domain of $\bm{s}\mapsto\mathcal{Z}_N(X,\bm{s})$. \subsection{$p$-fields, projective lines, and splitting chains}\label{1_2} Of our two main goals in this paper, the first is to determine the explicit formula and domain of $\bm{s}\mapsto\mathcal{Z}_N(\P^1(K),\bm{s})$, where $K$ is a $p$-field and its projective line $\P^1(K)$ is endowed with a natural metric and measure. To make this precise, we briefly recall well-known properties of $p$-fields (see \cite{Weil}, for instance) and establish some notation. Fix a $p$-field $K$, write $|\cdot|$ for its canonical absolute value, write $d$ for the associated metric (i.e., $d(x,y):=|x-y|$), and define $$R:=\{x\in K:|x|\leq 1\}\qquad\text{and}\qquad P:=\{x\in K:|x|<1\}~.$$ The closed unit ball $R$ is the maximal compact subring of $K$, the open unit ball $P$ is the unique maximal ideal in $R$, and the group of units is $R^\times=R\setminus P=\{x\in K:|x|=1\}$. The residue field $R/P$ is isomorphic to $\F_q$ for some prime power $q$, and there is a canonical isomorphism of the cyclic group $(R/P)^\times$ onto the group of $(q-1)$th roots of unity in $K$. Fixing a primitive such root $\xi\in K$ and sending $P\mapsto 0$ extends the isomorphism $(R/P)^\times\to\{1,\xi,\dots,\xi^{q-2}\}$ to a bijection $R/P\to\{0,1,\xi,\dots,\xi^{q-2}\}$ with inverse $y\mapsto y+P$. Therefore $\{0,1,\xi,\dots,\xi^{q-2}\}$ is a complete set of representatives for the cosets of $P\subset R$, i.e., \begin{equation}\label{R_decomp} R=P\sqcup\underbrace{(1+P)\sqcup(\xi+P)\sqcup\dots\sqcup(\xi^{q-2}+P)}_{R^\times}~. \end{equation} Fix a uniformizer $\pi\in K$ (any element satisfying $P=\pi R$) and let $\mu$ be the unique additive Haar measure on $K$ satisfying $\mu(R)=1$. The open balls in $K$ are precisely the sets of the form $y+\pi^vR$ with $y\in K$ and $v\in\Z$, and every such ball is compact with measure equal to its radius, i.e., $\mu(y+\pi^vR)=|\pi^v|=q^{-v}$. 
In particular, each of the $q$ cosets of $P$ in \eqref{R_decomp} is a compact open ball with measure (and radius) $q^{-1}$, and two elements $x,y\in R$ satisfy $|x-y|=1$ if and only if they belong to different cosets. Henceforth, we reserve the symbols $K$, $|\cdot|$, $d$, $R$, $P$, $q$, $\xi$, $\pi$, and $\mu$ for the items above, and we distinguish $|\cdot|$ from the standard absolute value on $\C$ by writing $|\cdot|_\C$ for the latter.\\ We now recall some useful facts from \cite{FiliPetsche} in our present notation. The projective line of $K$ is the quotient space $\P^1(K)=(K^2\setminus\{(0,0)\})/\sim$, where $(x_0,x_1)\sim(y_0,y_1)$ if and only if $y_0=\lambda x_0$ and $y_1=\lambda x_1$ for some $\lambda\in K^\times$. Thus we may understand $\P^1(K)$ concretely as the set of symbols $[x_0:x_1]$ with $(x_0,x_1)\in K^2\setminus\{(0,0)\}$, subject to the relation $[\lambda x_0:\lambda x_1]=[x_0:x_1]$ for all $\lambda\in K^\times$ and endowed with the topology induced by the quotient $(x_0,x_1)\mapsto[x_0:x_1]$. The projective line is compact and metrizable by the \emph{spherical metric} $\delta:\P^1(K)\times\P^1(K)\to\{0\}\cup\{q^{-v}:v\in\Z_{\geq 0}\}$, which is defined via \begin{equation}\label{sph_met_def} \delta([x_0:x_1],[y_0:y_1]):=\frac{|x_0y_1-x_1y_0|}{\max\{|x_0|,|x_1|\}\cdot\max\{|y_0|,|y_1|\}}~. \end{equation} In particular, every open set in $\P^1(K)$ is a union of balls of the form \begin{equation}\label{ball_def} B_v[x_0:x_1]:=\{[y_0:y_1]\in\P^1(K):\delta([x_0:x_1],[y_0:y_1])\leq q^{-v}\} \end{equation} with $[x_0:x_1]\in\P^1(K)$ and $v\in\Z_{\geq0}$, and every such ball is open and compact. The projective linear group $PGL_2(R)$ is the quotient of $GL_2(R)=\{A\in M_2(R):\det(A)\in R^\times\}$ by its center, namely $Z=\{\left(\begin{smallmatrix}\lambda&0\\0&\lambda\end{smallmatrix}\right):\lambda\in R^\times\}\cong R^\times$. It is straightforward to verify that the rule $$\phi[x_0:x_1]:=[ax_0+bx_1:cx_0+dx_1]~,$$ where $\phi\in PGL_2(R)$ and $\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in GL_2(R)$ is any representative of $\phi$, gives a well-defined transitive action of $PGL_2(R)$ on $\P^1(K)$. \begin{lemma}[$PGL_2(R)$-invariance \cite{FiliPetsche}]\label{invariance} The spherical metric satisfies $$\delta(\phi[x_0:x_1],\phi[y_0:y_1])=\delta([x_0:x_1],[y_0:y_1])$$ for all $\phi\in PGL_2(R)$ and all $[x_0:x_1],[y_0:y_1]\in\P^1(K)$. There is also a unique Borel probability measure $\nu$ on $\P^1(K)$ satisfying $$\nu(\phi(M))=\nu(M)$$ for all $\phi\in PGL_2(R)$ and all Borel subsets $M\subset\P^1(K)$. In particular, for each $v\in\Z_{\geq0}$ the relation $\phi(B_v[x_0:x_1])=B_v(\phi[x_0:x_1])$ defines a transitive $PGL_2(R)$ action on the set of balls of radius $q^{-v}$, and thus $\nu(B_v[x_0:x_1])$ depends only on $v$. \end{lemma} It is routine to verify that $\mu^N\{(x_1,\dots,x_N)\in(y+\pi^vR)^N:x_i=x_j\text{ for some $i\neq j$}\}=0$ and $$\nu^N\{([x_{1,0}:x_{1,1}],\dots,[x_{N,0}:x_{N,1}])\in(\P^1(K))^N:[x_{i,0}:x_{i,1}]=[x_{j,0}:x_{j,1}]\text{ for some $i\neq j$}\}=0$$ for all $N\geq 1$, so we have suitable metrics and measures to define log-Coulomb gases in $X=y+\pi^vR$ and $X=\P^1(K)$. 
\Cref{gen_Z} specializes to these $X$ as follows: \begin{definition}\label{main_Z_def} With $N\geq 0$ and $\bm{s}$ as before, we have $\mathcal{Z}_0(y+\pi^vR,\bm{s})=\mathcal{Z}_0(\P^1(K),\bm{s})=1$, and \begin{align*} \mathcal{Z}_N(y+\pi^vR,\bm{s})&=\int_{(y+\pi^vR)^N}\prod_{1\leq i<j\leq N}|x_i-x_j|^{s_{ij}}\,d\mu^N\qquad\text{and}\\ \mathcal{Z}_N(\P^1(K),\bm{s})&=\int_{(\P^1(K))^N}\prod_{1\leq i<j\leq N}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}\,d\nu^N \end{align*} for $N\geq 1$. The first integral is independent of $y$, and equal to $\mathcal{Z}_N(R,\bm{s})$ if $v=0$ or $\mathcal{Z}_N(P,\bm{s})$ if $v=1$. \end{definition} The formulas and domains of absolute convergence for the integrals above can be stated neatly in terms of the following items from \cite{Web20}: \begin{definition}[Splitting chains]\label{splchdef} A \emph{splitting chain} of order $N\geq 2$ and length $L\geq 1$ is a tuple $\spl=(\ptn_0,\dots,\ptn_L)$ of partitions of $[N]=\{1,\dots,N\}$ satisfying $$\{[N]\}=\ptn_0>\ptn_1>\ptn_2>\dots>\ptn_L=\{\{1\},\dots,\{N\}\}~.$$ \begin{itemize} \item[(a)] Each non-singleton part $\lambda\in\ptn_0\cup\ptn_1\cup\dots\cup\ptn_L$ is called a \emph{branch} of $\spl$. We write $\mathcal{B}(\spl)$ for the set of all branches of $\spl$, i.e., $$\mathcal{B}(\spl):=(\ptn_0\cup\dots\cup\ptn_{L-1})\setminus\ptn_L~.$$ \item[(b)] Each $\lambda\in\mathcal{B}(\spl)$ must appear in a final partition $\ptn_\ell$ before refining into two or more parts in $\ptn_{\ell+1}$, so we define its \emph{depth} $\ell_\spl(\lambda)\in\{0,1,\dots,L-1\}$ and \emph{degree} $\deg_\spl(\lambda)\in\{2,3,\dots,N\}$ by $$\ell_\spl(\lambda):=\max\{\ell:\lambda\in\ptn_\ell\}\qquad\text{and}\qquad\deg_\spl(\lambda):=\#\{\lambda'\in\ptn_{\ell_\spl(\lambda)+1}:\lambda'\subset\lambda\}~.$$ \item[(c)] We say that $\spl$ is \emph{reduced} if each $\lambda\in\mathcal{B}(\spl)$ satisfies $\lambda\in\ptn_\ell\iff\ell=\ell_\spl(\lambda)$. \end{itemize} Write $\mathcal{R}_N$ for the set of reduced splitting chains of order $N$ and define $$\Omega_N:=\bigcap_{\substack{\lambda\subset[N]\\\#\lambda>1}}\left\{\bm{s}:\re(e_\lambda(\bm{s}))>0\right\}\qquad\text{where}\qquad e_\lambda(\bm{s}):=\#\lambda-1+\sum_{\substack{i<j\\i,j\in\lambda}}s_{ij}~.$$ \end{definition} Proposition 3.15 and Theorem 2.6(c) in \cite{Web20} imply the following proposition, which shall be generalized slightly in order to prove the main results of this paper: \begin{proposition}\label{R_prop} For $N\geq 2$, the integral $\mathcal{Z}_N(R,\bm{s})$ converges absolutely if and only if $\bm{s}\in\Omega_N$, and in this case it can be written as the finite sum $$\mathcal{Z}_N(R,\bm{s})=q^{\sum_{i<j}s_{ij}}\sum_{\spl\in\mathcal{R}_N}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_{\spl}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}~.$$ Here $(q-1)_n$ stands for the degree $n$ falling factorial $(z)_n=z(z-1)\dots(z-n+1)\in\Z[z]$ evaluated at the integer $z=q-1$. 
\end{proposition} \section{Statement of results}\label{2} \subsection{The projective analogue}\label{2_1} Our first main result is the following analogue of \Cref{R_prop}: \begin{theorem}\label{main1} For $N\geq 2$, the integral $\mathcal{Z}_N(\P^1(K),\bm{s})$ converges absolutely if and only if $\bm{s}\in\Omega_N$, and in this case it can be written as the finite sum $$\mathcal{Z}_N(\P^1(K),\bm{s})=\frac{1}{(q+1)^{N-1}}\sum_{\spl\in\mathcal{R}_N}\frac{q^{N+\sum_{i<j}s_{ij}}+1-\deg_{\spl}([N])}{q+1-\deg_{\spl}([N])}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_{\spl}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}~.$$ The summand for each $\spl\in\mathcal{R}_N$ is defined for all prime powers $q$, as the denominator $q+1-\deg_\spl([N])$ is cancelled by the factor $(q-1)_{\deg_\spl([N])-1}$ inside the product over $\lambda\in\mathcal{B}(\spl)$. \end{theorem} The evident similarities between \Cref{R_prop} and \Cref{main1} follow from an explicit relationship between the metrics and measures on $K$ and those on $\P^1(K)$. These relationships also play a role in the upcoming results, so they are worth recalling now. Note that $[x_0:x_1]\neq[1:0]$ if and only if $x_1\neq 0$, in which case $x=x_0/x_1$ is the unique element of $K$ satisfying $[x:1]=[x_0:x_1]$. Therefore the rule $\iota(x):=[x:1]$ defines a bijection $\iota:K\to\P^1(K)\setminus\{[1:0]\}$ and relates the metric structures of $K$ and $\P^1(K)$ in a simple way: Given $x,y\in K$, \eqref{sph_met_def} implies $\delta(\iota(x),[1:0])=(\max\{1,|x|\})^{-1}$ and \begin{equation}\label{metric_rule} \delta(\iota(x),\iota(y))=\begin{cases}|x-y|&\text{if $x,y\in R$},\\1&\text{if $x\in R$ and $y\notin R$},\\|1/x-1/y|&\text{if }x,y\notin R.\end{cases} \end{equation} Using the definitions \eqref{ball_def} and $v_K(x):=-\log_q|x|$ for $x\in K^\times$, along with \eqref{metric_rule} and the \emph{strong triangle equality} (i.e., $|x+y|=\max\{|x|,|y|\}$ whenever $|x|\neq|y|$), one easily verifies that \begin{equation}\label{ball_rule} \iota(y+\pi^vR)=\begin{cases}B_v(\iota(y))&\text{if }y\in R,\\B_{v-2v_K(y)}(\iota(y))&\text{if }y\notin R,\end{cases} \end{equation} whenever $y\in K$ and $v\in\Z_{>0}$. That is, $\iota$ sends the open ball of radius $r\in(0,1)$ centered at $y\in K$ onto the open ball of radius $r/\max\{1,|y|^2\}$ centered at $\iota(y)\in\P^1(K)\setminus\{[1:0]\}$, so $\iota:K\to\P^1(K)\setminus\{[1:0]\}$ is a homeomorphism that restricts to an isometry on $R$ and a contraction on $K\setminus R$. The map $\iota$ also relates the measures on $K$ and $\P^1(K)$ in a simple way: Given $v>0$ and a complete set of representatives $y_1,\dots,y_{q^v}\in R$ for the cosets of $\pi^vR\subset R$, applying \eqref{ball_rule} to the partition $R=(y_1+\pi^vR)\sqcup\dots\sqcup(y_{q^v}+\pi^vR)$ yields $$\iota(R)=B_v[y_1:1]\sqcup\cdots\sqcup B_v[y_{q^v}:1]~.$$ Therefore $PGL_2(R)$-invariance of $\nu$ (\Cref{invariance}) implies $\nu(\iota(R))=q^v\cdot\nu(B_v[0:1])$. On the other hand, $$\iota(K\setminus R)=\iota(\{x:|x|\geq q\})=\{\iota(x):\delta(\iota(x),[1:0])\leq q^{-1}\}=B_1[1:0]\setminus\{[1:0]\}$$ implies $\iota(R)\sqcup B_1[1:0]=\iota(R)\sqcup\iota(K\setminus R)\sqcup\{[1:0]\}=\P^1(K)$, which has measure 1. But $\nu(\iota(R))=q\cdot\nu(B_1[1:0])$, so the measure of $\iota(R)$ must be $q/(q+1)$, and therefore every ball $B_v[x_0:x_1]\subset\P^1(K)$ with $v>0$ has measure $q^{-v}\cdot q/(q+1)$.
Combining this with \eqref{ball_rule}, one concludes that the measure $\nu$ on $\P^1(K)\setminus\{[1:0]\}$ pulls back along $\iota$ to an explicit measure on $K$: \begin{equation}\label{measure_rule} \nu(\iota(M))=\frac{q}{q+1}\int_M\left(\max\{1,|x|^2\}\right)^{-1}\,d\mu\qquad\text{for any Borel subset }M\subset K~. \end{equation} Finally, \eqref{R_decomp} and \eqref{ball_rule} give a nice refinement of $\P^1(K)=\iota(R)\sqcup B_1[1:0]$ in terms of the $(q-1)$th roots of unity in $K$, which should be understood as the projective analogue of \eqref{R_decomp}: \begin{equation}\label{P(K)_decomp} \P^1(K)=B_1[0:1]\sqcup\underbrace{B_1[1:1]\sqcup B_1[\xi:1]\sqcup\dots\sqcup B_1[\xi^{q-2}:1]}_{\iota(R^\times)}\sqcup B_1[1:0]~. \end{equation} Indeed, all $q+1$ of the parts in the partition are balls with measure $1/(q+1)$ and radius $q^{-1}$, and two points $[x_0:x_1],[y_0:y_1]\in\P^1(K)$ satisfy $\delta([x_0:x_1],[y_0:y_1])=1$ if and only if $[x_0:x_1]$ and $[y_0:y_1]$ belong to different parts. Note that $\iota$ sends $R^\times$ onto the ``equator" $\iota(R^\times)$, i.e., the set of points in $\P^1(K)$ with $\delta$-distance 1 from both the ``south pole" $[0:1]$ and the ``north pole" $[1:0]$. \subsection{Relationships between grand canonical partition functions}\label{2_2} So far we have only considered log-Coulomb gases with $N$ labeled (and hence distinguishable) particles. Our second main result concerns the situation in which all particles are identical with charge $\mathfrak{q}_i=1$ for all $i$, in which case a microstate $(x_1,\dots,x_N)\in X^N$ is regarded as unique only up to permutations of its entries. Since the energy $E(x_1,\dots,x_N)$ and measure on $X^N$ are invariant under such permutations, each unlabeled microstate makes the contribution $e^{-\beta E(x_1,\dots,x_N)}d\mu^N$ to the integral $Z_N(X,\beta)$ in \eqref{canonical} precisely $N!$ times. Therefore the canonical partition function for the unlabeled microstates is given by $Z_N(X,\beta)/N!$. We further assume that the system exchanges particles with the heat reservoir at chemical potential $\eta$ and define the \emph{fugacity parameter} $f=e^{\eta\beta}$. In this situation the particle number $N\geq 0$ is treated as a random variable and the canonical partition function is replaced by the \emph{grand canonical partition function} \begin{equation}\label{granddef} Z(f,X,\beta):=\sum_{N=0}^\infty Z_N(X,\beta)\frac{f^N}{N!} \end{equation} with the familiar convention $Z_0(X,\beta)=1$. Many properties of the system can be deduced from the grand canonical partition function. For instance, if $\beta>0$ is fixed and $Z_N(X,\beta)$ is sub-exponential in $N$, then $Z(f,X,\beta)$ is analytic in $f$ and the expected number of particles in the system is given by $f\frac{\partial}{\partial f}\log(Z(f,X,\beta))$. The canonical partition function for each $N\geq 0$ can also be recovered by evaluating the $N$th derivative of $Z(f,X,\beta)$ with respect to $f$ at $f=0$. We are interested in the examples $Z(f,R,\beta)$, $Z(f,P,\beta)$, and $Z(f,\P^1(K),\beta)$, which turn out to share several common properties and simple relationships. By setting $s_{ij}=\beta$ in \Cref{main_Z_def}, one sees that $|Z_N(R,\beta)|_\C$, $|Z_N(P,\beta)|_\C$, and $|Z_N(\P^1(K),\beta)|_\C$ are bounded above by 1 for all $N\geq 0$ and all $\beta>0$, and hence $Z(f,R,\beta)$, $Z(f,P,\beta)$, and $Z(f,\P^1(K),\beta)$ are analytic in $f$ when $\beta>0$.
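For illustration, the following sketch assembles a truncated version of the series \eqref{granddef} from a list of canonical values $Z_0,\dots,Z_M$ (toy numbers here, not computed from any particular $X$) and evaluates the expected particle number $f\frac{\partial}{\partial f}\log Z(f,X,\beta)$ by a numerical derivative.
\begin{verbatim}
# Truncated grand canonical partition function Z(f) = sum_N Z_N f^N / N!
# and the expected particle number  <N> = f d/df log Z(f).
# The Z_N below are toy stand-ins, not values computed for a specific X.
import numpy as np
from math import factorial

def grand_Z(f, Z_canonical):
    return sum(Z * f**N / factorial(N) for N, Z in enumerate(Z_canonical))

def expected_particles(f, Z_canonical, h=1e-6):
    return f * (np.log(grand_Z(f + h, Z_canonical))
                - np.log(grand_Z(f - h, Z_canonical))) / (2 * h)

Z_vals = [1.0, 1.0, 0.7, 0.4, 0.2]      # stand-ins for Z_0, ..., Z_4
print(expected_particles(0.5, Z_vals))  # expected particle number at f = 0.5
\end{verbatim}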
Sinclair recently found an elegant relationship between the first two, which is closely related to the partition of $R$ in \eqref{R_decomp}: \begin{proposition}[The $q$th Power Law \cite{ChrisA}]\label{Chris} For $\beta>0$ we have $$Z(f,R,\beta)=(Z(f,P,\beta))^q~.$$ \end{proposition} Roughly speaking, the $q$th Power Law states that a log-Coulomb gas in $R$ exchanging energy and particles with a heat reservoir ``factors" into $q$ identical sub-gases (one in each coset of $P$) that exchange energy and particles with the reservoir. For $\beta>0$, note that the series equation $Z(f,R,\beta)=(Z(f,P,\beta))^q$ is equivalent to the coefficient identities \begin{equation}\label{Rcoeff} \frac{Z_N(R,\beta)}{N!}=\sum_{\substack{N_0+\dots+N_{q-1}=N\\N_0,\dots,N_{q-1}\geq 0}}\prod_{k=0}^{q-1}\frac{Z_{N_k}(P,\beta)}{N_k!}\qquad\text{for all $N\geq 0$.} \end{equation} The $\beta=1$ case of \eqref{Rcoeff} is given in \cite{BGMR}, in which the positive number $Z_N(R,1)/N!$ is recognized as the probability that a random monic polynomial in $R[x]$ splits completely in $R$. The more general $\beta>0$ case given in \cite{ChrisA} makes explicit use of the partition of $R$ into cosets of $P$ (as in \eqref{R_decomp}). In \Cref{3}, we will use the analogous partition of $\P^1(K)$ into $q+1$ balls (recall \eqref{P(K)_decomp}) to show that \begin{equation}\label{P(K)coeff} \frac{Z_N(\P^1(K),\beta)}{N!}=\sum_{\substack{N_0+\dots+N_q=N\\N_0,\dots,N_q\geq 0}}\prod_{k=0}^q\left(\frac{q}{q+1}\right)^{N_k}\frac{Z_{N_k}(P,\beta)}{N_k!}\qquad\text{for all $\beta>0$ and $N\geq 0$,} \end{equation} which immediately implies our second main result: \begin{theorem}[The $(q+1)$th Power Law]\label{main2} For all $\beta>0$ we have $$Z(f,\P^1(K),\beta)=(Z(\tfrac{qf}{q+1},P,\beta))^{q+1}~.$$ \end{theorem} Like the $q$th Power Law, the $(q+1)$th Power Law roughly states that a log-Coulomb gas in $\P^1(K)$ exchanging energy and particles with a heat reservoir ``factors" into $q+1$ identical sub-gases in the balls $B_1[0:1]$, $B_1[1:1]$, $B_1[\xi:1]$, $\dots$, $B_1[\xi^{q-2}:1]$, $B_1[1:0]$ (each isometrically homeomorphic to $P$), with fugacity $\frac{qf}{q+1}$. The $q$th Power Law allows the $(q+1)$th Power Law to be written more crudely as \begin{equation}\label{P(K)=RP} Z(f,\P^1(K),\beta)=Z(\tfrac{qf}{q+1},R,\beta)\cdot Z(\tfrac{qf}{q+1},P,\beta)~, \end{equation} which is to say that the gas in $\P^1(K)$ ``factors" into two sub-gases: one in $\iota(R)$ and one in $B_1[1:0]$ (which are respectively isometrically homeomorphic to $R$ and $P$), both with fugacity $\frac{qf}{q+1}$. \subsection{Functional equations and a quadratic recurrence}\label{2_3} Although \Cref{R_prop} and \Cref{main1} provide explicit formulas for $Z_N(R,\beta)$ and $Z_N(\P^1(K),\beta)$, they are not efficient for computation because they require a complete list of reduced splitting chains of order $N$. For a practical alternative, we take advantage of both Power Laws and the following ideas from \cite{BGMR} and \cite{ChrisA}: Apply $Z(f,P,\beta)\cdot\frac{\partial}{\partial f}$ to the equation $Z(f,R,\beta)=(Z(f,P,\beta))^q$ to get $$Z(f,P,\beta)\cdot\frac{\partial}{\partial f}Z(f,R,\beta)=q\cdot Z(f,R,\beta)\cdot\frac{\partial}{\partial f}Z(f,P,\beta)~,$$ then expand both sides as power series in $f$ to obtain the coefficient equations \begin{equation}\label{coeff} \sum_{k=1}^N\frac{Z_{N-k}(P,\beta)}{(N-k)!}\frac{Z_k(R,\beta)}{(k-1)!}=q\cdot\sum_{k=1}^N\frac{Z_{N-k}(R,\beta)}{(N-k)!}\frac{Z_k(P,\beta)}{(k-1)!}\qquad\text{for all }N\geq 1.
\end{equation} The identities $Z_j(P,\beta)=q^{-j-\binom{j}{2}\beta}Z_j(R,\beta)$ follow easily from \Cref{main_Z_def} and eliminate all instances of $Z_j(P,\beta)$ in \eqref{coeff} while introducing powers of the form $q^{-j-\binom{j}{2}\beta}$. For $N\geq 2$, a careful rearrangement of these powers, the factorials, and the terms in \eqref{coeff} yields the explicit recurrence $$\frac{Z_N(R,\beta)}{N!q^{\frac{1}{2}\binom{N}{2}\beta}}=\sum_{k=1}^{N-1}\frac{k}{N}\cdot\frac{\sinh\left(\frac{\log(q)}{2}\left[\left(N+\binom{N}{2}\beta\right)\left(1-\frac{2k}{N}\right)+1\right]\right)}{\sinh\left(\frac{\log(q)}{2}\left[\left(N+\binom{N}{2}\beta\right)-1\right]\right)}\cdot\frac{Z_{N-k}(R,\beta)}{(N-k)!q^{\frac{1}{2}\binom{N-k}{2}\beta}}\cdot\frac{Z_k(R,\beta)}{k!q^{\frac{1}{2}\binom{k}{2}\beta}}~.$$ The expression at left is identically 1 if $N=0$ or $N=1$, so induction confirms that it is polynomial in ratios of hyperbolic sines for all $N\geq 0$. In particular, its dependence on $q$ is carried only by the factor $\log(q)$ appearing inside the hyperbolic sines, which motivates the following lemma: \begin{lemma}[The Quadratic Recurrence]\label{quad_rec} Set $F_0(t,\beta)=F_1(t,\beta)=1$ for all $\beta\in\C$ and all $t\in\R$. For $N\geq 2$, $\re(\beta)>-2/N$, and $t\in\R$, define $F_N(t,\beta)$ by the recurrence $$F_N(t,\beta):=\begin{cases}\displaystyle{\sum_{k=1}^{N-1}\frac{k}{N}\cdot\frac{\sinh\left(\frac{t}{2}\left[\left(N+\binom{N}{2}\beta\right)\left(1-\frac{2k}{N}\right)+1\right]\right)}{\sinh\left(\frac{t}{2}\left[\left(N+\binom{N}{2}\beta\right)-1\right]\right)}\cdot F_{N-k}(t,\beta)\cdot F_k(t,\beta)}&\text{if $t\neq 0$,}\\ \\\displaystyle{\sum_{k=1}^{N-1}\frac{k}{N}\cdot\frac{\left(N+\binom{N}{2}\beta\right)\left(1-\frac{2k}{N}\right)+1}{\left(N+\binom{N}{2}\beta\right)-1}\cdot F_{N-k}(0,\beta)\cdot F_k(0,\beta)}&\text{if $t=0$.}\end{cases}$$ \begin{enumerate} \item[(a)] For fixed $N\geq 2$ and fixed $t$, the function $\beta\mapsto F_N(t,\beta)$ is holomorphic for $\re(\beta)>-2/N$. \item[(b)] For fixed $N\geq 2$ and fixed $\beta$, the function $t\mapsto F_N(t,\beta)$ is defined, smooth, and even on $\R$. \end{enumerate} \end{lemma} Both parts of the Quadratic Recurrence are straightforward to verify by induction. An interesting and immediate consequence of The Quadratic Recurrence and the preceding discussion is the formula $$Z_N(R,\beta)=N!q^{\frac{1}{2}\binom{N}{2}\beta}F_N(\log(q),\beta)~.$$ It offers a computationally efficient alternative to the ``univariate case" of \Cref{R_prop} (when $s_{ij}=\beta$ for all $i<j$) and extends $Z_N(R,\beta)$ to a smooth function of $q\in(0,\infty)$. Moreover, it transforms nicely under the involution $q\mapsto q^{-1}$: $$Z_N(R,\beta)\big|_{q\mapsto q^{-1}}=N!q^{-\frac{1}{2}\binom{N}{2}\beta}F_N\left(\log(q^{-1}),\beta\right)=N!q^{-\frac{1}{2}\binom{N}{2}\beta}F_N\left(\log(q),\beta\right)=q^{-\binom{N}{2}\beta}Z_N(R,\beta)~.$$ The Quadratic Recurrence serves the projective analogue as well. 
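Before carrying this out, here is a minimal numerical sketch of the recurrence in \Cref{quad_rec} and of the resulting formula for $Z_N(R,\beta)$; it is purely illustrative and makes no claim about numerical stability for large $N$. The $N=2$ check uses the closed form $Z_2(R,\beta)=q^{\beta}(q-1)/(q^{1+\beta}-1)$, which one reads off from \Cref{R_prop}.
\begin{verbatim}
# Sketch of the Quadratic Recurrence and of
#   Z_N(R, beta) = N! * q^{binom(N,2) beta / 2} * F_N(log q, beta).
# Illustrative only.
from math import sinh, log, factorial, comb

def F(N, t, beta, memo=None):
    if memo is None:
        memo = {}
    if N in (0, 1):
        return 1.0
    if N in memo:
        return memo[N]
    A = N + comb(N, 2) * beta
    total = 0.0
    for k in range(1, N):
        if t != 0:
            ratio = sinh(0.5 * t * (A * (1 - 2 * k / N) + 1)) / sinh(0.5 * t * (A - 1))
        else:
            ratio = (A * (1 - 2 * k / N) + 1) / (A - 1)
        total += (k / N) * ratio * F(N - k, t, beta, memo) * F(k, t, beta, memo)
    memo[N] = total
    return total

def Z_R(N, q, beta):
    return factorial(N) * q ** (0.5 * comb(N, 2) * beta) * F(N, log(q), beta)

q, beta = 3, 1.0
# N = 2 check against the splitting-chain formula: q^beta (q-1) / (q^{1+beta} - 1).
assert abs(Z_R(2, q, beta) - q**beta * (q - 1) / (q**(1 + beta) - 1)) < 1e-9
for N in range(2, 6):
    print(N, Z_R(N, q, beta))
\end{verbatim}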
Expanding \eqref{P(K)=RP} into powers of $f$ yields the coefficient equations \begin{equation}\label{Z_N/N!} \frac{Z_N(\P^1(K),\beta)}{N!}=\sum_{k=0}^N\left(\frac{q}{q+1}\right)^N\frac{Z_{N-k}(R,\beta)}{(N-k)!}\frac{Z_k(P,\beta)}{k!}\qquad\text{for all }N\geq 0, \end{equation} and the identities $Z_j(P,\beta)=q^{-j-\binom{j}{2}\beta}Z_j(R,\beta)$ and $Z_j(R,\beta)=j!q^{\frac{1}{2}\binom{j}{2}\beta}F_j(\log(q),\beta)$ allow the $k$th summand to be rewritten as $$\left(\frac{q}{q+1}\right)^N\frac{Z_{N-k}(R,\beta)}{(N-k)!}\frac{Z_k(P,\beta)}{k!}=\frac{q^{\frac{1}{2}\left(N+\binom{N}{2}\beta\right)\left(1-\frac{2k}{N}\right)}}{\left(2\cosh\left(\frac{\log(q)}{2}\right)\right)^N}\cdot F_{N-k}(\log(q),\beta)\cdot F_k(\log(q),\beta)\qquad\text{for $N\geq 1$}.$$ Thus, adding two copies of the sum in \eqref{Z_N/N!} together, pairing the $k$th term of the first copy with the $(N-k)$th term of the second copy, and dividing by $2$ gives $$\frac{Z_N(\P^1(K),\beta)}{N!}=\sum_{k=0}^N\frac{\cosh\left(\frac{\log(q)}{2}\left(N+\binom{N}{2}\beta\right)\left(1-\frac{2k}{N}\right)\right)}{\left(2\cosh\left(\frac{\log(q)}{2}\right)\right)^N}\cdot F_{N-k}(\log(q),\beta)\cdot F_k(\log(q),\beta)\qquad\text{for $N\geq 1$},$$ which is valid for $\re(\beta)>-2/N$. Through this formula, $Z_N(\P^1(K),\beta)$ clearly extends to a smooth function of $q\in(0,\infty)$ and is invariant under the involution $q\mapsto q^{-1}$. We conclude this section by summarizing these observations: \begin{theorem}[Efficient Formulas and Functional Equations] Suppose $N\geq 2$ and $\re(\beta)>-2/N$, and define $(F_k(t,\beta))_{k=0}^N$ as in \Cref{quad_rec}. The $N$th canonical partition functions are given by the formulas \begin{align*} Z_N(R,\beta)&=N!q^{\frac{1}{2}\binom{N}{2}\beta}F_N(\log(q),\beta)\qquad\text{and}\\ \\ Z_N(\P^1(K),\beta)&=N!\sum_{k=0}^N\frac{\cosh\left(\frac{\log(q)}{2}\left(N+\binom{N}{2}\beta\right)\left(1-\frac{2k}{N}\right)\right)}{\left(2\cosh\left(\frac{\log(q)}{2}\right)\right)^N}\cdot F_{N-k}(\log(q),\beta)\cdot F_k(\log(q),\beta)~, \end{align*} which extend $Z_N(R,\beta)$ and $Z_N(\P^1(K),\beta)$ to smooth functions of $q\in(0,\infty)$ satisfying $$Z_N(R,\beta)\big|_{q\mapsto q^{-1}}=q^{-\binom{N}{2}\beta}Z_N(R,\beta)\qquad\text{and}\qquad Z_N(\P^1(K),\beta)\big|_{q\mapsto q^{-1}}=Z_N(\P^1(K),\beta)~.$$ \end{theorem} It should be noted here that the $q\mapsto q^{-1}$ functional equation at left is a special case of the one proved in \cite{DenMeus}, and that both functional equations closely resemble the ones in \cite{Voll10}. \newpage \section{Proofs of the main results}\label{3} This section will establish the proofs of \Cref{main1,main2}. The common step in both is a decomposition of $(\P^1(K))^N$ into $(q+1)^N$ cells that are isometrically isomorphic to $P^N$, which combines with the metric and measure properties in \Cref{1_2} to create the key relationship between the canonical partition functions for $X=\P^1(K)$ and $X=P$. We will prove this relationship first, then conclude the proofs of \Cref{main1,main2} in their own subsections. \subsection{Decomposing the integral over $(\P^1(K))^N$} We begin with an integer $N\geq 2$ that shall remain fixed for the rest of this section, reserve the symbol $\bm{s}$ for a complex tuple $(s_{ij})_{1\leq i<j\leq N}$, and fix the following notation to better organize the forthcoming arguments: \begin{notation} Let $I$ be a subset of $[N]=\{1,\dots,N\}$.
\begin{itemize} \item For any set $X$ we write $X^I$ for the product $\prod_{i\in I}X=\{x_I=(x_i)_{i\in I}:x_i\in X\}$ and assume $X^I$ has the product topology if $X$ is a topological space. \item We write $\mu^I$ for the product Haar measure on $K^I$ satisfying $\mu^I(R^I)=1$, and we make this consistent for $I=\varnothing$ by giving the singleton space $K^\varnothing=R^\varnothing=\{0\}$ measure 1. We also write $\nu^I$ for the product measure on $(\P^1(K))^I$, with the same convention for $I=\varnothing$. \item For a measurable subset $X\subset K$ we set $\mathcal{Z}_\varnothing(X,\bm{s}):=1$ and $$\mathcal{Z}_I(X,\bm{s}):=\int_{X^I}\prod_{\substack{i<j\\i,j\in I}}|x_i-x_j|^{s_{ij}}\,d\mu^I\qquad\text{if}\quad I\neq\varnothing~.$$ Note that $\mathcal{Z}_I(X,\bm{s})$ is constant with respect to the entry $s_{ij}$ if $i\in[N]\setminus I$ or $j\in[N]\setminus I$, and it is equal to $\mathcal{Z}_N(X,\bm{s})$ if $I=[N]$. \item Using the constant $q=\#(R/P)$, we write $(I_0,\dots,I_q)\vdash[N]$ for an \emph{ordered} partition of $[N]$ into at most $q+1$ parts. That is, $(I_0,\dots,I_q)\vdash[N]$ means $I_0,\dots,I_q$ are $q+1$ disjoint ordered subsets of $[N]$ with union equal to $[N]$, where some $I_k$ may be empty. \end{itemize} \end{notation} In addition to the above, it will be useful to consider $I$-analogues of splitting chains: \begin{definition}\label{Isplchdef} Suppose $I\subset[N]$. An \emph{$I$-splitting chain} of length $L\geq 0$ is a tuple $\spl=(\ptn_0,\dots,\ptn_L)$ of partitions of $I$ satisfying $$\{I\}=\ptn_0>\ptn_1>\ptn_2>\dots>\ptn_L=\{\{i\}:i\in I\}~.$$ If $\#I\geq 2$, we define $\mathcal{B}(\spl)$, $\ell_\spl(\lambda)$, and $\deg_\spl(\lambda)\in\{2,3,\dots,\#I\}$ just as in \Cref{splchdef}. Otherwise $\mathcal{B}(\spl)$ will be treated as the empty set and there is no need to define $\ell_\spl$ or $\deg_\spl$. Finally, we call an $I$-splitting chain $\spl$ \emph{reduced} if each $\lambda\in\mathcal{B}(\spl)$ satisfies $\lambda\in\ptn_\ell\iff\ell=\ell_\spl(\lambda)$, write $\mathcal{R}_I$ for the set of reduced $I$-splitting chains, and define $$\Omega_I:=\bigcap_{\substack{\lambda\subset I\\\#\lambda>1}}\left\{\bm{s}:\re(e_\lambda(\bm{s}))>0\right\}\qquad\text{where}\qquad e_\lambda(\bm{s}):=\#\lambda-1+\sum_{\substack{i<j\\i,j\in\lambda}}s_{ij}~.$$ \end{definition} Note that $\mathcal{R}_\varnothing=\varnothing$ because $I=\varnothing$ has no partitions, $\Omega_\varnothing=\C^{N(N-1)/2}$ because $\Omega_\varnothing$ is an intersection of subsets of $\C^{N(N-1)/2}$ taken over an empty index set, and $e_\varnothing(\bm{s})=-1$ for a similar reason. For each singleton $\{i\}$, the set $\mathcal{R}_{\{i\}}$ is comprised of a single splitting chain of length zero, $\Omega_{\{i\}}=\C^{N(N-1)/2}$ for the same reason as the $I=\varnothing$ case, and similarly $e_{\{i\}}(\bm{s})=0$. At the other extreme, taking $I=[N]$ in \Cref{Isplchdef} recovers \Cref{splchdef} and gives $\Omega_I=\Omega_N$. \begin{proposition}\label{Iold} For any $v\in\Z$ and any nonempty subset $I\subset[N]$, the integral $\mathcal{Z}_I(\pi^vR,\bm{s})$ converges absolutely if and only if $\bm{s}\in\Omega_I$, and in this case $$\mathcal{Z}_I(\pi^vR,\bm{s})=\frac{1}{q^{(v-1)(e_I(\bm{s})+1)+\#I}}\sum_{\spl\in\mathcal{R}_I}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_{\spl}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}~.$$ In particular, we recover \Cref{R_prop} by taking $I=[N]$ and $v=0$. 
\end{proposition} \begin{proof} First suppose $I$ is a singleton, so that the product inside the integral $\mathcal{Z}_I(\pi^vR,\bm{s})$ is empty and hence $$\mathcal{Z}_I(\pi^vR,\bm{s})=\int_{(\pi^vR)^I}\,d\mu^I=\int_{\pi^vR}\,d\mu=q^{-v}~.$$ This integral is constant, and hence absolutely convergent, for all $\bm{s}\in\C^{N(N-1)/2}=\Omega_I$. On the other hand, $\mathcal{R}_I$ consists of a single $I$-splitting chain, namely the one-tuple $\spl=(\{I\})$. Then $\mathcal{B}(\spl)=\varnothing$ and $e_I(\bm{s})=0$ imply $$\frac{1}{q^{(v-1)(e_I(\bm{s})+1)+\#I}}\sum_{\spl\in\mathcal{R}_I}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_{\spl}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}=\frac{1}{q^{(v-1)\cdot1+1}}\prod_{\lambda\in\varnothing}\frac{(q-1)_{\deg_{\spl}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}=q^{-v}$$ as well, so the claim holds for any singleton subset $I\subset[N]$. Now suppose $I$ is not a singleton. By relabeling $I$ we may assume $I=[n]$ where $2\leq n\leq N$. By Proposition 3.15 and Lemma 3.16(c) in \cite{Web20}, the integral $$\mathcal{Z}_I(R,\bm{s})=\mathcal{Z}_n(R,\bm{s})=\int_{R^n}\prod_{1\leq i<j\leq n}|x_i-x_j|^{s_{ij}}\,d\mu^n$$ converges absolutely if and only if $\bm{s}$ belongs to the intersection \begin{equation}\label{polytope} \bigcap_{\spl\in\mathcal{R}_n}\bigcap_{\lambda\in\mathcal{B}(\spl)}\left\{\bm{s}:\re(e_{\lambda}(\bm{s}))>0\right\}~, \end{equation} and for such $\bm{s}$ we have $$\mathcal{Z}_n(R,\bm{s})=q^{e_{[n]}(\bm{s})+1-n}\sum_{\spl\in\mathcal{R}_n}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_{\spl}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}~.$$ Changing variables in the integral $\mathcal{Z}_n(R,\bm{s})$ by the homothety $R^n\to(\pi^vR)^n$ gives $$\mathcal{Z}_n(\pi^vR,\bm{s})=q^{-v(e_{[n]}(\bm{s})+1)}\cdot\mathcal{Z}_n(R,\bm{s})=\frac{1}{q^{(v-1)(e_{[n]}(\bm{s})+1)+n}}\sum_{\spl\in\mathcal{R}_n}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_{\spl}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}~,$$ and the first equality implies that the domain of absolute convergence for $\mathcal{Z}_n(\pi^vR,\bm{s})$ is also the intersection appearing in \eqref{polytope}. But every subset $\lambda\subset[n]$ with $\#\lambda>1$ appears as a branch in at least one reduced splitting chain of order $n$, so the intersection in \eqref{polytope} is precisely $\Omega_{[n]}$. Therefore the claim holds for $I=[n]$, and hence for any non-singleton subset $I\subset[N]$.\\ \end{proof} The $v=1$ case of \Cref{Iold} has an important relationship with the main result of this section, which is the following theorem. \begin{theorem}\label{main} For each $N\geq 2$, the integral $\mathcal{Z}_N(\P^1(K),\bm{s})$ converges absolutely if and only if $\bm{s}\in\Omega_N$, and in this case $$\mathcal{Z}_N(\P^1(K),\bm{s})=\left(\frac{q}{q+1}\right)^N\sum_{(I_0,\dots,I_q)\vdash[N]}\prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})~.$$ \end{theorem} \begin{proof} The partition of $\P^1(K)$ in \eqref{P(K)_decomp} can be rewritten in the form $$\P^1(K)=\bigsqcup_{k=0}^q\phi_k(B_1[0:1])~,$$ where $\phi_k\in PGL_2(R)$ is the element represented by $\left(\begin{smallmatrix}1&0\\0&1\end{smallmatrix}\right)$ if $k=0$, $\left(\begin{smallmatrix}1&\xi^{k-1}\\0&1\end{smallmatrix}\right)$ if $0<k<q$, or $\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ if $k=q$.
This leads to a partition of the $N$-fold product, $$(\P^1(K))^N=\bigsqcup_{(I_0,\dots,I_q)\vdash[N]}C(I_0,\dots,I_q)~,$$ where each part is a ``cell" of the form \begin{align*} C(I_0,\dots,I_q)&:=\{([x_{1,0}:x_{1,1}],\dots,[x_{N,0}:x_{N,1}])\in(\P^1(K))^N:[x_{i,0}:x_{i,1}]\in\phi_k(B_1[0:1])\iff i\in I_k\}\\ &=\prod_{k=0}^q(\phi_k(B_1[0:1]))^{I_k}~. \end{align*} Accordingly, the integral $\mathcal{Z}_N(\P^1(K),\bm{s})$ breaks into a sum of integrals of the form \begin{equation}\label{cube_integral} \int_{C(I_0,\dots,I_q)}\prod_{1\leq i<j\leq N}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}\,d\nu^N~, \end{equation} summed over all $(I_0,\dots,I_q)\vdash[N]$. Since every cell $C(I_0,\dots,I_q)$ has positive measure, the integral $\mathcal{Z}_N(\P^1(K),\bm{s})$ converges absolutely if and only if the integral in \eqref{cube_integral} converges absolutely for all $(I_0,\dots,I_q)\vdash[N]$. Fix one $(I_0,\dots,I_q)$ for the moment. By \eqref{P(K)_decomp} and the definition of the $\phi_k$'s above, note that the entries of each tuple $([x_{1,0}:x_{1,1}],\dots,[x_{N,0}:x_{N,1}])\in C(I_0,\dots,I_q)$ satisfy $\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])=1$ if and only if $i$ and $j$ belong to different parts of $(I_0,\dots,I_q)$; in particular, the factor $\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}$ equals $1$ whenever $i$ and $j$ lie in different parts. Therefore the integrand in \eqref{cube_integral} factors as \begin{align*} \prod_{1\leq i<j\leq N}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}&=\prod_{k=0}^q\prod_{\substack{i<j\\i,j\in I_k}}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}~, \end{align*} and the measure on $C(I_0,\dots,I_q)$ factors in a similar way, namely $\prod_{k=0}^qd\nu^{I_k}$ where $\nu^{I_k}$ is the product measure on $(\P^1(K))^{I_k}$. Now Fubini's Theorem for positive functions and $PGL_2(R)$-invariance give \begin{align*} \int_{C(I_0,\dots,I_q)}&\Bigg|\prod_{1\leq i<j\leq N}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}\Bigg|_\C\,d\nu^N\\ &=\prod_{k=0}^q\int_{(\phi_k(B_1[0:1]))^{I_k}}\Bigg|\prod_{\substack{i<j\\i,j\in I_k}}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}\Bigg|_\C\,d\nu^{I_k}\\ &=\prod_{k=0}^q\int_{(B_1[0:1])^{I_k}}\Bigg|\prod_{\substack{i<j\\i,j\in I_k}}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}\Bigg|_\C\,d\nu^{I_k}~, \end{align*} so the integral in \eqref{cube_integral} converges absolutely if and only if all $q+1$ of the integrals of the form \begin{equation}\label{I_k_integral} \int_{(B_1[0:1])^{I_k}}\prod_{\substack{i<j\\i,j\in I_k}}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}\,d\nu^{I_k} \end{equation} converge absolutely. The change of variables $P^{I_k}\to(B_1[0:1])^{I_k}$ given by $\iota:P\to B_1[0:1]$ in each coordinate, along with \eqref{metric_rule}, \eqref{ball_rule}, and \eqref{measure_rule}, allows the integral in \eqref{I_k_integral} to be rewritten as $(\frac{q}{q+1})^{\#I_k}\mathcal{Z}_{I_k}(P,\bm{s})$, and thus \Cref{Iold} implies that it converges absolutely if and only if $\bm{s}\in\Omega_{I_k}$.
Therefore the integral over $C(I_0,\dots,I_q)$ in \eqref{cube_integral} converges absolutely if and only if $\bm{s}\in\Omega_{I_0}\cap\dots\cap\Omega_{I_q}$, and in this case Fubini's Theorem for absolutely integrable functions, $PGL_2(R)$-invariance, and the change of variables above allow it to be rewritten as \begin{align*} \int_{C(I_0,\dots,I_q)}&\prod_{1\leq i<j\leq N}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}\,d\nu^N\\ &=\prod_{k=0}^q\int_{(B_1[0:1])^{I_k}}\prod_{\substack{i<j\\i,j\in I_k}}\delta([x_{i,0}:x_{i,1}],[x_{j,0}:x_{j,1}])^{s_{ij}}\,d\nu^{I_k}\\ &=\prod_{k=0}^q\left(\frac{q}{q+1}\right)^{\#I_k}\mathcal{Z}_{I_k}(P,\bm{s})\\ &=\left(\frac{q}{q+1}\right)^N\prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})~. \end{align*} Finally, since $\mathcal{Z}_N(\P^1(K),\bm{s})$ is the sum of these integrals over all $(I_0,\dots,I_q)\vdash[N]$, it converges absolutely if and only if $$\bm{s}\in\bigcap_{(I_0,\dots,I_q)\vdash[N]}\left(\Omega_{I_0}\cap\dots\cap\Omega_{I_q}\right)=\bigcap_{\substack{I\subset[N]\\\#I>1}}\Omega_I~.$$ The last equality of intersections holds because each subset $I\subset[N]$ with $\#I>1$ appears as a part in at least one of the ordered partitions $(I_0,\dots,I_q)\vdash[N]$, and because none of the parts with $\#I_k\leq 1$ affect the intersection (because $\Omega_{I_k}=\C^{N(N-1)/2}$ for such $I_k$). The intersection of $\Omega_I$ over all $I\subset[N]$ with $\#I>1$ is clearly equal to $\Omega_{[N]}=\Omega_N$ by \Cref{Isplchdef}, so the proof is complete.\\ \end{proof} \subsection{Finishing the proof of Theorem 2.1} \Cref{main} established that the integral $\mathcal{Z}_N(\P^1(K),\bm{s})$ converges absolutely if and only if $\bm{s}\in\Omega_N$, and for such $\bm{s}$ it gave \begin{equation}\label{sum1} \mathcal{Z}_N(\P^1(K),\bm{s})=\left(\frac{q}{q+1}\right)^N\sum_{(I_0,\dots,I_q)\vdash[N]}\prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})~. \end{equation} It remains to show that the righthand sum can be converted into the sum over $\spl\in\mathcal{R}_N$ proposed in \Cref{main1}. \begin{proof}[Proof of \Cref{main1}] We begin by breaking the terms of the sum in \eqref{sum1} into two main groups. The simpler group is indexed by those $(I_0,\dots,I_q)$ with $I_j=[N]$ for some $j$ and $I_k=\varnothing$ for all $k\neq j$, in which case $\mathcal{Z}_{I_j}(P,\bm{s})=\mathcal{Z}_N(P,\bm{s})$ and $\mathcal{Z}_{I_k}(P,\bm{s})=1$ for all $k\neq j$. Therefore each of the group's $q+1$ terms (one for each $j\in\{0,\dots,q\}$) contributes the quantity $\prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})=\mathcal{Z}_N(P,\bm{s})$ to the sum in \eqref{sum1} for a total contribution of \begin{equation}\label{term1} (q+1)\mathcal{Z}_N(P,\bm{s})=\frac{q+1}{q^N}\sum_{\spl\in\mathcal{R}_N}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1} \end{equation} by the $v=1$ and $I=[N]$ case of \Cref{Iold}. The other group of terms is indexed by the ordered partitions $(I_0,\dots,I_q)\vdash[N]$ satisfying $I_0,\dots,I_q\subsetneq[N]$. To deal with them carefully, we fix one such $(I_0,\dots,I_q)$ for the moment, and note that the number $d$ of nonempty parts $I_k$ must be at least 2. Thus we have indices $k_1,\dots,k_d\in\{0,\dots,q\}$ with $I_{k_j}\neq\varnothing$, and for every $k\in\{0,\dots,q\}\setminus\{k_1,\dots,k_d\}$ we have $I_k=\varnothing$ and hence $\mathcal{Z}_{I_k}(P,\bm{s})=1$.
For the nonempty sets $I_{k_j}$, \Cref{Iold} expands $\mathcal{Z}_{I_{k_j}}(P,\bm{s})$ as a sum over $\mathcal{R}_{I_{k_j}}$ (whose elements shall be denoted $\spl_j$ instead of $\spl$) and hence \begin{align*} \prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})&=\prod_{j=1}^d\frac{1}{q^{\#I_{k_j}}}\sum_{\spl_j\in\mathcal{R}_{I_{k_j}}}\prod_{\lambda\in\mathcal{B}(\spl_j)}\frac{(q-1)_{\deg_{\spl_j}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}\\ &=\frac{1}{q^N}\sum_{(\spl_1,\dots,\spl_d)\in\mathcal{R}_{I_{k_1}}\times\cdots\times\mathcal{R}_{I_{k_d}}}\prod_{j=1}^d\prod_{\lambda\in\mathcal{B}(\spl_j)}\frac{(q-1)_{\deg_{\spl_j}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}\\ &=\frac{1}{q^N}\sum_{(\spl_1,\dots,\spl_d)\in\mathcal{R}_{I_{k_1}}\times\cdots\times\mathcal{R}_{I_{k_d}}}\prod_{\lambda\in\mathcal{B}(\spl_1)\sqcup\cdots\sqcup\mathcal{B}(\spl_d)}\frac{(q-1)_{\deg_{\spl_j}(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}~. \end{align*} We now make use of a simple correspondence between the tuples $(\spl_1,\dots,\spl_d)\in\mathcal{R}_{I_{k_1}}\times\cdots\times\mathcal{R}_{I_{k_d}}$ and the reduced splitting chains $\spl=(\ptn_0,\ptn_1,\dots,\ptn_L)\in\mathcal{R}_N$ satisfying $\ptn_1=\{I_{k_1},\dots,I_{k_d}\}$. To establish it, note that each $\spl\in\mathcal{R}_N$ corresponds uniquely to its branch set $\mathcal{B}(\spl)$ (Lemma 2.5(b) of \cite{Web20}), which generalizes in an obvious way to reduced $I$-splitting chains (for any nonempty $I\subset[N]$). Now if $\spl=(\ptn_0,\ptn_1,\dots,\ptn_L)\in\mathcal{R}_N$ satisfies $\ptn_1=\{I_{k_1},\dots,I_{k_d}\}$, the corresponding branch set $\mathcal{B}(\spl)$ decomposes as $$\mathcal{B}(\spl)=\{[N]\}\sqcup\bigsqcup_{j=1}^d\{\lambda\in\mathcal{B}(\spl):\lambda\subset I_{k_j}\}~.$$ Each of the sets $\{\lambda\in\mathcal{B}(\spl):\lambda\subset I_{k_j}\}$ is the branch set $\mathcal{B}(\spl_j)$ for a unique $\spl_j\in\mathcal{R}_{I_{k_j}}$, so in this sense $\spl$ ``breaks" into a unique tuple $(\spl_1,\dots,\spl_d)\in\mathcal{R}_{I_{k_1}}\times\cdots\times\mathcal{R}_{I_{k_d}}$. On the other hand, any tuple $(\spl_1,\dots,\spl_d)\in\mathcal{R}_{I_{k_1}}\times\cdots\times\mathcal{R}_{I_{k_d}}$ can be ``assembled" as follows. Since $\{I_{k_1},\dots,I_{k_d}\}$ is a partition of $[N]$, taking the union of the $d$ branch sets $\mathcal{B}(\spl_1),\dots,\mathcal{B}(\spl_d)$ and the singleton $\{[N]\}$ forms the branch set $\mathcal{B}(\spl)$ for a unique $\spl\in\mathcal{R}_N$. It is clear that ``breaking" and ``assembling" are inverses, giving a correspondence $\{\spl\in\mathcal{R}_N:\ptn_1=\{I_{k_1},\dots,I_{k_d}\}\}\longleftrightarrow\mathcal{R}_{I_{k_1}}\times\cdots\times\mathcal{R}_{I_{k_d}}$ under which each identification $\spl\longleftrightarrow(\spl_1,\dots,\spl_d)$ amounts to a branch set equation, i.e., $$\mathcal{B}(\spl)\setminus\{[N]\}=\mathcal{B}(\spl_1)\sqcup\cdots\sqcup\mathcal{B}(\spl_d)~.$$ In particular, each $\lambda\in\mathcal{B}(\spl)\setminus\{[N]\}$ is contained in exactly one $\mathcal{B}(\spl_j)$, and $\deg_\spl(\lambda)=\deg_{\spl_j}(\lambda)$ by \Cref{splchdef} in this case. These facts allow the sum over $\mathcal{R}_{I_{k_1}}\times\cdots\times\mathcal{R}_{I_{k_d}}$ above to be rewritten as a sum over all $\spl\in\mathcal{R}_N$ with $\ptn_1=\{I_{k_1},\dots,I_{k_d}\}$, and each product over $\lambda\in\mathcal{B}(\spl_1)\sqcup\cdots\sqcup\mathcal{B}(\spl_d)$ inside it is simply a product over $\lambda\in\mathcal{B}(\spl)\setminus\{[N]\}$.
We conclude that an ordered partition $(I_0,\dots,I_q)\vdash[N]$ with $I_0,\dots,I_q\subsetneq[N]$ contributes the quantity \begin{equation}\label{term2} \prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})=\frac{1}{q^N}\sum_{\substack{\spl\in\mathcal{R}_N\\\ptn_1=\{I_{k_1},\dots,I_{k_d}\}}}\prod_{\lambda\in\mathcal{B}(\spl)\setminus\{[N]\}}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1} \end{equation} to the sum in \eqref{sum1}, where $\{I_{k_1},\dots,I_{k_d}\}$ is the (unordered) subset of nonempty parts in that particular ordered partition. We must now total the contribution in \eqref{term2} over all possible $(I_0,\dots,I_q)\vdash[N]$ with $I_0,\dots,I_q\subsetneq[N]$. Given a partition $\{\lambda_1,\dots,\lambda_d\}\vdash[N]$ with $d\geq 2$, note that there are precisely $(q+1)_d=(q+1)\cdot(q)_{d-1}$ ordered partitions $(I_0,\dots,I_q)\vdash[N]$ such that $\{I_{k_1},\dots,I_{k_d}\}=\{\lambda_1,\dots,\lambda_d\}$. Therefore summing \eqref{term2} over all $(I_0,\dots,I_q)\vdash[N]$ with $I_0,\dots,I_q\subsetneq[N]$ gives \begin{align*} \sum_{\substack{(I_0,\dots,I_q)\vdash[N]\\I_0,\dots,I_q\subsetneq[N]}}\prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})&=\frac{1}{q^N}\sum_{\substack{(I_0,\dots,I_q)\vdash[N]\\I_0,\dots,I_q\subsetneq[N]}}\sum_{\substack{\spl\in\mathcal{R}_N\\\ptn_1=\{I_{k_1},\dots,I_{k_d}\}}}\prod_{\lambda\in\mathcal{B}(\spl)\setminus\{[N]\}}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}\\ &=\frac{q+1}{q^N}\sum_{\substack{\{\lambda_1,\dots,\lambda_d\}\vdash[N]\\d\geq 2}}(q)_{d-1}\sum_{\substack{\spl\in\mathcal{R}_N\\\ptn_1=\{\lambda_1,\dots,\lambda_d\}}}\prod_{\lambda\in\mathcal{B}(\spl)\setminus\{[N]\}}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}~. \end{align*} Given a partition $\{\lambda_1,\dots,\lambda_d\}\vdash[N]$, those splitting chains $\spl\in\mathcal{R}_N$ with $\ptn_1=\{\lambda_1,\dots,\lambda_d\}$ all have $\deg_\spl([N])=\#\ptn_1=d$ by \Cref{splchdef}. Moreover, no $\spl\in\mathcal{R}_N$ is missed or repeated in the sum of sums above, so it can be rewritten as \begin{align*} \sum_{\substack{(I_0,\dots,I_q)\vdash[N]\\I_0,\dots,I_q\subsetneq[N]}}\prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})&=\frac{q+1}{q^N}\sum_{\spl\in\mathcal{R}_N}(q)_{\deg_\spl([N])-1}\prod_{\lambda\in\mathcal{B}(\spl)\setminus\{[N]\}}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}\\ &=\frac{q+1}{q^N}\sum_{\spl\in\mathcal{R}_N}\frac{(q)_{\deg_\spl([N])-1}}{(q-1)_{\deg_\spl([N])-1}}\cdot(q^{e_{[N]}(\bm{s})}-1)\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}\\ &=\frac{q+1}{q^N}\sum_{\spl\in\mathcal{R}_N}\frac{q^{N+\sum_{i<j}s_{ij}}-q}{q+1-\deg_\spl([N])}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}~. \end{align*} Note that the summand for each $\spl\in\mathcal{R}_N$ is still defined for any prime power $q$ since the denominators $(q-1)_{\deg_\spl([N])-1}$ and $q+1-\deg_\spl([N])$ (which vanish when $q=\deg_\spl([N])-1$) are cancelled by the numerator $(q-1)_{\deg_\spl([N])-1}$ appearing in the product over $\lambda\in\mathcal{B}(\spl)$. Finally, we evaluate the righthand side of \eqref{sum1} by combining the sum directly above with that in \eqref{term1} and multiplying through by $(\frac{q}{q+1})^N$. 
This yields the desired formula for $\mathcal{Z}_N(\P^1(K),\bm{s})$: \begin{align*} \mathcal{Z}_N(\P^1(K),\bm{s})&=\frac{1}{(q+1)^{N-1}}\sum_{\spl\in\mathcal{R}_N}\left(1+\frac{q^{N+\sum_{i<j}s_{ij}}-q}{q+1-\deg_\spl([N])}\right)\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1}\\ &=\frac{1}{(q+1)^{N-1}}\sum_{\spl\in\mathcal{R}_N}\frac{q^{N+\sum_{i<j}s_{ij}}+1-\deg_\spl([N])}{q+1-\deg_\spl([N])}\prod_{\lambda\in\mathcal{B}(\spl)}\frac{(q-1)_{\deg_\spl(\lambda)-1}}{q^{e_\lambda(\bm{s})}-1} \end{align*} \end{proof} \subsection{Finishing the proof of Theorem 2.3} Our final task is to prove the $(q+1)$th Power Law, which we noted in \Cref{2_2} is equivalent to the equations in \eqref{P(K)coeff}. That is, it remains to prove $$\frac{Z_N(\P^1(K),\beta)}{N!}=\sum_{\substack{N_0+\dots+N_q=N\\N_0,\dots,N_q\geq 0}}\prod_{k=0}^q\left(\frac{q}{q+1}\right)^{N_k}\frac{Z_{N_k}(P,\beta)}{N_k!}\qquad\text{for all $\beta>0$ and $N\geq 0$}.$$ \begin{proof} Fix $N\geq 0$ and $\beta>0$, and fix $\bm{s}$ via $s_{ij}=\beta$ for all $i<j$, so that $\mathcal{Z}_N(\P^1(K),\bm{s})=Z_N(\P^1(K),\beta)$ and $\mathcal{Z}_I(P,\bm{s})=Z_{\#I}(P,\beta)$ for any subset $I\subset[N]$. The formula in \Cref{main} relates these functions of $\beta$ via \begin{align*} Z_N(\P^1(K),\beta)&=\mathcal{Z}_N(\P^1(K),\bm{s})\\ &=\left(\frac{q}{q+1}\right)^N\sum_{(I_0,\dots,I_q)\vdash[N]}\prod_{k=0}^q\mathcal{Z}_{I_k}(P,\bm{s})\\ &=\sum_{(I_0,\dots,I_q)\vdash[N]}\prod_{k=0}^q\left(\frac{q}{q+1}\right)^{\#I_k}Z_{\#I_k}(P,\beta)~. \end{align*} For each choice of $q+1$ ordered integers $N_0,\dots,N_q\geq 0$ satisfying $N_0+\dots+N_q=N$, there are precisely $$\binom{N}{N_0,\dots,N_q}=\frac{N!}{N_0!\cdots N_q!}$$ ordered partitions $(I_0,\dots,I_q)\vdash[N]$ satisfying $\#I_0=N_0,\dots,\#I_q=N_q$. Finally, grouping ordered partitions according to all possible ordered integer choices establishes the desired equation: \begin{align*} \frac{Z_N(\P^1(K),\beta)}{N!}&=\frac{1}{N!}\cdot\sum_{(I_0,\dots,I_q)\vdash[N]}\prod_{k=0}^q\left(\frac{q}{q+1}\right)^{\#I_k}Z_{\#I_k}(P,\beta)\\ &=\frac{1}{N!}\cdot\sum_{\substack{N_0+\dots+N_q=N\\N_0,\dots,N_q\geq 0}}\binom{N}{N_0,\dots,N_q}\prod_{k=0}^q\left(\frac{q}{q+1}\right)^{N_k}Z_{N_k}(P,\beta)\\ &=\sum_{\substack{N_0+\dots+N_q=N\\N_0,\dots,N_q\geq 0}}\prod_{k=0}^q\left(\frac{q}{q+1}\right)^{N_k}\frac{Z_{N_k}(P,\beta)}{N_k!}~. \end{align*} \end{proof} \noindent\textbf{Acknowledgements}: I would like to thank Clay Petsche for confirming several details about the measure and metric on $\P^1(K)$, and I would like to thank Chris Sinclair for many useful suggestions regarding the Power Laws and the Quadratic Recurrence.
\section{Introduction} The AdS/CFT correspondence \cite{MGW} has been explored beyond the supergravity approximation \cite{BMN,GKP}. In the BMN limit \cite{BMN} for the $\mathcal{N} = 4$ super Yang-Mills (SYM) theory the perturbative scaling dimensions of gauge invariant near-BPS operators with large R-charge can be matched with the energies of certain string states, which has been interpreted as the semiclassical quantization of a nearly point-like string with large angular momentum along the central circle of $S^5$ \cite{GKP}. Moreover, the energies of various semiclassical strings with several large angular momenta in $AdS_5 \times S^5$ have been shown in \cite{FRM,FT,BMSZ,AFRT,ART,AS,AT,SS} to match with the anomalous dimensions of the corresponding gauge invariant non-BPS operators, which can be computed by using the Bethe ansatz \cite{MZ} for the diagonalization of the dilatation operator \cite{BKS,NB,NBS,BDS,NBP}, which is represented by the Hamiltonian of an integrable spin chain. From the viewpoint of integrability the gauge/string duality has been further confirmed by verifying the equivalence between the classical string Bethe equation for the classical $AdS_5 \times S^5$ string sigma model and the Bethe equation for the spin chain in various sectors such as SU(2), SL(2), SO(6) and so on \cite{KMMZ,KZ}, while it has also been demonstrated at the level of the effective action, where an interpolating spin chain sigma model action describing the continuum limit of the spin chain in the coherent state basis was constructed \cite{MK,KRT,AAT}. In \cite{LM} Lunin and Maldacena have found a supergravity background dual to the Leigh-Strassler or $\beta$-deformation of $\mathcal{N} = 4$ SYM theory \cite{LS} by applying a sequence of T-duality transformations and shifts of angular coordinates to the original $AdS_5 \times S^5$ background. They have taken the plane-wave limit of the deformed $AdS_5 \times \tilde{S}^5$ background with a deformed five-sphere $\tilde{S}^5$ and have shown that the string spectrum in the pp-wave coincides with the spectrum of BMN-type operators in the $\beta$-deformed $\mathcal{N} = 4$ SYM \cite{NP}. The Lax representation for the bosonic string theory in the real $\beta$-deformed background has been constructed \cite{SF}, which is similar to the undeformed case \cite{AF}. The gauge/string duality for the $\beta$-deformed case has been investigated by comparing the energies of semiclassical strings to the anomalous dimensions of the gauge theory operators in the two-scalar sector \cite{FRT}. The energy of a circular string with two equal angular momenta in $\tilde{S}^5$ has been computed within the semiclassical approach and has also been derived from the anisotropic Landau-Lifshitz equation for the interpolating string sigma model action which was obtained by taking the ``fast-string" limit of the world sheet action in $AdS_5 \times \tilde{S}^5$. This interpolating action on the string theory side has been shown to coincide with the continuum limit of the coherent state action of an anisotropic XXZ spin chain on the $\beta$-deformed $\mathcal{N} = 4$ SYM theory side, which was constructed using the one-loop dilatation operator in the deformed gauge theory \cite{RR,BC}. The classical string Bethe equation on the string theory side has been derived from the Lax representation of \cite{SF} to coincide with the thermodynamic limit of the Bethe equation for the anisotropic spin chain.
Various relevant aspects of the gauge/string duality for the marginally deformed backgrounds have been investigated \cite{KMSS,BDR,BR,FG,GN}. Multi-spin configurations of strings which move in both the $AdS_5$ and $\tilde{S}^5$ parts of the $\beta$-deformed background have been constructed \cite{BDR}, where the winding numbers and the frequencies associated with the angular momenta take the same magnitudes, respectively, for each part of $AdS_5 \times \tilde{S}^5$. This property was also seen in the circular string solution with two equal spins \cite{FRT}, which is specified by the same frequencies and winding numbers. We will construct a circular string solution with two unequal spins in $\tilde{S}^5$ which is characterized by different frequencies and winding numbers. The energy of the string solution will be computed and represented in terms of the unequal winding numbers, the unequal angular momenta and the deformation parameter. This energy spectrum on the string theory side will be compared with that of the solution in \cite{FRT} for the anisotropic Landau-Lifshitz equation and the $\beta$-deformed Bethe equation for the spin configuration with the filling fraction away from one half. \section{Rotating string solution with two unequal spins} We consider the motion of a rotating closed string in the supergravity background dual to the real $\beta$-deformation of the $\mathcal{N}=4$ SYM theory. The $\beta$-deformed background for real $\beta = \gamma$ is given by \cite{LM} \begin{eqnarray} ds_{str}^2 &=& R^2 \left[ ds_{AdS_5}^2 + \sum_{i=1}^{3}(d\rho_i^2 + G\rho_i^2 d\phi_i^2 ) + \tilde{\gamma}^2G \rho_1^2 \rho_2^2 \rho_3^2(\sum_{i=1}^{3} d\phi_i)^2 \right], \nonumber \\ B_2 &=& R^2 \tilde{\gamma} G w_2, \hspace{1cm} w_2 \equiv \rho_1^2\rho_2^2d\phi_1d\phi_2 + \rho_2^2\rho_3^2d\phi_2d\phi_3 + \rho_3^2\rho_1^2d\phi_3d\phi_1, \\ G^{-1} &=& 1 + \tilde{\gamma}^2 Q, \hspace{1cm} Q \equiv \rho_1^2\rho_2^2 + \rho_2^2\rho_3^2 + \rho_3^2\rho_1^2, \hspace{1cm} \sum_{i=1}^{3}\rho_i^2 = 1, \nonumber \end{eqnarray} where \begin{equation} (\rho_1, \rho_2, \rho_3) = (\sin\alpha \cos \theta, \sin \alpha \sin\theta, \cos\alpha), \hspace{1cm} R^4 = Ng_{YM}^2 = \lambda \end{equation} and the regular deformation parameter $\tilde{\gamma}$ of the supergravity background is related to the real deformation parameter $\gamma$ of the deformed $\mathcal{N}=4$ SYM as $\tilde{\gamma} = R^2 \gamma$. We concentrate on a configuration in which a closed string stays at the center of $AdS_5$ and moves in the $\tilde{S}^3$ part of the deformed five-sphere defined by \begin{equation} \alpha = \frac{\pi}{2}, \hspace{1cm} \mathrm{i.e.} \hspace{1cm} \rho_1 = \cos\theta, \; \rho_2 = \sin\theta, \; \rho_3 = 0. \end{equation} The relevant bosonic string action takes the form \begin{eqnarray} S &=& -\frac{1}{2} R^2\int d\tau \int \frac{d\sigma}{2\pi} \Bigl[ \gamma^{\alpha\beta}(-\partial_{\alpha}t \partial_{\beta}t + \partial_{\alpha}\theta \partial_{\beta}\theta + G\cos^2\theta \partial_{\alpha}\phi_1 \partial_{\beta}\phi_1 + G\sin^2\theta \partial_{\alpha}\phi_2 \partial_{\beta}\phi_2 ) \nonumber \\ & & - 2\epsilon^{\alpha\beta} \tilde{\gamma} G \sin^2\theta \cos^2\theta \partial_{\alpha}\phi_1 \partial_{\beta}\phi_2 \Bigr], \end{eqnarray} where $\epsilon^{01}=1$, $\gamma^{\alpha\beta}$ is expressed as $\gamma^{\alpha\beta}=\sqrt{-h} h^{\alpha\beta}$ in terms of a world-sheet metric $h^{\alpha\beta}$, and \begin{equation} G = \frac{1}{1 + \frac{\tilde{\gamma}^2}{4}\sin^22\theta}.
\end{equation} We choose the conformal gauge $\gamma^{\alpha\beta}= \mathrm{diag}(-1,1)$ and make the following ansatz describing a closed string rotating and wound in the $\phi_1$ and $\phi_2$ directions \begin{equation} t = \kappa\tau, \hspace{1cm} \phi_1 = \omega_1 \tau + m_1\sigma, \hspace{1cm} \phi_2 = \omega_2 \tau + m_2\sigma, \hspace{1cm} \theta = \theta_0 = \mathrm{const}, \end{equation} where $m_1, m_2$ are the winding numbers. The string equation of motion for $\theta$ is satisfied when the constant $\theta_0$ is specified by \begin{eqnarray} \left[\omega_1^2 - m_1^2 - (\omega_2^2 - m_2^2)\right] \left( 1 + \frac{\tilde{\gamma}^2}{4}\sin^22\theta_0 \right) -2\tilde{\gamma} (\omega_1 m_2 - \omega_2m_1)\cos 2\theta_0 \nonumber \\ + \tilde{\gamma}^2 \cos2\theta_0 \left[\cos^2\theta_0(\omega_1^2 - m_1^2) + \sin^2\theta_0(\omega_2^2 - m_2^2) \right] = 0. \label{thc}\end{eqnarray} It has a simple string solution with two equal angular momenta, which is described by $\theta_0 = \pi/4, \; \omega_1^2 - m_1^2 = \omega_2^2 - m_2^2$ in ref. \cite{FRT}. Here we look for an extended solution with two unequal angular momenta which reduces to the simple solution in a special parameter limit. The equation (\ref{thc}) is rewritten in terms of $x = \cos2\theta_0$ as \begin{equation} \frac{\tilde{\gamma}^2}{4}(\Omega_1 - \Omega_2)x^2 + \tilde{\gamma}\left( \frac{\Omega_1 + \Omega_2}{2} \tilde{\gamma} - 2\Omega_0 \right) x + \left(1 + \frac{\tilde{\gamma}^2}{4} \right)(\Omega_1 - \Omega_2) = 0, \label{omx}\end{equation} where $\Omega_i \equiv \omega_i^2 - m_i^2 \; (i=1, 2), \; \Omega_0 \equiv \omega_1m_2 - \omega_2m_1$. This equation determines $x$ in terms of $\omega_i, \; m_i(i=1,2)$ and $\tilde{\gamma}$. The angular momenta $\mathcal{J}_1 = J_1/\sqrt{\lambda}$ and $\mathcal{J}_2 = J_2/\sqrt{\lambda}$ coming from the rotations in the $\phi_1$ and $\phi_2$ directions are obtained by \begin{eqnarray} \mathcal{J}_1 &=& \frac{1}{1 + \frac{\tilde{\gamma}^2}{4}(1-x^2)}\left[ \frac{1+x}{2}\omega_1 + \frac{\tilde{\gamma}}{4}( 1 - x^2 )m_2 \right], \label{tao} \\ \mathcal{J}_2 &=& \frac{1}{1 + \frac{\tilde{\gamma}^2}{4}(1-x^2)}\left[ \frac{1-x}{2}\omega_2 - \frac{\tilde{\gamma}}{4}( 1 - x^2 )m_1 \right]. \label{tat}\end{eqnarray} From them the frequencies $\omega_1$ and $\omega_2$ are expressed as \begin{eqnarray} \omega_1 &=& \frac{2}{1+x}\left[ \left( 1 + \frac{\tilde{\gamma}^2}{4}(1-x^2) \right) \mathcal{J}_1 - \frac{\tilde{\gamma}}{4}( 1 - x^2 )m_2 \right], \label{omo} \\ \omega_2 &=& \frac{2}{1-x}\left[ \left( 1 + \frac{\tilde{\gamma}^2}{4}(1-x^2) \right) \mathcal{J}_2 + \frac{\tilde{\gamma}}{4}( 1 - x^2 )m_1 \right]. \label{omt}\end{eqnarray} The conformal gauge constraints imply \begin{eqnarray} \kappa^2 = \frac{1}{1 + \frac{\tilde{\gamma}^2}{4}(1-x^2)} \left[ \frac{1 + x}{2} (\omega_1^2 + m_1^2 ) + \frac{1 - x}{2}(\omega_2^2 + m_2^2 ) \right], \label{kax} \\ \frac{1 + x}{2}\omega_1m_1 + \frac{1 - x}{2}\omega_2m_2 = 0. \label{omm}\end{eqnarray} The substitution of $\omega_1$ and $\omega_2$ in (\ref{omo}) and (\ref{omt}) into the eq. (\ref{omm}) leads to a compact expression \begin{equation} \mathcal{J}_1 m_1 + \mathcal{J}_2 m_2 = 0. \label{tam}\end{equation} It is noted that this expression for the $\gamma$-deformed background takes the same form as that for the undeformed background. When $\theta_0 = \pi/4$, that is, $x = 0$, the eqs. 
(\ref{tao}), (\ref{tat}) and (\ref{omm}) provide $\mathcal{J}_1 = \mathcal{J}_2 = \mathcal{J}/2, \; \omega_1 = \omega_2 = \mathcal{J} + \tilde{\gamma}(m + \tilde{\gamma}\mathcal{J}/2)/2$ with $m_1 = -m_2 \equiv m$ and the total spin $\mathcal{J} = \mathcal{J}_1 + \mathcal{J}_2$, which is the special string solution with two equal angular momenta \cite{FRT}. From (\ref{kax}) the energy of circular string solution is specified by \begin{equation} E^2 = \frac{\lambda}{1 + \frac{\tilde{\gamma}^2}{4}(1-x^2)} \left[ \frac{1 + x}{2} (\omega_1^2 + m_1^2 ) + \frac{1 - x}{2}(\omega_2^2 + m_2^2 ) \right]. \label{exo}\end{equation} The quadratic equation (\ref{omx}) yields a solution expressed in terms of $\omega_1, \omega_2$, which is further inserted into (\ref{tao}), (\ref{tat}). If we can determine $\omega_1, \omega_2$ as functions of $\mathcal{J}_1, \mathcal{J}_2$ from the two inserted equations, we substitute these functions into (\ref{exo}) to obtain the energy expressed in terms of $\mathcal{J}_1, \mathcal{J}_2$. However, it is impossible to derive the functions so that we take the following alternative procedure. First combining (\ref{omo}) and (\ref{omt}) with (\ref{exo}) we have the energy expression \begin{eqnarray} E^2 &=& \lambda \Biggl[ 2\left( 1 + \frac{\tilde{\gamma}^2}{4}(1-x^2) \right) \left( \frac{\mathcal{J}_1^2}{1+x} + \frac{\mathcal{J}_2^2}{1-x} \right) + \tilde{\gamma}( - m_2\mathcal{J}_1 + m_1\mathcal{J}_2 ) \nonumber \\ &+& \tilde{\gamma}( m_2\mathcal{J}_1 + m_1\mathcal{J}_2 )x + \frac{1 + x}{2}m_1^2 + \frac{1 - x}{2}m_2^2 \Biggr]. \label{ext}\end{eqnarray} When $\omega_1$ and $\omega_2$ in (\ref{omo}) and (\ref{omt}) are directly substituted into (\ref{omx}) we have an involved equation for $x$. If a solution $x$ of the transcendental equation is obtained as a function of $\mathcal{J}_i, m_i (i=1,2), \tilde{\gamma}$ and inserted into (\ref{ext}), then the energy of string solution is expressed in terms of the angular momenta, the winding numbers and the deformation parameter. \section{Energy-spin relation} In order to solve the transcendental equation we consider the parameter region specified by $x \ll 1$ and take the expansion around $x = 0$. This is the case of almost equal spins, i.e. $\Delta J \ll J, \; \Delta J \equiv J_1 - J_2$. We use (\ref{omo}) and (\ref{omt}) to expand $\Omega_1 - \Omega_2$ in (\ref{omx}) in powers of $x$ as \begin{equation} \Omega_1 - \Omega_2 = A_0 + A_1x + A_2x^2 + A_3x^3 + \cdots , \label{oma}\end{equation} where \begin{eqnarray} &A_0 = 4\left(1 + \frac{\tilde{\gamma}^2}{4}\right) \left[ \left(1 + \frac{\tilde{\gamma}^2}{4}\right)(\mathcal{J}_1^2 - \mathcal{J}_2^2) - \frac{\tilde{\gamma}}{2} (m_2\mathcal{J}_1 + m_1\mathcal{J}_2) - \frac{1}{4} (m_1^2 - m_2^2) \right],& \nonumber \\ &A_1 = -8 \left[\left(1 + \frac{\tilde{\gamma}^2}{4}\right)^2(\mathcal{J}_1^2 + \mathcal{J}_2^2) + \frac{\tilde{\gamma}}{2}\left(1 + \frac{\tilde{\gamma}^2}{4}\right)( - m_2\mathcal{J}_1 + m_1\mathcal{J}_2 ) + \frac{\tilde{\gamma}^2}{16} (m_1^2 + m_2^2) \right],& \nonumber \\ &A_2 = 12\left(1 + \frac{\tilde{\gamma}^2}{4}\right)\left(1 + \frac{\tilde{\gamma}^2}{12} \right) (\mathcal{J}_1^2 - \mathcal{J}_2^2) - 4\tilde{\gamma}\left(1 + \frac{\tilde{\gamma}^2}{8}\right) (m_2\mathcal{J}_1 + m_1\mathcal{J}_2) - \frac{\tilde{\gamma}^2}{4} (m_1^2 - m_2^2),& \nonumber \\ &A_3 = -16 \left[\left(1 + \frac{\tilde{\gamma}^2}{4}\right)(\mathcal{J}_1^2 + \mathcal{J}_2^2) + \frac{\tilde{\gamma}}{4}( - m_2\mathcal{J}_1 + m_1\mathcal{J}_2 ) \right]. 
& \end{eqnarray} Similarly, $(\Omega_1 + \Omega_2)\tilde{\gamma}/2 - 2\Omega_0$ in (\ref{omx}) can be expanded as \begin{equation} \frac{\Omega_1 + \Omega_2}{2} \tilde{\gamma} - 2\Omega_0 = B_0 + B_1x + B_2x^2 + \cdots, \nonumber \label{omb}\end{equation} where \begin{eqnarray} &B_0 = 2\left(1 + \frac{\tilde{\gamma}^2}{4}\right)^2 \left[\tilde{\gamma}(\mathcal{J}_1^2 + \mathcal{J}_2^2) + 2( - m_2\mathcal{J}_1 + m_1\mathcal{J}_2 ) + \frac{\tilde{\gamma}}{4} \frac{m_1^2 + m_2^2} {1 + \frac{\tilde{\gamma}^2}{4}} \right], & \nonumber \\ &B_1 = -4\left(1 + \frac{\tilde{\gamma}^2}{4}\right) \left[ \tilde{\gamma}\left(1 + \frac{\tilde{\gamma}^2}{4}\right)(\mathcal{J}_1^2 - \mathcal{J}_2^2) - \left( 1 + \frac{\tilde{\gamma}^2}{2} \right)(m_2\mathcal{J}_1 + m_1\mathcal{J}_2) - \frac{\tilde{\gamma}}{4} (m_1^2 - m_2^2) \right],& \nonumber \\ &B_2 = 6\left(1 + \frac{\tilde{\gamma}^2}{4}\right) \left[\tilde{\gamma}\left(1 + \frac{\tilde{\gamma}^2}{12}\right)(\mathcal{J}_1^2 + \mathcal{J}_2^2) + \frac{2}{3}\left(1 + \frac{\tilde{\gamma}^2}{4}\right) ( - m_2\mathcal{J}_1 + m_1\mathcal{J}_2 ) + \frac{\tilde{\gamma}^3}{48} \frac{m_1^2 + m_2^2}{1 + \frac{\tilde{\gamma}^2}{4}} \right]. & \end{eqnarray} It is noted that the coefficients $A_k, B_k \;(k=0,1,2,\cdots)$ show the alternate expressions for $k=$ even and $k=$ odd. Hence by combining (\ref{oma}) and (\ref{omb}) with (\ref{omx}) and taking account of the behaviors that $A_0$ is of order $\epsilon$, while $A_1$ and $B_0$ are of order $\epsilon^0$, we make the leading order estimation of $x$ as \begin{equation} x_1 = - \frac{\left(1 + \frac{\tilde{\gamma}^2}{4}\right)A_0}{\tilde{\gamma} B_0 + \left(1 + \frac{\tilde{\gamma}^2}{4}\right)A_1 }, \end{equation} which is rewritten by \begin{equation} x_1 = \frac{1 + \frac{\tilde{\gamma}^2}{4}}{2(\mathcal{J}_1^2 + \mathcal{J}_2^2)} \left[ \mathcal{J}_1^2 - \mathcal{J}_2^2 - \frac{\tilde{\gamma}}{2} \frac{m_2\mathcal{J}_1 + m_1\mathcal{J}_2} {1 + \frac{\tilde{\gamma}^2}{4}} - \frac{m_1^2 - m_2^2}{4\left(1 + \frac{\tilde{\gamma}^2}{4}\right) } \right]. \label{xmn}\end{equation} Using the Virasoro constraint (\ref{tam}) and a parameter $\alpha = \mathcal{J}_1/\mathcal{J}$ with the total spin $\mathcal{J}$ we illustrate that $x_1$ is the term of order $\epsilon = 2\alpha - 1$ \begin{equation} x_1 = \frac{2\alpha - 1}{1 + (2\alpha - 1)^2} \left[ 1 + \frac{\tilde{\gamma}^2}{4} + \frac{\tilde{\gamma}}{2} \frac{m_1}{\mathcal{J}_2} + \frac{1}{4}\left( \frac{m_1}{\mathcal{J}_2} \right)^2 \right]. \end{equation} It is possible to estimate the non-leading term $x_2$ by substituting $x = x_1 + x_2$ into the eq. (\ref{omx}) accompanied with (\ref{oma}) and (\ref{omb}). 
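(As an aside, the leading estimate $x_1$ can be checked against a direct numerical solution of the transcendental equation obtained by substituting (\ref{omo}) and (\ref{omt}) into (\ref{omx}). The following is a minimal sketch of such a check, not part of the derivation; the sample values of $\mathcal{J}_i$, $m_i$ and $\tilde{\gamma}$ are purely illustrative and are chosen to satisfy the constraint (\ref{tam}).)
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def F(x, J1, J2, m1, m2, gt):
    """Left-hand side of (omx) with omega_1, omega_2 eliminated via (omo), (omt)."""
    pref = 1.0 + gt**2 / 4.0 * (1.0 - x**2)
    w1 = 2.0 / (1.0 + x) * (pref * J1 - gt / 4.0 * (1.0 - x**2) * m2)   # (omo)
    w2 = 2.0 / (1.0 - x) * (pref * J2 + gt / 4.0 * (1.0 - x**2) * m1)   # (omt)
    O1, O2, O0 = w1**2 - m1**2, w2**2 - m2**2, w1 * m2 - w2 * m1
    return (gt**2 / 4.0 * (O1 - O2) * x**2
            + gt * ((O1 + O2) / 2.0 * gt - 2.0 * O0) * x
            + (1.0 + gt**2 / 4.0) * (O1 - O2))

# illustrative data obeying the constraint (tam): J1*m1 + J2*m2 = 0
# (J1, J2 stand for the rescaled momenta; m1, m2 are winding numbers)
J1, J2, m1, m2, gt = 6.0, 4.0, 2, -3, 0.3
grid = np.linspace(-0.9, 0.9, 361)
vals = np.array([F(x, J1, J2, m1, m2, gt) for x in grid])
i = int(np.argmax(np.diff(np.sign(vals)) != 0))   # first sign change on the grid
x_num = brentq(F, grid[i], grid[i + 1], args=(J1, J2, m1, m2, gt))

pref = 1.0 + gt**2 / 4.0
x1 = pref / (2.0 * (J1**2 + J2**2)) * (J1**2 - J2**2
     - gt / 2.0 * (m2 * J1 + m1 * J2) / pref
     - (m1**2 - m2**2) / (4.0 * pref))             # leading estimate (xmn)
print(x_num, x1)   # should agree up to higher orders in 2*alpha - 1
\end{verbatim}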
By considering the behaviors that $A_0, A_2$ and $B_1$ are of order $\epsilon$, while $A_1, A_3, B_0$ and $B_2$ are of order $\epsilon^0$, we obtain \begin{eqnarray} x_2 = -\frac{1}{\tilde{\gamma} B_0 + \left(1 + \frac{\tilde{\gamma}^2}{4}\right)A_1 } \Biggl[ x_1^2 \left(\frac{\tilde{\gamma}^2}{4}A_0 + \tilde{\gamma} B_1 + \left(1 + \frac{\tilde{\gamma}^2}{4}\right)A_2 \right) \nonumber \\ + x_1^3\left( \frac{\tilde{\gamma}^2}{4}A_1 + \tilde{\gamma} B_2 + \left(1 + \frac{\tilde{\gamma}^2}{4}\right)A_3 \right) \Biggr], \end{eqnarray} which is the term of order $\epsilon^3$ as shown by \begin{eqnarray} x_2 &=& \frac{x_1^2}{\mathcal{J}_1^2 + \mathcal{J}_2^2} \left[ \frac{3}{2} \left(1-\frac{\tilde{\gamma}^2}{6}\right)(\mathcal{J}_1^2 - \mathcal{J}_2^2) + \frac{\tilde{\gamma}^3}{8} \frac{m_2\mathcal{J}_1 + m_1\mathcal{J}_2}{1 + \frac{\tilde{\gamma}^2}{4}} + \frac{\tilde{\gamma}^2}{16} \frac{m_1^2 - m_2^2}{1 + \frac{\tilde{\gamma}^2}{4}} \right] \nonumber \\ &-& \frac{2x_1^3}{1 + \frac{\tilde{\gamma}^2}{4}}. \label{xtw}\end{eqnarray} The substitution of $x = x_1 + x_2 + x_3$ where $x_3$ is the term of order $\epsilon^5$ into the energy expression (\ref{ext}) yields the following expansion up to order $\epsilon^6$ \begin{eqnarray} E^2 &=& \lambda \Biggl[ 2(\mathcal{J}_1^2 + \mathcal{J}_2^2) + \frac{1}{2}(m_2 - \tilde{\gamma}\mathcal{J}_1)^2 + \frac{1}{2}(m_1 + \tilde{\gamma}\mathcal{J}_2)^2 \nonumber \\ &+& (x_1 + x_2 + x_3)\left( -2\left(1 + \frac{\tilde{\gamma}^2}{4}\right) (\mathcal{J}_1^2 - \mathcal{J}_2^2) + \tilde{\gamma}(m_2\mathcal{J}_1 + m_1\mathcal{J}_2) + \frac{m_1^2 - m_2^2}{2} \right) \nonumber \\ &+& 2(\mathcal{J}_1^2 + \mathcal{J}_2^2)(x_1^2 + 2x_1x_2 + 2x_1x_3 + x_2^2 + x_1^4 + 4x_1^3x_2 + x_1^6 ) \\ &-& 2(\mathcal{J}_1^2 - \mathcal{J}_2^2)( x_1^3 + 3 x_1^2x_2 + x_1^5 ) + \cdots \Biggr]. \nonumber \end{eqnarray} Since the fourth term can be expressed as $(x_1 + x_2 + x_3) (-4(\mathcal{J}_1^2 + \mathcal{J}_2^2)x_1)$, we have \begin{eqnarray} E^2 &=& P + 2(J_1^2 + J_2^2)(x_1^4 + x_2^2 + 4x_1^3x_2 + x_1^6) -2(J_1^2 - J_2^2)(x_1^3 + 3x_1^2x_2 + x_1^5 ) + \cdots, \\ P &\equiv& 2(J_1^2 + J_2^2)( 1- x_1^2 ) + \frac{\lambda}{2} (m_2 - \gamma J_1)^2 + \frac{\lambda}{2}(m_1 + \gamma J_2)^2, \nonumber \end{eqnarray} where the $x_1x_3$ term has been canceled out. The expression $x_1$ in (\ref{xmn}) is rewritten by \begin{equation} x_1 = \frac{1}{1 + (2\alpha -1)^2} \left[ 2\alpha -1 + \frac{\tilde{\lambda}}{4} (m_2 - \gamma J_1)^2 - \frac{\tilde{\lambda}}{4}(m_1 + \gamma J_2)^2 \right] \equiv \frac{\bar{x}_1}{1+(2\alpha-1)^2}, \label{xon}\end{equation} where $\tilde{\lambda}$ is the effective coupling constant $\tilde{\lambda} = \lambda/J^2 = 1/\mathcal{J}^2$, so that $P$ takes the form \begin{eqnarray} P &=& P_0 - \frac{\tilde{\lambda}^2J^2}{16}\left[(m_2 - \gamma J_1)^2 - (m_1 + \gamma J_2)^2\right]^2 + J^2(2\alpha -1)^2(1 + (2\alpha -1)^2) x_1^2, \\ P_0 &\equiv& J^2\left[ 1 - \frac{\tilde{\lambda}(2\alpha -1)}{2} \left( (m_2 - \gamma J_1)^2 - (m_1 + \gamma J_2)^2 \right) \right] + \frac{\lambda}{2}(m_2 - \gamma J_1)^2 + \frac{\lambda}{2} (m_1 + \gamma J_2)^2. \nonumber \end{eqnarray} The leading part $P_0$ which is of order $\epsilon^0$ is rearranged into \begin{equation} P_0 = J^2\left[ 1 + \frac{J_1J_2}{J^2} \tilde{\lambda} (\gamma J + m_1 - m_2)^2 + \frac{\tilde{\lambda}}{J^2}(m_1J_1 + m_2J_2 )^2 \right], \end{equation} which is expressed through the Virasoro constraint (\ref{tam}) as \begin{equation} P_0 = J^2 + J^2\alpha( 1 - \alpha )\tilde{\lambda}(\gamma J + m_1 - m_2)^2. 
\end{equation} Gathering together with the expression $x_2$ in (\ref{xtw}) described by \begin{eqnarray} x_2 &=& \frac{x_1^2}{1 + (2\alpha -1)^2} \bar{x}_2, \\ \bar{x}_2 &\equiv& \left(1-\frac{\tilde{\gamma}^2}{2} \right)(2\alpha -1) + \tilde{\lambda} \gamma ( m_2J_1 + m_1J_2 ) + \frac{\tilde{\lambda}}{2}(m_1^2 - m_2^2) \nonumber \end{eqnarray} we get the following energy expression \begin{eqnarray} E^2 &=& P_0 - \frac{\tilde{\lambda}^2J^2}{16}\left[(m_2 - \gamma J_1)^2 - (m_1 + \gamma J_2)^2\right]^2 \\ &+& \frac{\tilde{\lambda}^2J^2}{16}\frac{x_1^2}{1 + (2\alpha -1)^2}\left[(m_2 - \gamma J_1)^2 - (m_1 + \gamma J_2)^2\right]^2 + 2J^2\bar{x}_1^2(2\alpha -1)^4 + P_6 + \cdots, \nonumber \end{eqnarray} where \begin{eqnarray} P_6 &\equiv& J^2\bar{x}_1^4 \Biggl[ -2(2\alpha -1)\bar{x}_1 + (2\alpha-1)\tilde{\lambda}(\gamma J)^2\bar{x}_2 + \bar{x}_1^2 + \frac{\tilde{\gamma}^4}{4}(2\alpha -1)^2 \nonumber \\ &-& \left( 2\alpha -1 + \tilde{\lambda} \gamma ( m_2J_1 + m_1J_2 ) + \frac{\tilde{\lambda}}{2}(m_1^2 - m_2^2) \right)^2 \Biggr]. \end{eqnarray} The second term is of order $\epsilon^2$ and the third term is of order $\epsilon^4$ for the leading part, while the fourth term and $P_6$ are of order $\epsilon^6$. Now we use the resulting expression up to order $\epsilon^4$ as well as up to order $\tilde{\lambda}^2$ \begin{equation} E^2 = P_0 - \frac{\tilde{\lambda}^2J^2}{16}\left[(m_2 - \gamma J_1)^2 - (m_1 + \gamma J_2)^2\right]^2( 1 - (2\alpha -1)^2) + \cdots \end{equation} to extract the energy of the solution as \begin{eqnarray} E &=& J + \frac{\lambda}{2J}\alpha(1-\alpha)(\gamma J + m_1 - m_2)^2 \label{eja} \\ &-& \frac{\lambda^2}{8J^3}\alpha(1-\alpha)\left[ \left((m_2 - \gamma J_1)^2 - (m_1 + \gamma J_2)^2 \right)^2 + \alpha(1-\alpha) (\gamma J + m_1 - m_2)^4\right] + \cdots. \nonumber \end{eqnarray} The second term of order $\tilde{\lambda}$, that is, the ``one-loop" energy correction agrees with the one-loop result of ref. \cite{FRT}, where the $\beta$-deformed Landau-Lifshitz action produced from ``fast-string" expansion of the string sigma model action is shown to have a rational solution with two unequal momenta and the one-loop anomalous dimension is computed also by solving the Bethe equation for the corresponding anisotropic spin chain. The undeformed limit $\gamma \rightarrow 0$ for (\ref{eja}) yields \begin{equation} E = J + \frac{\lambda}{2J}m(n-m) - \frac{\lambda^2}{8J^3}m(n-m)(n^2 -3nm + 3m^2) + \cdots, \label{emn}\end{equation} where the relations provided from (\ref{tam}) for the undeformed case \begin{equation} \frac{J_1}{J} = -\frac{m_2}{m_1 - m_2}, \hspace{1cm} \frac{J_2}{J} = \frac{m_1}{m_1 - m_2} \label{jra}\end{equation} have been used and the winding numbers $m_1, m_2$ have been replaced by $m_1 = n - m, \; m_2 = -m$. The reduced expression (\ref{emn}) including the ``one-loop" and ``two-loop" corrections was presented \cite{KMMZ} by analyzing the classical Bethe equation for the classical $AdS_5 \times S^5$ string sigma model as well as the Bethe equation for the spin chain in the SU(2) sector. On the other hand in ref. 
\cite{ART} a general class of rotating string solutions in the undeformed background was derived and the ``one-loop" correction in (\ref{emn}) was presented by solving the relations for the two spin sector \begin{equation} \mathcal{E}^2 = 2\sum_{i=1}^2\omega_i\mathcal{J}_i - \nu^2, \hspace{1cm} \sum_{i=1}^2\frac{\mathcal{J}_i}{\omega_i} = 1, \hspace{1cm} \sum_{i=1}^2m_i\mathcal{J}_i = 0, \end{equation} where $\omega_i^2 - m_i^2 = \nu^2\; (i=1,2)$ and the last equation indeed takes the same form as (\ref{tam}). Here we demonstrate that these relations also yield the ``two-loop" correction. Indeed the following large $\mathcal{J}$ expansion \begin{equation} \mathcal{E}^2 = \mathcal{J}^2 + \sum_{i=1}^2m_i^2\frac{\mathcal{J}_i}{\mathcal{J}} + \frac{1}{4\mathcal{J}^2}\left[ \left(\sum_{i=1}^2m_i^2\frac{\mathcal{J}_i}{\mathcal{J}}\right)^2 - \sum_{i=1}^2m_i^4\frac{\mathcal{J}_i}{\mathcal{J}} \right] + \cdots \end{equation} leads to \begin{equation} E = \sqrt{\lambda}\mathcal{E} = J + \frac{\lambda}{2J}\sum_{i=1}^2m_i^2 \frac{J_i}{J} - \frac{\lambda^2}{8J^3}\sum_{i=1}^2m_i^4\frac{J_i}{J} + \cdots. \label{emj}\end{equation} The substitution of $m_1 = n - m, \; m_2= -m$ and (\ref{jra}) into (\ref{emj}) reproduces (\ref{emn}). Following the argument of \cite{FRT}, the structure of the energy of a multi-spin solution can be captured as double expansions in $\tilde{\lambda}$ and $\tilde{\gamma}$. The smoothness of the deformation gives \begin{equation} E = \sqrt{\lambda}\mathcal{E}(\tilde{\gamma},\mathcal{J}) = \sqrt{\lambda}\left[ \mathcal{E}_0(\mathcal{J}) + \tilde{\gamma} f_1(\mathcal{J}) + \tilde{\gamma}^2 f_2(\mathcal{J}) + \tilde{\gamma}^3 f_3(\mathcal{J}) + \tilde{\gamma}^4 f_4(\mathcal{J}) + \cdots \right], \label{ex}\end{equation} while the energy for the undeformed case has the usual regular large $\mathcal{J}$ or small $\tilde{\lambda}$ expansion \begin{equation} E_0 = \sqrt{\lambda}\mathcal{E}_0(\mathcal{J}) = Jf_0(\tilde{\lambda}) = J(1 + c_1\tilde{\lambda} + c_2\tilde{\lambda}^2 + \cdots). \end{equation} Through $\tilde{\gamma} = \sqrt{\tilde{\lambda}}\gamma J$ the expansion (\ref{ex}) turns out to be \begin{equation} E = J\left[ f_0(\tilde{\lambda}) + \tilde{\lambda} \gamma Jf_1(\mathcal{J}) + \tilde{\lambda} (\gamma J)^2 \frac{f_2(\mathcal{J})}{\mathcal{J}} + \tilde{\lambda}^2 (\gamma J)^3f_3(\mathcal{J}) + \tilde{\lambda}^2 (\gamma J)^4\frac{f_4(\mathcal{J})}{\mathcal{J}} + \cdots \right]. \end{equation} If $f_1(\mathcal{J}), f_2(\mathcal{J})/\mathcal{J}, f_3(\mathcal{J}), f_4(\mathcal{J})/\mathcal{J}$ have the regular expansions in $1/\mathcal{J}^2 = \tilde{\lambda}$ as \begin{equation} f_1 = \sum_{k=0}^{\infty}c_k^1\tilde{\lambda}^k, \hspace{1cm} \frac{f_2}{\mathcal{J}} = \sum_{k=0}^{\infty}c_k^2\tilde{\lambda}^k, \hspace{1cm} f_3 = \sum_{k=0}^{\infty} c_k^3\tilde{\lambda}^k, \hspace{1cm} \frac{f_4}{\mathcal{J}} = \sum_{k=0}^{\infty}c_k^4\tilde{\lambda}^k, \end{equation} then $E$ also takes the following regular expansion \begin{eqnarray} E &=& J\Bigl[ 1 + \tilde{\lambda}\left(c_1 + c_0^1\gamma J + c_0^2(\gamma J)^2 \right) \nonumber \\ &+& \tilde{\lambda}^2 \left( c_2 + c_1^1\gamma J + c_1^2(\gamma J)^2 + c_0^3(\gamma J)^3 + c_0^4(\gamma J)^4 \right) + \cdots \Bigr]. \end{eqnarray} Hence we can see that the ``two-loop" part of order $\tilde{\lambda}^2$ contains the terms $(\gamma J)^n$ with $n \le 4$. 
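At one loop this structure can be read off directly from (\ref{eja}): expanding $(\gamma J + m_1 - m_2)^2$ gives \begin{equation} c_1 = \frac{1}{2}\alpha(1-\alpha)(m_1 - m_2)^2, \hspace{1cm} c_0^1 = \alpha(1-\alpha)(m_1 - m_2), \hspace{1cm} c_0^2 = \frac{1}{2}\alpha(1-\alpha), \end{equation} so that the order-$\tilde{\lambda}$ part contains only the terms $(\gamma J)^n$ with $n \le 2$, while the order-$\tilde{\lambda}^2$ part may contain $(\gamma J)^n$ up to $n = 4$.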
This behavior is indeed seen in our explicit ``two-loop" result \begin{eqnarray} E_2 &=& -\frac{\lambda^2}{8J^3}\alpha(1-\alpha)\Bigl[ \left((1-2\alpha)\gamma J + m_1 + m_2\right)^2(\gamma J + m_1 - m_2)^2 \nonumber \\ &+& \alpha(1-\alpha)(\gamma J + m_1 - m_2)^4\Bigr]. \end{eqnarray} \section{Conclusion} Analyzing the closed string motion in the $\gamma$-deformed $AdS_5 \times \tilde{S}^5$ background by the semiclassical approach, we have presented a transcendental equation which determines the energy of a solution describing a rotating and wound string with two unequal spins in $\tilde{S}^5$. By using an expansion procedure for the case of almost equal spins we have solved the transcendental equation to extract the string energy in terms of the angular momenta, the winding numbers and the deformation parameter. The ``one-loop" part of the string energy expanded in $\lambda/J^2$ has reproduced the one-loop anomalous dimension of long two-scalar operator which was found in ref. \cite{FRT} as a solution for the anisotropic Landau-Lifshitz and Bethe equations for the $\gamma$-deformed $\mathcal{N}=4$ SYM for the $SU(2)_{\gamma}$ sector. This agreement at the one-loop level confirms the gauge/string duality between the superstring theory in the $\gamma$-deformed $AdS_5 \times \tilde{S}^5$ background and the $\gamma$-deformed $\mathcal{N}=4$ SYM theory. We have observed that the ``two-loop" part of the string energy consists of the terms $(\gamma J)^n$ with $n \le 4$, and recovers the corresponding expected ``two-loop" string energy for the undeformed case when we take the undeformed limit $\gamma \rightarrow 0$. In order to confirm the gauge/string duality at the two-loop level it is desirable to construct some two-loop dilatation operator for the $\gamma$-deformed $\mathcal{N}=4$ SYM theory and compute the two-loop anomalous dimension of the relevant gauge invariant scalar operator.
\section{Introduction} \qquad The uncapacitated facility location (UFL) problem has received a great deal of attention due to its connections to operations research and infrastructure planning. In the UFL problem, we are given a set of potential facility locations $F$, each $i\in F$ with a facility cost $f_i$, a set of clients $C$ and a metric $d$ over $F\cup C$. We need to open a set of facilities $F'$ and assign each client to an opened facility, so that the sum of the total facility opening cost and the total assignment cost is minimized. The UFL problem is NP-hard, and the current best algorithm for it was designed and analyzed by Li \cite{ShiLi}, reaching an approximation ratio of 1.488. Another algorithm, by Byrka and parametrized by $\gamma$, achieved an optimal bifactor approximation ratio when $\gamma>1.6774$. All these algorithms are based on the algorithm of Chudak and Shmoys \cite{CS} (known as the Chudak-Shmoys algorithm) and a filtering technique of Lin and Vitter \cite{Lin}. The algorithms of Byrka and Li also utilize a (1.11,1.78) bifactor algorithm by Jain et al. \cite{Jain1}. On the negative side, Guha and Khuller \cite{Guha} showed that there is no 1.463-approximation unless \textbf{NP}$\subseteq$\textbf{DTIME}$(n^{O(\log\log n)})$. Sviridenko \cite{Svir} later strengthened this result so that it holds unless \textbf{P}=\textbf{NP}. Jain et al. \cite{Jain1} generalized this work and proved that there is no $(\gamma_f,\gamma_c)$ bifactor approximation algorithm with $\gamma_c<1+2e^{-\gamma_f}$ unless \textbf{NP}$\subseteq$\textbf{DTIME}$(n^{O(\log\log n)})$. We therefore refer to this result as the hardness curve of the UFL problem (notice that the hardness curve passes through the point (1.463,1.463)). The algorithm by Byrka \cite{Byrka1}\cite{Byrka2} is optimal for $\gamma_f>1.6774$, but for $\gamma_f<1.6774$ no known algorithm reaches this hardness curve. The main difficulty in tackling the problem is to understand the regularity and irregularity of the instance. An instance is said to be ``regular'' if all the facilities that serve a client in the optimal fractional solution to the LP, which is shown below, are at approximately the same distance from that client. In an irregular instance, by contrast, the facilities that serve a client in the optimal fractional solution vary in distance to that client. Note that irregularity is not an absolute notion; the degree to which an instance is irregular may differ from instance to instance. We now consider an adversary that produces instances to defeat our algorithms. It was shown that to defeat Byrka's algorithm for $\gamma>\gamma_0$ (Section \ref{Byrka}) or Li's algorithm (Section \ref{Li}), the adversary would produce a completely regular instance. In the case of Byrka's algorithm, this means that the case $d_{ave}^C(j)=d_{ave}^D(j)$ makes the bounds tight. In the case of Li's algorithm, it means that the best strategy for an adversary is to render the characteristic function a threshold function. On the other hand, however, the regular case is also the easiest case to deal with, as is indicated by Section 3.1 in Byrka's paper \cite{Byrka2}. Our failure to come up with an algorithm that does better than Li's indicates the existence of a worst case for Li's algorithm that is a very irregular instance. This research serves as an attempt to improve over the 1.488-approximation algorithm of Li.
He allowed the algorithm to make a random choice of $\gamma$, the parameter in Byrka's algorithm, and in combination with the $(1.11,1.78)$ JMS algorithm, obtained the 1.488-approximation algorithm. In his paper, he developed a ``zero-sum game'' whose value equals the approximation ratio of the algorithm and found the best strategy for the adversary, which made it easy to analyze the algorithm. This best strategy turned out to be a very ``regular'' instance which would be rather easy to handle if the parameter $\gamma$ were chosen optimally instead of randomly. We also observed that the so-called ``zero-sum game'' was in fact not a zero-sum game. Instead, it was a vector game at its core and thus the Minimax Principle does not apply. Based on these observations, we attempted to improve the algorithm by making an optimal choice of $\gamma$ instead of a random choice. Our analysis of the algorithm used Blackwell's Approachability Theorem. Unfortunately, our computer simulation showed that our algorithm did not actually improve on Li's algorithm. However, this failure might bring more insight into either the UFL problem or the theory of vector games. \section{Byrka's Algorithm and Analysis} \label{Byrka} \quad In this section, we review the algorithm by Byrka \cite{Byrka2}; our notation follows that of Li \cite{ShiLi}. The algorithm starts by solving the following linear program for the UFL problem: \begin{eqnarray*} \min &&\quad \sum_{i\in F, j\in C} d(i,j)x_{i,j}+\sum_{i\in F}f_iy_i\quad s.t.\\ &&\quad \sum_{i\in F}x_{i,j}=1\qquad \forall j\in C\\ &&\quad x_{i,j}\leq y_i\qquad \forall i\in F,j\in C\\ &&\quad x_{i,j},y_i\geq 0 \qquad \forall i\in F,j\in C \end{eqnarray*} Throughout this article, we will assume that we have solved the above LP and obtained an optimal solution $x^*$ and $y^*$. For each client $j$, the set of facilities that serve it a positive amount is denoted $F_j$. The algorithm then applies a filtering process with parameter $\gamma$: we create a ``filtered'' solution $\overline{x}$, $\overline{y}$ as follows: $\overline{y}=\gamma y^*$. We may split those facilities with $\overline{y}>1$ into two facilities. Then, since the $\overline{y}$ are fixed, we may reassign the $\overline{x}$ values in a greedy manner: for each client $j$, sort the facilities by their distances to $j$ and then, for each facility $i$ in order, assign $\overline{x}_{ij}=\overline{y_i}$ until the sum of all assignments of client $j$ reaches 1. W.l.o.g., we may assume that the solution obtained in this way is a complete solution, that is, $\overline{x}_{ij}$ is either $\overline{y_i}$ or 0. This can be done via a facility splitting argument; a more detailed one can be found in \cite{}. For each client $j\in C$, after the filtering process, we may distinguish between two kinds of facilities with respect to that client: close facilities and distant facilities. We say a facility $i$ is a close facility of client $j$ if $\overline{x}_{ij}>0$. We say a facility $i$ is a distant facility of client $j$ if $\overline{x}_{ij}=0$ but $x^*_{ij}>0$. We denote the sets of close and distant facilities of client $j$ by $F_j^C$ and $F_j^D$ respectively. Now we define the distance $d(j, F')$ from a client $j$ to a set of facilities $F'\subseteq F$ as the average distance to the facilities in $F'$ with respect to the weights $\overline{y}$. We define $d_{ave}^C(j)=d(j,F_j^C)$, $d_{ave}^D(j)=d(j,F_j^D)$ and $d_{ave}(j)=d(j,F_j)$. We also define $d_{max}^C(j)$ as the maximum distance from $j$ to a facility in $F_j^C$.
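To make the filtering step concrete, the following is a minimal sketch of the $\gamma$-scaling and greedy reassignment described above (our own illustration, not the implementation of \cite{Byrka2}; the facility-splitting step is omitted, so the last facility used by a client may receive a fractional assignment).
\begin{verbatim}
import numpy as np

def filter_solution(d, y_star, gamma):
    """Scale y by gamma and greedily reassign each client to its nearest
    facilities; d has shape (facilities, clients).  Facility splitting is
    omitted, so the last facility a client uses may be assigned fractionally."""
    y_bar = np.minimum(gamma * y_star, 1.0)
    x_bar = np.zeros_like(d)
    for j in range(d.shape[1]):
        remaining = 1.0
        for i in np.argsort(d[:, j]):          # facilities by increasing distance to j
            if remaining <= 0.0 or y_bar[i] <= 0.0:
                continue
            x_bar[i, j] = min(y_bar[i], remaining)
            remaining -= x_bar[i, j]
    return x_bar, y_bar

def client_stats(d, x_star, x_bar, j):
    """Close / distant facility sets of client j and the distances used above."""
    close = np.where(x_bar[:, j] > 0)[0]
    distant = np.where((x_star[:, j] > 0) & (x_bar[:, j] == 0))[0]
    d_ave_C = np.average(d[close, j], weights=x_bar[close, j])
    d_max_C = d[close, j].max()
    return close, distant, d_ave_C, d_max_C
\end{verbatim}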
Byrka's algorithm follows the idea developed by Chudak and Shmoys, which first selects some clients as cluster centers according to a criterion quantity. However, the criterion quantity chosen here is $K(j)=d_{max}^C(j)+d_{ave}^C(j)$. The process of choosing cluster centers is as follows. We consider each client in the order of increasing criterion quantity $K$. Suppose we are now at client $j$; if it is not assigned to any cluster center, we make it a cluster center, and for all those clients $j'$ that satisfy $F^C_{j'}\cap F^C_j\neq \emptyset$, we assign client $j'$ to the cluster center $j$. After this process, the clients are divided into several clusters. Call the set of all cluster centers $C'$. We now round the solution $(\overline{x},\overline{y})$ randomly. For each cluster center, we open exactly one of its close facilities randomly with probabilities $\overline{y}_i$. For each facility that is not a close facility of any cluster center, open it independently with probability $\overline{y}_i$. Then connect each client $j$ to its closest open facility. Let $C_j$ be the connection cost of client $j$. The expected facility opening cost of the algorithm is just $\gamma$ times the facility cost of the fractional solution. The key to bounding the expected total connection cost is to bound $C_j$ under the condition that there is no open facility in $F_j$. Byrka was able to show that $d(j,F_{j'}^C\backslash F_j)\leq d_{ave}^D(j)+d_{max}^C(j)+d_{ave}^C(j)$ and thus obtain the bound $\E[C_j]\leq (1-e^{-1}+e^{-\gamma})d_{ave}^C(j)+(e^{-1}+e^{-\gamma})d_{ave}^D(j)$. But the connection cost of the optimal fractional solution is $d_{ave}(j)=\frac{1}{\gamma}d_{ave}^C(j)+\frac{\gamma-1}{\gamma}d_{ave}^D(j)$. By computing the maximum ratio between these two costs, we have an optimal bifactor algorithm for $\gamma>\gamma_0\approx 1.67736$ that achieves a bifactor approximation ratio of $(\gamma,1+2e^{-\gamma})$. Also, by taking $\gamma=\gamma_0$ and combining with the $(1.11,1.7764)$-bifactor approximation by Jain, Mahdian and Saberi, we can get a 1.50-approximation algorithm for the UFL problem. In his paper \cite{Byrka2}, he also pointed out that making $\gamma$ random instead of fixed might improve the approximation ratio, and that is the basis of Li's algorithm and analysis.
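A compact sketch of the clustering and randomized rounding steps just described (schematic only, our illustration; \texttt{close[j]} denotes the set of close facilities of client $j$ and \texttt{K[j]} the criterion $d_{max}^C(j)+d_{ave}^C(j)$, computed as above, and ties are ignored):
\begin{verbatim}
import numpy as np

def cluster_and_round(close, K, y_bar, rng=np.random.default_rng(0)):
    """Greedy clustering by increasing K(j), then randomized rounding:
    one close facility per cluster center, every other facility opened
    independently with probability y_bar[i]."""
    clients = sorted(close, key=lambda j: K[j])
    centers, assigned = [], {}
    for j in clients:
        if j in assigned:
            continue
        centers.append(j)
        assigned[j] = j
        for jp in clients:                       # clients sharing a close facility join j
            if jp not in assigned and close[j] & close[jp]:
                assigned[jp] = j
    opened, in_cluster = set(), set()
    for j in centers:                            # open exactly one close facility per center
        fac = list(close[j])
        p = np.array([y_bar[i] for i in fac])
        opened.add(rng.choice(fac, p=p / p.sum()))
        in_cluster |= close[j]
    for i in range(len(y_bar)):                  # remaining facilities opened independently
        if i not in in_cluster and rng.random() < y_bar[i]:
            opened.add(i)
    return opened, assigned
\end{verbatim}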
\section{Li's Algorithm and Analysis} \label{Li} Li's algorithm is based on Byrka's algorithm, making $\gamma$ random; yet the analysis is quite involved. Li's analysis utilized an important concept called the characteristic function (see Definition 15 in Li's paper \cite{ShiLi}). \begin{defn}[Definition 15 in Li's paper] Given a UFL instance and its optimal fractional solution $(x,y)$, the characteristic function $h_j:[0,1]\rightarrow \mathbb{R}$ of some client $j\in C$ is defined as follows. Let $i_1,i_2,\cdots,i_m$ be the facilities in $F_j$, in the non-decreasing order of distances to $j$. Then $h_j(p)=d(i_t,j)$, where $t$ is the minimum number such that $\sum_{s=1}^ty_{i_s}\geq p$. The characteristic function of the instance is defined as $h=\sum_{j\in C}h_j$. \end{defn} It is not hard to see that the characteristic function is a non-decreasing piece-wise constant function. Li also came up with a better bound on the connection cost, which is Lemma 12 in his paper \cite{ShiLi}. \begin{lemma}[Lemma 12 in Li's paper] For some client $j\notin C'$, let $j'$ be the cluster center of $j$. We have, $$ d(j,F_{j'}^C\backslash F_j)\leq (2-\gamma)d_{max}^C(j)+(\gamma-1)d_{ave}^D(j)+d_{max}^C(j')+d_{ave}^C(j'). $$ \end{lemma} With this bound, he was able to show the following bound on the expected connection cost. \begin{lemma}[Lemma 20 in Li's paper] The expected cost of the integral solution is $$ \E[C]\leq \int_0^1h(p)e^{-\gamma p}\gamma \dd p+e^{-\gamma}(\gamma\int_0^1h(p)\dd p+(3-\gamma)h(\frac{1}{\gamma})). $$ \end{lemma} To come up with an explicit distribution for $\gamma$, Li introduced a zero-sum game. We can scale the instance so that $\int_0^1h(p)\dd p=1$. Let $$\alpha(\gamma,h)=\int_0^1h(p)e^{-\gamma p}\gamma \dd p+e^{-\gamma}(\gamma+(3-\gamma)h(\frac{1}{\gamma})).$$ The game is between an algorithm designer $A$ and an adversary $B$. The strategy of $A$ is a pair $(\mu,\theta)$, where $\theta$ is the probability of playing the JMS algorithm and $\mu$ is $1-\theta$ times a probability density function for $\gamma$. The strategy for $B$ is a non-decreasing piece-wise constant function $h:[0,1]\rightarrow \mathbb{R}_{\geq 0}$ such that $\int_0^1h(p)\dd p=1$. The value of the game, when the strategy of $A$ is $(\mu,\theta)$ and the strategy of $B$ is $h$, is defined as: $$ \nu(\mu,\theta,h)=\max\{\int_1^\infty \gamma\mu(\gamma)\dd \gamma+1.11\theta,\int_1^\infty \alpha(\gamma,h)\mu(\gamma)\dd \gamma+1.78\theta\} $$ A threshold function $h_q:[0,1]\rightarrow \mathbb{R},0\leq q < 1$ is defined as: $$ h_q(p)= \begin{cases} 0 &\quad p\leq q\\ \frac{1}{1-q} & \quad p>q \end{cases} $$ Li was able to show that there is a best strategy for $B$ that is a threshold function. From this fact he was able to find the optimal strategy for $A$: with probability $\theta\approx 0.2$, run the JMS algorithm; with probability about 0.5, run Byrka's algorithm with parameter $\gamma_1\approx 1.5$; with the remaining probability, run Byrka's algorithm with parameter $\gamma$ selected uniformly between $\gamma_1$ and $\gamma_2\approx 2$. A very refined analysis by Li proved that this distribution achieves an approximation ratio of 1.4879. \section{An Attempt to Improve Li's Algorithm} \quad In this section, we present our attempt to improve over Li's algorithm. Although computer simulation illustrates that this approach might not actually work, it does provide some insights. It also provides a framework for designing better approximation algorithms. Li's analysis showed that there is a best strategy for $B$ which is a threshold function, corresponding to a very regular instance. But if the instance is regular, we might not need to randomly pick $\gamma$. It might be better if we examine the instance carefully and find the best $\gamma$ for it. This idea makes use of the fact that there is an order in the game: the adversary first picks an instance and then the algorithm designer runs his algorithm. That means the algorithm designer is able to first examine the instance and then make his decision in choosing the best parameter $\gamma$, which might help improve the approximation ratio. Our method of choosing the best parameter $\gamma$ is as follows: first discretize the region $[1,M]$, where $M$ is a sufficiently large constant, into many possible choices for the parameter $\gamma$. Then for each choice of $\gamma$, find the corresponding best $\theta$ so that the balanced approximation ratio is smallest. Then choose the $\gamma$ that corresponds to the smallest approximation ratio, and the corresponding $\theta$ that achieves it.
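When the adversary is restricted to threshold functions, the quantity $\alpha(\gamma,h_q)$ used in this search has a simple closed form, since $\int_0^1h_q(p)e^{-\gamma p}\gamma \dd p=\frac{e^{-\gamma q}-e^{-\gamma}}{1-q}$. A minimal sketch (our illustration):
\begin{verbatim}
import math

def alpha_threshold(gamma, q):
    """alpha(gamma, h_q) for the threshold function h_q (0 <= q < 1);
    h_q(1/gamma) is 1/(1-q) if 1/gamma > q, and 0 otherwise."""
    integral = (math.exp(-gamma * q) - math.exp(-gamma)) / (1.0 - q)
    h_at = (1.0 / (1.0 - q)) if 1.0 / gamma > q else 0.0
    return integral + math.exp(-gamma) * (gamma + (3.0 - gamma) * h_at)
\end{verbatim}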
It is obvious that this approach achieves a performance at least as good as Li's algorithm, since choosing the parameter randomly cannot do better than picking it optimally. Another very important reason why this approach might do better is that the game defined by Li is not strictly a zero-sum game. According to Von Neumann's Minimax Principle, this would not be helpful if it were a zero-sum game, since a best strategy for $B$ would not be influenced by the order of the players. But the game is not a zero-sum game, since the value of the game is a maximum of two quantities. We shall now restate this game as a vector game. To make it easier to state and, more importantly, easy to simulate with a computer, we shall apply discretization to the game. Notice that the game is in fact continuous and its definition is almost the same as what we are going to present here. We shall first discretize the region in which $\gamma$ might take values into very small pieces and call these discretized values $\gamma_1,\gamma_2,\cdots$. These are the pure strategies of $A$ when he runs Byrka's algorithm. Another pure strategy of $A$ is the JMS algorithm. So the strategy set of $A$ can be written as $\{JMS,\gamma_1,\gamma_2,\cdots\}$, and $A$ is allowed to play a mixed strategy. The pure strategy of $A$ is denoted by $(\theta,\gamma)$, where $\theta$ is either 1 (indicating the JMS algorithm) or 0 (in which case $A$ runs Byrka's algorithm with parameter $\gamma$). Now we define the strategy of $B$. Since every characteristic function can be written as a linear combination of a finite number of threshold functions, we may think of the strategy of $B$ as a mixed strategy over all threshold functions. We can therefore discretize the region $[0,1]$ into very small pieces $p_1,p_2,\cdots$, and the strategy set of $B$ is thus $\{p_1,p_2,\cdots\}$. The value of the game is in fact a vector $\alpha$, which is defined as $\alpha(\theta, \gamma, p)=(C_f,C_c)(\theta,\gamma,p)=((1-\theta)\gamma+1.11\theta,(1-\theta)\alpha(\gamma,h_p)+1.78\theta)$. We should stress here that this value is defined when both players play pure strategies, thus $\theta\in \{0,1\}$. If players play mixed strategies, then we define the value of the game in the intuitive way, that is, as the expected vector value. The approximation ratio of this algorithm is determined by the region which is approachable by $A$. That is, we seek the smallest $\beta$ such that the 2-D region defined by the half-planes $x\leq \beta,y\leq \beta$ is approachable. In this case, $\beta$ will be the approximation ratio of this algorithm. To find such a $\beta$, we may first guess how small it can be, and suppose our guess is $\beta_0$. Our task at this moment is to show that the region defined by $\beta_0$ is approachable. According to Blackwell's Approachability Theorem, it suffices to prove the approachability of all the supporting hyperplanes, which in this case are just all the lines that pass through the point $(\beta_0,\beta_0)$ with non-positive slope. We may assume that each of the lines is defined by the equation with parameter $0\leq \phi\leq 1$: $\phi x+(1-\phi)y=\beta_0$; then, if we only consider the game with respect to this line, we get a zero-sum game with value $\phi C_f+(1-\phi)C_c-\beta_0$. We call this quantity the value of the game with respect to the line parametrized by $\phi$.
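The feasibility test described in the next paragraph can be written as a small linear program. Below is a rough sketch (our illustration only; it reuses the closed form of $\alpha(\gamma,h_q)$ given above, and the coarse grids and the guess $\beta_0=1.488$ are placeholders rather than the 1000-point grids of the actual simulation). It asks whether $B$ has a mixed strategy over threshold points $p$ that keeps the value of the game with respect to the line parametrized by $\phi$ strictly positive against every pure strategy of $A$.
\begin{verbatim}
import math
import numpy as np
from scipy.optimize import linprog

def alpha_threshold(gamma, q):
    integral = (math.exp(-gamma * q) - math.exp(-gamma)) / (1.0 - q)
    h_at = (1.0 / (1.0 - q)) if 1.0 / gamma > q else 0.0
    return integral + math.exp(-gamma) * (gamma + (3.0 - gamma) * h_at)

def line_not_approachable(phi, beta0, gammas, ps):
    """True if B has a mixed strategy q over threshold points making
    phi*C_f + (1-phi)*E_q[C_c] - beta0 > 0 for every pure strategy of A."""
    strategies = [("JMS", None)] + [("BYRKA", g) for g in gammas]
    m = len(ps)
    A_ub, b_ub = [], []
    for kind, g in strategies:
        if kind == "JMS":
            cf, cc = 1.11, np.full(m, 1.78)
        else:
            cf, cc = g, np.array([alpha_threshold(g, p) for p in ps])
        # constraint: phi*cf + (1-phi)*sum_p q_p*cc_p - beta0 >= s
        A_ub.append(np.concatenate([-(1.0 - phi) * cc, [1.0]]))
        b_ub.append(phi * cf - beta0)
    A_eq = [np.concatenate([np.ones(m), [0.0]])]          # q sums to 1
    res = linprog(c=np.concatenate([np.zeros(m), [-1.0]]),  # maximize slack s
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.success and -res.fun > 1e-9

# coarse illustration only: True means the guess beta0 fails for some line
gammas = np.linspace(1.01, 3.0, 60)
ps = np.linspace(0.0, 0.95, 40)
print(any(line_not_approachable(phi, 1.488, gammas, ps)
          for phi in np.linspace(0, 1, 21)))
\end{verbatim}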
When $\phi$ is determined, the remaining zero-sum game can be solved by any standard method, and in this research we use linear programming to handle it. We seek a mixed strategy for $B$ such that for any pure strategy of $A$, the value of the game with respect to the line parametrized by $\phi$ is larger than 0. If we can find such a strategy for $B$, then the line parametrized by $\phi$ is not approachable. In the language of LP, the line parametrized by $\phi$ is not approachable if and only if the corresponding LP stated above has at least one solution that corresponds to a mixed strategy for $B$. We may therefore write out the LP explicitly and test whether or not a solution can be found. If our guess for $\beta_0$ is indeed correct (notice it might not be the smallest of such correct guesses), Blackwell's Approachability Theorem tells us that for any $0\leq\phi\leq 1$, the line parametrized by $\phi$ is approachable. On the other hand, if we check through all possible $0\leq\phi\leq 1$ and show that every one of them is approachable, we may conclude that the region defined by $\beta_0$ is approachable. To make the problem tractable for our computer, we discretize $\phi$ as well. Then we enumerate each possible value of $\phi$ and see whether it is approachable. In our simulation, we discretized each of $\gamma$, $p$ and $\phi$ into 1000 possible values and tested for approachability with different $\beta$. Unfortunately, the smallest such $\beta$ found was $1.4880$, which showed that our attempt to improve over Li's algorithm did not work. However, the result found by this simulation is interesting: the strategy for $B$ turned out to be very complicated. It had a value of around 0.6 for $p=0$, and very tiny values ($\leq 0.02$) for all other values of $p$. But we have to be careful in understanding this result, because the coefficients of the $h_p$ are determined by the following equation: $$ h=\sum_{i=0}^{m-1}(c_{i+1}-c_i)(1-p_i)h_{p_i} $$ Therefore, what the result really says is that there is an average distance of 0.6 within which there are no facilities, but outside this region, facilities seem to be distributed very uniformly (with respect to distance) around the client. \section{Implication of Our Results} Although our attempt failed to improve over Li's algorithm, it has interesting implications for both the UFL problem and vector games. \subsection{Implication for the UFL problem} The result of our linear programming showed the existence of a highly irregular instance which can defeat the filtering process and our attempt to take the best $\gamma$ value. In other words, the filtering process alone cannot capture every aspect of the irregularity of the instance. If we want to design a better algorithm, we might need a better method to capture the irregularity of the problem. \subsection{Implication for vector games} As a byproduct, another interesting implication of our result lies in the theory of vector games. At present, no analogue of the Minimax Theorem for zero-sum games is known for vector games. However, our failure tends to show that the order of playing in vector games might not be important, under some conditions. Admittedly, the vector game in our research is special in that (1) the first coordinate has nothing to do with the strategy of $B$ and (2) we only care about the maximum of the two coordinates. At this point, we do not know which aspects of this game lead to the fact that the order of playing is not important.
Neither do we know whether a general conclusion exists for vector games. \section{Acknowledgements} I thank my instructor Li for helpful discussions. \bibliographystyle{plain}
\section{Introduction} Coronal mass ejections (CMEs) are energetic expulsions of plasma from the solar corona that are driven by the release of magnetic energy typically in the range of $10^{32-33}$ ergs. The majority of CMEs originate from the eruption of pre-existing large-scale helmet streamers \citep[]{WM_hundhausen93}. Less common fast CMEs typically come from smaller, more concentrated locations of magnetic flux referred to as active regions. In this case, the CMEs often occur shortly after the flux has emerged at the photosphere, but can also happen even as the active region is decaying. While CMEs occur in a wide range of circumstances, all CMEs originate above photospheric magnetic polarity inversion lines (neutral lines), which exhibit strong magnetic shear. Shear implies that the magnetic field has a strong component parallel to the photospheric line that separates magnetic flux of opposite sign, and in this configuration, the field possesses significant free energy. In contrast, a potential field runs perpendicular to the inversion line and has no free energy. There is enormous evidence for the existence of highly sheared magnetic fields associated with CMEs and large flares. At the photosphere, magnetic shear is measured directly with vector magnetographs \citep[e.g.][]{WM_hagyard84, WM_zirin93, WM_falconer2002, WM_yang2004, WM_liu2005}. Higher in the atmosphere, the magnetic field is difficult to measure directly, but its direction may be inferred from plasma structures formed within the field. Seen in chromospheric H$\alpha$ absorption, filaments form only over photospheric inversion lines \citep[]{WM_zirin83} along which the magnetic field is nearly parallel \citep[]{WM_leroy89}. Fibrils and H$\alpha$ loops that overlay photospheric bipolar active regions are also indicative of magnetic shear \citep[]{WM_foukal71}. Comparisons between vector magnetograms and H$\alpha$ images show that the direction of the sheared photospheric magnetic field coincides with the orientation of such fibril structures \citep[]{WM_zhang95}. Higher in the corona, evidence of magnetic shear is found in loops visible in the extreme ultraviolet \citep[]{WM_liu2005} and X-ray sigmoids \citep[]{WM_moore2001}. These structures run nearly parallel to the photospheric magnetic inversion line prior to CMEs, and are followed by the reformation of closed bright loops that are much more potential in structure. Finally, observations by the Transition Region and Coronal Explorer (TRACE) show that 86 percent of two-ribbon flares show a strong-to-weak shear change of the ribbon footpoints that indicates the eruption of a sheared core of flux \citep{WM_su2007}. Sheared magnetic fields are at the epicenter of solar eruptive behavior. Large flares are preferentially found to occur along the most highly sheared portions of magnetic inversion lines \citep[]{WM_hagyard84, WM_zhang95}. More recent analysis by \citet{WM_schrijver2005} found that shear flows associated with flux emergence drove enhanced flaring. Similarly, active region CME productivity is also strongly correlated with magnetic shear as shown by \citet{WM_falconer2001, WM_falconer2002, WM_falconer2006}. It is not coincidental that large flares and CMEs are strongly associated with filaments, which are known to form only along sheared magnetic inversion lines \citep[]{WM_zirin83}. The buildup of magnetic shear is essential for energetic eruptions, and for this reason, it is of fundamental importance to understanding solar activity. 
Currently the majority of CME initiation models rely on the application of artificially imposed shear flows. Examples of such models include \citet{WM_antiochos99, WM_mikic94, WM_guo98, WM_amari2003}. Until recently, there was no theory to account for these large scale shear flows. In this paper, we discuss a series of simulations that illustrate a physical process by which these shear flows are self-organized in emerging magnetic fields. We will give a close comparison of these simulations in the context of new observations that make a more complete and compelling picture of a fundamental cause of eruptive solar magnetic activity. \section{Simulations} \begin{figure*}[ht!] \begin{center} \includegraphics[angle=0,scale=0.85]{f01.ps} \end{center} \caption{Magnetic flux emerging from a horizontal layer into the corona in the form of an axisymmetric arcade. The top row illustrates the time evolution of the horizontal shear velocity, $U_x$, shown in color with field lines confined to the plane drawn black. The bottom row shows the evolution of fully three-dimensional magnetic field lines viewed from above projected onto the horizontal $x-y$ plane. The footpoints of the field lines at the photosphere are shown as black dots. The shear flow is clearly manifest in the footpoint motion. The magnetic field near the center of the arcade evolves to be nearly parallel to the inversion line, while field lines near the periphery of the system are more nearly orthogonal.} \label{layer} \end{figure*} Shearing motions and magnetic field alignment with the polarity inversion line, so frequently observed in active regions, are readily explained as a response to the Lorentz force that arises when magnetic flux emerges in a gravitationally stratified atmosphere. The motions take the form of large-amplitude shear Alfv\'en waves in which the magnetic tension force drives horizontal flows in opposite directions across the polarity inversion line. The physical process was first shown by \citet{WM_manchester2000a, WM_manchester2000b, WM_manchester2001} and found in later simulations by \citet{WM_fan2001, WM_magara2003, WM_archontis2004}. As will be discussed in greater detail, what ultimately produces the shearing Lorentz force is the nonuniform expansion of the emerging magnetic field. The shearing process was first simulated by \citet{WM_manchester2001}, with a two-and-a-half-dimensional (2.5D) simulation of flux emergence from a horizontal magnetic layer placed 2-3 pressure scale heights below the photosphere. The crucial aspect of this simulation is that the magnetic field is initially in a {\it sheared} configuration oriented at a 45 degree angle to the plane of variation. In this case, for spatial and temporal variations of the instability described by $e^{i(\bf{k} \cdot r - \omega t)}$, $\bf{k}$ is oblique to the magnetic field ($\bf{B}$). This distinguishes the mode of instability from a Parker mode \citep[]{WM_parker66} in which $\bf{k}$ is parallel to $\bf{B}$ or an interchange mode in which $\bf{k}$ is perpendicular to $\bf{B}$. With $\bf{k}$ oblique to $\bf{B}$, the instability can be thought of as a mixed mode, \citep[e.g.][]{WM_cat90, WM_matsumoto93, WM_kusano98}. In this case, there will be a component of the Lorentz force out of the plane of variation any where the magnetic component out of the plane ($B_x$ in the chosen coordinates) is not constant along field lines. 
In the simulation by \citet{WM_manchester2001}, it was shown that the departure of $B_x$ from constant values along field lines produced strong shear flows in the emerging flux. When the magnetic flux rises forming an arcade as shown in Figure \ref{layer}, expansion causes the magnitude of $B_x$ to decrease in the arcade. The gradient in $B_x$ along field lines results in the Lorentz force that drives the shear flows. Examining the expression for the $x$ component of the Lorentz (tension) force, $F_{x} = \frac{1}{4 \pi} \nabla B_{x} \cdot ({\bf B_y + B_z})$, we see the reason for the shear. The gradient in $B_x$ is negative moving up the arcade in the direction of ${\bf B}$ on one side of the arcade, while on the opposite side, the gradient of $B_x$ along ${\bf B}$ is positive. \begin{figure*}[ht!] \begin{center} \includegraphics[angle=0,scale=0.60]{f02.ps} \end{center} \caption{ Partial eruption of a three-dimensional emerging magnetic flux rope. Panel (a) illustrates the Lorentz force with magnetic stream lines (confined to the $y-z$ plane at the central cross section of the rope) shown with white lines while black lines show the direction of the current density. The existence of the Lorentz force $(\bf{j \times B})$ out of the plane is clearly seen where the field and current density are oblique. The magnetic field crosses $\bf{j}$ in opposite directions on opposite sides of the rope producing the shear flow. The large vertical gradient in the axial component (shown in color) produces the horizontal cross-field current. Panel (b) shows the horizontal shear velocity, which clearly occurs where $\bf{j}$ and $\bf{B}$ are non-parallel. The bottom panels show the resulting eruption of the flux rope. At the photosphere, the vertical magnetic field strength is shown in color and the horizontal direction is shown with vectors in Panel (c). Here, the current sheet (where the magnetic field is reconnecting) is shown with a gray isosurface, along with magnetic field lines entering the sheet. Coronal field lines and the current sheet form a sigmoid structure that runs nearly parallel to the highly sheared photospheric magnetic field. Panel (d) shows the flux rope splitting apart at the current sheet. The upper part of the rope erupts into the corona (blue and red lines) while the lower part forms deep dips that remain just above the photosphere filled with dense plasma.} \label{rope} \end{figure*} These shear flows are illustrated in color in the top row of Figure \ref{layer}. Here, we see the persistence of shear flows as the magnetic field (drawn with black lines) expands in the atmosphere. The magnitude of the shear flow velocity reaches a peak of 15 km/s high in the arcade, 3-4 km/s at the photosphere and approximately 1 km/s below the photosphere. The maximum shear speed typically reaches half the value of the local Alfv\'en speed. The bottom row of Figure \ref{layer} shows the time evolution of magnetic field lines integrated in three-dimensional space, and shown from above projected onto the $x-y$ plane. Footpoints of the field lines are shown with black dots. These panels show the shear displacement of the footpoints and how the magnetic field evolves to be parallel to the polarity inversion line. Note that the field is most highly sheared in close proximity to the polarity inversion line, and grows more nearly perpendicular to the line with greater distance from it. 
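As an aside, the sign structure of this tension force can be made explicit by evaluating $F_x$ for any 2.5D field on a $(y,z)$ grid. The following toy construction (ours, not the initial state used in the simulations) illustrates only the formula:
\begin{verbatim}
import numpy as np

def shear_force_Fx(Bx, By, Bz, dy, dz):
    """Out-of-plane tension force F_x = (B_y dBx/dy + B_z dBx/dz)/(4*pi)
    for a 2.5D field B(y, z); arrays are indexed as [iy, iz]."""
    dBx_dy, dBx_dz = np.gradient(Bx, dy, dz)
    return (By * dBx_dy + Bz * dBx_dz) / (4.0 * np.pi)

# toy sheared arcade: a potential in-plane arcade plus an axial component
# that weakens with height z (divergence-free by construction)
y, z = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(0, 2, 200), indexing="ij")
k = np.pi / 2
Bx = np.exp(-k * z)                       # axial (sheared) component
By = np.cos(k * y) * np.exp(-k * z)
Bz = -np.sin(k * y) * np.exp(-k * z)
Fx = shear_force_Fx(Bx, By, Bz, y[1, 0] - y[0, 0], z[0, 1] - z[0, 0])
# Fx changes sign across y = 0 (the inversion line), i.e. it drives
# oppositely directed flows on the two sides of the arcade
\end{verbatim}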
This same shearing process is found in more complex simulations such as three-dimensional emerging flux ropes \citep[]{WM_fan2001, WM_magara2003, WM_manchester2004, WM_archontis2004}. Panel (a) of Figure \ref{rope} shows in greater detail than before the structure of the magnetic field and current systems in the emerging flux rope of \citet{WM_manchester2004} that leads to the Lorentz force, which drives the shear flows. Here, the axial field strength of the rope is shown in color, while the direction of the magnetic field and current density (confined to the plane) are shown with white and black lines respectively on the vertical plane that is at the central cross section of the flux rope. The large expansion of the upper part of the flux rope produces a vertical gradient in the axial component of the magnetic field $(B_x)$ that results in a horizontal current $(j_y)$. The magnetic field crosses the current in opposite direction on opposite sides of the flux rope. This produces the Lorentz force out of the plane that reverses direction across the flux rope, and drives the horizontal shear flow. Panel (b) shows the magnitude of these shear flows on the same central cross section of the flux rope. The shear flow is greatest precisely where the field is most nearly perpendicular to the current reaching a magnitude of approximately 30 km/s. At this time, the flux rope has risen twice as high as the flux that emerged from the magnetic layer and similarly the shear flow in the rope is also twice as fast. \begin{figure*}[ht!]\begin{center}\includegraphics[angle=0,scale=0.80]{f03.ps} \end{center} \caption{Results of the magnetic arcade simulation. The initial state is shown on the left with the three-dimensional arcade field lines drawn in black, and the density and numerical grid are shown on the outer boundary of the computational domain. The middle and right panels, respectively, show the shear and vertical velocities in color at the central vertical plane of the simulation. Field lines (confined to the plane) are drawn white. Note the much larger scale of the arcade compared to the flux emergence simulations along with the much higher velocities.} \label{arcade} \end{figure*} In the case of \citet{WM_manchester2004} the flux rope erupted, which was not found in earlier simulations such as \citep[]{WM_fan2001}. This eruption resulted from high speed shear flows close to the neutral line that formed a highly sheared core that lifted off, and was followed by magnetic reconnection. The bottom panels show the magnetic and current structure at the time of the eruption. Panel (c) gives a view of the system that shows the vertical magnetic field in color and the horizontal field with arrows at the photosphere. A current sheet is illustrated with an isosurface of current density. The sigmoid shape of the sheet reflects the shape of both the coronal field lines passing close to the sheet as well as the highly sheared field at the photosphere. This model strongly suggests that X-ray sigmoids seen prior to and during CMEs are indicative of a highly sheared magnetic field geometry. Panel (d) shows that at this current sheet, reconnection is taking place that separates the upper part of the rope that is erupting from the lower part that has V-shaped field lines full of dense plasma that remains attached to the photosphere. Such flux separations in CMEs have been suggested by \citet{WM_gilbert2000} based on observations of filament eruptions. Finally we discuss a numerical simulation of an arcade eruption. 
Finally, we discuss a numerical simulation of an arcade eruption. This simulation is essentially identical to the simulation of \citet{WM_manchester2003}, with the exception that it is carried out in fully three-dimensional space with the BATS-R-US MHD code developed at the University of Michigan. The left panel of Figure \ref{arcade} shows the initial state of the simulation with field lines drawn in black showing the sheared magnetic arcade. On the boundaries of the computational domain, the plasma mass density is shown in color along with the computational grid shown with white lines. The center and right panels show the shear and vertical velocities, respectively, in color on the central plane of the simulation. The magnetic field confined to the plane (ignoring the component out of the plane) is drawn with white lines in both panels. In these panels, we find that the arcade erupts very violently, with shear flows that reach a magnitude of 200 km/s. The arcade rises at a peak velocity of 150 km/s, and drives a shock ahead of it in the corona. In this simulation, the eruption is caused by shear flows driven by the Lorentz force. A shearing catastrophe occurs when the $B_x$ component of the magnetic field cannot be equilibrated along field lines. This simulation is different from the previous two discussed in that it does not treat flux emergence through the photosphere, but only models the coronal plasma at a temperature of one million degrees. Not needing to resolve the photospheric pressure scale height allows much larger cells and a computational domain that is 25 times larger than that of the previous two simulations. The result of this coronal arcade simulation is an eruption that is 5 times faster than the flux rope eruption, and extends out to half a solar radius above the surface. This progression of eruption velocity with the size of the flux system being treated offers compelling evidence that shear flows driven by the Lorentz force are capable of producing fast CMEs from large active regions. \section{Discussion} The build-up of magnetic energy in active regions is essential to the onset of CMEs and flares. The magnetic stress must pass from the convection zone into the corona in the form of non-potential fields, and effectively couple layers of the atmosphere. Observations show that such non-potential fields occur along magnetic polarity inversion lines where the magnetic field is highly sheared. The evidence that magnetic shear is essential to CMEs and flares is provided by the very strong correlation between photospheric shear flows, flux emergence and the onset of CMEs and large flares \citep[]{WM_meunier2003, WM_yang2004, WM_schrijver2005}. These shear flows are found to be strongest along the magnetic inversion line, precisely where flares are found \citep[]{WM_yang2004, WM_deng2006}. This complements earlier evidence that in the case of two-ribbon flares \citep[]{WM_zirin84}, prior to the eruption, H$\alpha$ arches over the inversion line are highly sheared, whereas afterward the arches are nearly perpendicular to the inversion line. \citet{WM_su2006} found a similar pattern of magnetic shear loss in the apparent motion of footpoints in two-ribbon flares. There are even now observations of subphotospheric shear flows with magnitudes of 1-2 km/s at depths of 4-6 Mm below the photosphere that occur during flux emergence \citep[]{WM_kosovichev2006}, which simulations show to be fully consistent with the Lorentz force driving mechanism \citep[]{WM_manchester2007}.
These ubiquitous shear flows and sheared magnetic fields so strongly associated with CMEs are readily explained by the Lorentz force that occurs when flux emerges and expands in a gravitationally stratified atmosphere. This physical process explains and synthesizes many observations of active regions and gives them meaning in a larger context. This shearing mechanism explains (1) the coincidence of the magnetic neutral line with the velocity neutral line, (2) the impulsive nature of shearing in newly emerged flux, (3) the magnitude of the shear velocity in different layers of the atmosphere, (4) the large-scale pattern of magnetic shear in active regions, (5) the transport of magnetic flux and energy from the convection zone into the corona, and (6) eruptions such as CMEs and flares. With so much explained, it still remains a numerical challenge to model an active region with sufficient resolution to produce a large-scale CME by this shearing mechanism. The rope emergence simulation discussed here only produces a flux concentration that is one tenth the size of an active region, which at this scale simply cannot produce an eruption the size of a CME. However, this simulation illustrates the basic process by which CMEs and flares must occur, and current simulations already show a very favorable scaling of eruption size. With increases in computer power, simulations of flux emergence should soon be producing CME-size eruptions by shear flows driven by the Lorentz force. \acknowledgements{ Ward Manchester is supported in part at the University of Michigan by NASA SR\&T grant NNG06GD62G. The simulations shown here were performed on NCAR and NASA supercomputers.}
\section{Introduction} The use of hybrid density functionals \cite{Becke1993}, which include a portion of Hartree-Fock exchange (HF-exchange), has proven to be an invaluable approach to density functional theory (DFT). Within the solid-state community, the one proposed by Heyd, Scuseria and Ernzerhof \cite{Heyd2003,Krukau2006} (HSE) has become a rather popular choice, particularly for diamagnetic and low-spin systems. HSE-like functionals screen long-ranged and computationally demanding HF-exchange interactions, essentially by introducing a fraction of HF-exchange solely within the short-range part of the functional. Long-range contributions are simply accounted for with a local or semi-local functional, often the generalised gradient approximation functional of Perdew, Burke and Ernzerhof (PBE) \cite{Perdew1996}. Besides being able to account for HF-exchange interactions in metals \cite{Ashcroft1976}, screened hybrid density functionals excel at accurately describing the band structure of finite-gap crystals \cite{Heyd2005,Bailey2010,Bernasconi2011,Garza2016}, a virtue of great importance for the study of defects (including impurities, dislocations and surfaces) with states in the forbidden gap. While the calculation of the non-local Fock integral is rather efficient when using a Gaussian basis (see for instance Ref.~\cite{Bush2011}), this is not the case for plane waves (PW), which are a popular and natural choice, particularly among the solid-state community. Plane wave hybrid-DFT calculations can take thousands of times longer than using (semi-)local functionals. For instance, we found that, using the same number of CPUs, one PBE single-point calculation of the oxygen vacancy in MgO took 28 seconds, against over 9 hours after changing the exchange-correlation treatment to HSE (around 1200 times as long). For this reason, the computation of large-scale problems within PW/hybrid-DFT has to rely on \emph{pre-relaxed} atomic positions obtained using a cheaper local or semi-local approach. Many examples of this approach have been reported in the literature, including in the study of magnetic materials, two-dimensional materials and surfaces \cite{Noh2014,Supatutkul2017,Zhachuk2018}, as well as dopants, impurities and radiation defects in several materials \cite{Szabo2010,Chanier2013,Trinh2013,Colleoni2016,Coutinho2017}. Significant differences between fully-relaxed PBE- and pre-relaxed HSE-level results were obtained. For instance, quantities like the location of defect transition levels or the height of migration barriers show considerable discrepancies. This raises the following question: if the fully-relaxed HSE ground state structures were employed instead, would the results be the same, or at least agree within an acceptable error bar? In the 1980s, many self-consistent local density calculations were carried out assuming fixed structures obtained on the basis of known bond lengths and bond strengths (see for instance Ref.~\cite{Walle1989}). Despite being less refined, this approximation is similar to using DFT/pseudopotential pre-relaxed geometries for the calculation of Mössbauer parameters using all-electron full-potential methods \cite{Wright2016}. This approach has also been used to obtain the formation energy and electronic structure of defects using Hedin's $GW$ method \cite{Rinke2009,Avakyan2018} or diffusion quantum Monte Carlo \cite{Flores2018}.
Again, the above practice raises the question: can we actually rely on energies of state-of-the-art calculations that employ local-DFT geometries? \begin{figure} \begin{centering} \includegraphics[width=8cm]{figure1} \par\end{centering} \caption{\label{fig1}Schematic representation of the relaxation method followed in this work. The structure is pre-relaxed at the PBE level, down to the $Q_{\mathrm{pre}}$ structure (upper curve). A subsequent relaxation using HSE starts with total energy $E_{\mathrm{pre}}$, and yields the fully-relaxed HSE ground state configuration $Q_{\mathrm{full}}$ with energy $E_{\mathrm{full}}$. $\Delta E$ is the HSE relaxation energy, and quantifies the error of the pre-relaxed calculation.} \end{figure} Let us look at this problem with the help of Figure~\ref{fig1}, where potential energy curves corresponding to sequential PBE- and HSE-level relaxations are illustrated. A starting (guessed) structure is subjected to a first relaxation step using the PBE functional. This provides the pre-relaxed minimum-energy geometry, $Q_{\mathrm{pre}}$, which is plugged into a single-point HSE calculation to provide the pre-relaxed state $(Q_{\mathrm{pre}},E_{\mathrm{pre}})$. A subsequent relaxation step also using HSE drives the system to the final ground state configuration $(Q_{\mathrm{full}},E_{\mathrm{full}})$. The use of the pre-relaxed $(Q_{\mathrm{pre}},E_{\mathrm{pre}})$ instead of the fully-relaxed $(Q_{\mathrm{full}},E_{\mathrm{full}})$ state is rather tempting. Unfortunately, the underlying effects governing the error $\Delta E=E_{\mathrm{full}}-E_{\mathrm{pre}}$ are not obvious. Below we find some answers, based on the magnitude of spurious strain fields and on resonances between defect- and crystalline-related states caused by the change of functional. We address this issue by assessing the accuracy of HSE single-point calculations performed on pre-relaxed structures. These are compared to corresponding HSE fully-relaxed calculations. Four case studies are considered, namely the oxygen vacancy in magnesium oxide, the oxygen interstitial in silicon, the Si(001) surface, and the carbon interstitial in silicon carbide. In each case, we calculate $\Delta E$ values, the displacement of the most relevant atoms (closest to the defect), as well as selected observables. In Section~\ref{sec:method} we describe the technical details of the calculations. In Section~\ref{sec:results} we present and discuss our results. Finally, we draw the conclusions in Section~\ref{sec:conclusions}. \section{Method details\label{sec:method}} First-principles calculations were performed within hybrid and semi-local density functional theory using the Vienna Ab-initio Simulation Package (VASP) \cite{Kresse1993,Kresse1994,Kresse1996,Kresse1996a}. The HF-exchange mixing fraction and screening parameter for the HSE functional were $a=1/4$ and $\omega=0.2$~Å$^{-1}$, respectively (resulting in the commonly named HSE06 functional) \cite{Krukau2006}. The projector-augmented wave method was employed to avoid explicit treatment of core electrons in the Kohn-Sham equations \cite{Blochl1994}. These were solved self-consistently using the PW formalism, until the total energy change between two consecutive steps was smaller than $10^{-7}$~eV. The choice of energy cutoff values for PW, as well as the density of Brillouin-zone (BZ) sampling meshes, essentially depends on the problem at hand (chemical species, material of interest and supercell sizes).
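Before turning to the individual systems, the two-step recipe of Figure~\ref{fig1} can be summarised schematically. The short Python outline below is only a sketch of the procedure; \texttt{relax} and \texttt{single\_point} are hypothetical helpers standing for whatever electronic-structure workflow is used, not actual VASP routines, and the energies are assumed to be returned in eV.
\begin{verbatim}
# Outline (not the actual scripts used in this work) of the two-step recipe of
# Figure 1.  `relax` and `single_point` are hypothetical helpers wrapping a DFT
# code; both are assumed to return total energies in eV.
def two_step(structure, relax, single_point):
    """Return (E_pre, E_full, Delta_E) for a PBE pre-relaxation followed by HSE."""
    Q_pre, _ = relax(structure, functional="PBE")       # pre-relaxed geometry
    E_pre = single_point(Q_pre, functional="HSE06")     # HSE energy at the PBE geometry
    Q_full, E_full = relax(Q_pre, functional="HSE06")   # full HSE relaxation
    delta_E = E_full - E_pre                            # <= 0: error of the pre-relaxed energy
    return E_pre, E_full, delta_E
\end{verbatim}
The quality of either estimate also rests on the plane-wave energy cutoff and the BZ sampling adopted for both steps.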
The cutoff and sampling were chosen after convergence tests, ensuring that absolute (relative) energies were converged to within less than 10~meV/atom (less than 1~meV/atom). HSE lattice parameters were adopted in all four case studies referred to above. Ionic relaxations stopped when the maximum force acting on every atom became smaller than 0.01~eV/Å. For each case, $(Q_{\mathrm{pre}},E_{\mathrm{pre}})$ and $(Q_{\mathrm{full}},E_{\mathrm{full}})$ pairs were obtained by means of the above two-step recipe (see Figure~\ref{fig1}), where $\Delta E=E_{\mathrm{full}}-E_{\mathrm{pre}}$. For the oxygen vacancy in magnesium oxide (MgO:V$_{\mathrm{O}}$), we used a 64-atom cubic supercell (minus an oxygen atom). The PW energy cutoff was converged at $E_{\mathrm{cut}}=500$~eV and a Monkhorst-Pack (MP) $2\times2\times2$ grid of $\mathbf{k}$-points proved adequate \cite{Monkhorst1976}. The calculated HSE-level (PBE-level) lattice parameter was $a_{0}^{\mathrm{HSE}}=4.200$~Å ($a_{0}^{\mathrm{PBE}}=4.255$~Å), comparing well with the experimental value of $a_{0}^{\mathrm{exp}}=4.216$~Å \cite{Hirata1977}. The HSE-level (PBE-level) direct band gap was $E_{\mathrm{g}}^{\mathrm{HSE}}=6.64$~eV ($E_{\mathrm{g}}^{\mathrm{PBE}}=4.73$~eV), underestimating the observed gap of $E_{\mathrm{g}}^{\mathrm{exp}}=7.8$~eV by 15\% (40\%) \cite{Whited1969}. The oxygen vacancy (V$_{\mathrm{O}}$) in MgO is a double donor. We investigated the deviation of the transition levels obtained from single-point energies (of pre-relaxed structures) with respect to those from fully-relaxed HSE-level structures. Unwanted Coulomb interactions between periodic replicas of charged defects were removed from the total energy according to the method proposed by Freysoldt, Neugebauer and Van de Walle \cite{Freysoldt2009}. For the oxygen interstitial (O$_{\mathrm{i}}$) defect in Si, several structures close to the ground-state minimum and to the saddle point for migration were explored. Supercells with 64 Si atoms (plus one O atom) were used, and the BZ was sampled using an MP $2\times2\times2$ grid of $\mathbf{k}$-points. The calculated lattice parameter was $a_{0}^{\mathrm{HSE}}=5.432$~Å ($a_{0}^{\mathrm{PBE}}=5.469$~Å), also in good agreement with the measured value $a_{0}^{\mathrm{exp}}=5.431$~Å \cite{Mohr2016}. The Kohn-Sham band gap was estimated as $E_{\mathrm{g}}^{\mathrm{HSE}}=1.15$~eV ($E_{\mathrm{g}}^{\mathrm{PBE}}=0.57$~eV), which is to be compared with $E_{\mathrm{g}}^{\mathrm{exp}}=1.17$~eV from optical experiments \cite{Green1990}. The PW energy cutoff was set at $E_{\mathrm{cut}}=400$~eV. The third case considered was the Si(001) surface, where the energy difference between symmetric $(2\times1)$ and asymmetric b$(2\times1)$ reconstructions was investigated \cite{Yin1981}. We employed 19-monolayer-thick symmetric slabs (composed of 38 Si atoms), separated by 11~Å of vacuum space. An MP $6\times6\times1$ grid of $\mathbf{k}$-points was used to sample the BZ of both reconstructed surfaces and, as in the previous case, $E_{\mathrm{cut}}=400$~eV. Finally, carbon interstitial (C$_{\mathrm{i}}$) defects in 3$C$-SiC were investigated on cubic supercells with 64 atoms (plus the interstitial C atom). The energy cutoff was $E_{\mathrm{cut}}=400$~eV and the BZ sampling was carried out using an MP $4\times4\times4$ grid of $\mathbf{k}$-points. The calculated lattice parameter and Kohn-Sham gap were $a_{0}^{\mathrm{HSE}}=4.347$~Å ($a_{0}^{\mathrm{PBE}}=4.380$~Å) and $E_{\mathrm{g}}^{\mathrm{HSE}}=2.25$~eV ($E_{\mathrm{g}}^{\mathrm{PBE}}=1.33$~eV), respectively.
Again, these compare with $a_{0}^{\mathrm{exp}}=4.360$~Å and $E_{\mathrm{g}}^{\mathrm{exp}}=2.42$~eV from experiments, as expected \cite{Bimber1981}. We investigated two structures of the C$_{\mathrm{i}}$ defect reported in the literature, namely a tilted-$\langle001\rangle$ split-interstitial with $C_{1h}$ symmetry \cite{Bockstedte2003} and an upright-$\langle001\rangle$ split-interstitial with $D_{2d}$ symmetry (or simply $\langle001\rangle$ split-interstitial) \cite{Gali2003}. The latter comprises a $\langle001\rangle$-aligned C-C dimer at the carbon site (both C atoms are three-fold coordinated and symmetrically equivalent), while in the monoclinic structure the C-C bond makes an angle with the $\langle100\rangle$ axis, ending up in a structure where one of the C atoms has four-fold coordination, while the other keeps the three-fold coordination. \section{Results and discussion\label{sec:results}} \subsection{Oxygen vacancy in MgO} The oxygen vacancy in MgO has the same symmetry ($O_{h}$) in all stable charge states ($q=0,\,+1$ and $+2$). However, the displacement of its neighbours (from their crystallographic positions) is variable. Accordingly, whereas Mg$^{2+}$ first neighbours are pushed away from the vacant site by $d\approx0.085$~Å per ionised electron, O$^{2-}$ next neighbours are attracted to the centre by $d\approx0.035$~Å for each ionisation. These results are also close to those reported by Rinke and co-workers \cite{Rinke2012} using the local density approximation, where outward and inward displacements of about 0.09~Å and 0.03~Å per ionisation were reported for Mg and O atoms, respectively. In the top-most section of Table~\ref{tab1}, we report the displacement ($d$) of Mg and O nearest neighbours to V$_{\mathrm{O}}$ from fully-relaxed geometries with respect to pre-relaxed ones. Results are shown for neutral, positively and double positively charged defects. Pre-relaxed and fully-relaxed defect geometries are very similar. The largest atomic displacement with respect to the pre-relaxed geometry was 0.015~Å, and that was observed for the Mg first neighbours. Still, after relaxation, total energies differed by at most $\Delta E\sim-40$~meV from those of pre-relaxed structures. \begin{table*} \caption{\label{tab1}Four data sets related to (1) V$_{\mathrm{O}}$ in MgO in different charge states, (2) four configurations of O$_{\mathrm{i}}$ in Si, (3) two reconstructions for the Si(001) surface, and (4) two configurations of C$_{\mathrm{i}}$ in $3C$-SiC. First and second data rows of each set show displacement magnitudes ($d$) of selected atoms after full HSE relaxation, relatively to pre-relaxed positions. The third row of each data set reports the relaxation energy $\Delta E$ or surface relaxation energy $\Delta\sigma=\Delta E/2A$ of pre-relaxed structures, where $A$ is the surface unit cell area. The fourth and fifth rows of each data set show HSE results using pre-relaxed and fully-relaxed geometries. 
These include transition levels $E(q/q\!+\!1)-E_{\mathrm{c}}$ with respect to the conduction band bottom, relative energies $E-E_{\mathrm{GS}}$ with respect to the ground state, and surface formation energies, $\sigma$ (see text).} \centering{}% \begin{tabular}{cccccc} \hline MgO:V$_{\mathrm{O}}$ & Units & $q=0$ & $q=+1$ & $q=+2$ & \tabularnewline $d$(Mg) & Å & 0.007 & 0.010 & 0.015 & \tabularnewline $d$(O) & Å & 0.001 & 0.001 & 0.001 & \tabularnewline $\Delta E$ & eV & $-0.003$ & $-0.012$ & $-0.038$ & \tabularnewline $E_{\mathrm{\text{pre}}}(q/q+1)-E_{\mathrm{c}}$ & eV & 2.872 & 4.190 & --- & \tabularnewline $E_{\mathrm{full}}(q/q+1)-E_{\mathrm{c}}$ & eV & 2.863 & 4.163 & --- & \tabularnewline \hline Si:O$_{\mathrm{i}}$ & Units & $C_{1h}$ & $C_{2}$ & $D_{3d}$ & $C_{2v}$\tabularnewline $d$(O) & Å & 0.129 & 0.127 & 0.000 & 0.045\tabularnewline $d$(Si) & Å & 0.012 & 0.012 & 0.014 & 0.027\tabularnewline $\Delta E$ & eV & $-0.019$ & $-0.018$ & $-0.009$ & $-0.013$\tabularnewline $E_{\mathrm{pre}}-E_{\mathrm{pre},\mathrm{GS}}$ & eV & 0.000 & 0.002 & 0.001 & 2.758\tabularnewline $E_{\mathrm{full}}-E_{\mathrm{full},\mathrm{GS}}$ & eV & 0.000 & 0.002 & 0.010 & 2.764\tabularnewline \hline Si(001) & Units & $(2\times1)$ & b$(2\times1)$ & & \tabularnewline $d$(Si$_{1}$) & Å & 0.147 & 0.115 & & \tabularnewline $d$(Si$_{2}$) & Å & 0.147 & 0.164 & & \tabularnewline $\Delta\sigma$ & meV/Å$^{2}$ & $-1.048$ & $-0.856$ & & \tabularnewline $\sigma_{\mathrm{pre}}$ & meV/Å$^{2}$ & 97.23 & 95.39 & & \tabularnewline $\sigma_{\mathrm{full}}$ & meV/Å$^{2}$ & 96.18 & 94.53 & & \tabularnewline \hline $3C$-SiC:C$_{\mathrm{i}}$ & Units & $q=0$ & $q=+1$ & & \tabularnewline $d$(C$_{1}$) & Å & 0.013 & 0.000 & & \tabularnewline $d$(C$_{2}$) & Å & 0.539 & 0.000 & & \tabularnewline $\Delta E$ & eV & $-0.101$ & $-0.005$ & & \tabularnewline $E_{\mathrm{\text{pre}}}(q/q+1)-E_{\mathrm{c}}$ & eV & 0.367 & --- & & \tabularnewline $E_{\mathrm{full}}(q/q+1)-E_{\mathrm{c}}$ & eV & 0.476 & --- & & \tabularnewline \hline \end{tabular} \end{table*} In the above, we inspected the quality of pre-relaxed structures for the calculation of HSE total energies. The same analysis could be done for the calculation of defect formation energies, \begin{equation} E_{\mathrm{f}}(q)=E(q)-\sum_{i}n_{i}\mu_{i}+q(E_{\mathrm{v}}+E_{\mathrm{F}}).\label{eq:fe} \end{equation} This is the amount of energy required to combine $n_{i}$ elements of species $i$ and form a defective crystal (with computed energy $E$). Elements are assumed to be available in standard phases with respective chemical potentials $\mu_{i}$ (see Ref.~\cite{Freysoldt2014} and references therein for further details). In Eq.~\ref{eq:fe}, $q$ is the charge state of the defect, obtained by exchanging electrons between an electronic reservoir with chemical potential $E_{\mathrm{v}}+E_{\mathrm{F}}$ (where $E_{\mathrm{F}}$ is the Fermi level with respect to the valence band top energy, $E_{\mathrm{v}}$) and the highest occupied or unoccupied states. It is assumed that the calculation of $\mu_{i}$ and $E_{\mathrm{v}}$ in Eq.~\ref{eq:fe} does not involve a pre-relaxation step. Hence, the error of the calculated formation energies using pre-relaxed structures is the same as that of total energies, $\Delta E_{\mathrm{f}}\equiv\Delta E$. We emphasise that $\Delta E\leq0$ , so that the error involving the energy difference between two pre-relaxed structures benefits from cancelation effects. 
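As an illustration of the bookkeeping behind Eq.~\ref{eq:fe}, the short Python sketch below evaluates the formation energy of a charged defect from a total energy, a set of chemical potentials, and the position of the Fermi level. All numerical values are hypothetical placeholders rather than results of this work.
\begin{verbatim}
# Minimal sketch of the formation-energy expression
#   E_f(q) = E(q) - sum_i n_i mu_i + q (E_v + E_F).
# All numbers below are hypothetical placeholders (eV), not computed values.
def formation_energy(E_defect, species, E_v, E_F, q):
    """species is a list of (n_i, mu_i) pairs for the atoms in the supercell."""
    return E_defect - sum(n * mu for n, mu in species) + q * (E_v + E_F)

species = [(31, -8.95), (1, -4.50)]   # (n_i, mu_i) for the two constituent elements
E_v, E_F, q = 2.10, 0.80, +1          # valence band top, Fermi level (eV), charge state

E_pre, E_full = -530.12, -530.15      # HSE energy on pre-relaxed vs fully-relaxed geometry
Ef_pre  = formation_energy(E_pre,  species, E_v, E_F, q)
Ef_full = formation_energy(E_full, species, E_v, E_F, q)
print("Delta E_f = %+.3f eV" % (Ef_full - Ef_pre))   # equals Delta E, as stated in the text
\end{verbatim}
Differences between two such formation energies, evaluated consistently on pre-relaxed (or on fully-relaxed) structures, therefore inherit only part of this error.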
An example of such a quantity is a defect transition level, which, by definition, is given by the location of $E_{\mathrm{F}}$ where $E_{\mathrm{f}}(q)=E_{\mathrm{f}}(q+1)$. In the case of a double donor such as MgO:V$_{\mathrm{O}}$, the $(q+1)$-th donor level (with respect to the conduction band minimum, $E_{\mathrm{c}}$) is given by $E_{\mathrm{c}}-E(q/q+1)=[\epsilon_{\mathrm{c}}+E(q+1)]-E(q)$, where $\epsilon_{\mathrm{c}}$ is the lowest unoccupied state in a bulk calculation. Hence, for MgO:V$_{\mathrm{O}}$ we obtain $E_{\mathrm{c}}-E(0/+)=2.86$~eV and $E_{\mathrm{c}}-E(+/\!+\!+)=4.16$~eV using fully-relaxed HSE energies. These figures would be closer to those obtained from $G_{0}W_{0}$ quasi-particle energies \cite{Rinke2012} had we accounted for Franck-Condon relaxation effects and used a better correlation treatment to improve the gap width. However, these issues are not relevant for the present analysis. We are interested in assessing the quality of $E_{\mathrm{c}}-E_{\mathrm{pre}}(q/q+1)$ obtained from pre-relaxed energies, with respect to the analogous calculation using fully-relaxed structures, $E_{\mathrm{c}}-E_{\mathrm{full}}(q/q+1)$. The results are shown in Table~\ref{tab1}. The error bar of the pre-relaxed results is of the order of a few tens of meV, which is quite acceptable for semiconductors and insulators with a gap width in the eV range. \subsection{Interstitial oxygen in silicon} Our second case is a well-established defect in crystalline silicon, namely Si:O$_{\mathrm{i}}$. In the ground state, O$_{\mathrm{i}}$ adopts a puckered bond-centre configuration \cite{Coutinho2000}. The potential energy surface for rotation of the O atom around the Si-Si bond has the shape of a flat `Mexican hat' (with small bumps in the meV range). Accordingly, the O atom is slightly displaced (0.3~Å) away from the centre of a $[111]$-aligned Si-Si bond, either along $[1\bar{1}0]$ or along $[11\bar{2}]$ (resulting in defects with $C_{2}$ or $C_{1h}$ symmetry, respectively). The perfect bond-centred structure has $D_{3d}$ symmetry and it is a maximum in the potential energy landscape. Long-range diffusion of O$_{\mathrm{i}}$ in Si occurs via sequential jumps between neighbouring puckered configurations. At the saddle-point, the structure passes close to a $C_{2v}$-symmetric configuration (often referred to as `Y-lid'), consisting of a $\langle100\rangle$-aligned O-Si dimer sharing a Si site, where both O and Si atoms are three-fold coordinated \cite{Coutinho2000}. The relaxation of this structure was achieved with the help of force symmetrisation. The results are shown in Table~\ref{tab1}. Concerning HSE-relaxed energies, we obtained the following relative energies with respect to the $C_{1h}$ ground state: $E_{\mathrm{full}}-E_{\mathrm{full},\mathrm{GS}}=2$~meV, 10~meV and 2.76~eV for the $C_{2}$, $D_{3d}$ and $C_{2v}$ structures, respectively, confirming the flatness of the potential around the bond centre. The energy of the $C_{2v}$ structure is consistent with analogous results obtained by Binder and Pasquarello \cite{Binder2014}, where it was demonstrated that hybrid-DFT was able to account for the observed 2.53~eV migration barrier of O$_{\mathrm{i}}$ in silicon \cite{Corbett1964}. This contrasts with local and semi-local functionals, which predict a barrier of about 2~eV \cite{Coutinho2000}. Unlike MgO:V$_{\mathrm{O}}$, pre-relaxed structures of O$_{\mathrm{i}}$ in Si hold a \emph{lingering strain} when plugged into an HSE calculation.
This effect is unavoidable and stems from the fact that PBE overestimates Si-O bond lengths with respect to those from HSE. The result is the straightening of the Si-O-Si unit after HSE full relaxation, with the O atom approaching the bond centre site. Due to the soft nature of the Si-O-Si bending potential, the above displacements lead to tolerable energy changes. Values of $\Delta E$ in Table~\ref{tab1} show that fully relaxed HSE total energies are at most a few tens of meV below those obtained from single-point calculations performed on pre-relaxed structures. Interestingly, and again due to cancelation effects, the difference between relative energies whether using fully-relaxed ($E_{\mathrm{full}}-E_{\mathrm{full},\mathrm{GS}}$) or pre-relaxed ($E_{\mathrm{pre}}-E_{\mathrm{pre},\mathrm{GS}}$) structures is negligible (<10~meV). \subsection{Si(001)-$\mathit{(2\times1)}$ surface} The Si(001) surface reconstruction is known to be made of dimers, each possessing two unsaturated radicals. Experiments show that charge transfer between these radicals leads to dimer buckling \cite{Yin1981}, where one of the atoms protrudes from the surface, while the other drops into it. The smallest surface unit cell which is able to capture this effect contains a single dimer. We can either calculate a symmetric Si(001)-$(2\times1)$ reconstruction, where the dimerised Si atoms have the same height (no buckling), or the buckled Si(001)-b$(2\times1)$. Table~\ref{tab1} shows that, for both reconstructions, atom displacements of pre-relaxed slabs are large. Large displacements were also found for subsurface Si atoms. The surface formation energy, $\sigma$, was obtained as $2A\sigma=E-\sum n_{i}\mu_{i}$, where $A$ is the area of the surface unit cell and the factor of two accounts for the identical and opposite-facing surfaces on the slab with total energy $E$. Fully-relaxed surface formation energies are about $|\Delta\sigma|\sim1$~meV/Å$^{2}$ lower than pre-relaxed analogues, a value which is regarded as too large to make pre-relaxed energies reliable for absolute formation energy calculations. Such strong relaxation energies and displacements arise from a mismatch between PBE- and HSE-level lattice parameters of about 0.7\%, which leads to a corresponding contraction of the whole slab thickness upon full HSE relaxation. Despite the above, relative energies benefit from cancelation effects. Table~\ref{tab1} shows that Si(001)-b$(2\times1)$ is more stable than Si(001)-$(2\times1)$ by 1.84~meV/Å$^{2}$ and 1.65~meV/Å$^{2}$ when using pre-relaxed and fully-relaxed energies ($\sigma_{\mathrm{pre}}$ and $\sigma_{\mathrm{full}}$), respectively. \subsection{Carbon self-interstitial in $3C$-SiC} The stable configuration of the carbon self-interstitial in $3C$-SiC has been investigated concurrently by different groups, with different structures being proposed for the neutral charge state. Gali \emph{et~al.} \cite{Gali2003} reported a spin-1 split-interstitial with $D_{2d}$ symmetry, made of a $\langle001\rangle$-aligned C-C dimer on a carbon site. The electronic structure consisted of a semi-occupied doublet state arising from orthogonal $\pi$-like orbitals on both three-fold coordinated carbon atoms. Conversely, Bockstedte \emph{et~al.} \cite{Bockstedte2003} found a diamagnetic monoclinic structure ($C_{1h}$ symmetry), where the C$_{1}$-C$_{2}$ dimer was tilted towards the $\langle110\rangle$ direction, resulting in three-fold and four-fold coordinated atoms, respectively.
We note that, while the paramagnetic state was obtained from a post-corrected hybrid-DFT calculation, the diamagnetic and low-symmetry state was found within PBE. Our calculations confirm the above conflicting results. PBE- and HSE-relaxations lead to C$_{\mathrm{i}}(C_{1h})$ and C$_{\mathrm{i}}(D_{2d})$ ground state structures with total spin $S=0$ and $S=1$, respectively. Therefore, after obtaining the pre-relaxed structure ($C_{1h}$), a subsequent HSE calculation displaces the four-fold coordinated C$_{2}$ atom by more than 0.5~Å, raising the symmetry to $D_{2d}$ and flipping the spin to $S=1$ (see Table~\ref{tab1}). The difference between pre-relaxed and fully-relaxed HSE energies is as much as $\Delta E=-0.1$~eV. Obviously, this poses a major problem for the use of pre-relaxed structures. We will return to this issue below. In the positive charge state, the C$_{\mathrm{i}}$ defect suffers a weak Jahn-Teller distortion involving a dynamic overlap of the $\pi$-orbitals \cite{Bockstedte2003,Gali2003}, and the symmetry is lowered from $D_{2d}$ to $D_{2}$. Pre-relaxed and fully-relaxed structures show essentially the same geometry (as shown by the small atomic displacements in the bottom data set of Table~\ref{tab1}), and the relaxation energy is only $\Delta E=-5$~meV. Due to the large relaxation energy obtained for the neutral charge state, the calculation of the donor level using pre-relaxed energies does not profit from error cancelation effects. The result differs from the fully-relaxed donor level by about 0.1~eV (ten times larger than for MgO:V$_{\mathrm{O}}$). \begin{figure} \begin{centering} \includegraphics[width=8.5cm]{figure2} \par\end{centering} \caption{\label{fig2}Kohn-Sham eigenvalues (symbols) of linearly interpolated C$_{\mathrm{i}}$ defects in 512-atom SiC supercells at $\mathbf{k}=\Gamma$. (a) and (b) represent PBE- and HSE-level results, respectively. Only the highest occupied (closed symbols) and the next four unoccupied levels (open symbols) are shown. Circles and squares represent eigenvalues from $C_{1h}$- and $D_{2d}$-symmetric structures, respectively. Data points connected by Bézier lines correspond to interpolated diamagnetic ($S=0$) states, while both data sets in the middle correspond to paramagnetic ($S=1$) states. Valence band and conduction band states of the crystal are represented by shaded regions.} \end{figure} We investigated the origin of the structure/spin disparity between the above PBE and hybrid-DFT calculations. In order to rule out dispersion effects due to the finite size of the supercell, we also carried out 216-atom and 512-atom supercell calculations (plus one C atom), using $2\times2\times2$ and $1\times1\times1$ ($\Gamma$-point) $\mathbf{k}$-point grids for BZ sampling. Full HSE relaxation was not possible for these cells. Instead, the $D_{2d}$ structure was subjected to a symmetry-constrained PBE relaxation, followed by an HSE single-point calculation. Irrespective of the supercell size, the HSE energy of the pre-relaxed C$_{\mathrm{i}}(D_{2d},S=1)$ state was lower than that of C$_{\mathrm{i}}(C_{1h},S=0)$ by 0.1~eV. We then inspected the Kohn-Sham levels of seven neutral diamagnetic structures, obtained after linear interpolation between C$_{\mathrm{i}}(C_{1h})$ and C$_{\mathrm{i}}(D_{2d})$. Analysis of defect levels is more conveniently done at the $\mathbf{k}=\Gamma$ point, where the wave-functions are real.
However, in order to preserve the BZ-sampling quality, large cubic supercells with 512 atoms (plus one carbon atom) were employed for PBE-relaxations and respective HSE single-point calculations. The results are shown in Figure~\ref{fig2} by data points connected by Bézier curves to guide the eye. Circles and squares apply to $C_{1h}$ and $D_{2d}$ defect symmetries, respectively. Also for the sake of clarity, only the highest occupied (closed symbols) and the next four unoccupied levels (open symbols) are shown. Left- and right-hand sides of the figure refer to results obtained with PBE and HSE exchange-correlation functionals, respectively. The valence band top was aligned on both sides at the origin of the energy scale. The paramagnetic state C$_{\mathrm{i}}(D_{2d},S=1)$ was also investigated. Its electronic structure is shown in the middle region of the figure (for both PBE and HSE functionals). If we ignore the difference in the gap width, the electronic structure of C$_{\mathrm{i}}(C_{1h},S=0)$ is rather similar whether it is calculated using PBE or HSE (left and right edges of Figure~\ref{fig2}, respectively). The defect is a symmetric singlet state ($A$ within the $C_{1h}$ point group), with a fully occupied p-like deep level localised on the C$_{1}$ and C$_{2}$ atoms (see inset of Figure~\ref{fig2}). As the structure evolves to $D_{2d}$, the $A$-state mixes with the conduction band states and the defect becomes a shallow donor. Note that C$_{\mathrm{i}}(D_{2d},S=0)$ is not stable -- at the PBE level, atomic relaxation drives the geometry back to the $C_{1h}$ structure, while within HSE the ground state is C$_{\mathrm{i}}(D_{2d},S=1)$. Hence, comparing PBE and HSE results in Figure~\ref{fig2}, we readily conclude that within PBE the exchange interactions are underestimated due to the strong resonance between the doubly degenerate $E$-level and the low-lying conduction band states. For that reason, the PBE functional fails to describe the ground state structure of neutral C$_{\mathrm{i}}$ in $3C$-SiC, and that undermines any pre-relaxed HSE calculation. \section{Conclusions\label{sec:conclusions}} We investigated the suitability of atomistic geometries, particularly of defects in semiconductors and insulators, obtained within a semi-local DFT method (referred to as \emph{pre-relaxed} structures), to be used in single-point hybrid-DFT calculations. To that end, four distinct case studies were investigated, namely the oxygen vacancy in magnesium oxide, the oxygen interstitial in silicon, the Si(001) surface and the carbon self-interstitial in cubic silicon carbide. We found at least two important sources of error that should be monitored. The first is the presence of lingering strain within the pre-relaxed structure, which will be released should a full hybrid-DFT calculation be performed. The relaxation energy, $\Delta E$, arises from slight differences in equilibrium bond lengths as obtained from semi-local and hybrid-DFT methods. It is interpreted as the error bar of single-point HSE energies based on pre-relaxed structures, including formation energies. The magnitude of $\Delta E$ is system-dependent. For localised point defects, the effect was estimated to be in the range of a few tens of meV. This is acceptable for the calculation of most defect-related observables, including formation energies. Extended defects, on the other hand, are expected to be affected by larger errors.
The surface formation energies of two pre-relaxed Si(001) reconstructions were calculated with a discrepancy of $\Delta\sigma\sim-1$~meV/Å$^{2}$. This is close to the difference between the surface formation energies of symmetric and buckled dimerised Si(001)-$(2\times1)$ reconstructions, and larger than the usual error bar needed for this type of calculation. The problem arises from the lattice mismatch generated by the different functionals, which leads to the accumulation of lingering strain across a large volume of the pre-relaxed geometry (particularly in bulk-like regions). The presence of vacuum in the slab allows the strain to relax, releasing relatively large amounts of energy during a full hybrid-DFT relaxation. We also found that calculations based on energy differences of pre-relaxed structures benefit from error cancelation effects. This is because $\Delta E$ is always negative. The error bar in this case was about one order of magnitude lower than that affecting total energies. This feature applies, for instance, to transition levels, binding energies and migration barriers, but also to surface formation energy differences. The second source of error results from an over-mixing of defect states with the host bands during the pre-relaxation stage. This effect also depends strongly on the problem at hand. Local and semi-local functionals are known to underestimate the width of band gaps of insulators and semiconductors. This favours resonances involving defect levels close to the conduction and valence band edges. The result is rather similar to the pseudo-Jahn-Teller effect and, as such, leads to artificial bond formation and breaking. Obviously, the spurious pre-relaxed geometries will lead to misleading single-point hybrid-DFT energies. We discussed the effect in the light of a detailed analysis of the carbon interstitial in $3C$-SiC. Many examples which are expected to show analogous resonances have been reported in the literature. These include the negative carbon vacancy in $4H$-SiC \cite{Trinh2013} and the cadmium vacancy in CdTe \cite{Flores2018}. \ack{}{} This work is supported by the NATO SPS programme, project number 985215. JC thanks the Fundação para a Ciência e a Tecnologia (FCT) for support under project UID/CTM/50025/2013, co-funded by FEDER funds through the COMPETE 2020 Program. JDG acknowledges the financial support from the I3N through grant BPD-11(5017/2018). \section*{References}{} \bibliographystyle{iopart-num} \providecommand{\newblock}{}
\section{Introduction} The requirement of reproducible computational research is becoming increasingly important and mandatory across the sciences~\cite{national2019reproducibility}. Because reproducibility implies a certain level of openness and sharing of data and code, parts of the scientific community have developed standards around documenting and publishing these research outputs~\cite{wilkinson2016fair,chen2019open}. Publishing data and code as a replication package in a data repository is considered to be a best practice for enabling research reproducibility and transparency~\cite{molloy2011open}.~\footnote{At Dataverse, the data and code used to reproduce a published study are called ``replication data'' or a ``replication package''; in Whole Tale, this is called a ``tale'', and in Code Ocean a ``reproducible capsule''.} Some academic journals endorse this approach for publishing research outputs, and they often encourage (or require) their authors to release a replication package upon publication. Data repositories, such as Dataverse or Dryad, are the predominantly encouraged mode for sharing research data and code, followed by journals' own websites (Figure~\ref{fig:mode})~\cite{crosas2018data}. For example, the American Journal of Political Science (AJPS) and the journal Political Analysis have their own collections within the Harvard Dataverse repository, which is their required venue for sharing research data and code. Recent case studies~\cite{collberg2016repeatability} reported that the research material published in data repositories often does not guarantee reproducibility. This is in part because, in their current form, data repositories do not capture all software and system dependencies necessary for code execution. Even when this information is documented by the original authors in an instructions file (such as a readme), contextual information still might be missing, which could make the process of research verification and reuse hard or impossible. This is also often the case with some of the alternative ways of publishing research data and code, for example, through the journal's website. A study~\cite{rowhani2018badges} reported that the majority of supplemental data deposited on a journal's website was inaccessible due to broken links. Such problems are less likely to happen in data repositories that follow standards for long-term archival and support persistent identifiers. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{mode-hist.pdf} \caption{Aggregated results for the most popular data sharing modes in anthropology, economics, history, PoliSci+IR, psychology, and sociology from Ref.~\cite{crosas2018data}.} \label{fig:mode} \end{figure} Some researchers prefer to release their data and code on their personal websites or on websites like GitHub and GitLab. This approach does not natively provide a standardized persistent citation for referencing and accessing the research materials, nor sufficient metadata to make them discoverable in data search engines like Google Dataset Search and DataCite search. In addition, it does not guarantee long-term accessibility in the way data repositories do. Because research deposited this way does not typically contain a runtime environment, nor system or contextual information, this approach is also often ineffective in enabling computational reproducibility~\cite{pimentel2019large}. New cloud services have emerged to support research data organization, collaborative work, and reproducibility~\cite{perkel2019make}.
Although the number of different and useful reproducibility tools is constantly increasing, in this paper we focus on the following projects: Code Ocean~\cite{staubitz2016codeocean}, Whole Tale~\cite{brinckman2019computing}, Renku~\footnote{https://renkulab.io} and Binder~\cite{jupyter2018binder, kluyver2016jupyter}. All of these tools are available through a web browser, and they are based on the containerization technology Docker, which provides a standardized way to capture a computational environment that can be shared, reproduced, and reused. \begin{enumerate} \item Code Ocean is a research collaboration platform that enables its users to develop, execute, share, and publish their data and code. The platform supports a large number of programming languages including R, C/C++, Java, and Python, and it is currently the only platform that supports code sharing in proprietary software like MATLAB and Stata. \item Whole Tale is a free and open-source reproducibility platform that, by capturing data, code, and a complete software environment, enables researchers to examine, transform and republish research data that was used in an academic article. \item Renku is a project similar to Whole Tale that focuses on employing tools for best coding practices to facilitate collaborative work and reproducibility. \item Binder is a free and open-source project that allows users to run notebooks (Jupyter or R) and other code files by creating a containerized environment using configuration files within a replication package (or a repository). \end{enumerate} Even though virtual containers are currently considered the most comprehensive way to preserve computational research~\cite{piccolo2016tools,jimenez2015role}, they do not entirely comply with modern scientific workflows and needs. Through the use of containers, the reproducibility platforms in most cases fail to support the FAIR principles (Findable, Accessible, Interoperable, and Reusable)~\cite{wilkinson2016fair}, standardized persistent citation, and long-term preservation of research outputs in the way data repositories strive to do. Findability is enabled through standard or community-used metadata schemas that document research artifacts. Data and code stored in a Docker container on a reproducibility platform are not easily visible or accessible from outside of the container, which hinders their findability. This could be an issue for a researcher looking for a specific dataset rather than a replication package. In addition, unlike data repositories, reproducibility platforms do not undertake a commitment to the archival of research materials. This means that, for example, in a scenario where a reproducibility platform runs out of funding, the deposited research could become inaccessible. Individually, data repositories and reproducibility platforms cannot fully support scientific workflows and requirements for reproducibility and preservation. This paper explains how these shortcomings could be overcome through an integration that would result in a robust paradigm for preserving computational research and enabling reproducibility and reuse while making the replication packages FAIR. We argue that through the integration of these existing projects, rather than inventing new ones, we can combine functionalities that effectively complement each other. \section{Related work} Through integration, reproducibility platforms and data repositories create a synergy that addresses the weaknesses of both approaches.
Some of these integrations are already under way: \begin{itemize} \item CLOCKSS~\footnote{https://clockss.org} is an archiving repository that preserves data with regular validity checks. Unlike other data repositories, it does not provide public or user access to the preserved content, except in special cases that are referred to as ``triggered content''. Code Ocean has partnered with CLOCKSS to preserve in perpetuity research capsules associated with publications from some of the collaborating journals. \item The Whole Tale platform relies on integrations with external resources for long-term stewardship and preservation. It already enables data import from data repositories, and a publishing functionality for a replication package is currently under way through DataONE, Dataverse, and Zenodo~\cite{chard2019implementing}. \item Stencila is an open-source office suite designed for creating interactive, data-driven publications. With its familiar user interface, it is geared toward the users of Microsoft Word and Excel. It integrates data and code as a self-contained part of the publication, and it also enables external researchers to explore the data and write custom code. Stencila and the journal eLife have partnered to facilitate reproducible publications~\cite{maciocci2019introducing}. \end{itemize} \section{Implementation} In this paper, we present our developments in the context of the Dataverse Project, which is a free and open-source software platform to archive, share, and cite research data. Currently, 55 institutions around the globe run Dataverse instances as their data repository. Dataverse's integration with the reproducibility platforms has prompted a series of questions and developments around advancing reproducibility for its vast and diverse user community. First, while container files can be uploaded to Dataverse, there is no special handling for these files, which can result in mixed outcomes for researchers trying to verify reproducibility. Second, it is important to facilitate the capture of computational dependencies for the Dataverse users who choose not to use a reproducibility platform. Finally, in a replication package with multiple seemingly disorganized code files, it would be important to minimize the time and effort of an external user who wants to rerun and reuse the files. Therefore, new functionality to support container-based deposits, organization, and access needs to be added to Dataverse to improve reproducibility. \subsection{Integration with reproducibility platforms} Dataverse integration with the reproducibility platforms should allow both adding new research material to Dataverse and importing and reusing existing material from Dataverse in a reproducibility platform. This communication is implemented through a series of existing and new APIs. The reproducibility platforms that have an ongoing integration collaboration with Dataverse are Code Ocean, Whole Tale, Binder, and Renku. \begin{figure} \centering \includegraphics[width=\linewidth]{dv-view.png} \caption{Preliminary view of how research stored in Dataverse can be viewed and explored in reproducibility platforms with a button click.} \label{fig:dataverse} \end{figure} Importing research material from Dataverse means that data and code that already exist in Dataverse can be transferred directly into a reproducibility platform. On the Dataverse side, this is implemented through a new ``Explore'' button, shown in Figure~\ref{fig:dataverse}.
When the button is clicked, the replication package is copied and sent to a reproducibility platform, which uses the configuration files from the package to create a Docker container, places all data and code into it, and provides a view through a web browser. This means that Dataverse users will not need to download any of the files to their personal computers, nor will they need to set up a computational environment to execute and explore the deposited files. So far, the ``Explore'' button is functional for the Whole Tale platform, while the others are under way.~\footnote{Dataverse documentation for integrations: http://guides.dataverse.org/en/4.20/admin/external-tools.html} A researcher whose starting point is the reproducibility platform will be able to import materials for their analysis directly from Dataverse. An example where a researcher is importing Dataverse open data as ``external data'' into Whole Tale is shown in Figure~\ref{fig:wt}, as this integration is now implemented. Similarly, Figure~\ref{fig:binder} shows new integration developments with the lightweight cloud platform Binder, which now enables users to import and view data from Dataverse. The export of the encapsulated research material into Dataverse will also be possible, which means that, once an analysis is ready for dissemination, the researchers would initiate ``analysis export'' in a reproducibility platform, which would then copy the files from a Docker container into Dataverse. This way, all necessary computational dependencies are automatically recorded by a reproducibility tool and stored at a data repository following preservation standards. This functionality is already implemented on Renku.~\footnote{Integration code at https://github.com/SwissDataScienceCenter/renku-python/pull/909} \begin{figure} \centering \includegraphics[width=\linewidth]{wt-example.png} \caption{Using data from Dataverse in the Whole Tale environment. Snapshot from YouTube video \url{https://www.youtube.com/watch?v=oWEcFpEUmrU}. Credit: Craig Willis.} \label{fig:wt} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{binder-fig-2.png} \caption{Binder’s GUI (\url{https://mybinder.org}) supports viewing content from Dataverse.} \label{fig:binder} \end{figure} \iffalse \begin{figure} \centering \includegraphics[width=.8\linewidth]{gui.png} \caption{New GUI for replication metadata.} \label{fig:rep} \end{figure}\fi \subsection{Handling containers} Importing replication packages from the reproducibility platforms means that Dataverse would need to support the capture of their virtual containers. Since all aforementioned reproducibility platforms are based on the containerization technology Docker, new Dataverse developments focus on Docker containers. Docker containers can be built automatically from the instructions laid out in a ``Dockerfile''. A Dockerfile is typically a small text file that contains commands, mostly for installing software and dependencies, to set up a runtime environment needed for research analysis. The Dataverse platform will encourage depositing Dockerfiles to capture the computational environment. This would allow users to explore replication packages in any supported reproducibility tool, since Dockerfiles are agnostic to the computational platform. An alternative solution would be to create a Dataverse Docker registry where whole images would be preserved. This approach will not be pursued at this time due to excessive storage requirements.
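Since the platform will encourage depositing Dockerfiles, a deposit workflow could, for instance, run a lightweight screening for patterns that commonly hinder portability. The Python sketch below is purely illustrative: the specific checks (an unpinned base image and absolute host paths in COPY or ADD instructions) and their wording are our assumptions, not existing Dataverse functionality.
\begin{verbatim}
# Illustrative (hypothetical) screening of an uploaded Dockerfile for two
# patterns that often hinder portability; not part of the Dataverse code base.
import re

def screen_dockerfile(text):
    warnings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        stripped = line.strip()
        if stripped.upper().startswith("FROM"):
            if ":" not in stripped or stripped.endswith(":latest"):
                warnings.append("line %d: base image is not pinned to a version" % lineno)
        if re.match(r"(COPY|ADD)\s+/", stripped, flags=re.IGNORECASE):
            warnings.append("line %d: absolute host path in %s instruction"
                            % (lineno, stripped.split()[0]))
    return warnings

example = "FROM python:latest\nCOPY /home/alice/project /opt/project\n"
for warning in screen_dockerfile(example):
    print(warning)
\end{verbatim}
A check of this kind would complement, rather than replace, the metadata and preservation guarantees provided by the repository itself.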
It is important to mention that at present, any file can be stored at Dataverse, including a Dockerfile, and that there are currently dozens of Dockerfiles stored at Harvard's instance of Dataverse. However, whereas previously Dockerfiles were considered ``other files'', in the new development they will be pre-identified at upload, and thus will require additional metadata. When a reproducibility platform automatically generates a Dockerfile, it is likely to be suitable for portability and preservation. However, when a researcher prepares it, this might not be the case. Dockerfiles can be susceptible to some of the practices that cause irreproducibility, like the use of absolute (fixed) file paths, which is why Dataverse will encourage its users to follow best practices when depositing these files. In particular, before depositing a Dockerfile, the researcher will be prompted to confirm that their Dockerfile does not include any of the common reproducibility errors. \subsection{Capturing execution commands} In addition to capturing a Docker container via a Dockerfile, it is important to capture the sequence of steps that the user ran to obtain their results. This applies to results obtained with command-line languages such as Python, MATLAB, and Julia. Capturing the command sequence is particularly important when there are multiple code files within the replication dataset without a clear indication of the order in which they should be executed. The commands will be captured in the replication package metadata using a community standard to be determined (see, for example, RO-Crate~\cite{carragainro}). If the replication package was pulled from a reproducibility tool, such as Code Ocean or Whole Tale, these replication commands would be populated automatically. For example, Code Ocean generates the commands that build and run a Docker container for each replication package, and it also encourages researchers to specify the command sequence in a file called ``run'' to automate their code. This means that all command sequences that run ``outside'' and ``inside'' the container are captured. Dataverse users who choose not to use a reproducibility platform would need to specify this sequence manually, based on presented best practices. \subsection{Improving FAIR-ness} Because no dataset would be `hidden' within a virtual container at Dataverse, all files originally used in research would be indexed and thus findable by the common dataset search engines. They would also be accessible directly from the dataset landing page on the web. Their interoperability and reusability would now be improved by the integration with the reproducibility platforms, as the barriers to creating a runtime environment and running code files would be lowered. \begin{figure*}[htb!] \centering \includegraphics[width=0.85\linewidth]{dvpic-2.pdf} \caption{Four main workflows that Dataverse aims to support with reproducibility platform integration.} \label{fig:int} \end{figure*} \subsection{New data metrics} Dataverse traditionally aims to provide incentives to researchers to share data through data citation credit and data metrics, such as a count of downloads for datasets and of access requests for restricted data. One of the completed new developments includes integrating certifications or science badges, such as Open Data and Open Materials, within a dataset landing page on Dataverse. The new support for reproducibility tools and containers will also result in new metrics for the users.
The datasets that are deposited through a reproducibility platform into Dataverse will be denoted with a `reproducibility certification' badge that will signal their origin and easy execution on the cloud. For example, a replication package received from the reproducibility platform Whole Tale will include its origin information and encourage its exploration and reuse through Whole Tale. \section{Functionality and use-cases} The Dataverse integration with the reproducibility platforms and the new developments that improve the reproducibility of deposited research will facilitate research workflows relating to verification, preservation, and reuse in the following ways (shown in Figure~\ref{fig:int}): \begin{enumerate} \item Research encapsulation. The first supported workflow enables authors to deposit their data and code through Code Ocean, Whole Tale, or Renku, which then create a replication package that is sent for dissemination and preservation to Dataverse. Dataverse users who were not previously familiar with the containerization technology Docker will now be able to containerize their research through the new workflow. In addition, this workflow is particularly important for prestigious academic journals that verify research reproducibility through third-party curation services and a reproducibility platform. For example, code review at the journal Political Analysis, which collaborates with Code Ocean and Harvard Dataverse for data dissemination and preservation, will be significantly sped up with the deployment of this workflow, as all the code associated with a publication will already be automated, containerized, and available on the cloud. \item Modify and republish research. The second workflow covers pulling a replication package from Dataverse and republishing it after an update. This would create a new version of the package in Dataverse, as well as track provenance information about the original package. Peer review and revisions of the package should thus be much easier. In addition, the replication packages on Dataverse that currently do not have information on their runtime environment could be updated and republished with a Dockerfile generated by one of the reproducibility platforms. \item View deposited research materials. The third functionality allows viewing and exploring the content of deposited research without the need to download the files and install new software. This could be particularly valuable for external researchers and students who would like to understand research results or reuse data or code. \item Preserve the computational environment with a Dockerfile. Through the new developments in Dataverse that encourage depositing Dockerfiles with best practices, researchers who are experienced in using Docker will now be able to adequately preserve these files in the repository. \end{enumerate} \section{Conclusions} In the last decade, there has been extensive discussion around preservation, reproducibility, and openness of computational research, which has resulted in the creation of multiple new tools to facilitate these efforts, the most popular being data repositories and reproducibility platforms. However, individually these two approaches cannot fully facilitate findable, interoperable, reusable, and reproducible research materials. This paper presents a robust solution achieved through their integration.
The described integrations have resulted in new functionality in Dataverse, such as expanding the existing API, introducing new replication-package metadata, and handling virtual containers via Dockerfile. In addition to allowing research preservation in a reproducible and reusable way through the integrations, Dataverse aims to identify new and useful data metrics to be displayed on the dataset landing page. Because the number of similar reproducibility tools keeps growing, this paper also advocates for considering integration with an existing solution before (re)inventing a new reproducibility tool.
\section{Linear $k_\perp$ factorization is broken} Heavy nuclei are strongly absorbing targets and bring a new scale into the perturbative QCD (pQCD) description of hard processes \cite{Mueller}. This has severe consequences for the relations between various hard scattering observables. In a regime of small absorption, small--$x$ processes are adequately described by the linear $k_\perp$--factorization, and the pertinent observables are linear functionals of a universal unintegrated gluon distribution. Not so for the strongly absorbing target, where the linear $k_\perp$--factorization is broken \cite{DIS_Dijets}, and has to be replaced by a new, nonlinear $k_\perp$--factorization \cite{DIS_Dijets,Nonuniversality}, where observables are in general nonlinear functionals of a properly defined unintegrated glue. The relevant formalism has been worked out for all interesting processes \cite{Nonuniversality,Nonlinear} (see \cite{CGC} for references on related work), but in this very short contribution we concentrate on deep inelastic scattering (DIS). Here, in the typical inelastic DIS event the nuclear debris will be left in a state with multiple color excited nucleons after the $q\bar q$ dipole has exchanged many gluons with the target. The partial cross sections for final states with a fixed number of color excited nucleons are the topological cross sections. It is customary to describe them in a language of unitarity cuts through multipomeron exchange diagrams \cite{AGK}. In our approach \cite{Cutting_Rules}, color excited nucleons in the final state give a clear--cut definition of a cut pomeron. Topological cross sections carry useful information on the correlation between forward or midrapidity jet/dijet production and multiproduction in the nuclear fragmentation region, as well as on the centrality of a collision. \section{Nuclear collective glue and its unitarity cut interpretation} The basic ingredient of the nonlinear $k_\perp$--factorization is the collective nuclear unintegrated glue, which made its first appearance in our work on the diffractive breakup of pions into jets $\pi A \to \mathrm{jet}_1 \mathrm{jet}_2 A$ \cite{NSS}. Indeed, in the high energy limit, the nearly back--to--back jets acquire their large transverse momenta directly from gluons. It is then natural to use the diffractive $S$--matrix of a $q \bar q$--dipole $S_A({\bf{b}},x,{\bf{r}})$ for defining the nuclear unintegrated glue: \begin{eqnarray} \int {d^2 {\bf{r}} \over (2 \pi)^2} \, S_A({\bf{b}},x,{\bf{r}}) \exp(-i{\bf{p}} {\bf{r}}) = S_A({\bf{b}},x,{\bf{r}} \to \infty) \delta^{(2)}({\bf{p}}) + \phi({\bf{b}},x, {\bf{p}}) \equiv \Phi({\bf{b}},x,{\bf{p}}) \, . \end{eqnarray} Notice that it resums multiple scatterings of a dipole, so that there is no straightforward relation to the conventional parton distribution, which corresponds to just two partons in the $t$--channel. It is still meaningful to call it an unintegrated glue -- one reason was given above -- another one, besides its role in the factorization formulas, is its small-$x$ evolution property: the so--defined $\phi({\bf{b}},x,{\bf{p}})$ can be shown \cite{NS_LPM} to obey \footnote{Strictly speaking only a few iterations of this equation make good sense.} the Balitskii--Kovchegov \cite{BK} evolution equation.
Close to $x_A \sim (m_N R_A)^{-1}$, for heavy nuclei, the dipole $S$--matrix is the familiar Glauber--Gribov exponential $S_A({\bf{b}},x_A,{\bf{r}}) = \exp[-\sigma(x_A,{\bf{r}})T({\bf{b}})/2]$; for large dipole sizes it can be expressed as $S_A({\bf{b}},x_A,{\bf{r}} \to \infty) = \exp[-\nu_A(x_A,{\bf{b}})]$. Here the nuclear opacity $\nu_A(x_A,{\bf{b}}) = {1 \over 2 } \sigma_0(x_A) T({\bf{b}})$ is given in terms of the dipole cross section for large dipoles, $\sigma_0(x_A) = \sigma(x_A,{\bf{r}} \to \infty)$. In momentum space, a useful expansion is in terms of multiple convolutions of the free--nucleon unintegrated glue (we use the notation $f(x,{\bf{p}}) \propto {\bf{p}}^{-4} \partial G(x,{\bf{p}}^2)/ \partial \log({\bf{p}}^2)$): \begin{eqnarray} \phi({\bf{b}},x_A,{\bf{p}}) = \sum_{j \geq 1} w_j \big(\nu_A(x_A,{\bf{b}})\big) f^{(j)}(x_A,{\bf{p}}) \, . \label{Nuc_glue} \end{eqnarray} Here \begin{equation} w_j(x_A,\nu_A) = {\nu_A^j(x_A,{\bf{b}}) \over j! } \exp[-\nu_A(x_A,{\bf{b}})] , \, f^{(j)} (x_A,{\bf{p}}) = \displaystyle \int \big[ \prod^j d^2 {\bf{\kappa}}_i f(x_A,{\bf{\kappa}}_i) \big] \delta^{(2)}( {\bf{p}} - \sum \kappa_i) \, . \end{equation} Curiously, the very same collective nuclear glue is proportional to the spectrum of quasielastically scattered quarks: \begin{equation} {d\sigma(qA\to qX) \over d^2 {\bf{b}} d^2 {\bf{p}}} \propto \phi({\bf{b}},x_A,{\bf{p}}) \, . \end{equation} Now, we can state the {\bf{first unitarity cutting rule}} in momentum space: the $k$--th order term in the expansion (\ref{Nuc_glue}) corresponds to the topological cross section for quark--nucleus scattering with $k$ color excited nucleons in the final state: \begin{equation} {d \sigma^{(k)} (qA \to qX) \over d^2 {\bf{b}} d^2 {\bf{p}}} \propto w_k \big(\nu_A({\bf{b}}) \big) f^{(k)} ({\bf{p}}) \, . \end{equation} This simple substitution rule lies at the heart of the cutting rules applied to the nonlinear quadratures of \cite{Nonlinear}. \section{Standard AGK vs. QCD} Given the close relation between the nuclear unintegrated glue and the Glauber--Gribov scattering theory from color dipoles, one may be tempted to play around with various expansions of the exponential. Taking inspiration from 1970's hadronic models, one may then 'derive' expressions for topological cross sections. For example, the inelastic cross section of the $q \bar{q}$-dipole--nucleus interaction is certainly obtained from: \begin{eqnarray} \Gamma^{inel} ({\bf{b}}, {\bf{r}}) &=& 1 - \exp[- \sigma({\bf{r}}) T({\bf{b}})] = \sum_k \Gamma^{(k)}({\bf{b}},{\bf{r}}) \, , \end{eqnarray} and $\Gamma^{(k)}({\bf{b}},{\bf{r}}) = \exp[-\sigma({\bf{r}}) T({\bf{b}})] (\sigma({\bf{r}}) T({\bf{b}}))^k/k!$ is then interpreted as the $k$--cut Pomeron topological cross section. This is entirely incorrect: the reason is that this result neglects the color--coupled channel structure of the intranuclear evolution of the color dipole. Interestingly, a simple closed expression can be obtained with full account for color \cite{Cutting_Rules}: \begin{eqnarray} \Gamma^{(k)}({\bf{b}},{\bf{r}}) = \sigma({\bf{r}}) T({\bf{b}}) \, w_{k-1}(2 \nu_A({\bf{b}}) ) {e^{-2 \nu_A({\bf{b}})} \over \lambda^k} \gamma(k,\lambda) \, , \nonumber \end{eqnarray} where $ \lambda = 2 \nu_A({\bf{b}}) - \sigma({\bf{r}}) T({\bf{b}})$, and $\gamma(k,x)$ is an incomplete Gamma--function. For a more quantitative comparison, consult Fig.~1. We see that the standard Glauber--AGK predicts a strong hierarchy: $k$ cuts are suppressed by the $k$--th power of the dipole cross section.
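To make the contrast concrete, the following is a minimal numerical sketch comparing the naive single--channel Glauber--AGK weights for $k$ cut Pomerons with the Poissonian weights $w_k(\nu_A)$ that enter the quark--nucleus cutting rule quoted above. All input numbers ($\sigma({\bf{r}})$, $\sigma_0$, $T({\bf{b}})$) are illustrative placeholders chosen for the example and are not taken from this contribution.
\begin{verbatim}
# Minimal sketch: compare the single-channel Glauber-AGK weights
#   Gamma^(k) ~ exp(-sigma*T) * (sigma*T)^k / k!
# with the Poisson weights w_k(nu_A), nu_A = sigma_0*T/2, that enter the
# quark-nucleus cutting rule.  All numbers below are illustrative only.
import math

def poisson_weights(mean, kmax):
    """w_k = mean^k / k! * exp(-mean) for k = 0..kmax."""
    return [mean**k / math.factorial(k) * math.exp(-mean)
            for k in range(kmax + 1)]

sigma_r = 5.0    # small-dipole cross section [mb]  (placeholder)
sigma_0 = 50.0   # large-dipole cross section [mb]  (placeholder)
T_b     = 0.21   # optical thickness T(b=0) [1/mb]  (placeholder)

agk = poisson_weights(sigma_r * T_b, 10)        # mean ~ 1:  strong hierarchy
qcd = poisson_weights(0.5 * sigma_0 * T_b, 10)  # mean nu_A ~ 5:  much broader

for k, (a, q) in enumerate(zip(agk, qcd)):
    print(f"k={k:2d}   Glauber-AGK = {a:.2e}   w_k(nu_A) = {q:.2e}")
\end{verbatim}
For a small dipole, $\sigma({\bf{r}})T({\bf{b}})\ll\nu_A$, so the Glauber--AGK weights fall by roughly a factor $\sigma({\bf{r}})T({\bf{b}})$ per additional cut, while $w_k(\nu_A)$ peaks near $k\approx\nu_A$.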
In the QCD--cutting rules there is an additional dimensionful parameter, the opacity of a nucleus for large dipoles $\nu_A$, and the distribution over $k$ is substantially broader. This difference will be more dramatic the smaller the dipole and reflects itself in the predicted $Q^2$--dependence of DIS structure functions with fixed multiplicity of cut Pomerons. More figures, as well as another example for the failure of standard AGK, can be found on the conference website. \section{Conclusions} Topological cross sections can be obtained from nonlinear $k_\perp$ factorization formulas by straightforward substitution (cutting) rules. For a correct isolation of topological cross sections a careful treatment of the color coupled channel properties of the color(!) dipole intranuclear evolution is mandatory. Don't be misguided by simple formulas derived in a single channel context, or by a too literal analogy between color transparency and the Chudakov--Perkins suppression of multiple ionisation by small size $e^+ e^-$ pairs in QED. \begin{figure} \includegraphics[height=.4\textheight, angle=270]{Gl_larger2_proc} \includegraphics[height=.4\textheight, angle=270]{QCD_larger2_proc} \caption{{\bf{Left}}: the profile function for $k$ cut Pomerons according to standard Glauber--AGK for a fairly large dipole $r = 0.6$ fm at $x=0.01$ for $A=208$. {\bf{Right}}: the same for the QCD cutting rules.} \end{figure} \begin{theacknowledgments} It is a pleasure to thank the organizers for the kind invitation. This work was partially supported by the Polish Ministry for Science and Higher Education (MNiSW) under contract 1916/B/H03/2008/34. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Introduction} The emergence of IPv6 in the Internet offers interesting possibilities for studies to compare IPv6 and IPv4 structures and attributes in the Internet. Interesting questions are, \textit{e.g.,}\xspace whether and to what extent IPv6 addresses are co-deployed on existing IPv4 hardware, whether correlated IPv6 and IPv4 attacks originate from the same hosts, what levels of redundancy can be achieved by co-deploying IPv6, or to conduct IPv6-IPv4 geolocation comparisons. An important prerequisite for such studies is the identification and classification of related IPv6 and IPv4 addresses. One such association can be gained from DNS queries, which yields IPv6-IPv4 address pairs that deliver the same service, but may be hosted on different machines. We address the problem of determining whether a set of IPv6 and IPv4 addresses are located on the same machine in a dual-stack setup. As in prior work\,\cite{beverly2015server}, we use the term \textit{sibling} for such a relation. This level of relation may help to draw deeper conclusions from service-level IPv6-IPv4 comparative studies, \textit{e.g.,}\xspace on latency\,\cite{bajpai2015ipv4} or security comparisons\,\cite{czyzbackdoor}. We base our classification approach on active measurements of TCP timestamps, based on prior work by Kohno\,\cite{kohno2005remote}, Zander\,\cite{zander2008improved}, and Beverly and Berger\,\cite{beverly2015server}. Our approach leverages novel features, such as the identification of unique nonlinear patterns caused by variable skew. Based on these features, we train and test various classifier models, using thorough train/test splits and cross-validation to avoid overfitting. Our contributions are: \begin{itemize} \item We identify 682 ground truth\xspace hosts, of which a large fraction exhibits variable clock skew \item We define novel features for sibling classification, capable of, \textit{e.g.,}\xspace identifying and comparing variable clock skew \item We utilize thorough train/test methodology and machine-learning to build and evaluate classifier models \item We achieve excellent train and test performance even for hosts with variable clock skew \item We establish scalability through large-scale measurements and find 149k server siblings \item We publicly share our ground truth\xspace, code, and data \end{itemize} We structure this work as follows. In Section\,\ref{sec:bgrel}, we discuss background and related work. We present our methodology in Section\,\ref{sec:meth}, and define features in Section\,\ref{sec:features}. Section\,\ref{sec:models} presents and evaluates our models, followed by their large-scale application in Section\,\ref{sec:ls}. We discuss outliers and influencing factors in Section\,\ref{sec:discussion}, concluded by Section\,\ref{sec:concl}. \section{Related Work}\label{sec:bgrel} {\noindent}We introduce background and related work in four categories: remote clock skew estimation, remote identification, IPv6-IPv4 comparative studies, and IPv6-IPv4 sibling detection. \textbf{Remote Clock Skew Estimation: }% Accurate time-keeping on computing machinery is a notoriously difficult problem: precisely oscillating hardware is prohibitively expensive for most machines. The dominant protocol to synchronize low-precision machines, NTP, exhibits many difficulties even after decades of development \cite{maloneleap,veitch2016ntp}. 
Hence, clocks in most devices in the Internet do not run in sync with true time, but deviate from it to an extent that is measurable over the Internet. This deviation is called \textit{skew}, and can either be consistent over time (\textit{constant skew}), or vary over time (\textit{variable skew}). Protocols or protocol extensions that include timestamps from a remote machine allow for measuring clock skew with good accuracy by comparing local and remote timestamps over time. This skew can be used to remotely identify network devices. Foundations in this field were laid by Paxson \cite{paxson98} in 1998 and Moon et al. \cite{moon1999estimation} in 1999. Kohno et al.\,\cite{kohno2005remote} in 2005 first apply these techniques to TCP timestamps. They conduct a variety of case studies on the influence of external factors on timestamp behavior, e.g., power-saving or virtualization settings. Murdoch\,\cite{murdoch2006hot}, and Zander and Murdoch \cite{zander2008improved} actively induce variable skew on remote devices to identify Tor hidden services. However, their method of decision making is tailored to few hosts and human inspection. \textbf{Other Remote Identification Techniques: } Determining whether a set of IP addresses belongs to the same router is an important and well-understood problem in Internet research. Scientific tools such as Ally\,\cite{spring2002measuring}, Radar Gun\,\cite{radargun2008} and MIDAR\,\cite{midar2013} use IP Identification (\textit{IP ID}) header values to answer this question, exploiting the fact that the \textit{IP ID} counter is commonly shared between interfaces. Unlike IPv4, IPv6 only offers \textit{IP ID} values in an extension header for fragmented packets (cf.\,RFC\,2460). In 2013, Luckie et al. \cite{luckie2013speedtrap} publish \textit{speedtrap}, which uses forced packet fragmentation for alias resolution in IPv6. In 2015, Beverly et al. \cite{ipv6router} use IPv6 identification values to measure router uptime. \textbf{IPv6-IPv4 Comparative Studies: }% In 2015, Bajpai and Schönwälder\,\cite{bajpai2015ipv4} compare connection setup latency of domains for IPv6 and IPv4 addresses. They cite content being served from different machines as one of the potential reasons for latency differences. In 2016, Czyz et al. \cite{czyzbackdoor} compare security characteristics at service level, \textit{i.e.,}\xspace \textit{AAAA} and \textit{A} records of a domain. They find IPv6 addresses to frequently have worse security characteristics. \textbf{IPv6-IPv4 Sibling Detection: }% The problem of classifying sibling relationships at a machine level was first tackled by Berger et al.\,\cite{berger2013internet}. Using customized DNS replies, they associate DNS client resolvers through opportunistic passive probing and open DNS resolvers through active probing. This technique only works on DNS clients or open resolvers, and requires a DNS server backend infrastructure. In 2015, Beverly and Berger\,\cite{beverly2015server} refine prior work on remote clock skew estimation through TCP timestamps and apply it to actively probe IPv6-IPv4 servers for sibling classification. Their algorithm is as follows: First, they filter non-siblings based on different TCP option signatures. Second, they classify the kind of TCP timestamp behavior (e.g., random, monotonic, non-monotonic). Third, they compare the angle of two constant clock skews to determine a sibling/non-sibling relationship.
They achieve very good metrics (99.6\% precision) on their training data, but acknowledge their comparably small ground truth data set\xspace of 61 hosts might be prone to overfitting. They highlight the existence of hosts with variable clock skew, for which we provide precise classification features and models in this work. \section{Methodology}\label{sec:meth} We put great care into avoiding overfitting and providing a sibling classifier that generalizes well. First, we collect a sufficiently large and diverse ground truth\xspace, significantly exceeding that of prior work. We then conduct a series of traffic measurements against this ground truth\xspace and a large-scale\xspace data set\xspace. Next, we define features potentially suited to discern siblings and non-siblings. Subsequently, we develop sibling decision algorithms based on these features, leveraging both manual analysis and machine learning algorithms. We train and evaluate those algorithms based on a train/test split in 10-fold cross-validation. \textbf{Acquiring a Ground Truth\xspace Data Set\xspace: }\label{sec:gt} A critical success factor of this work is to obtain a ground truth\xspace data set\xspace with numerous and diverse hosts. As prior work does not publish their ground truth\xspace data set\xspace, we set out to construct our own by (i) collecting ground truth\xspace servers from personally known operators and (ii) leveraging public frameworks which enforce an IPv6 and IPv4 address to reside on the same machine. For the latter, we include \textit{RIPE Atlas} anchors\,\cite{ripeatlas} and \textit{NLNOG RING} probes\,\cite{nlnogring}. Table \ref{tab:gt} lists our ground truth\xspace data and compares it to related work. Ripe Atlas anchors exist in two different hardware versions\,\cite{bajpai2015lessons}, which we split out as \textit{RAv1} and \textit{RAv2}. Within the groups of \textit{RAv1} and \textit{RAv2}, there is no hardware or software diversity. \textit{NLNOG Ring} (``\textit{ring''}) nodes are formed by installing a provided image onto a virtual or physical machine. The \textit{ring} group hence offers hardware diversity, but no software diversity. Our \textit{servers} group offers soft- and hardware diversity. The \textit{RAv1}, \textit{RAv2} and \textit{ring} groups offer good geographical diversity, while the \textit{servers} group centers on Germany. Please note that this data set\xspace allows for testing both sibling and non-sibling relationships, as non-siblings can be created by mixing addresses from different servers. \begin{table} \caption{Our ground truth\xspace data set\xspace covers diverse Autonomous Systems (ASes), Countries (CC), and clock skew characteristics. Subsets can have hardware and/or software diversity.} \label{tab:gt} \centering \resizebox{\columnwidth}{!} {\begin{tabular}{lrrrrr} \toprule Data Set\xspace & Hosts& \#AS & \#CC & Skew & Div. 
\\ \midrule 2016-03 (``\textit{03}'') & 458&373&40 & variable & sw+hw\\ \midrule 2016-12 (``\textit{12}'')& 682 & 536 & 80 & variable & sw+hw\\ ~~\textit{servers} & 31 &9& 5& variable & sw+hw\\ ~~\textit{ring} & 430 & 383& 56 & variable & hw\\ ~~\textit{RAv1} & 12 & 12 & 11 & variable & -\\ ~~\textit{RAv2} & 209 & 192 & 64 & constant & - \\ \midrule Beverly\,\cite{beverly2015server}& 61 & 34 & 19 & constant & unkn.\\ \bottomrule \end{tabular}}% \end{table} \textbf{Measurement Methodology: }% To obtain a sufficient amount of fingerprints, we repeatedly connect to every sibling candidate IP address in parallel for a duration of ten hours, with the goal of acquiring at least one TCP Timestamp per minute. We open a HTTP connection and issue a \textit{GET research\_scan} query. As this resource-heavy approach repeatedly requires a full TCP and HTTP connection for every IP address, we process batches of 10k IP address pairs for our large-scale measurements. Our ground-truth measurements fit into one batch. As our methodology aims to identify similarities in clock skew, measurements to all IP address of a sibling candidate need to be conducted in the same batch. We leverage the TCP keepalive option to avoid establishing a new connection every minute, but found many servers to quickly close our connections after few keepalive packets. Our measurement stack consists of a Python3 master that dispatches work to several processes, which in turn start one \textit{{urllib3}} thread per target IP address. Moving to a C library or high-speed packet-processing frameworks such as DPDK \cite{dpdk} or libmoon\,\cite{moongenimc} might significantly reduce the kernel packet processing overhead and allow for larger batches. We acknowledge that our measurements might be considered intrusive, which we discuss later in this Section. \textbf{Measurement Runs: }% In March 2016, we conduct one run against our 2016-03 ground truth\xspace, referred to as \textit{gt1}. In December 2016 and January 2017, we conduct six measurements against the 2016-12 ground truth, referred to as \textit{gt2} through \textit{gt6}. Notable are the runs \textit{gt4} through \textit{gt6}, which cover the time before, during and after the leap second on December 31, 2016. We also conduct a large-scale\xspace measurement campaign in August 2016, which we further discuss in Section \ref{sec:ls}. As the measured offset is relative to the offset of our own clock, we usually disable \textit{ntpd} to avoid creating non-linear\xspace offsets from clock adjustments on our machine. We enable \textit{ntpd} during the \textit{gt2} measurement to test this hypothesis. \textbf{Training and Evaluation Methodology: }% Training an algorithm on a few hundred ground truth\xspace hosts that will later be applied on millions of hosts comes with two challenges: First, the ground truth obtained might not be a representative sample of the full population of hosts in the Internet. Second, classifier training may overfit the training data, achieving very good train/test metrics on the ground truth, but failing on large-scale\xspace application. We aim to mitigate the risk of training on a non-representative sample by establishing software, hardware, geographical, and administrative diversity in our ground truth\xspace. We find our ground truth\xspace to exhibit diverse TCP timestamping characteristics even when compared to our large-scale\xspace data set\xspace. 
To avoid overfitting our ground truth\xspace for both machine-learned and manually assembled decision algorithms, we deploy a strategy of train/test splits and cross-validation. We also aim to minimize model complexity to provide better generalization.% \textbf{Ethical Considerations: }% \noindent We follow an internal multi-party approval process, among others based on Partridge and Allman\,\cite{partridge2016ethical}, before any measurement activities are carried out. We conclude that our measurements and the resulting data can not harm individuals, but may result in investigative effort for system administrators. We aim to minimize this effort by deploying scanning best-practice efforts of (i) using dedicated scan machines with explanatory websites, (ii) maintaining a blacklist, (iii) reply to every abuse e-mail (seven received, one asking for blacklisting, six curious), and (iv) request URLs preceded by \textit{{/research\_scan}} to allow quick identification of our connections. Furthermore, based on user discussions, we will respect \texttt{\footnotesize{robots.txt}} and set a descriptive HTTP user agent in future work. To conclude, we argue that no individual was harmed by our active measurements. We also conclude that the gathered data bears little privacy intrusion, and hence release all data that was based on publicly available sources. \section{Features}\label{sec:features} In this section, we present the set of features investigated and later leveraged by our algorithms for sibling detection. We present the features in the order they are calculated, as some features depend on the existence of others. It is important to note that there is a distinction in the nature of these features. Namely, a feature can be either \textit{falsifying} or \textit{verifying}. \textit{Falsifying} features may only help to determine a non-sibling relationship, whereas \textit{verifying} features can actually determine a sibling relationship with confidence. For every group of features, we explain their calculation and list their specific outputs, where \textit{output$_6$} indicates an IPv6 feature, and \textit{output$_4$} an IPv4 feature. \textbf{Network Level Features: }% We test various network level features, such as network latency, initial Time-to-Live values, OS predictions, or open ports. We find those features to have very low discriminative power and exclude them from further analysis. Our technical report gives details on these measurements and the obtained results\,\cite{rouhi16}. \textbf{TCP Options Fingerprint: }% Similar to Beverly and Berger\,\cite{beverly2015server}, we leverage TCP options as a first falsifying feature. We compare the \emph{presence} and \emph{order} of options and the \textit{no operation (NOP)} padding bytes. Additionally, if the \textit{window scale} option is present, we consider its value for the process of falsifying non-siblings, as it has demonstrated high discriminative capability in our test data set\xspace. We find values of some options such as \textit{MSS} to frequently differ by non-static offsets even for ground truth siblings. Thus, we do not include those in this filtering step. As an example, we frequently find the option fingerprint \texttt{MSS-SACK-TS-NOP-WS07} in our ground truth\xspace data set\xspace, and the \texttt{MSS-NOP-WS08-SACK-TS} in our large-scale\xspace data set\xspace. 
We highlight that asking for more exotic options slightly increases diversity in answers, but we did not find the effect strong enough to justify the additional measurement overhead. \textbf{Features}: \textit{Options$_4$}, \textit{Options$_6$}, $\textit{opts\_diff} = !(\textit{Options$_4$} == \textit{Options$_6$})$. \textbf{Remote TCP Timestamp Clock Frequency: }% In a next step, we calculate the remote clock frequency as employed by Kohno et al.\,\cite{kohno2005remote}. We first calculate relative remote TCP timestamps (32-bit unsigned integers) as $v_i=T_i-T_1$, where $T_i$ is the TCP timestamp contained in the $i$-th received packet. Then, the relative received timestamps are calculated as $x_i = t_i - t_1$, where $t_i$ is the time the $i$-th packet was observed by the prober and $x_i$ is in seconds. We check the resulting array $[x, v]$ for monotonicity and fix wrap-arounds. We then solve a linear regression against $[x ,v]$, where the resulting slope is the remote clock frequency \textit{Hz}. We find typical values of 10, 100, 250 and 1000 \textit{Hz}, all within the range of 1 to 1000 Hz as in RFC\,7323. We also save the $R^2$ value of the linear regression, $R_\text{\textit{Hz}}^2$. Low or different $R_\text{\textit{Hz}}^2$ values may indicate erratic time-stamping behavior such as randomized timestamps. We expect a sibling's clock to tick with the same frequency for IPv6 and IPv4. Hence, for each sibling candidate, we calculate the difference between $\text{\textit{Hz}}_4$ and $\text{\textit{Hz}}_6$ as a falsifying metric. This produces one false negative occurrence in the ground truth data set\xspace, which we discuss in Section \ref{sec:discussion}. \textbf{Features}: $\text{\textit{Hz}}_4$, $\text{\textit{Hz}}_6$, \textit{hz\_diff}, $R_\text{\textit{Hz4}}^2$, $R_\text{\textit{Hz6}}^2$. \textbf{Raw TCP Timestamp Value: }% As a next step, we compare the raw TCP timestamp values $T_1^4$ and $T_1^6$ of a sibling candidate pair. As the TCP timestamp clock is, except for wrap-arounds, typically monotonically increasing, the raw value of the 32-bit timestamp offers a certain level of entropy across hosts. We expect the values for a sibling to be very close for IPv6 and IPv4 as they are generated from the same clock. Using the \textit{Hz} values calculated in the previous paragraph, we can calculate the absolute difference between two remote clocks using Equation \ref{eq:tcpraw}: \begin{align*} \Delta_{tcp} &= T_1^4 / \text{\textit{Hz}}_4 - T_1^6 / \text{\textit{Hz}}_{6} &[s] \\ \Delta_{rec} &= t_i^4 - t_i^6 &[s] \numberthis \label{eq:tcpraw} \\ \Delta_{tcpraw} &= | \Delta_{tcp} - \Delta_{rec}| &[s] \end{align*} First, we convert the raw TCP timestamps to seconds by dividing by \textit{Hz}. Second, we calculate the difference between the local received timestamps. The final metric $\Delta_{tcpraw}$ is obtained by computing the absolute difference between the two values mentioned above. This metric can be interpreted as the time difference between the last TCP timestamp counter reset for IPv6 and IPv4. \textbf{Feature}: $\Delta_{tcpraw}$. \textbf{Clock Offset and Skew Calculation: }% In a next step, we can estimate the remote clock offset, \textit{i.e.,}\xspace the deviation of a remote clock from its expected values. To do so, we calculate the expected relative remote time $w_i = v_i/\text{\textit{Hz}}$ and the offset to the observed time $y_i = w_i - x_i$, both measured for the $i$-th observed packet. A compact sketch of these per-candidate computations is given below.
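The following is a minimal, hedged sketch of how the frequency, raw-timestamp, and offset features described above could be computed; variable and function names are ours and do not refer to the released measurement code.
\begin{verbatim}
# Sketch of the per-address timestamp features (Hz, R^2_Hz, offsets) and of
# Delta_tcpraw for a candidate pair.  Names are ours, not from the paper's code.
import numpy as np
from scipy import stats

def hz_and_offsets(recv_times, tcp_ts):
    """recv_times: local receive times t_i [s]; tcp_ts: remote TCP timestamps T_i."""
    x = np.asarray(recv_times, dtype=float)
    x = x - x[0]                              # relative local time x_i [s]
    T = np.asarray(tcp_ts, dtype=float)
    d = np.diff(T)
    d[d < 0] += 2**32                         # undo 32-bit wrap-arounds
    v = np.concatenate(([0.0], np.cumsum(d))) # relative remote ticks v_i
    hz, _, r, _, _ = stats.linregress(x, v)   # slope = remote clock frequency
    y = v / hz - x                            # observed clock offset y_i [s]
    return hz, r**2, x, y

def delta_tcpraw(t1_4, T1_4, hz4, t1_6, T1_6, hz6):
    """|Delta_tcp - Delta_rec|: difference of the times since the last
    timestamp-counter reset, seen from the IPv4 and the IPv6 address."""
    return abs((T1_4 / hz4 - T1_6 / hz6) - (t1_4 - t1_6))
\end{verbatim}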
The resulting array $[x_i, y_i]$ is then used for more fine-grained investigations such as clock skew estimation, which will be explained in the following paragraphs. This array is also used for plots in this work. As noted by Kohno et al.\,\cite{kohno2005remote}, the derivative of this array would theoretically be the \textit{skew} of the remote clock. However, due to delay variances and various other effects, it is not sound to form the derivative of this array, but we rather use a more stable method to obtain the skew. In this work, we use Robust Linear Regression~\cite{theil1992rank} to obtain a robust and outlier-resistant regression, whose slope $\alpha$ we use as an estimation of remote clock skew. The rationale behind using this method is that offset points are in nature heavily impacted by various network dynamics \cite{paxson98} and hence prone to outliers. Additionally, we store $R^2_{skew}$, which is the linear regression's coefficient of determination and is used to estimate the quality of the line fitted by the regression. \textbf{Features}: $\alpha_4$, $\alpha_6$, $\alpha_\textit{diff}$, $R^2_\textit{skew4}$, $R^2_\textit{skew6}$, $R^2_\textit{skewdiff}$. \textbf{Calculation of Dynamic Range: }% Another feature we consider is the \textit{dynamic range} of the offset array: While some hosts exhibit an offset of several seconds over the course of 10 hours, other hosts exhibit an offset of a few milliseconds (cf. Figure \ref{fig:skewconstvar}). As this dynamic might be valuable information, we aim to extract it as a feature. To calculate this dynamic range in a manner that is stable against latency-caused outliers, we first prune the top and bottom 2.5\% of offset arrays, and then store the difference between maximum and minimum as \textit{rng}$_4$ and \textit{rng}$_6$. \textbf{Features}: \textit{rng}$_4$, \textit{rng}$_6$, $\textit{rng\_diff}$=$|\textit{rng}_4$-$\textit{rng}_6|$, $\textit{rng\_avg}$=$(\textit{rng}_4 + \textit{rng}_6)/2$, $\textit{rng\_diff\_rel}$=$\textit{rng\_diff}/\textit{rng\_avg}$. \begin{figure}[h] \begin{subfigure}[b]{0.45\columnwidth} \input{figures/tcptimestamp/lin_skew1.tex} \vspace{-5mm} \end{subfigure} \begin{subfigure}[b]{0.45\columnwidth} \input{figures/tcptimestamp/spline_lrg_dynam1.tex} \vspace{-5mm} \end{subfigure} \caption{Siblings with constant skew and small dynamics (left), and variable skew and large dynamics (right).} \label{fig:skewconstvar} \end{figure} \textbf{Variable Skew Calculation: } While $\alpha$ and $R^2_{skew}$ fuel sibling/non-sibling classification for hosts with constant skew, additional steps are necessary to gain insight into the behavior of sibling candidates with variable skew. To approach variable skew, we fit a polynomial spline against the $[x_i, y_i]$ arrays of a sibling candidate. Among various options to fit polynomial splines, we find it well-suited to pick 13 equidistant offset points and fit cubic splines between these candidate points in an approximative manner. Figure~\ref{fig:splinefit_sib} shows the curve fitting approach for both siblings and non-siblings. In the next step, we minimize the area between the two splines by shifting the \textit{y}-offset of one spline. This minimal area between the v4-spline and the v6-spline, \textit{spl\_diff}, is an output of our variable skew calculation. As the area between the two splines is also proportional to the dynamics of offsets, we also provide a scaled version $\textit{spl\_diff\_scaled}=\textit{spl\_diff}/\textit{rng\_diff}$. A rough sketch of the skew, range, and spline-difference computations follows.
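The sketch below is illustrative only and not identical to the released measurement code; in particular, it uses plain interpolation at the 13 knots and shifts one spline by the median of the pointwise difference, which minimizes an $L_1$ area.
\begin{verbatim}
# Simplified sketch of the robust skew, dynamic range, and spline-difference
# features.  Illustrative only; not identical to the released measurement code.
import numpy as np
from scipy.stats import theilslopes
from scipy.interpolate import CubicSpline

def skew_and_range(x, y):
    """Theil-Sen slope alpha and the outlier-pruned dynamic range rng."""
    alpha = theilslopes(y, x)[0]
    lo, hi = np.percentile(y, [2.5, 97.5])    # prune top/bottom 2.5 %
    return alpha, hi - lo

def spline_diff(x4, y4, x6, y6, n_knots=13):
    """Area between the v4 and v6 offset splines after an optimal constant
    y-shift (for an L1 area the optimal shift is the median difference)."""
    def fit(x, y):
        knots = np.linspace(x.min(), x.max(), n_knots)
        return CubicSpline(knots, np.interp(knots, x, y))
    lo, hi = max(x4.min(), x6.min()), min(x4.max(), x6.max())
    grid = np.linspace(lo, hi, 500)
    d = fit(x4, y4)(grid) - fit(x6, y6)(grid)
    return float(np.mean(np.abs(d - np.median(d))) * (hi - lo))
\end{verbatim}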
\textbf{Features}: \textit{spl\_diff}, \textit{spl\_diff\_scaled}. \input{figures/splinefit_sib.tex}% \section{Classifier Model Training and Evaluation}\label{sec:models} Based on the discussed features, it is possible to train classifier \textit{models} that can be used to predict whether a pair of IP addresses is a sibling or not. In this section, we explain our approach to train and evaluate various such models, preceded by an explanation of our evaluation methodology. \subsection{Model Evaluation}% To evaluate classifier models, a wide range of metrics exists, of which we focus on two: First, we prioritize high \textit{precision}, defined as $tp/(tp+fp)$, \textit{i.e.,}\xspace a low number of false positives. With higher precision, fewer non-siblings will be among predicted siblings. Second, we also want metrics for other aspects of performance to be stable to avoid overprioritizing precision. For this, we use the Matthews Correlation Coefficient \textit{(MCC)}\,\cite{mcc-sklearn} as defined in Equation \ref{eq:mcc}. This coefficient nicely and robustly factors in all kinds of aspects of classifier performance for binary problems, where 1 would be a perfect score and 0 a coin flip. \begin{equation} \label{eq:mcc} \text{MCC} = \frac{ tp \cdot tn - fp \cdot fn } {\sqrt{ (tp + fp) ( tp + fn ) ( tn + fp ) ( tn + fn ) } } \end{equation} For training and evaluation purposes, we generate around 400k\footnote{$(n\cdot(n-1)), n=682$, we use both the v6-v4 and v4-v6 combination} non-siblings from our data set\xspace by mixing the IPv6 address of one sibling with the IPv4 addresses of other siblings. We generate the maximum possible number of non-siblings for each evaluation, and then equally weight both output classes for the classifier training. We generate non-siblings only for siblings actually used, \textit{e.g.,}\xspace we first split siblings into train and test, and then form the non-siblings based on these splits. Table \ref{tab:gt-eval-hl} shows the evaluation results for our different models. Table \ref{tab:gt-eval-groups} investigates whether a model's results are dependent on the evaluated subgroup. Having established our evaluation approach, we now discuss filtering steps applied during feature calculation, followed by the individual models. \subsection{First-Order Filtering for All Models}\label{sec:firstorderfilters} Certain sanity checks, or first-order filters, apply to all models; \textit{e.g.,}\xspace if the calculation of \textit{Hz} fails, many dependent features can not be calculated. \textbf{Different TCP Options}: If sibling candidates offer different TCP options, our algorithm stops with a non-sibling decision before continuing with feature calculation. \textbf{Hz Calculation}: When calculating the remote clock frequency, the linear regression applied to do so may fail for reasons such as randomized timestamps. To only incorporate sound remote Hz frequencies, we require the $R^2$ parameter to be above $0.9$. We classify sibling candidates with failed Hz calculations as non-siblings. \textbf{Different Hz Values}: When a sibling candidate offers different \textit{Hz} values for IPv6 and IPv4, calculation of dependent features becomes meaningless. We hence decide candidates with different Hz values as non-siblings. \textbf{Too Small Hz Values: } When a Hz value below 1\,Hz is calculated, we also stop further calculation and decide for a non-sibling relationship.
In all these cases, we decide for a non-sibling relationship and stop calculating other features. This works with a very low false negative rate (4 in 682) for our ground truth\xspace data set\xspace. We discuss those outliers in Section \ref{sec:outliers}. \subsection{Beverly/Berger Algorithm} For comparison, we implement the sibling detection algorithm as proposed by Beverly and Berger~ \cite{beverly2015server}. The algorithm is designed for constant skew and works primarily by comparing the constant skew of sibling candidates. The algorithm does not find siblings with high precision ($<1$$\%$, see Table \ref{tab:gt-eval-hl}) in our combined data set\xspace, which includes many hosts with variable clock drift. We find the algorithm's performance to vary slightly for different groups of our test set (see Table \ref{tab:gt-eval-groups}). As this variation is not systematic (\textit{i.e.,}\xspace due to overwhelming group characteristics), we argue that this is based on circumstantial existence of hosts that are well-fit to Beverly and Berger's algorithm. \begin{table} \caption{Hand-Tuned and Machine-Learned Classifiers train and test very well, speaking to good generalization. Beverly algorithm is not generalizing well to the more diverse data set\xspace.} \label{tab:gt-eval-hl} \centering \resizebox{\columnwidth}{!} {\begin{tabular}{lrrrrr} \toprule Algo. & Train DS & Test DS & Prec. & MCC & Type \\ \midrule Bev. & Bev. & Bev. & 99.6\% & n/a & Train \\ Bev. & Bev. & 03$\cup$12 & .9\% & .17 & Test \\ HT1 & 03 & 03 & 100\% & .99 & Train\\ HT1 & 03 & 12$\setminus$03 & 99.49\% & .98 & Test\\ ML1 & 03$\cup$12 & 03$\cup$12 & 99.36\% & 1.0 & Train \\ ML1 & 03$\cup$12 & 03$\cup$12 & 99.88\% & 1.0 & Test \\ \bottomrule \multicolumn{6}{p{\columnwidth}}{\footnotesize{Legend: HT1 denotes our hand-tuned algorithm. 03 denotes the 2016-03 data set\xspace, 12 the 2016-12 data set\xspace. ML1 values are the arithmetic mean of 10-fold cross-validation. All tests against 2016-12 are arithmetic means over the results from measurement runs 2 through 7. Calculation of MCC for Bev. data set not possible from metrics given by Beverly and Berger~ \cite{beverly2015server}.}}\\ \end{tabular}}% \end{table} \begin{table} \caption{Performance of Beverly algorithm slightly dependent on group, our hand-tuned and machine-learned (not shown) algorithm independent from group.} \label{tab:gt-eval-groups} \centering \resizebox{\columnwidth}{!} {\begin{tabular}{lrrrrr} \toprule Algo. & Train DS & Test DS & Prec. & MCC & Type \\ \midrule Bev. & Bev. & 03$\cup$12-server & 8.33\% & .17 & Test \\ Bev. & Bev. & 03$\cup$12-ring & 1.09\% & .09 & Test \\ Bev. & Bev. & 03$\cup$12-rav1 & 8.35\% & .00 & Test \\ Bev. & Bev. & 03$\cup$12-rav2 & .79\% & .05 & Test \\ \midrule HT1 & 03 & 12$\setminus$03-server & 100\% & .99 & Test\\ HT1 & 03 & 12$\setminus$03-rav2 & 99.16\% & .98 & Test\\ HT1 & 03 & 12$\setminus$03-ring & 100\% & .98 & Test\\ HT1 & 03 & 12$\setminus$03-rav1 & 100\% & 1.0 & Test\\ \bottomrule \end{tabular}}% \end{table} \subsection{Hand-Tuned Decision Algorithm} One of our classifiers is a hand-tuned decision algorithm similar to Beverly and Berger's\,\cite{beverly2015server}. We hand-tune our algorithm against our \textit{2016-03} ground truth\xspace and test it against the newly added hosts of the \textit{2016-12} ground truth\xspace. This results in a 40\% train and 60\% test set, with all subgroups achieving $>$40\% test size. 
\input{algos/classification.tex} The formalized algorithm is displayed in Algorithm~\ref{algo:decision algorithm}. The following provides a terse description of its high-level decision-making steps. The algorithm offers many subtleties, and we recommend our source code and tech report \cite{rouhi16} as a detailed reference. Similar to Beverly and Berger, we first eliminate candidates with different TCP options. Then, our algorithm performs first-order filtering as described in Section \ref{sec:firstorderfilters}. Third, we eliminate candidates with raw TCP timestamps too far apart. In line 5, we test whether to apply linear logic by evaluating the $R^2_\textit{skew}$ values of robust linear regression. Skews with differing slope signs are classified as non-siblings (line 6), whereas small slope differences are classified as siblings (line 7). In lines 8 and 9 we classify those cases as non-siblings if only one skew is clearly constant and there is a large difference in $R^2_\textit{skew}$ values. If linear testing was not conducted or not decisive, we apply nonlinear testing. For this, we first test the dynamics of both signals to exclude cases with negligible (line 11) or very different dynamics (lines 12 and 13). We take further decisions based on the minimal area between non-linear splines in lines 14 to 18. Based on whether the overall dynamics are large (lines 14-16) or small (lines 17-19), we apply different thresholds. We found this simplistic distinction between large and normal dynamics to provide good results on our data set\xspace, but acknowledge that this step could potentially be improved by means of finer tuning, for example by scaling the threshold by the dynamics. Our algorithm, similar to Beverly's and Berger's, features some guard intervals. In those, we can not take a meaningful sibling/non-sibling decision and decide for unknown. As visible in Tables \ref{tab:gt-eval-hl} and \ref{tab:gt-eval-groups}, our algorithm achieves very good ($>$$99\%$ precision, $\geq$$.98$ MCC) metrics in training and testing and is insensitive to subgroups. We argue that this algorithm likely generalizes well to new data sets\xspace. \subsection{Decision Tree}% Using scikit-learn~\cite{sklearn}, we train a CART Decision Tree on our features described in Section~\ref{sec:features}. For each of the seven measurement runs \textit{gt1} through \textit{gt7}, we do a 10-fold cross-validation with proportional selection from all subgroups (\textit{servers}, \textit{ring}, \textit{rav1}, \textit{rav2}). We find all models to consistently perform well with low variance, and report the arithmetic mean across validation folds and measurement runs in Table \ref{tab:gt-eval-hl}. We also find all models' performance to be independent of groups. For model selection, we export all generated decision trees and find very little variance: All trees contain just one branch, where they use a single threshold against the $\Delta_{tcpraw}$ feature to decide for sibling or non-sibling, \textit{i.e.,}\xspace as a \textit{verifying} metric. Our hand-tuned algorithm used a threshold of $>$$1s$ as a \textit{falsifying} metric. We find the majority of models to pick a threshold of $>$$0.2557s$ for non-siblings and pick that value for our final model. Please note that the \textit{ML1} model is preceded by the first-order filters described in Section \ref{sec:firstorderfilters}. We argue that this model, due to its simplicity, will likely generalize best and recommend it for further use.
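As an illustration of this training setup, the following is a minimal, hedged sketch using scikit-learn: a CART decision tree evaluated with precision and MCC in 10-fold cross-validation. Feature and file names are placeholders chosen for the example, not the names used in our released code.
\begin{verbatim}
# Minimal sketch of the decision-tree training and 10-fold cross-validation.
# X holds per-candidate features (e.g. hz_diff, delta_tcpraw, alpha_diff,
# rng_diff, spl_diff); y holds labels (1 = sibling, 0 = non-sibling).
# The file names below are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.metrics import make_scorer, matthews_corrcoef

X, y = np.load("features.npy"), np.load("labels.npy")

clf = DecisionTreeClassifier(class_weight="balanced", random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(clf, X, y, cv=cv,
                        scoring={"precision": "precision",
                                 "mcc": make_scorer(matthews_corrcoef)})
print("precision:", scores["test_precision"].mean(),
      " MCC:", scores["test_mcc"].mean())

# Inspect the learned structure; in our runs the trees reduce to a single
# split on the delta_tcpraw feature (threshold around 0.2557 s).
clf.fit(X, y)
print(export_text(clf, feature_names=["hz_diff", "delta_tcpraw", "alpha_diff",
                                      "rng_diff", "spl_diff"]))
\end{verbatim}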
\section{Large-Scale Application \& Results}\label{sec:ls} We apply our measurement methodology and classifier models to a large-scale\xspace data set\xspace to evaluate their scalability and suitability for finding sibling pairs for large-scale structural Internet studies. We first identify sibling candidates by resolving \textit{A} and \textit{AAAA} records for 162M domains, obtained from both registrars and ``drop list'' resellers. % We filter blacklisted IP addresses, and form sibling candidates from all possible \textit{A} and \textit{AAAA} combinations per domain. Table \ref{tab:ls1} details the statistics of this process by top-level domain. \input{tables/largescale-domainstats.tex} We find the number of sibling candidates to be bound by the relatively few \textit{AAAA} records: We obtain only 6M \textit{AAAA} records, compared to 168M \textit{A} records. The number of sibling candidates, as the cross-product of \textit{A} and \textit{AAAA} records, will multiply with increased IPv6 deployment. Processing of the resulting 8.9M candidates is quantified in Table \ref{tab:ls2}. \input{tables/largescale-domainstats-2.tex}\\ After processing our sibling candidates to unique IPv6 and IPv4 addresses, we scan those addresses with zmap \cite{durumeric2013zmap} on port TCP/80. We leverage our previously developed IPv6-capable version of zmap for this \cite{Gasser2016ipv6}. We find the majority of IP addresses to be responsive on port TCP/80. Discovery on more ports, such as TCP/443, might yield more responsive hosts. We next eliminate machines that do not offer the TCP Timestamp option, which is a prerequisite to applying our technique. We find 57\% (IPv4) and 69\% (IPv6) of responsive IP addresses to offer the TCP Timestamp option. The higher percentage for IPv6 could be caused by IPv6 being offered by newer machines with more modern TCP configurations. As we use the TCP option fingerprint of a remote host to filter for non-siblings, we extend zmap with TCP options capabilities. We chose to form a complex TCP options payload as this offers more possibilities for different TCP stacks to offer different replies. We ask for the set of options of: \texttt{$<$SACK permitted,} \texttt{Timestamps,} \texttt{Window Scale,} \texttt{TCP Fast Open, Unknown, MPTCP$>$}. We include an unknown TCP option (by using a reserved option identifier) as this may also trigger a range of different responses, from simple mirroring to correctly dropping the unknown option. However, this step removes only few hundred non-siblings in this large-scale\xspace data set\xspace. We reassemble sibling candidates with both usable IPv6 and IPv4 addresses, resulting in 6.6M sibling candidates. Those 6.6M sibling candidates represent 371k unique IPv6-IPv4 address combinations. In the next step, we measure the 371k unique candidate pairs by dividing them into batches of 10k addresses. Through these measurements, we obtain a sufficient count of timestamps for 351k of 371k address pairs. \input{tables/largescale-domainstats-3.tex} We then apply all discussed algorithms on this data set\xspace and display the results in Table \ref{tab:ls3}. Several conclusions stem from its analysis: First, our \textit{HT} algorithm was possibly tuned too conservatively and only identifies about a third of the siblings identified by our \textit{ML1} algorithm. Second, our reproduction of Beverly's and Berger's algorithm produces the most siblings, but its very low precision numbers as evaluated before likely cause these to be mainly false positives. 
Third, the amount of unknown/error decisions is surprisingly high. In the ground truth evaluation, we mapped those decisions to a non-sibling decision to allow for binary evaluation. As we can not evaluate the large-scale measurement against a ground truth, we display the unknown/error category. Future work might dig into these unknown/error cases, try to find a ground truth\xspace, and perform further optimization. We investigate the overlap of siblings found by \textit{HT} and \textit{ML1}, and find 57k siblings to intersect. Only a few hundred of the \textit{HT} siblings do not intersect, caused by the more aggressive $\Delta_{tcpraw}$ threshold in \textit{ML1}. Coming back to our initial goal, finding a confident set of siblings to study Internet-wide structural behavior, we conclude that both the \textit{ML1} and the \textit{HT} can find a significant number of siblings in Internet-wide scans. For model selection, we repeat our argument that the simplicity of the \textit{ML1} makes it likely to generalize best, and the \textit{HT} model likely suffers from a high false negative rate on our large-scale\xspace data set\xspace. \section{Outliers \& Discussion}\label{sec:discussion} In this section, we analyze outliers in our ground truth\xspace and discuss the influence of several factors on our measurements. \subsection{Analysis of Ground Truth Outliers}\label{sec:outliers} We discuss the few outliers in our ground truth measurements and evaluation. First, the Ripe Atlas node \#6220 across measurements (i) returns different Hz values for IPv6 and IPv4 (100 and 1000), and (ii) has a significant ($\sim$$2^{28}$) raw TCP timestamp difference, equaling $\sim$$3$ days at 1000\,Hz. We have contacted the operators of this node to possibly obtain an explanation for this behavior. Second, we find the hosts \textit{ovh0X.ring.nlnog.net} to return varying TCP options for IPv4 addresses over the course of a measurement, typically varying between only \texttt{MSS} and \texttt{MSS-SACK-TS-NOP-WS07}. Using tracebox\,\cite{detal2013revealing}, we typically receive responses that strip all but the MSS option at the last or penultimate hop. We conduct traceroute path measurements from these machines and find the default IPv4 route traversing several RFC\,1918 IP addresses, possibly indicating NAT or tunneling techniques interfering with our measurements. It is unclear why the hosts sometimes proceed with a full set of TCP options. We consider both hosts legitimate cases for our classifiers to take a non-sibling decision. While Ripe Atlas and NLNOG Ring ensure that the associated IP addresses reside on the host, deployed middleboxes or proxies seem to distort this sibling relationship. Hence, we consider it positive that our models did not classify these cases as siblings.
\begin{figure*}[!tb] \centering \begin{subfigure}[b]{0.225\textwidth} \input{figures/leapseconds/plots_tex.bit01.ring.nlnog.net-9065417606542677966.tex} \end{subfigure}% \begin{subfigure}[b]{0.225\textwidth} \input{figures/leapseconds/plots_tex.netrouting02.ring.nlnog.net--6616766576734116169.tex} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \input{figures/leapseconds/plots_tex.nynex01.ring.nlnog.net-3974175742968011748.tex} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \input{figures/leapseconds/plots_tex.world4you01.ring.nlnog.net--686520199934996791.tex} \end{subfigure} \vspace{-.5cm} \caption{Leap second observations of four NLNOG ring siblings.} \label{fig:leapseconds} \end{figure*} \subsection{Influence of Measurement Machine's Clock Skew} Irregularities in the measurement machine's clock may influence our measurements. We aim to minimize those irregularities by maintaining thermal conditions for the period of a measurement and by disabling the \textit{ntpd} daemon during our measurements. We conduct one measurement run (\textit{gt2}) with \textit{ntpd} enabled and find \textit{ntpd} interventions to be visible in manual analysis (through non-linear\xspace dynamics replicated across all hosts). However, all classifier models returned equally good results for this measurement run. \subsection{Influence of Leap Seconds} We conduct measurement runs before, during, and after the leap second on Dec 31, 2016. We find the metrics of our classifiers to be invariant to this circumstance, but interesting effects appear from visual inspection. Figure \ref{fig:leapseconds} shows clock offsets for the measurement during the leap second, which happens 5 hours into the measurement and is marked by a vertical line. We show four hosts with interesting behavior, most ($>$$99\%$) servers show no effect from the leap second at all. This is expected behavior, as the TCP timestamping clock is supposed to monotonically tick without regard to leap seconds. Host \textit{bit01} shows a typical reaction unaware of leap seconds, with \textit{ntpd} adjusting clock speed after the leap second. Host \textit{netrouting02} seems to smooth out the leap second by starting to slow its clock about an hour before the leap second, a technique similar to the \textit{leap smear} deployed by Google~\cite{googleleapsmear}. Host \textit{nynex01} reacts to the leap second with some delay, probably caused by periodic \textit{ntpd} polling. Remarkably, it rapidly adjusts its clock by a full second with some minor corrections following. Host \textit{world4you} periodically adjusts its clock by a hard change instead of changing the tick speed. For some time after the leap second, no clock change is conducted, likely until local time has surpassed its remote equivalent. \subsection{Influence of Ripe Atlas Hardware Version} Ripe Atlas anchors exist in two hardware versions which offer different characteristics\,\cite{bajpai2015lessons}. Interestingly, remotely measuring the TCP timestamps of Ripe Atlas anchors reveals their hardware version, as all \textit{v1} anchors exhibit variable skew, while all \textit{v2} anchors offer constant skew (cf. Figure \ref{fig:ra_skew}). 
\begin{figure} \begin{subfigure}[b]{.45\columnwidth} \input{figures/ra/plots_tex.RA_6016-752441353800632200.tex} \end{subfigure} \begin{subfigure}[b]{.45\columnwidth} \input{figures/ra/plots_tex.RA_6019--6905216013479891187.tex} \end{subfigure} \vspace{-.5cm} \caption{Ripe Atlas nodes with variable skew for \textit{v1} (left) and constant skew for \textit{v2} (right).} \label{fig:ra_skew} \end{figure} \section{Conclusion and Future Work}\label{sec:concl} We systematically approach the classification of IPv6-IPv4 server siblings through active measurements, mainly reliant on TCP timestamping to estimate remote clock skew. We extract a variety of features from our active measurements and feed these into (i) a reproduction of existing work's algorithm, (ii) a hand-tuned algorithm developed by us, and (iii) machine-learning approaches. We find our algorithm, which significantly extends existing work by various features, to perform very well. Our machine-learning trained decision tree surprises with a very simple, but highly precise model. We apply our classifier models against a large-scale measurement and find different, but always significant, counts of siblings based on domain lists. We discuss outliers, likely caused by proxies or middleboxes, and the influence of leap seconds on the TCP timestamping clock. We release our ground truth, code and data to the scientific community to allow for reproducibility and further research in this area. \textbf{Future Work:} One direction of future work is the curation of larger sibling ground truth data sets\xspace. We hope to start this process with the release of our ground truth\xspace data set\xspace on GitHub. Another direction is the reduction of measurement effort in terms of duration, frequency, or both. In particular, the very discriminative raw TCP timestamp feature should work well with few data points, and hence only require few packets instead of hour-long measurements. Furthermore, the application of our technique to passive traffic captures to distinguish siblings is a promising research goal. \textbf{Data Release: }% We publish our curated ground truth\xspace data set\xspace, acquired raw data, and developed source code for both reproducibility (cf.~\cite{AcmArtifacts,reproduc2017}) and use by other researchers under: \centerline{\texttt{\url{https://github.com/tumi8/siblings}}} This website includes directions on how to obtain the large raw data set\xspace from an archival storage server. \textbf{Acknowledgments: } We gratefully thank the various contributors of ground-truth server data. This work has been supported by the German Federal Ministry of Education and Research, project X-CHECK, grant 16KIS0530, and project AutoMon, grant 16KIS0411.
\section{Introduction} A system of second order ordinary differential equations (SODE), whose coefficient functions are positively two-homogeneous, can be identified with a second order vector field, which is called a spray. If such a system represents the variational (Euler-Lagrange) equations of the energy of a Finsler metric, the system is said to be Finsler metrizable, and the corresponding spray represents the geodesic spray of the Finsler metric. In such a case, the system comes with a fixed parameterisation, which is given by the arc-length of the Finsler metric. An orientation preserving reparametrisation of a homogeneous SODE can substantially change the geometry of the given system, \cite[Section 3(b)]{Douglas27}. Two sprays that are obtained by such a reparametrisation are called projectively related. It has been shown in \cite{BM12} that the property of being Finsler metrizable is very unstable to reparameterization and hence to projective deformations. Within the geometric setting that one can associate to a spray, important information is encoded in the Riemann curvature tensor ($R$-curvature or Jacobi endomorphism). Projective deformations that preserve the Riemann curvature are called Funk functions. In this paper we are interested in the following question, which is due to Zhongmin Shen, \cite[page 184]{Shen01}. Can we projectively deform a Finsler metric, by a Funk function, and obtain a new Finsler metric? In other words, can we have, within the same projective class, two Finsler metrics with the same Jacobi endomorphism? We prove, in Theorem \ref{thm1}, that projective deformations by Funk functions of geodesic sprays, of non-vanishing scalar flag curvature, do not preserve the property of being Finsler metrizable. As a consequence we obtain that for an isotropic spray, its projective class cannot have more than one geodesic spray with the same Riemann curvature as the given spray. The negative answer to Shen's question is somewhat surprising and heavily relies on the fact that the original geodesic spray is not $R$-flat. The projective metrizability problem for a flat spray is known as (the Finslerian version of) Hilbert's Fourth Problem, \cite{Alvarez05, Crampin11}. It is known that in the case of an $R$-flat spray, any projective deformation by a Funk function leads to a Finsler metrizable spray, see \cite[Theorem 7.1]{GM00}, \cite[Theorem 10.3.5]{Shen01}. Given the negative answer to Shen's question, it is natural to ask if a given spray is projectively equivalent to a geodesic spray with the same curvature. In Proposition \ref{prop1}, we show that there exist sprays for which the answer is negative. \section{A geometric framework for sprays and Finsler spaces} In this section, we provide a geometric framework that we will use to study, in the next sections, some problems related to projective deformations of sprays and Finsler spaces by Funk functions. The main references that we use for providing this framework are \cite{BCS00, Grifone72, Shen01, Szilasi03}. \subsection{A geometric framework for sprays} In this work, we consider $M$ an $n$-dimensional smooth and connected manifold, and $(TM, \pi, M)$ its tangent bundle. Local coordinates on $M$ are denoted by $(x^i)$, while induced coordinates on $TM$ are denoted by $(x^i, y^i)$. Most of the geometric structures in our work will be defined not on the tangent space $TM$, but on the slit tangent space $T_0M=TM\setminus\{0\}$, which is the tangent space with the zero section removed.
Standard notations will be used in this paper, $C^{\infty}(M)$ represents the set of smooth functions on $M$, ${\mathfrak X}(M)$ is the set of vector fields on $M$, and $\Lambda ^k(M)$ is the set of $k$-forms on $M$. The geometric framework that we will use in this work is based on the Fr\"olicher-Nijenhuis formalism, \cite{FN56, GM00}. There are two important derivations in this formalism. For a vector valued $\ell$-form $L$ on $M$, consider $i_L$ and $d_L$ the corresponding derivations of degree $(\ell-1)$ and $\ell$, respectively. The two derivations are connected by the following formula \begin{eqnarray*} d_L=i_L\circ d - (-1)^{\ell-1}d \circ i_L. \end{eqnarray*} If $K$ and $L$ are two vector valued forms on $M$, of degrees $k$ and $\ell$, then the Fr\"olicher-Nijenhuis bracket $[K, L]$ is the vector valued $(k+\ell)$-form, uniquely determined by \begin{eqnarray*} d_{[K,L]}=d_K\circ d_L - (-1)^{k\ell}d_L\circ d_K. \end{eqnarray*} In this work, we will use various commutation formulae for these derivations and the Fr\"olicher-Nijenhuis bracket, following Grifone and Muzsnay \cite[Appendix A]{GM00}. There are two canonical structures on $TM$, one is the Liouville (dilation) vector field ${\mathbb C}$ and the other one is the tangent structure (vertical endomorphism) $J$. Locally, these two structures are given by \begin{eqnarray*} {\mathbb C}=y^i\frac{\partial}{\partial y^i}, \quad J=\frac{\partial}{\partial y^i}\otimes dx^i. \end{eqnarray*} A system of second order ordinary differential equations (SODE), in normal form, \begin{eqnarray} \frac{d^2x^i}{dt^2} + 2G^i\left(x, \frac{dx}{dt}\right)=0, \label{sode} \end{eqnarray} can be identified with a special vector field $S\in {\mathfrak X}(TM)$, which is called a \emph{semispray} and satisfies the condition $JS={\mathbb C}$. In this work, a special attention will be paid to those SODE that are positively homogeneous of order two, with respect to the fiber coordinates. To address the most general cases, the corresponding vector field $S$ has to be defined on $T_0M$. The homogeneity condition reads $[{\mathbb C}, S]=S$ and the vector field $S$ is called a \emph{spray}. Locally, a spray $S\in {\mathfrak X}(T_0M)$ is given by \begin{eqnarray} S=y^i\frac{\partial}{\partial x^i} - 2G^i(x,y)\frac{\partial}{\partial y^i}. \label{spray} \end{eqnarray} The functions $G^i$, locally defined on $T_0M$, are $2$-homogeneous with respect to the fiber coordinates. A curve $c(t)=(x^i(t))$ is called a \emph{geodesic} of a spray $S$ if $S\circ \dot{c}(t)= \ddot{c}(t)$, which means that it satisfies the system \eqref{sode}. An orientation-preserving reparameterization of the second-order system \eqref{sode} leads to a new second order system and therefore gives rise to a new spray $\widetilde{S}=S-2P{\mathbb C}$, \cite[Section 3(b)]{Douglas27}, \cite[Chapter 12]{Shen01}. The two sprays $S$ and $\widetilde{S}$ are said to be \emph{projectively related}. The $1$-homogeneous function $P$ is called the projective deformation of the spray $S$. For discussing various problems for a given SODE \eqref{sode} one can associate a geometric setting to the corresponding spray. This geometric setting uses the Fr\"olicher-Nijenhuis bracket of the given spray $S$ and the tangent structure $J$. The first ingredient to introduce this geometric setting is the \emph{horizontal projector} associated to the spray $S$, and it is given by \cite{Grifone72} \begin{eqnarray*} h=\frac{1}{2}\left(\operatorname{Id} - [S,J]\right). 
\end{eqnarray*} The next geometric structure carries curvature information about the given spray $S$ and it is called the \emph{Jacobi endomorphism}, \cite[Section 3.6]{Szilasi03}, or the Riemann curvature, \cite[Definition 8.1.2]{Shen01}. This is a vector valued $1$-form, given by \begin{eqnarray} \Phi=\left(\operatorname{Id} - h\right) \circ [S,h]. \label{jacobi} \end{eqnarray} A spray $S$ is said to be \emph{isotropic} if its Jacobi endomorphism takes the form \begin{eqnarray} \Phi = \rho J - \alpha \otimes {\mathbb C}. \label{isophi} \end{eqnarray} The function $\rho$ is called the \emph{Ricci scalar} and it is given by $(n-1)\rho=\operatorname{Tr}(\Phi)$. The semi-basic $1$-form $\alpha$ is related to the Ricci scalar by $i_S\alpha=\rho$. In this work, we study when projective deformations preserve or not some properties of the original spray. Therefore, we recall first the relations between the geometric structures induced by two projectively related sprays. For two such sprays $S_0$ and $S=S_0-2P{\mathbb C}$, the corresponding horizontal projectors and Jacobi endomorphisms are related by, \cite[(4.8)]{BM12}, \begin{eqnarray} h & = & h_0-PJ - d_JP\otimes {\mathbb C}. \label{hh0} \\ \Phi & = & \Phi_0 + \left(P^2 - S_0(P)\right) J - \left(d_J(S_0(P) - P^2) + 3(Pd_JP - d_{h_0}P) \right)\otimes {\mathbb C}. \label{phiphi0} \end{eqnarray} As one can see from the two formulae \eqref{isophi} and \eqref{phiphi0}, projective deformations preserve the isotropy condition. In this work, we will pay special attention to those projective deformations that preserve the Jacobi endomorphism. Such a projective deformation is called a \emph{Funk function} for the original spray. From formula \eqref{phiphi0}, we can see that a $1$-homogeneous function $P$ is a Funk function for the spray $S_0$, if and only if it satisfies \begin{eqnarray} d_{h_0}P=Pd_JP. \label{eq_Funk} \end{eqnarray} See also \cite[Prop. 12.1.3]{Shen01} for alternative expressions of formulae \eqref{phiphi0} and \eqref{eq_Funk} in local coordinates. \subsection{A geometric framework for Finsler spaces of scalar flag curvature} We recall now the notion of a Finsler space, and pay special attention to those Finsler spaces of scalar flag curvature. \begin{defn} A \emph{Finsler function} is a continuous non-negative function $F: TM\to {\mathbb R}$ that satisfies the following conditions: \begin{itemize} \label{def_Finsler} \item[i)] $F$ is smooth on $T_0M$ and $F(x,y)=0$ if and only if $y=0$; \item[ii)] $F$ is positively homogeneous of order $1$ in the fiber coordinates; \item[iii)] the $2$-form $dd_JF^2$ is a symplectic form on $T_0M$. \end{itemize} \end{defn} There are cases when the conditions of the above definition can be relaxed. One can allow for the function $F$ to be defined on some open cone $A\subset T_0M$, in which case we talk about a \emph{conic-pseudo Finsler function}. We can also allow for the function $F$ not to satisfy the condition iii) of Definition \ref{def_Finsler}, in which case we will say that $F$ is a \emph{degenerate Finsler function}, \cite{AIM93}. A spray $S\in {\mathfrak X}(T_0M)$ is said to be \emph{Finsler metrizable} if there exists a Finsler function $F$ that satisfies \begin{eqnarray} i_Sdd_JF^2=-dF^2. \label{geodspray1} \end{eqnarray} In such a case, the spray $S$ is called the \emph{geodesic spray} of the Finsler function $F$. Using the homogeneity properties, it can be shown that a spray $S$ is Finsler metrizable if and only if \begin{eqnarray} d_h F^2 = 0. 
\label{geodspray2} \end{eqnarray} The Finsler metrizability problem is a particular case of the inverse problem of Lagrangian mechanics, which consists in characterising systems of SODE that are variational. In the Finslerian context, the various methods for studying the inverse problem have been adapted and developed using various techniques in \cite{BM13, BM14, CMS13, GM00, Muzsnay06, Szilasi03}. \begin{defn} Consider $F$ a Finsler function and let $S$ be its geodesic spray. The Finsler function $F$ is said to be of \emph{scalar flag curvature} (SFC) if there exists a function $\kappa \in C^{\infty}(T_0M)$ such that the Jacobi endomorphism of the geodesic spray $S$ is given by \begin{eqnarray} \Phi = \kappa\left(F^2 J - Fd_JF\otimes {\mathbb C}\right). \label{phik} \end{eqnarray} \end{defn} By comparing the two formulae \eqref{isophi} and \eqref{phik} we observe that for Finsler functions of scalar flag curvature, the geodesic spray is isotropic. The converse of this statement is true in the following sense. If an isotropic spray is Finsler metrizable, then the corresponding Finsler function has scalar flag curvature, \cite[Lemma 8.2.2]{Shen01}. \section{Projective deformations by Funk functions} In \cite[page 184]{Shen01}, Zhongmin Shen asks the following question: given a Funk function $P$ on a Finsler space $(M, F_0)$, decide whether or not there exists a Finsler metric $F$ that is projectively related to $F_0$, with the projective factor $P$. Since Funk functions preserve the Jacobi endomorphism under projective deformations, one can reformulate the question as follows. Decide whether or not there exists a Finsler function $F$, projectively related to $F_0$, having the same Jacobi endomorphism as $F_0$. When the Finsler function $F_0$ is $R$-flat, the answer is known: every projective deformation by a Funk function leads to a Finsler metrizable spray, \cite[Theorem 7.1]{GM00}, \cite[Theorem 10.3.5]{Shen01}. In the next theorem, we prove that the answer to Shen's question is negative for the case when the Finsler function that we start with has non-vanishing scalar flag curvature. \begin{thm} \label{thm1} Let $F_0$ be a Finsler function of scalar flag curvature $\kappa_0\neq 0$ with geodesic spray $S_0$. Then, there is no projective deformation of $S_0$, by a Funk function $P$, that will lead to a Finsler metrizable spray $S=S_0-2P{\mathbb C}$. \end{thm} \begin{proof} Consider $F_0$ a Finsler function of non-vanishing scalar flag curvature $\kappa_0$ and let $S_0$ be its geodesic spray. All geometric structures associated with the Finsler space $(M, F_0)$ will be denoted with the subscript $0$. The Jacobi endomorphism of the spray $S_0$ is given by \begin{eqnarray} \Phi_0=\kappa_0 F^2_0 J - \kappa_0F_0 d_JF_0 \otimes {\mathbb C}. \label{phi0} \end{eqnarray} We will prove the theorem by contradiction. Therefore, we assume that there exists a non-vanishing Funk function $P$ for the Finsler function $F_0$, such that the projectively related spray $S=S_0-2P{\mathbb C}$ is Finsler metrizable by a Finsler function $F$. Since $P$ is a Funk function, it follows that the Jacobi endomorphism $\Phi$ of the spray $S$ is given by $\Phi=\Phi_0$. From formula \eqref{phi0} it follows that $\Phi=\Phi_0$ is isotropic and, using the fact that $S$ is metrizable, we obtain that $S$ has scalar flag curvature $\kappa$. Consequently, its Jacobi endomorphism is given by formula \eqref{phik}. 
By comparing the two formulae \eqref{phi0} and \eqref{phik} and using the fact that $\Phi_0=\Phi$, we obtain \begin{eqnarray*} \kappa_0 F^2_0 = \kappa F^2, \quad \kappa_0 F_0 d_JF_0 = \kappa F d_JF. \end{eqnarray*} From the above two formulae, and using the fact that $\kappa_0\neq 0$, we obtain \begin{eqnarray*} \frac{d_JF}{F}=\frac{d_JF_0}{F_0}, \end{eqnarray*} which implies $d_J(\ln F)=d_J(\ln F_0)$ on $T_0M$. Therefore, there exists a basic function $a$, locally defined on $M$, such that \begin{eqnarray} F(x,y)=e^{2a(x)}F_0(x,y), \forall (x,y)\in T_0M. \label{ff0}\end{eqnarray} Now, we use the fact that $S$ is the geodesic spray of the Finsler function $F$, which, using formula \eqref{geodspray2}, implies that $S(F)=0$. $S$ is projectively related to $S_0$, which means $S=S_0-2P{\mathbb C}$ and hence $S_0(F)=2P{\mathbb C}(F)$. The last formula fixes the projective deformation factor $P$, which, in view of formula \eqref{ff0} and the fact that $S_0(F_0)=0$, is given by \begin{eqnarray} P=\frac{S_0(F)}{2F}=\frac{S_0(e^{2a}F_0)}{2e^{2a}F_0}= \frac{S_0(e^{2a})F_0}{2e^{2a}F_0}=S_0(a)=a^c. \label{pac} \end{eqnarray} In the above formula $a^c$ is the complete lift of the function $a$. Since we assumed that the projective factor $P$ is non-vanishing, it follows that $a^c$ has the same property. Again, from the fact that $S$ is the geodesic spray of the Finsler function $F$, it follows that $d_hF=0$. We use now formula \eqref{hh0} that relates the horizontal projectors $h$ and $h_0$ of the two projectively related sprays $S$ and $S_0$. It follows \begin{eqnarray*} d_{h_0}F- Pd_JF - d_JP{\mathbb C}(F)=0. \end{eqnarray*} We use the above formula, as well as formula \eqref{ff0}, to obtain \begin{eqnarray} 2e^{2a}F_0 da - a^c e^{2a} d_J F_0 - e^{2a}F_0 da=0. \label{eq1}\end{eqnarray} To obtain the above formula we also used that $a$ is a basic function and therefore $d_{h_0}a=da$ and $d_Ja^c=da$. In view of these remarks, we can write formula \eqref{eq1} as follows \begin{eqnarray*} F_0 d_Ja^c - a^c d_JF_0=0. \end{eqnarray*} Using the fact that $a^c\neq 0$, we can write the above formula as \begin{eqnarray*} d_J\left(\frac{F_0}{a^c}\right)=0. \end{eqnarray*} The last formula implies that $F_0/a^c=b$ is a basic function and therefore $F_0(x,y)=b(x)\frac{\partial a}{\partial x^i}(x) y^i$, $\forall (x,y)\in TM$, which is not possible due to the regularity condition that the Finsler function $F_0$ has to satisfy. \end{proof} One can give an alternative proof of Theorem \ref{thm1} by using the scalar flag curvature (SFC) test provided by \cite[Theorem 3.1]{BM14}. Under the same hypotheses as in Theorem \ref{thm1}, it can be shown that the projective deformation $S=S_0-2P{\mathbb C}$, by a Funk function, is not Finsler metrizable, since one condition of the SFC test is not satisfied. We presented here a direct proof, to make the paper self-contained. We can reformulate the result of Theorem \ref{thm1} as follows. Let $F_0$ be a Finsler function of scalar flag curvature $\kappa_0\neq 0$ and let $S_0$ be its geodesic spray with the Jacobi endomorphism $\Phi_0$. Then, within the projective class of $S_0$, there is exactly one geodesic spray having $\Phi_0$ as its Jacobi endomorphism, namely $S_0$ itself. We point out here the importance of the condition $\kappa_0 \neq 0$. The proof of Theorem \ref{thm1} is based on formula \eqref{ff0}, which, in view of the previous two formulae, does not hold in the case $\kappa_0 = 0$. 
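As an illustrative side remark (not needed for the arguments of this paper), we record the local form of the Funk condition \eqref{eq_Funk}. Writing $G^i_0$ for the local coefficients of the spray $S_0$, as in \eqref{spray}, $N^j_i=\partial G^j_0/\partial y^i$ for its connection coefficients, and $\delta/\delta x^i=\partial/\partial x^i - N^j_i\,\partial/\partial y^j$ for the induced horizontal derivatives, a $1$-homogeneous function $P$ satisfies \eqref{eq_Funk} if and only if \begin{eqnarray*} \frac{\delta P}{\delta x^i} = P\frac{\partial P}{\partial y^i}, \quad i\in\{1,\dots,n\}. \end{eqnarray*} In particular, for the flat spray, with $G^i_0=0$, this system reduces to the classical Funk equation $\partial P/\partial x^i = P\,\partial P/\partial y^i$, which corresponds to the $R$-flat situation recalled below.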
For the alternative proof of Theorem \ref{thm1}, using \cite[Theorem 3.1]{BM14}, we mention that the SFC test is valid only if the Ricci scalar does not vanish. In the case $\kappa_0=0$, which means that the spray $S_0$ is $R$-flat, it is known that any deformation of the geodesic spray $S_0$ by a Funk function leads to a spray that is Finsler metrizable, \cite[Theorem 7.1]{GM00}, \cite[Theorem 10.3.5]{Shen01}. The following corollary is a consequence of Theorem \ref{thm1} and of the above discussion. \begin{cor} \label{cor1} Let $S_0$ be an isotropic spray, with Jacobi endomorphism $\Phi_0$ and non-vanishing Ricci scalar. Then, the projective class of $S_0$ contains at most one Finsler metrizable spray that has $\Phi_0$ as Jacobi endomorphism. \end{cor} The statement in the above corollary gives rise to a new question: is there any case in which we have none? In the next proposition, we will show that the answer to this question is affirmative if the dimension of the configuration manifold is greater than two. \begin{prop} \label{prop1} We assume that $\dim M\geq 3$. Then, there exists a spray $S_0$ with the Jacobi endomorphism $\Phi_0$ such that the projective class of $S_0$ does not contain any Finsler metrizable spray having the same Jacobi endomorphism $\Phi_0$. \end{prop} \begin{proof} We consider $\widetilde{S}$ the geodesic spray of a Finsler function $\widetilde{F}$ of constant flag curvature $\widetilde{k}$. According to \cite[Theorem 5.1]{BM12}, the spray \begin{eqnarray} S_0=\widetilde{S} - 2\lambda \widetilde{F}{\mathbb C} \label{s0stilde} \end{eqnarray} is not Finsler metrizable for any real value of $\lambda$ such that $\widetilde{k}+\lambda^2\neq 0$ and $\lambda \neq 0$. We fix such a $\lambda$ and the corresponding spray $S_0$. Using formula \eqref{phiphi0}, it follows that the Jacobi endomorphism $\Phi_0$ of the spray $S_0$ is given by \begin{eqnarray} \Phi_0=\left(\widetilde{k}+\lambda^2\right) \left(\widetilde{F}^2 J - \widetilde{F} d_J \widetilde{F} \otimes {\mathbb C}\right). \label{phikl}\end{eqnarray} We will prove by contradiction that the projective class of $S_0$ does not contain any Finsler metrizable spray whose Jacobi endomorphism is given by formula \eqref{phikl}. Accordingly, we assume that there is a Funk function $P$ for the spray $S_0$ such that the spray $S=S_0-2P{\mathbb C}$ is metrizable by a Finsler function $F$. Since $P$ is a Funk function, it follows that $S_0$ and $S$ have the same Jacobi endomorphism, $\Phi_0=\Phi$. A first consequence is that the spray $S$ is isotropic and, being Finsler metrizable, it is of scalar flag curvature $\kappa$. Therefore, the Jacobi endomorphism $\Phi$ is given by formula \eqref{phik}. By comparing the two formulae \eqref{phikl} and \eqref{phik} and using the fact that $\Phi_0=\Phi$, we obtain that the two Ricci scalars, as well as the two semi-basic $1$-forms, coincide: \begin{eqnarray} \rho_0= \left(\widetilde{k}+\lambda^2\right) \widetilde{F}^2 = \rho = \kappa F^2, \quad \alpha_0 = \left(\widetilde{k}+\lambda^2\right) \widetilde{F} d_J \widetilde{F}= \alpha = \kappa F d_JF. \label{rr0} \end{eqnarray} From the above formulae we have that $d_J\rho_0=2\alpha_0$ and therefore $d_J\rho=2\alpha$. The last formula implies $F^2d_J\kappa + 2\kappa Fd_JF = 2\kappa Fd_JF$, which means $d_J\kappa = 0$. At this moment we have that $\kappa$ is a function which does not depend on the fibre coordinates. 
With this argument, using the assumption that $\dim{M}\geq 3$ and the Finslerian version of Schur's Lemma \cite[Lemma 3.10.2]{BCS00}, we obtain that the scalar flag curvature $\kappa$ is a constant. We now express the spray $S$ in terms of the original spray $\widetilde{S}$ that we started with, \begin{eqnarray} S=\widetilde{S}-2\left(\lambda \widetilde{F} + P\right){\mathbb C}. \label{sstilde} \end{eqnarray} Since $S$ is the geodesic spray of the Finsler function $F$ and $\widetilde{S}$ is the geodesic spray of the Finsler function $\widetilde{F}$, it follows that $S(F)=0$ and $\widetilde{S}(\widetilde{F})=0$. From the first formula in \eqref{rr0} we have $(\widetilde{k}+\lambda^2) \widetilde{F}^2 = \kappa F^2$. We apply the spray $S$, given by \eqref{sstilde}, to both sides of this formula and obtain $\lambda\widetilde{F} + P=0$. Therefore, the projective factor is given by $P=-\lambda \widetilde{F}$. However, we will show that this projective factor $P$ does not satisfy the equation \eqref{eq_Funk} and therefore it is not a Funk function for the spray $S_0$. The projectively related sprays $S_0$ and $\widetilde{S}$ are related by formula \eqref{s0stilde}. Using the form of the projective factor $P=-\lambda \widetilde{F}$, as well as the formula \eqref{hh0}, we obtain that the corresponding horizontal projectors $h_0$ and $\widetilde{h}$ are related by \begin{eqnarray*} h_0 = \widetilde{h} + \lambda \widetilde{F} J + \lambda d_J \widetilde{F} \otimes {\mathbb C}. \end{eqnarray*} We evaluate now the two sides of the equation \eqref{eq_Funk} for the projective factor $P=-\lambda \widetilde{F}$. For the left hand side we have \begin{eqnarray*} d_{h_0}P=-\lambda d_{\widetilde{h}} \widetilde{F} - \lambda^2 \widetilde{F} d_J \widetilde{F} - \lambda^2 \widetilde{F} d_J \widetilde{F} = - 2 \lambda^2 \widetilde{F} d_J \widetilde{F}. \end{eqnarray*} In the above calculations we used the fact that $\widetilde{S}$ is the geodesic spray of $\widetilde{F}$ and hence $d_{\widetilde{h}} \widetilde{F}=0$. For the right hand side of the equation \eqref{eq_Funk} we have \begin{eqnarray*} Pd_JP=\lambda^2\widetilde{F} d_J\widetilde{F}. \end{eqnarray*} It follows that the projective factor $P=-\lambda \widetilde{F}$ is not a Funk function for the spray $S_0$. Therefore, we can conclude that for the spray $S_0$, given by formula \eqref{s0stilde}, which is not Finsler metrizable, there is no projective deformation by a Funk function that leads to a Finsler metrizable spray. \end{proof} We can provide an alternative proof of Proposition \ref{prop1} using the constant flag curvature (CFC) test from \cite[Theorem 4.1]{BM13}. More exactly, we can show that the spray $S$ given by formula \eqref{sstilde} is not metrizable by a Finsler function of constant flag curvature, and hence not Finsler metrizable, see also \cite[Theorem 4.2]{BM13}. \subsection*{Acknowledgments} I express my warm thanks to Zolt\'an Muzsnay for the discussions we had on the results of this paper. This work has been supported by the Bilateral Cooperation Program Romania-Hungary 672/2013-2014.
\section{Introduction}\label{sec:intro} Social media posts with false claims have led to real-world threats in many areas such as politics~\citep{pizzagate}, social order~\citep{salt}, and personal health~\citep{shuanghuanglian}. To tackle this issue, over 300 fact-checking projects have been launched, such as Snopes\footnote{\url{https://www.snopes.com}} and Jiaozhen\footnote{\url{https://fact.qq.com/}}~\citep{duke-report}. Meanwhile, automatic systems have been developed for detecting suspicious claims on social media~\citep{newsverify, credeye}. This is, however, not the end. A considerable number of false claims continue to spread even though they have already been proved false. According to a recent report~\citep{tencent-report}, around 12\% of false claims published on Chinese social media are actually ``old'', as they have been debunked previously. Hence, detecting previously fact-checked claims is an important task. According to the seminal work by \citet{shaar2020}, the task is tackled by a two-stage information retrieval approach. Its typical workflow is illustrated in Figure~\ref{fig1}(a). Given a claim as a query, in the first stage a basic searcher (e.g., BM25,~\citealp{bm25}) searches for candidate articles from a collection of fact-checking articles (FC-articles). In the second stage, a more powerful model (e.g., BERT,~\citealp{bert}) reranks the candidates to provide evidence for manual or automatic detection. Existing works focus on the reranking stage: \citet{vo2020} model the interactions between a claim and the whole candidate articles, while \citet{shaar2020} extract several semantically similar sentences from FC-articles as a proxy. Nevertheless, these methods treat FC-articles as \emph{general} documents and ignore the characteristics of FC-articles. Figure~\ref{fig1}(b) shows three sentences from candidate articles for the given claim. Among them, S1 is friendlier to semantic matching than S2 and S3, because the whole of S1 focuses on describing its topic and contains no tokens irrelevant to the given claim (unlike, e.g., ``has spread over years'' in S2), so matching it does not demand strong filtering capability from a semantic model. If we use only such general methods on this task, the relevant S2 and S3 may be neglected while the irrelevant S1 is focused on. To let the model focus on key sentences (i.e., sentences that are a good proxy of article-level relevance) like S2 and S3, we need to consider two characteristics of FC-articles besides semantics: \textbf{C1}. Claims are often quoted to describe the checked events (e.g., the underlined text in S2); \textbf{C2}. Event-irrelevant patterns to introduce or debunk claims are common in FC-articles (e.g., bold texts in S2 and S3). Based on these observations, we propose a novel reranker, \model~(\underline{M}emory-enhanced \underline{T}ransformers for \underline{M}atching). The reranker identifies key sentences per article using claim- and pattern-sentence relevance, and then integrates information from the claim, key sentences, and patterns for article-level relevance prediction. In particular, regarding \textbf{C1}, we propose the \texttt{ROUGE}-guided Transformer (ROT) to score claim-sentence relevance lexically and semantically. As for \textbf{C2}, we obtain pattern vectors by clustering the differences between sentence and claim vectors, store them in the Pattern Memory Bank (PMB), and use them to score pattern-sentence relevance. The joint use of ROT and PMB allows us to identify key sentences that reflect the two characteristics of FC-articles. 
Subsequently, fine-grained interactions among claims and key sentences are modeled by the multi-layer Transformer and aggregated with patterns to obtain an article-level feature representation. The article feature is fed into a Multi-layer Perceptron (MLP) to predict the claim-article relevance. To validate the effectiveness of our method, we built the first Chinese dataset for this task with 11,934 claims collected from Chinese Weibo\footnote{\url{https://weibo.com}} and 27,505 fact-checking articles from multiple sources. 39,178 claim-article pairs are annotated as relevant. Experiments on the English dataset and the newly built Chinese dataset show that \model~outperforms existing methods. Further human evaluation and case studies prove that \model~ finds key sentences as explanations. Our main contributions are as follows: \begin{compactitem} \item We propose a novel reranker \model~for fact-checked claim detection, which can better identify key sentences in fact-checking articles by exploiting their characteristics. \item We design \texttt{ROUGE}-guided Transformer to combine lexical and semantic information and propose a memory mechanism to capture and exploit common patterns in fact-checking articles. \item Experiments on two real-world datasets show that \model~outperforms existing methods. Further human evaluation and case studies prove that our model finds key sentences as good explanations. \item We built the first Chinese dataset for fact-checked claim detection with fact-checking articles from diverse sources. \end{compactitem} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{figs/arch2_cr.pdf} \caption{Architecture of \model. Given a claim $q$ and a candidate article $d$ with $l$ sentences, $s_1,\mydots,s_l$, \model~\ding{172} feeds $(q,s)$ pairs into \texttt{ROUGE}-guided Transformer (ROT) to obtain claim-sentence scores in both lexical and semantic aspects; \ding{173} matches residual embeddings $\bm{r}_{s,q}$ with vectors in Pattern Memory Bank (PMB) (here, only four are shown) to obtain pattern-sentence scores; \ding{174} identifies $k_2$ key sentences by combining the two scores (here, $k_2=2$, and $s_i$ and $s_l$ are selected); \ding{175} models interaction among $\bm{q^\prime},\bm{s^\prime},$ and the nearest memory vector $\bm{m}$ for each key sentence; and \ding{176} perform score-weighted aggregation and predict the claim-article relevance.} \label{fig:arch} \end{figure*} \section{Related Work}\label{sec:rw} To defend against false information, researchers are mainly devoted to two threads: (1) \textbf{Automatic fact-checking} methods mainly retrieve relevant factual information from designated sources and judge the claim's veracity. \citet{fever} use Wikipedia as a fact tank and build a shared task for automatic fact-checking, while \citet{declare} and \citet{RDD} retrieve webpages as evidence and use their stances on claims for veracity prediction. (2) \textbf{Fake news detection} methods often use non-factual signals, such as styles~\citep{style-aaai,style-image}, emotions~\citep{ajao, dual-emotion}, source credibility~\citep{fang}, user response~\citep{defend} and diffusion network~\citep{early, alone}. However, these methods mainly aim at newly emerged claims and do not address those claims that have been fact-checked but continually spread. Our work is in a new thread, \textbf{detecting previously fact-checked claims}. 
\citet{vo2020} model the interaction between claims and FC-articles by combining GloVe~\citep{glove} and ELMo embeddings~\citep{elmo}. \citet{shaar2020} train a RankSVM with scores from BM25 and Sentence-BERT for relevance prediction. These methods ignore the characteristics of FC-articles, which limits their ranking performance and explainability. \section{Proposed Method}\label{sec:method} Given a claim $q$ and a candidate set of $k_1$ FC-articles $\mathcal{D}$ obtained by a standard full-text retrieval model (BM25), we aim to rerank the FC-articles so that those truly relevant w.r.t. $q$ are at the top, by modeling fine-grained relevance between $q$ and each article $d\in \mathcal{D}$. This is accomplished by Memory-enhanced Transformers for Matching (\model), which conceptually has two steps, (1) Key Sentence Identification and (2) Article Relevance Prediction, see Figure~\ref{fig:arch}. For an article of $l$ sentences, let $\mathcal{S}=\{s_{1},\mydots,s_{l}\}$ be its sentence set. In Step (1), for each sentence, we derive a claim-sentence relevance score from the \texttt{ROUGE}-guided Transformer (ROT) and a pattern-sentence relevance score from the Pattern Memory Bank (PMB). The scores indicate how similar the sentence is to the claim and to the pattern vectors, i.e., how likely it is to be a key sentence. The top $k_2$ sentences are selected for more complicated interactions and aggregation with the claim and pattern vectors in Step (2). The aggregated vector is used for the final prediction. We detail the components and then summarize the training procedure below. \subsection{Key Sentence Identification} \subsubsection{\texttt{ROUGE}-guided Transformer (ROT)} ROT (top left of Figure~\ref{fig:arch}) is used to evaluate the relevance between $q$ and each sentence $s$ in $\{\mathcal{S}_i\}^{k_1}_{i=1}$, both lexically and semantically. Inspired by \citep{complementing}, we choose to ``inject'' the ability to consider lexical relevance into the semantic model. As BERT has been shown to capture and evaluate semantic relevance~\citep{BERTScore}, we use a one-layer Transformer initialized with the first block of a pretrained BERT to obtain the initial semantic representation of $q$ and $s$: \begin{equation} {\bm z}_{q,s} = \mathrm{Transformer}\left(\mathtt{[CLS]}\ q\ \mathtt{[SEP]}\ s \right) \end{equation} where $\mathtt{[CLS]}$ and $\mathtt{[SEP]}$ are reserved tokens and $\bm{z}_{q,s}$ is the output representation. To force ROT to consider the lexical relevance, we finetune the pretrained Transformer with the guidance of \texttt{ROUGE}~\citep{rouge}, a widely-used metric to evaluate the lexical similarity of two segments in summarization and translation tasks. The intuition is that lexical relevance can be characterized by token overlap, which is exactly what \texttt{ROUGE} measures. We minimize the mean squared error between the prediction and the precision and recall of \texttt{ROUGE-2} between $q$ and $s$ ($\texttt{R}_{2} \in \mathbb{R}^2$) to optimize ROT: \begin{equation} \hat{\texttt{R}}(q,s)=\mathrm{MLP}\big(\bm{z}_{q,s}(\mathtt{[CLS]})\big) \end{equation} \begin{equation} \mathcal{L}_{R} = \Vert\hat{\texttt{R}}(q,s)-\texttt{R}_{2}(q,s)\Vert^2_2 + \lambda_R\Vert\Delta\theta\Vert^2_2 \end{equation} where the first term is the regression loss and the second constrains the change of parameters, as the ability to capture semantic relevance should be maintained. $\lambda_R$ is a control factor and $\Delta\theta$ represents the change of parameters. 
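For concreteness, the following is a minimal PyTorch-style sketch of the objective $\mathcal{L}_{R}$; all names (\texttt{rot}, \texttt{mlp\_r}, \texttt{rot\_init}, \texttt{rouge2\_pr}) are illustrative assumptions, and the snippet is a schematic illustration rather than the actual implementation.
\begin{verbatim}
# Schematic sketch of the ROUGE-guided finetuning loss L_R.
# All names (rot, mlp_r, rot_init, rouge2_pr) are illustrative.
import torch

def rouge_guided_loss(rot, mlp_r, rot_init, input_ids, rouge2_pr, lambda_r):
    # rot: one-layer Transformer encoder over "[CLS] q [SEP] s"
    # rot_init: frozen copy of the pretrained encoder (for ||Delta theta||^2)
    # rouge2_pr: (B, 2) ROUGE-2 precision and recall of each (q, s) pair
    z = rot(input_ids)                 # (B, L, dim) token representations
    r_hat = mlp_r(z[:, 0])             # (B, 2) prediction from the [CLS] vector
    regression = ((r_hat - rouge2_pr) ** 2).sum(dim=-1).mean()
    drift = sum(((p - p0) ** 2).sum()  # penalize drift from pretrained weights
                for p, p0 in zip(rot.parameters(), rot_init.parameters()))
    return regression + lambda_r * drift
\end{verbatim}
In this sketch, the loss is minimized over sampled $(q,s)$ pairs before ROT is used to produce $\bm{z}_{q,s}$ for the rest of the model.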
\subsubsection{Pattern Memory Bank (PMB)} The Pattern Memory Bank (PMB) generates, stores, and updates the vectors which represent the common patterns in FC-articles. The vectors in PMB will be used to evaluate pattern-sentence relevance (see~Section \ref{sec:selection}). We detail how to formulate, initialize, and update these patterns below. \noindent\textbf{Formulation.} Intuitively, one can summarize the templates, like ``\mydots has been debunked by\mydots'', and explicitly do \emph{exact} matching, but the templates are costly to obtain and hard to integrate into neural models. Instead, we \emph{implicitly} represent the common patterns using vectors derived from embeddings of our model, ROT. Inspired by~\citep{memory-bank}, we use a memory bank $\mathcal{M}$ to store $K$ common patterns (as vectors), i.e., $\mathcal{M}=\{\bm{m}_i\}^{K}_{i=1}$. \noindent\textbf{Initialization.} We first represent each $q$ in the training set and each $s$ in the corresponding articles by averaging its token embeddings (from the embedding layer of ROT). Considering that a pattern vector should be \emph{event-irrelevant}, we heuristically remove the event-related part of $s$ as much as possible by calculating the residual embedding $\bm{r}_{s,q}$, i.e., subtracting $\bm{q}$ from $\bm{s}$. We rule out the residual embeddings that do not satisfy $t_{low}<\left\|\bm{r}_{s,q}\right\|_2<t_{high}$, because they are unlikely to contain good pattern information: $\left\|\bm{r}_{s,q}\right\|_2\leq t_{low}$ indicates that $q$ and $s$ are highly similar and thus leave little pattern information, while $\left\|\bm{r}_{s,q}\right\|_2\geq t_{high}$ indicates that $s$ may not align with $q$ in terms of the event, so the corresponding $\bm{r}_{s,q}$ carries little useful pattern information. Finally, we aggregate the valid residual embeddings into $K$ clusters using K-means and obtain the initial memory bank $\mathcal{M}$: \begin{equation} \mathcal{M}=\mathrm{K}\mbox{-}\mathrm{means}\big(\{\bm{r}_{s,q}^{valid}\}\big) \!=\! \{\bm{m}_1,\mydots,\bm{m}_K\} \end{equation} where $\{\bm{r}_{s,q}^{valid}\}$ is the set of valid residual embeddings. \noindent\textbf{Update.} As the initial $K$ vectors may not accurately represent common patterns, we update the memory bank according to feedback from the predictions during training: if the model predicts correctly, the key sentence, say $s$, should be used to update its nearest pattern vector $\bm{m}$. To maintain stability, we use an epoch-wise update instead of an iteration-wise update. Take updating $\bm{m}$ as an example. After an epoch, we extract all $n$ key sentences whose nearest pattern vector is $\bm{m}$ and their $n$ corresponding claims, which are denoted as a tuple set $(\mathcal{S}, \mathcal{Q})^m$. Then $(\mathcal{S}, \mathcal{Q})^m$ is separated into two subsets, $\mathcal{R}^{m}$ and $\mathcal{W}^{m}$, which contain $n_r$ and $n_w$ sentence-claim tuples from the correctly and wrongly predicted samples, respectively. The core of our update mechanism (Figure~\ref{fig:mem_update}) is to draw $\bm{m}$ closer to the residual embeddings in $\mathcal{R}^{m}$ and push it away from those in $\mathcal{W}^{m}$. We denote the $i^{th}$ residual embedding from the two subsets as $\bm{r}_{\scriptscriptstyle \mathcal{R}_i^m}$ and $\bm{r}_{\scriptscriptstyle \mathcal{W}_i^m}$, respectively. 
\begin{figure}[t] \centering \includegraphics[width=0.43\textwidth]{figs/mem_update} \caption{Illustration for Memory Vector Update.} \label{fig:mem_update} \end{figure} To determine the update direction, we calculate a weighted sum of residual embeddings according to the predicted matching scores. For $(s,q)$, suppose \model~output $\hat{y}_{s,q} \in [0,1]$ as the predicted matching score of $q$ and $d$ (whose key sentence is $s$), the weight of $\bm{r}_{s,q}$ is $|\hat{y}_{s,q}-0.5|$ (denoted as $w_{s,q}$). Weighted residual embeddings are respectively summed and normalized as the components of the direction vector (Eq.~\eqref{eq:aggregated_res}): \begin{equation} \label{eq:aggregated_res} \bm{u}^{mr}\!= \!\bigg( \sum_{i=1}^{n_r} w_{\scriptscriptstyle \mathcal{R}_i^m}\bm{r}_{\scriptscriptstyle \mathcal{R}_i^m} \bigg), \bm{u}^{mw}\!=\!\bigg( \sum_{i=1}^{n_w} w_{\scriptscriptstyle \mathcal{W}_i^m} \bm{r}_{\scriptscriptstyle \mathcal{W}_i^m}\bigg) \end{equation} where $\bm{u}^{mr}$ and $\bm{u}^{mw}$ are the aggregated residual embeddings. The direction is determined by Eq.~\eqref{eq:direction}: \begin{equation}\label{eq:direction} \bm{u}^{m}=w_r\underbrace{(\bm{u}^{mr}-\bm{m})}_{\mbox{\small{draw closer}}}+ w_w\underbrace{(\bm{m}-\bm{u}^{mw})}_{\mbox{\small{push away}}} \end{equation} where $w_r$ and $w_w$ are the normalized sum of corresponding weights used in Eq.~\eqref{eq:aggregated_res} ($w_r+w_w=1$). The pattern vector $m$ is updated with: \begin{equation} \label{eq:memory-update} \bm{m}_{new} = \bm{m}_{old} + \lambda_m \Vert\bm{m}_{old}\Vert _2 \frac{\bm{u}^{m}}{\Vert\bm{u}^{m}\Vert_2} \end{equation} where $\bm{m}_{old}$ and $\bm{m}_{new}$ are the memory vector $\bm{m}$ before and after updating; the constant $\lambda_m$ and $\left\|\bm{m}_{old}\right\|_2$ jointly control the step size. \subsubsection{Key Sentence Selection}\label{sec:selection} Whether a sentence is selected as a key sentence is determined by combining claim- and pattern-sentence relevance scores. The former is calculated with the distance of $q$ and $s$ trained with ROT (Eq.~\eqref{eq:score_q}) and the latter uses the distance between the nearest pattern vector in PMB and the residual embedding (Eq.~\eqref{eq:score_p}). The scores are scaled to $[0,1]$. For each sentence $s$ in $d$, the relevance score with $q$ is calculated by Eq.~\eqref{eq:score_all}: \begin{equation} scr_Q(q,s)=\mathrm{Scale}(\left\|\bm{r}_{s,q}\right\|_2)\\ \label{eq:score_q} \end{equation} \begin{equation}\label{eq:score_p} scr_P(q,s)=\mathrm{Scale}(\left\|\bm{m}_{u}-\bm{r}_{s,q}\right\|_2) \end{equation} \begin{equation}\label{eq:score_all} scr(q,s)=\lambda_{Q}scr_Q(q,s)+\lambda_{P}scr_P(q,s) \end{equation} where $\mathrm{Scale}(x)\!=\!1-\frac{x-min}{max-min}$ and $max$ and $min$ are the maximum and minimum distance of $s$ in $d$, respectively. $u=\arg \min_{i} \left\|\bm{m}_i-\bm{r}_{s,q}\right\|_2$, and $\lambda_{Q}$ and $\lambda_{P}$ are hyperparameters whose sum is $1$. Finally, sentences with top-$k_2$ scores, denoted as $\mathcal{K}\! =\!\{s^{key}_i(q,d)\}^{k_2}_{i=1}$, are selected as the \emph{key sentences} in $d$ for the claim $q$. 
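To make the selection step concrete, the following is a schematic PyTorch-style sketch of Eqs.~\eqref{eq:score_q}--\eqref{eq:score_all} and the top-$k_2$ selection; tensor shapes and names are illustrative assumptions rather than the actual implementation.
\begin{verbatim}
# Schematic sketch of key sentence selection; names and shapes are illustrative.
import torch

def select_key_sentences(q_emb, s_embs, memory, lambda_q, lambda_p, k2):
    # q_emb: (dim,) claim embedding; s_embs: (l, dim) token-averaged sentence
    # embeddings; memory: (K, dim) pattern vectors of the PMB.
    def scale(d):
        # Scale(x) = 1 - (x - min) / (max - min), over the sentences of d
        return 1 - (d - d.min()) / (d.max() - d.min() + 1e-8)

    r = s_embs - q_emb                                   # residual embeddings r_{s,q}
    dist_q = r.norm(dim=-1)                              # claim-sentence distances
    dist_p = torch.cdist(r, memory).min(dim=-1).values   # distance to nearest pattern
    scr = lambda_q * scale(dist_q) + lambda_p * scale(dist_p)
    top = torch.topk(scr, k=min(k2, s_embs.size(0)))     # top-k_2 key sentences
    return top.indices, scr
\end{verbatim}
In the full model, the selected indices determine which representations $\bm{z}_{q,s}$ are passed to the multi-layer Transformer of the Article Relevance Prediction step described next.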
\subsection{Article Relevance Prediction (ARP)}\label{sec:reasoner} \noindent\textbf{Sentence representation.} We model more complicated interactions between the claim and the key sentences by feeding each $z_{q,s^{key}}$ (derived from ROT) into a multi-layer Transformer ($\mathrm{MultiTransformer}$): \begin{equation} \label{eq:reasoner1} \bm{z}^\prime_{q,s^{key}}= \mathrm{MultiTransformer}(\bm{z}_{q,s^{key}}) \end{equation} Following~\citep{sentence-bert}, we respectively compute the mean of all output token vectors of $q$ and $s$ in $z^\prime_{q,s^{key}}$ to obtain the fixed sized sentence vectors $\bm{q}^\prime \in \mathbb{R}^{dim}$ and $\bm{s}^{key\prime} \in \mathbb{R}^{dim}$, where $dim$ is the dimension of a token in Transformers. \noindent\textbf{Weighted memory-aware aggregation.} For final prediction, we use a score-weighted memory-aware aggregation. To make the predictor aware of the pattern information, we append the corresponding nearest pattern vectors to the claim and key sentence vectors: \begin{equation} \label{eq:qsv-concat} \bm{v}_i = [\bm{q}^\prime,\bm{s}^{key\prime}_i(q,d),\bm{m}_{j}] \end{equation} where $i\!=\!1,\mydots,k_2$. $j\!=\!\mathop{\arg\min}_{k} \left\|\bm{m}_k\!-\!\bm{r}_{s^{key}_i,q}\right\|_2$. Intuitively, a sentence with higher score should be attended more. Thus, the concatenated vectors (Eq.~\eqref{eq:qsv-concat}) are weighted by the relevance scores from Eq.~\eqref{eq:score_all} (normalized across the top-$k_2$ sentences). The weighted aggregating vector is fed into a MLP which outputs the probability that $d$ fact-checks $q$: \begin{equation} scr^\prime(q,s_i^{key}) = \mathrm{Normalize}\big(scr(q,s_i^{key})\big) \end{equation} \begin{equation}\label{eq:final-prediction} \hat{y}_{q,d}=\mathrm{MLP}\Big(\sum_{i=1}^{k_2}scr^\prime(q,s_i^{key}) \bm{v}_i\Big) \end{equation} where $\hat{y}_{q,d} \in [0,1]$. If $\hat{y}_{q,d}>0.5$, the model predicts that $d$ fact-checks $q$, otherwise does not. The loss function is cross entropy: \begin{equation}\label{eq:celoss} \mathcal{L}_{M}=\mathrm{CrossEntropy}(\hat{y}_{q,d},y_{q,d}) \end{equation} where $y_{q,d} \in \{0,1\}$ is the ground truth label. $y_{q,d}=1$ if $d$ fact-checks $q$ and $0$ otherwise. The predicted values are used to rank all $k_1$ candidate articles retrieved in the first stage. \subsection{Training \model} We summarize the training procedure of \model~in Algorithm~\ref{algo}, including the pretraining of ROT, the initialization of PMB, the training of ARP, and the epoch-wise update of PMB. \begin{algorithm}[ht] \caption{\model~Training Procedure} \label{algo} \begin{algorithmic}[1] \Require Training set {$\mathcal{T}=[(q_0,d_{00}),\mydots,(q_0,d_{0k_1}),$ $\mydots,(q_n,d_{nk_1})]$ where the $k_1$ candidate articles for each claim are retrieved by BM25.} \State Pre-train \texttt{ROUGE}-guided Transformer. \State Initialize the Pattern Memory Bank (PMB). \For {each epoch} \For {$(q,d)$ in $\mathcal{T}$} \State {\color{gray} {//} Key Sentence Identification} \State Calculate $scr_Q(q,s)$ via ROT and $scr_P(q,s)$ via PMB. \State Calculate $scr(q,s)$ using Eq.~\eqref{eq:score_all}. \State Select key sentences $\mathcal{K}$. \State {\color{gray} {//} Article Relevance Prediction (ARP)} \State Calculate $\bm{v}$ for each $s$ in $\mathcal{K}$ and $\hat{y}_{q,d}$. \State Update the ARP to minimize $\mathcal{L}_M$. \EndFor \State Update the PMB using Eq.~\eqref{eq:memory-update}. 
\EndFor \end{algorithmic} \end{algorithm} \section{Experiments}\label{sec:exp} In this section, we mainly answer the following experimental questions: \noindent\textbf{EQ1:} Can \model~improve the ranking performance of FC-articles given a claim? \noindent\textbf{EQ2:} How effective are the components of \model, including \texttt{ROUGE}-guided Transformer, Pattern Memory Bank, and weighted memory-aware aggregation in Article Relevance Prediction? \noindent\textbf{EQ3:} To what extent can \model~identify key sentences in the articles, especially in the longer ones? \subsection{Data}\label{sec:data} We conducted the experiments on two real-world datasets. Table~\ref{tab:dataset} shows the statistics of the two datasets. The details are as follows: \noindent\textbf{Twitter Dataset} The Twitter\footnote{\url{https://twitter.com}} dataset originates from \citep{vo2019} and was processed by \citet{vo2020}. The dataset pairs the claims (tweets) with the corresponding FC-articles from Snopes. For tweets with images, it appends the OCR results to the tweets. We remove the manually normalized claims in Snopes' FC-articles to adapt to more general scenarios. The data split is the same as that in \citep{vo2020}. \noindent\textbf{Weibo Dataset} For this work, we built the first Chinese dataset for the task of detecting previously fact-checked claims. The claims are collected from Weibo and the FC-articles are from \textit{multiple fact-checking sources} including Jiaozhen, Zhuoyaoji\footnote{\url{https://piyao.sina.cn}}, etc. We recruited annotators to match claims and FC-articles based on basic search results. Appendix~\ref{apd:dataset} introduces the details. \begin{table}[t] \caption{Statistics of the Twitter and the Weibo datasets. \#: Number of. C-A Pairs: Claim-article pairs.} \label{tab:dataset} {\small \setlength\tabcolsep{3.2pt} \begin{tabular}{l|rrr|rrr} \hline \multirow{2}{*}{\textbf{Dataset}} & \multicolumn{3}{c|}{\textbf{Twitter}} & \multicolumn{3}{c}{\textbf{Weibo}} \\ \cline{2-7} & Train & Val & Test & Train & Val & Test \\ \hline \#Claim & 8,002 & 1,000 & 1,001 & 8,356 & 1,192 & 2,386\\ \#Articles & 1,703 & 1,697 & 1,697 & 17,385 & 8,353 & 11,715\\ C-A Pairs & 8,025 & 1,002 & 1,005 & 28,596 & 3,337 & 7,245\\ \hline \multicolumn{7}{c}{\textbf{Relevant Fact-checking Articles Per Claim}} \\ \hline Average & 1.003 & 1.002 & 1.004 & 3.422 & 2.799 & 3.036 \\ Median & 1 & 1 & 1 & 2 & 1 & 2 \\ Maximum & 2 & 2 & 2 & 50 & 18 & 32 \\ \hline \end{tabular} } \end{table} \begin{table*}[htbp] \centering \caption{Performance of baselines and \model. 
Best results are in \textbf{boldface}.} \label{tab:main_exp} \setlength\tabcolsep{3.5pt} {\small \begin{tabular}{l|c|cccccc|cccccc} \hline \multirow{3}{*}{\textbf{Method}} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Selecting}\\ \textbf{Sentences?}\end{tabular}} & \multicolumn{6}{c|}{\textbf{Weibo}} & \multicolumn{6}{c}{\textbf{Twitter}} \\ \cline{3-14} & & \multirow{2}{*}{MRR} & \multicolumn{3}{c}{MAP@} & \multicolumn{2}{c|}{HIT@} & \multirow{2}{*}{MRR} & \multicolumn{3}{c}{MAP@} & \multicolumn{2}{c}{HIT@} \\ \cline{4-8} \cline{10-14} & & & 1 & 3 & 5 & 3 & 5 & & 1 & 3 & 5 & 3 & 5\\ \hline BM25 & & 0.709 & 0.355 & 0.496 & 0.546 & 0.741 & 0.760 & 0.522 & 0.460 & 0.489 & 0.568 & 0.527 & 0.568 \\ \hline BERT & & 0.834 & 0.492 & 0.649 & 0.693 & 0.850 & 0.863 & 0.895 & 0.875 & 0.890 & 0.890 & 0.908 & 0.909 \\ DuoBERT & & 0.885 & 0.541 & 0.713 & 0.756 & 0.886 & 0.887 & 0.923 & \textbf{0.921} & 0.922 & 0.922 & 0.923 & 0.923 \\ BERT(Transfer) & \checkmark & 0.714 & 0.361 & 0.504 & 0.553 & 0.742 & 0.764 & 0.642 & 0.567 & 0.612 & 0.623 & 0.668 & 0.719\\ \hline Sentence-BERT & \checkmark & 0.750 & 0.404 & 0.543 & 0.589 & 0.810 & 0.861 & 0.794 & 0.701 & 0.775 & 0.785 & 0.864 & 0.905\\ RankSVM & \checkmark & 0.809 & 0.408 & 0.607 & 0.661 & 0.887 & 0.917 & 0.846 & 0.778 & 0.832 & 0.840 & 0.898 & 0.930\\ CTM & & 0.856 & 0.356 & 0.481 & 0.525 & 0.894 & 0.935 & 0.926 & 0.889 & 0.919 & 0.922 & 0.952 & 0.964 \\ \hline \model & \checkmark & \textbf{0.902} & \textbf{0.542} & \textbf{0.741} & \textbf{0.798} & \textbf{0.934} & \textbf{0.951} & \textbf{0.931} & 0.899 & \textbf{0.926} & \textbf{0.928} & \textbf{0.957} & \textbf{0.967} \\ \hline \end{tabular} } \end{table*} \begin{table*}[htbp] \centering \caption{Ablation study of \model. Best results are in \textbf{boldface}. AG: Ablation Group.} \label{tab:ablation} \setlength\tabcolsep{3.5pt} {\small \begin{tabular}{cl|cccccc|cccccc} \hline \multirow{3}{*}{\textbf{AG}} & \multirow{3}{*}{\textbf{Variant}} & \multicolumn{6}{c|}{\textbf{Weibo}} & \multicolumn{6}{c}{\textbf{Twitter}} \\ \cline{3-14} & &\multirow{2}{*}{MRR} & \multicolumn{3}{c}{MAP@} & \multicolumn{2}{c|}{HIT@} & \multirow{2}{*}{MRR} & \multicolumn{3}{c}{MAP@} & \multicolumn{2}{c}{HIT@} \\ \cline{4-8} \cline{10-14} & & & 1 & 3 & 5 & 3 & 5 & & 1 & 3 & 5 & 3 & 5\\ \hline - & \model & \textbf{0.902} & \textbf{0.542} & \textbf{0.741} & \textbf{0.798} & 0.934 & 0.951 & \textbf{0.931} & 0.899 & \textbf{0.926} & \textbf{0.928} & \textbf{0.957} & \textbf{0.967} \\ \hline \ 1& \textit{w/o \texttt{ROUGE} guidance} & 0.892 & 0.535 & 0.729 & 0.786 &0.925 & 0.943 & 0.929 & \textbf{0.905} & 0.924 & 0.926 & 0.945 & 0.952 \\ \hline \ \multirow{3}{*}{2} & \textit{w/ rand mem init} & 0.879 & 0.516 & 0.700 & 0.753 & 0.912 & 0.935 & 0.897 & 0.860 & 0.890 & 0.893 & 0.922 & 0.938 \\ \ & \textit{w/o mem update} & 0.898 & 0.541 & 0.736 & 0.790 & 0.935 & 0.948 & 0.925 & 0.897 & 0.860 & 0.890 & 0.922 & 0.938 \\ \ & \textit{w/o PMB} & 0.897 & 0.537 & 0.734 & 0.792 & 0.931 & 0.948 & 0.920 & 0.885 & 0.913 & 0.917 & 0.944 &0.960 \\\hline \ \multirow{2}{*}{3} & \textit{w/ avg. 
pool} & 0.901 & 0.540 &0.739 & 0.796& \textbf{0.938} & \textbf{0.958} & 0.923&0.892 &0.917 &0.919 & 0.944 &0.954\\ \ & \textit{w/o pattern aggr.} & 0.896 & 0.535 & 0.734 & 0.791 & 0.930 & 0.945 & 0.922 & 0.890 & 0.917 & 0.919 & 0.947 & 0.954 \\ \hline \end{tabular} } \end{table*} \subsection{Baseline Methods}\label{sec:baselines} \noindent\textbf{BERT-based rankers from general IR tasks} \textbf{BERT}~\citep{bert}: A method of pretraining language representations with a family of pretrained models, which has been used in general document reranking to predict the relevance.~\citep{reranking-with-bert, birch} \textbf{DuoBERT}~\citep{duobert}: A popular BERT-based reranker for multi-stage document ranking. Its input is a query and a pair of documents. The pairwise scores are aggregated for final document ranking. Our first baseline, BERT (trained with query-article pairs), provides the inputs for DuoBERT. \textbf{BERT(Transfer)}: As no sentence-level labels are provided in most document retrieval datasets, \citet{simple-bert} finetune BERT with short text matching data and then apply to score the relevance between query and each sentence in documents. The three highest scores are combined with BM25 score for document-level prediction. \noindent\textbf{Rankers from related works of our task} \textbf{Sentence-BERT}: \citet{shaar2020} use pretrained Sentence-BERT models to calculate cosine similarity between each sentence and the given claim. Then the top similarity scores are fed into a neural network to predict document relevance. \textbf{RankSVM}: A pairwise RankSVM model for reranking using the scores from BM25 and sentence-BERT (mentioned above), which achieves the best results in \citep{shaar2020}. \textbf{CTM}~\citep{vo2020}: This method leverages GloVe and ELMo to jointly represent the claims and the FC-articles for predicting the relevance scores. Its multi-modal version is not included as \model~focuses on key textual information. \subsection{Experimental Setup} \noindent\textbf{Evaluation Metrics.} As this is a binary retrieval task, we follow~\citet{shaar2020} and report Mean Reciprocal Rank (MRR), Mean Average Precision@$k$ (MAP@$k$, $k=1,3,5$) and HIT@$k$ ($k=3,5$). See equations in Appendix~\ref{apd:eval-metrics}. \noindent\textbf{Implementation Details.} In \model, the ROT and ARP components have one and eleven Transformer layers, respectively. The initial parameters are obtained from pretrained BERT models\footnote{We use \texttt{bert-base-chinese} for Weibo and \texttt{bert-base-uncased} for Twitter.}. Other parameters are randomly initialized. The dimension of claim and sentence representation in ARP and pattern vectors are $768$. Number of Clusters in PMB $K$ is $20$. Following~\citep{shaar2020} and~\cite{vo2020}, we use $k_1=50$ candidates retrieved by BM25. $k_2=3$ (Weibo, hereafter, W) / $5$ (Twitter, hereafter, T) key sentences are selected. We use \texttt{Adam}~\citep{adam} for optimization with $\epsilon=10^{-6}, \beta_1=0.9, \beta_2=0.999$. The learning rates are $5\times 10^{-6}$ (W) and $1\times 10^{-4}$ (T). The batch size is $512$ for pretraining ROT, $64$ for the main task. According to the quantiles on training sets, we set $t_{low}=0.252$ (W) / $0.190$ (T), $t_{high}=0.295$ (W) / $0.227$ (T). The following hyperparameters are selected according to the best validation performance: $\lambda_R=0.01$ (W) / $0.05$ (T), $\lambda_Q=0.6$, $\lambda_P=0.4$, and $\lambda_m=0.3$. The maximum epoch is $5$. 
All experiments were conducted on NVIDIA V100 GPUs with PyTorch~\citep{pytorch}. The implementation details of baselines are in Appendix \ref{apd:imp-baselines}. \subsection{Performance Comparison} To answer \textbf{EQ1}, we compared the performance of baselines and our method on the two datasets, as shown in Table \ref{tab:main_exp}. We see that: (1) \model~outperforms all compared methods on the two datasets (the only exception is MAP@1 on Twitter), which indicates that it can effectively find related FC-articles and provide evidence for determining if a claim is previously fact-checked. (2) For all methods, the performance on Weibo is worse than that on Twitter because the Weibo dataset contains more claim-sentence pairs (from multiple sources) than Twitter and is more challenging. Despite this, \model's improvement is significant. (3) BERT(Transfer), Sentence-BERT and RankSVM use transferred sentence-level knowledge from other pretext tasks but did not outperform the document-level BERT. This is because FC-articles have their own characteristics, which may not be covered by transferred knowledge. In contrast, our observed characteristics help \model~achieve good performance. Moreover, \model~is also more efficient than BERT(Transfer), which likewise uses a 12-layer BERT and selects sentences, because our model feeds all sentences through only one layer (the other 11 layers process only the key sentences), while BERT(Transfer) feeds all sentences through all 12 layers. \subsection{Ablation Study} To answer \textbf{EQ2}, we evaluated three ablation groups of \model's variants (AG1$\sim$AG3) to investigate the effectiveness of the model design.\footnote{We do not run MTM without sentence selection due to its high computational overhead which makes it unfeasible for training and inference.} Table~\ref{tab:ablation} shows the performance of the variants and \model. \textbf{AG1: With vs. Without \texttt{ROUGE}.} The variant removes the guidance of \texttt{ROUGE} (\model~\textit{w/o \texttt{ROUGE} guidance}) to check the effectiveness of \texttt{ROUGE}-guided finetuning. The variant performs worse on Weibo, but MAP@1 slightly increases on Twitter. This is probably because there is more lexical overlap between claims and FC-articles in the Weibo dataset, while most of the FC-articles in the Twitter dataset choose to summarize the claims they fact-check. \textbf{AG2: Cluster-based Initialization vs. Random Initialization vs. Without update vs. Without PMB.} The first variant (\model~\textit{w/ rand mem init}) uses random initialization and the second (\model~\textit{w/o mem update}) uses pattern vectors without updating. The last one (\model~\textit{w/o PMB}) removes the PMB. We see that the variants all perform worse than \model~on MRR, of which \textit{w/ rand mem init} performs the worst. This indicates that cluster-based initialization provides a good start and facilitates the following updates, while random initialization may harm further learning. \textbf{AG3: Score-weighted Pooling vs. Average pooling, and With vs. Without pattern vector.} The first variant, \model~\textit{w/ avg. pool}, replaces the score-weighted pooling with average pooling. The comparison in terms of MRR and MAP shows the effectiveness of using relevance scores as weights. The second, \model~\textit{w/o pattern aggr.}, does not append the pattern vector to the claim and sentence vectors before aggregation. It yields worse results, indicating that the patterns should be taken into consideration for the final prediction. 
\subsection{Visualization of Memorized Patterns} To probe what the PMB summarizes and memorizes, we selected and analyzed the key sentences corresponding to the residual embeddings around pattern vectors. Figure~\ref{fig:visualization} shows example sentences where highly frequent words are in boldface. These examples indicate that the pattern vectors do cluster key sentences with common patterns like ``...spread in WeChat Moments''. \begin{figure}[t] \centering \includegraphics[width=0.92\linewidth]{figs/visualization.pdf} \caption{Visualization of pattern vectors ($\blacktriangle$) and near residual embeddings (\ding{54}). The sentences are translated from Chinese.} \label{fig:visualization} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.92\linewidth]{figs/he1.pdf} \caption{Results of human evaluation. (a) The proportion of the FC-articles where \model~found $\{0,1,2,3\}$ key sentences. (b) The proportion of key sentences at rank $\{1,2,3\}$. (c) The positional distribution of key sentences in the FC-articles.} \label{fig:he} \end{figure} \subsection{Human Evaluation and Case Study} The quality of selected sentences cannot be automatically evaluated due to the lack of sentence-level labels. To answer \textbf{EQ3}, we conducted a human evaluation. We randomly sampled, from the Weibo dataset, 370 claim-article pairs whose articles had more than 20 sentences. Then we showed each claim together with the top three sentences selected from the corresponding FC-article by \model. Three annotators were asked to check whether each auto-selected sentence helped match the given query and the source article (i.e., whether it was a key sentence). Figure~\ref{fig:he} shows that (a) \model~hit at least one key sentence in 83.0\% of the articles; (b) 73.0\% of the sentences at Rank 1 are key sentences, followed by 65.1\% at Rank 2 and 56.8\% at Rank 3. This shows that \model~can find the key sentences in long FC-articles and provide helpful explanations. We also show the positional distribution in Figure~\ref{fig:he}(c), where key sentences are scattered throughout the articles. Using \model~to find key sentences can save fact-checkers the time of scanning these long articles to determine whether the given claim was fact-checked. Additionally, we exhibit two cases from the evaluation set in Figure~\ref{fig:cases}. These cases show that \model~found the key sentences that correspond to the characteristics described in Section~\ref{sec:intro}. Please refer to Appendix~\ref{apd:error} for further case analysis. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/cases} \caption{Cases in the set of human evaluation. Quotations are underlined and patterns are in {\color{red} \textbf{boldface}}.} \label{fig:cases} \end{figure} \section{Conclusions} We propose \model~to select from fact-checking articles the key sentences that introduce or debunk claims. These auto-selected sentences are exploited in an end-to-end network for estimating the relevance of the fact-checking articles w.r.t. a given claim. Experiments on the Twitter dataset and the newly constructed Weibo dataset show that \model~outperforms the state of the art. Moreover, human evaluation and case studies demonstrate that the selected sentences provide helpful explanations of the results. \section*{Acknowledgments} The authors thank Guang Yang, Tianyun Yang, Peng Qi and the anonymous reviewers for their insightful comments. Also, we thank Rundong Li, Qiong Nan, and other annotators for their efforts.
This work was supported by the National Key Research and Development Program of China (2017YFC0820604), the National Natural Science Foundation of China (U1703261), and the Fundamental Research Funds for the Central Universities and the Research Funds of Renmin University of China (No. 18XNLG19). The corresponding authors are Juan Cao and Xirong Li. \section*{Broader Impact Statement} Our work involves two scenarios that need the ability to detect previously fact-checked claims: (1) For social media platforms, our method can check whether a newly published post contains false claims that have already been debunked. The platform can help users become aware of a text's veracity by providing the key sentences selected from fact-checking articles along with links to those articles. (2) For manual or automatic fact-checking systems, it can serve as a filter that avoids redundant fact-checking work. When it functions well, it can assist platforms, users, and fact-checkers in maintaining a more credible cyberspace. In failure cases, however, some well-disguised claims may slip through. This method relies on the fact-checking article databases it uses, so their authority and credibility need to be considered carefully in practice. We did our best to make the new Weibo dataset, which is intended for academic purposes, reliable. Appendix~\ref{apd:dataset} introduces more details. \bibliographystyle{acl_natbib}
\section{Introduction} Deep neural networks have achieved superior performance on numerous vision tasks \cite{Zisserman1,Zisserman2,semanticsegmentation}. However, they remain complicated black boxes: their huge, hard-to-explain parameter sets start from a random initialization and converge to another unpredictable, still sub-optimal point. Because these transformations are non-linear and problem-specific, interpretability remains unsolved. The vision community relies on estimating saliency maps to decipher the decision-making process of deep networks; through such saliency maps we try to bridge the unknown gap between the input space and the decision space. Saliency map generation approaches can be divided according to whether they operate on the input space \cite{RISE,SISE_main,Ada-SISE}, on feature maps \cite{CAM,Axiom-CAM,Grad-CAM,Grad-CAM++,FullGrad,integrated,Ablation-CAM}, or on the propagation scheme \cite{Guidedbackpropagation,DEEPLIFT,LRP,pointinggame,Thereandbackagain}. Typical perturbation-based methods \cite{zeiler,externalperturbation1,externalperturbation2} probe the input space with differently perturbed versions of the image and obtain a unified saliency map through their underlying algorithms. Even though this approach often produces better results, the process is relatively slow. CAM-like methods rely on the gradient for the given class and create a decent output within a very brief computation time. \begin{figure*}[!htbp] \centering \begin{subfigure}[l]{0.80\textwidth} \centering \begin{subfigure}[t]{0.15\textwidth} \centering \includegraphics[height=15mm,width=\textwidth]{fig1/elephant_single_class.png} \subcaption{Single Class} \label{a} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \includegraphics[height=15mm,width=\textwidth]{fig1/single_class_feature_map.jpg} \subcaption{Feature map} \label{b} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \includegraphics[height=15mm,width=\textwidth]{fig1/single_class_ours.jpg} \subcaption{Ours} \label{c} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \includegraphics[height=15mm,width=\textwidth]{fig1/scorecam_single.jpg} \subcaption{\cite{Score-CAM}} \label{d} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \includegraphics[height=15mm,width=\textwidth]{fig1/single_class_grad_CAM.jpg} \subcaption{\cite{Grad-CAM}} \label{e} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \includegraphics[height=15mm,width=\textwidth]{fig1/single_class_gradcampp.jpg} \subcaption{\cite{Grad-CAM++}} \label{f} \end{subfigure} \subcaption{Feature map characteristics for a single class} \label{sinef} \centering \end{subfigure} \begin{subfigure}[r]{0.90\textwidth} \centering \begin{subfigure}[t]{0.13\textwidth} \centering \includegraphics[height=13mm,width=\textwidth]{fig1/input_image_duo_class.jpg} \subcaption{Dual Class} \label{g} \end{subfigure} \begin{subfigure}[t]{0.13\textwidth} \centering \includegraphics[height=13mm,width=\textwidth]{fig1/first_im.jpg} \subcaption{Feature map} \label{h} \end{subfigure} \begin{subfigure}[t]{0.13\textwidth} \centering \includegraphics[height=13mm,width=\textwidth]{fig1/fifth_im.jpg} \subcaption{Ours(\nth{1})} \label{i} \end{subfigure} \begin{subfigure}[t]{0.13\textwidth} \centering \includegraphics[height=13mm,width=\textwidth]{fig1/fifth_im.jpg} \subcaption{Ours(\nth{2})} \label{j} \end{subfigure} \begin{subfigure}[t]{0.13\textwidth} \centering \includegraphics[height=13mm,width=\textwidth]{fig1/duel_scorecam.jpg} \subcaption{\cite{Score-CAM}} \label{k} \end{subfigure}
\begin{subfigure}[t]{0.13\textwidth} \centering \includegraphics[height=13mm,width=\textwidth]{fig1/third_im.jpg} \subcaption{\cite{Grad-CAM}(\nth{1})} \label{l} \end{subfigure} \begin{subfigure}[t]{0.13\textwidth} \centering \includegraphics[height=13mm,width=\textwidth]{fig1/forth_im.jpg} \subcaption{\cite{Grad-CAM++}(\nth{1})} \label{m} \end{subfigure} \subcaption{Feature map characteristics for dual class} \label{duof} \end{subfigure} \caption{Proposed global guidance maps $g_M$ for cat, proposed saliency map ($S_c$) with and without global guidance, and the same maps for the dog. The proposed global guidance map provides strong localization information as well as exclusion of non-target class.} \label{algorithm figure} \end{figure*} This study explores the established framework of CAM-based methods to address current issues in vision-based interpretability. Similar studies usually rely on gradient information to produce saliency maps; some replace the gradient dependency with score estimation to build the saliency map out of the feature maps. Nonetheless, the saliency maps these studies formulate often degrade on class-discriminative examples. Moreover, their weighted accumulation does not always cover the expected region even for a single-class instance, as the associated weights sometimes fail to capture the local correspondence. On the other hand, the gradient-less methods take significant time to produce a score index for the feature maps. In CAM-like studies, the gradient maps are typically averaged into single values to produce the saliency map. However, this projection is not always effective for the saliency map. In figure \ref{algorithm figure}, we can see that the weighted multiplication still contains traces of unwanted classes. To address the weighted multiplication issue, we first look at the problem setup. For a given image, we use a model that produces a fixed number of feature maps before going into the dense layers. This fixed number limits our search space for the saliency map. The only variables are the gradient maps, which change according to the class. Converting each gradient matrix to a single value prevents many significant gradients from influencing the overall accumulation. If we keep all the gradients and perform element-wise multiplication with the feature maps, the obtained map contains a better representation of the assigned class. The acquired map serves as the intermediate global guidance map for the usual local weighted multiplication. We use this guidance map to constrain the generated feature maps to be responsive only to the assigned class through element-wise matrix multiplication. After that, we perform the usual weighted accumulation to acquire the saliency map, followed by a carefully designed upscaling process. Thus, the produced saliency map is more responsive to the given class and its boundaries than in previous approaches. The following lines summarize our overall contributions: \begin{itemize} \item A new saliency map generation scheme is formulated by introducing a global guidance map that incorporates the element-wise influence of the gradient tensor into the saliency map. \item The acquired saliency-map boundaries are crisper than those of contemporary studies and remain effective in single-class, multi-class, and multi-instance single-class cases. \item To validate our study, we perform seven different metric analyses on three different datasets, and the proposed method achieves state-of-the-art performance in most cases.
\end{itemize} \section{Related Work} \textbf{Backprop-based methods.} Zisserman \textit{et al}.\cite{Zisserman1,Zisserman2} first introduced gradient calculation with respect to the confidence score for explanation generation, and other backpropagation-based explanation studies followed \cite{Guidedbackpropagation,integrated,FullGrad}. However, their employment and manipulation of gradients lead to several issues, as addressed by \cite{integrated,FullGrad}. Instead of a single-layer gradient, Srinivas \textit{et al.} \cite{FullGrad} focus on aggregating gradients from all convolutional layers. \textbf{Activation-based methods.} Class activation mapping (CAM) methods are based on the study \cite{CAM}, where the authors select feature maps as the medium for creating an explanation map. GradCAM \cite{Grad-CAM} is a weighted linear combination of the feature maps followed by the ReLU operation for a given image and the respective model. Later, GradCAM++ \cite{Grad-CAM++} introduced the effect of higher-order gradients while including only positive elements. In this way, it achieves a more precise representation compared to the previous studies. However, gradients are not the only way to generate a saliency map, which inspired ScoreCAM \cite{Score-CAM} and AblationCAM \cite{Ablation-CAM}. X-GradCAM \cite{Axiom-CAM}, an extension of GradCAM, follows the same underlying weighted multiplication as GradCAM. EigenCAM \cite{Eigen-CAM} introduces principal component analysis into the construction of the saliency map. CAMERAS \cite{Cameras} extends basic CAM studies with multiscale inputs, leveraging the fusion of multiscale feature maps and gradient-map weighted multiplication. \textbf{Perturbation-based methods.} Another group of studies treat the neural network as a ``white box" instead of a ``black box" and propose an explanation map by probing the input space. These studies generate saliency maps by checking the response to a manipulated input space. By blocking/blurring/masking some regions randomly, these studies observe the forward-pass response for each case and finally aggregate their decisions to develop the saliency map. RISE \cite{RISE} first introduced such an analysis, followed by external perturbation \cite{externalperturbation1,externalperturbation2}, where the authors rely upon an optimization procedure. SISE \cite{SISE_main} presents feature map selection from multiple layers, followed by attribution map generation and mask scoring to generate the saliency map. Later, they improve it through adaptive mask selection in the ADA-SISE \cite{Ada-SISE} study. \section{Proposed methodology} This section describes the formulation of the proposed approach, shown in figure \ref{architecture}, for obtaining the saliency map for a given model and image. \subsection{Baseline formulation } Every trained model infers through the collective response of its feature maps for the given image. The current understanding of deep learning requires more research to define the ideal formulation for the extracted feature maps during inference. During inference, activated feature maps can be sparse or shallow yet rich in spatial correspondence, and their collective distribution governs the outcome of the final prediction. Let $\mathcal{\phi}$ be the model we take for the inference. For any image $\mathcal{X} \in \mathcal{R}^{W \times H \times 3}$, model $\mathcal{\phi}$ generates feature maps $\mathcal{M}$.
Let $\mathcal{M}^{k_l}$ be the $k^{th}$ feature channel at the $l^{th}$ layer, before the final dense layer. If we aggregate them all, we obtain a unified map that represents the collective correspondence of the model $\mathcal{\phi}$ for the given image. This aggregation over the activated channels is as follows: \begin{equation}\label{eq1} \mathcal{A}^{l} = \sum_{k} \mathcal{M}^{k_l} \end{equation} \begin{figure*}[!htbp]\label{f4} \centering \includegraphics[width = 6 in, height= 2.5 in]{fig1/architecture_final.png} \caption{Visual comparison between previous state-of-the-art studies and the proposed method. For the given demonstrations, our approach can mark down the primary salient regions under challenging visual conditions. Additionally, our saliency maps are more concrete and leave almost no traces for the secondary-salient areas.} \label{architecture} \end{figure*} This tensor $\mathcal{A}^{l}$ contains the global representation of all activated feature maps, which can serve the purpose of marking salient regions with careful tuning, as shown in figures \ref{sinef} and \ref{duof}. In equation (\ref{eq1}), we aggregated all of the feature representations into $\mathcal{A}^{l}$; elements over a specific threshold may correspond to the primary class information. Nonetheless, this observation only holds for images with a single class and is not appropriate for dual-class scenarios, as shown in figure \ref{algorithm figure}. To achieve class-discriminatory behaviour, researchers \cite{Grad-CAM,Grad-CAM++,integrated,FullGrad} worked with the idea of using a weighted aggregation in equation (\ref{eq1}), in contrast to the plain linear addition operation. To extract the weights, they first compute the gradient maps $\nabla_\mathcal{C}$ of the class score for the given class $\mathcal{C}$ with respect to the feature maps $\mathcal{M}$. If $Y^{\mathcal{C}}$ is the class score \cite{Grad-CAM} for the given image from the input model $\mathcal{\phi}$, then, for each location $(i,j)$ of the $k^{th}$ feature map at the $l^{th}$ layer, the corresponding gradient map is expressed as: \begin{equation}\label{eq2} \nabla_{\mathcal{C}_{i j}}^{k_{l}} = \frac{\partial Y^{\mathcal{C}}}{\partial \mathcal{M}_{i j}^{k_{l}}} \end{equation} If each feature map holds $\mathcal{Z}$ elements, then the corresponding weight for each feature map $\mathcal{M}^{k_l}$ is: \begin{equation}\label{eq3} \lambda_{\mathcal{C}}^{k_{l}}= \frac{1}{\mathcal{Z}}\sum_{i} \sum_{j} \nabla_{\mathcal{C}_{i j}}^{k_{l}} \end{equation} which is the mean value of $\nabla_{\mathcal{C}}^{k_{l}}$. Hence, the regular baseline formulation \cite{Grad-CAM} for the saliency map $\mathcal{S}_{\mathcal{C}}$ estimation is expressed as follows: \begin{equation}\label{eq4} \mathcal{S}_{\mathcal{C}} = \textit{ReLU}({\sum_{k} \lambda_{\mathcal{C}}^{k_{l}}\times \mathcal{M}^{k_l}}) \end{equation} \subsection{Incorporating global guidance } Equation (\ref{eq4}) achieves class-discriminatory behavior; however, it still faces challenges due to its formulation. If we investigate the above equation, we see that $\lambda_{\mathcal{C}}^{k_{l}}$ weighs the corresponding feature map $\mathcal{M}^{k_l}$. A typical $\lambda_{\mathcal{C}}^{k_{l}}$ treats every element of the given $\mathcal{M}^{k_l}$ equally and increases or decreases their collective effect homogeneously. Therefore, we can still see the traces of the other classes in the saliency map, as shown in figure \ref{algorithm figure}.
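Before modifying this baseline, it helps to make equations (\ref{eq2})--(\ref{eq4}) concrete. The following is a minimal NumPy sketch; the array names are assumptions, and in practice the feature maps and gradients would be captured, for example, with hooks on the last convolutional layer rather than passed in directly.
\begin{verbatim}
# Minimal sketch of the baseline weighted aggregation, eqs. (2)-(4).
# feature_maps, gradients: arrays of shape (K, H, W) for the chosen
# layer and target class (assumed names; not the released code).
import numpy as np

def baseline_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    # Eq. (3): channel weights are the spatial means of the gradient maps
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Eq. (4): weighted sum over channels, followed by ReLU
    saliency = (weights[:, None, None] * feature_maps).sum(axis=0)
    return np.maximum(saliency, 0.0)
\end{verbatim}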
Other studies \cite{SISE_main,Ada-SISE} have shown class-discriminatory performance through perturbations without relying upon the gradients. However, those studies are time-consuming \cite{FullGrad} and often require human interaction. Hence, it is natural to ask: can we still rely on the gradient-weighted operation and address the above issues? In response to this, we propose the global guidance map. To obtain the global guidance map, we perform a simple element-wise multiplication between the feature maps and their corresponding gradient maps. The formulation for the global guidance map is as follows: \begin{equation}\label{eq5} \mathcal{G}_{M} = \textit{ReLU}({\sum_{k} \nabla_{\mathcal{C}}^{k_{l}}\odot \mathcal{M}^{k_l}}) \end{equation} \begin{figure*}[!htbp] \centering \begin{subfigure}[l]{0.18\textwidth} \centering \includegraphics[height=24mm,width=30mm]{rebuttal/rinput_image_duo_class.jpg} \subcaption{Input} \label{fig:a} \centering \end{subfigure} \begin{subfigure}[r]{0.70\textwidth} \centering \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[height=22mm,width=\textwidth]{rebuttal/ggct.jpg} \subcaption{$g_M$(cat)} \label{fig:b} \end{subfigure} \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[height=22mm,width=\textwidth]{rebuttal/prp.jpg}\\ \subcaption{$S_c$ w/o $g_M$(dog)} \label{fig:c} \end{subfigure} \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[height=22mm,width=\textwidth]{rebuttal/catpp.jpg}\\ \caption{$S_C$ w/o $g_M$ for \cite{Grad-CAM++}(cat)} \label{fig:d} \end{subfigure}\\ \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[height=22mm,width=\textwidth]{rebuttal/ggdg.jpg}\\ \caption{$g_M$ (dog)} \label{fig:e} \end{subfigure} \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[height=22mm,width=\textwidth]{rebuttal/prpdg.jpg}\\ \caption{$S_C$ w/o $g_M$(dog)} \label{fig:f} \end{subfigure} \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[height=22mm,width=\textwidth]{rebuttal/woggprp.jpg}\\ \caption{$S_C$ w/o $g_M$(dog)} \label{fig:g} \end{subfigure} \begin{subfigure}[t]{0.25\textwidth} \centering \includegraphics[height=22mm,width=\textwidth]{rebuttal/dogppp.jpg}\\ \caption{$S_C$ w/o $g_M$ for \cite{Grad-CAM++}(dog)} \label{fig:h} \end{subfigure} \end{subfigure} \caption{Proposed global guidance maps $g_M$ for cat, proposed saliency map ($S_c$) with and without global guidance (first row), and the same maps for the dog [second row and third row]. The proposed global guidance map provides strong localization information as well as exclusion of non-target class.} \label{rebuttal} \end{figure*} The idea behind the global guidance map is to focus only on the salient regions by limiting the operating zone of the $\lambda_{\mathcal{C}}^{k_{l}}$. Since the multiplication operation of equation (\ref{eq5}) investigates the individual response of every element of the gradient maps, we can mark the class-specific regions from the feature maps. In this way, the captured guidance map successfully removes traces of the secondary classes from equation (\ref{eq4}), unless the generated feature maps overlap heavily between categories, which signals a possible misclassification. In summary, we first compute $\mathcal{G}_{M}$ and multiply it with each of the feature maps to obtain class-discriminative feature maps from the initial class-representative feature maps.
Then, we perform the weighted multiplication between each weight $\lambda_{\mathcal{C}}^{k_{l}}$ and the corresponding class-discriminative feature map, followed by the final aggregation to obtain the desired representation. Hence, our proposed weighted-multiplicative aggregation is as follows: \begin{equation}\label{eq6} \mathcal{S}_{\mathcal{C}} = \textit{ReLU}({\sum_{k} \lambda_{\mathcal{C}}^{k_{l}}\times (\mathcal{G}_{M}\odot\mathcal{M}^{k_l}})) \end{equation} In equation (\ref{eq6}), the only reason we are using $\lambda_{\mathcal{C}}^{k_{l}}$ is to gain a homogeneous increment similar to that of equation (\ref{eq4}), which also helps to preserve visual integrity during the final upsampling operation. Since the guidance map $\mathcal{G}_{M}$ can successfully omit the secondary classes from any $k^{th}$ feature map $\mathcal{M}^{k_l}$ by \lq masking\rq\space the primary governing region, it also becomes possible to amplify the desired class region through the additional multiplication by $\lambda_{\mathcal{C}}^{k_{l}}$. Finally, typical smoothing and normalization are performed on the saliency map before the post-aggregation up-sampling to the given image size. In this way, the achieved saliency map $\mathcal{S}_{\mathcal{C}}$ shows significant improvement in both single-class representative and multi-class discriminative cases. \begin{figure*}[!htbp] \centering \includegraphics[width = 5.8 in, height= 3.3 in]{fc/comparison.png} \caption{Visual comparison between previous state-of-the-art studies and the proposed method. For the given demonstrations, our approach can mark down the primary salient regions under challenging visual conditions. Additionally, our saliency maps are more concrete and leave almost no traces for the secondary-salient areas.} \label{qualitative result} \end{figure*} \section{Performance evaluation} \textbf{Datasets.} Our experimental setup covers three widely used vision datasets: ImageNet, MS-COCO 14, and PASCAL-VOC 12. Among them, PASCAL-VOC 12 provides full segmentation annotations for the input images. Hence, experiments using segmentation-oriented metrics are performed on this dataset. The remaining experiments do not involve segmentation labels and are applicable to all datasets mentioned above. For the ImageNet and MS-COCO 14 datasets, we randomly selected a few thousand images \cite{Score-CAM,Grad-CAM++,Axiom-CAM} for the experiments. \textbf{Compared studies.} For visual comparison, given the availability and functional complexities, we consider the available official implementations of GradCAM\cite{Grad-CAM}, GradCAM++\cite{Grad-CAM++}, X-gradCAM\cite{Axiom-CAM}, ScoreCAM\cite{Score-CAM}, Fullgrad\cite{FullGrad}, Smooth-Fullgrad\cite{FullGrad}, CAMERAS\cite{Cameras}, Integrated\cite{integrated}, and Relevance-CAM\cite{relevance}. We report seven different quantitative analyses in addition to the visual demonstration. We extract saliency maps from pretrained VGG16 \cite{VGG16} and ResNet50\cite{ResNet50} networks for all of the methods above in our experiments. A later section reports the quantitative results from ResNet50 in the respective tables. As with previous studies \cite{Score-CAM,integrated,Grad-CAM++,Axiom-CAM}, a few thousand random images are selected from the datasets for our experiments. However, since the random images from previous studies are not made public, the results of our random selections may vary from those of the cited studies.
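Before turning to the comparisons, the sketch below illustrates the global guidance map of equation (\ref{eq5}) and the guided aggregation of equation (\ref{eq6}), continuing the baseline sketch above. It is an illustrative reading of the formulas with assumed array names, not the authors' released implementation; smoothing, normalization, and up-sampling are omitted.
\begin{verbatim}
# Minimal sketch of the proposed global guidance, eqs. (5)-(6).
# feature_maps, gradients: arrays of shape (K, H, W) (assumed names).
import numpy as np

def guided_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    # Eq. (5): element-wise product, summed over channels, then ReLU
    guidance = np.maximum((gradients * feature_maps).sum(axis=0), 0.0)  # (H, W)
    # Eq. (3): the usual channel weights (spatial means of the gradients)
    weights = gradients.mean(axis=(1, 2))                               # (K,)
    # Eq. (6): mask each feature map with the guidance map, weight, sum, ReLU
    guided = guidance[None, :, :] * feature_maps                        # (K, H, W)
    return np.maximum((weights[:, None, None] * guided).sum(axis=0), 0.0)
\end{verbatim}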
\begin{figure*}[htbp] \begin{center} \begin{subfigure}{0.7\textwidth} \includegraphics[width=5in,height=1.8in]{fig4/alggggg.PNG} \caption{ Interpretation comparison between the proposed method and GradCAM++\cite{Grad-CAM++}} \label{4.1} \end{subfigure} \end{center} \begin{center} \begin{subfigure}{0.7\textwidth} \includegraphics[width=5in,height=0.6in]{fig4/algo_crop.pdf}\\ \caption{``Ruddy Turnstone" images from the ``ImageNet" dataset.} \label{4.2} \end{subfigure} \end{center} \caption{Here, (a) shows the interpretation comparison between the proposed method and GradCAM++\cite{Grad-CAM++}. The upper row is the response map for the primary ``Albatross" bird class, and the second row is for the secondary class, the ``Ruddy Turnstone" bird. The proposed method clearly presents the difference between VGG16 and ResNet50 in terms of interpretation, whereas GradCAM++ responses are all similar across different networks and classes. (b) As we suspect that dataset bias might lead to such a decision for the given ``Albatross" images, an inspection of the ImageNet dataset could clarify further. Upon examining typical Ruddy Turnstone bird images in the ImageNet dataset, we see that a stony shore is the background for most of the Ruddy Turnstone bird images. } \label{4} \end{figure*} \subsection{Visual demonstration} In this section, we show a qualitative comparison with previous methods. In figure \ref{qualitative result}, we have included diverse sets of images from the datasets. The first five rows show the results for ResNet50, and the later rows are for VGG16. Our image selection includes class-representative, class-discriminative, and multiple-instance examples. Here, the proposed method captures the class-discriminative region with greater confidence, if not the entire class area itself, for the bicycle, sheep, sea-bird, and dog images. Our scheme captures the class region for the sea-bird image and, unlike other methods, excludes its reflection in the water. Additionally, the proposed CAM bounds the whole dog as a single class and captures the sheep in a low-light environment more clearly than other methods. For images with dual classes, our method presents superior class-discriminative performance. Our study successfully bounds the horse regions in the horse images without leaving any traces in other class regions. On the other hand, compared studies often mark both the horse and secondary class instances. \subsection{Quantitative analysis} To present our quantitative analysis, we perform the following experiments: the model's performance drop and increase due to the salient and context regions, the Pointing Game score, the Dice score, and the IoU score. We present the scores for the above metrics for ResNet50 on all datasets. \textbf{Performance due to the saliency region.} If we have a perfect model and a perfect interpreter to mark the spatial correspondence for the specific class, the network will provide a similar prediction for both the given image and the segmented salient image. Here, we first extract the salient region from the given images with the help of the given interpreter. Then we perform prediction on the original image and on the corresponding salient image \cite{Grad-CAM++}, and check the performance drop for the given interpreter. The expectation is that a better interpreter can exclude the non-salient region as much as possible; hence, the performance drop will be as low as possible. Therefore, our first metric reports the performance drop when only the salient area is given as input.
In some cases, prediction performance is hindered by the presence of strong spatial context. \begin{table*}[!htbp] \centering \caption{Comparative evaluation in terms of salience zone increase, context zone drop, Pointing Game, Dice, and IoU (higher is better), and salience zone drop and context zone increase (lower is better) on the PASCAL VOC 2012 dataset for the ResNet50 model. The best scores are in bold and the second-best scores are underlined.} \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccc} \hline & GradCAM\cite{Grad-CAM} & GradCAM++\cite{Grad-CAM++} & X GradCAM\cite{Axiom-CAM} & CAMERAS\cite{Cameras} & FullGrad\cite{FullGrad} & SmoothFullGrad\cite{FullGrad} & Integrated\cite{integrated} & ScoreCAM\cite{Score-CAM} & Relevance\cite{relevance} & Proposed \\ \hline Increase for salience zone $\uparrow $ & 0.0366 & \underline{0.0716} & 0.0366 & 0.0715 & 0.0421 & 0.0421 & 0.0172 & 0.0506 & 0.0559&\textbf{0.0786} \\ Drop for context zone $\uparrow$ & 0.9395 & 0.8812 & \underline{0.9429} & 0.8834 & 0.9083 & 0.9281 & 0.8124 & 0.9389 & 0.9089&\textbf{0.9443} \\ Pointing Game $\uparrow$ & 0.3355 & 0.4731 & 0.3733 & 0.4412 & 0.3713 & 0.4422 & 0.2642 & 0.4322& \underline{0.532}&\textbf{0.5945} \\ Dice $\uparrow$ & 0.2822 & 0.3422 & 0.2934 & 0.3342 & 0.2942 & 0.3328 & 0.1834 & 0.3321 & \underline{0.411}&\textbf{0.4321} \\ IoU $\uparrow$ & 0.0823 & 0.1122 & 0.0901 & 0.1007 & 0.0812 & 0.0963 & 0.0625 & 0.0943 & \underline{0.121}&\textbf{0.1321} \\ Drop for salience zone $\downarrow$ & 0.8996 & \underline{0.8064} & 0.8745 & 0.8201 & 0.8873 & 0.8762 & 0.9399 & 0.8567 & 0.8333&\textbf{0.7784} \\ Increase for context zone $\downarrow $ & 0.0172 & 0.0312 & 0.0205 & 0.0244 & 0.0291 & 0.0215 & 0.0411 & \textbf{0.0151} &0.029& \underline{0.0183} \\ \hline \end{tabular} } \centering \label{t1} \end{table*} \begin{table*}[!htbp] \caption{ Comparative performance drop and increment of saliency and context zones on the MS-COCO 14 dataset for the ResNet50 model.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccc} \hline & GradCAM\cite{Grad-CAM} & GradCAM++\cite{Grad-CAM++} & X GradCAM\cite{Axiom-CAM} & CAMERAS\cite{Cameras} & FullGrad\cite{FullGrad} & SmoothFullGrad\cite{FullGrad} & Integrated\cite{integrated} & ScoreCAM\cite{Score-CAM} & Relevance\cite{relevance}& Proposed \\ \hline Drop for context zone $\uparrow$ & 0.9023 & \textbf{0.9495} & 0.9142 & \underline{0.9424} & 0.8961 & 0.9025 & 0.8607 & 0.9183 & 0.9081 & 0.9391 \\ Increase for saliency zone $\uparrow$ & 0.0490 & \underline{0.0913} & 0.0555 & 0.0935 & 0.0455 & 0.0495 & 0.0796 & 0.0695 & 0.089 & \textbf{0.0999} \\ Drop for saliency zone $\downarrow$ & 0.8394 & \underline{0.7243} & 0.8081 & 0.7325 & 0.8454 & 0.8288 & 0.7713 & 0.7822 & 0.7357 & \textbf{0.6493} \\ Increase for context zone $\downarrow$ & 0.0245 & \underline{0.0165} & 0.0215 & 0.0172 & 0.0311 & 0.0315 & 0.0335 & 0.0205 & 0.0311 & \textbf{0.0152} \\ \hline \end{tabular} } \centering \label{t2} \end{table*} \begin{table*}[!htbp] \caption{ Comparative performance drop and increment of saliency and context zones on the ImageNet dataset for the ResNet50 model.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c c c c c c c c} \hline & GradCAM\cite{Grad-CAM} & GradCAM++\cite{Grad-CAM++} & X GradCAM\cite{Axiom-CAM} & CAMERAS\cite{Cameras} & FullGrad\cite{FullGrad} & SmoothFullGrad\cite{FullGrad} & Integrated\cite{integrated} & ScoreCAM\cite{Score-CAM} & Relevance\cite{relevance}& Proposed \\ \hline Drop for context zone $\uparrow$ & 0.8767 & 0.9178 & 0.8698
&\underline{0.9335} & 0.8379 & 0.8548 & 0.8938 & 0.8585 & 0.8541 &\textbf{0.9392} \\ Increase for saliency zone $\uparrow$ & 0.0535 & 0.0903 & 0.0635 & 0.0936 & 0.0803 & 0.0669 & 0.0435 & \underline{0.1003} & 0.0969 &\textbf{0.1008} \\ Drop for saliency zone $\downarrow$ & 0.7906 & \underline{0.6717} & 0.7682 & 0.7302 & 0.7775 & 0.7844 & 0.7267 & 0.6975 & 0.6844 &\textbf{0.6492} \\ Increase for context zone $\downarrow$ & 0.0234 & \underline{0.0067} & 0.0368 & 0.0134 & 0.0502 & 0.0301 & 0.0301 & 0.0468 & 0.0602 &\textbf{0.0012} \\ \hline \end{tabular} } \centering \label{t3} \end{table*} \textbf{Performance due to the context region.} As above, if saliency extraction is as good as one expects, then we can set up another experiment with the context region. In this setup, we first exclude the salient part from the given image to obtain the context image and predict on it. If the interpreter can successfully extract all the salient areas, the performance will drop by nearly 100 percent. \textbf{Pointing game, Dice Score, IoU.} For image sets with segmentation labels, various segmentation evaluations can be calculated for saliency maps. We follow \cite{pointinggame} to perform the pointing game for class-discriminative evaluation. In this performance metric, the ground-truth label is used to trigger each visualization approach, and the maximum active spot on the resulting heatmap is extracted. After that, it is determined whether the highest-saliency point falls inside the annotated bounding box of the object, counting it as a hit or a miss. The term \begin{tiny}$\frac{ Hits_{total}}{Hits_{total}+ Misses_{total}}$\end{tiny} is calculated as the pointing game accuracy; a high value represents a better explanation for any model. The Dice score is a popular metric for analyzing segmentation performance. It is the ratio of twice the intersection to the total number of elements in the two sets. IoU stands for intersection over union. It is a widespread metric for evaluating segmentation performance. This score ranges from 0 to 1 and quantifies the overlap between the obtained mask and its corresponding ground truth. \begin{figure*} [htbp] \centering \begin{tabular}{cccccc} \includegraphics[width=0.8in,height =0.8 in]{fig5/ship_input.jpg}& \includegraphics[width=1.8in,height =1.2 in]{fig5/ship_iauc.png} & \includegraphics[width=1.8in,height =1.2 in]{fig5/ship_dauc.png} \\ \tiny 5.1(a) & \tiny 5.1(b) & \tiny 5.1(c) \\[1pt] \end{tabular} \\ \begin{tabular}{cccc} \includegraphics[width=0.8in,height =0.8in]{fig5/dog_input.jpg} & \includegraphics[width=1.8 in,height =1.2 in]{fig5/dog_iauc.png} & \includegraphics[width=1.8 in,height =1.2 in]{fig5/dog_dauc.png} \\ \tiny 5.2(d) & \tiny 5.2(e) & \tiny 5.2(f)\\[1pt] \end{tabular} \\ \begin{tabular}{cccc} \includegraphics[width=0.8in,height =0.8in]{fig5/cycle_input.jpg} & \includegraphics[width=1.8 in,height =1.2 in]{fig5/cycle_dauc.png} & \includegraphics[width=1.8 in,height =1.2 in]{fig5/cycle_iauc.png} \\ \tiny 5.3(g) & \tiny 5.3(h) & \tiny 5.3(i)\\[1pt] \end{tabular} \caption{ AUC demonstration for the insertion and deletion operation for the images on the left side.
The above analysis shows that the proposed method captures the most salient regions for single-class, dual-class, and dual-class multi-instance images compared to the previous methods \cite{Grad-CAM,Axiom-CAM,Grad-CAM++}.} \label{fig:AUC2} \end{figure*} The proposed method performs better than or similarly to the best-performing saliency generation methods in table \ref{t1}. Here, we present the comparative data on the Pascal VOC 2012 dataset for the ResNet50 model. Out of seven different performance tests, our method obtains the highest score in six. For the increase in the context zone, our study differs by only 0.03 from the best-performing result. In table \ref{t2}, the proposed method achieves state-of-the-art performance for three out of four metrics. Table \ref{t3} also shows the best performance of our method for every metric on the ImageNet dataset. The PASCAL VOC 2012 dataset has corresponding segmentation masks, and we can obtain pseudo segmentation masks by thresholding the saliency maps. The achieved scores for the Pointing game, Dice, and IoU indicate that our study captures the zones of interest better than the compared studies. To measure explainability, we have also conducted a comparative analysis in figure \ref{fig:AUC2} on three images from the Pascal VOC 2012 dataset and presented the insertion and deletion operations. Here, our method captures the most salient regions for single-class, dual-class, and dual-class multi-instance cases in comparison to GradCAM \cite{Grad-CAM}, Grad-CAM++\cite{Grad-CAM++}, and X-GradCAM\cite{Axiom-CAM}. \subsection{Interpretation comparison} Saliency map generation is not only about capturing the class of interest as precisely as possible. A faithful interpretation is also a significant part of saliency generation studies. In other words, any interpretable method should explain why the underlying model makes such a prediction by marking the corresponding image region. We cannot present this for every image from the dataset, but a representative example can show the difference from previous methods. In figure \ref{4.1}, we present the interpretation comparison between the proposed study and GradCAM++\cite{Grad-CAM++}. Here, the top-1 class response is \lq Albatross\rq\space for the given image. For VGG16, the saliency map of the proposed method marks one of the Albatross birds and the water as context, but fails to mark the other Albatross bird; this different interpretation is due to the global guidance map, which responds to the water as context. With ResNet50, in contrast, our guidance map marks both Albatross birds without marking the water context. However, for both networks, GradCAM++\cite{Grad-CAM++} captures both birds and barely touches the water context. For this particular image, the proposed method presents a clearer interpretation difference between VGG16 and ResNet50. In \ref{4.2}, we show why the models might identify the given image as the \lq Ruddy Turnstone\rq\space class. With our scheme, we interpret that the surrounding stones and water are features that correspond to the Ruddy Turnstone bird for both VGG16 and ResNet50. This interpretation makes more sense if we look at typical Ruddy Turnstone images from the ImageNet dataset \ref{4.2}, where most of the Ruddy Turnstone birds are shown with a stony sea-shore area as the background. Hence, we can utilize this interpretation as a medium for identifying dataset bias.
On the other hand, GradCAM++\cite{Grad-CAM++} shows the Albatross bird regions as the interpretation for the Ruddy Turnstone and almost ignores the associated context. \section{Conclusion} In this study, we present a novel extension of the traditional gradient-dependent saliency map generation scheme. The proposed method leverages element-wise multiplicative aggregation as guidance on top of the usual weighted multiplicative summation and further improves salient-region bounding. Additionally, we showed our study's advanced class-discriminative performance and presented evidence of better area framing with deeper networks. Furthermore, our model produces crisper saliency maps and significant quantitative improvements on three widely used datasets. We aim to integrate this study into other vision tasks in future work. \bibliographystyle{unsrt}
\section{Introduction} In adsorbed nematics, surfaces often determine the favoured director orientation, which then propagates into the bulk \cite{Jerome}. When a nematic is subject to orientations propagating from different surfaces, elasticity theory predicts a distorted director configuration, with an associated elastic free energy. Very often conflicting surface orientations frustrate the director in restricted geometries and disclinations are generated. Disclinations can be generated by curvature alone \cite{Lubensky}, for example by placing a two-dimensional (2D) nematic on a finite but unbounded surface such as a spherical surface \cite{Bowick,PhysRevLett.108.057801,Lowen}. The dimensionality of space is also important since it forces the possible topology and limits the type of defects that can form. Consider a 2D nematic made of rod-like particles inside a small (of a size a few times the particle length) circular cavity. When the density is increased from a dilute state, the bulk isotropic-nematic transition \cite{Cuesta,Bates,review,Lucio} induces some kind of orientational ordering in the cavity. However, due to the surface, the nematic director is unable to adopt a defect-free uniform configuration, and a global topological charge $+1$ \cite{Nelson} arises in the cavity: either as a central $+1$ disclination, or as two diametrically opposed $+1/2$ disclinations (on the surface or at a distance from it). Thus the fluid optimises the surface free energy at the cost of creating two disclinations and a distorted director field (which incurs an elastic free energy). This effect has been observed in Monte Carlo (MC) simulations \cite{Lowen}. For weak surface anchoring there is an alternative scenario: the director may not follow the favoured surface orientation and a quasi-uniform, defect-free configuration with little elastic free energy may arise in the cavity. A uniform phase has been obtained in vibrated monolayers of granular rods \cite{Galanis}. Interesting phase diagrams result as the cavity radius and the fluid density are changed, as exemplified by recent density-functional calculations \cite{PhysRevE.79.061703,herasliqcrys,Chen}. \begin{figure}[h] \includegraphics[width=3.0in,angle=0]{fig1.pdf} \caption{Schematic of the phases obtained in the circular cavity as the density is increased in the case of a wall imposing planar surface alignment. Continuous thin line is the director field. (a) Low density: Isotropic phase (I) with a film of nematic fluid adsorbed on the surface. (b) Intermediate density: Polar phase (N$_2$) containing two disclinations (in the case depicted disclinations are at the wall). (c) High density: Quasi-uniform (N$_u$) phase with two domain walls (represented by dashed lines).} \label{fig1} \end{figure} In this paper we use Monte Carlo simulation to analyse this system, using hard rectangles of length-to-width ratio $L/\sigma$ as a particle model and a circular cavity of radius $R$ that imposes hard or overlap forces on the particles. Both planar (tangential) and homeotropic (normal) anchorings are studied. In both cases we observe the formation of 1D domain walls when the cavity radius is small. The scenario is schematically depicted in Fig. \ref{fig1} (for the sake of illustration, only the planar case is depicted): an isotropic phase, I, at low density (containing a thin nematic film in contact with the wall), Fig. 
\ref{fig1}(a), followed, at higher densities, by a nematic phase with two $+1/2$ disclinations called polar configuration, N$_2$, see Fig. \ref{fig1}(b). When the density increases further the polar configuration is no longer stable, and may transform into a structure with a quasi-uniform director configuration, phase N$_u$ in Fig. \ref{fig1}(c). Depending on the surface conditions, cavity radius and particle aspect ratio, the N$_u$ phase may be preempted by formation of layered (smectic-like) structures. Our study contains two novel features. (i) In contrast with previous works, which focus solely on the nematic phase and the configuration of disclinations, the whole density range, from dilute to near close-packing, is explored here. (ii) The peculiar structure of the N$_u$ phase: the incorrect surface alignment associated with a uniform director is here observed to be relaxed by the formation of domain walls, i.e. one-dimensional interfaces across which the director rotates abruptly by approximately $90^{\circ}$; this is represented by the dashed lines in Fig. \ref{fig1}(c). These structures are similar to the step-like defects observed in three-dimensional nematics inside hybrid planar slit pores \cite{PM,Galabova,Sarlah,PhysRevE.79.011712,paulo,Noe} or near half-integer disclinations in cylindrical pores \cite{Schopohl}, and may be a universal feature in nematics subject to high frustration and strong anchoring under conditions of severe confinement. In our results, planar and homeotropic anchoring conditions behave similarly except for trivial but important differences. Although the calculation of a complete phase diagram including cavity radius, density and particle aspect ratio is beyond our present capabilities, we give general trends as to how the equilibrium phase depends on these parameters. Our theoretical work is mainly inspired by recent experiments on vibrated quasi-monolayers made of granular rods that interact through approximately overlap forces (hard interactions). Nematic ordering is observed in these fluids \cite{indios,indios1,Galanis,Aranson,review_hungaro}. Granular materials are non-thermal fluids and therefore do not follow equilibrium statistical mechanics. In particular, they flow and diffuse anomalously \cite{Aranson1,Yadav}. However, they can also form steady-state textures that resemble liquid-crystalline states. In this context, it would be interesting to check whether MC simulation on hard particle models can be useful to obtain basic trends as to type of patterns, dependence with packing fraction and size of confining cavity, etc. The arrangement of rods in a 2D confining cavity has also been investigated in connection with the modelling of actin filaments in the cell cytoplasm \cite{Mulder}. Self-organised patterns of these filaments have been observed in various quasi-2D geometries and result from the combined packing and geometrical constraints. Simulation studies such as the present one could also provide mechanisms to explain this and other experiments on confined quasi-two-dimensional nematics \cite{Mottram}. In Section \ref{SM} we define the particle model, the simulation method and provide some details on the analysis. Results are presented in Section \ref{Results}, and a short discussion and the conclusions are given in Section \ref{conclusions}. 
\section{Model and simulation method} \label{SM} The particle model we use is the hard-rectangle (HR) model, consisting of particles of length-to-width ratio $L/\sigma=16$ or $40$ that interact through overlap interactions. The configuration of a particle is defined by $({\bm r},\hat{\bm\omega})$, respectively the position vector of the centre of the particle and the unit vector giving the orientation of the long particle axis. A collection of $N$ such particles is placed in a circular cavity of radius $R$. We define the packing fraction $\phi$ of the system as the ratio of area covered by rectangles and total area $A$ of the cavity. Thus $\phi=NL\sigma/A=\rho_0L\sigma$, where $\rho_0=N/A$ is the mean density. For such large length-to-width ratios a fluid of HR undergoes a phase transition from an I phase to a nematic (N) at rather low densities \cite{SCHLACKEN,raton064903,PhysRevE.76.031704,PhysRevE.79.011711,raton014501,PhysRevE.80.011707}. The bulk transition is continuous and probably of the Kosterlitz-Thouless type; this detail is irrelevant here since, due to the completely confined geometry, there can be no true phase transition in the circular cavity and one expects a possibly abrupt but in any case gradual change from the I phase to the N phase. The effect of the cavity wall on the particles is represented via an external potential $v_{\hbox{\tiny ext}}({\bm r},\hat{\bm\omega})$. In all cases this is a hard potential but, depending on the type of surface anchoring condition wished (either homeotropic or planar), the potential can be chosen to act on the particle centres of mass or on the whole particle --all four corners of the particle. Specifically, \begin{eqnarray} \beta v_{\hbox{\tiny ext}}({\bm r},\hat{\bm\omega})=\left\{\begin{array}{cl}\infty,&\left\{\begin{array}{l} \cdot\hspace{0.1cm}\hbox{\small at least one corner outside} \\ \hbox{\small cavity (planar)}\\ \cdot\hspace{0.1cm}\hbox{\small centre of mass outside }\\ \hbox{\small cavity (homeotropic)}\end{array}\right\}\\\\ 0,&\hbox{otherwise.}\end{array}\right. \end{eqnarray} where $\beta=1/kT$, with $k$ Boltzmann's constant and $T$ the temperature. A hard wall acting on the whole particle promotes planar ordering. However, if the condition is on the particle centers of mass, it is homeotropic anchoring that is promoted. This was shown by MC \cite{Lowen} and density-functional studies \cite{PhysRevE.79.061703}, and has also been confirmed in fluids of hard discorectangles confined in 2D circular cavities or in fluids of rods confined in slit pores in 3D \cite{doi:10.1080/00268979909483083,heras:4949,0953-8984-19-32-326103}. Since a single particle close to a wall may have any orientation in this type of condition, one infers that homeotropic anchoring results from the collective effect of all particles. By contrast, a hard wall over the whole particle induces planar anchoring. In this case anchoring is not the result of a collective effect since a single particle sufficiently close to the wall is forced to adopt an orientation parallel to the wall. The simulation method was the following. We started at low density with a few particles inside the cavity. After equilibrating the system using the standard Metropolis algorithm on particle positions and orientations, a few particles are added and the fluid is equilibrated again. The number of particles added varied between $1$ and $20$, resulting in an increase in packing fraction of $\sim 1-10\cdot10^{-3}$ (depending on aspect ratio and cavity radius). 
This process was repeated until a high density was reached. For the insertion process we first chose one particle at random and created a replica with the same orientation but with the long axis displaced by $\sim D$. Then we performed a few thousand rotations and displacements on the new particle. The addition of one particle, especially when the density is high, may lead to overlap; in that case we chose another particle to create the replica, and new attempts were made until the insertion was completed successfully. The simulation ended when the desired density was reached or when the addition of new particles was no longer possible. As usual, a Monte Carlo step (MCS) is defined as an attempt to individually move and rotate all particles in the system. We performed $5-15\cdot 10^5$ MCS for each $N$. The acceptance probability was set to about $0.2$, and depended on the maximum displacement $\Delta r_{\hbox{\tiny max}}$ and maximum rotation $\Delta\phi_{\hbox{\tiny max}}$ each particle is allowed to perform in one MCS. Both $\Delta r_{\hbox{\tiny max}}$ and $\Delta\phi_{\hbox{\tiny max}}$ were adjusted to obtain the desired acceptance probability every time the number of particles was increased. To characterise the fluid structure in the cavity, three local fields are defined: (i) A local density $\rho({\bm r})$, expressed in terms of a local packing fraction $\phi({\bm r})=\rho({\bm r})L\sigma$; here ${\bm r}=(x,y)$ is the position vector of a particle centre of mass. (ii) A local order tensor $Q_{ij}({\bm r})$, defined as $Q_{ij}=\left<2\hat{\bm \omega}_i\hat{\bm \omega}_j-\delta_{ij}\right>$, where $\left<\cdots\right>$ denotes a canonical average, $\hat{\bm \omega}=(\cos{\varphi},\sin{\varphi})$ is the unit vector pointing along the particle axis, and $\varphi$ is the angle with respect to the $x$ axis. The order tensor can be diagonalised, and the largest eigenvalue $Q({\bm r})$ is taken as the local order parameter (the other eigenvalue is negative and has the same absolute value). (iii) A local tilt angle $\psi({\bm r})$, defined as the angle between the $x$ axis of the frame where $Q_{ij}$ is diagonal and the laboratory-frame $x$ axis. All local quantities, $\phi({\bm r})$, $Q({\bm r})$ and $\psi({\bm r})$, were obtained, at each ${\bm r}$, as an average over all particles within a circle of radius $r=0.5L$ and over $2500$ different configurations separated by $50$ MCS (all local fields shown in this paper were obtained in this way). This number of configurations is necessary in order to obtain spatially smooth fields. However, a complication may arise due to the collective rotation of the particles inside the cavity. As the cavity does not impose a global director, microstates with the same average order parameter but distinct global directors are equivalent. Hence, an ensemble average over these states can artificially result in a state with a lower order parameter. In particular, the collective rotations of the particles take place more frequently at low packing fractions, hindering the nematization analysis. Since the precise location of the transition to the nematic state is not the goal of our work, we did not attempt to address this problem. As shown in section \ref{Results}, distinct states characterised by different local nematic fields arise in the cavity as the density is increased. Given the completely restricted geometry and the reduced number of particles in the cavity, abrupt changes are not expected and only a continuous transition between the different states can occur in the system.
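As a concrete illustration of the local order analysis defined above, the sketch below computes $Q({\bm r})$ and $\psi({\bm r})$ for a single sampling circle from a set of particle positions and orientations. It is a minimal sketch with assumed array names; the averaging over the $2500$ configurations is omitted.
\begin{verbatim}
# Minimal sketch of the local 2D order-tensor analysis (assumed names).
# With Q_xx = <cos 2phi> and Q_xy = <sin 2phi>, the largest eigenvalue of
# Q_ij is Q = sqrt(Q_xx^2 + Q_xy^2) and the tilt is psi = 0.5*atan2(Q_xy, Q_xx).
import numpy as np

def local_order(xs, ys, angles, x0, y0, radius):
    """Order parameter Q and tilt psi averaged over the particles whose
    centres (xs, ys) lie inside the circle of given radius at (x0, y0);
    `angles` are the particle axis angles in radians."""
    inside = (xs - x0)**2 + (ys - y0)**2 <= radius**2
    a = angles[inside]
    if a.size == 0:
        return 0.0, 0.0
    qxx = np.mean(np.cos(2.0 * a))
    qxy = np.mean(np.sin(2.0 * a))
    Q = np.hypot(qxx, qxy)               # largest eigenvalue of Q_ij
    psi = 0.5 * np.arctan2(qxy, qxx)     # local tilt angle
    return Q, psi
\end{verbatim}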
In order to check that the change between states is indeed gradual, we ran simulations for selected cavity radii by first increasing and then decreasing the number of particles, looking for possible hysteresis effects. As expected, no such effects were found in the process and we can be confident that the states described in the following section are the stable ones. \section{Results} \label{Results} An overall picture of the phenomena occurring in the cavity as the density is increased can be obtained by looking at typical particle configurations, the local packing fraction and the local order parameter. In this section we first present the results for the planar case, and then for the homeotropic case. In both cases cavity radii from $R=1.0L$ to $10L$ were explored, all for $L/D=16$ and $40$. \begin{figure*} \includegraphics[width=\textwidth]{fig2.pdf} \caption{Cavity with radius $R=7.5L$, planar anchoring conditions and particle aspect ratio $L/D=40$. Left column: snapshot of particle configurations. Middle column: local packing fraction. Right column: local order parameter. First row: I phase, with $N=570$ and global packing fraction $\phi\simeq 0.08$. Second row: N$_2$ phase, with $N=1270$ and $\phi\simeq 0.18$. Third row: a probably metastable phase, with $N=1550$ and $\phi\simeq 0.22$. Fourth row: N$_u$ phase, with $N=2820$ and $\phi\simeq 0.40$.} \label{fig2} \end{figure*} \subsection{Planar anchoring} Representative results in the case of planar anchoring can be obtained from the case $R=7.5L$ and $L/D=40$ (the behaviour is qualitatively the same for the other cavity radii and aspect ratios analysed). The initial number of particles was $N=250$ (corresponding to a packing fraction $\phi\simeq 0.04$), and the final number of particles was $N=3000$ ($\phi\simeq 0.42$). The results are shown in Fig. \ref{fig2}, where each row corresponds to a given packing fraction, increasing from top to bottom. At low $\phi$ (first row in Fig. \ref{fig2}) the fluid is disordered (I phase), except for a thin film next to the wall, which presents some degree of planar ordering. As the fluid becomes denser it undergoes a quasi-transition from the I phase to the N$_2$ phase at a density close to the bulk transition (the second row in the figure, for $\phi=0.180$, corresponds to a nematic state, i.e. beyond the bulk transition). This density agrees closely with that predicted for hard rods in 2D \cite{Bates,raton064903}. Nematization in the case $L/D=16$ is qualitatively similar, except that the transition density is more or less doubled. Once a nematic fluid is established in the cavity, the local director is subject to frustration due to the geometry. The planar surface orientation is satisfied by the particles but, due to the topological restrictions imposed by the wall, the nematic fluid creates two disclinations of topological charge $+1/2$ next to the walls in diametrically opposed regions. This feature manifests itself in panel b3 through the depleted order parameter at the disclination cores (by contrast, the local $\phi$, panel b2, is not sensitive because the ordered and disordered phases have the same density). Two isolated $+1/2$ disclinations are always more stable than a single point defect of charge $+1$ because the free energy is proportional to the square of the topological charge. The boundaries could modify this balance, but our results, already predicted by density-functional calculations \cite{PhysRevE.79.061703}, indicate that this is not the case.
Sometimes along the MC chain, configurations with two extra disclinations are excited (third row in Fig. \ref{fig2}): one of charge $+1/2$, close to the surface and forming an equilateral triangle with the previous two, and another one with charge $-1/2$ at the centre (panel c3); these configurations, which still have a total topological charge of $+1$, do not appear very often in the MC chain since they involve a higher elastic free energy, and anyway the $-1/2$ and $+1/2$ disclinations tend to annihilate each other. A similar metastable configuration has been observed in simulations of hard spherocylinders lying on the surface of a three-dimensional sphere \cite{Lowen}. At higher densities a dramatic structural change can be observed (fourth row in Fig. \ref{fig2}). As $\phi$ increases, elastic stresses become very large because of the strong dependence of the elastic constants, $K$, on density. As a consequence, a quasi-uniform director configuration (N$_u$ phase) with little elastic stress is formed beyond some critical value $\phi_c$. The director orientation is not completely uniform. A perfectly uniform director configuration would imply that the planar orientation favoured at the wall is not completely satisfied. However, the fluid can reduce the increased surface free energy implied by a strictly uniform director field by creating two fluctuating domain walls, panel d3, that define two diametrically opposed domains where the director rotates by $90^{\circ}$. Alternatively, we can view this structure as a polar structure (stable at lower densities) where the two point defects are smeared out into a curved one-dimensional interface. Particles in the two small domains satisfy the surface orientation. Note that the domain walls behave as a soft wall: particles of the central domain next to the interface are highly ordered and the density in these regions is increased (panel d2). Domain walls are seen to behave as highly fluctuating structures.\\ \begin{figure} \includegraphics[width=1.0\columnwidth]{fig3.pdf} \caption{High-density states of particles with $L/D=40$ confined in cavities with planar anchoring and different radii. The global packing fraction is $\phi\simeq 0.45$. Left column: representative snapshots of particle configurations. Right column: local packing fraction. Cavity radius increases from top to bottom. First row: $R=1.5L$, $N=126$. Second row: $R=2.0L$, $N=224$. Third row: $R=2.5L$, $N=350$. Fourth row: $R=6.0L$, $N=2025$.} \label{fig3} \end{figure} The value of the packing fraction, $\phi_c$, at which the N$_2$-N$_u$ transition takes place increases with cavity radius $R$. To understand this, let us consider the free energy $F$ of both configurations, N$_2$ and N$_u$, with respect to an undistorted nematic state with free energy $F_0$. Three terms contribute to the excess free energy $\Delta F=F-F_0$: domain walls, $F_w$, elastic deformations of the director field, $F_e$, and disclination cores, $F_c$. In the N$_2$ state the director field is distorted; the free energy presents a logarithmic dependence on $R$, i.e. $F_e\sim K(\phi)\log{R}$ \cite{dephysics,kleman2002soft}, while the contribution from the two disclinations is almost constant (assuming the distance between the cores to be independent of $R$). In the N$_u$ state director deformations are negligible in comparison to the other phase, but the presence of domain walls increases the free energy.
Since $F_w$ is proportional to the length of the domain wall, it should also increase with $R$, but faster than $F_e$. Due to the weaker dependence of $F_e$ on $R$, we expect a transition from N$_u$ to N$_2$ when $R$ is increased beyond some critical value and consequently $\phi_c(R)$ should be an increasing function of $R$ in view of the density dependence of the elastic constants $K(\phi)$. This conclusion is confirmed by our simulations (not shown). \begin{figure*} \includegraphics[width=\textwidth]{fig4.pdf} \caption{Cavity with radius $R=5.5L$, homeotropic anchoring conditions and particle aspect ratio $L/D=16$. Left column: snapshot of particle configurations (circle in black is the actual cavity wall of radius $R$, while circle in red represents the effective cavity with radius $R_{\hbox{\tiny eff}}$). Middle column: local packing fraction. Right column: local order parameter. First row: I phase with $N=378$ and global effective packing fraction $\phi_{\hbox{\tiny eff}}\simeq 0.21$. Second row: N$_2$ phase with $N=618$ and $\phi_{\hbox{\tiny eff}}\simeq 0.34$. Third row: a state intermediate between the N$_2$ and N$_u$ phases, with $N=778$ and $\phi_{\hbox{\tiny eff}}\simeq 0.43$. Fourth row: N$_u$ phase with $N=1022$ and $\phi_{\hbox{\tiny eff}}\simeq 0.56$.} \label{fig4} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{fig5.pdf} \caption{Cavity with radius $R=5.5L$, homeotropic anchoring conditions and particle aspect ratio $L/D=40$. Left column: snapshot of the particle configurations (circle in black is the actual cavity wall of radius $R$, while circle in red represents the effective cavity with radius $R_{\hbox{\tiny eff}}$). Middle column: local packing fraction. Right column: local order parameter. First row: I phase with $N=385$ and average effective packing fraction $\phi_{\hbox{\tiny eff}}\simeq 0.09$. Second row: N$_2$ phase with $N=805$ and $\phi_{\hbox{\tiny eff}}\simeq 0.18$. Third row: N$_2$ phase with $N=1325$ and $\phi_{\hbox{\tiny eff}}\simeq 0.29$. Fourth row: phase exhibiting layering in the radial direction, with $N=2445$ and $\phi_{\hbox{\tiny eff}}\simeq 0.54$.} \label{fig5} \end{figure*} \begin{figure*} \includegraphics[width=6.2in]{fig6.pdf} \caption{High density states in a cavity with homeotropic anchoring. Left column: representative particle configurations. Middle column: local packing fraction. Right column: local order parameter. First row: uniform phase, $L/D=40$, $R=2.5L$, $N=610$, $\phi_{\hbox{\tiny eff}}\simeq 0.54$. Second row: polar phase, $L/D=16$, $R=12.5L$, $N=3508$, $\phi_{\hbox{\tiny eff}}\simeq 0.41$.} \label{fig6} \end{figure*} On further increasing the value of $\phi$, the fluid may develop smectic-like layers, reflecting the corresponding transition in bulk. The role played by cavity radius is especially important in this regime. To see this, we plot in Fig. \ref{fig3} representative snapshots of the particle configurations (left column) and the local packing fraction (right column) of particles with aspect ratio $L/D=40$. In all cases the average packing fraction is $\phi\simeq 0.45$ (well above the bulk I-N transition) and the radius increases from top to bottom: $R=1.5L$, $2L$, $2.5L$ and $6L$. As expected, strong commensuration effects arise in the cavity at high density. For small cavity radii, the particles form well-defined layers (first three rows in the figure), the number of which depends on the available space.
The formation of layers inside the cavity is the analogue of capillary smectization of a liquid crystal in slab geometry previously analysed in 3D \cite{PhysRevLett.94.017801,PhysRevE.74.011709,0953-8984-20-42-425221,0953-8984-22-17-175002} and 2D \cite{Yuri}. In general, the circular shape of the cavity frustrates the formation of well-defined layers, but the combined effect of shape and size may frustrate or enhance the formation of layers. In this system small cavity sizes promote layering: in the cases shown, the fluid remains in a nematic-like N$_{\hbox{\small u}}$ state when $R=6L$ (see panels d1 and d2 of Fig. \ref{fig3}), but for smaller cavities at the same packing fraction well defined smectic-like layers can develop. \subsection{Homeotropic anchoring} In this case the wall acts as a hard wall on the particle centres of mass. In order to be able to compare with the planar case, we define an effective cavity radius, $R_{\hbox{\tiny eff}}=R+\sqrt{L^2+D^2}/2$, and obtain an effective packing fraction as $\phi_{\hbox{\tiny eff}}=NL\sigma/(\pi R_{\hbox{\tiny eff}}^2)$. In contrast with the planar case, for homeotropic anchoring we observe strong differences with respect to particle aspect ratio for a fixed cavity radius: the N$_u$ phase is stabilised for $L/D=16$, but the N$_2$--N$_u$ transition is preempted by the formation of smectic-like layers when $L/D=40$ and, as a result, no quasi-uniform configuration occurs. Fig. \ref{fig4} summarises a typical evolution of the configurations as $\phi$ is increased when $L/D=16$. The low-density configuration (first row) is similar to the planar case: a thin (one-particle thick) film develops at the wall, now with normal average orientation of the particles, while the rest is disordered. When the density increases and nematic order appears in the whole cavity (second row of Fig. \ref{fig4}), the topological constraints force the creation of two disclinations of charge $+1/2$; this is as in the planar case (second row of Fig. \ref{fig2}). Here the two defects are not at the wall but a bit separated. This feature was predicted by density-functional theory \cite{PhysRevE.79.061703,herasliqcrys} and results from the effective repulsion of the defect by the wall combined with the mutual repulsion between the two defects (in the planar case such defect-wall repulsion does not exist). As the density is increased further the two disclinations can be seen to approach each other (not shown). This behaviour is in contrast with that predicted by the Onsager-like theory analysed in \cite{PhysRevE.79.061703}, according to which the relative distance should increase with chemical potential (or, equivalently, density). Again two effects are at work as density increases: defect-wall repulsion, which increases due to the strong spatial ordering near the wall and the incipient stratification from the wall in the normal direction (panels b2), and defect mutual repulsion, which also increases due to the higher elastic stiffness of the nematic. The poor description of strong density modulations occurring at high density in Onsager theory may explain the discrepancy. The third row in Fig. \ref{fig4}, corresponding to a larger value of packing fraction, represents an intermediate (probably metastable) stage between the polar state and what appears to be the stable configuration at high density: the uniform phase, N$_u$ (fourth row). Here the director alignment in the cavity is more or less uniform, with little bend-like elastic distortion. 
However, particles in the first layer, highly packed in a compact normal configuration, form a well-defined and stable film acting as a soft wall that favours planar orientation. This results in a quasi-uniform phase N$_u$, similar to that found in the case of planar anchoring at high densities. The resulting configuration contains two extended domain walls separating the first layer from the central nematic domain. The N$_2\to$N$_u$ transition is driven by an anchoring-transition mechanism: the orientation of particles next to the first layer changes from normal (see panel b1 in Fig.\ref{fig4}) to tangential (panel d1) along opposite arcs spanning $\sim 120^{\circ}$; note that in the configuration of panel c1 the transition has taken place only on one side. As in the N$_2$ phase, the formation of these domain walls breaks rotational symmetry and establishes a direction along which smectic-like layers can grow at higher densities (see incipient layering in panel d2). The stability of the N$_u$ phase was checked by preparing a configuration of $N=900$ rods in a cavity of radius $R=5.5L$ containing a central radial disclination of charge $+1$. During the MC chain the system rapidly formed a polar phase with two $+1/2$ disclinations, similar to that depicted in Fig. \ref{fig4} (b1). Then an anchoring transition took place in half of the cavity, and finally the N$_u$ phase was stabilised and remained unchanged for the rest of the simulation. This can be taken as strong evidence that the N$_u$ phase is the truly stable phase. For particle aspect ratio $L/D=40$ and the same cavity radius, Fig. \ref{fig5}, low-density states are similar to those with aspect ratio $L/D=16$ (first and second row of Fig. \ref{fig4}). However, the order parameter close to the surface is significantly lower than in the case of planar anchoring (compare panels a3 of Figs. \ref{fig2} and \ref{fig5}). The reason is the following: a single particle at close contact with the wall has a specific (tangential) alignment when the wall acts over the whole particle; out-of-tangential alignments lead to particle-wall overlap and are not allowed, which increases the order parameter. In contrast, a hard wall acting on the centres of mass does not induce any favoured orientation on the particles if the density is sufficiently low. As $\phi$ increases a nematization transition occurs, and an N$_2$ phase is stabilised at intermediate densities. However, for larger densities, the situation changes dramatically: the homeotropic anchoring imposed by the surfaces always propagates to the inner cavity since the locking mechanism that anchors particles to the first layer is much more effective, and the normal-to-tangential anchoring transition never takes place. At higher packing fractions, $\phi_{\hbox{\tiny eff}}\simeq 0.54$ (fourth row in Fig. \ref{fig5}), a density stratification grows from the surface and particles form well-defined, concentric layers, the two $+1/2$ disclinations being pushed to the central region. The size of the defect cores (regions where the order parameter vanishes) becomes significantly smaller as density is increased. Therefore, the absence of the anchoring transition inhibits the formation of the N$_u$ phase and the fluid instead goes directly to a smectic-like phase exhibiting concentric layers. Thus, for $R=5.5L$, the high-density states are completely different depending on the aspect ratio.
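As a quick consistency check of the effective packing fractions quoted in the captions of Figs. \ref{fig4} and \ref{fig5}, the definition $\phi_{\hbox{\tiny eff}}=NL\sigma/(\pi R_{\hbox{\tiny eff}}^2)$ can be evaluated directly (a minimal sketch, in which we take $\sigma$ to be the particle width $D$; all function names are ours):
\begin{verbatim}
import numpy as np

def phi_eff(N, R_over_L, aspect):
    # effective radius in units of L: R_eff = R + sqrt(L^2 + D^2)/2
    Reff = R_over_L + np.sqrt(1.0 + 1.0/aspect**2)/2.0
    # phi_eff = N*L*D/(pi*R_eff^2), with D = L/aspect
    return N/(aspect*np.pi*Reff**2)

# Fig. 4 (L/D = 16, R = 5.5L): N = 378, 618, 778, 1022
print([round(phi_eff(N, 5.5, 16), 2) for N in (378, 618, 778, 1022)])
# -> [0.21, 0.34, 0.43, 0.56]

# Fig. 5 (L/D = 40, R = 5.5L): N = 385, 805, 1325, 2445
print([round(phi_eff(N, 5.5, 40), 2) for N in (385, 805, 1325, 2445)])
# -> [0.09, 0.18, 0.29, 0.54]
\end{verbatim}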
In the case $L/D=40$ anchoring is much stronger: the anchoring transition does not take place, and the quasi-uniform configuration is inhibited as a result. At high densities, smectic-like layers do not grow along a fixed direction but in the radial direction instead. From this evidence, it may seem that the value of the aspect ratio $L/D$ determines the type of structure in the cavity as $\phi$ increases. This is not the case, and in fact the $R/L$ ratio is a more important factor. To check this, we have conducted simulations for both elongations but with very different cavity sizes, $R=2.5L$ for $L/D=40$ and $R=12.5L$ for $L/D=16$, Fig. \ref{fig6}. In the first case a uniform configuration with domain walls is stabilised; in the second, no anchoring transition occurs and, as a consequence, the N$_u$ phase is not stabilised before layering sets in. We conclude that a large value of $R/L$ favours anchoring and inhibits the anchoring transition and the formation of the quasi-uniform phase. In turn, the critical value of the ratio $R/L$ separating both regimes depends on $L/D$ to some extent. \section{Discussion and conclusions} \label{conclusions} In summary, we have observed the formation of domain walls in 2D nematic fluids subject to frustration as a result of confinement in small cavities. These structures, not predicted by elasticity theory, are similar to the step-like defects obtained in model 3D nematics confined in hybrid planar slit pores. At domain walls the director orientation changes at the molecular scale, so that the elastic field becomes singular along extended interfaces (lines in 2D or surfaces in 3D). In this way the elastic distortion is greatly relaxed, while the surface orientation is still optimised. Domain walls were predicted in an analysis of the neighbourhood of half-integer nematic disclinations, using Landau-de Gennes theory \cite{Schopohl}. In this case the domain wall is a way to avoid a disordered defect core. In hybrid planar slit pores, where the two facing surfaces favour antagonistic directions, a domain-wall structure has also been predicted by Landau-de Gennes theory \cite{PM} and by density-functional calculations \cite{PhysRevE.79.011712}. Here we have extended the observation of these domain walls to real particle simulations of confined nematics in 2D. In our system and for a fixed cavity radius, we have found the sequence I$\to$N$_2\to$N$_u$ for increasing packing fraction $\phi$. The packing fraction at which the N$_2\to$N$_u$ transition occurs strongly depends on cavity radius $R$. At high density smectic-like layers are formed in the fluid. The N$_u$ phase is a quasi-uniform phase with little director distortion; this is realised by the creation of two domain walls which help maintain the favoured surface orientation in the whole cavity. Homeotropic anchoring conditions induce the formation of a highly-packed surface film, which may drive an anchoring transition to a tangential orientation and the formation of a large quasi-uniform nematic domain separated from the first layer by domain walls. Large $R/L$ ratios inhibit the anchoring transition and therefore the stabilisation of the N$_u$ phase, but the critical value of $R/L$ depends on the aspect ratio $L/D$. Our results emphasise the possibility of domain-wall formation in small confined systems as a mechanism to optimise the balance between surface and elastic free-energy contributions. It is interesting to compare our results with other studies. Dzubiella et al. \cite{Lowen} analysed a similar system using MC simulation.
They focused on moderate densities and obtained the N$_2$ phase. Galanis et al. \cite{Galanis} studied vibrated quasi-monolayers of rods in circular and square geometries, and observed the formation of nematic patterns. The patterns were seen to be well described by continuum elastic theory, but the particle configurations of the experiment seem to exhibit some evidence of domain walls not considered in that work. The formation of two $+1/2$ disclinations has also been predicted in active matter confined in cylindrical capillaries \cite{PhysRevLett.110.026001} and circular cavities \cite{PhysRevLett.109.168105}. Recently, liquid-crystalline ordering has been studied in square cavities using density-functional theory \cite{Miguel} and MC simulation \cite{Dani}. Domain walls can be stabilised in these systems. For some geometries the formation of domain walls may be a necessary requirement for the development of confined phases with spatial order (smectic or columnar) at higher densities. \section{Acknowledgments} We thank M. Schmidt, T. Geigenfeind, S. Rosenzweig and Y. Mart\'{\i}nez-Rat\'on for useful discussions and comments. E. V. acknowledges financial support from Programme MODELICO-CM/S2009ESP-1691 (Comunidad Aut\'onoma de Madrid, Spain), and FIS2010-22047-C01 (MINECO, Spain).
\section{Introduction} Wave turbulence is observed in the interaction of nonlinear dispersive waves in many physical processes; see the review \cite{newell} and references therein. Zakharov and Filonenko \cite{Zakh} proposed a theory of weak turbulence of capillary waves on the surface of a liquid. According to this theory, a stationary regime of wave turbulence with an energy spectrum, now called the Zakharov-Filonenko spectrum, is formed at the boundary of the liquid. To date, the theory of weak turbulence has been very well confirmed both experimentally \cite{mezhov1, mezhov2,falcon2007, falc_exp} and numerically \cite{Push, korot,pan14,falcon14}. Physical experiments \cite{falcon2, falcon3} carried out for magnetic fluids in a magnetic field showed that the external field can modify the turbulent spectrum of capillary waves. So far, there has been no theoretical explanation for this fact. In this paper, we consider the nonlinear dynamics of the free surface of a magnetic fluid in a horizontal magnetic field. Melcher \cite{melcher1961} has shown that the problem under study is mathematically completely equivalent to the problem of the dynamics of the free surface of a dielectric fluid in a horizontal electric field. For this reason, in this work we will use the previously obtained results for non-conducting liquids in an electric field. The dynamics of coherent structures (solitons or collapses) on the surface of liquids in a magnetic (electric) field has been very well studied (see, for example, \cite{koulova18,ferro10, tao18, zu2002, gao19}). At the same time, the turbulence of surface waves in an external electromagnetic field has not been theoretically investigated (except for our recent work \cite{ko_19_jetpl}). In this paper, we will show that at the free surface of a ferrofluid in a magnetic field, new wave turbulence spectra differing from the classical spectra for capillary and gravity waves can be realized. \section{Linear analysis} We consider a potential flow of an ideal incompressible ferrofluid with infinite depth and a free surface in a uniform horizontal external magnetic field. The fluid is dielectric, i.e., there are no free electrical currents in the liquid. Since the problem under consideration is anisotropic because of the distinguished direction of the magnetic field, we consider only plane-symmetric waves propagating in the direction parallel to the external field. Let the magnetic field induction vector be directed along the $x$ axis (correspondingly, the $y$ axis of the Cartesian coordinate system is perpendicular to it) and have the absolute value $B$. The shape of the boundary is described by the function $y=\eta(x,t)$; for the unperturbed state $y=0$. The dispersion relation for linear waves at the boundary of the liquid has the form \cite{melcher1961} \begin{equation}\label{disp}\omega^2=gk+\frac{\gamma(\mu)}{\rho}B^2k^2+\frac{\sigma}{\rho}k^3,\end{equation} where $\omega$ is the frequency, $g$ is the gravitational acceleration, $k$ is the wavenumber, $\gamma(\mu) =(\mu-1)^2 (\mu_0(\mu+1))^{-1}$ is an auxiliary coefficient, $\mu_0$ is the magnetic permeability of vacuum, $\mu$ and $\rho$ are the magnetic permeability and mass density of the liquid, respectively, and $\sigma$ is the surface tension coefficient. Let us estimate the characteristic physical scales in the problem under study. In the absence of an external field, the dispersion relation (\ref{disp}) describes the propagation of surface gravity-capillary waves.
Their minimum phase speed is determined by the formula: $v_{min}=(4 \sigma g / \rho)^{1/4}$. To obtain the characteristic magnetic field we need to equate $v_{min}^2$ to the coefficient of $k^2$ (which has the dimension of squared velocity) on the right-hand side of (\ref{disp}). Thus, the critical value of the magnetic field induction has the form \begin{equation}\label{field} B_c^2=\frac{2(\rho g\sigma)^{1/2}}{\gamma(\mu)}. \end{equation} The characteristic scales of length and time are \begin{equation}\label{scale} \lambda_0=2 \pi \left(\frac{\sigma}{g\rho}\right)^{1/2},\quad t_0=2 \pi \left(\frac{\sigma}{g^{3}\rho} \right)^{1/4}. \end{equation} Let us calculate the specific values of the introduced quantities for the liquid used in the experiments \cite{falcon2, falcon3}. Take the fluid parameters as follows: $$\rho=1324\, \mbox{kg/m}^3,\quad \sigma=0.059\, \mbox{N/m}, \quad \mu=1.69.$$ Substituting these parameters into the above formulas, we obtain estimates for the characteristic quantities in the problem under study: $\lambda_0\approx 1.3$ cm, $t_0\approx0.1$ s, and $B_c\approx 196$~G. It should be noted that the critical value of the magnetic field decreases with increasing magnetic permeability of the liquid. For a liquid with $\mu=10$, it is estimated as the relatively small value $B_c\approx 30$ G. Further in the work, it will be demonstrated that the magnetic wave turbulence can develop at the boundary of a ferrofluid with high magnetic permeability in the following field range: $2\leq B/B_c \leq 6$, i.e., the maximum value of $B_{max}$ used in the work is near 200 G. Note that in the case of a strong magnetic field, the dispersion relation (\ref{disp}) must be modified taking into account the magnetization curve, as was done in \cite{zel69}. Let us rewrite the expression for the magnetization $M(H)$ of a colloidal ferrofluid composed of particles of one size \cite{rosen87}: $$M(H)/M_{st}=\left(\coth \theta-1/\theta\right)\equiv L(\theta),\,\,\,\theta=\frac{\pi \mu_0M_d D^3H}{6 k_B T},$$ where $L(\theta)$ is the Langevin function, $D$ is the particle diameter, $M_{st}$ is the saturation magnetization, $M_d$ is the domain magnetization of the particles, $k_B$ is Boltzmann's constant, and $T$ is the absolute temperature. For small values of the external field, the Langevin function can be approximated by the linear dependence $L(\theta)\approx \theta/3$. Hence, in such a situation, the fluid magnetization is linearly related to the magnetic field strength $M=\chi_i H$, where $\chi_i$ is the initial magnetic susceptibility defined as \begin{equation}\label{chi}\chi_i=\frac{\pi \mu_0M_d^2 D^3}{18 k_B T}. \end{equation} Here we took into account that $M_{st}=\phi M_d$, where $\phi$ is the volume fraction of the ferromagnetic particles in the liquid. The formula (\ref{chi}) gives a relatively large value, $\chi_i\approx 9.5$, for a fluid consisting of magnetite particles ($M_d\approx 4.46 \cdot 10^5$ A/m) with the characteristic size $D=22$ nm, and $T=300$~K. We now estimate the characteristic field strength $H_0$ for which the magnetization curve begins to deviate from the linear law $\theta/3$. For the value $\theta(H_0)=1$, the relative deviation between $L(\theta(H_0))$ and $\theta(H_0)/3$ is near 6$\%$, so this equality can be used as a criterion of the characteristic field.
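The characteristic values quoted above are easy to reproduce numerically (a minimal sketch; we assume $g=9.81$ m/s$^2$, and the variable names are ours):
\begin{verbatim}
import numpy as np

mu0, g, kB = 4*np.pi*1e-7, 9.81, 1.380649e-23

# fluid of Refs. [falcon2, falcon3]
rho, sigma, mu = 1324.0, 0.059, 1.69
gamma = (mu - 1)**2/(mu0*(mu + 1))

lam0 = 2*np.pi*np.sqrt(sigma/(g*rho))          # ~1.3e-2 m  (about 1.3 cm)
t0   = 2*np.pi*(sigma/(g**3*rho))**0.25        # ~9.3e-2 s  (about 0.1 s)
Bc   = np.sqrt(2*np.sqrt(rho*g*sigma)/gamma)   # ~2.0e-2 T  (about 200 G)

# high-permeability fluid, mu = 10
gamma10 = (10 - 1)**2/(mu0*(10 + 1))
Bc10 = np.sqrt(2*np.sqrt(rho*g*sigma)/gamma10) # ~3.1e-3 T  (about 30 G)

# deviation of the Langevin function from theta/3 at theta = 1
theta = 1.0
L = 1.0/np.tanh(theta) - 1.0/theta             # ~0.313
dev = (theta/3 - L)/(theta/3)                  # ~0.06, i.e. near 6%
\end{verbatim}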
From the definition of $\theta$, we obtain the expression for the characteristic magnetic field strength: \begin{equation}\label{field2}H_0=\frac{6k_BT}{\pi \mu_0 M_d D^3}.\end{equation} For a fluid with magnetic permeability $\mu\approx 10$, the field strength is estimated as $H_0\approx 1.3\cdot 10^3$ A/m (taking the fluid parameters $D=22$ nm, $M_d=4.46\cdot10^5$~A/m, $T=300$ K). At the same time, the critical magnetic field defined from (\ref{field}) should be near $H_c=B_c/\mu\mu_0\approx 0.2\cdot 10^3$~A/m, which is much less than the characteristic field (\ref{field2}). The maximum value of the magnetic field used in this work can be estimated as $H_{max}=B_{max}/\mu\mu_0\approx 1.5 \cdot 10^3$~A/m; this value is close to $H_0$. Thus, for the maximum magnetic field, the magnetization curve will differ from the linear dependence. Quantitatively, this difference is estimated at around 10$\%$, which is a relatively small value. For this reason, further in the work, we will assume that the magnetization of the ferrofluid depends linearly on the magnetic field strength. \section{Turbulence spectra for dispersionless waves} The dispersion law (\ref{disp}) describes three types of surface waves: gravity, capillary, and magnetic ones. The magnetic surface waves are most interesting for us in this work. In contrast to the gravity and capillary waves, such waves propagate without dispersion. Indeed, in the following range of wavenumbers $k_{gm}\ll k\ll k_{mc}$, where $k_{gm}=g \rho /\gamma B^2$, and $k_{mc}=\gamma B^2/\sigma$, the dispersion law (\ref{disp}) has the simple form \begin{equation}\label{lindisp} \omega^2=v_A^2 k^2,\qquad v_A^2=\frac{\gamma(\mu)B^2}{\rho}. \end{equation} The wavenumber $k_{gm}$ is transitional between the gravity and the magnetic waves, and $k_{mc}$ separates the magnetic waves from the capillary ones. In the limit of a strong field $B\gg B_c$, and high magnetic permeability $\mu\gg 1$, Zubarev \cite {zu2004, zuzu2006, zu2009} has found exact particular solutions of the full equations of magnetic hydrodynamics in the form of nonlinear surface waves propagating without distortion along or against the direction of the external horizontal magnetic field. In fact, the solutions obtained are a complete analogue of Alfv\'en waves in a perfectly conducting fluid, which can propagate without distortion along the direction of the external magnetic field. The interaction is possible only between oppositely propagating waves, and it is elastic \cite{mhd0}. Surface waves in the high magnetic field regime studied in this work have the same properties \cite{zubkoch14}. The classical result in the study of wave magnetohydrodynamic (MHD) turbulence is the Iroshnikov-Kraichnan spectrum \cite{irosh,kraich}. According to the phenomenological theory of Iroshnikov and Kraichnan, the turbulent spectrum for fluctuations of the local magnetic field $\delta B_k$ and the fluid velocity $\delta V_k$ has the form: \begin{equation}\label{IK1} |\delta B_k|^2\sim|\delta V_k|^2\sim (SV_A)^{1/2}k^{-1/2}, \end{equation} where $V_A=(\mu_0\rho)^{-1/2}B$ is the Alfv\'en speed, $B$ is the magnetic field induction inside the fluid, and $S$ is the rate of energy dissipation per unit mass. Note that in such a model of turbulence, the fluctuations of velocity and magnetic field should be small: $\delta V_k\ll V_A$, $\delta B_k\ll B$.
According to (\ref{IK1}), the turbulence spectrum for the spectral density of the system energy has the form: \begin{equation}\label{enIK} \varepsilon_k \sim (S V_A) k^{-3/2}. \end{equation} The spectrum (\ref{IK1}) is written in terms of fluctuations of the velocity and the magnetic field in a liquid in 3D geometry; formally, we can obtain its analogue for the quantities $\eta$ and $\psi$ (the value of the velocity potential at the boundary of the liquid) used in this work. To do this, let us introduce the perturbations of velocity $ \delta v_k $ and magnetic field induction $\delta b_k$ at the fluid boundary $y=\eta$. From the dimensional analysis ($\delta v_k \sim k\psi_k $, $\delta b_k \sim B k \eta_k $) and the dispersion relation $\omega_k \sim k$, in the strong field limit, one can obtain the spectra: \begin{equation}\label{IK2} |\eta_k|^2\sim|\psi_k|^2\sim k^{-5/2},\quad |\eta_\omega|^2\sim|\psi_\omega|^2\sim \omega^{-5/2}. \end{equation} In our recent work \cite{ko_19_jetpl}, it was observed that the slope of the spectrum for the surface elevation in $k$-space is close to $-2.5$. But the analysis of the spectrum for the quantity $\psi(k,\omega)$ was not carried out. In the present work, we will examine the realizability of the spectrum (\ref{IK2}) in detail. The Iroshnikov-Kraichnan energy spectrum (\ref{enIK}) can be obtained with the help of the dimensional analysis of the weak turbulence spectra \cite{naz2003}. The weak turbulence spectrum for the energy density of a wave system with the linear dispersion law ($\omega_k\sim k$) like (\ref{lindisp}) and quadratic nonlinearity (three-wave interactions) can be written as follows (for more details see Nazarenko's book \cite{naz2011}): \begin{equation}\label{energy} \varepsilon_k\sim k^{\frac{1}{2}(d-6)},\qquad k=|\textbf{k}|, \end{equation} where $d$ is the dimension of space. It can be seen that the spectrum (\ref{enIK}) is a particular case of (\ref{energy}) for $d=3$. The spectrum (\ref{energy}) also describes the energy distribution of the acoustic wave turbulence \cite{zakh70,efimov18}. The spectral density of the system energy for our problem is related to the surface elevation spectrum as follows: $\varepsilon_k\sim \omega_k |\eta_k|^2$. From this expression and the energy spectrum (\ref{energy}) we can obtain dimensional estimates for the turbulence spectra in terms of $\eta (k,\omega)$ and $\psi (k,\omega)$: \begin{equation}\label{sp2} |\eta_k|^2\sim |\psi_k|^2\sim k^{-3}, \,\, |\eta_\omega|^2\sim|\psi_\omega|^2\sim \omega^{-3},\, d=2, \end{equation} \begin{equation}\label{sp3} |\eta_k|^2\sim |\psi_k|^2\sim k^{-7/2}, \,\, |\eta_\omega|^2\sim|\psi_\omega|^2\sim \omega^{-7/2},\, d=1. \end{equation} The spectrum (\ref{sp2}) was first obtained by Falcon in \cite{falcon3} for a normal magnetic field; in \cite{falcon2} it was shown that, in a tangential field, the spectrum index shifts to the region of higher values. Note that dimensional estimates for the capillary turbulence spectrum can also be obtained in one-dimensional geometry. This derivation is formal, since the capillary waves do not satisfy the conditions of three-wave resonances in 1D geometry; nevertheless, it will be useful to have these estimates for comparison with the results of our numerical simulation. Skipping the details, we write out the surface spectrum for the one-dimensional capillary wave turbulence \cite{naz2011}: \begin{equation}\label{ZF} |\eta_k|^2\sim k^{-17/4},\qquad |\eta_{\omega}|^2\sim\omega^{-19/6}.
\end{equation} It can be seen that these relations are quite different from those obtained for pure magnetic surface waves (\ref{IK2}), (\ref{sp2}), and (\ref{sp3}). The main purpose of this work is to find out which of the spectra will be closest to the one observed in direct numerical simulation. \section{Results of numerical simulation} Our numerical model is based on the weakly nonlinear approximation, in which the angles of boundary inclination are small, $\alpha = |\eta_x| \ll 1$. We consider a liquid with high magnetic permeability $\mu \gg 1$, i.e., the surface waves have properties similar to those of Alfv\'en waves. For further analysis, it is convenient to introduce the dimensionless variables $$\eta\to \eta \cdot \lambda_0,\quad x\to x\cdot \lambda_0,\quad t\to t \cdot t_0,\quad \psi\to \psi \cdot \lambda_0^2/t_0,$$ where $\lambda_0$, $t_0$ are the characteristic values of length and time (\ref{scale}). It is convenient to introduce the dimensionless parameter $\beta$ defining the magnetic field induction as follows: $\beta=\sqrt{2}B/B_c$, where $B_c$ is defined by (\ref{field}), i.e., if $B=B_c$ then $\beta^2=2$. Below, we consider the region of magnetocapillary waves $\beta^2+k\gg 1/k$, i.e., the wavelengths for which the effect of the gravitational force can be neglected. The dispersion relation (\ref{disp}) can be represented in the dimensionless form \begin{equation}\label{disp2}\omega^2=\beta^2k^2+k^3.\end{equation} According to (\ref{disp2}), the linear surface waves are divided into two types: low-frequency magnetic ($k\ll k_c$, $\omega\ll \omega_c$) and high-frequency capillary ($k\gg k_c$, $\omega\gg\omega_c$) waves, where $k_c$ and $\omega_c$ are the crossover wavenumber and frequency defined as $$k_c=\beta^2,\qquad \omega_c=\sqrt{2}\beta^3.$$ The equations of the boundary motion up to the quadratically nonlinear terms were first obtained by Zubarev in \cite{zu2004}; they can be represented in the form $$\psi_t=\eta_{xx}+\frac{1}{2}\left[\beta^2[(\hat k \eta)^2-(\eta_x)^2] +(\hat k \psi)^2-(\psi_x)^2\right]$$ \begin{equation}\label{eq1}+\beta^2 \left[-\hat k \eta +\hat k(\eta\hat k \eta)+\partial_x(\eta \eta_x)\right]+\hat D_k \psi,\end{equation} \begin{equation}\label{eq2}\eta_t=\hat k \psi- \hat k(\eta \hat k \psi)-\partial_x(\eta \psi_x)+\hat D_k \eta,\end{equation} where $\hat k$ is the integral operator having the form: $\hat k f_k=|k| f_k$ in the Fourier representation. The operator $\hat D_k$ describes viscosity and is defined in the $k$-space as $$\hat D_k=-\nu (|k|-|k_d|)^2, \quad |k|\geq |k_d|;\quad \hat D_k=0,\quad |k|< |k_d|.$$ Here, $\nu$ is a constant, and $k_d$ is the wavenumber determining the spatial scale at which the energy dissipation occurs. Equations (\ref{eq1}) and (\ref{eq2}) are Hamiltonian and can be derived as the variational derivatives $$\frac{\partial \psi}{\partial t}=-\frac{\delta \mathcal{H}}{\delta \eta},\qquad \frac{\partial \eta}{\partial t}=\frac{\delta \mathcal{H}}{\delta \psi}.$$ Here, $$\mathcal{H}=\mathcal{H}_0+\mathcal{H}_1=\frac{1}{2}\int \left[\psi \hat k \psi+\beta^2 \eta \hat k \eta+(\eta_x)^2 \right]dx-$$ $$-\frac{1}{2}\int \eta\left[(\hat k \psi)^2-(\psi_x)^2+\beta^2[(\hat k \eta)^2-(\eta_x)^2]\right]dx,$$ is the Hamiltonian of the system specifying the total energy. The terms $\mathcal{H}_0$ and $\mathcal{H}_1$ correspond to the linear and quadratic nonlinear terms in (\ref{eq1}) and (\ref{eq2}), respectively.
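For concreteness, the right-hand sides of Eqs. (\ref{eq1}) and (\ref{eq2}) can be evaluated with a standard FFT-based pseudo-spectral scheme along the following lines (a minimal illustrative sketch rather than the code actually used; a $2\pi$-periodic domain with integer wavenumbers is assumed, and all names are ours):
\begin{verbatim}
import numpy as np

Nx, nu, kd, beta2 = 1024, 10.0, 340, 10.0      # parameters as in the text

k    = np.fft.fftfreq(Nx, d=1.0/Nx)            # integer wavenumbers
absk = np.abs(k)
Dk   = np.where(absk >= kd, -nu*(absk - kd)**2, 0.0)   # operator \hat D_k

def k_op(f):   # \hat k f : multiplication by |k| in Fourier space
    return np.fft.ifft(absk*np.fft.fft(f)).real

def dx(f):     # \partial_x f
    return np.fft.ifft(1j*k*np.fft.fft(f)).real

def diss(f):   # \hat D_k f
    return np.fft.ifft(Dk*np.fft.fft(f)).real

def rhs(eta, psi):
    keta, kpsi   = k_op(eta), k_op(psi)
    eta_x, psi_x = dx(eta), dx(psi)
    psi_t = (dx(eta_x)
             + 0.5*(beta2*(keta**2 - eta_x**2) + kpsi**2 - psi_x**2)
             + beta2*(-keta + k_op(eta*keta) + dx(eta*eta_x))
             + diss(psi))
    eta_t = kpsi - k_op(eta*kpsi) - dx(eta*psi_x) + diss(eta)
    return eta_t, psi_t
\end{verbatim}
Time integration of $(\eta,\psi)$ with these right-hand sides by an explicit fourth-order Runge--Kutta scheme then reproduces the numerical setup described in the following paragraphs.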
The spectra (\ref{IK2}), (\ref{sp2}), and (\ref{sp3}) are obtained in the limit of an infinitely strong magnetic field, $\beta\gg1 $. It should be noted that this regime cannot be reached by formally neglecting the capillary pressure in equations (\ref{eq1})-(\ref{eq2}). In the absence of surface tension, the interaction of counter-propagating waves results in the appearance of singular points at the boundary, at which the curvature of the surface increases infinitely \cite{kochzub18,koch18,ko_19_pjtp}. For the realization of the regime of magnetic wave turbulence on the fluid surface, it is necessary to take into account the effect of capillary forces. It immediately follows from this fact that the weak turbulence spectra obtained in the formal limit of a strong field will be distorted in the region of dispersive capillary waves for $k\geq k_c$. In the current work, it will be shown that the turbulence spectrum of the surface waves is divided into two regions: a low-frequency one, in which the calculated spectrum is close to the 1D spectrum (\ref{sp3}), and a high-frequency one, in which the capillary forces deform the magnetic wave turbulence spectrum. Let us proceed to the description of the results of our numerical experiments. To minimize the effect of coherent structures (collapses or solitons), the initial conditions for (\ref{eq1}) and (\ref{eq2}) are taken in the form of two counter-propagating interacting wavepackets: \begin{equation}\label{IC}\eta_1(x)=\sum \limits_{i=1}^{4}a_i\cos(k_i x),\quad\eta_2(x)=\sum \limits_{i=1}^{4}b_i\cos(p_i x), \end{equation} $$\eta(x,0)=\eta_1+\eta_2,\qquad \psi(x,0)=\beta(\hat H \eta_1-\hat H \eta_2),$$ where $a_i$, $b_i$ are the wave amplitudes (chosen at random), $k_i$, $p_i$ are the wavenumbers, and $\hat H$ is the Hilbert transform defined in $k$-space as $\hat H f_k=i \mbox{sign}(k) f_k$. The spatial derivatives and integral operators were calculated using pseudo-spectral methods with a total of $N$ harmonics, and the time integration was performed by the fourth-order explicit Runge-Kutta method with the time step $dt$. The model did not involve mechanical pumping of energy into the system. Hence, the average steepness of the boundary $ \overline\alpha$ was determined only by the initial conditions (\ref{IC}). The calculations were performed with the parameters $N=1024$, $dt=5\cdot 10^{-5}$, $\nu=10$, $k_d=340$. To stabilize the numerical scheme, the amplitudes of higher harmonics with $k\geq412$ were set to zero at each time step. In the current work, we present the results of four numerical experiments carried out with different values of $\beta$ and, hence, different $B/B_c$; the parameters used are listed in Table~1. From Table~1, we can see that as the field increases, the nonlinearity level (the average steepness) required for the realization of a direct energy cascade decreases. Apparently, this effect is related to satisfying the conditions of three-wave resonances: \begin{equation}\label{reson}\omega=\omega_1+\omega_2,\qquad k=k_1+k_2.\end{equation} The conditions (\ref{reson}) are satisfied for any waves described by the linear dispersion law (\ref{lindisp}). For waves from the high-frequency region, the capillary term in (\ref{disp2}) forbids the three-wave resonance in one-dimensional geometry. But quasi-resonances are still possible, and the probability of their realization is higher for stronger magnetic fields.
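The absence of exact triads in the capillary-dominated range can be seen directly from (\ref{disp2}): $\omega(k)=k\sqrt{\beta^2+k}$ is a strictly convex, increasing function of $k>0$ with $\omega(0)=0$, so $\omega(k_1+k_2)>\omega(k_1)+\omega(k_2)$ for co-propagating waves, while for counter-propagating waves $\omega(|k_1+k_2|)<\omega(|k_1|)+\omega(|k_2|)$; only the purely linear law $\omega=\beta k$ turns the first inequality into the equality required by (\ref{reson}).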
It should be noted that three-wave resonances can be achieved in 1D geometry in the absence of an external field for gravity-capillary waves near the minimum of their phase speed \cite{nonlocal15}. The lower threshold of the parameter $\beta$ required for the turbulence development is determined by the criterion of applicability of the weakly nonlinear approximation ($\alpha\ll1$). The upper threshold of the field is set by the tendency to form strong discontinuities, which can correspond to the appearance of vertical liquid jets \cite{kochzub18j} and to the formation of a regime of strong turbulence generated by singularities \cite{kuz2004}. The range of variation of the amplitudes $a_i$, $b_i$ in the initial conditions (\ref{IC}) was chosen empirically to minimize the deviation of the model from the weakly nonlinear regime. The wavenumbers $k_i$, $p_i$ were chosen in such a way that the energy exchange between nonlinear waves was most intense. For all numerical experiments, the set of wavenumbers in (\ref{IC}) is the same and is presented in Table~2. \begin{table} \caption{The parameters of the four numerical experiments presented in the work: $\beta^2$ is the auxiliary dimensionless parameter, $B/B_c$ is the corresponding dimensionless magnetic field induction, $\overline\alpha$ is the averaged steepness of the surface, $\omega_c$ is the crossover frequency. } \begin{center} \begin{tabular}{ccccc} \hline $\beta^2$ & 10 & 30& 50 & 70 \\ \hline $B/B_c$ & 2.24 & 3.87& 5.00 & 5.92 \\ \hline $\overline\alpha$ & 0.15 & 0.12& 0.10 & 0.09\\ \hline $\omega_c$ & 44.72 & 232.38 & 500.00 & 828.25\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{The set of wavenumbers used in the initial conditions (\ref{IC}): $i$ is the summation index, $k_i$ and $p_i$ are the wavenumbers of the wavepackets traveling to the left and to the right, respectively.} \begin{center} \begin{tabular}{ccccc} \hline i & 1& 2 & 3 & 4 \\ \hline $k_i$ & 3 & 5 & 7 &9\\ \hline $p_i$ & 2 & 4 & 6 &8\\ \hline \end{tabular} \end{center} \end{table} Figure 1 shows the calculated energy dissipation rate $s=|d\mathcal {H}/dt|$ as a function of time for different values of the magnetic field induction. It can be seen that the system under study reaches the regime of quasi-stationary energy dissipation on times of the order of $10^3 t_0$. In this mode, the probability density functions for the angles of boundary inclination become very close to Gaussian distributions (see Fig. 2). This behavior indicates the absence of strong space-time correlations and, consequently, the formation of a Kolmogorov-like spectrum of wave turbulence. \begin{figure} \center{\includegraphics[width=1\linewidth]{fig1.eps}} \caption{\small The energy dissipation rate versus time for different values of the magnetic field.} \label{buble} \end{figure} \begin{figure} \center{\includegraphics[width=1\linewidth]{fig2.eps}} \caption{\small Probability density functions (\emph{p.d.f.}) for the angles of the boundary inclination for different values of $B/B_c$; black dotted lines correspond to Gaussian distributions.} \label{buble} \end{figure} Figure 3 shows the time-averaged spectra of the surface $I_{\eta}(k)=\overline{|\eta_k|}^2$ and the velocity potential perturbations $I_{\psi}(k)=\overline{|\psi_k|}^2$ for $B/B_c=2.24$. As can be seen, the inertial range of the wavenumbers is split into two regions. In the first region, at small $k$, the spectra are in relatively good agreement with the 1D spectrum (\ref{sp3}).
In the second region, at high $k$, where capillarity dominates, the spectra have a different power law: \begin{equation}\label{IK3}I_{\eta}(k)\sim k^{-5/2}, \quad I_{\psi}(k)\sim k^{-3/2}.\end{equation} It is interesting that the transition between the two spectra occurs at higher wavenumbers than $k_c$. Since the level of nonlinearity in this case is quite large, the observed effect can be associated with an increase in the magnetic field at steep surface inhomogeneities. Apparently, the local intensification of the magnetic field can lead to a shift of $k_c$. \begin{figure} \center{\includegraphics[width=1\linewidth]{fig3.eps}} \caption{\small Time averaged spectra $I_{\eta}(k)$ (blue line) and $I_{\psi}(k)$ (green line) for $B/B_c=2.24$; the black dotted lines correspond to the 1D spectrum (\ref{sp3}), red dotted lines are the best power-law fit (\ref{IK3}) of the calculated spectra, the vertical black dotted line shows the crossover wavenumber $k_c$.} \label{buble} \end{figure} The spectrum (\ref{IK3}) for the surface elevation coincides with the Iroshnikov-Kraichnan one (\ref{IK2}), but the spectrum for the velocity potential does not. Thus, the spectrum observed at high $k$ is not the MHD turbulence spectrum (\ref{IK2}), as was suggested in our previous work \cite{ko_19_jetpl}. At the same time, the spectrum (\ref{IK3}) does not coincide with the Zakharov-Filonenko spectrum (\ref{ZF}) for pure capillary waves. Let us consider this spectrum in the frequency domain $\omega$. We can empirically rewrite the spectra (\ref{IK3}) in terms of $\omega$ using the dispersion relation $k\sim \omega^{2/3}$ for $k\gg k_c$: \begin{equation}\label{IK4}I_{\eta}(\omega)\sim \omega^{-5/3}, \quad I_{\psi}(\omega)\sim \omega^{-1}. \end{equation} Figure 4 shows how the spectra $I_{\eta} (\omega)$ and $I_{\psi} (\omega) $ change as the field induction increases. From Fig.~4(a), it can be seen that for a relatively weak magnetic field, the spectrum is mainly determined by the relation (\ref{IK4}). As the field increases, the region of magnetic turbulence expands; see Fig.~4(b) and (c). For the maximum magnetic field $B/B_c\approx 5.92$, the capillary waves shift to the region of viscous dissipation and make almost no contribution to the energy spectrum of the turbulence; see Fig.~4(d). The crossover frequencies are in good agreement with $\omega_c$, except in the first case, where the level of nonlinearity is too high. In general, the calculated spectrum of turbulence in the low-frequency region is in good agreement with the spectrum (\ref{sp3}) obtained from the dimensional analysis \cite{naz2003,naz2011}. \begin{figure} \center{\includegraphics[width=1\linewidth]{fig4.eps}} \caption{\small Time averaged spectra $I_{\eta}(\omega)$ (blue lines) and $I_{\psi}(\omega)$ (green lines) for different values of $B/B_c$: (a) 2.24, (b) 3.87, (c) 5.00, (d) 5.92. The black dotted lines correspond to the 1D spectrum (\ref{sp3}), red dotted lines are the best power-law fit (\ref{IK4}) of the calculated spectra, the vertical black dotted lines show the crossover frequencies $\omega_c$.} \label{buble} \end{figure} \section{Conclusion} Thus, in the present work, a numerical study of the wave turbulence of the surface of a magnetic fluid in a horizontal magnetic field has been carried out within the framework of a one-dimensional weakly nonlinear model that takes into account the effects of capillarity and viscosity.
The results show that the spectrum of turbulence is divided into two regions: a low-frequency (\emph{i}) and a high-frequency (\emph{ii}) one. In region (\emph{i}), magnetic wave turbulence is realized. The power-law spectrum of the surface elevation has the same exponent in the $k$ and $\omega$ domains and is close to the value $-3.5$, which is in good agreement with the estimate (\ref{sp3}) obtained from the dimensional analysis of the weak turbulence spectra. In the high-frequency region (\emph {ii}), where the capillary forces dominate, the spatial spectrum of the surface waves is close to $k^{-5/2}$, which corresponds to $\omega^{-5/3}$ in terms of the frequency. This spectrum does not coincide with the spectrum (\ref{ZF}) for pure capillary waves. A possible explanation of this fact is that three-wave interactions for the capillary waves are forbidden in 1D geometry and this power-law spectrum can be generated by coherent structures (like shock fronts) arising in the regime of a strong field \cite{kochzub18,koch18,ko_19_pjtp}. It is well known that collapses and turbulence can coexist in one-dimensional models of wave turbulence \cite{MMT15,MMT17}. In conclusion, we note that the results obtained in the work are in qualitative agreement with the experimental studies \cite{falcon2, falcon3}, in which it is shown that the external magnetic field can deform the Zakharov-Filonenko spectrum for capillary turbulence. The quantitative discrepancy may be due to the one-dimensional geometry and to the relatively high magnetic field, which leads to a nonlinear magnetization curve that is not taken into account in the current work. \section*{Acknowledgments} I am deeply grateful to N.M. Zubarev, A.I. Dyachenko, and N.B. Volkov for stimulating discussions. This work is supported by Russian Science Foundation project No. 19-71-00003. \bibliographystyle{abbrv}
\section{Introduction} In recent years, tremendous advances in the study of algebraic $\K$-theory have been facilitated by the trace method approach, in which algebraic $\K$-theory is approximated by invariants from homological algebra and their topological analogues. For a ring $R$, the Dennis trace map relates its algebraic $\K$-theory to its Hochschild homology, \[ \K_q(R) \to \HH_q(R). \] Classical Hochschild homology of algebras has a topological analogue. This topological Hochschild homology, $\THH$, plays a key role in the trace method approach to algebraic K-theory. Indeed, for a ring (or ring spectrum) $R$ there is a trace map from algebraic K-theory to topological Hochschild homology, lifting the classical Dennis trace, \[ \K_q(R) \to \pi_q\THH(R). \] Topological Hochschild homology is an $S^1$-equivariant spectrum, and by understanding the equivariant structure of THH one can define topological cyclic homology (see, e.g. \cite{BHM93}, \cite{NS18}), which is often a close approximation to algebraic $\K$-theory. Real algebraic $\K$-theory, $\KR$, defined by Hesselholt and Madsen \cite{HM15}, is an invariant of a ring (or ring spectrum) with anti-involution. It is a generalization of Karoubi's Hermitian $\K$-theory \cite{Karoubi}, and an analogue of Atiyah's topological $\K$-theory with reality \cite{Atiyah}. Real topological Hochschild homology, $\THR(A)$, is an $O(2)$-equivariant spectrum that receives a trace map from Real algebraic $\K$-theory \cite{HM15, Dotto12, Hog16}. Just as topological Hochschild homology is essential to the trace method approach to algebraic $\K$-theory, $\THR$ is essential to computing Real algebraic $\K$-theory. In Hill, Hopkins, and Ravenel's solution to the Kervaire invariant one problem \cite{HHR}, they developed the theory of a multiplicative norm functor $N_H^G$ from $H$-spectra to $G$-spectra, for $H \subset G$ finite groups. In the case of non-finite compact Lie groups, however, norms are currently only accessible in a few specific cases. In \cite{ABGHLM18} the authors extend the norm construction to the circle group $\mathbb{T}$, defining the equivariant norm $N_e^{\mathbb{T}}(R)$ for an associative ring spectrum $R$. They further show that this equivariant norm is a model for topological Hochschild homology. In the present paper, we extend the norm construction to an equivariant norm from $D_2$ to $O(2)$, using the dihedral bar construction. We then show that for a ring spectrum with anti-involution $A$, the equivariant norm $N_{D_2}^{O(2)}(A)$ is a model for the Real topological Hochschild homology of $A$. Before defining the norm, we introduce some notation. We write $B^{\di}_{\bullet}(A)$ for the dihedral bar construction on $A$, as defined in Section \ref{sec:dihedralbar}. Let $\cU$ denote a complete $O(2)$-universe, and let $\cV$ be the complete $D_{2}$-universe constructed by restricting $\cU$ to $D_{2}$. We write $\widetilde{\mathcal{V}}$ for the $O(2)$-universe associated to $\mathcal{V}$ by inflation along the determinant homomorphism. The input for Real topological Hochschild homology is a ring spectrum with anti-involution, $(A,\omega)$. Ring spectra with anti-involution can be alternatively described as algebras in $D_2$-spectra over an $E_{\sigma}$-operad, where $\sigma$ is the sign representation of $D_2$, the cyclic group of order 2. The model structure $\Assoc_{\sigma}(\Sp_{\cV}^{D_2})$ on such $E_{\sigma}$-algebras is defined in Proposition \ref{model structure 2}. 
\begin{defin}We define the functor \[N_{D_2}^{O(2)} \colon \thinspace \Assoc_{\sigma}(\Sp_{\cV}^{D_2})\longrightarrow \Sp _{\cU}^{O(2)} \] to be the composite functor \[ A \mapsto \cI_{\widetilde{\cV}}^{\cU}| B_{\bullet}^{\di} (A) |. \] Here $\cI_{\widetilde{\cV}}^{\cU}$ denotes the change of universe functor. \end{defin} We then prove that this functor satisfies one of the fundamental properties of equivariant norms: in the commutative setting it is left adjoint to restriction. Let $\cR$ denote the family of subgroups of $O(2)$ which intersect $\mathbb{T}$ trivially, and let $\Sp ^{O(2),\mathcal{R}}_{\cU}$ denote the $\cR$-model structure on genuine $O(2)$-spectra, as defined in Definition \ref{def: model structure}. \begin{thm}\label{main thm 1} The restriction \[ N_{D_2}^{O(2)} \colon\thinspace \Comm(\Sp_{\cV}^{D_2})\to \Comm(\Sp ^{O(2),\mathcal{R}}_{\cU})\] of the norm functor $N_{D_2}^{O(2)}$ to genuine commutative $D_2$-ring spectra is left Quillen adjoint to the restriction functor $\iota_{D_2}^*$. \end{thm} This equivariant norm is a model for Real topological Hochschild homology. \begin{prop} Given a flat $E_{\sigma}$-ring $A$ in $\Sp^{D_2}_{\cV}$, there is a natural zig-zag of $\cR$-equivalences \[ N_{D_2}^{O(2)}A\simeq \THR(A)\] of $O(2)$-orthogonal spectra. This is also a zig-zag of $\cF_\text{Fin}^{\mathbb{T}}$-equivalences, where $\cF_\text{Fin}^{\mathbb{T}}$ is the family of finite subgroups of $\mathbb{T}\subset O(2)$. \end{prop} This result extends a comparison from \cite{DMPPR21} of the dihedral bar construction and Real topological Hochschild homology as $D_2$-spectra. In \cite{DMPPR21} the authors also prove that for a flat ring spectrum with anti-involution $(R, \omega)$, there is a stable equivalence of $D_2$-spectra \[ \iota^*_{D_2}\THR(R) \simeq R \wedge^{\mathbb{L}}_{N_e^{D_2} \iota_e^* R} R. \] We generalize this result by proving a multiplicative double coset formula for the norm $N_{D_2}^{O(2)}$. \begin{thm}[Multiplicative Double Coset Formula]\label{Introthm:doublecoset} Let $\zeta$ denote the $2m$-th root of unity $e^{2\pi i/2m}$. When $R$ is a flat $E_{\sigma}$-ring and $m$ is a positive integer, there is a stable equivalence of $D_{2m}$-spectra \[ \iota_{D_{2m}}^*N_{D_2}^{O(2)}R \simeq N_{D_2}^{D_{2m}}R \wedge^{\mathbb{L}}_{N_e^{D_{2m}}\iota_e^*R} N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R. \] \end{thm} Ordinary topological Hochschild homology is the topological analogue of the classical algebraic theory of Hochschild homology. Indeed, for a ring spectrum $R$, the two theories are related by a linearization map \[ \pi_k\THH(R) \to \HH_k(\pi_0R), \] which is an isomorphism in degree 0. It is natural to ask, then, what is the algebraic analogue of Real topological Hochschild homology? In this paper we define such an analogue: a theory of Real Hochschild homology for rings with anti-involution, or more generally for discrete $E_{\sigma}$-rings. A discrete $E_{\sigma}$-ring is a type of $D_2$-Mackey functor that arises as the algebraic analogue of $E_{\sigma}$-rings in spectra (see Definition \ref{defn:discreteEsigma}). Indeed, if $R$ is an $E_{\sigma}$-ring in spectra, $\m{\pi}_0^{D_2}(R)$ is a discrete $E_{\sigma}$-ring. The definition of Real Hochschild homology follows naturally from the double coset formula above. \begin{defin} The \emph{Real $D_{2m}$-Hochschild homology} of a discrete $E_{\sigma}$-ring $\underline{M}$ is the graded $D_{2m}$-Mackey functor \[ \HR_*^{D_{2m}}(\m{M}) = H_*\left ( \HR_{\bullet}^{D_{2m}}(\m{M}) \right ).
\] where \[ \HR_{\bullet}^{D_{2m}}(\m{M}) =B_{\bullet}(N_{D_2}^{D_{2m}}\m{M},N_{e}^{D_{2m}}\iota_e^*\m{M}, N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M} ).\] \end{defin} This theory of Real Hochschild homology is computable using homological algebra for Mackey functors. We prove that Real topological Hochschild homology and Real Hochschild homology are related by a linearization map. \begin{thm} For any $(-1)$-connected $E_\sigma$-ring $A$, we have a natural homomorphism \[ \m{\pi}_k^{D_{2m}} \THR(A) \longrightarrow \HR_k^{D_{2m}}(\m{\pi}_0^{D_2}A). \] which is an isomorphism when $k=0$. \end{thm} This relationship facilitates the computation of Real topological Hochschild homology. As an example, we compute the degree zero $D_{2m}$-Mackey functor homotopy groups of $\THR(H\m{\bZ})$, for odd $m$. \begin{thm} Let $m\ge 1$ be an odd integer. There is an isomorphism of $D_{2m}$-Mackey functors \[ \underline{\pi}_0^{D_{2m}}\THR(H\m{\bZ})\cong \m{A}^{D_{2m}}/(2-[D_{2m}/\mu_{m}])\] where $\m{A}^{D_{2m}}$ is the Burnside Mackey functor for the dihedral group $D_{2m}$ of order $2m$ and we quotient by the congruence relation generated by $2-[D_{2m}/\mu_{m}]$. \end{thm} When restricted to $\underline{\pi}_0^{D_{2}}(\THR(H\m{\bZ})^{D_{2p^k}}),$ this computation recovers a computation of \cite{DMP19}, proven by different methods. As part of the above computation, we do some of the first calculations to appear in the literature of dihedral norms for Mackey functors. In doing so, we establish a Tambara reciprocity formula for sums for general finite groups. \begin{thm}[Tambara reciprocity for sums] Let \(G\) be a finite group and \(H\) a subgroup, and let \(\m{R}\) be a \(G\)-Tambara functor. For each \(F\in\Map^H\big(G,\{a,b\}\big)\), let \(K_F\) be the stabilizer of \(F\). Then for any \(a,b\in\m{R}(G/H)\), we have \begin{multline*} N_H^G(a+b)=\\ \sum_{[F]\in\Map^H(G,\{a,b\})/G} tr_{K_F}^G\left( \prod_{[\gamma]\in K_F\backslash G\slash H}\!\! N_{K_F\cap (\gamma^{-1}H\gamma)}^{K_F}\Big(\gamma res_{H\cap (\gamma K\gamma^{-1})}^{H}\big(F(\gamma^{-1})\big)\Big) \right) \end{multline*} \end{thm} In the case of dihedral groups, this leads to a very explicit formula for Tambara reciprocity for sums (see Lemma \ref{lem:reciprocity}), facilitating the computation of dihedral norms. Another important aspect of the theory of Real Hochschild homology is that it leads to a definition of Witt vectors for rings with anti-involution, or more generally for discrete $E_{\sigma}$-rings. Classically, Witt vectors are closely related to topological Hochschild homology. Indeed, in \cite{HM97} Hesselholt and Madsen show that for a commutative ring $R$ \[ \pi_0(\THH(R)^{C_{p^n}}) \cong \W_{n+1}(R;p) \] where $\W_{n+1}(R;p)$ denotes the length $n+1$ $p$-typical Witt vectors of $R$. Further, in \cite{Hes97}, Hesselholt generalized the theory of Witt vectors to non-commutative rings and showed that for any associative ring $R$ there is a relationship between Witt vectors and topological cyclic homology, \[ \TC_{-1}(R;p) \cong \W(R;p)_F. \] Here $W(R)_F$ denotes the coinvariants of the Frobenius endomorphism on the $p$-typical Witt vectors $W(R;p)$. In the current work we consider the analogous results for rings with anti-involution. We prove that there is a type of Real cyclotomic structure on Real Hochschild homology, and use this to define Witt vectors of rings with anti-involution $R$, denoted $\m{\mathbb{W}}(R;p)$, (Definition \ref{def: Witt vectors}). 
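For orientation, the first of these classical statements specializes as follows: for $R=\mathbb{F}_p$ one has $\W_{n+1}(\mathbb{F}_p;p)\cong \mathbb{Z}/p^{n+1}$, so the isomorphism above recovers $\pi_0(\THH(\mathbb{F}_p)^{C_{p^n}})\cong \mathbb{Z}/p^{n+1}$.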
As a consequence of this work, we show that $\m{\pi}_{-1}\TCR(R)$ can be described purely in terms of equivariant homological algebra. \begin{thm} Let $A$ be an $E_{\sigma}$-ring and assume $R^1\lim_{k}\pi_0^{C_2}\THR(A)^{\mu_{p^k}}=0$. There is an isomorphism \[ \m{\pi}_{-1}\TCR(A;p)\cong \m{\mathbb{W}}(\m{\pi}_0^{D_2}A;p)_{F},\] where $\m{\mathbb{W}}(\m{\pi}_0^{D_2}A;p)_{F}$ is the coinvariants of an operator \[ F\colon \thinspace \m{\mathbb{W}}(\m{\pi}_0^{D_2}A;p)\longrightarrow \m{\mathbb{W}}(\m{\pi}_0^{D_2}A;p). \] \end{thm} \subsection{Organization} In Section \ref{prelims}, we recall the theory of dihedral and cyclic objects, and dihedral subdivision. In particular, we offer the perspective that dihedral objects are Real cyclic objects (see Definition \ref{def:RealCyclic}). In Section \ref{sec: thr}, we recall the definition of $E_{\sigma}$-rings, the dihedral bar construction, Real topological Hochschild homology, our model categorical framework, and the notion of (genuine) Real $p$-cyclotomic spectra. The main results begin in Section \ref{sec: norm}, where we prove that the dihedral bar construction can be regarded as a norm in Theorem \ref{main norm thm}. To do this we first give background on equivariant orthogonal spectra and extend the comparison between the dihedral bar construction and the B\"okstedt model of Real topological Hochschild homology from \cite{DMPPR21} to an identification as $O(2)$-spectra. In Section \ref{mdcf}, we prove a multiplicative double coset formula for Real topological Hochschild homology (see Theorem \ref{thm:doublecoset}). In Section \ref{sec: HR}, we develop the theory of Real Hochschild homology and show that it is the algebraic analogue of Real topological Hochschild homology. In Section \ref{sec: witt} we define Witt vectors for rings with anti-involution. We then prove an identification of the lowest nontrivial homotopy group of $\TCR$ with the coinvariants of an operator on our theory of Witt vectors for rings with anti-involution in Theorem \ref{TCR thm}. Finally, in Section \ref{sec: computations}, we end with a Tambara reciprocity formula for dihedral groups, computations of norms for dihedral groups, and a computation of the degree zero $D_{2m}$-Mackey functor homotopy groups of $\THR(\m{\mathbb{Z}})$ for an odd integer $m\geq 1$. \subsection{Conventions} Let $\Top$ denote the category of based compactly generated weak Hausdorff spaces. We refer to objects in $\Top$ as spaces and morphisms in $\Top$ as maps of spaces. Let $G$ be a compact Lie group. Then $\Top^{G}$ denotes the category of based $G$-spaces and based $G$-equivariant maps of spaces. When regarded as a $\Top$-enriched category, we do not distinguish between them notationally. Let $\Top_{G}$ denote the $\Top^{G}$-enriched category with the same objects as $\Top^{G}$ and the mapping spaces given by all maps of $G$-spaces with $G$-action given by conjugation. For a $G$-universe $\cU$, let $\Sp_{\cU}^{G}$ be the category of orthogonal $G$-spectra indexed on $\cU$ \cite[II. 2.6]{MM02}. \subsection{Acknowledgements} The first author would like to thank Vincent Boelens, Emanuele Dotto, Irakli Patchkoria, and Holger Reich for helpful conversations. The second author would like to thank Chloe Lewis for helpful conversations. The second author was supported by NSF grants DMS-1810575, DMS-2104233, and DMS-2052042. The third author was supported by NSF grants DMS-2105019 and DMS-2052702.
Some of this work was done while the second author was in residence at the Mathematical Sciences Research Institute in Berkeley, CA (supported by the National Science Foundation under grant DMS-1440140) during the Spring 2020 semester. \section{Dihedral Objects}\label{prelims} We begin by fixing some conventions. Let $O(2)$ denote the compact Lie group of two-by-two orthogonal matrices. The determinant map determines an extension \[ 1\longrightarrow \mathbb{T} \longrightarrow O(2) \overset{\operatorname{det}}{\longrightarrow} \{-1,1\} \longrightarrow 1\] of groups. We choose a splitting by sending $-1$ to the matrix $\tau := ( \tiny\begin{array}{cc} 0& 1 \\ 1 & 0\normalsize \end{array} )$ and write $D_2$ for the subgroup of $O(2)$ generated by $\tau$. We then write $\mu_m\subset \bT$ for the subgroup of $m$-th roots of unity generated by $\zeta_m=e^{2\pi i/m}$. Finally, we fix a presentation $D_{2m}=\langle \tau, \zeta_m \mid \tau^2=\zeta_m^m=(\tau\zeta_m)^2 = 1\rangle$ for the dihedral group $D_{2m}$ of order $2m$, regarded as a subgroup of $\mathbb{T}\ltimes D_2=O(2)$. \subsection{Dihedral objects and cyclic objects} Let $\Delta$ denote the category with objects the totally ordered sets $[n]=\{0<1<\dots <n\}$ of cardinality $n+1$ and maps of finite sets preserving the total order. A simplicial object in a category $\cC$ is a functor $X_{\bullet} \colon\thinspace \Delta^{\text{op}}\to \cC$; equivalently, it consists of an object $X_n$ in $\cC$ for each $n\ge 0$ and maps \begin{align*} d_i\colon\thinspace & X_{n}\longrightarrow X_{n-1} \text{ and}\\ s_i\colon\thinspace & X_n\longrightarrow X_{n+1} \end{align*} for each $0\le i\le n$ and $n\ge 0$ satisfying the simplicial identities \begin{align}\label{eq: simplicial identities} d_id_j=d_{j-1}d_i & \text{ if } i<j& d_is_j=s_{j-1}d_i & \text{ if } i<j & d_{j+1}s_j=1 \\ \nonumber d_is_j=s_jd_{i-1} & \text{ if }i>j+1 & s_is_j=s_{j+1}s_i &\text{ if } i\le j& d_js_j=1. \end{align} \begin{defin} A \emph{cyclic object} in a category $\cC$ is a simplicial object $X_{\bullet}$ in $\cC$ together with a $\mu_{n+1}$-action on $X_n$ given by maps \begin{align*} t_{n} \colon \thinspace X_n \to X_n \end{align*} satisfying the relations \begin{align} \label{cyclic cat face maps} d_{i}t_{n}=t_{n-1}d_{i-1} & \text{ for } 1\le i \le n & d_0t_{n}=d_{n}\\ \label{cyclic cat deg maps} s_{i}t_{n}=t_{n+1}s_{i-1} & \text{ for } 1\le i \le n & s_0t_{n}=t_{n+1}^2s_n \end{align} in addition to the simplicial identities \eqref{eq: simplicial identities}. \end{defin} \begin{defin} A \emph{dihedral object} in a category $\cC$ is a simplicial object in $\cC$ together with a $D_{2(n+1)}$-action on $X_n$ given on generators $t_{n}$ and $\omega_{n}$ of $D_{2(n+1)}$ by structure maps \begin{align*} t_n: X_n \to X_n \hspace{.2cm} \textup{ and }\hspace{.3cm} \omega_{n}\colon\thinspace X_n\to X_n \end{align*} satisfying the usual dihedral group relations \begin{align*} \omega_{n}t_{n}=t_{n}^{-1}\omega_{n} \end{align*} as well as the relations \eqref{cyclic cat face maps} and \eqref{cyclic cat deg maps} and the additional relations \begin{align} \label{dih cat face maps}d_i\omega_{n}=\omega_{n-1}d_{n-i} \text{ for } 0\le i \le n\\ \label{dih cat deg maps} s_i\omega_{n}=\omega_{n+1}s_{n-i}\text{ for } 0\le i \le n. \end{align} \end{defin} We will also use the notion of a Real simplicial object, which we define after recording a basic example of a dihedral object. 
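\begin{exm} A standard example, which we will return to in Section \ref{sec:dihedralbar}, is the dihedral bar construction of a monoid $M$ with anti-involution $\tau$ in a symmetric monoidal category $(\cC,\otimes)$: its $n$-simplices are $M^{\otimes (n+1)}$, the operator $t_n$ acts by cyclically permuting the tensor factors, and, written informally in element notation, the involution is \[ \omega_n(m_0\otimes m_1\otimes \dots \otimes m_n)=\tau(m_0)\otimes \tau(m_n)\otimes \dots \otimes \tau(m_1). \] \end{exm}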
\begin{defin}\label{def: real simplicial set} A \emph{Real simplicial object} $X$ is a simplicial object $X\colon \thinspace \Delta^{\op}\to \mathcal{C}$ together with maps $\omega_n \colon\thinspace X_n\to X_n$ for each $n\ge 0$ satisfying the relations \eqref{dih cat deg maps}, \eqref{dih cat face maps} and $\omega_n^2=\id_{X_n}$ . \end{defin} \begin{rem} The definition of the cyclic category $\Lambda$ \cite[\S 2]{Connes83} and the dihedral category $\Xi$ \cite[Definition 3.1]{Loday87} are standard, so we omit them. It will be useful to recall that a cyclic object in a category $\cC$ is a functor $\Lambda^{\op}\longrightarrow \cC$ and a dihedral object in a category $\cC$ is a functor $\Xi^{\op}\longrightarrow \cC$. \end{rem} Let $\Delta \mathfrak{G}\in \{\Lambda,\Xi\}$ and let $X\colon \thinspace \Delta \mathfrak{G}^{\op} \to \cC$ be a functor. We can then produce a simplicial object $\iota^*X$ in $\cC$ by precomposition with the functor $\iota\colon \Delta \to \Delta \mathfrak{G}$. \begin{thm}[Thm. 5.3 \cite{FL91}]\label{action on realization} Let $\Delta\mathfrak{G}\in \{\Lambda,\Xi\}$. Write $\mathfrak{G}_n=\Aut_{\Delta \mathfrak{G}}([n])$. Then $\mathfrak{G}_{\bullet}$ is a simplicial set and the geometric realization of $\mathfrak{G}_{\bullet}$ has a natural structure of a topological group $\fG=| \mathfrak{G}_{\bullet}|$. Let $ X_{\bullet}\colon\thinspace \Delta \fG^{\op}\to \Top$ be a functor. Then $|\iota^*X_{\bullet}|$ has a natural continuous left $\fG$-action. \end{thm} \begin{cor} The realization of a cyclic space has a natural $\bT$-action and the realization of a dihedral space has a natural $O(2)$-action. \end{cor} \subsection{Dihedral subdivision}\label{subdivision} We now recall two types of subdivision of a simplicial sets: the Segal-Quillen subdivision \cite{Seg73} denoted $\sq$ and the edgewise subdivision \cite{BHM93} denoted $\text{sd}_r$. Let $\operatorname{sq}\colon\thinspace \Delta^{\op} \to \Delta^{\op}$ be the functor defined on objects by $ [n] \mapsto [n] \star [n] = [2n+1]$ and on morphisms by $ \alpha \mapsto \alpha \star \omega (\alpha)$ where $\alpha \colon\thinspace [n]\to [k]$. Here $ \omega(\alpha)(i)=k-\alpha(n-i),$ and $\star$ denotes the join. Given a simplicial set $Y_{\bullet}$, we define its Segal--Quillen subdivision $\operatorname{sq}Y_{\bullet}$ by precomposition of the simplicial set with the functor $\operatorname{sq}$. The induced map $|\operatorname{sq}Y_{\bullet} | \cong |Y_{\bullet}| $ is a canonical homeomorphism, which is $D_2$-equivariant when $Y_{\bullet}=\iota^*X_{\bullet}$ is the restriction of a dihedral set (or simply a Real simplicial set) to $\Delta^{\op}$. Moreover, when $X_{\bullet}$ is a dihedral set (or Real simplicial set) and $Y_{\bullet}=\iota^*X_{\bullet}$ the simplicial set $\operatorname{sq}(Y_{\bullet})$ is equipped with a \emph{simplicial} $D_2$-action such that there is a canonical homeomorphism $|(\operatorname{sq}Y_{\bullet})^{D_2}|\cong |Y_{\bullet}|^{D_2}.$ We now recall the $r$th edgewise subdivision. Let $\operatorname{sd}_r \colon\thinspace \Delta^{\op} \to \Delta^{\op} $ be the functor defined by $[n-1] \mapsto [nr-1] $ on objects and by $f \mapsto f^{\sqcup r}$ on morphisms. Letting $Y_{\bullet}$ be a simplicial set, we can form $\operatorname{sd}_rY_{\bullet}$ by precomposition with the functor $\operatorname{sd}_r$. 
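Concretely, precomposition gives \[ (\operatorname{sd}_rY_{\bullet})_{n-1}=Y_{rn-1}, \] so that, for example, $\operatorname{sd}_2Y_{\bullet}$ has $n$-simplices $Y_{2n+1}$, just as $\operatorname{sq}Y_{\bullet}$ does (though with different structure maps).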
The induced map is a canonical homeomorphism $|\operatorname{sd}_rY_{\bullet}|\cong |Y_{\bullet}| $ and it is $\mu_r$-equivariant when $Y_{\bullet}=\iota^*X_{\bullet}$ is the restriction of a dihedral set (or simply a cyclic set) to $\Delta^{\op}$. Moreover, $\operatorname{sd}_r(Y_{\bullet})$ has a simplicial $\mu_r$-action and there is a canonical homeomorphism $ |(\operatorname{sd}_rY_{\bullet})^{\mu_r}|\cong |Y_{\bullet}|^{\mu_r} $ when $X_{\bullet}$ is a cyclic or dihedral set. Now, given a simplicial set $Y_{\bullet}$, we define \[ \operatorname{sd}_{D_{2r}}Y_{\bullet}:=\operatorname{sq}\operatorname{sd}_{r}Y_{\bullet}\] following \cite{Spa00}. The induced map \[ |\operatorname{sd}_{D_{2r}}Y_{\bullet}|\cong |Y_{\bullet}|\] is a homeomorphism and it is $D_{2r}$-equivariant when $Y_{\bullet}=\iota^*X_{\bullet}$ is the restriction of a dihedral set $X_{\bullet}$ to $\Delta^{\op}$. Moreover, $\operatorname{sd}_{D_{2r}}Y_{\bullet}$ has a simplicial $D_{2r}$-action and there is a canonical homeomorphism \[ |(\operatorname{sd}_{D_{2r}}Y_{\bullet})^{D_{2r}} |\cong |Y_{\bullet}|^{D_{2r}}.\] \subsection{Dihedral objects as Real cyclic objects}\label{sec:dihedral as D_2-diagrams} The goal of this section is to describe dihedral objects and their realizations in terms of $D_2$-diagrams indexed by the cyclic category, which we call \emph{Real cyclic objects}. This perspective will be used in Section \ref{sec: norm} in order to show how work of \cite{DMPPR21} extends from the Real simplicial setting to the dihedral setting. Let $BD_2$ be the small category with one object $*$ and morphisms set $BD_2(*,*)=D_2$. Consider a small category $I$ and a functor $b\colon\thinspace BD_2\to \Cat$ such that $b(*)=I$. This data is equivalent to specifying an involution $ b(\tau)\colon\thinspace I\to I$ such that $b(\tau)^2=b(\tau^2)=b(1)=\id_{I}$. We will write $\Cat^{BD_2}$ for the category of functors $BD_2\to \Cat$. \begin{exm}\label{Ex:Trivial} The terminal object in $\Cat^{BD_2}$ is defined by $\mathsf{1}(*)=[0]$ where we write $[0]$ for the category with a single object and a single morphism. The action $\mathsf{1}(\tau) \colon\thinspace [0] \to [0]$ is the identity functor. \end{exm} \begin{exm}\label{Ex:DElta} Let $\delta \colon\thinspace BD_2 \to \Cat $ be defined by $\delta(*)=\Delta$ and $\delta(\tau)$ is the functor $\delta(\tau) \colon\thinspace \Delta \to \Delta $ defined as the identity on objects and by $\left(\delta(\tau)(f)\right)(i)=k-f(n-i)$ for $f\in \Delta([n],[k])$ (cf. \cite[Ex. 1.2]{DMPPR21}). This induces an involution $\delta(\tau)^{\op} \colon \thinspace \Delta^{\op} \longrightarrow \Delta^{\op}$. \end{exm} \begin{exm}\label{Ex:Lambda} Let $\lambda \colon\thinspace BD_2 \to \Cat$ be defined by $\lambda(*)=\Lambda^{\op}$ and $\lambda(\tau) \colon\thinspace \Lambda^{\op} \to \Lambda^{\op}$ is defined as the identity on objects and by $\lambda(\tau)(f)=\delta(\tau)^{\op}(f)$ for $f\in \Delta^{\op} ([n],[k])$ and by $ \lambda(\tau)(t_{n+1}^k)=t_{n+1}^{-k} $ for $t_{n+1}\in \mu_{n+1}=\Aut_{\Lambda^{\op}}([n])$ the canonical generator. \end{exm} \begin{lem}[cf. {\cite[Prop. I.19, I.35]{DK15}}] \label{gro equiv} There is an isomorphism of categories \[\Xi^{op} \cong BD_2\smallint \lambda \] where $BD_2\int \lambda$ is the Grothendieck construction associated to the functor $\lambda$ defined in Example \ref{Ex:Lambda}. 
\end{lem} \begin{proof} This is straightforward as the objects and morphisms are clearly in one-to-one correspondence and there is an evident functor inducing this correspondence. \begin{comment} We first describe the Grothendieck construction $BD_2\wr a$. It has objects given by pairs $(*,[n]\in \Lambda)$. This is equivalent data to just specifying an object $[n]\in \Delta$, which in turn is the same data as specifying an object $[n]\in \Xi$. A morphism $(BD_2\wr a) ( [n],[m])$ is the data of a morphism $g\colon\thinspace * \to *$, or in other words an element $g\in D_2$, and a morphism $a(g)([n])\to [m]$ or in other words a morphism $[n]\to [m]$ in $\Lambda^{\op}$ precomposed with an extra automorphism $[n]\to [n]$ given by the action of $D_2$. This again is the same data as a morphism in $\Xi^{\op}$ by the fundamental axiom of a crossed simplicial group and the usual relations in a dihedral group. So the isomorphism of categories sends $ [n] \mapsto (*,[n]) $ and sends a morphism $[n]\overset{g}{\to}[n]\overset{f}{\to}[m]$ to the morphism $a(g)([n])\to [m]$ where $f\in \Lambda^{\op}([n],[m])$ and $g\in D_2$. It is clear that this is a bijection on objects and on morphisms. \end{comment} \end{proof} \begin{defin}[\cite{JS01, VF04, DM16}] For a category $\cC$, a \emph{$G$-diagram in $\cC$} consists of an object $b$ in $\Cat^{BG}$ with $b(*)=I$, a functor $X\colon\thinspace I\to \cC$ and natural transformations $g_X\colon\thinspace X\Rightarrow X\circ b(g)$ for each $g\in G$, compatible with the group structure in the sense that the composite natural transformation \[ X\overset{g_X}{\Longrightarrow} X\circ b(g) \overset{h_X\circ b(g)}{\Longrightarrow} \big(X\circ b(h)\big)\circ b(g)=X\circ b(hg), \] where $h_X\circ b(g)$ denotes the whiskering of $h_X$ with $b(g)$, is \((hg)_X\) for any $h,g\in G$. A morphism of $G$-diagrams is a natural transformation of functors from $I$ to $\cC$, $f: X \to Y$, compatible with all the structure. Following \cite{DM16}, we write $\cC^{I}_b$ for the category of $G$-diagrams in $\cC$ with respect to the functor $b\colon\thinspace BG\to\Cat$ with $b(*)=I$ and with morphisms the maps of $G$-diagrams. \end{defin} We record the following notation for later use. \begin{notation}\label{notation for colimit} Given a $G$-diagram $X\colon \thinspace I\longrightarrow \Sp$ in $\Sp$, we write $\underline{\hocolim}_{I}X$ and $\underline{\holim}_{I}X$ for the homotopy colimit and the homotopy limit in $\Sp^{G}$ as defined in \cite[Definition 1.16]{DM16}. \end{notation} \begin{defin}\label{def:RealCyclic} A \emph{Real cyclic} object in a category $\mathcal{C}$ is a functor \[ BD_2\smallint \lambda \longrightarrow \cC\] where $\lambda \colon \thinspace BD_2\longrightarrow \Cat$ is the functor in Example \ref{Ex:Lambda}. We write $\cC^{\Lambda^{\op}}_{\lambda}$ for the category of Real cyclic objects in $\cC$. \end{defin} We now identify the category of dihedral objects in $\cC$ and the category of Real cyclic objects in $\cC$. \begin{cor}\label{iso of models for dihedral objects} There is an isomorphism of categories \[ \cC^{\Xi^{\op}} \cong \cC^{\Lambda^{\op}}_\lambda\] where $\lambda \colon\thinspace BD_2 \to \Cat $ is defined as in Example \ref{Ex:Lambda}. \end{cor} \begin{proof} There is an isomorphism of categories \[ \cC^{ BD_2\int \lambda } \cong \cC^{\Xi^{\op}}\] induced by the isomorphism of categories of Lemma \ref{gro equiv}. 
Then by \cite[Lemma 1.9]{DM16}, there is an isomorphism of categories \[ \Phi^{\prime} \colon\thinspace \cC^{ BD_2\int \lambda } \cong \cC^{\Lambda^{\op}}_\lambda.\] Recall that this isomorphism sends an object $X$ in $\cC^{ BD_2\int \lambda}$ to the the object \[\Phi^{\prime}(X):=X |_{\Lambda^{op}}. \] Here $X |_{\Lambda^{op}}$ denotes the restriction of $X$ along the natural inclusion into the Grothendieck construction, $i \colon\thinspace \Lambda^{\op}\to BD_2\int \lambda$. For an element $g\in D_2$, the natural transformation \[g_{\Phi^{\prime}(X)} \colon\thinspace X |_{\Lambda^{op}} \to X |_{\Lambda^{op}} \circ \lambda(g) \] is defined at an object $[n]$ in $\Lambda^{\op}$ by \[ (g_{\Phi^{\prime}(X)})_{[n]}:=X(g,\id\colon\thinspace g [n] \to g[n])\] and all remaining conditions follow by functoriality. \end{proof} \begin{rem} There is also an equivalence of categories between the category of Real simplicial objects in $\mathcal{C}$ and $\mathcal{C}_{\delta}^{\Delta^{op}}$ where $\delta\colon \thinspace BD_2\to \Cat$ is defined in Example \ref{Ex:DElta}. This motivates the naming convention in Definition \ref{def:RealCyclic}. \end{rem} Recall from \cite{DM16,DMPPR21} that we can form the geometric realization of a $D_2$-diagram $\Delta^{\op}\to \Sp$ and this geometric realization has a genuine $D_2$-action. By abuse of notation, write $|X_{\bullet}|\in \Sp_{\cV}^{D_2}$ for the geometric realization of a $D_2$-diagram $X_{\bullet}:\Delta^{\op}\to \Sp$. Let \[ F\colon \thinspace \cC_{\delta}^{\Delta^{\op}}\longrightarrow \cC^{\Lambda^{\op}}_{\lambda} \] denote the left adjoint to the forgetful functor $\iota^* \colon \thinspace \cC^{\Lambda^{\op}}_{\lambda}\longrightarrow \cC_{\delta}^{\Delta^{\op}}$. \begin{lem}\label{free C2 cyclic object} Let $Y_{\bullet}\in\Set_{\delta}^{\Delta^{op}}$. Then \[ |\iota^*F(Y)| \cong \mathbb{T} \times |Y_{\bullet}| \] with diagonal $D_2$-action given by the action of $D_2$ on $\mathbb{T}$ by complex conjugation and the action of $D_2$ on $|Y_{\bullet}|$. \end{lem} \begin{proof} The proof is a straight forward generalization of the proof of the classical result (cf. \cite[Theorem 5.3 (iii)]{FL91}) so we omit it. \begin{comment} We first show that \[ F(Y)_n=C_{n+1}\times Y_n\] as a $D_2$-set with diagonal $D_2$-action where $D_2$ acts on $C_{n+1}$ by sending the generator $t_{n+1}$ to it's inverse $t_{n+1}^{-1}$ and by the Real simplicial structure on $Y_n$. By an explicit model for the left Kan extension, we know that $F(Y)_n$ is the left Kan extension of the functor \[ \Set_{a_0}^{*}\to \Set_{a^n}^{\iota \downarrow [n]}\] where $\iota \downarrow [n]$ is the comma category with objects pairs $([m], [m]\to [n])$ where $[n]$ is an object in $\Delta^{\op}$ and $[m]\to [n]$ is a morphism in $\Lambda^{\op}$ where we abuse notation and write $[m]$ for $\iota([m])$. Morphism in $\iota \downarrow [n]$ consist of a morphism $f\colon\thinspace [m] \to [m^{\prime}]$ in $\Delta^{\op}$ and a commuting triangle \[ \xymatrix{ [m] \ar[r] \ar[dr] & [m^\prime] \ar[d] \\ & [n] } \] where we again abuse notation and simply write $[m]\to [m^{\prime}]$ $\iota(f)$. Since $\iota$ is a bijection on objects and an injection on morphism sets, this abuse of notation should not cause confusion. 
The functor $a^n\colon\thinspace BD_2 \to \Cat$ is defined by letting $a^n(*)=\iota \downarrow [n]$ and by defining an involution \[ a^n(\alpha)\colon\thinspace \iota \downarrow [n]\to \iota \downarrow [n]\] explicitly on objects by \[ a^n(\alpha) ( [m], [m]\to [n])= (a^{\prime}([m]),\iota (a^{\prime}(\alpha)([m]))\to a(\alpha)([n]))=(a^{\prime}([m]),a(\alpha)([m]\to [n]))\] and on morphisms in the obvious way. Since, by definition of $\Lambda^{\op}$, $\Aut_{\Lambda^{op}}([n])=C_{n+1}$ with generator $t_n$. We regard $\Aut_{\Lambda^{op}}([n])=C_{n+1}$ as a discrete category with objects $\{1,t_{n+1},\dots ,t_{n+1}^n\}$. Then there is a functor \ \[ \Aut_{\Lambda^{\op}}([n]) \to (\iota \downarrow [n]) \] sending $t_n^k\colon\thinspace [n]\to [n]$ to $([n], t_n^k \colon\thinspace [n] \to [n])$ defined on morphisms by the obvious inclusion. We may also view $\Aut_{\Lambda^{\op}}([n])$ as a $D_2$-category with associated functor $a^{(n)}\colon\thinspace D_2 \to \Cat$ such that $a^{(n)}(*)=\Aut_{\Lambda^{\op}}([n])$ and the involution $a^{(n)}(\alpha)$ is defined by \[a^{(n)}(\alpha)(t_n)=a^{(n)}(\alpha)(t_n^{-1})\] on objects and on morphisms in the obvious way. There is therefore a functor of $D_2$-categories \[ \Aut_{\Lambda^{\op}}([n]) \to (\iota \downarrow [n])\] by sending $t_n^k\colon\thinspace [n]\to [n]$ to $([n], t_n^k\colon\thinspace [n]\to [n])$. By the fundamental axiom of a crossed simplicial group, any morphism $g$ of $\Lambda^{\op}$ can be factored as \[g\colon\thinspace [m] \overset{\iota(f)}{\longrightarrow} [n] \overset{t_n^k}{\longrightarrow} [n]\] where $f$ is a morphism in $\Delta^{\op}$. Therefore, $\Aut_{\Lambda^{\op}} ([n])$ is cofinal in $(\iota \downarrow [n])$ and this remains true when both are regarded as $D_2$-categories. Explicitly, for any object $([m],g\colon\thinspace[m]\to [n])$ there exists a $k$ such that $g\colon\thinspace [m]\overset{\iota(f)}{\longrightarrow} [n] \overset{t_n^k}{\longrightarrow} [n]$ where $f$ is a morphism in $\Delta^{\op}$ and consequently a map \[ (f\colon\thinspace [m]\to [n], \xymatrix{ [m]\ar[r]^{f} \ar[dr]_{g}& [m^{\prime}]\ar[d]^{t_n^k}\\ & [n] }) \] from $([m], g\colon\thinspace [m]\to [n])$ to some object $([n], t_n^k\colon\thinspace [n]\to [n])$ in the essential image of the functor \[ \Aut_{\Lambda^{\op}}([n])\to (\iota \downarrow [n]).\] We then compare the left Kan extensions \[ \m{\colim} \colon\thinspace \Set_{a^n}^{(\iota \downarrow [n])} \to \Set_{a_0} \] \begin{align}\label{a(n) left Kan} \m{\colim} \colon\thinspace \Set_{a^{(n)}}^{ (\Aut_{\Lambda^{\op}}) ([n])} \to \Set_{a^n}^{(\iota \downarrow [n])} \to \Set_{a_0} \end{align} and by cofinality these two left Kan extensions agree. The left Kan extension \eqref{a(n) left Kan} is equivalent to \[\coprod_{g\in C_{n+1}} Y_n\{g\}=C_{n+1}\times Y_n\] as a $D_2$-set where $D_2$ acts diagonally on $C_{n+1}\times Y_n$ in the desired way. It is routine and tedious to check compatibility with the face and degeneracy maps. so we get a Real simplicial (discrete) space \[ C_{\bullet+1}\times Y_{\bullet} \] and the $D_2$-colimit \[ \m{\colim} \colon\thinspace \Top^{\Delta^{\op}}_{a^{\prime}} \to \Top_{a_0}^{*}\cong \Top^{D_2}\] is equivalent to $\mathbb{T} \times \m{\colim}(Y_{\bullet})$ with diagonal $D_2$-action where $D_2$ acts on $\mathbb{T}$ by complex conjugation and on $|Y_{\bullet}|$ in the usual way that it acts on $\ucolim(Y_{\bullet})$. 
\end{comment} \end{proof} \begin{remark} Note that specifying an $O(2)$-action $ O(2) \times X\to X $ on a topological space $X$ is equivalent data to specifying a $D_2$-action on $X$ as well as a $\bT$-action $ \bT \times X\to X $ which is $D_2$-equivariant with respect to the diagonal action on the source, where $D_2$ acts on $\bT$ by complex conjugation. \end{remark} \begin{cor} The geometric realization $|\iota^*X_{\bullet}|$ of a Real cyclic set $X_{\bullet}$ has an $O(2)$-action. \end{cor} Note that we also know that the realization of an object $X_{\bullet}$ in $\Set^{\Lambda^{\op}}_{\lambda}$ has an $O(2)$-action by Corollary \ref{iso of models for dihedral objects}. From this perspective, however, it is clearer that the $O(2)$-action restricts to the $D_2$-action on $|\iota^*X_{\bullet}|$ from \cite{DMPPR21}. We use this perspective in Section \ref{sec: norm}. \section{Real topological Hochschild homology}\label{sec: thr} In this section, we recall the definition of Real topological Hochschild homology. To begin, we review the theory of $E_{\sigma}$-rings, which are the input for Real topological Hochschild homology. \subsection{Algebras over equivariant little disks}\label{sec:Esigmarings} Equivariant norms will play an important role here and in subsequent sections, so we begin by recalling their definition. \begin{defin}\label{def: norm} Let $G$ be a finite group and $H$ a subgroup of $G$ of index $n$. An ordered set of coset representatives of the $G$-set $G/H$ defines a group homomorphism $\alpha \colon G\to \Sigma_n\wr H$. Given such a choice of ordering, we define the indexed smash $\wedge_{H}^{G}$ as the composite \[ \wedge_{H}^G \colon \Sp^{BH}\overset{\wedge^n}{\to}\Sp^{B\Sigma_n\wr H}\overset{\alpha^*}{\to} \Sp^{BG} \] and we define the norm $N_{H}^{G}$ as the composite \[N_H^G =\cI_{\mathbb{R}^{\infty}}^{\cW}\circ \wedge_{H}^{G}\circ \cI_{\iota_H^*\cW}^{\mathbb{R}^{\infty}} \colon \thinspace \Sp_{\iota_H^*\cW}^{H}\to \Sp_{\cW}^{G}\] where $\cW$ is a complete $G$-universe and $\iota_H^*\cW$ is the restriction of $\cW$ to $H$. \end{defin} \begin{remark} One may also define norms more generally for finite $G$-sets as in \cite{BlHi15}. For example, when $X$ is a $D_2$-spectrum and $S$ is a finite $D_2$-set with orbit decomposition \[ S= (\coprod_{i\in I} D_2/e)\amalg (\coprod_{j\in J} D_2/D_2),\] then there is an isomorphism \[ N^{S}X\cong (\bigwedge_{i\in I} N_e^{D_2}\iota_e^*X)\wedge (\bigwedge_{j\in J} X)\] by \cite[Proposition 6.2]{BlHi15}. When there is no subscript in the notation of the norm, we will always mean the construction above. \end{remark} \begin{defin} Let $\text{\bf{n}}$ be the $D_2$-set $\{1,2,\ldots ,n\}$ with the generator $\tau$ of $D_2$ acting by $\tau(i)=n-i+1$. This produces a group homomorphism $D_2\to \Sigma_n$ and consequently a graph subgroup $\Gamma_n$ of $\Sigma_n\times D_2$, where a graph subgroup is a subgroup of $\Sigma_n\times D_2$ that intersects $\Sigma_n$ trivially. The $\Assoc_{\sigma}$-operad is an operad in $D_2$-spaces with $n$-th space \[ \Assoc_{\sigma,n}= (\Sigma_{n}\times D_2)/\Gamma_n.\] See \cite{Hil17} for the definition of the structure maps in this operad. \end{defin} \begin{defin} By an \emph{$E_{\sigma}$-operad}, we mean any operad $\mathcal{O}_{\sigma}$ in $\Top^{D_2}$ such that there is a map of operads in $D_2$-spaces \[ \mathcal{O}_{\sigma}\to \Assoc_{\sigma}\] inducing a weak equivalence $\mathcal{O}_{\sigma,n}\simeq (\Sigma_n\times D_2)/\Gamma_n$ in $\Top^{D_2}$ for each $n$. 
\end{defin} \begin{defin} Let $\mathcal{O}_{\sigma}$ be an $E_{\sigma}$-operad. Define a monad $\mathcal{P}_{\sigma}$ on $\Sp^{D_2}_{\cV}$ by the formula \[ \mathcal{P}_{\sigma}(-)=\coprod_{n\ge 1} \mathcal{O}_{\sigma,n}\otimes_{\Sigma_n}(-)^{\otimes n}.\] By an $E_{\sigma}$-ring, we mean an algebra in $\Sp^{D_2}_{\cV}$ over the monad $\cP_{\sigma}$ associated to the operad $\mathcal{O}_{\sigma}=\Assoc_{\sigma}$. \end{defin} \begin{prop} Let $X\in \Sp_{\cV}^{D_2}$. Then \[ \mathcal{P}_{\sigma}(X)=T(N^{D_2}X)\wedge (S\vee X).\] \end{prop} \begin{proof} This follows from the equivariant twisted James splitting (cf. \cite[Thm. 4.3]{Hil17}). Write $\bf{n}\rm=\{1,...,n\}$ for the $D_2$-set where the generator $\tau$ acts by $i\mapsto n+1-i$. There is an isomorphism \[ \bf{n}\rm \cong \begin{cases} D_2^{\amalg n/2} &\text{ if } n \text{ is even} \\ D_2 /D_2 \amalg D_2^{\amalg (n-1)/2} &\text{ if } n \text{ is odd} \end{cases} \] of $D_2$-sets. The action of $D_2$ on $\bf n \rm$ is a homomorphism $D_2\to \Sigma_n$ and we define the associated graph subgroup to be $\Gamma_n$. Thus, when $n$ is even \[ \cI_{\mathbb{R}^{\infty}}^{\cV}\left ((\Assoc_{\sigma ,n})_+\wedge_{\Sigma_n} (\cI_{\cV}^{\mathbb{R}^{\infty}}X)^{\wedge n} \right ) = \left (N^{D_2}(X)\right )^{\wedge n/2}\] and when $n$ is odd \[ \cI_{\mathbb{R}^{\infty}}^{\cV}\left ( (\Assoc_{\sigma ,n})_+ \wedge_{\Sigma_n} (\cI_{\cV}^{\mathbb{R}^{\infty}}X)^{\wedge n} \right )= X \wedge \left ( N^{D_2}(X) \right )^{ \wedge (n-1)/2} .\] The result then follows by regrouping smash factors according to the number of smash factors involving the norm. \end{proof} \begin{defin} By an $E_1$-ring $A$, we mean an algebra over the associative operad $\Assoc$ in the category of orthogonal spectra $\Sp$. By an $E_0$-$A$-algebra we mean an $A$-bimodule $M$ equipped with a map $A\to M$ of $A$-bimodules (cf. \cite[Rem. 2.1.3.10]{HA}). \end{defin} \begin{exm}\label{standard E0 ring} An $E_1$-ring $A$ is an $E_0$-$A\wedge A^{\op}$-algebra with unit map $\mu \colon \thinspace A\wedge A^{op}\to A$ given by multiplication and the bimodule structure given by \begin{align*} \varphi_L\colon \thinspace A\wedge A^{\op}\wedge A \overset{1\wedge B_{A^{\op}, A}}{\longrightarrow} A\wedge A\wedge A^{\op} \overset{\mu \circ (1\wedge \mu)}{\longrightarrow} A\\ a_1\wedge a_2\wedge a\mapsto a_1\cdot a\cdot a_2\\ \varphi_R\colon \thinspace A\wedge A\wedge A^{\op}\overset{B_{A,A}\wedge 1}{\longrightarrow} A\wedge A\wedge A^{\op} \overset{\mu \circ (1\wedge \mu)}{\longrightarrow} A\\ a\wedge a_1\wedge a_2 \mapsto a_1\cdot a\cdot a_2 \end{align*} which we call the \emph{standard structure} of an $E_0$-$A\wedge A^{\op}$-algebra on $A$. Here $B_{X,Y}\colon \thinspace X\wedge Y \to Y\wedge X$ denotes the natural braiding in the symmetric monoidal structure. \end{exm} \begin{remark}\label{rem: ring with anti-involution} If $R$ is an $E_{\sigma}$-ring, then $\iota_e^*R$ is an $E_1$-ring with anti-involution given by the action of the generator of the Weyl group $D_2$. Throughout, we will let $\tau \colon \thinspace \iota_e^*R^{\text{op}}\longrightarrow \iota_e^*R$ denote this anti-involution. 
\end{remark} \begin{cor} An $E_{\sigma}$-ring $R$ is exactly a $D_2$-spectrum $R$ such that \begin{enumerate} \item the spectrum $\iota_e^*R$ is an $E_1$-ring with anti-involution, always denoted $\tau \colon \thinspace \iota_e^*R^{\text{op}}\longrightarrow \iota_e^*R$, given by the action of the generator of Weyl group, \item the spectrum $R$ is an $E_0$-$N^{D_2}R$-algebra and applying $\iota_e^*$ to the $E_0$-$N^{D_2}R$-algebra structure map gives $\iota_e^*R$ the standard $E_0$-$\iota_e^*R\wedge \iota_e^*R^{\op}$-algebra structure. \end{enumerate} \end{cor} \begin{exm}\label{exm: involution implies sigma} Given an $E_1$-ring with anti-involution, regarding $R$ as an object in $\Sp_{\cV}^{D_2}$ produces an $E_\sigma$-ring structure on $R$. \end{exm} \begin{exm}\label{exm: commutative implies sigma} If $R$ is in $\Comm(\Sp_{\cV}^{D_2})$, then $R$ is an $E_\sigma$-ring. \end{exm} \subsection{The dihedral bar construction} \label{sec:dihedralbar} We first define the relevant input for Real topological Hochschild homology with coefficients. \begin{defin} Suppose $A$ is an $E_{\sigma}$-ring. As observed in Remark \ref{rem: ring with anti-involution}, the underlying spectrum $\iota_e^*A$ is an $E_1$-ring with anti-involution $\tau \colon\thinspace \iota_e^*A^{\op}\to \iota_e^*A$. An \emph{$A$-bimodule $M$ with involution $j$} is an $\iota_e^*A$-bimodule $M$ along with a map of $\iota_e^*A$-bimodules $j\colon \thinspace M^{\op} \to M$, such that $j^{\text{op}}\circ j= \id$. Here $M^{\op}$ is the $\iota_e^*A^{\op}$-bimodule with module structures given by: \begin{align*}\label{left module structure Mop} \iota_e^*A^{\op}\otimes M \overset{B_{\iota_e^*A^{\op}, M}}{\longrightarrow} M \otimes \iota_e^*A^{\op} \overset{1\otimes \tau}{\longrightarrow} M \otimes \iota_e^*A \overset{\psi_R}{\longrightarrow} M \end{align*} and \[ M\otimes \iota_e^*A^{\op} \overset{B_{M,\iota_e^*A^{\op}}}{\longrightarrow} \iota_e^*A^{\op} \otimes M \overset{\tau \otimes 1}{\longrightarrow} \iota_e^*A \otimes M \overset{\psi_L}{\longrightarrow} M \] where $\psi_L$ and $\psi_R$ are the bimodule structure maps of $M$ and $B_{-,-}$ is braiding in the symmetric monoidal structure on $\Sp_{\cV}^{D_2}$. \end{defin} \begin{remark} In the definition below, we will drop the notation $(-)^{\op}$ in the maps $ \tau$ and $j$ when we just consider the maps as maps in $\Sp$. \end{remark} \begin{defin}[Dihedral bar construction] The dihedral bar construction $B^{\di}_{\bullet}(A; M)$ has $k$-simplices \[ B^{\di}_{k}(A; M) = M \otimes A^{\otimes k}. \] It has the same structure maps as the cyclic bar construction with coefficients, $B^{\cy}_{\bullet}(A;M)$, with the addition of a levelwise involution $\omega_{k}$ acting on the $k$-simplices. To specify the levelwise involution, let $\bfk$ be the $D_2$-set $\{1,2,\ldots ,k\}$ with the generator $\alpha$ of $D_2$ acting by $\alpha(i)=k-i+1$ for $1\le i \le k$. Then we define the action of $\omega_{k}$ by the composite \[ \xymatrix{ \omega_{k} \colon\thinspace M \otimes A^{\bfk} \ar[rr]^{M\otimes A^{\alpha} } &&M \otimes A^{\bfk}\ar[rr]^{ j \otimes \tau \otimes \ldots \otimes \tau} && M \otimes A^{\bfk} } \] where $A^{\alpha}$ is the automorphism of $A^{\bfk}$ induced by $\alpha\colon\thinspace \bfk\to \bfk$. 
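Informally, in element notation the involution is given by \[ \omega_{k}(m\otimes a_1\otimes \dots \otimes a_k)=j(m)\otimes \tau(a_k)\otimes \dots \otimes \tau(a_1), \] that is, $\omega_k$ reverses the order of the $A$-factors and applies the involutions $j$ and $\tau$ factorwise.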
It is straightforward to check that the structure maps satisfy the usual simplicial identities (and cyclic identities when $M=A$), together with the additional relations \begin{align*} d_i\omega_{k}=&\omega_{k-1}d_{k-i} \text{ for } 0\le i \le k \\ s_i\omega_{k}=&\omega_{k+1}s_{k-i}\text{ for } 0\le i \le k \\ \omega_{k}t_{k}=&t_{k}^{-1}\omega_{k} \hspace{.5cm} \textup{when } M=A. \end{align*} Therefore, the dihedral bar construction is a Real simplicial object and, when $M=A$, it is a dihedral object. We define the dihedral bar construction of the pair $(A;M)$ by the formula \[ B^{\di}(A;M)=|B_{\bullet}^{\di}(A;M)|.\] \end{defin} \subsection{Real topological Hochschild homology} We now recall the B\"okstedt model for $\THR(A)$, following \cite{HM15,DMPPR21}. Let $\cI$ be the category with objects the finite sets $\mathrm{k}=\{1, 2, \ldots k\}$ and injective morphisms, where $0$ is the empty set. Equip this category with a $D_2$-action $ \tau \colon\thinspace \cI \to \cI $ where $\tau$ acts trivially on objects and for an injection $\alpha\colon\thinspace \mathrm{i} \longrightarrow \mathrm{j}$, we let \[\tau(\alpha)(s)=j-\alpha(i-s+1)+1,\] following \cite{HM15}. Observe that $\cI$ is a monoid in $\Cat$ with operation \[+ \colon \cI\times \cI \to \cI \] defined by sending $(\mathrm{i},\mathrm{j})$ to $\mathrm{i}+\mathrm{j}$ on objects and by sending a pair of morphisms $\alpha \colon \thinspace \mathrm{i} \to \mathrm{j}$, $\beta \colon \thinspace \mathrm{i}^{\prime} \to \mathrm{j}^{\prime}$ to \[ (\alpha +\beta)(s)= \begin{cases} \alpha(s) & \text{ if } 1 \le s\le i \\ \beta(s-i)+j & \text{ if } i < s \le i+i^{\prime} \end{cases} \] given by disjoint union and neutral element $0$. The unit of $ \mathcal{I}$ regarded as a monoid is therefore the functor $\eta \colon\thinspace [0]\to \cI $ that sends the unique object in the terminal category $[0]$ to $0$ and the unique morphism in the terminal category to the identity map $\id_{0} \colon\thinspace 0\longrightarrow 0$. The functor $\tau$ is strong monoidal with respect to this monoidal structure on the target and the opposite monoidal structure on the source. In other words, there is a strong monoidal functor \[ \tau \colon \thinspace \cI_{\rev}\to \cI \] where $\cI_{\rev}$ has the same underlying category $\cI$, but the monoidal structure is defined by \[ \cI\times \cI \overset{B_{\cI,\cI}}{\to} \cI \times \cI \overset{+}{\to} \cI\] where $B_{\cC,\cD}\colon \thinspace \cC\times \cD\to \cD\times \cC$ is the natural braiding in the symmetric monoidal category $\Cat$. We can therefore view $\cI$ as a monoid with anti-involution in $\Cat$ and let $B^{\di}_{\bullet}(\cI)$ be its associated dihedral Bar construction. We write $\cI^{1+\bf k \rm}$ for the $k$-simplices of $B^{\di}_{\bullet}(\cI)$ regarded as an object in $\Cat^{BD_{2(k+1)}}$. By Corollary \ref{iso of models for dihedral objects}, we may regard $B_{\bullet}^{\di}\cI$ as a $D_2$-diagram \[ B_{\bullet}^{\di}\cI\colon\thinspace \Lambda^{\op} \to \Cat \] and take the Grothendieck construction \cite[Def. 1.1]{Tho79} of this functor, denoted $\Lambda^{\op}\smallint B_{\bullet}^{\di}\cI.$ Let $A$ be an $E_{\sigma}$-ring and let $\bS$ be the $D_2$-equivariant sphere spectrum with $\mathbb{S}_k=S^k$. We write $[k]$ for an object in $\Lambda^{\op}$ and $\underline{j}=( j_0,\dots ,j_k)$ for an object in $\cI^{1+\bf k \rm}$. 
We define $D_2$-diagrams in spectra \[\Omega_{\cI}^{\bullet}(A;\bS)\colon\thinspace \Lambda^{\op}\smallint B^{\di}_{\bullet}\cI\to \Sp \] on objects, by $([k] ; \underline{j} )\mapsto \Omega^{j_0+ \dots +j_k}(\bS \wedge A_{j_0}\wedge \dots \wedge A_{j_k})$ as in \cite{PS16}. Given a symmetric monoidal category $(\cC,\otimes)$, let $\gamma_k$ denote the natural transformation between the functor $-\otimes\ldots \otimes - \colon\thinspace \cC^{k}\longrightarrow \cC$ and itself that reverses the order of the entries. We abuse notation and write $\gamma_k:=(\gamma_k)_{(A_{j_1}, \dots ,A_{j_k})}$ and $\gamma_{j_i}=(\gamma_{j_i})_{(S^1,\dots ,S^1)}$ in $\Top$. We also write $\gamma_{j_i}=A((\gamma_{j_i})_{(\mathbb{R},\dots ,\mathbb{R})})$, where $\gamma_{j_i}$ is defined on the symmetric monoidal category of orthogonal representations with respect to direct sum, for $1\le i\le k$. Let $W_k$ be the composite map \[ \xymatrix{ \mathbb{S}\wedge A_{j_0}\wedge A_{j_1}\wedge\dots \wedge A_{j_k} \ar[rr]^{\mathbb{S}\wedge \tau^{\wedge k+1}} \ar[ddrr]_{W_k} && \bS \wedge A_{j_0}\wedge A_{j_1}\wedge \dots \wedge A_{j_k} \ar[d]^{\mathbb{S}\wedge A_{j_0}\wedge \gamma_k} \\ && \bS \wedge A_{j_0}\wedge A_{j_k}\wedge \dots \wedge A_{j_1}\ar[d]^{\mathbb{S}\wedge \gamma_{j_0}\wedge \gamma_{j_k}\wedge \dots \wedge \gamma_{j_1}} \\ && \bS\wedge A_{j_0}\wedge A_{j_k}\wedge \dots \wedge A_{j_1} } \] of orthogonal spectra and let $G_k$ be the composite map \[ \xymatrix{ S^{j_0}\wedge S^{j_1}\wedge \dots S^{j_k}\ar[r]^{S^{j_0}\wedge \gamma_k} \ar[dr]_{G_k} & S^{j_0}\wedge S^{j_k}\wedge \dots S^{j_1}\ar[d]^{\gamma_{j_0}\wedge \gamma_{j_k}\wedge \dots \wedge \gamma_{j_1}} \\ & S^{j_0}\wedge S^{j_1}\wedge \dots S^{j_k} } \] of topological spaces. The diagram $\Omega_{\cI}^{\bullet}(A;\bS)([k] ;\underline{j}\rm)$ has $D_2$-action \begin{align*} \label{Bok defin} \Omega_{\cI}^{\bullet}(A;\bS)_{\underline{j}} \to \Omega_{\cI}^{\bullet}(A;\bS)_{\tau \underline{j}}, \end{align*} where $\tau \underline{j}=( j_0,j_k,\dots ,j_1)$, defined by sending a map \[f\colon S^{i_0+\dots +i_k}\to \mathbb{S}\wedge A_{i_0}\wedge \dots A_{i_k}\] to $W_k \circ f \circ G_k$. Note that there is a $\mu_{k+1}$-action on \[ \Omega^{j_0+ \dots +j_{k}}(\bS \wedge A_{j_0}\wedge \dots \wedge A_{j_{k}})\] by conjugation where we act on source and target of a loop by cyclic permutation of the indices $(j_0, \dots j_k)$. The composite \[ \cI^{1+\bf k \rm}\overset{\iota_k}{\longrightarrow} \Lambda^{\op}\smallint B^{\di}_{\bullet}\cI \overset{\Omega_{\cI}^{\bullet}(A;\bS)}{\longrightarrow} \Sp \] sending $\underline{j}$ to $\Omega_{\cI}^{\bullet}(A;\bS)([k],\underline{j}\rm)$ is in fact a $D_{2(k+1)}$-diagram in spectra. \begin{lem}\label{Bokstedt construction} Given a functor $X\colon \Lambda^{\op}\int B^{\di}_{\bullet}\cI\to \Sp$ and the canonical $D_{2(k+1)}$-equivariant functor \[ \iota_k\colon\thinspace \cI^{1+\bf k \rm}\longrightarrow \Lambda^{\op}\int B^{\di}_{\bullet}\cI ,\] the functor $\Lambda^{\op}\to \Sp^{C_{2}}$ defined on $k$-simplices by $D_{2(k+1)}$-spectra \[ \THR(X)_k:= \underset{\cI^{1+\bf k \rm}}{\underline{\hocolim}} \iota_k^*X,\] using Notation \ref{notation for colimit} defines a Real cyclic object in $\Sp$ (see Definition \ref{def:RealCyclic}). \end{lem} \begin{proof} By \cite[Prop. 
2.15]{DMPPR21}, we know that this construction defines a natural $D_2$-diagram $\Delta^{\op}\to \Sp$, so in order to elevate this to a natural $D_2$-diagram $\Lambda^{\op}\to \Sp$, it will suffices to describe the additional cyclic structure maps and their compatibility with the $D_2$-diagram structure $\Delta^{\op}\to \Sp$. The cyclic structure comes from cyclically permuting the coordinates in $\cI^{1+\bf k \rm}$ and acting via the $\mu_{k+1}$-action on $\iota_k^*X$. Then taking the homotopy colimit \[ \underset{\cI^{1+\bf k \rm}}{\m{\hocolim}}\iota_k^{*}X \] of the $D_{2(k+1)}$-diagram $\iota_k^{*}X$ in the sense of \cite[Definition 1.16]{DM16}, we have a $D_{2(k+1)}$-spectrum, which restricts to the $D_2$-spectrum defined in \cite[Prop. 2.15]{DMPPR21}. The restriction to the $\mu_{k+1}$-action produces the standard $\mu_{k+1}$-action on the B\"okstedt construction so this is compatible with the face and degeneracy maps. Since it is in fact the restriction of a $D_{2(k+1)}$-action, the compatibility of the cyclic structure maps and the additional dihedral structure maps is clear except for the compatibility of the additional dihedral structure maps with the face and degeneracy maps and this compatibility is exactly what is proven in \cite[Prop. 2.15]{DMPPR21}. \end{proof} \begin{defin}[Real topological Hochschild homology: B\"okstedt model] When $X=\Omega_{\cI}^{\bullet}(A;\bS)$ then the B\"okstedt model for Real topological Hochschild homology is defined as \[ \THR(A):= | \THR(\Omega_{\cI}^{\bullet}(A;\bS))_\bullet | \] and it is an $O(2)$-spectrum by Lemma \ref{free C2 cyclic object}. It also restricts to a genuine $H$-spectrum for each cyclic group $H$ of order $2$ such that $H\cap \mathbb{T}=e$. \end{defin} \subsection{Model structures}\label{sec: model structure} We now set up the basic model categorical constructions we need. We refer the reader to \cite{ABGHLM18}, \cite{MM02}, and \cite{DMPPR21} for a more thorough survey of the model categorical considerations used in this paper. First, we fix some notation from \cite{GM95}. Given a normal subgroup $N$ of a compact Lie group $G$, we write $\cF (N,G) = \{ H < G : H\cap N=e \}$ for the family of subgroups of $G$ that intersect $N$ trivially and write $\All=\cF (e,G)$ for the family of all subgroups of $G$. Following Hogenhaven \cite{Hog16}, we will write $\cR$ for the family $\cF(\mathbb{T},O(2))$. We also write $\cF[N]$ for the family of subgroups of $G$ that don't contain $N$. For any family $\cF$ of subgroups of $O(2)$, we say an $O(2)$-equivariant map is an $\cF$-equivalence if it induces isomorphisms on homotopy groups $\pi_*^H(-)$ for all $H$ in $\cF$. We write $\mathbb{R}^{\infty}$ for the trivial $G$-universe for any compact Lie group $G$. We fix a complete $O(2)$-universe $\cU$ throughout this section and let $\cV_n:=\iota_{D_{2n}}^*(\cU)$ be a complete $D_{2n}$-universe constructed as the restriction of $\cU$ to $D_{2n}$. When $n=1$, we simply write $\cV=\cV_1$. We note that the determinant homomorphism $O(2)\to \{-1,1\}$ has a (non-canonical) splitting producing $D_2$ as a subgroup of $O(2)$ (see Section \ref{prelims} for details on our choice of splitting). We write $\widetilde{\mathcal{V}}$ for the $O(2)$-universe associated to $\mathcal{V}$ by inflation along the determinant homomorphism. The following statement combines results from \cite[Appendix B]{HHR} (cf. \cite[Theorem 2.26,2.29]{ABGHLM18}). \begin{prop}\label{model structure} Let $G$ be a compact Lie group. 
There is a positive complete stable compactly generated model structure on orthogonal $G$-spectra indexed on a complete universe $\cU$, denoted $\Sp_{\cU}^{G,\cF}$, where the weak equivalences are the \emph{$\cF$-equivalences}, the cofibrations are the \emph{positive complete stable $\cF$-cofibrations}, and the fibrations are then determined by the right lifting property. There is also a positive complete stable compactly generated $\cF$-model structure on the category of commutative monoids in orthogonal $G$-spectra indexed on a complete universe $\cU$, denoted $\Comm(\Sp_{\cU}^{G,\cF})$. The weak equivalences and fibrations are exactly those maps which are weak equivalences and fibrations after applying the forgetful functor \[\Comm(\Sp_{\cU}^{G,\cF})\longrightarrow \Sp_{\cU}^{G,\cF}\] and the cofibrations are determined by the left lifting property. When $\cF$ is the family of all subgroups, we drop the prefix $\cF$- from the notation and refer to the equivalences as the \emph{stable equivalences}. \end{prop} \begin{prop}[Proposition A.2 \cite{DMPPR21}]\label{model structure 2} When $G=D_2$, there is also a model structure on the category of $\Assoc_{\sigma}$-algebras in $\Sp_{\cV}^{D_2}$, which we denote $\Assoc_{\sigma}(\Sp_{\cV}^{D_2})$, where the weak equivalences are the \emph{stable equivalences} and the cofibrations are the \emph{positive complete stable cofibrations}. \end{prop} \begin{defin}\label{def: model structure} We will refer to each of the model structures on $\Sp_{\cU}^{G,\cF}$ and $\Comm(\Sp_{\cU}^{G,\cF})$ defined in Proposition \ref{model structure} and the model structure on $\Assoc_{\sigma}(\Sp_{\cV}^{D_2})$ of Proposition \ref{model structure 2} as the \emph{$\cF$-model structure} for brevity. \end{defin} \begin{defin} Let $G$ be a compact Lie group. A map of $G$-spaces is a $G$-cofibration if it has the left lifting property with respect to all maps of $G$-spaces $f\colon \thinspace X\to Y$ such that $X^H\to Y^H$ is a weak equivalence and a Serre fibration for every closed subgroup $H$ of $G$. \end{defin} We make use of another collection of cofibrations in orthogonal $D_2$-spectra indexed on a complete universe $\cV$, which we call \emph{flat cofibrations}. These are also the cofibrations of a model structure on orthogonal $D_2$-spectra \cite{BDS18}. \begin{defin}\label{flat} We say a map $X\to Y$ of orthogonal $D_2$-spectra is flat if the latching map \[ L_nY\coprod_{L_nX}X(\mathbb{R}^n)\to Y(\mathbb{R}^n) \] is a $(D_2\times O(n))$-cofibration for all $n\ge 0$. In particular, $Y$ is a flat orthogonal $D_2$-spectrum if the latching map \[ L_n(Y)\to Y(\mathbb{R}^n) \] is a $(D_2\times O(n))$-cofibration for all $n\ge 0$. \end{defin} \begin{remark} If $Y$ is cofibrant in $\Assoc_{\sigma}(\Sp_{\cV}^{D_2})$ with the positive complete stable model structure, then it is flat in the sense of Definition \ref{flat} (cf. Remark A.3 \cite{DMPPR21}). Therefore, we can replace any spectrum $Y$ in $\Assoc_{\sigma}(\Sp_{\cV}^{D_2})$ by a flat spectrum $\widetilde{Y}$ in $\Assoc_{\sigma}(\Sp_{\cV}^{D_2})$ by cofibrantly replacing in the positive complete stable model structure of Proposition \ref{model structure}. \end{remark} Finally, in order to make sure that certain simplicial spectra are Reedy cofibrant, we need the following definition. \begin{defin} We say an orthogonal $G$-spectrum $X$ indexed on a complete universe $\cU$ is \emph{well-pointed} if $X(V)$ is well-pointed in $\Top^G$ for all finite dimensional orthogonal $G$-representations $V$. 
We say an $E_{\sigma}$-ring $X$ in $\Sp^{D_2}_{\cV}$ is \emph{very well-pointed} if it is well-pointed and the unit map $S^0\to X(\mathbb{R}^0)$ is a Hurewicz cofibration in $\Top^{D_2}$. \end{defin} \subsection{Real \texorpdfstring{$p$}{p}-cyclotomic spectra}\label{sec: cylotomic} Here we briefly recall the theory of Real cyclotomic spectra following \cite{Hog16,QS19}. We include this section to motivate our choice of the family of subgroups $\cR$ in Section \ref{sec: norm}. In Section \ref{sec: witt}, we will prove that our construction of Real Hochschild homology has a notion of Real cyclotomic structure that allows one to define Witt vectors for rings with anti-involution, so this section is also relevant for that setup. First, recall that given a family of closed subgroups $\cF$ of a topological group $G$, there is a $G$-CW complex $E\cF$ with the property that $E\cF^H\simeq *$ if $H\in \cF$ and $E\cF^H=\emptyset$ otherwise. We define $\widetilde{E}\cF$ as the homotopy cofiber of the map $E\cF_+\longrightarrow S^0$ induced by the collapse map. Recall that there is a (derived) geometric fixed point functor \[ \Phi^{\mu_{p}}\colon \thinspace \Sp_{\cU}^{O(2)}\longrightarrow \Sp_{\cU}^{O(2)}\] for each odd prime $p$ defined by first cofibrantly replacing and then applying the functor $(\widetilde{E}\cR\wedge -)^{\mu_p}$. Here we identify $O(2)/\mu_p$ with $O(2)$ via the root homeomorphism $O(2)\to O(2)/ \mu_p$ and we post-compose with a change of universe functor so that we do not need to distinguish between $\cU^{\mu_p}$ and $\cU$. We write $D_{2p^{\infty}}$ for the semi-direct product $\mu_{p^{\infty}}\ltimes D_2\subset O(2)$ where $\mu_{p^{\infty}}$ is the Pr\"ufer $p$-group. For the rest of this section, fix an odd prime $p$. \begin{defin}[cf. {\cite[Definition 2.6]{Hog16}}] We say $X$ in $\Sp^{O(2)}_{\cU}$ is \emph{genuine Real $p$-cyclotomic} if there is an $\cF_{\text{Fin}}^{O(2)}$-equivalence $\Phi^{\mu_p}X\simeq X$, where $\cF_{\text{Fin}}^{O(2)}$ is the family of finite subgroups of $O(2)$. \end{defin} \begin{exm} By \cite[Theorem 2.9]{Hog16}, the spectrum $\THR(A)$ is genuine Real $p$-cyclotomic for any $E_{\sigma}$-ring $A$. \end{exm} As a consequence, there are $D_{2}$-equivariant structure maps \begin{align}\label{eq:restriction maps} R_k \colon\thinspace \THR(A)^{\mu_{p^{k}}}\longrightarrow (\widetilde{E}\cR \wedge \THR(A))^{\mu_{p^k}}= (\Phi^{\mu_p}\THR(A))^{\mu_{p^{k-1}}}\overset{\simeq}{\longrightarrow} \THR(A)^{\mu_{p^{k-1}}} \end{align} called \emph{restriction maps} for all primes $p$. \begin{defin}[cf. {\cite[Definition 3.6.]{Hog16}}]\label{Real TC} Real topological restriction homology is defined as \[ \TRR(A;p)=\underset{k,R_k}{\holim} \THR(A)^{\mu_{p^{k}}}\] in the category $\Sp_{\cV}^{D_2}$. There are also $D_2$-equivariant \emph{Frobenius maps} \[ F_k \colon\thinspace \THR(A)^{\mu_{p^k}}\longrightarrow\THR(A)^{\mu_{p^{k-1}}},\] induced by inclusion of fixed points, which in turn induce a map \[ F\colon\thinspace \TRR(A;p)\longrightarrow \TRR(A;p)\] on Real topological restriction homology. We define $\TCR(A;p)$ as the homotopy equalizer of the diagram \[ \begin{tikzcd} \TRR(A;p) \ar[r,shift right=1ex,"\id" '] \ar[r,shift left=1ex,"F"] & \TRR(A;p) \end{tikzcd} \] in the category $\Sp_{\cV}^{D_2}$. \end{defin} Let $X$ be an object in $\Sp_{\cU}^{O(2),\cR}$ and write $E=E\cR$ and $\widetilde{E}=\widetilde{E}\cR$. We consider the isotropy separation sequence associated to $\cR$, as in \cite{GM95} (cf. 
\cite[Remark 5.32]{QS19}) \begin{align}\label{eq:isotropy} \xymatrix{ (E_+ \wedge F(E_+,X))^{\mu_{p^k}} \ar[r] & F(E_+,X)^{\mu_{p^k}} \ar[r] & (\widetilde{E}\wedge F(E_+, X))^{\mu_{p^k}} } \end{align} where $(-)^{\mu_{p^k}}$ denotes the categorical fixed points functor (denoted $\Psi^{\mu_{p^k}}$ in \cite[Remark 3.2]{QS19}). We write \[X^{t_{D_2}\mu_{p^k}}=(\widetilde{E}\wedge F(E_+,X))^{\mu_{p^k}} \text{ and } X^{h_{D_2}\mu_{p^k}}=F(E_+,X)^{\mu_{p^k}}.\] We now recall the definition of Real $p$-cyclotomic spectrum. Our definition will differ from \cite[Definition 3.15]{QS19} in that we use $O(2)$ instead of $D_{2p^{\infty}}$. \begin{defin}[cf. {\cite[Definition 6.5]{QS19}}]\label{cyclotomicDef} An object $X$ in $\Sp_{\widetilde{\cV}}^{O(2),\cR}$ is \emph{Real $p$-cyclotomic} if there is a \emph{Tate-valued Frobenius} map \[ \varphi_p \colon \thinspace X\longrightarrow X^{t_{D_2}\mu_p}\] in $\Sp_{\widetilde{\cV}}^{O(2),\cR}$. \end{defin} Identifying $N_{D_2}^{O(2)}(A)$ and $\THR(A)$ as Real $p$-cyclotomic spectra in the sense above therefore only requires showing that there is a zig-zag of $\cR$-equivalences between $N_{D_2}^{O(2)}(A)$ and $\THR(A)$ and compatible Tate-valued Frobenius maps $\varphi_p$ and $\varphi_p^{\prime}$ \[ \begin{tikzcd} N_{D_2}^{O(2)}(A) \simeq \THR(A) \ar[d,shift left=5ex,"\varphi_p^{\prime}"] \ar[d,shift right=5ex,"\varphi_p"]\\ N_{D_2}^{O(2)}(A)^{t_{D_2}\mu_p} \simeq \THR(A)^{t_{D_2}\mu_p} \end{tikzcd} \] in $\Sp_{\widetilde{\cV}}^{O(2),\cR}$. Finally, we note that it suffices that $\THR(A)$ is Real $p$-cyclotomic in the sense of Definition \ref{cyclotomicDef} in order to define $p$-typical Real topological cyclic homology. For this discussion, we assume $\iota_e^*\THR(A)$ is bounded below. Let \[ \mathrm{can}_p \colon \thinspace \THR(A)^{h_{D_2}\mu_{p}} \longrightarrow \THR(A)^{t_{D_2}\mu_{p}}\] denote the right-hand map in the diagram \eqref{eq:isotropy} for $X=\THR(A)$. Using the identification \begin{align}\label{fixed points of fixed points} \THR(A)^{h_{D_2}\mu_{p^\infty}} \simeq (\THR(A)^{h_{D_2}\mu_{p}})^{h_{D_2}\mu_{p^\infty}} \end{align} and the identification \[ ((\THR(A))^{t_{D_2}\mu_{p}})^{h_{D_2}\mu_{p^{\infty}}} \simeq (\THR(A))^{t_{D_2}\mu_{p^{\infty}}},\] which uses the hypothesis that $\iota_e^*\THR(A)$ is bounded below and the Real Tate orbit lemma \cite[Lemma 7.19]{QS19}, we have maps \[ \can=(\can_p)^{h_{D_2}\mu_{p^{\infty}}} \colon \thinspace \TCR^{-}(A;p)\longrightarrow \TPR(A;p)\] and \[ \varphi=(\varphi_p)^{h_{D_2}\mu_{p^{\infty}}} \colon \thinspace \TCR^{-}(A;p)\longrightarrow \TPR(A;p), \] where we write $\TCR^{-}(A;p)=\THR(A)^{h_{D_2}\mu_{p^{\infty}}}$ and $\TPR(A;p)=\THR(A)^{t_{D_2}\mu_{p^{\infty}}}$. Finally, let $\TCR(A;p)$ denote the homotopy equalizer of the diagram \begin{align}\label{TCR NS} \xymatrix{ \TCR^{-}(A;p)\ar@<1ex>[r]^{\text{can}} \ar@<-1ex>[r]_{\varphi} & \TPR(A;p). } \end{align} Then \cite[Corollary 7.3]{QS19} implies that the definition of $\TCR(A;p)$ as the homotopy equalizer of the diagram \eqref{TCR NS} and the definition of Real topological cyclic homology $\TCR(A;p)$ of Definition \ref{Real TC} agree after $p$-completion. \section{Real topological Hochschild homology via the norm}\label{sec: norm} Recall that the third author, with Hopkins and Ravenel \cite{HHR}, defines multiplicative norm functors from $H$-spectra to $G$-spectra \[ N_H^G: \Sp^H \to \Sp^G \] for a finite group $G$ and subgroup $H$ (see Definition \ref{def: norm}). It is shown in \cite{ABGHLM18} and \cite{BDS18} that one can extend this construction to a norm $N_e^{\mathbb{T}}$ on associative ring orthogonal spectra. 
The functor \[ N_e^{\mathbb{T}}\colon\thinspace \Assoc(\Sp) \to \Sp^{\mathbb{T}}_U \] is defined via the cyclic bar construction, $N_e^{\mathbb{T}}(R) = \mathcal{I}_{\mathbb{R}^{\infty}}^U|B^{cyc}_{\wedge}R|.$ In \cite{ABGHLM18}, the authors then show that this functor satisfies the adjointness properties that one would expect from a norm. In particular, they show that in the commutative setting the norm functor is left adjoint to the forgetful functor from commutative ring orthogonal $\mathbb{T}$-spectra to commutative ring orthogonal spectra. It follows from this construction of $N_{e}^{\mathbb{T}}$ that topological Hochschild homology can be viewed as a norm from the trivial group to $\mathbb{T}$. In this section, we consider the analogous story for Real topological Hochschild homology. In particular, we define a norm functor $N_{D_2}^{O(2)}$ using the dihedral bar construction, and prove that it satisfies the adjointness properties that one would expect from an equivariant norm. We then characterize Real topological Hochschild homology as the norm $N_{D_2}^{O(2)}$. \subsection{The norm and the relative tensor} We begin by observing that the dihedral bar construction takes values in $O(2)$-spectra. \begin{lem} There is a functor \[|B_{\bullet}^{\di}(-)|\colon \thinspace \Assoc_\sigma(\Sp_{\cV}^{D_2})\to \Sp_{\widetilde{\cV}}^{O(2)}.\] \end{lem} \begin{proof} By Theorem \ref{action on realization}, the realization of a dihedral space has an $O(2)$-action. Since geometric realization in orthogonal spectra is computed level-wise, we know $|B_{\bullet}^{\di}(A)|$ has an $O(2)$-action and this orthogonal $O(2)$-spectrum is indexed on $\widetilde{\mathcal{V}}$. This construction is also clearly functorial. \end{proof} We then define the norm as the dihedral bar construction. \begin{defin}\label{O2 norm} Let $A$ be an $E_{\sigma}$-ring in $\Sp_{\cV}^{D_2}$. We define \[ N_{D_2}^{O(2)}A= \cI_{\widetilde{\cV}}^{\cU}| B_{\bullet}^{\di} (A) |\] to be the norm from $D_2$ to $O(2)$. This defines a functor \[ N_{D_2}^{O(2)}\colon \thinspace \Assoc_{\sigma}(\Sp_{\cV}^{D_2}) \to \Sp_{\cU}^{O(2)}.\] \end{defin} \begin{rem} In particular, we produce a functor \[ N_{D_2}^{O(2)}:\Comm(\Sp_{\cV}^{D_2})\longrightarrow \Comm(\Sp_{\cU}^{O(2)})\] by restriction to the subcategory $\Comm(\Sp_{\cV}^{D_2})\subset \Assoc_{\sigma}(\Sp_{\cV}^{D_2})$. \end{rem} Note that the category of commutative monoids in $\Sp_{\cV}^{D_2}$ is tensored over the category of $D_2$-sets. This follows because $\Sp_{\cV}^{D_2}$ is tensored over $D_2$-sets and the forgetful functor $\Comm(\Sp_{\cV}^{D_2})\to \Sp_{\cV}^{D_2}$ creates all indexed limits and the category $\Comm(\Sp_{\cV}^{D_2})$ contains all $\Top^{D_2}$-enriched equalizers, by a generalization of \cite[Lem. 2.8]{MSV97}. We simply write $\otimes$ for this tensoring, viewed as a functor \[ -\otimes - \colon \thinspace \Comm(\Sp_{\cV}^{D_2})\times D_2\text{-}\Set \longrightarrow \Comm(\Sp_{\cV}^{D_2}).\] Note that this naturally extends to a functor \[ -\otimes - \colon \thinspace \Comm(\Sp_{\cV}^{D_2}) \times \left (D_2\text{-}\Set\right )^{\Delta^{\op}}\longrightarrow \Comm(\Sp_{\cV}^{D_2}).\] Consider the minimal simplicial model \[ \xymatrix{ D_2 \ar[r] & \ar@<1ex>[l]\ar@<-1ex>[l] \ar@<1ex>[r]\ar@<-1ex>[r] D_4 & \ar@<2ex>[l]\ar@<-2ex>[l]\ar[l] \ar@<2ex>[r]\ar@<-2ex>[r]\ar[r] D_6 & \dots \ar@<3ex>[l]\ar@<-3ex>[l]\ar@<1ex>[l]\ar@<-1ex>[l] \ldots }\] for $O(2)$, where $D_{2(n+1)}=\text{Aut}_{\Xi}([n])$. 
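By Theorem \ref{action on realization} applied to $\Delta\mathfrak{G}=\Xi$, the geometric realization of this simplicial set is the topological group \[ |D_{2(\bullet+1)}|\cong O(2), \] which is the sense in which it is a simplicial model for $O(2)$.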
Note that it is standard that $D_{2(\bullet+1)}$ is in fact a functor $\Xi^{\op}\longrightarrow \Set$ \cite[Lemma 6.3.1]{Loday98}. From the discussion in Section \ref{subdivision}, we know that $\sq(D_{2(\bullet+1)})$ is equipped with the structure of a simplicial $D_2$-set. \begin{defin} We define $O(2)_{\bullet}=\sq(D_{2(\bullet+1)})$ regarded as a simplicial $D_2$-set. \end{defin} \begin{defin}\label{O2 tensor} Let $A$ be an object in $\Comm(\Sp_{\cV}^{D_2})$. We may form the coequalizer of the diagram \[ \xymatrix{ A \otimes D_2\otimes O(2)_{\bullet} \ar@<1ex>[rr]^(.5){\id_{A} \otimes \psi } \ar@<-1ex>[rr]_(.5){N\otimes\id_{O(2)_{\bullet}}} && A\otimes O(2)_{\bullet} } \] where \begin{align*}\label{the map N} N \colon\thinspace A\otimes D_2 =N^{D_2}A \to A \end{align*} is the $E_0$-$N^{D_2}A$-algebra structure map and $\psi \colon \thinspace D_2\times O(2)_{\bullet}\longrightarrow O(2)_{\bullet}$ is the left $D_2$-action on $O(2)_{\bullet}$. We define the coequalizer in the category of simplicial objects in $\Comm(\Sp_{\cV}^{D_2})$ to be \[ A \otimes_{D_2}O(2)_{\bullet} \] and we consider the geometric realization $|A\otimes_{D_2} O(2)_{\bullet}|$. This construction has an $O(2)$-action on the right so we may regard $|A\otimes_{D_2}O(2)_{\bullet}|$ as an object in $\Sp_{\widetilde{\cV}}^{O(2)}$. We therefore make the following definition \begin{align*} \label{relative tensor norm} A\otimes_{D_2} O(2) =\cI_{\widetilde{\cV}}^{\cU}|A\otimes_{D_2} O(2)_{\bullet}|. \end{align*} \end{defin} \subsection{Comparison of the norm and the B{\"o}kstedt model}\label{sec: comparing models}\rm We will now show that for an $E_{\sigma}$-ring $A$ the construction $N_{D_2}^{O(2)}A$ recovers the B\"okstedt model for Real topological Hochschild homology. In \cite[Thm. 2.23]{DMPPR21}, the authors prove that for a flat $E_{\sigma}$-ring $A$ there is a zig-zag of stable equivalences of $D_2$-spectra \[ |B^{\di}_{\bullet}(A)| \simeq \THR(A), \] where the righthand side is the B\"okstedt model for THR. We extend this to a zig-zag of $\mathcal{R}$-equivalences of $O(2)$-spectra \[ N_{D_2}^{O(2)}(A)\simeq \THR(A). \] \begin{prop}[Equivalence of norm model and B\"okstedt model]\label{norm=bokstedt} Given a flat $E_{\sigma}$-ring in $\Sp^{D_2}_{\cV}$, there is a natural zig-zag of $\cR$-equivalences \[ N_{D_2}^{O(2)}A\simeq \THR(A)\] of $O(2)$-orthogonal spectra. This is also a zig-zag of $\cF_\text{Fin}^{\mathbb{T}}$-equivalences, where $\cF_\text{Fin}^{\mathbb{T}}$ is the family of finite subgroups of $\mathbb{T}\subset O(2)$. \end{prop} \begin{proof} We write $\Omega^{\underline{i}}:=\Omega^{i_0+\dots + i_k}$ when $\underline{i}\in \cI^{k+1}$. Let sh$^iX$ denote the shifted spectrum where (sh$^iX)_n = X_{i+n}$. Recall that there is a stable equivalence \[ t \colon A \to \underset{i\in \cI}{\m{\hocolim}} \Omega^{i}\sh^iA \] and a canonical isomorphism \[ \can \colon\thinspace \bigwedge_{j\in 1+\bf k \rm } \left ( \underset{i_j\in \cI}{\m{\hocolim}} \Omega^{i_j} \sh^{i_j} A \right ) \to \underset{\underline{i}\in \cI^{1+\bf k \rm}}{\m{\hocolim}} \Omega^{\underline{i}}(\sh^{i_0}A\wedge \dots \sh^{i_k}A) \] produced by commuting smash products with loops \cite[Thm. 2.23]{DMPPR21}. 
We write \[ \widehat{\id} \colon\thinspace \Sigma^{\infty}(A_{i_0}\wedge \dots \wedge A_{i_k})\to \sh^{i_0}A\wedge \dots \wedge \sh^{i_k}A\] for the map adjoint to the identity map \begin{align}\label{identity map} \id \colon \thinspace A_{i_0}\wedge \dots A_{i_k} = (\sh^{i_0}A)_0 \wedge \dots \wedge (\sh^{i_k}A)_0 = (\sh^{i_0}A \wedge \dots \wedge \sh^{i_k}A)_0 \end{align} and we write $\widehat{\id}_* := \underset{ \underline{i}\in \cI^{1+\bf k \rm}}{\m{\hocolim}}\Omega^{\underline{i}}(\widehat{\id})$ for the induced map. We consider the zigzag \[ A^{\wedge 1+\bf k\rm } \overset{\can \circ t^{\wedge 1+\bf k \rm}}{\longrightarrow} \underset{\underline{i}\in \cI^{1+\bf k \rm}}{\m{\hocolim}} \Omega^{\underline{i}}(\sh^{i_0}A\wedge \dots \sh^{i_k}A ) \overset{ \widehat{\id}_*}{\longleftarrow} \underset{\underline{i}\in \cI^{1+\bf k \rm}}{\m{\hocolim}}\Omega^{\underline{i}} \Sigma^{\infty}(A_{i_0}\wedge \dots A_{i_k}).\] Note that the spectrum on the right is the spectrum of $k$-simplices of the B\"okstedt model for $\THR(A)$. To prove the proposition, it will suffice to show that this zig-zag is $\mu_{k+1}$-equivariant. The right-hand side has a $\mu_{k+1}$-action by the cyclic permutation action on $\cI^{1+\bf k \rm}$ and the conjugation action on the loop space where $\mu_{k+1}$ acts on $\underline{i}$ and $A_{i_0}\wedge \dots \wedge A_{i_k}$ by cyclic permutation. The left-hand side has a $\mu_{k+1}$-action induced by the cyclic permutation action on the set $1+\bf{k}$. The maps $t^{\wedge 1+\bf k \rm}$ and $\can$ are both clearly $\mu_{k+1}$-equivariant. These $\mu_{k+1}$-actions are also compatible with the $D_2$-action, so the composite map $\can\circ t^{\wedge 1+\bf k \rm}$ is a $D_{2(k+1)}$-equivariant map. This composite is a stable equivalence on $D_2$-fixed points by \cite[Thm. 2.23]{DMPPR21}. The map $ \widehat{\id}_*$ is also clearly $\mu_{k+1}$-equivariant because the identity map \eqref{identity map} is $\mu_{k+1}$-equivariant and consequently its adjoint is as well. This action is also clearly compatible with the $D_2$-action, so it produces a $D_{2(k+1)}$-equivariant map and it is an equivalence on $D_2$-fixed points by \cite[Thm. 2.23]{DMPPR21}. These $\mu_{k+1}$-equivariant maps are also compatible with the face and degeneracy maps and therefore this is a zig-zag of maps of $D_2$-diagrams $\Lambda^{\op}\to \Sp$, which restricts to the zig-zag of maps of $D_2$-diagrams $\Delta^{\op}\to \Sp$ of \cite[Thm. 2.23]{DMPPR21}. The maps in the zig-zag of $D_2$-diagrams are also $D_2$-equivalences on $k$-simplices for all $k$ by \cite[Thm. 2.23]{DMPPR21}. By Corollary \ref{iso of models for dihedral objects}, these maps are also maps of dihedral objects. By the flatness hypotheses, each of the objects in the zig-zag is a Real cyclic spectrum, which restricts to a good Real simplicial spectrum in the sense of \cite[Definition 1.5]{DMPPR21}, so we produce a zig-zag of $O(2)$-equivariant maps which are equivalences on $D_2$-fixed points by \cite[Lemma 1.6]{DMPPR21}. Consequently, the zig-zag of $O(2)$-equivariant maps \[ N_{D_2}^{O(2)}A\simeq \THR(A)\] is a zig-zag of $\cR$-equivalences as desired because all the nontrivial groups in $\cR$ are conjugate to $D_2$ in $O(2)$. Since this entire argument is natural, this is in fact a natural $\cR$-equivalence. Since these maps are exactly the ones used to prove an equivalence of genuine $p$-cyclotomic structures between $B^{\text{cyc}}A$ and $\THR(A)$ on underlying $\mathbb{T}$-spectra in \cite{DMPSW19}, this is also an $\cF_{\text{Fin}}^{\mathbb{T}}$-equivalence.
\end{proof} \begin{rem}\label{rem:Finequiv} It is claimed in \cite[Remark 3.6]{DMP19} and \cite[p.8]{DMP21} that the identification of the dihedral bar construction and the B\"okstedt model is an identification of Real $p$-cyclotomic spectra. Proposition \ref{norm=bokstedt} is the first step towards proving this claim using the theory of Real cyclic objects. We will not carry out a complete proof of this claim from our perspective since it is not the main thrust of this work. \end{rem} \begin{cor}\label{equivalence} Given a stable equivalence of very well-pointed $E_{\sigma}$-rings $A\to A^{\prime}$ in $D_2$-orthogonal spectra indexed on $\cV$, there is an $\mathcal{R}$-equivalence \[ N_{D_2}^{O(2)}A\to N_{D_2}^{O(2)}A^{\prime}\] of $O(2)$-spectra. \end{cor} \begin{proof} The induced map $ N_{D_2}^{O(2)}A \to N_{D_2}^{O(2)}A^{\prime}$ is $O(2)$-equivariant by construction, so it suffices to check that after restricting to $D_2$-spectra it is a stable equivalence of $D_2$-spectra. Note that this is independent of the choice of subgroup in $\mathcal{R}$ of order $2$ because all of the subgroups of order two in $\mathcal{R}$ are conjugate. After restricting to $D_2$-spectra, there is a zigzag of stable equivalences of $D_2$-spectra \[ \iota_{D_2}^*N_{D_2}^{O(2)}A \simeq \iota_{D_2}^*\THR(A) \overset{\simeq}{\longrightarrow} \iota_{D_2}^*\THR(A^{\prime}) \simeq \iota_{D_2}^*N_{D_2}^{O(2)}A^{\prime},\] by \cite[Thm. 2.20]{DMPPR21}, and this agrees with the restriction of the map induced by $A\to A^{\prime}$ by naturality of Proposition \ref{norm=bokstedt}. \end{proof} \subsection{Comparison of the norm and the tensor}\label{sec: comparison of norm and tensor} We now identify the norm of Definition \ref{O2 norm} with the relative tensor of Definition \ref{O2 tensor} in the commutative case. \begin{prop}\label{equivalence of norm and tensor} Let $A$ be an object in $\Comm(\Sp_{\cV}^{D_2})$ and assume that $A$ is very well pointed. There is a natural map \[ N_{D_2}^{O(2)}A \longrightarrow A \otimes_{D_2} O(2),\] in $\Comm(\Sp_{\cU}^{O(2)})$, which is a weak equivalence after forgetting to $\Sp_{\cV}^{D_2}$. \end{prop} \begin{proof} We first prove that there is an equivalence of Real cyclic objects in $\Comm(\Sp_{\cV}^{D_2})$ \[\sq (B_{\bullet}^{\di}(A)) \simeq A\otimes_{D_2} O(2)_{\bullet}. \] We consider the $k$-simplices on each side. On the left, the $k$-simplices are $A \wedge A^{\wedge \text{\bf 2k+1}}$ with $D_{4(k+1)}=\langle \omega ,t | t^{2(k+1)}=\omega^2=t\omega t\omega =1\rangle$ action given by letting $t$ cyclically permute the $2k+2$ copies of $A$, and letting $\omega$ act on $\bfk=\{1,\dots ,2k+1\}$ by $\omega(i)=2k+2-i$. The $k$-simplices on the right hand side are given by the coequalizer of the diagram \[ \xymatrix{ A \otimes D_2 \otimes D_{4(k+1)} \ar@<1ex>[rr]^(.5){ \id_{A} \otimes \psi } \ar@<-1ex>[rr]_(.5){ N\otimes \id_{D_{4(k+1)}} } && A\otimes D_{4(k+1)} } \] in the category of $D_{4(k+1)}$-spectra. This is $ A \otimes_{D_2}D_{4(k+1)}$ and therefore the result on $k$-simplices follows from the $D_{4(k+1)}$-equivariant map \[ A \otimes_{D_2} D_{4(k+1)}\simeq A \wedge A^{\wedge \text{\bf 2k+1 \rm}}, \] which is clearly an equivalence on underlying commutative $D_2$-spectra. To see that this map is $D_{4(k+1)}$-equivariant, note that $D_{4(k+1)}/D_2\cong \mu_{2k+2}$ as $D_{4(k+1)}$-sets and the right-hand side can also be considered as a tensoring with the $D_{4(k+1)}$-set $\mu_{2k+2}$.
Since this map is $D_{4(k+1)}$-equivariant, it is compatible with the automorphisms in the dihedral category. It is also easy to check that this is compatible with the face and degeneracy maps. Since $A$ is very well-pointed, both sides are good in the sense of \cite[Definition 1.5]{DMPPR21}, so this level equivalence induces an equivalence on geometric realizations in the category of $D_2$-spectra by \cite[Lemma 1.6]{DMPPR21}. \end{proof} \begin{rem} Note that in the $\cF$-model structure on $\Sp_{\cW}^{G}$, where $G$ is a compact Lie group, $\cF$ is a family of subgroups, and $\cW$ is a universe, the $\cF$-equivalences can either be taken to be the maps $X\to Y$ that induce isomorphisms on homotopy groups $\pi_*(X^H)\longrightarrow \pi_*(Y^H)$ for all $H\in \cF$ or the maps that induce isomorphisms $\pi_*(\Phi^H X)\longrightarrow \pi_*(\Phi^HY)$, for all $H \in \cF$, where $\Phi^H$ denotes the $H$-geometric fixed points. \end{rem} In order to show that the identification from Proposition \ref{equivalence of norm and tensor} is in fact an $\cR$-equivalence, it suffices to check that the map is an equivalence on geometric fixed points. Since all nontrivial groups in $\cR$ are conjugate to $D_2$, to check that the map is an $\cR$-equivalence it suffices to check the condition on $D_2$-geometric fixed points. \begin{cor}\label{geom fix points of norm} Let $A$ be an object in $\Comm(\Sp_{\cV}^{D_2})$ and assume that $A$ is very well pointed and flat. Then there is an $\cR$-equivalence \[N_{D_2}^{O(2)}A\simeq A \otimes_{D_2} O(2).\] \end{cor} \begin{proof} Since there is a zig-zag of stable equivalences of orthogonal spectra \[ \Phi^{D_2}(N_{D_2}^{O(2)}A)\simeq \Phi^{D_2}(\THR(A))\] by Proposition \ref{norm=bokstedt}, we know by \cite{DMPPR21} that there is a zig-zag of stable equivalences of orthogonal spectra \[ \Phi^{D_2}(N_{D_2}^{O(2)}A) \simeq \Phi^{D_2}(A) \wedge_{\iota_e^*A}^{\mathbb{L}}\Phi^{D_2}(A)\] where the right-hand side is the derived smash product. Since $\Phi^{D_2}(-)$ sends homotopy colimits of $O(2)$-orthogonal spectra indexed on $\cU$ to homotopy colimits of orthogonal spectra,\footnote{Using the Bousfield--Kan formula for homotopy colimits, we can write any homotopy colimit as the geometric realization of a simplicial spectrum and then use the fact that genuine geometric fixed points commute with sifted colimits.} the spectrum $\Phi^{D_2}(A \otimes_{D_2} O(2)_{\bullet})$ is the homotopy coequalizer of \[ \xymatrix{ \Phi^{D_2}(A \otimes D_2 \otimes O(2)_{\bullet} )\ar@<.5ex>[r] \ar@<-.5ex>[r]& \Phi^{D_2}(A\otimes O(2)_{\bullet}) } \] which is level-wise equivalent to \[ \Phi^{D_2}N^{D_{4(n+1)}}_{D_2}(A) \simeq \Phi^{D_2} \left( \bigwedge_{\gamma } N_{D_2\cap \gamma D_2\gamma^{-1}} ^{D_2}(\iota_{D_2\cap \gamma D_2\gamma^{-1}}^*c_{\gamma} A) \right ) \] where $\gamma$ ranges over a set of representatives of the double cosets $D_2\backslash D_{4(n+1)} / D_2$, which in turn is equivalent to \[ B_n(\Phi^{D_2}(A),\iota_e^*(A),\Phi^{D_2}(A)).\] It is tedious, but routine, to check that this equivalence is compatible with the face and degeneracy maps. Since both source and target are Reedy cofibrant by our assumptions (cf. \cite[Definition 1.5, Lemma 1.6]{DMPPR21}), this produces a stable equivalence \[ \Phi^{D_2}(A \otimes_{D_2}O(2))\simeq \Phi^{D_2}(A)\wedge_{\iota_e^*A}^{\mathbb{L}}\Phi^{D_2}(A)\] of orthogonal spectra on geometric realizations. Since all the groups of order $2$ in $\cR$ are conjugate in $O(2)$, we have proven the claim.
\end{proof} \subsection{The dihedral bar construction as a norm} We now show that our definition of the norm from $D_2$ to $O(2)$ satisfies one of the fundamental properties that one would expect of a norm: in the commutative case it is left adjoint to restriction. \begin{thm}[The dihedral bar construction as a norm]\label{main norm thm} The restriction \[ N_{D_2}^{O(2)} \colon\thinspace \Comm(\Sp_{\cV}^{D_2})\to \Comm(\Sp ^{O(2),\mathcal{R}}_{\cU})\] of the norm functor $N_{D_2}^{O(2)}$ to genuine commutative $D_2$-ring spectra is left Quillen adjoint to the restriction functor $\iota_{D_2}^*$, where \[\Comm(\Sp_{\cV}^{D_2}) \text{ and }\Comm(\Sp ^{O(2),\mathcal{R}}_{\cU})\] are equipped with the $All$-model structure and the $\cR$-model structure of Definition \ref{def: model structure}, respectively. \end{thm} \begin{proof} By Corollary \ref{geom fix points of norm}, there is a natural $\cR$-equivalence \[ N_{D_2}^{O(2)}(A)\simeq A\otimes_{D_2}O(2) \] of $O(2)$-orthogonal spectra. If $A\to A^{\prime}$ is an $\All$-equivalence of $D_2$-spectra where both source and target are very well-pointed, then there is an $\cR$-equivalence \[ N_{D_2}^{O(2)}(A)\simeq N_{D_2}^{O(2)}(A^{\prime})\] of $O(2)$-spectra by Corollary \ref{equivalence}. Note that if $A$ and $A^{\prime}$ are cofibrant in the positive complete stable model structure on $\Assoc_{\sigma}(\Sp_{\cV}^{D_2})$, then they are in particular very well-pointed. This shows that both the functors $N_{D_2}^{O(2)}(-)$ and $(-)\otimes_{D_2}O(2)$ induce well-defined functors between the homotopy categories \[ N_{D_2}^{O(2)} \colon \thinspace \ho\left (\Comm(\Sp_{\cV}^{D_2})\right)\to \ho\left (\Comm(\Sp_{\cU}^{O(2),\cR}) \right )\] and \[ - \otimes_{D_2} O(2) \colon \thinspace \ho \left (\Comm(\Sp_{\cV}^{D_2})\right)\to \ho\left (\Comm(\Sp_{\cU}^{O(2),\cR}) \right )\] and they are naturally isomorphic on the homotopy categories. It is clear that $-\otimes_{D_2}O(2)$ is left adjoint to the restriction functor \[ \iota_{D_2}^* \colon \thinspace \ho \left (\Comm(\Sp_{\cU}^{O(2),\cR}) \right )\to \ho\left (\Comm(\Sp_{\cV}^{D_2})\right).\] Moreover, the restriction functor sends cofibrations and weak equivalences to cofibrations and weak equivalences by definition of the stable $\cR$-equivalences and the positive stable $\cR$-cofibrations. Consequently, it also preserves all fibrations and acyclic fibrations. \end{proof} \section{A multiplicative double coset formula}\label{mdcf} Classically, the multiplicative double coset formula for finite groups gives an explicit formula for the restriction to $K$ of the norm from $H$ to $G$, where $H$ and $K$ are subgroups of $G$. For compact Lie groups, no such multiplicative double coset formula is known in general. In this section, we present a multiplicative double coset formula for the restriction to $D_{2m}$ of the norm from $D_2$ to $O(2)$. \begin{convention}\label{conv: ordered coset} When the integer $m$ is understood from context, let \[ \zeta=\zeta_{2m}=e^{2i\pi/2m}\in \mathbb{T}\subset O(2).\] We consider the element $\zeta$ as a lift of the element $-1$ along a chosen homeomorphism \[ D_{2m}\backslash O(2)/D_2\cong \mu_m\backslash \mathbb{T}\cong \mathbb{T}.\] We make this choice of homeomorphism simply so that the formula for $\zeta$ can be chosen consistently for all $m$ independent of whether $m$ is odd or even. We observe that $\zeta D_{2}\zeta^{-1}=\langle\zeta_{m}\tau\rangle$. We fix total orders on the $D_{2m}$-sets $D_{2m}/e$, $D_{2m}/D_2$, and $D_{2m}/\zeta D_2 \zeta^{-1}$.
Let \[D_{2m}/e=\{1\le \zeta_m\tau\le \zeta_m \le \zeta_m^{2}\tau \le \zeta_m^2\le \dots \le \zeta_m^{m-1}\le \tau\},\] \[ D_{2m}/D_2 = \{D_2\le \zeta_m D_2\le \dots \le \zeta_m^{m-1}D_2\},\] \[ D_{2m}/\zeta D_2\zeta^{-1} = \{\zeta D_2\zeta^{-1}\le \zeta_m \cdot \zeta D_2\zeta^{-1}\le \dots \le \zeta_m^{m-1}\cdot\zeta D_2\zeta^{-1}\}.\] These choices of total orderings on the $D_{2m}$-sets $D_{2m}/e$, $D_{2m}/D_2$, and $D_{2m}/\zeta D_2\zeta^{-1}$ also fix group homomorphisms \[\lambda_{e} \colon \thinspace D_{2m}\to \Sigma_{2m},\] \[\lambda_{D_2} \colon \thinspace D_{2m} \to \Sigma_m \wr D_2, \] and \[\lambda_{\zeta D_2\zeta^{-1}}\colon \thinspace D_{2m} \to \Sigma_m\wr \zeta D_2\zeta^{-1}.\] We denote the associated norms by $N_e^{D_{2m}}$, $N_{D_2}^{D_{2m}}$ and $N_{\zeta D_2\zeta^{-1}}^{D_{2m}}$, respectively. The choice of ordering does not matter for our norm functors up to canonical natural isomorphism, but remembering the choice of ordering clarifies our constructions later. \end{convention} \begin{remark}\label{order on mum} Any finite subset $F$ of $\mathbb{T}\subset \mathbb{C}$ can be equipped with a total order by considering $1\le e^{2i\pi\theta}\le e^{2i\pi\theta^{\prime}}$ for all $0\le\theta \le \theta^{\prime}<1$. In particular, the subset of $m$-th roots of unity $\mu_m$ in $\mathbb{T}$ can be equipped with a total order in this way. \end{remark} \begin{lem}\label{lem: isomorphism of D2m sets} There is an isomorphism of totally ordered $D_{2m}$-sets \[f_{m,k} \colon \thinspace \mu_{2m(k+1)}\cong D_{2m}/D_2\amalg D_{2m}^{\amalg k} \amalg D_{2m}/\zeta D_2\zeta^{-1}\] where $\zeta=\zeta_{2m}$, the $D_{2m}$-sets on the right have the total orders from Convention \ref{conv: ordered coset}, and $\mu_{2m(k+1)}$ is equipped with a total order by Remark \ref{order on mum}. \end{lem} \begin{proof} Without the total ordering this isomorphism is clear. We chose the total ordering on the right-hand side in Convention \ref{conv: ordered coset} so that this lemma would be true. \end{proof} \begin{rem}\label{rem: map of totally ordered sets} We will also write $f_{m,k}$ for the underlying map of totally ordered sets from Lemma \ref{lem: isomorphism of D2m sets} after forgetting the $D_{2m}$-set structure. \end{rem} Given an $E_{\sigma}$-ring $R$, the restriction $\iota_e^*R$ is an $E_1$-ring and $R$ is an $N_{e}^{D_2}\iota_e^*R$-bimodule with right action \[ \overline{\psi}_R\colon \thinspace R\wedge N_e^{D_2}\iota_e^*R\longrightarrow R\] and left action \[ \overline{\psi}_L\colon \thinspace N_e^{D_2}\iota_e^*R\wedge R\longrightarrow R.\] We also note that there is an equivalence of categories \[ c_{\zeta}\colon \thinspace \Sp^{D_2} \longrightarrow \Sp^{\zeta D_{2}\zeta^{-1}}\] which is symmetric monoidal and therefore sends $E_{\sigma}$-rings in $\Sp^{D_2}$ to $E_{\sigma}$-rings in $\Sp^{\zeta D_{2}\zeta^{-1}}$. In particular, $c_{\zeta}R$ is a left $N_{e}^{\zeta D_2\zeta^{-1}}\iota_e^*R$-module with action map \[ c_{\zeta}(\overline{\psi}_L)\colon \thinspace N_e^{\zeta D_2\zeta^{-1}}\iota_e^*R\wedge c_{\zeta} R\longrightarrow c_{\zeta }R.\] \begin{defin}\label{def:twistedmodule} Let $R$ be an $E_{\sigma}$-ring. We define a right $N_{e}^{D_{2m}}\iota_e^*R$-module structure on $N_{D_2}^{D_{2m}}R$ as the composite \[ \xymatrix{ \psi_R\colon\thinspace N_{D_2}^{D_{2m}}R \wedge N_{e}^{D_{2m}}\iota_e^*R \ar[r]^-{\cong} & N_{D_2}^{D_{2m}}( R\wedge N_{e}^{D_2}\iota_e^*R ) \ar[rr]^(.6){N_{D_2}^{D_{2m}}(\bar{\psi}_R)} && N_{D_2}^{D_{2m}}R.
} \] We define a left $N_{e}^{D_{2m}}\iota_e^*R$-module structure on $N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}R$ \[\psi_L\colon \thinspace N_e^{D_{2m}}\iota_e^*R\wedge N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}R\longrightarrow N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}R\] as the composite of the isomorphism \begin{align*} \label{twisted left module} \xymatrix{ N_{e}^{D_{2m}}\iota_e^*R\wedge N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R\ar[r]^(.45){\cong} & N_{\zeta D_2\zeta^{-1}}^{D_{2m}}\left (N_e^{\zeta D_2\zeta^{-1}}\iota_e^*R\wedge c_{\zeta}R\right )} \end{align*} with the map \[ \xymatrix{ N_{\zeta D_2\zeta^{-1}}^{D_{2m}}(N_e^{\zeta D_2\zeta^{-1}}\iota_e^*R \wedge c_{\zeta}R) \ar[rrr]^(.6){N_{\zeta D_2\zeta^{-1}}^{D_{2m}}(c_{\zeta}(\bar{\psi}_L))}&&& N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R. } \] \end{defin} \begin{exm} When $m=2$, we note that $\zeta_{4}D_2\zeta_{4}^{-1}$ in $D_{8}$ can be identified with the diagonal subgroup $\triangle$ of $D_{4}$. In this case, $\triangle$ and $D_2$ are conjugate in $D_{8}$ even though they are not conjugate in $D_4$. We still define a left $N_e^{D_{4}}\iota_e^*R$-module structure on $N_{\triangle}^{D_{4}}c_{\zeta}R$ by composing the map \begin{align*} \label{twisted left module 2} \xymatrix{ N_{e}^{D_{4}}\iota_e^*R\wedge N_{\triangle}^{D_{4}}c_{\zeta}R\ar[r]^-{\cong} & N_{\triangle}^{D_{4}}\left (N_e^{\triangle}\iota_e^*R\wedge c_{\zeta}R \right ) } \end{align*} with the map \[ \xymatrix{ N_{\triangle}^{D_{4}}(N_e^{\triangle}\iota_e^*R \wedge c_{\zeta}R) \ar[rrr]^(.6){N_{\triangle}^{D_{4}}(c_{\zeta}(\bar{\psi}_L))}&&& N_{\triangle}^{D_{4}}c_{\zeta}R. } \] \end{exm} \begin{rem}\label{rem:simplicialD2msets} Note that there is an isomorphism of simplicial $D_{2m}$-sets \[ \mu_{2m(\bullet+1)}\to D_{2m}/D_2 \amalg D_{2m}^{\amalg \bullet} \amalg D_{2m}/\zeta D_{2}\zeta^{-1}\] which is given by the isomorphism $f_{m,k}$ of totally ordered $D_{2m}$-sets of Lemma \ref{lem: isomorphism of D2m sets} on $k$-simplices. The simplicial maps on the left are given by the simplicial maps in the simplicial set $\text{sd}_{D_{2m}}S^1_{\bullet}$ where $S^1_{\bullet}$ is the minimal model of $S^1$ as a simplicial set. On the right, the face maps \[D_{2m}/D_2 \amalg D_{2m}^{\amalg k} \amalg D_{2m}/\zeta D_2\zeta^{-1}\to D_{2m}/D_2 \amalg D_{2m}^{\amalg k-1} \amalg D_{2m}/\zeta D_2\zeta^{-1}\] are given by the canonical quotient composed with the fold map \[D_{2m}/D_2\amalg D_{2m} \overset{D_{2m}/D_2\amalg q}{\longrightarrow} D_{2m}/D_2\amalg D_{2m}/D_2 \overset{\nabla}{\longrightarrow} D_{2m}/D_2\] for the first face map, the fold map \[ D_{2m} \amalg D_{2m} \overset{\nabla}{\longrightarrow} D_{2m} \] for the middle maps, and for the last face map it is given by the composite \[ D_{2m}\amalg D_{2m}/\zeta D_2\zeta^{-1} \overset{q\amalg 1}{\longrightarrow} D_{2m}/\zeta D_2\zeta^{-1} \amalg D_{2m}/\zeta D_2\zeta^{-1} \overset{\nabla}{\longrightarrow} D_{2m}/\zeta D_2\zeta^{-1}.\] The degeneracy maps are given by the canonical inclusions. \end{rem} We use the isomorphism of simplicial $D_{2m}$-sets above to keep track of the smash factors in the proof of the following result. \begin{prop}\label{charofdibar} Suppose $R$ is an $E_{\sigma}$-ring in $D_{2}$-spectra indexed on the complete universe $\cV=\iota_{D_{2}}^*\cU$ where $\cU$ is a fixed complete $O(2)$-universe. More generally, let $\cV_n=\iota_{D_{2n}}^*\cU$ for $n\ge 1$. 
There is an isomorphism of simplicial $D_{2m}$-spectra \[ \cI_{\widetilde{\cV}}^{\cV_m}(\operatorname{sd}_{D_{2m}}B^{\di}_{\bullet}(R)) \cong B_{\bullet} \left (N_{D_2}^{D_{2m}}R,N_{e}^{D_{2m}}\iota_e^*R,N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R\right ),\] where we write $\widetilde{\cV}$ for the $D_{2}$-universe $\cV$ regarded as a $D_{2m}$-universe via inflation along the canonical quotient $D_{2m}\to D_2$. \end{prop} \begin{proof} When $R$ is an $E_{\sigma}$-ring, we explicitly define the simplicial map \[\cI_{\widetilde{\cV}}^{\cV_m}\operatorname{sd}_{D_{2m}}B^{\di}_{\bullet}(R) \to B_{\bullet }(N_{D_2}^{D_{2m}}R,N_{e}^{D_{2m}}\iota_e^*R,N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}R)\] on $k$-simplices. There is an isomorphism given by composition of two maps. The first map \[ \xymatrix{ f_{m,k} \colon \thinspace R^{\wedge \mu_{2m(k+1)}} \ar[r] & R^{\wedge m} \wedge \left ((R\wedge R^{\op})^{\wedge m}\right )^{\wedge k} \wedge (R^{\op})^{\wedge m} } \] is the isomorphism induced by $f_{m,k}$ of Remark \ref{rem: map of totally ordered sets} regarded simply as a map of totally ordered sets. The second map is \[ \xymatrix{ R^{\wedge m} \wedge \left ((R\wedge R^{\op})^{\wedge m}\right )^{\wedge k} \wedge (R^{\op})^{\wedge m} \ar[d]^(.5){1^{\wedge m} \wedge (1\wedge \tau )^{\wedge mk}\wedge \tau^{\wedge m} } \\ N_{D_2}^{D_{2m}}R \wedge (N_{e}^{D_{2m}}\iota_e^*R)^{\wedge k} \wedge N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}R, } \] where we use the indexing conventions of Convention \ref{conv: ordered coset}. This is a $D_{2m}$-equivariant isomorphism on $k$-simplices essentially because it comes from the isomorphism of ordered $D_{2m}$-sets of Lemma \ref{lem: isomorphism of D2m sets}. It follows that the map is compatible with the simplicial structure maps by comparing the structure maps in the isomorphism of simplicial $D_{2m}$-sets in Remark \ref{rem:simplicialD2msets} to the structure maps on either side. \end{proof} Consequently, for a flat $E_{\sigma}$-ring, we have the following multiplicative double coset formula. \begin{thm}[Multiplicative Double Coset Formula]\label{thm:doublecoset} When $R$ is a flat $E_{\sigma}$-ring and $m$ is a positive integer, there is a stable equivalence of $D_{2m}$-spectra \[ \iota_{D_{2m}}^*N_{D_2}^{O(2)}R \simeq N_{D_2}^{D_{2m}}R \wedge^{\mathbb{L}}_{N_e^{D_{2m}}\iota_e^*R} N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R. \] \end{thm} \begin{proof} By Proposition \ref{charofdibar}, we know there is an isomorphism \[ \cI_{\widetilde{\cV}}^{\cV_m}(\operatorname{sd}_{D_{2m}}B^{\di}_{\bullet}(R))\cong B_{\bullet} (N_{D_2}^{D_{2m}}R,N_{e}^{D_{2m}}\iota_e^*R, N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R). \] When $R$ is a flat $E_{\sigma}$-algebra in $D_2$-spectra, $N_{D_2}^{D_{2m}}R$ is a flat $N_e^{D_{2m}}\iota_e^*R$-module by \cite[Theorem 3.4.22-23]{Sto11} and \cite{BDS18}, and therefore we may identify \[N_{D_2}^{D_{2m}}R \wedge^{\mathbb{L}}_{N_e^{D_{2m}}\iota_e^*R} N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R\] with the realization of the bar resolution of $N_{D_2}^{D_{2m}}R$ by free $N_e^{D_{2m}}\iota_e^*R$-modules, then smashed with the left $N_e^{D_{2m}}\iota_e^*R$-module $N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R$.
This is exactly the realization of the simplicial spectrum \[B_{\bullet}(N_{D_2}^{D_{2m}}R,N_{e}^{D_{2m}}\iota_e^*R, N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}R).\] \end{proof} This generalizes a result of Dotto, Moi, Patchkoria, and Reeh \cite{DMPPR21}, where they prove that for $R$ a flat $E_{\sigma}$-ring there is a stable equivalence of $D_2$-spectra \[ \iota^*_{D_2}\THR(R) \simeq R \wedge^{\mathbb{L}}_{N_e^{D_2} \iota_e^* R} R. \] \section{Real Hochschild homology}\label{sec: HR} The remaining goal of this work is to develop an algebraic analogue of Real topological Hochschild homology and use it to define a notion of Witt vectors for rings with anti-involution. Ordinary topological Hochschild homology (THH) is a topological analogue of the classical algebraic theory of Hochschild homology. For a ring $R$, there is a linearization map relating the topological and algebraic theories: \[ \pi_k(\THH(HR)) \to \HH_k(R). \] Here $HR$ denotes the Eilenberg--MacLane spectrum of $R$. Further, this linearization map is an isomorphism in degree $0$. For generalizations of topological Hochschild homology, it is natural to ask, then, for their algebraic analogues. In \cite{ABGHLM18}, for $H \subsetneq \mathbb{T}$ the authors define the $H$-twisted topological Hochschild homology of an $H$-ring spectrum $R$: \[ \THH_{H}(R) = N_{H}^{\mathbb{T}}R. \] In \cite{BGHL19} an algebraic analogue of this topological theory is constructed. In particular, the authors define a theory of Hochschild homology for Green functors, $\m{\HH}_H^G(\m{R})$, for $H\subset G \subsetneq \mathbb{T}$, and $\m{R}$ an $H$-Green functor. They prove that for an $H$-ring spectrum $R$ there is a linearization map: \[ \underline{\pi}^G_{k} \THH_{H}(R)\to \m{\HH}_{H}^{G}(\m{\pi}^{H}_0R)_k, \] which is an isomorphism in degree 0. In this section we address the question: What is the algebraic analogue of THR? We do this by defining a theory of Real Hochschild homology for discrete $E_{\sigma}$-rings. We then show how this leads to a theory of Witt vectors for rings with anti-involution. To begin, we recall some basic terminology from the theory of Mackey functors, define norms in the category of Mackey functors, and define $E_{\sigma}$-algebras in $D_2$-Mackey functors, which we call discrete $E_{\sigma}$-rings. \subsection{Representable Mackey functors}\label{rep functors} For a more thorough review of the theory of Mackey functors, we refer the reader to \cite[\S 2]{BGHL19}. Here we simply recall the constructions and notation we use in the present paper. Let $G$ be a finite group. Write $\cA$ for the Burnside category of $G$. For a finite $G$-set $X$, \[\m{A}^G_{X}:=\cA(X,-)\] denotes the representable $G$-Mackey functor represented by $X$. This construction forms a co-Mackey functor object in Mackey functors, by viewing it also as a functor in the variable $X$, so in particular \begin{align*}\label{representable Mackey functor property} \m{A}^G_{X\amalg Y}=\cA(X\amalg Y,-)=\cA(X,-)\oplus\cA(Y,-)=\m{A}^G_X\oplus \m{A}^G_Y. \end{align*} We write $\m{A}^G$ for the Burnside Mackey functor associated to $G$, which can be identified with $\m{A}^G_{*}$ where $*=G/G$, so there is no clash in notation. This Mackey functor has the property that $\m{A}^G(G/H)=A(H)$, where $A(H)$ denotes the Burnside ring for a finite group $H$. Recall that as an abelian group $A(H)$ is free with basis $\{ [H/K] \}$, where $K$ ranges over all conjugacy classes of subgroups $K<H$.
When $K=H$ we simply write $1=[H/H]$ and when $K$ is the trivial group we simply write $[H]=[H/\{e\}]$. The transfer and restriction maps in $\m{A}^G$ are given by induction and restriction maps on finite sets. \begin{exm}\label{ex: burnside c2} The Burnside Mackey functor $\m{A}^{D_2}$ can be described by the following diagram. \[\xymatrix{ 1 \ar@{|->}[d] & [D_2] \ar@{|->}[d] & \m{A}^{D_2}(D_2/D_2)=\mathbb{Z}\langle1, [D_2] \rangle \ar@/_1pc/[d]_{res_e^{D_2}} & [D_2] \\ 1 & 2 & \m{A}^{D_2}(D_2/e)=\mathbb{Z}\langle 1 \rangle \ar@/_1pc/[u]_{tr_e^{ D_2}} & 1 \ar@{|->}[u] }\] \end{exm} Given a finite group $G$, a subgroup $H<G$, and an $H$-set $X$, we write $\Map^H(G,X)$ for the $G$-set of $H$-equivariant maps from $G$ to $X$, which is a functor in the variable $X$ known as coinduction. \subsection{Norms for Mackey functors} In this section, we recall briefly the definition and properties of the norm in $G$-Mackey functors for a finite group $G$. Let $\Sp_{\cU}^G$ denote the category of $G$-spectra indexed on a complete universe $\cU$ and let $\Mak_G$ denote the category of $G$-Mackey functors. Recall that these categories are both symmetric monoidal and the symmetric monoidal structures are compatible in the following sense. \begin{prop}\cite{LM06}\label{prop:smabox} For $X$ and $Y$ cofibrant, $(-1)$-connected orthogonal $G$-spectra, there is a natural isomorphism \[ \m{\pi}_0(X \wedge Y) \cong \m{\pi}_0 X \square \m{\pi}_0 Y. \] \end{prop} A $G$-Mackey functor $\m{M}$ has an associated Eilenberg--MacLane $G$-spectrum, $H\m{M}$. The defining property of this spectrum is that \[ \m{\pi}^G_k(H\m{M}) \cong \left\{ \begin{array}{ll} \m{M} &\text{ if } k = 0 \\ 0 &\text{ if } k \neq 0. \\ \end{array} \right. \] It then follows from Proposition \ref{prop:smabox} above that the box product of Mackey functors has a homotopical description: \[ \m{M} \square \m{N} \cong \m{\pi}_0(H\m{M} \wedge H\m{N}). \] The category $\Sp_{\cU}^G$ has an equivariant enrichment of the symmetric monoidal product, a $G$-symmetric monoidal category structure \cite{HHR,HillHopkins16}. Such a $G$-symmetric monoidal structure requires multiplicative norms for all subgroups $H\subset G$. In $\Sp_{\cU}^G$ these are given by the Hill--Hopkins--Ravenel norm. The $G$-symmetric monoidal structure on $\Sp_{\cU}^G$ induces such a structure on $\Mak_G$ as well. In particular, one can define norms for $G$-Mackey functors. \begin{defin}[cf. {\cite{HillHopkins16}}]\label{def: norm in Mackey} Given a finite group $G$ with subgroup $H$ and an $H$-Mackey functor $\m{M}$, the norm in Mackey functors is defined by \[ N_{H}^{G}\m{M} = \m{\pi}_0^{G}N_H^G H\m{M}.\] \end{defin} From this definition, the following lemma is immediate. \begin{lem}\label{norm of sifted colimit} The norm in Mackey functors commutes with sifted colimits. \end{lem} \subsection{Discrete \texorpdfstring{$E_{\sigma}$}{Esigma}-rings} In Section \ref{sec:Esigmarings}, we discussed $E_{\sigma}$-rings in $D_2$-spectra, which serve as the input for Real topological Hochschild homology. We now define their algebraic analogues, discrete $E_{\sigma}$-rings. These discrete $E_{\sigma}$-rings will be the input for our construction of Real Hochschild homology. \begin{defin}\label{EValgebras} Let $V$ be a finite-dimensional representation of a finite group $G$.
We define an $E_{V}$-algebra in $G$-Mackey functors to be a $\mathcal{P}_V$-algebra in $G$-Mackey functors, where $\mathcal{P}_V$ is the monad \[\mathcal{P}_V(-)=\bigoplus_{n\ge 0} \underline{\pi}_0^{G}\left ( (E_{V,n})_{+}\wedge_{\Sigma_n} H(-)^{\wedge n}\right ) \] and $H\m{M}$ is the Eilenberg--MacLane $G$-spectrum associated to the Mackey functor $\m{M}$. \end{defin} When \(V=\sigma\), this monad is particularly simple, since the spaces in the \(E_{\sigma}\)-operad are homotopy discrete. \begin{prop} For \(D_{2}\)-Mackey functors, the monad \(\mathcal P_{\sigma}\) is given by \[ \mathcal P_{\sigma}(\m{M})=T\big(N_{e}^{D_{2}}\iota_{e}^{\ast}\m{M}\big)\square (\m{A}\oplus \m{M}), \] where \(T(-)\) is the free associative algebra functor. \end{prop} \begin{proof} Recall that we have an equivariant equivalence \[ E_{\sigma,n}\simeq (D_{2}\times\Sigma_{n})/\Gamma_{n}. \] If \(n\) is even, then we have natural isomorphisms \[ \m{\pi}_{0}\big((E_{\sigma,n})_+\wedge_{\Sigma_{n}} H\m{M}^{\wedge n}\big)\cong \m{\pi}_{0} \big((N^{D_{2}}H\m{M})^{\wedge n/2}\big)\cong (N^{D_{2}}\m{M})^{\square n/2}. \] If \(n\) is odd, then the fixed smash factor contributes a box-factor of \(\m{M}\) itself: \[ \m{\pi}_{0}\big((E_{\sigma,n})_+\wedge_{\Sigma_{n}} H\m{M}^{\wedge n}\big)\cong (N^{D_{2}}\m{M})^{\square \lfloor n/2\rfloor}\square \m{M}. \] The result follows from grouping the terms according to the number of box-factors involving the norm. \end{proof} \begin{remark} If we compose with the forgetful functor, then we actually recover the tensor algebra, but in a slightly curious presentation. This is the tensor algebra on $V=\m{M}(D_2/e)$, presented as \[ T(V\otimes V)\otimes (\mathbb Z\oplus V). \] We view the two tensor factors as left and right multiplication, and use this to ``unfold'' the various tensor powers. Now the multiplication is the usual concatenation product, but we see that it fails to be equivariant (which reflects the anti-automorphism property). \end{remark} \begin{defin}\label{defn:discreteEsigma} By a \emph{discrete \(E_{\sigma}\)-ring}, we mean an algebra over the monad \(\mathcal P_{\sigma}\) in the category of $D_2$-Mackey functors. \end{defin} We can further unpack this structure to describe the monoids. \begin{lem}\label{Esigmastructure} A discrete \(E_{\sigma}\)-ring is the following data: \begin{enumerate} \item A $D_2$-Mackey functor \(\m{M}\), together with an associative product on \(\m{M}(D_{2}/e)\) for which the Weyl action is an anti-homomorphism, \item an \(N_{e}^{D_{2}}\iota_e^*\m{M}\)-bimodule structure on \(\m{M}\) that restricts to the standard action of \(\m{M}(D_{2}/e)\otimes \m{M}(D_{2}/e)^{op}\) on \(\m{M}(D_{2}/e)\), and \item an element \(1\in \m{M}(D_{2}/D_{2})\) that restricts to the element \(1\in\m{M}(D_{2}/e)\). \end{enumerate} \end{lem} \begin{rem} These conditions are almost identical to those of a Hermitian Mackey functor in the sense of \cite{DO19}: the only difference is that a discrete $E_{\sigma}$-ring includes the additional assumption that there is a fixed unit element $1\in \m{M}(D_2/D_2)$, or in other words, an $E_0$-$N^{D_2}_e\m{M}(D_2/e)$-ring structure on $\m{M}$. \end{rem} \begin{exm}\label{exm: pi0 of Esigma alg} Given an $E_{\sigma}$-ring $R$, it is clear that $\m{\pi}_0^{D_2}(R)$ is a discrete $E_{\sigma}$-ring. In fact, this does not depend on our choice of $E_{\sigma}$-operad. By Example \ref{exm: commutative implies sigma}, we also conclude that if $R$ is a commutative monoid in $\Sp_{\cV}^{D_2}$, then $\m{\pi}_0^{D_2}R$ is a discrete $E_\sigma$-ring.
\end{exm} \begin{exm}[Rings with anti-involution] Let $R$ be a discrete ring with anti-involution $\tau \colon\thinspace R^{\op}\to R$, regarded as the action of the generator of $D_2$. Then there is an associated Mackey functor $\m{M}$ with $\m{M}(D_2/e)=R$ and $\m{M}(D_2/D_2)=R^{D_2}$. The restriction map $\res_e^{D_2}$ is the inclusion of fixed points, the transfer $\tr_e^{D_2}$ is the map $1+\tau$, and the Weyl group action of $D_2$ on $\m{M}(D_2/e)=R$ is defined on the generator of $D_2$ by the anti-involution $\tau\colon\thinspace R^{\op}\longrightarrow R$. Since $R^{\op}\to R$ is a ring map, there is an element $1\in R^{D_2}$ that restricts to the multiplicative unit $1\in R$. This specifies a discrete $E_{\sigma}$-ring structure on the Mackey functor $\m{M}$. \end{exm} \subsection{Real Hochschild homology of discrete \texorpdfstring{$E_{\sigma}$}{Esigma}-rings}\label{sec:HR} In this section, we define the Real Hochschild homology $\HR_*^{D_{2m}}(\m{M})$ of a discrete $E_{\sigma}$-ring $\m{M}$, which takes values in graded $D_{2m}$-Mackey functors. To give our construction, we first need to specify a right $N_e^{D_{2m}}\iota_e^*\m{M}$-action on $N_{D_2}^{D_{2m}}\m{M}$, and a left $N_e^{D_{2m}}\iota_e^*\m{M}$-action on $N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M}$. Here $c_{\zeta}$ is the symmetric monoidal equivalence of categories \[ c_{\zeta} \colon \thinspace \Mak_{D_2}\to \Mak_{\zeta D_2 \zeta^{-1}}.\] For $\m{M}$ a discrete $E_{\sigma}$-ring, $\iota_e^*\m{M}$ is an (associative unital) ring, so $N_e^{D_{2m}}\iota_e^*\m{M}$ is an associative Green functor. Recall from Lemma \ref{Esigmastructure} that there is a left action \[ \overline{\psi}_L \colon\thinspace N_e^{D_2}\iota^*_e\m{M} \square \m{M} \to \m{M} \] and a right action \[ \overline{\psi}_R \colon\thinspace \m{M} \square N_e^{D_2}\iota^*_e\m{M} \to \m{M}.\] \begin{defin}\label{def:twistedMackeymodule} We define a right $N_e^{D_{2m}}\iota_e^*\m{M}$-module structure on $N_{D_2}^{D_{2m}}\m{M}$ as the composite \[ \xymatrix{ \psi_R\colon\thinspace N_{D_2}^{D_{2m}}\m{M} \square N_{e}^{D_{2m}}\iota_e^*\m{M} \ar[r]^-{\cong} & N_{D_2}^{D_{2m}}( \m{M}\square N_{e}^{D_2}\iota_e^*\m{M} ) \ar[rr]^(.6){N_{D_2}^{D_{2m}}(\overline{\psi}_R)} && N_{D_2}^{D_{2m}}\m{M}. } \] We define a left $N_e^{D_{2m}}\iota_e^*\m{M}$-module structure $\psi_L$ on $N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M}$ as the composite of the map \begin{align*} \label{twisted left module} \xymatrix{ N_{e}^{D_{2m}}\iota_e^*\m{M}\square N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M} \ar[r]^-{\cong} & N_{\zeta D_2 \zeta^{-1}}^{D_{2m}} ( N_e^{\zeta D_2\zeta^{-1}}\iota_e^*\m{M}\square c_{\zeta}\m{M}) } \end{align*} with the map \[ \xymatrix{ N_{\zeta D_2\zeta^{-1}}^{D_{2m}}(N_e^{\zeta D_2\zeta^{-1}}\iota_e^*\m{M} \square c_{\zeta}\m{M}) \ar[rr]^(.6){N_{\zeta D_2\zeta^{-1}}^{D_{2m}}(c_{\zeta}(\overline{\psi}_L))}&& N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M}, } \] where $c_{\zeta}(\overline{\psi}_L)$ is the left action of $N_e^{\zeta D_2 \zeta^{-1}}\iota_e^*\m{M}$ on $c_{\zeta}\m{M}$ coming from the fact that $c_{\zeta}$ is symmetric monoidal and therefore sends $E_{\sigma}$-rings in $\Mak_{D_2}$ to $E_{\sigma}$-rings in $\Mak_{\zeta D_2 \zeta^{-1}}$.
\end{defin} \begin{defin} Given Mackey functors $\m{R}$, $\m{M}$, and $\m{N}$, where $\m{R}$ is an associative Green functor, $\m{M}$ is a right $\m{R}$-module, and $\m{N}$ is a left $\m{R}$-module, we define the two-sided bar construction \[ B_{\bullet}(\m{M},\m{R},\m{N}) \] with $k$-simplices \[ B_{k}(\m{M},\m{R},\m{N}) = \m{M} \square \m{R}^{\square k} \square \m{N} \] and the usual face and degeneracy maps. \end{defin} \begin{defin}\label{def: HR} The \emph{Real $D_{2m}$-Hochschild homology} of a discrete $E_{\sigma}$-ring $\underline{M}$ is defined to be the graded $D_{2m}$-Mackey functor \[ \HR_*^{D_{2m}}(\m{M}) = H_*\left ( \HR_{\bullet}^{D_{2m}}(\m{M}) \right ), \] where \[ \HR_{\bullet}^{D_{2m}}(\m{M}) =B_{\bullet}(N_{D_2}^{D_{2m}}\m{M},N_{e}^{D_{2m}}\iota_e^*\m{M}, N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M} ).\] \end{defin} Recall that the homology of a simplicial Mackey functor is defined to be the homology of the associated normalized dg Mackey functor, as in \cite{BGHL19}. \begin{lem}\label{lem: HR0 of Tambara is Tambara} If $\m{M}$ is a $D_2$-Tambara functor, then $\HR_{0}^{D_{2m}}(\m{M})$ is a $D_{2m}$-Tambara functor. \end{lem} \begin{proof} This follows since reflexive coequalizers in the category of Tambara functors are computed as the reflexive coequalizer of the underlying Mackey functors (cf. \cite{Strickland12}). \end{proof} \begin{prop}\label{prop:HRBox} There is an isomorphism of $D_{2m}$-Mackey functors \[ \HR_0^{D_{2m}}(\m{M}) \cong N_{D_2}^{D_{2m}}\m{M}\square_{N_e^{D_{2m}}\iota_e^*\m{M}}N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M}. \] \end{prop} \begin{proof} Both sides are given by the coequalizer \begin{align*} \xymatrix{ N_{D_2}^{D_{2m}}\m{M} \square N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M} & \ar@<.5ex>[l]^-{\psi_R \square \id}\ar@<-.5ex>[l]_-{\id \square \psi_L} N_{D_2}^{D_{2m}}\m{M}\square N_e^{D_{2m}}\iota_e^*\m{M} \square N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M} } \end{align*} and are hence isomorphic. \end{proof} \begin{defin} We say that a Mackey functor $\m{M}$ is \emph{flat} if the derived functors of the functor $\m{M}\square-$ vanish. \end{defin} \begin{prop}\label{prop HR} For any discrete $E_{\sigma}$-ring $\m{M}$ that can be written as a filtered colimit of representable $D_2$-Mackey functors, there is an isomorphism of graded $D_{2m}$-Mackey functors \[ \HR_*^{D_{2m}}(\m{M}) =\m{\Tor}_*^{N_e^{D_{2m}}\iota_e^*\m{M}} (N_{D_2}^{D_{2m}}\m{M} , N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{M} ). \] \end{prop} \begin{proof} Norms send representable $D_2$-Mackey functors to representable $D_{2m}$-Mackey functors by \cite[Proposition 3.7]{BGHL19}. Norms also commute with sifted colimits by Lemma \ref{norm of sifted colimit}. The result then follows because filtered colimits of representable $D_{2m}$-Mackey functors are flat. \end{proof} \subsection{Comparison between Real Hochschild homology and THR} We will now show that our definition of Real Hochschild homology serves as the algebraic analogue of Real topological Hochschild homology. In particular, the two theories are related by a linearization map, which is an isomorphism in degree 0. \begin{thm}\label{thm:linearization} For any $(-1)$-connected $E_\sigma$-ring $A$, we have a natural homomorphism \[ \m{\pi}_k^{D_{2m}} \THR(A) \longrightarrow \HR_k^{D_{2m}}(\m{\pi}_0^{D_2}A), \] which is an isomorphism when $k=0$.
\end{thm} \begin{proof} From \cite[X.2.9]{EKMM97}, recall that for a simplicial spectrum $X_{\bullet}$ there is a spectral sequence \[ E^2_{p,q} = H_p(\pi_q(X_{\bullet})) \Rightarrow \pi_{p+q}(|X_{\bullet}|), \] from filtering by the skeleton. The same proof yields an equivariant version of this spectral sequence. By Proposition \ref{charofdibar}, there is a weak equivalence \[ \iota_{D_{2m}}^*\THR(A) \simeq |B_{\bullet} (N_{D_2}^{D_{2m}}A,N_{e}^{D_{2m}}\iota_e^*A,N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}A)|, \] so the spectral sequence in this case will be of the form: \[ E^2_{p,q} = H_p(\m{\pi}_q^{D_{2m}}( N_{D_2}^{D_{2m}}A \wedge (N_{e}^{D_{2m}}\iota_e^*A)^{\wedge \bullet} \wedge N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}A)) \Rightarrow \m{\pi}^{D_{2m}}_{p+q}(\THR(A)). \] The edge homomorphism of this spectral sequence is a map \[ \m{\pi}^{D_{2m}}_{p}(\THR(A)) \to H_p(\m{\pi}_0^{D_{2m}}( N_{D_2}^{D_{2m}}A\wedge (N_{e}^{D_{2m}}\iota_e^*A)^{\wedge \bullet} \wedge N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}A)). \] We can identify the right hand side as \begin{align}\label{rhs of above} H_p\left(\m{\pi}^{D_{2m}}_0N_{D_2}^{D_{2m}}A \square (\m{\pi}^{D_{2m}}_0N_{e}^{D_{2m}}\iota_e^*A)^{\square \bullet} \square \m{\pi}^{D_{2m}}_0(N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}A) \right), \end{align} using the collapse of the K\"unneth spectral sequence \cite{LM06} in degree 0. By Definition \ref{def: norm in Mackey}, there are isomorphisms of $D_{2m}$-Mackey functors \begin{align*} \m{\pi}_0^{D_{2m}}(N_e^{D_{2m}}\iota_e^*A)\cong & N_e^{D_{2m}}\iota_e^*\m{\pi}_0^{D_2}(A),&\\ \m{\pi}_0^{D_{2m}}(N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}A)\cong & N_{\zeta D_2 \zeta^{-1}}^{D_{2m}}c_{\zeta}\m{\pi}_0^{D_2}(A),\\ \m{\pi}_0^{D_{2m}}(N_{D_2}^{D_{2m}}A)\cong &N_{D_2}^{D_{2m}}\m{\pi}_0^{D_2}(A). & \end{align*} We can therefore identify \eqref{rhs of above} as \[ H_p\left( N_{D_2}^{D_{2m}}\m{\pi}_0^{D_2}A \square (N_{e}^{D_{2m}}\iota_e^*\m{\pi}_0^{D_2}A)^{\square \bullet} \square N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}\m{\pi}_0^{D_2}A \right ), \] and hence the edge homomorphism gives a linearization map \[ \m{\pi}^{D_{2m}}_{p}(\THR(A)) \longrightarrow \HR_p^{D_{2m}}(\m{\pi}_0^{D_2}A). \] To prove the claim that this map is an isomorphism in degree zero, we note that the only contribution to \(p+q=0\) is \[ E_{0,0}^2\cong N_{D_2}^{D_{2m}}\underline{\pi}_0^{D_{2}}A\square_{N_e^{D_{2m}}\iota_e^*\underline{\pi}_0^{D_{2}}A} N_{\zeta D_2\zeta^{-1}}^{D_{2m}}c_{\zeta}\underline{\pi}_0^{D_{2}}A \] concentrated in degree $p=q=0$. By Proposition \ref{prop:HRBox}, this is $\HR_0^{D_{2m}}(\underline{\pi}_0^{D_{2}}A).$ Since this is a first quadrant spectral sequence, we observe that \[E_{0,0}^2\cong E_{0,0}^{\infty}\cong \underline{\pi}_{0}^{D_{2m}}\THR(A).\] \end{proof} \begin{remark} Forthcoming work of Chloe Lewis constructs a B\"okstedt spectral sequence for Real topological Hochschild homology, which computes the equivariant homology of $\THR(A)$. The $E_2$-term of this spectral sequence is described by Real Hochschild homology, further justifying that $\HR$ is the algebraic analogue of THR. \end{remark} \section{Witt vectors of rings with anti-involution}\label{sec: witt} Hesselholt--Madsen \cite{HM97} proved that for a commutative ring $A$, there is an isomorphism \[ \pi_0(\THH(A)^{\mu_{p^n}}) \cong \W_{n+1}(A; p) \] where $\W_{n+1}(A; p)$ denotes the length $n+1$ $p$-typical Witt vectors of $A$. This can be reframed as an isomorphism \[ \m{\pi}_0^{\mu_{p^n}}(\THH(A))(\mu_{p^n}/\mu_{p^n}) \cong \W_{n+1}(A; p).
\] This was extended to associative rings in \cite{Hes97}. Recall that topological Hochschild homology is a cyclotomic spectrum, which yields restriction maps \[ R_n\colon\thinspace \THH(A)^{\mu_{p^{n}}} \to \THH(A)^{\mu_{p^{n-1}}}. \] One can then define \[ \TR(A;p) := \lim_{n,R_n} \THH(A)^{\mu_{p^n}}, \] and it follows from the Hesselholt--Madsen result above that \[ \pi_0 \TR(A;p) \cong \W(A;p), \] where $\W(A; p)$ denotes the $p$-typical Witt vectors of $A$. This was then extended by Hesselholt to non-commutative Witt vectors. From \cite[Thm. A]{Hes97}, for an associative ring $A$ there is an isomorphism \[ \text{TC}_{-1}(A;p)\cong \W(A;p)_F.\] Here $\W(A;p)$ denotes the non-commutative $p$-typical Witt vectors of $A$, and \[\W(A;p)_F=\text{coker}\left (1-F\colon\thinspace \W(A;p)\longrightarrow \W(A;p) \right ),\] where $F$ is the Frobenius map. Analogously, one would like to have a notion of (non-commutative) Witt vectors for discrete $E_{\sigma}$-rings, such that for an $E_{\sigma}$-ring $A$, $\m{\pi}_0^{D_{2m}}\THR(A)$ is closely related to the Witt vectors of $\m{\pi}_0^{D_2}A$. In this section, we define such a notion of Witt vectors. Real topological Hochschild homology also has restriction maps \[ R_n \colon\thinspace \THR(A)^{\mu_{p^n}} \to \THR(A)^{\mu_{p^{n-1}}}, \] defined in Section \ref{sec: cylotomic}. To begin, we want to understand the algebraic analogue of these restriction maps, which, in particular, requires defining a Real cyclotomic structure on Real Hochschild homology. To do this, we first recall from \cite{BGHL19} the definition of geometric fixed points for Mackey functors. Let $G$ be a finite group and let $\m{A}$ be the Burnside Mackey functor for the group $G$. For $N$ a normal subgroup of $G$, let $\cF[N]$ denote the family of subgroups $H$ of $G$ such that $N\not\subset H$. \begin{defin} Fix a finite group $G$ and let $N<G$ be a normal subgroup. Let $E\cF[N](\m{A})$ be the sub-Mackey functor of the Burnside Mackey functor $\m{A}$ for $G$ generated by $\m{A}(G/H)$ for all subgroups $H$ such that $H$ does not contain $N$. Then define \[ \widetilde{E}\cF[N](\m{A})=\m{A}/(E\cF[N](\m{A})).\] If $\m{M}$ is a $G$-Mackey functor and $N$ is a normal subgroup of $G$, then one can define \begin{align*} E\cF[N](\m{M}):=\m{M}\square E\cF[N](\m{A}),&\text{ and }\\ \widetilde{E}\cF[N](\m{M}):=\m{M}\square \widetilde{E}\cF[N](\m{A}).& \end{align*} More generally, if $\m{M}_{\bullet}$ is a dg-$G$-Mackey functor we define \begin{align*} (E\cF[N](\m{M}_{\bullet}))_n:=E\cF[N](\m{M}_n). \end{align*} \end{defin} Note that the $G$-Mackey functor $\widetilde{E}\cF[N](\m{A})$ has the property that \[ \widetilde{E}\cF[N](\m{A})(G/H) = \begin{cases} 0 & N\not< H\\ \m{A}((G/N)/(H/N)) & N<H \\ \end{cases} \] which is the desired property for isotropy separation. There is a fundamental exact sequence \[ E\cF[N](\m{A})\to \m{A}\to \widetilde{E}\cF[N](\m{A})\] which models the isotropy separation sequence. We now recall the definition of geometric fixed points for Mackey functors, as in \cite{BGHL19}. Let $\m{M}$ be a $D_{2m}$-Mackey functor. By \cite[Prop. 5.8]{BGHL19}, we know $\widetilde{E}\cF[\mu_d](\m{M})$ is in the image of $\pi_{d}^*$, the pullback functor from $D_{2m}/\mu_d$-Mackey functors to $D_{2m}$-Mackey functors. Consequently, we may produce a $D_{2m}/\mu_d\cong D_{2m/d}$-Mackey functor $(\pi_{d}^*)^{-1}(\widetilde{E}\cF[\mu_d](\m{M})).$ \begin{defin}[Definition 5.10 \cite{BGHL19}] Let $\m{M}$ be a $D_{2m}$-Mackey functor, and $\mu_d$ a normal subgroup of $D_{2m}$.
We define the $D_{2m/d}$-Mackey functor of $\mu_d$-geometric fixed points to be \[ \Phi^{\mu_d}(\m{M}):=(\pi_{d}^*)^{-1}(\widetilde{E}\cF[\mu_d]\m{M}).\] \end{defin} We now show that Real Hochschild homology has a type of genuine Real cyclotomic structure. \begin{prop}\label{cyclotomic} Given $D_2\subset D_{2m}\subset O(2)$, $\mu_{d}$ a normal subgroup of $D_{2m}$ where $d| m$, and $\m{M}$ a discrete $E_{\sigma}$-ring, there is a natural isomorphism \[ \Phi^{\mu_{d}} ( \HR_\bullet^{D_{2m}}(\m{M} ) )\cong \HR_\bullet^{D_{2m/d}}(\m{M})\] of simplicial $D_{2m/d}$-Mackey functors and consequently an isomorphism \[ \Phi^{\mu_{d}}\left (\HR_*^{D_{2m}}(\m{M})\right )\cong \HR_*^{D_{2m/d}}(\m{M})\] of $D_{2m/d}$-Mackey functors. \end{prop} \begin{proof} We apply $\Phi^{\mu_{d}}$ level-wise to the bar construction \[ (\HR^{D_{2m}}(\m{M}))_{\bullet} = B_{\bullet}(N_{D_2}^{D_{2m}}\m{M},N_{e}^{D_{2m}}\iota_e^*\m{M}, N_{\zeta_{2m} D_2 \zeta_{2m}^{-1}}^{D_{2m}}c_{\zeta_{2m}}\m{M}). \] By \cite[Proposition 5.13]{BGHL19}, the functor $\Phi^{\mu_{d}}$ is strong symmetric monoidal, so on $k$-simplices there is an isomorphism \begin{multline}\label{iso 1} \Phi^{\mu_{d}} \left ( (\HR^{D_{2m}}(\m{M}))_k \right ) \cong \\ \Phi^{\mu_{d}} \left (N_{D_2}^{D_{2m}}\m{M} \right ) \square \left ( \Phi^{\mu_{d}} \left ( N_e^{D_{2m}}\iota_e^*\m{M} \right ) \right ) ^{\square k} \square \Phi^{\mu_{d}} \left ( N_{\zeta_{2m} D_2 \zeta_{2m}^{-1}}^{D_{2m}}c_{\zeta_{2m}} \m{M} \right ) \end{multline} of $D_{2m/d}$-Mackey functors. The interaction of the geometric fixed points and the norm is described in \cite[Theorem 5.15]{BGHL19}. It follows that \begin{multline}\label{iso 2} \Phi^{\mu_{d}} \left ( \HR^{D_{2m}}(\m{M})_k \right )\cong \\ N_{D_2}^{D_{2m/d}}\m{M} \square \left ( N_e^{D_{2m/d}}\iota_e^*\m{M} \right ) ^{\square k} \square N_{\zeta_{2m/d} D_2\zeta_{2m/d}^{-1}}^{D_{2m/d}}c_{\zeta_{2m/d}}\m{M} \end{multline} where we use the fact that \[D_{2m}/\mu_d\cong D_{2m/d}\] by an isomorphism sending $\zeta_{m}$ to $\zeta_{m/d}$ and $\tau$ to $\tau$. Similarly, \[( \zeta_{2m}D_2\zeta_{2m}^{-1}\cdot \mu_d)/\mu_{d}\cong \zeta_{2m/d}D_2\zeta_{2m/d}^{-1},\] since $\zeta_{2m} D_2 \zeta_{2m}^{-1}=\langle\zeta_m\tau\rangle$ and the isomorphism sends $\zeta_{m}\tau$ to $\zeta_{m/d}\tau$. It therefore suffices to check that the simplicial structure maps commute with the isomorphisms \eqref{iso 1} and \eqref{iso 2}. To see this, we consider the isomorphism of totally ordered $D_{2m}$-sets \[\mu_{2m(\bullet+1)} \longrightarrow D_{2m}/D_2\amalg D_{2m}^{\amalg \bullet } \amalg D_{2m}/\zeta_{2m} D_2\zeta_{2m}^{-1}\] from Lemma \ref{lem: isomorphism of D2m sets}. We observe that in fact these maps form an isomorphism of simplicial $D_{2m}$-sets, as shown in Remark \ref{rem:simplicialD2msets}. Applying $\mu_d$-orbits to each side, we get an isomorphism \begin{align}\label{eq: isomorphism of d2md mackey} \mu_{2m(\bullet+1)/d} \longrightarrow D_{2m/d}/D_2\amalg D_{2m/d}^{\amalg \bullet } \amalg D_{2m/d}/\zeta_{2m/d} D_2\zeta_{2m/d}^{-1} \end{align} of simplicial $D_{2m/d}$-sets. If our discrete $E_{\sigma}$-ring $\m{M}$ is actually the restriction of a $D_{2m}$-Mackey functor, then the isomorphism is simply induced by tensoring with the isomorphism \eqref{eq: isomorphism of d2md mackey} of simplicial $D_{2m/d}$-sets. To see this, note that given a finite group $G$ with normal subgroup $N\subset G$, the geometric fixed points $\Phi^{N}$ of the norm $N^T$ of a $G$-set $T$ are given by taking the norm $N^{T/N}$ of the orbits $T/N$.
The more general statement also holds since we described the isomorphism level-wise; the compatibility with the face and degeneracy maps can be checked on each box product factor, indexed by the isomorphism of simplicial $D_{2m/d}$-sets \eqref{eq: isomorphism of d2md mackey}, using the compatibility of that isomorphism with the face and degeneracy maps. \end{proof} \begin{defin}\label{fixed points of Mackey functor} Given a normal subgroup $N$ in $G$, we define a functor \[ (-)^{N} \colon\thinspace \Mak_{G}\to \Mak_{G/N}\] on objects by \[ \m{M} \mapsto \m{\pi}_0^{G/N}\big((H\m{M})^{N}\big)\] and on morphisms in the evident way. \end{defin} Given a $D_{2m}$-Mackey functor $\m{M}$, the $D_2$-Mackey functor $\m{M}^{\mu_m}$ is the data \[ \begin{tikzcd} \m{M}(D_{2m}/D_{2m}) \ar[r, bend left=20] & \ar[l, bend left=20] \m{M}(D_{2m}/\mu_m) \end{tikzcd} \] regarded as a $D_2$-Mackey functor with the action of the Weyl group $W_{D_2}(e)=D_2$ given by the action of the Weyl group $W_{D_{2m}}(\mu_m)=D_{2m}/\mu_m\cong D_2$. \begin{remark} This Mackey ``fixed points'' functor is the functor denoted \(q_\ast\) in \cite{HMQ}, where it was shown to preserve Green and Tambara functors. \end{remark} These fixed points in Mackey functors perfectly connect to the categorical fixed points in \(G\)-spectra. \begin{prop} For a \(G\)-spectrum \(E\) and for all integers \(k\), we have \[ \underline{\pi}_k^{G/N}\big(E^N\big)\cong \big(\underline{\pi}_k^G(E)\big)^{N}. \] \end{prop} \begin{remark} This gives an alternate characterization of the geometric fixed points $\Phi^{\mu_d}\m{M}$ of a $D_{2m}$-Mackey functor $\m{M}$ for $d|m$ as \[ \Phi^{\mu_d}\m{M} \cong (\widetilde{E}\cF[\mu_d]\m{M})^{\mu_d}\] since it is clear in this case that there is a natural isomorphism \[(\widetilde{E}\cF[\mu_d](\m{M}))^{\mu_d}\cong \left (\pi_{d}^*\right )^{-1}(\widetilde{E}\cF[\mu_d](\m{M})).\] \end{remark} \begin{construction} Given a simplicial $D_{2p^k}$-Mackey functor $\m{M}_{\bullet}$, there is a natural map \[ \m{M}_{\bullet}\to \widetilde{E}\cF[\mu_p](\m{M}_{\bullet}) \] and then an induced natural map \[ \left (\left (\m{M}_{\bullet} \right )^{\mu_{p}}\right )^{\mu_{p^{k-1}}} \to \left (\left (\widetilde{E}\cF[\mu_p](\m{M}_{\bullet})\right )^{\mu_p} \right )^{\mu_{p^{k-1}}}. \] Note that we can identify \[ \left (\left (\m{M}_{\bullet} \right )^{\mu_{p}}\right )^{\mu_{p^{k-1}}} =\left (\m{M}_{\bullet} \right )^{\mu_{p^k}}\] by unraveling the definition. In other words, there is a natural transformation \[ R_k\colon\thinspace \left ( -\right )^{\mu_{p^k}} \longrightarrow (\Phi^{\mu_p}(-))^{\mu_{p^{k-1}}}\] of functors $s\Mak_{D_{2p^k}}\to s\Mak_{D_2}$ for each $k\ge 1$. On corresponding Lewis diagrams, these produce maps \[ \begin{tikzcd} \m{M}_{\bullet} (D_{2p^k}/D_{2p^k}) \ar[d, bend left=50]\ar[r,"R_k"] & \Phi^{\mu_p}\m{M}_{\bullet} (D_{2p^{k-1}}/D_{2p^{k-1}}) \ar[d, bend left=50]\\ \m{M}_{\bullet} (D_{2p^k}/\mu_{p^k}) \ar[u, bend left=50] \ar[r,"R_k"] & \Phi^{\mu_p} \m{M}_{\bullet} (D_{2p^{k-1}}/\mu_{p^{k-1}}). \ar[u, bend left=50] \end{tikzcd} \] of simplicial $D_2$-Mackey functors for each $k$ when evaluated at a simplicial $D_{2p^k}$-Mackey functor $\m{M}_{\bullet}$. \end{construction} We give this map the name $R_k$ because in our case of interest, where $\m{M}_{\bullet} = \HR_{\bullet}^{D_{2p^k}}\!(\m{\pi}_0^{D_2}A)$ for an $E_{\sigma}$-ring $A$, it is an algebraic analogue of the restriction map $R_k$ on $\THR(A)$.
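For illustration, consider the fixed point functor of Definition \ref{fixed points of Mackey functor}, which underlies the maps $R_k$, in the simplest case: take $m=p$ and $\m{M}=\m{A}^{D_{2p}}$, the Burnside Mackey functor. Using the identification $\m{A}^{G}(G/H)=A(H)$ from Section \ref{rep functors}, the Lewis diagram describing $\m{M}^{\mu_m}$ above specializes to the $D_2$-Mackey functor with values \[ (\m{A}^{D_{2p}})^{\mu_p}(D_2/D_2)=A(D_{2p}), \qquad (\m{A}^{D_{2p}})^{\mu_p}(D_2/e)=A(\mu_p)=\mathbb{Z}\langle 1, [\mu_p]\rangle, \] with restriction and transfer induced by restriction and induction of finite sets, and with trivial Weyl group action on $A(\mu_p)$ since every subgroup of $\mu_p$ is normal in $D_{2p}$.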
\begin{construction} If $\m{M}$ is a discrete $E_{\sigma}$-ring, it follows from Proposition \ref{cyclotomic} that there is an isomorphism of $D_{2p^{k-1}}$-Mackey functors \[ \Phi^{\mu_{p}}\HR_n^{D_{2p^k}}\! (\m{M} ) \cong \HR_n^{D_{2p^{k-1}}}\!(\m{M}). \] The above construction therefore produces restriction maps \[ R_k \colon\thinspace \left ( \HR_n^{D_{2p^k}}\! (\m{M}) \right )^{\mu_{p^{k}}} \to \left ( \HR_n^{D_{2p^{k-1}}}\!(\m{M} ) \right )^{\mu_{p^{k-1}}}, \] which are maps of $D_2$-Mackey functors. The maps $R_k$ can be described explicitly on Lewis diagrams by \[ \begin{tikzcd} \HR_n^{D_{2p^k}} (\m{M} )(D_{2p^k}/D_{2p^k}) \ar[d, bend left=50]\ar[r,"R_k"] & \HR_n^{D_{2p^{k-1}}} (\m{M})(D_{2p^{k-1}}/D_{2p^{k-1}}) \ar[d, bend left=50]\\ \HR_n^{D_{2p^k}} (\m{M}) (D_{2p^k}/\mu_{p^k}) \ar[u, bend left=50] \ar[r,"R_k"] & \HR_n^{D_{2p^{k-1}}}(\m{M} )(D_{2p^{k-1}}/\mu_{p^{k-1}}). \ar[u, bend left=50] \end{tikzcd} \] \end{construction} \begin{defin}\label{def: Witt vectors} Given a discrete $E_{\sigma}$-ring $\m{M}$, we define the \emph{truncated $p$-typical Real Witt vectors} of $\m{M}$ by the formula \[\m{\mathbb{W}}_{k+1}(\m{M};p)=\HR_0^{D_{2p^k}}\left (\m{M} \right )^{\mu_{p^k}}\] and the \emph{$p$-typical Real Witt vectors} of $\m{M}$ as \[ \m{\mathbb{W}}(\m{M};p):= \underset{k,R_k}{\lim} \HR_0^{D_{2p^k}}\left (\m{M} \right )^{\mu_{p^k}} \] where the limit is computed in the category of $D_2$-Mackey functors. This construction is entirely functorial in $\m{M}$ by naturality of the restriction maps $R_k$, so we produce a functor \[ \m{\mathbb{W}}(-;p)\colon\thinspace \Alg_{\sigma}(\Mak_{D_2})\longrightarrow \Mak_{D_2}.\] \end{defin} \begin{remark} If $\m{M}$ is a $D_2$-Tambara functor, then by Lemma \ref{lem: HR0 of Tambara is Tambara} and \cite[Prop 5.16]{HMQ}, we know $\HR_0^{D_{2p^n}}\left (\m{M} \right )^{\mu_{p^n}}$ is also a $D_{2}$-Tambara functor, and, applying the limit in the category of $D_2$-Tambara functors, we produce a functor \[ \m{\mathbb{W}}(-;p)\colon\thinspace \Tamb_{D_2}\longrightarrow \Tamb_{D_2}.\] \end{remark} We now consider how the $p$-typical Real Witt vectors are related to Real topological Hochschild homology. Let $A$ be an $E_{\sigma}$-ring. Recall that there is a restriction map \eqref{eq:restriction maps}. Since the family $\cR$ defined in Section \ref{sec: model structure} does not contain $\mu_{p^k}$ and the family $\cF[\mu_p]$ does not contain $\mu_{p^k}$ for any $k\ge 1$, on $\mu_{p^k}$-fixed points we do not need to distinguish between these two families. \begin{thm}\label{witt thr} Let $A$ be an $E_{\sigma}$-ring and $\underline{M}=\m{\pi}_0^{D_2}A$. There is an isomorphism of $D_2$-Mackey functors \[\m{\pi}_0^{D_2}\TRR(A;p) \cong \m{\mathbb{W}}(\m{M};p)\] whenever $R^1\lim_{k}\m{\pi}_0^{D_2}\THR(A)^{\mu_{p^k}}=0.$ \end{thm} \begin{proof} We note that there is an isomorphism \begin{align*} \m{\pi}_0^{D_{2p^k}}\THR(A)\cong \HR_0^{D_{2p^k}}(\m{M}) \end{align*} of $D_{2p^k}$-Mackey functors by Theorem~\ref{thm:linearization} and consequently a natural isomorphism of $D_2$-Mackey functors \begin{align} \label{eq:iso with Witt} \m{\pi}_0^{D_2}(\THR(A)^{\mu_{p^k}})\cong (\HR_0^{D_{2p^k}}(\underline{M}))^{\mu_{p^k}}.
\end{align} By construction, the natural isomorphism \eqref{eq:iso with Witt} is compatible with the natural transformation \[R_k \colon \thinspace \m{\pi}_0^{D_2}\THR(-)^{\mu_{p^k}}\longrightarrow \m{\pi}_0^{D_2}\THR(-)^{\mu_{p^{k-1}}}\] of functors $\Alg_{\sigma}(\Sp_{\cV}^{D_2})\longrightarrow \Mak_{D_2}$ in the sense that the diagram \[ \xymatrix{ \m{\pi}_0^{D_2}\THR(-)^{\mu_{p^k}} \ar[r]^{R_k} \ar[d]^{\cong} & \m{\pi}_0^{D_2}\THR(-)^{\mu_{p^{k-1}}} \ar[d]^{\cong} \\ \left (\HR_0^{D_{2p^k}}( \m{\pi}_0^{D_2}(-) ) \right)^{\mu_{p^k}} \ar[r]^-{R_k} & \left (\HR_0^{D_{2p^{k-1}}} ( \m{\pi}_0^{D_2}(-)) \right )^{\mu_{p^{k-1}}} } \] commutes. To see this, we note that the edge homomorphism in the spectral sequence associated to the skeletal filtration of a simplicial spectrum (see the proof of Theorem \ref{thm:linearization}) is compatible with the restriction maps $R_k$. Consequently, there are isomorphisms \begin{align*} \m{\pi}_0^{D_2}\TRR(A;p) \cong & \lim_R \m{\pi}_0^{D_2}\THR(A)^{\mu_{p^k}} \\ \cong & \lim_R \left (\HR_0^{D_{2p^k}}(\m{\pi}_0^{D_2}A) \right )^{\mu_{p^k}}\\ = & \m{\mathbb{W}}(\m{\pi}_0^{D_2}A;p), \end{align*} where the first isomorphism holds by our assumption that \[R^1\lim_{k}\m{\pi}_0^{D_2}\THR(A)^{\mu_{p^k}} =0.\] \end{proof}
\begin{construction}\label{Frobenius} We now define a Frobenius map \[F\colon \thinspace \m{\mathbb{W}}(\m{M};p)\to \m{\mathbb{W}}(\m{M};p).\] We note that the restriction maps $\text{res}_{\mu_{p^{k-1}}}^{\mu_{p^k}}$ and $\text{res}_{D_{2p^{k-1}}}^{D_{2p^k}}$ on the Mackey functor $\HR_0^{D_{2p^{k}}}\left (\m{M} \right )$ induce a map of $D_{2}$-Mackey functors \[ \begin{tikzcd} \left ( \HR_0^{D_{2p^{k}}}\left (\m{M} \right )\right )^{\mu_{p^k}}(D_2/D_2) \ar[d, bend left=50]\ar[r] & \left ( \HR_0^{D_{2p^{k}}}\left (\m{M} \right )\right )^{\mu_{p^{k-1}}}(D_{2p}/D_{2}) \ar[d, bend left=50]\\ \left ( \HR_0^{D_{2p^{k}}}\left (\m{M} \right )\right )^{\mu_{p^k}}(D_2/e) \ar[u, bend left=50] \ar[r] & \left ( \HR_0^{D_{2p^{k}}}\left (\m{M} \right )\right )^{\mu_{p^{k-1}}}(D_{2p}/e)\ar[u, bend left=50] \end{tikzcd} \] that we call $F$. We observe that the target of this map can be identified with the $D_2$-Mackey functor $\left (\HR_0^{D_{2p^{k-1}}}(\m{M})\right )^{\mu_{p^{k-1}}}$, so we abbreviate and simply write \[ F_k\colon \thinspace \left ( \HR_0^{D_{2p^{k}}}\left (\m{M} \right )\right )^{\mu_{p^k}}\longrightarrow \left (\HR_0^{D_{2p^{k-1}}}(\m{M})\right )^{\mu_{p^{k-1}}}\] for this map. Note that this construction is natural, so we can apply it to the map \[ \HR_0^{D_{2p^k}}(\m{M})\longrightarrow \widetilde{E}\cF[\mu_p]\left ( \HR_0^{D_{2p^k}}(\m{M})\right ).\] Therefore, by construction and Proposition \ref{cyclotomic} this map is compatible with the restriction maps in the sense that there are commutative diagrams \[ \begin{tikzcd} \left ( \HR_0^{D_{2p^{k}}}\left (\m{M} \right )\right )^{\mu_{p^k}} \ar[r,"F_k"] \ar[d, "R_k"] & \left ( \HR_0^{D_{2p^{k-1}}}\left (\m{M} \right )\right )^{\mu_{p^{k-1}}} \ar[d,"R_{k-1}"] \\ \left ( \HR_0^{D_{2p^{k-1}}}\left (\m{M} \right )\right )^{\mu_{p^{k-1}}} \ar[r,"F_{k-1}"] & \left (\HR_0^{D_{2p^{k-2}}}\left (\m{M} \right )\right )^{\mu_{p^{k-2}}} \end{tikzcd} \] Consequently, we have an induced map \[F\colon\thinspace \m{\mathbb{W}}(\m{M};p)\longrightarrow \m{\mathbb{W}}(\m{M};p).\] \end{construction}
\begin{rem} Note that there is a slight clash of notation here.
The Mackey functor restriction maps $\text{res}_{\mu_{p^{k-1}}}^{\mu_{p^k}}$ and $\text{res}_{D_{2p^{k-1}}}^{D_{2p^k}}$ on the Mackey functor $\HR_0^{D_{2p^{k}}}\left (\m{M} \right )$ induce the Frobenius maps $F_k$, not the maps \[ R_k \colon\thinspace \left ( \HR_0^{D_{2p^k}} (\m{M}) \right )^{\mu_{p^{k}}} \to \left ( \HR_0^{D_{2p^{k-1}}}(\m{M} ) \right )^{\mu_{p^{k-1}}}, \] which are also called restriction maps. Indeed, the maps $R_k$ are not induced by any of the structure maps in the Mackey functor. While the use of these terms in this paper is consistent with the literature on topological Hochschild homology, we point out the notation clash to avoid confusion. \end{rem}
\begin{rem}\label{Vershiebung} We can also define a Verschiebung operator \[ V\colon \thinspace \m{\mathbb{W}}(\m{M};p)\longrightarrow \m{\mathbb{W}}(\m{M};p)\] in exactly the same way as in Construction \ref{Frobenius} by replacing the restriction maps in the Mackey functor with the transfer maps in the Mackey functor. \end{rem}
There are also topological analogues of the maps $F$, $V$ and $R$ on $\THR(A)^{\mu_{p^k}}$ when $A$ is an $E_{\sigma}$-ring, which satisfy certain relations (cf. \cite[\S 3]{Hog16}). In particular, $R_k$ and $F_k$ are compatible in the sense that $R_{k-1}\circ F_k=F_{k-1}\circ R_k$. The cokernel of the map $\id_{\m{\mathbb{W}}(\m{M};p)}-F$ is defined to be the coinvariants $\m{\mathbb{W}}(\m{M};p)_F$. As a consequence of our setup, we have the following refinement of \cite[Theorem A]{Hes97}.
\begin{thm}\label{TCR thm} Let $A$ be an $E_{\sigma}$-ring, and suppose that \[ R^1\lim_{k}\m{\pi}_0^{D_2}\THR(A)^{\mu_{p^k}}=0. \] Then there is an isomorphism \[ \m{\pi}_{-1}\TCR(A;p)\cong \m{\mathbb{W}}(\m{\pi}_0^{D_2}(A);p)_{F}.\] \end{thm}
\begin{proof} We compute the homotopy groups of the homotopy fiber of the topological map $\id-F$ using the long exact sequence in homotopy groups. By Theorem \ref{witt thr}, we can identify $\m{\pi}_0^{D_2}\TRR(A;p)$. By inspection, the topological map $F\colon \thinspace \TRR(A;p)\to \TRR(A;p)$ induces the algebraic map $F\colon\thinspace \m{\mathbb{W}}(\m{\pi}_0^{D_2}A;p)\longrightarrow \m{\mathbb{W}}(\m{\pi}_0^{D_2}A;p)$, and therefore $\m{\pi}_{-1}\TCR(A;p)$ is the cokernel of the algebraic map $\id-F$. \end{proof}
\subsection{Relation to existing work} In \cite{DMP19}, Dotto, Moi, and Patchkoria give a definition of $p$-typical Witt vectors for a $D_2$-Tambara functor at odd primes $p$. In particular, they show that for an odd prime $p$ and a $D_2$-Tambara functor $T$, the classical $p$-typical Witt vectors of the commutative rings $T(D_2/D_2)$ and $T(D_2/e)$ can be assembled into a $D_2$-Tambara functor which they denote $\W(T; p)$. The involution, restriction, and norm maps in $\W(T; p)$ are induced from the analogous maps in $T$. The transfer is determined by the Tambara reciprocity relation. By \cite[Theorem 3.7]{DMP19}, this notion of Witt vectors recovers $\m{\pi}_0^{D_2}\THR(E)^{\mu_{p^n}}$ when $p$ is odd, $E$ is a connective flat commutative orthogonal $D_2$-ring spectrum, and $\m{M}:=\m{\pi}_0^{D_2}E$ is a $\m{\bZ}$-module (equivalently, when $\m{M}$ is a cohomological Mackey functor). Our work extends this description to the non-commutative setting, i.e.\ to $E_{\sigma}$-rings. We also give a description of the full dihedral Mackey functor, which is accessible using tools from homological algebra for Mackey functors.
\begin{prop}\label{cor: comparison to DMP} Fix an odd prime $p$.
When $E$ is a connective flat commutative orthogonal $D_2$-ring spectrum such that $\m{M}:=\m{\pi}_0^{D_2}E$ is a $\m{\bZ}$-module and $R^1\lim_{k}\m{\pi}_0^{D_2}\THR(E)^{\mu_{p^k}}=0$, there is an isomorphism \[ \m{\mathbb{W}}(\m{M};p) \cong \W(\HR_0^{D_2}(\m{M});p ) \] of Tambara functors. \end{prop}
\begin{proof} This is a direct corollary of Theorem \ref{witt thr} and \cite[Corollary 3.14]{DMP19}. \end{proof}
More generally, we have the following corollary of our work and \cite{DMP19}.
\begin{cor} Fix an odd prime $p$. When $E$ is a connective flat commutative orthogonal $D_2$-ring spectrum, $\m{M}=\m{\pi}_0^{D_2}E$, and \[ R^1\lim_{k}\m{\pi}_0^{D_2}\THR(E)^{\mu_{p^k}}=0, \] there are isomorphisms \begin{align*} \m{\mathbb{W}}(\m{M};p)(D_2/D_2) \cong &\widetilde{\W}(\HR_0^{D_2}(\m{M})(D_2/D_2);p) \text{ and }\\ \m{\mathbb{W}}(\m{M};p)(D_2/e) \cong &\W(\HR_0^{D_2}(\m{M})(D_2/e);p), \end{align*} where $\W(-;p)$ denotes the classical $p$-typical Witt vectors and $\widetilde{\W}(-;p)$ is defined as in \cite{DMP19}. \end{cor}
\begin{proof} We first note that as a direct consequence of Theorem \ref{witt thr} and \cite[Thm. 3.15]{DMP19} there is an isomorphism \[ \m{\mathbb{W}}(\m{M};p)(D_2/D_2) \cong \widetilde{\W}(B\otimes_{\phi} B;p), \] where $B=\m{M}(D_2/D_2)$ and $B\otimes_{\phi} B$ from \cite{DMP19} is exactly $\HR_0^{D_2}(\m{M})(D_2/D_2)$ in our notation. For the second isomorphism, recall from Theorem \ref{witt thr} that \[ \m{\mathbb{W}}(\m{M};p)(D_2/e) \cong \pi_0 \TRR(E;p). \] By \cite{HM97}, this is isomorphic to $\W(\pi_0 E;p)=\W(\HR_0^{D_2}(\m{M})(D_2/e);p)$. \end{proof}
\section{Computations}\label{sec: computations} In this section, we use the new algebraic framework from Section \ref{sec:HR} to do some concrete calculations. In particular, we present a computation of $\underline{\pi}_0^{D_{2m}}\THR(H\m{\bZ})$, where $m\geq 1$ is an odd integer and $\m{\mathbb{Z}}$ is the constant Mackey functor. The first step in this computation is a Tambara reciprocity formula for sums, which we present for a general finite group and which may be of independent interest.
\subsection{The Tambara reciprocity formulae} The most difficult relations in Tambara functors tend to be the interchange relations describing how to write the norm of a transfer as a transfer of norms of restrictions. These are described by the condition that if \[ \begin{tikzcd} {U} \ar[d, "g"] & {T} \ar[l, "h"'] & {U\times_S \prod_g(T)} \ar[l, "{f'}"'] \ar[d, "{g'}"] \\ {S} & & {\prod_g(T)} \ar[ll,"{h'}"'] \end{tikzcd} \] is an exponential diagram, then we have \[ N_g\circ T_h=T_{h'}\circ N_{g'}\circ R_{f'}. \] The formulae called ``Tambara reciprocity'' unpack this in two basic cases: \(g: G/H\to G/K\) is a map of orbits and \begin{enumerate} \item \(h\colon G/H\amalg G/H\to G/H\) is the fold map or \item \(h\colon G/J\to G/H\) is a map of orbits. \end{enumerate} These respectively describe universal formulae for \[ N_H^K(a+b)\text{ and } N_H^K tr_J^H(a). \] In general, these can be tricky to specify, since we have to understand the general form of the dependent product (or equivalently here, coinduction).
\begin{lem}[{\cite[Proposition 2.3]{HillMaz19}}] If \(h\colon T\to G/H\) is a morphism of finite \(G\)-sets, with \(T_0=h^{-1}(eH)\) the corresponding finite \(H\)-set, and if \(g\colon G/H\to G/K\) is the quotient map corresponding to an inclusion \(H\subset K\), then we have an isomorphism of \(G\)-sets \[ \prod_{g}(T)\cong G\times_K \Map^H(K,T_0).
\] \end{lem}
This entire argument is induced up from \(K\) to \(G\), so it suffices to study the case \(K=G\). In this formulation, the map \(f'\) along which we restrict is the map \[ G/H\times \Map^H(G,T_0)\to G\times_H T_0 \] given by \[ (gH,F)\mapsto \big[g,F(g)\big]. \] Since the transfer along the fold map is the sum, we can understand the exponential diagram by further pulling back along the inclusions of orbits in \(\Map^H(G,T_0)\). Let \(F\in \Map^H(G,T_0)\) be an element, let \(G\cdot F\) be the orbit, and let \(K=\Stab(F)\) be the stabilizer. We can now unpack the orbit decomposition of \(G/H\times G/K\) and the maps to \(T\) and to \(G/K\cong G\cdot F\). We depict the exponential diagram, together with the pullback along the inclusion of the orbit \(G\cdot F\) and orbit decompositions of the relevant pieces, in Figure~\ref{fig:ExtendedExponential}.
\begin{figure}[ht] \adjustbox{scale=.6,center}{ \begin{tikzcd}[ampersand replacement=\&] {} \& {} \& {} \& {\coprod_{[\gamma]\in H\backslash G/K} G\cdot (eH, \gamma\cdot F)} \ar[d, "\cong"'] \ar[dll, "{[g,F(\gamma^{-1})]\leftmapsto (gH,g\gamma\cdot F)}"{sloped, auto}] \& {\coprod_{[\gamma]\in H\backslash G/K} G/(H\cap \gamma K\gamma^{-1})} \ar[l,"\amalg\cong"'] \ar[d, "\cong"] \& {\coprod_{[\gamma]\in K\backslash G/H} G/(K\cap \gamma^{-1} H\gamma)} \ar[ddl, "{\nabla_{K,H}}"] \ar[l, "\cong"'] \\ {G/H} \ar[d] \& {G\times_H T_0} \ar[l] \& {G/H\times\Map^{H}(G,T_0)} \ar[l, "{f'}"] \ar[d, "{g'}"'] \& {G/H\times G\cdot F} \ar[l, hook'] \ar[d] \& {G/H\times G/K} \ar[l, "\cong" description] \ar[d] \\ {\ast} \& {} \& {\Map^H(G,T_0)} \ar[ll, "{h'}"] \& {G\cdot F} \ar[l, hook'] \& {G/K} \ar[l, "\cong"] \end{tikzcd} } \caption{Unpacking the exponential diagram on orbits} \label{fig:ExtendedExponential} \end{figure}
The map labeled \(\nabla_{K,H}\) is the coproduct of the canonical projection maps \[ G/(K\cap\gamma^{-1}H\gamma)\to G/K, \] and the norm along this is, by definition, \[ N_{\nabla_{K,H}}=\prod_{[\gamma]\in K\backslash G/H} N_{K\cap \gamma^{-1}H\gamma}^{K}. \] Also by definition, the restriction along \(\prod c_{\gamma}\) is \[ \prod_{[\gamma]\in H\backslash G/K} \underline{M}\big(G/(H\cap \gamma K\gamma^{-1})\big) \xrightarrow{\prod_{[\gamma]\in K\backslash G/H} \gamma} \prod_{[\gamma]\in K\backslash G/H} \underline{M}\big(G/(K\cap \gamma^{-1} H\gamma)\big) \] for a Mackey or Tambara functor \(\underline{M}\), where \(\gamma\) here is the Weyl action. These give all the tools needed to understand the Tambara reciprocity formulae. We spell out the formula for a norm of a sum in general; we will not need the formula for the norm of a transfer here.
\begin{thm}\label{thm:TambRecipSums} Let \(G\) be a finite group and \(H\) a subgroup, and let \(\m{R}\) be a \(G\)-Tambara functor. For each \(F\in\Map^H\big(G,\{a,b\}\big)\), let \(K_F\) be the stabilizer of \(F\). Then for any \(a,b\in\m{R}(G/H)\), we have \begin{multline*} N_H^G(a+b)=\\ \sum_{[F]\in\Map^H(G,\{a,b\})/G} tr_{K_F}^G\left( \prod_{[\gamma]\in K_F\backslash G\slash H}\!\! N_{K_F\cap (\gamma^{-1}H\gamma)}^{K_F}\Big(\gamma res_{H\cap (\gamma K_F\gamma^{-1})}^{H}\big(F(\gamma^{-1})\big)\Big) \right) \end{multline*} \end{thm}
\begin{proof} This follows immediately from the preceding discussion. The only step to check is the identification of the restriction. This follows from the identification of the map \(f'\) with the evaluation map.
In the case \(T_0=\{a,b\}\), where we blur the distinction between \(a\) and \(b\) as elements of \(\m{R}(G/H)\) and as dummy variables, the map \[ G/\big(H\cap (\gamma K\gamma^{-1})\big)\to G/H\times\{a,b\} \] coincides with the canonical quotient onto the summand specified by evaluating \(F\) at \(\gamma^{-1}\). \end{proof}
Here we only need the Tambara reciprocity formulae for dihedral groups, so we now restrict attention to these cases.
\subsection{Formulae for dihedral groups} We use the following two lemmas to describe the coinductions needed for the Tambara reciprocity formulae for \[ N_{D_{2m}}^{D_{2n}}(a+b)\text{ and } N_{D_{2m}}^{D_{2n}} tr_{\mu_m}^{D_{2m}}(a), \] where \(n/m\) is an odd prime.
\begin{lem}\label{unstable multiplicative double coset formula} Let $H,K$ be subgroups of a finite group $G$ and let $T$ be a $G$-set. Then there is a natural bijection \[ \Map^H(G,T)^K\cong \prod_{K \gamma H\in K\backslash G \slash H}\Map^H(K\gamma H,T)^K\cong \prod_{K \gamma H\in K\backslash G \slash H}T^{(\gamma^{-1}K\gamma\cap H)}. \] When $T$ has trivial action this simplifies to \[ \Map^H(G,T)^K\cong \prod_{K \gamma H\in K\backslash G \slash H}T. \] \end{lem}
\begin{proof} Regarding $G$ as a $K\times H^{\op}$-space, there is an isomorphism \[ \iota_{K\times H^{\op}}^*G\cong \coprod_{\gamma}K\gamma H \] where $\gamma$ ranges over representatives for the double cosets $K\gamma H$ in $K\backslash G \slash H$. The \(K\)-fixed points of the coinduction up from \(H\) to \(G\) are the same as the \(K\times H^{\op}\)-fixed points of just the set of maps out of \(G\). This gives a natural (in \(T\)) bijection \[ \Map^H(G,T)^K\cong \Map^H\left(\coprod_{K\backslash G \slash H} K\gamma H,T\right)^K \cong \prod_{K\backslash G \slash H}\Map^H(K\gamma H,T)^K \] as desired. To understand each individual factor, we use the quotient map \(\pi_K\colon K\times H\to H\) to rewrite the fixed points: \[ \big(\Map^H(K\gamma H,T)\big)^K\cong \Map^{K\times H}(K\gamma H,\pi_K^\ast T). \] Since, by definition of the pullback, \(K\) acts trivially on \(T\), the map factors through the orbits \(K\backslash K\gamma H\), which is an \(H\)-orbit. The classical double coset formula identifies this with \(H/(H\cap \gamma^{-1}K\gamma)\), and the result follows. The simplification follows from the action being trivial, and hence all points being fixed. \end{proof}
\begin{lem}\label{lem: coinduced} Let $p$ be an odd prime. There is an isomorphism of $D_{2p}$-sets \[ \Map^{D_{2}}(D_{2p},\{a,b\}) \cong \ast\amalg\ast\amalg \coprod_{i=1}^{2^{(p+1)/2}-2} D_{2p}/D_2 \amalg \coprod_{i=1}^{((2^{p-1}-1)/p)+1-2^{(p-1)/2}}D_{2p}. \] \end{lem}
\begin{proof} We observe that the only fixed points with respect to $\mu_{p}$, and hence with respect to any subgroup of $D_{2p}$ containing $\mu_{p}$, are the constant maps \(f_a\) and \(f_b\) sending $D_{2p}$ to $a$ or to $b$, respectively. This gives the first two summands. The $D_2$-fixed points form a product of $(p+1)/2=|D_2\backslash D_{2p}\slash D_2|$ copies of $\{a,b\}$, and two of these fixed points are the constant maps. Combining this information, we have $2^{(p+1)/2}-2$ copies of $D_{2p}/D_2$, each contributing one $D_2$-fixed point. The remaining summands must be given by copies of the $D_{2p}$-set $D_{2p}$, and, examining cardinalities, the number of copies of $D_{2p}$ must be $((2^{p-1}-1)/p)+1-2^{(p-1)/2}$. \end{proof}
\begin{remark} There is a geometric interpretation of this.
The \(D_{2p}\)-set \(\Map^{D_2}(D_{2p},\{a,b\})\) can be thought of as the set of ways to label the vertices of the regular \(p\)-gon with labels \(a\) or \(b\). The stabilizer of a function is the collection of those rigid motions which preserve the labeling. Those with stabilizer \(D_2\) are the labelings that are symmetric with respect to the reflection through some fixed vertex and the center. Such a labeling depends only on the label at the chosen vertex and on the \(\tfrac{p-1}{2}\) labels of the next vertices, moving either clockwise or counterclockwise from that vertex. Of these, there are two that are special: the two constant labelings. \end{remark}
\begin{notation}\label{not:Cardinalities Of Fixed Points} For an odd prime \(p\), let \[ c_p=2^{\frac{p-1}{2}}-1\text{ and } d_p=\frac{2^{p-1}-1}{p}-c_p. \] \end{notation}
\begin{lem} Let $p$ be an odd prime. We have an isomorphism of \(D_{2p}\)-sets \[ \Map^{D_2}(D_{2p},D_2)\cong D_{2p}/\mu_p\amalg \coprod^{d_p+c_p} D_{2p}. \] \end{lem}
\begin{proof} Since \(D_2\) is a free \(D_2\)-set, there are no \(D_{2p}\)-fixed points, by the universal property of coinduction. On the other hand, \(D_2\) as a \(D_2\)-set is actually in the image of the restriction from \(D_{2p}\)-sets via the quotient map \(D_{2p}\to D_2\), and this is compatible with the inclusion of \(D_2\) into \(D_{2p}\). This allows us to rewrite our \(D_{2p}\)-set as \[ \Map^{D_2}(D_{2p},D_2)\cong \Map(D_{2p}/D_2,D_{2p}/\mu_p). \] Since \(\mu_p\) acts trivially on the target, the \(\mu_p\)-fixed points are \[ \Map(D_{2p}/D_2,D_{2p}/\mu_p)^{\mu_p}=\Map(D_{2p}/\mu_pD_2,D_{2p}/\mu_p)\cong D_{2p}/\mu_p. \] Finally, since \(D_2\) acts freely on the target, there are no \(D_2\)-fixed points (and hence none for any of its conjugates). Counting gives the desired answer. \end{proof}
We need two much more general versions of these identifications, both of which follow from the preceding lemmas.
\begin{lem}\label{lem:NormalSG} If \(N\subset H\subset G\) with \(N\) a normal subgroup of \(G\), then for any \(H\)-set \(T\) with \(T=T^N\), we have a natural bijection of \(G\)-sets \[ \Map^H(G,T)\cong \Map^{H/N}(G/N,T). \] \end{lem}
\begin{proof} Since \(N\) is a normal subgroup of \(G\), for any \(g\in G\), \(n\in N\), and \(f\in\Map^H(G,T)\), we have \[ f(gn)=f(c_g(n)g)=c_g(n) f(g)=f(g), \] where the first equality is by normality of \(N\), the second is by \(H\)-equivariance of \(f\), and the third is by the condition that \(T^N=T\). \end{proof}
\begin{cor} Let $p$ be an odd prime. For any \(m\geq 1\), there are isomorphisms of $D_{2pm}$-sets \[ \Map^{D_{2m}}(D_{2pm},\{a,b\}) \cong \{f_a,f_b\}\amalg \coprod_{i=1}^{2c_p} D_{2pm}/D_{2m} \amalg \coprod_{i=1}^{d_p}D_{2pm}/\mu_{m} \] and \[ \Map^{D_{2m}}(D_{2pm},D_{2m}/\mu_m)\cong D_{2pm}/\mu_{pm}\amalg \coprod^{d_p+c_p} D_{2pm}/\mu_{m}. \] \end{cor}
\begin{proof} This follows from Lemma~\ref{lem:NormalSG}, taking the subgroup \(N=\mu_{m}\): the quotient \(D_{2m}/\mu_m\) is \(D_2\), the quotient \(D_{2pm}/\mu_m\) is \(D_{2p}\), and the result follows from the previous lemmas. \end{proof}
We will now produce a Tambara reciprocity formula for sums for the group $D_{2p}$ when $p$ is an odd prime.
\begin{notation}\label{set of words} Let \[ X=\Big(\Map^{D_2}\big(D_{2p},\{a,b\}\big)\Big)^{D_{2}}-\{f_a,f_b\} \] be the set of non-constant \(D_{2}\)-fixed points; these are representatives for the orbits with stabilizer \(D_2\). For a point \(\underline{x}\in X\), let \(x_i=\underline{x}(\zeta_p^{i})\).
Let \[ Y=\Big(\Map^{D_2}\big(D_{2p},\{a,b\}\big)-D_{2p}\cdot X\Big)/D_{2p} \] be the set of free orbits in \(\Map^{D_2}\big(D_{2p},\{a,b\}\big)\), and for an equivalence class \([\underline{y}]\in Y\), let \(y_i=\underline{y}(\zeta_p^i)\). \end{notation}
\begin{lem}[Tambara Reciprocity for Sums for Dihedral groups] \label{lem:reciprocity} Let $D_{2p}$ be the dihedral group, where $p$ is an odd prime, with a generator $\tau$ of order $2$ and a generator $\zeta_p$ of order $p$, and let $D_2$ be the cyclic subgroup generated by $\tau$. Let $\m{S}$ be a $D_{2p}$-Tambara functor. Then for all $a$ and $b$ in $\m{S}(D_{2p}/D_2)$ \begin{align*} N_{D_2}^{D_{2p}}(a+b) = & N_{D_2}^{D_{2p}}(a) +N_{D_2}^{D_{2p}}(b) \\ & + \sum_{\underline{x}\in X }\tr_{D_2}^{D_{2p}}\left (x_0\prod^{(p-1)/2}_{i=1} N_{e}^ {D_{2}}(\zeta^{i}_p\res^{D_{2}}_{e}(x_i))\right )\\ & +\sum_{[\underline{y}] \in Y}\tr_{e}^{D_{2p}}\left (\prod_{i=1}^p \zeta^{i}_p \res^{D_{2}}_{e}(y_i) \right ), \end{align*} where $X$ and $Y$ are as in Notation \ref{set of words}. \end{lem}
\begin{proof} This follows from Theorem~\ref{thm:TambRecipSums}, using the identification of the coinduction given by Lemma~\ref{lem: coinduced}. \end{proof}
\begin{exm} Explicitly, in the case of $p=3$, we have the formula \begin{align*} N_{D_2}^{D_{6}}(a+b)=&N_{D_2}^{D_{6}}(a)+N_{D_2}^{D_{6}}(b)\\ &+\tr_{D_2}^{D_{6}}(b \cdot N_e^{D_2}(\zeta_3\cdot \res^{D_2}_e(a)))\\ &+ \tr_{D_2}^{D_{6}}(a \cdot N_e^{D_2}(\zeta_3\cdot \res^{D_2}_e(b))), \end{align*} since in this case the set $Y$ is empty. When $p=7$, abbreviating $a_e=\res_e^{D_2}a$ and $b_e=\res_e^{D_2}b$, there is a summand \[ \tr_e^{D_{14}}\left ( \zeta_7b_e\cdot \zeta_7^2b_e\cdot \zeta_7^3a_e\cdot \zeta_7^4b_e\cdot \zeta_7^5a_e\cdot \zeta_7^6a_e\cdot \zeta_7^7a_e\right ).\] \end{exm}
\subsection{Truncated \texorpdfstring{$p$}{p}-typical Real Witt vectors of \texorpdfstring{$\m{\mathbb{Z}}$}{the constant Mackey functor Z}} For $p$ an odd prime, we compute the $D_{2p^{k}}$-Tambara functor $\m{\pi}_0^{D_{2p^k}}\THR(H\m{\mathbb{Z}})$, using the formula \[ \m{\pi}_0^{D_{2p^k}}\THR(H\m{\mathbb{Z}}) \cong N_{D_2}^{D_{2p^k}}\m{\bZ}\square_{N_{e}^{D_{2p^k}}\iota_e^*\m{\bZ}} N_{\zeta D_2\zeta^{-1}}^{D_{2p^k}}c_{\zeta}\m{\bZ} \] from Theorem \ref{thm:linearization} and Proposition \ref{prop:HRBox}. We therefore begin by computing the Mackey functor norm $N_{D_2}^{D_{2p^k}}\m{\bZ}$. Since we will be working both with dihedral groups as groups and with them as representatives of isomorphism classes of \(D_{2m}\)-sets in the corresponding Burnside ring, we will use distinct notation to keep track.
\begin{notation} If \(T\) is a finite \(G\)-set, then let \([T]\) denote the isomorphism class of \(T\) as an element of the Burnside ring. When \(T=G/G\), we will also simply write this as \(1\). \end{notation}
We also need some notation for generators of Mackey functors, especially representable ones.
\begin{notation} If \(T\) is a finite \(G\)-set, let \(\m{A}_{T}\cdot f\) be \(\m{A}_T\), with the canonical element \(T\xleftarrow{=}T\xrightarrow{=}T\) named \(f\). \end{notation}
\begin{lem}\label{z2todpknormofz} Let $p$ be an odd prime. There is an isomorphism of $D_{2p}$-Tambara functors \[ N_{D_2}^{D_{2p}}(\m{\bZ})\cong \m{A}^{D_{2p}}/\big(2-[D_{2p}/\mu_p]\big). \] \end{lem}
\begin{proof} The constant Mackey functor \(\m{\bZ}\) for \(D_2\) is the quotient of \(\m{A}^{D_2}\) by the element \(2-[D_2]\in\m{A}^{D_2}(D_2/D_2)\), using the conventions of Section \ref{rep functors}.
Equivalently, we can rewrite this as a coequalizer of maps, both of which are represented by multiplication by a fixed \(D_2\)-set: \[ \begin{tikzcd} \m{A}^{D_2}\ar[r,"2" ',shift right=1ex]\ar[r,shift left=1ex,"D_2"]& \m{A}^{D_2} . \end{tikzcd} \] We can extend this to a reflexive coequalizer by formally putting in the zeroth degeneracy, and this represents \(\m{\bZ}\) as a sifted colimit of free Mackey functors: \[ \begin{tikzcd} \m{A}^{D_2}\cdot a\oplus\m{A}^{D_2}\cdot b \ar[r, shift right=1ex, "d_0"'] \ar[r, shift left=1ex, "d_1"] & \m{A}^{D_2} \ar[l, "s_0" description] \ar[r] & \m{\bZ}, \end{tikzcd} \] where \(s_0(1)=b\), and where \[ d_0(b)=d_1(b)=1\text{ and } d_i(a)=\begin{cases} [D_2] & i=0 \\ 2 & i=1. \end{cases} \] The norm commutes with sifted colimits, so we deduce that we have a reflexive coequalizer diagram \[ \begin{tikzcd}[column sep=large] N_{D_2}^{D_{2p}}(\m{A}^{D_2}\cdot a\oplus \m{A}^{D_2}\cdot b) \ar[r,shift right=2.5ex,"N(d_1)" '] \ar[r,shift left=2.5ex,"N(d_0)"] & N_{D_2}^{D_{2p}}(\m{A}^{D_2}) \ar[l,"N(s_0)" description] \ar[r] & N_{D_2}^{D_{2p}}(\m{\bZ}). \end{tikzcd} \] The norm is defined by the left Kan extension of coinduction, so we have a canonical isomorphism for representable functors: \[ N_{D_2}^{D_{2p}} \big(\m{A}^{D_2}\oplus \m{A}^{D_2}\big)\cong N_{D_2}^{D_{2p}}\big(\m{A}_{\{a,b\}}^{D_2}\big)\cong \m{A}_{\Map^{D_2}(D_{2p},\{a,b\})}, \] and the norm of the Burnside Mackey functor for \(D_2\) is the Burnside Mackey functor for \(D_{2p}\). Here and from now on we simply write $\m{A}_T$ for $\m{A}_T^{D_{2p}}$. Lemma~\ref{lem: coinduced} determines the \(D_{2p}\)-set we see here: \[ \Map^{D_2}\big(D_{2p},\{a,b\}\big)=\{f_a\}\amalg \{f_b\}\amalg\coprod_{\m{x}\in X} D_{2p}/D_2\cdot \m{x}\amalg \coprod_{[y]\in Y} D_{2p}\cdot [y]. \] This decomposition gives a decomposition of the representable: \[ \m{A}_{\Map^{D_2}(D_{2p},\{a,b\})}\cong \m{A}\cdot f_{a}\oplus \m{A}\cdot f_{b}\oplus \bigoplus_{\m{x}\in X} \m{A}_{D_{2p}/D_2}\cdot \m{x}\oplus \bigoplus_{[y]\in Y} \m{A}_{D_{2p}}\cdot y. \] Since the direct sum is the coproduct in Mackey functors, we can view each summand in the coequalizer as independently introducing a relation on \(\m{A}\). We can therefore work one summand at a time, keeping track of the added relations. By the Yoneda Lemma, maps from a representable Mackey functor \(\m{A}_T\cdot f\) to \(\m{A}\) are in bijective correspondence with elements of \(\m{A}(T)\), and the bijection is given by evaluating a map of Mackey functors on the canonical element \(f\). To determine these, we work directly, using the definition of the representables. For a general summand parameterized by the orbit of a function \(D_{2p}/D_2\to\{a,b\}\), the value of the corresponding face map is built out of the function's values at the points of \(D_{2p}/D_2\). The slogan here is that this is simply a ``decategorification'' of the Tambara reciprocity formula we already described. The first case is the constant functions. Here, we have \[ d_i(f_*)=N_{D_2}^{D_{2p}}\big(d_i(*)\big), \] for \(*=a,b\). Both \(d_0\) and \(d_1\) agree on \(b\) with value \(1\), so the summand \(\m{A}\cdot f_b\) contributes no relation.
For the summand \(\m{A}\cdot f_a\), we use that the norms in the Burnside Tambara functor are given by coinduction: \begin{align*} d_0(f_a)&=N_{D_2}^{D_{2p}}\big([D_2]\big)=[D_{2p}/\mu_p]+(d_p+c_p)[D_{2p}]\text{ and}\\ d_1(f_a)&=N_{D_2}^{D_{2p}}(2)= 2+2c_p[D_{2p}/D_2]+d_p [D_{2p}], \end{align*} where \[ c_p=2^{\frac{p-1}{2}}-1\text{ and }d_p=\frac{2^{p-1}-1}{p}-c_p \] are as defined in Notation~\ref{not:Cardinalities Of Fixed Points}. Coequalizing these two maps introduces a relation \[ 2+2c_p[D_{2p}/D_2]+d_p[D_{2p}]-\big([D_{2p}/\mu_p]+(d_p+c_p)[D_{2p}]\big), \] which simplifies to \[ \big(2-[D_{2p}/\mu_p]\big)+c_p\big(2[D_{2p}/D_2]-[D_{2p}]\big) \] in \(\m{A}(D_{2p}/D_{2p})\). The second case we consider is the easiest one: the summands parameterized by \(Y\). Maps from \(\m{A}_{D_{2p}}\cdot y\) to \(\m{A}\) are in bijection with elements of \(\m{A}(D_{2p}/e)=\m{\bZ}\). The explicit value is the corresponding summand from the Tambara reciprocity formula: \[ d_i(y)= \prod_{j=1}^{p} \res_{e}^{D_2}\big(d_i(y_j)\big), \] since the Weyl action on the underlying abelian group in the Burnside Mackey functor is trivial. Since \(d_0(b)=d_1(b)=1\) and since \[ \res_{e}^{D_2}\big(d_0(a)\big)=\res_e^{D_2}\big(d_1(a)\big)=2, \] we find that both face maps always agree on these summands, with value given by \[ d_i(y)= 2^{|y^{-1}(a)|}. \] Finally, the trickiest summands are the ones parameterized by \(X\). Since we are mapping out of \[ \m{A}_{D_{2p}/D_{2}}\cong \Ind_{D_2}^{D_{2p}}\m{A}^{D_2}, \] by the induction-restriction adjunction, it suffices to understand instead the restriction to \(D_2\) of the target. Here we use the multiplicative double coset formula: for a general \(D_2\)-Mackey functor \(\m{M}\), we have \[ i_{D_2}^{\ast}N_{D_2}^{D_{2p}}\m{M}\cong \m{M}\square\DOTSB\bigop{\square}_{D_2\backslash D_{2p}\slash D_2-D_2eD_2} N_{e}^{D_2} i_e^{\ast}\m{M}. \] For the Burnside Mackey functor, there is a confusing collision: every Mackey functor in this expression is the Burnside Mackey functor, so we cannot distinguish between \(\m{A}^{D_2}\) as itself or as \(N_{e}^{D_2}\mathbb{Z}\). Writing things in terms of the actual norms of a generic Mackey functor helps disambiguate. To a function \(\m{x}\) with stabilizer \(D_2\), we again have the corresponding summand from the Tambara reciprocity formula: \[ d_i(\m{x})=d_i(x_0)\cdot\prod_{j=1}^{(p-1)/2} N_{e}^{D_2}\big(\res_{e}^{D_2}d_i(x_j)\big), \] where again the triviality of the Weyl action allows us to ignore it. Note also that, with the exception of \(x_0\), we actually only see the restriction of \(d_i(x_j)\). As we saw in the second case, the two face maps always agree on these factors, with value \(1\) if \(x_j=b\) and with value \(2\) if \(x_j=a\). If \(x_0=b\), then \( d_0(\m{x})=d_1(\m{x}), \) since the product factors always agreed. If \(x_0=a\), then we have \[ d_i(\m{x})=d_i(a)\cdot \prod_{j=1}^{(p-1)/2} N_{e}^{D_2}\res_e^{D_2} d_i(x_j)= d_i(a)(2+[D_2])^k, \] where \(k\) is the number of \(j\) between \(1\) and \((p-1)/2\) such that \(x_j=a\). The coequalizer therefore induces the relation \[ \big(2-[D_2]\big)\cdot \big(2+[D_2]\big)^k. \] Since these are multiples of the relation for \(k=0\), we deduce that all of these summands contribute exactly one relation: \[ 2-[D_2]\in\m{A}(D_{2p}/D_2).
\] Summarizing, we have that the norm \(N_{D_2}^{D_{2p}}\m{\bZ}\) is \[ \m{A}/\Big( \big(2[D_{2p}/D_{2p}]-[D_{2p}/\mu_p]\big) + c_p \big(2[D_{2p}/D_2]-[D_{2p}]\big), \big(2[D_{2}/D_2]-[D_2]\big) \Big), \] where, to help the reader keep track of where in the Mackey functor the relations are born, we replace \(1\in\m{A}(G/H)\) with \(H/H\). This simplifies in several ways, however. Since transfers in the Burnside Mackey functor are given by induction, we have \[ c_p\big(2[D_{2p}/D_{2}]-[D_{2p}]\big)=c_p tr_{D_2}^{D_{2p}} \big(2[D_{2}/D_{2}]-[D_2]\big), \] so we can remove this from the first relation with impunity, giving \[ \m{A}/\Big(2[D_{2p}/D_{2p}]-[D_{2p}/\mu_p], 2[D_{2}/D_{2}]-[D_{2}]\Big) . \] We also have \[ res_{D_2}^{D_{2p}}\big(2[D_{2p}/D_{2p}]-[D_{2p}/\mu_p]\big)=2[D_2/D_2]-[D_2], \] so we can now drop the second relation. This yields \[ N_{D_2}^{D_{2p}} \m{\bZ}\cong\m{A}/\big(2-[D_{2p}/\mu_p]\big). \] Since \(\m{\bZ}\) is a Tambara functor, \(N_{D_2}^{D_{2p}}\m{\bZ}\) is one as well, and as a quotient of \(\m{A}\), it has a unique Tambara functor structure. \end{proof}
This has a somewhat surprising consequence: the form of the norm is the same as what we started with, in that we are coequalizing two maps represented by \(G\)-sets of cardinality \(2\). Induction gives the following generalization.
\begin{thm}\label{thm:NormToDmofZ} For any odd integer \(m\geq 1\), we have an isomorphism of \(D_{2m}\)-Tambara functors \[ N_{D_2}^{D_{2m}}\m{\bZ} \cong \m{A}^{D_{2m}}/(2-[D_{2m}/\mu_{m}]). \] \end{thm}
We pause here to unpack this definition a little, since the quotient of Mackey functors by a congruence relation might be less familiar than the abelian group case.
\begin{defin} For any odd natural number \(m\), let \[ \m{R}_m=N_{D_2}^{D_{2m}}\m{\bZ}. \] \end{defin}
\begin{lem}\label{lem: restriction of the norm of Z} For any \(k\) dividing \(m\), we have \[ i_{D_{2k}}^\ast \m{R}_m\cong \m{R}_k, \] and \[ i_{\mu_k}^\ast \m{R}_m\cong \m{A}^{\mu_k}. \] \end{lem}
\begin{proof} Theorem~\ref{thm:NormToDmofZ} writes the norm $\m{R}_m$ as the coequalizer of \[ \begin{tikzcd} {\m{A}^{D_{2m}}} \ar[r, shift left=.5em, "2"] \ar[r, shift right=.5em, "{[D_{2m}/\mu_m]}"'] & {\m{A}^{D_{2m}}.} \end{tikzcd} \] Since the restriction functor on Mackey functors is exact, for any subgroup \(H\), the restriction of the norm is the coequalizer of \(2\) and \[ \res_{H}^{D_{2m}}[D_{2m}/\mu_m]=[i_H^\ast D_{2m}/\mu_m]. \] When \(H=D_{2k}\), we have \[ i_{D_{2k}}^\ast D_{2m}/\mu_m=D_{2k}/\mu_k, \] since there is a single double coset and \(D_{2k}\cap\mu_m=\mu_k\). This gives the first part. When \(H=\mu_m\), we have \[ i_{\mu_m}^\ast D_{2m}/\mu_m=\mu_m/\mu_m\amalg\mu_m/\mu_m, \] since \(\mu_m\) is normal. This implies that the restriction to \(\mu_m\) is \(\m{A}^{\mu_m}\), and hence the restriction to \(\mu_k\) is \(\m{A}^{\mu_k}\). \end{proof}
Since we are coequalizing two maps from the Burnside Mackey functor to itself, the value at \(D_{2m}/D_{2m}\) can also be readily computed. We need a small lemma about the products of certain \(D_{2m}\)-orbits.
\begin{prop}\label{prop: effect of two maps on basis} Let \(m\) be an odd natural number. Let \(H\subset D_{2m}\) be a subgroup, and let \(\ell=\gcd(m,|H|)\). Then we have \[ D_{2m}/H\times D_{2m}/\mu_m\cong \begin{cases} D_{2m}/\mu_{\ell} & |H|\text{ even},\\ D_{2m}/H\amalg D_{2m}/H & |H|\text{ odd}.
\end{cases} \] \end{prop}
\begin{proof} Since \(m\) is odd, any subgroup \(H\) is conjugate to either \(D_{2k}\) or \(\mu_k\), for \(k\) dividing \(m\), and the two cases are distinguished by the parity of the cardinality. Hence \(D_{2m}/H\) is isomorphic either to \(D_{2m}/D_{2k}\) or to \(D_{2m}/\mu_k\), in exactly the two cases in the statement. The result follows from the isomorphism \[ D_{2m}/H\times D_{2m}/\mu_m\cong D_{2m}\timesover{H} i_H^\ast D_{2m}/\mu_m, \] and our earlier analysis of the restrictions. \end{proof}
\begin{cor} For any odd \(m\), \(\m{R}_{m}(\ast)\) is a free abelian group: \[ \m{R}_m(\ast)\cong\mathbb Z\big\{[D_{2m}/D_{2k}]\mid k\vert m\big\}. \] The image of \([D_{2m}/D_{2k}]\in\m{A}^{D_{2m}}(\ast)\) is \([D_{2m}/D_{2k}]\), while the image of \([D_{2m}/\mu_k]\in\m{A}^{D_{2m}}(\ast)\) is \(2[D_{2m}/D_{2k}]\). \end{cor}
\begin{proof} Proposition~\ref{prop: effect of two maps on basis} describes the effect of the two maps on the standard basis for the Burnside ring. We see that \[ \big(2-[D_{2m}/\mu_m]\big)\cdot [D_{2m}/\mu_k]=0, \] while \[ \big(2-[D_{2m}/\mu_m]\big)\cdot [D_{2m}/D_{2k}]=2[D_{2m}/D_{2k}]-[D_{2m}/\mu_k]. \] This gives both the additive result and the images. \end{proof}
Lemma~\ref{lem: restriction of the norm of Z} then shows that essentially the same statement is true for the values at dihedral subgroups.
\begin{cor} For any odd \(m\) and any \(k\) dividing \(m\), we have an isomorphism \[ \m{R}_m(D_{2m}/D_{2k})\cong \mathbb Z\big\{ [D_{2k}/D_{2j}]\mid j\vert k\big\}. \] \end{cor}
We can also spell out the restriction and transfer maps here. The restriction and transfer to the odd order cyclic subgroups are easier, since there is a unique maximal one.
\begin{prop} The restriction map \[ \m{R}_m(*)\to \m{R}_m(D_{2m}/\mu_m)\cong \m{A}^{\mu_m}(\mu_m/\mu_m) \] is given by \[ [D_{2m}/D_{2k}]\mapsto [\mu_m/\mu_k]. \] The transfer map is given by \[ [\mu_m/\mu_k]\mapsto 2[D_{2m}/D_{2k}]. \] \end{prop}
\begin{proof} These follow from the restriction and induction in \(D_{2m}\)-sets, together with the relation \[ [D_{2m}/\mu_k]=2[D_{2m}/D_{2k}] \] in $\m{R}_m$. \end{proof}
For the restrictions and transfers to dihedral subgroups, we consider a maximal proper divisor. Let \(p\) be a prime dividing \(m\), and let \(k=m/p\).
\begin{prop}\label{prop: res and tr} The restriction map \[ \m{R}_m(*)\to \m{R}_k(*) \] is given by \[ [D_{2m}/D_{2j}]\mapsto \frac{p\ell}{j}[D_{2k}/D_{2\ell}], \] where \(\ell=\gcd(k,j)\). The transfer maps are given by \[ [D_{2k}/D_{2j}]\mapsto [D_{2m}/D_{2j}]. \] \end{prop}
\begin{proof} The transfer maps are immediate. For the restriction, since \(m\) is odd, the normalizer of any dihedral subgroup is itself. The intersection of \(D_{2k}\) with \(D_{2j}\) is the dihedral group \(D_{2\ell}\), while the intersection of \(D_{2k}\) with any other conjugate of \(D_{2j}\) is just the intersection \(\mu_j\cap\mu_k=\mu_\ell\). This means that it suffices to count cardinalities. This gives \[ i_{D_{2k}}^{\ast} D_{2m}/D_{2j}=D_{2k}/D_{2\ell}\amalg \coprod^{a} D_{2k}/\mu_\ell, \] where \(a=\frac{p\ell-j}{2j}\). Since \([D_{2k}/\mu_{\ell}]=2[D_{2k}/D_{2\ell}]\), the result follows. \end{proof}
\begin{thm}\label{propz} Let $m\ge 1$ be an odd integer. There is an isomorphism of $D_{2m}$-Mackey functors \[ \underline{\pi}_0^{D_{2m}}\THR(H\m{\bZ})\cong \m{A}^{D_{2m}}/(2-[D_{2m}/\mu_{m}]),\] where $(2-[D_{2m}/\mu_m])$ is the ideal generated by $2-[D_{2m}/\mu_m]$ in the Tambara functor $\m{A}^{D_{2m}}$.
\end{thm}
\begin{proof} By Proposition \ref{prop:HRBox} and Theorem \ref{thm:linearization}, it suffices to compute the coequalizer \[ \xymatrix{ N_{D_2}^{D_{2m}}\m{\bZ} \square N_{e}^{D_{2m}}\mathbb{Z} \square N_{\zeta D_2\zeta^{-1}}^{D_{2m}} c_{\zeta}\m{\bZ} \ar@<.5ex>[r] \ar@<-.5ex>[r] & N_{D_2}^{D_{2m}}\m{\bZ}\square N_{\zeta D_2\zeta^{-1}}^{D_{2m}} c_{\zeta}\m{\bZ}. } \] For any \(G\), \(N_e^G\mathbb{Z}\) is the Burnside Mackey functor, the symmetric monoidal unit. The \(E_0\)-structure map here is just the unit \[ \m{A}^{D_{2m}}\to N_{D_2}^{D_{2m}}\m{\bZ}. \] By Theorem~\ref{thm:NormToDmofZ}, this is surjective, so \(N_{D_2}^{D_{2m}}\m{\bZ}\) is a ``solid'' Green functor in the sense that the multiplication map is an isomorphism. Finally, note that the argument we gave to identify \(N_{D_2}^{D_{2m}}\m{\bZ}\) did not depend on the choice of \(D_2\) inside \(D_{2m}\), so we have an isomorphism \[ N_{D_2}^{D_{2m}} \m{\bZ}\cong N_{\zeta D_2\zeta^{-1}}^{D_{2m}} \m{\bZ} \] of Tambara functors. We deduce that all pieces in the coequalizer diagram are just \(N_{D_2}^{D_{2m}}\m{\bZ}\). \end{proof}
When restricted to $\m{\pi}_0^{D_2}(\THR(H\m{\mathbb{Z}})^{\mu_{p^k}})$, our computation recovers the computation in \cite{DMP19}.
\begin{cor} Let $p$ be an odd prime. Then there are isomorphisms of abelian groups \[ \m{\pi}_0^{D_{2p^k}}(\THR(H\m{\bZ}))(D_{2p^k}/D_{2p^k}) \cong \m{\pi}_0^{D_{2p^k}}(\THR(H\m{\bZ}))(D_{2p^k}/\mu_{p^k})\cong \W_{k+1}(\mathbb{Z};p).\] \end{cor}
In fact, since we computed the restriction and transfer maps, we have the following computation.
\begin{cor} There is an isomorphism of Mackey functors \[ \m{\mathbb{W}}_{k}(\m{\mathbb{Z}};p)\cong\underline{\W_{k}(\mathbb{Z};p)}\] for odd primes $p$. \end{cor}
\bibliographystyle{plain}
\section{Introduction} A billiard ball, i.e.\ a point mass, moves inside a polyhedron $P$ with unit speed along a straight line until it reaches the boundary $\partial{P}$, then it instantaneously changes direction according to the mirror law, and continues along the new line. Label the faces of $P$ by symbols from a finite alphabet $\mathcal{A}$ whose cardinality equals the number of faces of $P$. Consider the set of all billiard orbits. After coding, the set of all the words is a language. We define the complexity of the language, $p(n)$, as the number of words of length $n$ that appear in this system. How complex is the billiard inside a polygon or a polyhedron? For the cube the computations have been done, see \cite{moi1, moi2}, but there is no result for a general polyhedron. One way to answer this question is to compute the topological entropy of the billiard map. There are three different proofs that polygonal billiards have zero topological entropy \cite{Ka,Ga.Kr.Tr,Gu.Ha}. Here we consider the billiard map inside a polyhedron and want to compute its topological entropy. The idea is to adapt the proof of Katok; thus we must compute the metric entropy of each ergodic measure. When we follow this proof, some difficulties appear. In particular, a non-atomic ergodic measure for the associated shift can have its support included in the boundary of the definition set. Such examples have been known for some piecewise isometries of $\mathbb{R}^2$ since the works of Adler, Kitchens and Tresser \cite{Ad.Ki.Tr} and of Goetz and Poggiaspalla \cite{Go,Go.Pog}. Piecewise isometries and billiards are related since the first return map of the directional billiard flow inside a rational polyhedron is a piecewise isometry. Our main result is the following.
\begin{theorem}\label{entro} Let $P$ be a convex polyhedron of $\mathbb{R}^3$ and let $T$ be the billiard map; then $$h_{top}(T)=0.$$ \end{theorem}
\begin{corollary} The complexity of the billiard map satisfies $$\lim_{n\rightarrow+\infty}\frac{\log{p(n)}}{n}=0.$$ \end{corollary}
For the standard definitions and properties of entropy we refer to Katok and Hasselblatt \cite{Ha.Ka}.
\subsection{Overview of the proof} We consider the shift map associated to the billiard map, see Section 2, and compute the metric entropy of each ergodic measure of this shift. We must treat several cases depending on the support of the measure. If the ergodic measure has its support included in the definition set, then the method of Katok can be used with minor changes, see Section 3. The other case cannot appear in dimension two and represents the main problem in dimension three. We treat this case by looking at the billiard orbits which pass through singularities. By a geometric argument we prove in Section 4 that the support of such a measure is the union of two sets: a countable set and a set of words whose complexity can be bounded, see Proposition \ref{entroscruc} and Lemma \ref{3entrosur}. If we want to generalize this result to any dimension, some problems appear. In dimension three, we treat two cases by different methods depending on the dimension of the cells. In dimension $d$ there would be at least $d-1$ different cases, and at present we have no method for them. Moreover we would have to generalize Lemma \ref{eqdim3} and the lemmas which follow it. Unfortunately this is much harder and cannot be done by direct computation.
\section{Background and notations}
\subsection{Definitions} We consider the billiard map inside a convex polyhedron $P$.
This map is defined on the set $E\subset\partial{P}\times\mathbb{PR}^3$ by the following method. First we define the set $E'\subset\partial{P}\times\mathbb{PR}^3$: a point $(m,\theta)$ belongs to $E'$ if and only if one of the two following conditions holds: \\ $\bullet$ The line $m+\mathbb{R}^*[\theta]$ intersects an edge of $P$, where $[\theta]$ is a vector of $\mathbb{R}^3$ which represents $\theta$.\\ $\bullet$ The line $m+\mathbb{R}^*[\theta]$ is included in the face of $P$ which contains $m$. Then we define $E$ as the set $$E=(\partial{P}\times \mathbb{PR}^3)\setminus E'.$$ Now we define the map $T$: consider $(m,\theta)\in E$; then $T(m,\theta)=(m',\theta')$ if and only if the vector $mm'$ is collinear with $[\theta]$ and $[\theta']=s[\theta]$, where $s$ is the linear reflection over the face which contains $m'$. $$T:E\rightarrow \partial{P}\times\mathbb{PR}^3$$ $$T:(m,\theta)\mapsto (m',\theta')$$
\begin{remark} In the following we identify $\mathbb{PR}^3$ with the unit vectors of $\mathbb{R}^3$ (i.e.\ we identify $\theta$ and $[\theta]$). \end{remark}
\begin{definition} The set $E$ is called the phase space. \end{definition}
\begin{figure}[h] \includegraphics[width= 5cm]{billardcube.pdf} \caption{Billiard map inside the cube} \end{figure}
\subsection{Combinatorics}
\begin{definition} Let $\mathcal{A}$ be a finite set called the alphabet. By a language $L$ over $\mathcal{A}$ we always mean a factorial extendable language: a language is a collection of sets $(L_n)_{n\geq 0}$ where the only element of $L_0$ is the empty word, and each $L_n$ consists of words of the form $a_1a_2\dots a_n$ where $a_i\in\mathcal{A}$, such that for each $v\in L_n$ there exist $a,b\in\mathcal{A}$ with $av,vb\in L_{n+1}$, and for all $v\in L_{n+1}$, if $v=au=u'b$ with $a,b\in\mathcal{A}$ then $u,u'\in L_n$.\\ The complexity function of the language $L$, $p:\mathbb{N}\rightarrow\mathbb{N}$, is defined by $p(n)=card(L_n)$. \end{definition}
\subsection{Coding} We label each face of the polyhedron with a letter from the alphabet $\{1\dots N\}$. Let $E$ be the phase space of the billiard map and $d=\{d_1,\dots,d_N\}$ the cover of $E$ related to the coding. The phase space is of dimension four: two coordinates for the point on the boundary of $P$ and two coordinates for the direction. Let $E_0$ be the set of points of $E$ such that, for all $n\in\mathbb{Z}$, $T^n$ is defined and continuous in a neighborhood. Denote by $\phi$ the coding map, that is, the map $$\phi : E_0\rightarrow\{1,\dots, N\}^{\mathbb{Z}},$$ $$\phi(p)=(v_n)_\mathbb{Z},$$ where $v_n$ is defined by $T^n(p)\in d_{v_n}$. Let $S$ denote the shift map on $\{1\dots N\}^{\mathbb{Z}}$. We have the diagram \begin{equation*} \begin{CD} E_0 @>T>> E_0\\ @V{\phi}VV @VV{\phi}V\\ \phi(E_0) @>>S> \phi(E_0) \end{CD} \end{equation*} with the equation $\phi\circ T=S\circ\phi.$ We want to compute the topological entropy of the billiard map; we define it as the topological entropy of the subshift, see Definition \ref{htop}. We remark that the proof of Theorem \ref{entro} given in \cite{Ga.Kr.Tr} as a corollary of their result is not complete: they do not consider the case where the ergodic measure is supported on the boundary of $\phi(E_0)$.
\subsection{Notations} Let $\Sigma$ be the closure of $\phi(E_0)$, and consider the cover $$d\vee T^{-1}d\vee \dots\vee T^{-n+1}d.$$ The cover $d$, when restricted to $E_0$, is a partition. The sets of this cover are called $n$-cells.
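For instance, if $P$ is the cube with its faces labelled by $\{1,\dots,6\}$, where the faces $1$ and $2$ are parallel, then an orbit starting at an interior point of face $1$ in the direction orthogonal to it never meets an edge: it bounces between the faces $1$ and $2$ and is coded by the periodic word $\dots 121212\dots$, and the $2$-cell of the cover $d\vee T^{-1}d$ containing it is $d_1\cap T^{-1}d_2$.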
If $v\in\Sigma$ we denote $$\sigma_v=\bigcap_{n\in\mathbb{Z}}\overline{T^{-n}(d_{v_n}\cap E_0)}= \displaystyle\bigcap_{n\in\mathbb{Z}}T^{-n}d_{v_n}.$$ It is the closure of the set of points of $E_0$ whose orbit is coded by $v$. If $v\in\phi(E_0)$ then $\sigma_v$ is equal to $\phi^{-1}(v)$. We denote $d^{-}=\displaystyle \bigvee^{\infty}_{n=0}T^{-n}d$ and $$\sigma_v^{-}=\displaystyle\bigcap_{n\geq 0}\overline{T^{-n}(d_{v_n}\cap\phi(E_0))} =\displaystyle\bigcap_{n\geq 0}T^{-n}d_{v_n}.$$
\begin{definition}\label{xi} Let $\xi=\{c_1,\dots,c_N\}$ be the partition of $\Sigma$ given by $$c_i=\overline{\phi(d_i\cap E_0)}.$$ \end{definition}
Finally we can define the topological entropy.
\begin{definition}\label{htop} Consider a polyhedron of $\mathbb{R}^3$, and $T$ the billiard map, then we define $$h_{top}(T)=\lim_{n\rightarrow +\infty}\frac{\log p(n)}{n},$$ where $p(n)$ is the number of $n$-cells. \end{definition}
This definition is made with the help of the following lemma, which links it to the topological entropy of the shift.
\begin{lemma}\label{entrodef} With the same notation $$\lim_{n\rightarrow +\infty}\frac{\log{p(n)}}{n}=h_{top}(S|\Sigma).$$ \end{lemma}
\begin{proof} The partition $\xi$, see Definition \ref{xi}, is a topological generator of $(S|\Sigma)$ (see \cite{Pet} for a definition), thus $$h(S|\Sigma)=\lim_{n\rightarrow +\infty}\frac{\log{card \xi_n}}{n},$$ and we have ${\rm card}(\xi_n)=p(n).$ \end{proof}
\begin{remark} The number of $n$-cells, $p(n)$, is equal to the complexity of the language $\Sigma$.\\ There are several other possible definitions (Bowen's definition, $\dots$) but we use this one since we are interested in the complexity function of the billiard map. \end{remark}
\subsection{Billiard}
\subsubsection{Cell} We denote by $\pi$ the following map: $$\pi: \partial{P}\times\mathbb{PR}^3\rightarrow \mathbb{PR}^3$$ $$\pi:(m,\theta)\mapsto \theta.$$ Consider an infinite word $v\in\phi(E_0)$.
\begin{definition} We consider the elements $(m,\theta)$ of $\partial{P}\times\mathbb{PR}^3$ as vectors $\theta$ with base point $m$. \\ We say that $X\subset \partial{P}\times\mathbb{PR}^3$ is a strip if all $x\in X$ are parallel vectors whose base points form an interval.\\ We say that $X\subset \partial{P}\times\mathbb{PR}^3$ is a tube if all $x\in X$ are parallel vectors whose base points form an open polygon or an open ellipse.\\ \end{definition}
Now we recall the theorem of Galperin, Kruger and Troubetzkoy \cite{Ga.Kr.Tr}, which describes the shape of $\sigma_v^-$:
\begin{lemma}\label{dim} Let $v\in\phi(E_0)$ be an infinite word, then there are three cases:\\ The set $\sigma_v^-$ consists of only one point.\\ The set $\sigma_v^-$ is a strip.\\ The set $\sigma_v^-$ is a tube.\\ Moreover if $\sigma_v^-$ is a tube then $v$ is a periodic word. \end{lemma}
\begin{remark} The preceding lemma shows that $\phi$ is not injective on $E_0$. \end{remark}
By the preceding lemma, for each infinite word $v$ the set $\pi(\sigma_v^-)$ consists of a single direction. If the base points form an interval we say that $\sigma_v^-$ is of dimension one, and of dimension two if the base points form a polygon or an ellipse.
\begin{definition}\label{def} As in the preceding lemma, if $v$ is an infinite word we say that $\pi(\sigma_v^-)$ is the direction of the word.\\ Moreover if $v$ is an infinite word, we identify $\sigma_v^-$ with the set $a$ of base points, which fulfills $\sigma_v^-=a\times \pi(\sigma_v^-)$. \end{definition}
\subsubsection{Geometry} First we define rational polyhedra.
Let $P$ be a polyhedron of $\mathbb{R}^3$ and consider the linear reflections $s_i$ over the faces of $P$.
\begin{definition} We denote by $G(P)$ the group generated by the $s_i$, and we say that $P$ is rational if $G(P)$ is finite. \end{definition}
In $\mathbb{R}^2$ a polygon is rational if and only if all the angles are rational multiples of $\pi$. Thus the rational polygons with $k$ edges are dense in the set of polygons with $k$ edges. In higher dimension, there is no simple characterization of rational polyhedra; moreover their set is not dense in the set of polyhedra with fixed combinatorial type (number of edges, vertices, faces). A useful tool in the study of billiards is the unfolding. When a trajectory hits a face, the line is reflected. The unfolding consists in following the same straight line and reflecting the polyhedron over the face. For example, for the billiard in the square/cube we obtain the usual square/cube tiling. In the following we will use this tool, and an edge means an edge of an unfolded polyhedron.
\subsection{Related results} If $P$ is a rational polyhedron, then we can define the first return map of the directional flow in a fixed direction $\omega$. This map $T_{\omega}$ is a polygon exchange (a generalization of an interval exchange). Gutkin and Haydn have shown:
\begin{theorem}\cite{Gu.Ha} Let $P$ be a rational polyhedron and $\omega\in\mathbb{S}^2$, then $$h_{top}(T_{\omega})=0.$$ Moreover if $\mu$ is any invariant measure then $$h_{\mu}(T_{\omega})=0.$$ \end{theorem}
Buzzi \cite{Bu} has generalized this result: he proves that every piecewise isometry of $\mathbb{R}^n$ has zero topological entropy. Remark that a polygon exchange is a piecewise isometry.
\section{Variational principle} We use the variational principle to compute the entropy $$h_{top}(S|\Sigma)=\displaystyle\sup_{\mu\ \mathrm{ergodic}} h_{\mu}(S|\Sigma).$$ Remark that we cannot apply it to the map $T$ since it is not continuous on a compact metric space. The knowledge of $h_\mu(T)$ does not allow us to compute $h_{top}(T)$. We are not interested in the atomic measures because the associated system is periodic, and thus their entropy is equal to zero. We split into two cases: $supp(\mu)\subset\phi(E_0)$ or not. We begin by treating the first case, which is in the same spirit as the argument of Katok \cite{Ka}.
\begin{figure}[t] \begin{center} \includegraphics[width= 6cm]{3ef1.pdf} \caption{Billiard invariant} \label{e31} \end{center} \end{figure}
\begin{lemma}\label{lemka} Let $\mu$ be an ergodic measure with support in $\phi(E_0)$. We denote $\xi^{-}=\displaystyle\bigvee^{\infty}_{n=0}S^{-n}\xi$, where $\xi$ is defined in Definition \ref{xi}. Up to a set of $\mu$ measure zero we have $$S\xi^{-}=\xi^{-}.$$ \end{lemma}
\begin{proof} As $\mu(\phi(E_0))=1$, the cover $\xi$ can be thought of as a partition of $\phi(E_0)$. Let $v\in\phi(E_0)$; then the set $\sigma_v^-$ can be thought of as an element of $d^{-}$. The set $\overline{\phi(\sigma_{v}^-\cap E_0)}$ coincides with the element of $\xi^{-}$ which contains $v$. By Lemma \ref{dim} the dimension of $\sigma_v^-$ can take three values. We have $\sigma_{S^{-1}v}^-\subset T^{-1}\sigma_{v}^-$, thus the set of $v$ such that $\sigma_{v}^-$ is a point is invariant by $S$. The ergodicity of $\mu$ implies that this set either has zero measure or full measure. Assume it is of full measure; then $d^{-}$ is a partition into points, and the same holds for $\xi^{-}$. Then $\xi^{-}$ is a refinement of $S\xi^{-}$, and this implies that those two partitions are equal.
Assume it is of zero measure. Then by ergodicity there are two cases: $\sigma_{v}^-$ is an interval for a set of $v$ of full measure, or $\sigma_v^-$ is of dimension two for a set of $v$ of full measure. $\bullet$ Assume $\sigma_{v}^-$ is an interval for a full measure set of $v$. If $\theta$ is the direction of $v$, consider the strip $\sigma_v^-+\mathbb{R}\theta$. Consider a line included in the plane of the strip and orthogonal to the axis $\mathbb{R}\theta$, and denote by $L(\sigma_v^-)$ the length of the intersection of this line with the strip, see Figure \ref{e31}. Clearly we have $T(\sigma_{v}^-)\subset \sigma_{Sv}^-,$ thus $L(T\sigma_{v}^-)\leq L(\sigma_{Sv}^-).$ Since $L(T\sigma_v^-)=L(\sigma_v^-)$ we conclude that the function $L$ is a sub-invariant of $S$. Since $\mu$ is ergodic, the function $L$ is constant $\mu$-a.e. Thus for $\mu$-a.e.\ $v$ we obtain two intervals of the same length, one included in the other. They are equal. We deduce $\sigma_{Sv}^-=T\sigma_{v}^-.$ This implies that $v_1,v_2,\dots$ determine $v_0$ almost surely. It follows that $$S\xi^-=\xi^- \quad \mu\text{-a.e.}$$ $\bullet$ If $\sigma_{v}^-$ is of dimension $2$ for a set of $v$ of positive measure, by ergodicity it is of the same dimension for $\mu$-a.e.\ $v$. This implies that $v$ is a periodic word $\mu$-a.e., thus $S\xi^{-}=\xi^{-}$ $\mu$-a.e. \end{proof}
Since $h(S,\xi)=H(S\xi^-|\xi^-)=0$ we have:
\begin{corollary}\label{supmes1} If $supp(\mu)\subset\phi(E_0)$ then $h_{\mu}(S|\Sigma)=0$. \end{corollary}
\section{Measures on the boundary} We now treat the case of ergodic measures satisfying $$X=supp(\mu)\subset\Sigma\setminus\phi(E_0).$$ First we generalize Lemma \ref{dim}:
\begin{lemma}\label{entro3gkt} For a convex polyhedron, for any word $v\in\Sigma\setminus\phi(E_0)$ the set $\sigma_v^-$ is connected and is a strip. \end{lemma}
We remark that Lemma \ref{entro3gkt} is the only place where we use the convexity of $P$.
\begin{proof} First, the word $v$ is a limit of words $v^n$ in $\phi(E_0)$. Each of these words $v^n$ has a unique direction $\theta_n$ by Lemma \ref{dim}. The directions $\theta_n$ converge to $\theta$; this shows that the direction of $\sigma_v^-$ is unique. Now, by convexity of $P$, the set $\sigma_v^-$ is convex as an intersection of convex sets. By definition the projection of $\sigma_v^-$ on $\partial{P}$ is included inside an edge, thus it is of dimension less than or equal to one. This implies that the set is an interval or a point. \end{proof}
{\em A priori} there are several cases, as $\dim\sigma_v^-$ can be equal to $0$ or $1$. We see here a difference with the polygonal case: there the dimension was always equal to zero.
\subsection{Orbits passing through several edges} In this paragraph an edge means an edge which appears in the unfolding of $P$ corresponding to $v$. We represent an edge by a point and a vector. The point is a vertex of a copy of $P$ in the unfolding and the vector is the direction of the edge. We consider two edges $A_0,A_1$ in the unfolding. Consider $m\in A_0$ and a direction $\theta$ such that the orbit of $(m,\theta)$ passes through $A_1$. We identify the point $m$ with the distance $d(m,a)$, where $a$ is one endpoint of the edge $A_0$. Moreover we denote by $u$ a unit vector collinear with the edge $A_0$.
\begin{lemma}\label{entro3eq} The set of $(m,\theta)$, $m\in A_0$, such that the orbit of $(m,\theta)$ passes through an edge $A_1$ satisfies either (i) $(m,\theta)$ is in the line or plane which contains $A_0,A_1$; in this case there exists an affine map $f$ such that $f(\theta) = 0$;
or (ii) there exists a map $F :\mathbb{R}^3 \to \mathbb{R}$ such that $m=F(\theta)$ (it is the quotient of two linear polynomials). Moreover the map $(A_0,A_1)\mapsto F$ is injective. \end{lemma} \begin{remark} The case where $A_0,A_1$ are colinear is included in the first case. In this case there are two equations of the form $f(\theta)=0$ but we only use one of them. \end{remark} \begin{proof} Consider the affine subspace generated by the edge $A_0$ and the line $m+\mathbb{R}\theta$. There are two cases : $\bullet\quad A_1\in Aff(A_0,m+\mathbb{R}\theta)$. Assume $A_0,A_1$ are not colinear, then the affine space generated by $A_0,A_1$ is of dimension two (or one), and several points $m$ can be associated to the same direction $\theta$. In the case it is of dimension 2, $\theta$ is in the plane which contains $A_0,A_1$. Then there exists an affine map $f$ which gives the equation of the plane and we obtain $f(\theta)=0$. $\bullet\quad A_1\notin Aff(A_0,m+\mathbb{R}\theta)$, then the space $Aff(A_0,A_1)$ is of dimension three. If the direction is not associated to a single point then the edges $A_0,A_1$ are coplanar. Thus in our case the direction is associated to a single point $m$. There exists a real number $\lambda$ such that $m+\lambda\theta\in A_1$. Since $A_1$ is an edge, it is the intersection of two planes (we take the planes of the two faces of the polyhedron). We denote the two planes by the equations $h=0;g=0$ where $h,g\quad \mathbb{R}^3\rightarrow \mathbb{R}$. We obtain the system $$h(m+\lambda\theta)=0,$$ $$g(m+\lambda\theta)=0.$$ Here $h(x)=<v_h,x>+b_h$ where $v_h$ is a vector and $<\cdot,\cdot>$ is the scalar product and similarly for $g$. Then we write $h(m)=<v_h,mu>+b_h=m<v_h,u>+b_h$, we do the same thing for $g$. Since $A_0,A_1$ are not coplanar the terms $<v_g,\theta>,<v_h,\theta>$ are non null, thus we obtain the expression for $\lambda$ : $$\lambda=\frac{-b_h-m<v_h,u>}{<v_h,\theta>}= \frac{-b_g-m<v_g,u>}{<v_g,\theta>}.$$ For a fixed $\theta$, there can be only one point $m\in A_0$ which solves this equation, otherwise we would be in case $(i)$. Thus we find $m=F(\theta)$ where $F$ is the quotient of two linear polynomials : $$m=\frac{b_g<v_h,\theta>-b_h<v_g,\theta>}{<v_h,u><v_g,\theta>-<v_g,u><v_h,\theta>}\quad (*).$$ Note that $F$ does not depend on the concrete choices of the planes $h,g$, but only on the edges $A_0,A_1$. We prove the last point by contradiction. If we have the same equation for two edges, it means that all the lines which pass through two edges pass through the third. We claim it implies that the three edges $A_0,A_1,A_2$ are coplanar : the first case is when $A_1,A_2$ are coplanar. Then the assumption implies that the third is coplanar, contradiction. Now assume that the three edges are pairwise not coplanar. Indeed consider a first line which passes through the three edges. Call $m$ the point on $A_0$, and $u$ the direction. Now consider a line which contains $m$ and passes through $A_1$ with a different direction. Those two lines intersect $A_1$, thus $m$ and the two lines are coplanar. Since $A_2$ is not coplanar with $A_0$, both lines can not intersect $A_2$, contradiction. To finish consider the case when two edges are colinear but the third one is not colinear with either of the other two. This case can be reduced to the first case by looking at the first and third edges. \end{proof} \begin{lemma}\label{entrogeom} Consider two edges $A_0,A_i$ which give the equation $m=F_i(\theta)$. 
Denote by $p_i$ a point on $A_i$ and $x_i$ the direction of the line $A_i$. Then we have $$F_i(\theta)=\frac{<p_i\wedge x_i,\theta>}{<u\wedge x_i,\theta>},$$ where $u$ is an unit vector colinear to the edge $A_0$. \end{lemma} \begin{proof} By Lemma \ref{entro3eq} each $F_i$ is the quotient of two polynomials. Consider the denominator of $F_i$ as function of $\theta$ ( we use the notations of the preceding proof). By equation $(*)$ we obtain: $$F_i(\theta)=\frac{N(\theta)}{D(\theta)},$$ $$D(\theta)= -<v_{h_i},u><v_{g_i},\theta>+<v_{g_i},u><v_{h_i},\theta>.$$ We remark for the map $F_i$ that $$-<v_{h_i},u>v_{g_i}+<v_{g_i},u>v_{h_i},$$ is orthogonal to $u$ and to $x_i$. Thus this vector is colinear to $u\wedge x_i$ : $$-<v_{h_i},u>v_{g_i}+<v_{g_i},u>v_{h_i}=C_iu\wedge x_i.$$ Consider the numerator $(b_{h_i}v_{g_i}-b_{g_i}v_{h_i},\theta)$ of $F_i$. The scalar product of $b_{h_i}v_{g_i}-b_{g_i}v_{h_i}$ with $x_i$ is null, moreover the scalar product with $p_i$ equals again zero by definition of $v_{g_i},b_{g_i},v_{h_i},b_{h_i}$. Thus we obtain : \begin{equation} b_{h_i}v_{g_i}-b_{g_i}v_{h_i}=C'_ip_i\wedge x_i, \end{equation} and : $$F(\theta)=\frac{C'_i}{C_i}\frac{<p_i\wedge x_i,\theta>}{<u\wedge x_i,\theta>}.$$ We claim that $C_i=C'_i=1$. We can choose the vectors $v_{g_i},v_{h_i}$ such that they are orthogonal and of norm 1. Then $x_i$ is colinear to $v_{g_i}\wedge v_{h_i}$ and is of norm one, thus if we choose the proper orientation of $x_i$ they are equal. Then we can have $$-<v_{h_i},u>v_{g_i}+<v_{g_i},u>v_{h_i}=u\wedge(v_{g_i}\wedge v_{h_i})=u\wedge x_i.$$ Thus we deduce $K_i'=1$. Now we compute the norm of the vector of the numerator $|b_{h_i}v_{g_i}-b_{g_i}v_{h_i}|^2=b_{h_i}^2+b_{g_i}^2$. By definition of $b_{g_i}, b_{h_i},p_i$ we obtain $$b_{g_i}=-<v_{g_i},p_i>; b_{h_i}=-<v_{h_i},p_i>.$$ Thus we have $|b_{h_i}v_{g_i}-b_{g_i}v_{h_i}|^2=<v_{g_i}|p_i>^2+<v_{h_i}|p_i>^2$. Moreover by definition we have that $x_i=v_{g_i}\wedge v_{h_i}$ this implies that $|p_i\wedge x_i|^2=<v_{g_i},p_i>^2+<v_{h_i},p_i>^2$. Finally we deduce $$|p_i\wedge x_i|^2(C'_{i})^2= |p_i\wedge x_i|^2.$$ \end{proof} \begin{lemma}\label{eqdim3} Consider three edges $A_0,A_1,A_2$ such that $dim Aff(A_i,A_j)=3$ for all $i,j$. Then the sets of lines $d$ which pass through $A_0,A_1,A_2$ is contained in a surface which we call $S(A_0,A_1,A_2)$. Consider an orthonormal basis such that the direction $u$ of $A_0$ satisfies $u=\begin{pmatrix}1\\0 \\ 0\end{pmatrix}$. If we call $(P_1,P_2,P_3)$ the coordinates of a point on this surface, then $(i)$ the equation of the surface can be written as $P_1=f(P_2,P_3)$, where $f$ is a polynomial. $(ii)$ there exists $N\leq 4$ such that any line which is not contained in $S$ intersects $S$ at most $N$ times. \end{lemma} \begin{proof} Consider a line $d=m+\mathbb{R}\theta, m\in A_0$ which passes through $A_1,A_2$. By Lemma \ref{entro3eq} we obtain two equations $m=F_i(\theta)$. Then Lemma \ref{entrogeom} implies that $F_i(\theta)=\frac{\sum a_{j,i}\theta_j}{\sum_{j=2}^{3}b_{i,j}\theta_j}$. Now call $P_i$ the coordinates of a point $P$ on $d$. We have $P=m+\lambda\theta$, thus we obtain $$\begin{cases}P_1=\frac{a_{1}\theta_1+a_2\theta_2+a_3\theta_3}{b_2\theta_2+b_3\theta_3}+\lambda\theta_1\\ P_2=\lambda \theta_2\\ P_3= \lambda\theta_3\\ (F_1-F_2)(\theta)=0 \end{cases}$$ where $a_j=a_{j,1}$ and $b_j=b_{1,j}$.\\ $\bullet$ First case $P_2\neq 0$. This is equivalent to $\theta_2\neq 0$. 
$$\begin{cases}P_1=\frac{a_{1}\theta_1+a_2\theta_2+a_3\theta_3}{b_2\theta_2+b_3\theta_3}+\lambda\theta_1\\ P_2=\lambda \theta_2\\ \theta_3=\frac{P_3} {P_2}\theta_2\\ (F_1-F_2)(\theta)=0 \end{cases}$$ $$\begin{cases}P_1=\frac{a_{1}\theta_1+\theta_2(a_2+a_3\frac{P_3}{P_2})}{\theta_2(b_2+\frac{P_3}{P_2})}+ P_2\frac{\theta_1}{\theta_2}\\ P_2=\lambda \theta_2\\ \theta_3=\frac{P_3}{P_2}\theta_2 \\ (F_1-F_2)(\theta)=0 \end{cases}$$ $$\begin{cases}P_1=\frac{a_{1}}{(b_2+\frac{P_3}{P_2})}\frac{\theta_1}{\theta_2}+\frac{a_2+ a_3\frac{P_3}{P_2}}{b_2+\frac{P_3}{P_2}}+ P_2\frac{\theta_1}{\theta_2}\\ P_2=\lambda \theta_2\\ \theta_3=\frac{P_3}{P_2}\theta_2 \\ (F_1-F_2)(\theta)=0 \end{cases}$$ $$\begin{cases}P_1=(\frac{a_{1}}{b_2P_2+P_3}+1)P_2\frac{\theta_1}{\theta_2}+\frac{a_2P_2+a_3P_3}{b_2P_2+P_3}\\ P_2=\lambda \theta_2\\ \theta_3=\frac{P_3}{P_2}\theta_2 \\ (F_1-F_2)(\theta)=0 \end{cases}$$ Now the equation $(F_1-F_2)(\theta)=0$ can be written as $$(\sum_{j=1}^3 a_{j}\theta_j)(\sum_{j=2}^{3}b'_{j}\theta_j)=(\sum_{j=1}^3 a'_{j}\theta_j)(\sum_{j=2}^{3}b_{j}\theta_j),$$ where $a'_j=a_{j,2}$ and $b'_j=b_{2,j}$.\\ $$(a_{1}\theta_1+a_{2}\theta_2+a_{3}\theta_3)(b'_{2}\theta_2+b'_{3}\theta_3)= (a'_{1}\theta_1+a'_{2}\theta_2+a'_{3}\theta_3)(b_{2}\theta_2+b_{2}\theta_3).$$ With the equation $\theta_3=\frac{P_3}{P_2}\theta_2$ we obtain an equation of the following form. \begin{gather*} (a_1\theta_1P_2+(a_2P_2+a_3P_3)\theta_2)(b'_2P_2+b'_3P_3)=\\ (a'_1\theta_1P_2+(a'_2P_2+a'_3P_3)\theta_2)(b_2P_2+b_3P_3).\\ (a_1\theta_1/\theta_2P_2+(a_2P_2+a_3P_3))(b'_2P_2+b'_3P_3)=\\ (a'_1\theta_1/\theta_2P_2+(a'_2P_2+a'_3P_3))(b_2P_2+b_3P_3). \end{gather*} Thus we obtain the value of $\frac{\theta_1}{\theta_2}$. \begin{gather*} \theta_1/\theta_2[a_1(b'_2P_2+b'_3P_3)-a'_1(b_2P_2+b_3P_3)]P_2=\\ (a'_2P_2+a'_3P_3)(b_2P_2+b_3P_3)-(a_2P_2+a_3P_3)(b'_2P_2+b'_3P_3). \end{gather*} If the coefficient of $\frac{\theta_1}{\theta_2}$ is null we obtain an equation of the form $P_2=KP_3$. This implies that $P$ is on a plane. It is impossible since the lines $A_i$ are non coplanar. Thus we can obtain the value of $\frac{\theta_1}{\theta_2}$. Then the first line of the system gives an equation of the form $$f(P_2, P_3)=P_1,$$ where $f$ is a homogeneous rational map of twp variables. $\bullet$ Second case $P_2=0$. We obtain $$\begin{cases}P_1=\frac{a_{1}\theta_1+a_3\theta_3}{b_3\theta_3}+\lambda\theta_1\\ P_3=\lambda\theta_3\\ (F_1-F_2)(\theta)=0 \end{cases}$$ Remark that $P_3\neq0$. Indeed if not the direction is included in $A_0$. Thus the system becomes $$\begin{cases}P_1=\frac{a_{1}\theta_1+a_3\theta_3}{b_3\theta_3}+\lambda\theta_1\\ \lambda=P_3/\theta_3\\ (F_1-F_2)(\theta)=0 \end{cases}$$ $$\begin{cases}P_1=\frac{a_{1}\theta_1+a_3\theta_3}{b_3\theta_3}+P_3/\theta_3\theta_1\\ P_3/\theta_3=\lambda\\ (F_1-F_2)(\theta)=0 \end{cases}$$ And the equation $(F_1-F_2)(\theta)=0$ gives as in the first case the values of $\frac{\theta_1}{\theta_3}$. $\bullet$ Now consider a transversal line $d'$. A point on this line depends on one parameter. If the point is on the surface, the parameter verifies a polynomial equation of degree four, thus there are a bounded number of solutions. \end{proof} \begin{corollary}\label{entro3inde} Consider four edges $A_0,A_1,A_2,A_3$ two by two non coplanar such that $A_3\notin S(A_0,A_1,A_2)$. Then the maps $F_1-F_2, F_1-F_3$ are linearly independent. \end{corollary} \begin{proof} We make the proof by contradiction. 
If the maps $F_1-F_2,F_1-F_3$ are linearly dependent, it means that $F_3$ is a linear combination of $F_1,F_2$. It implies that the system $\begin{cases}m=F_1(\theta)\\m=F_2(\theta)\\m=F_3(\theta)\end{cases}$ is equivalent to $\begin{cases}m=F_1(\theta)\\m=F_2(\theta)\end{cases}$. Thus each line which passes through $A_0,A_1,A_2$ must passes through $A_3$. By preceding Lemma it implies that $A_3$ is in $S(A_0,A_1,A_2)$, contradiction. \end{proof} \subsection{Key point} \begin{lemma}\label{3Ka} Consider a point $(m,\theta)\in \overline{E_0}$; then the set of words $v$ such that $(m,\theta)\in \sigma_v^-$ is at most countable. \end{lemma} For the proof we refer to \cite{Ka}. This proof does not depend on the dimension. \subsubsection{Definitions} For a fixed word $v\in\Sigma\setminus\phi(E_0)$, the set $\sigma_v^-$ is of dimension 0 or 1 and the direction $\theta$ is unique, see Lemma \ref{entro3gkt}. Fix a word $v\in\Sigma\setminus\phi(E_0)$, we will consider several cases: $\bullet$ First $\sigma_v^-$ is an interval with endpoints $a,b$. For any $m\in]a,b[$ we consider the set of discontinuities met in the unfolding of $(m,\theta)$. This set is independent of $m\in ]a,b[$ since $\sigma_v^-$ is an interval. We denote it $Disc(v,int)$. If the endpoint $a$ (resp. $b$) is included in the interval then the orbit of $(m,\theta)$ can meet other discontinuities. We call $Disc(v,a)$ (resp. $Disc(v,b)$) the set of those discontinuities. $\bullet$ If $\sigma_v^-$ is a point it is the same method as $Disc(v,int)$, we denote the set of discontinuities by $Disc(v,int)$. Here there are two sorts of discontinuities. First the singularity is a point of the boundary of a face whose code contributes to $v$. Then the orbit is not transverse to the edge. Secondly they meet in the transversal sense. If the orbit is included in an edge, then the discontinuities met are the boundary points of that edge (and similarly if the orbit is in a face). \begin{definition} Let $V=\Sigma\setminus\phi(E_0)$ and $X\subset V$ be the set of $v\in V$ such that the union of the elements $A_i$ of $Disc(v,int),Disc(v,a),Disc(v,b)$ are contained in a finite union of hyperplanes and of surfaces $S(A_0,A_1,A_2)$. \end{definition} Suppose $v\in X$. Let $N(\sigma_v^-)$ be the number of planes containing $Disc(v)$ if $\sigma_v^-$ is a point or $Disc(v,a)$ or $Disc(v,b)$ if $\sigma_v^-$ is an interval. In the following Lemma the function $L$ refers to the width of the strip of singular orbits as it does in the proof of Lemma \ref{lemka}. \begin{lemma}\label{entroprobpaq} Suppose $\mu$ is an ergodic measure with support in $\Sigma \backslash\phi(E_0)$. Then $(i)$ there exists a constant $L$ such that $L(\sigma_v^-)=L$ for $\mu$-a.e. $v\in \Sigma$ and thus for $\mu$-a.e $v,w \in \Sigma$ if $w_i = v_i$ for $i \ge 0$ then $\sigma_w = \sigma_v$. $(ii)$ there exists a constant $N$ such that $N(\sigma_v^-)=N$ for $\mu$-a.e $v\in\Sigma$. \end{lemma} \begin{proof} $(i)$ If $\sigma_v^-$ is a point then there is nothing to show. Let $L(\sigma_v^-)$ be as before. We have $L(\sigma_v^-)\leq L(\sigma_{S(v)}^-)$. Since $S$ is ergodic, $L$ is constant almost everywhere. Thus $L(\sigma_v) = L(\sigma_v^-)$ thus $\sigma_v=\sigma_v^-$. The same holds for $w$, thus since $\sigma_w^-=\sigma_v^-$ we have $\sigma_v= \sigma_w$. $(ii)$ We have $N(\sigma_v^-)\leq N(\sigma_{Sv}^-)$, thus the lemma follows since $S$ is ergodic. \end{proof} Let $D$ stand for $Disc(v,int),Disc(v,a),\text{or}\quad Disc(v,b)$. 
\begin{remark}\label{entrorem} For two sets $A_i,A_j\in D$ the relation $dim Aff(A_i,A_j)=2$ is a transitive relation. Indeed consider three sets $A_i,A_j,A_k$ such that $A_i\sim A_j$, and $A_j\sim A_k$. Since the line $m+\mathbb{R}\theta$ passes through $A_i,A_j,A_k$, we deduce $A_i\sim A_k$. \end{remark} Then we can show \begin{proposition}\label{entroscruc} The set $V\setminus X$ is at most countable. \end{proposition} \begin{proof} Let $v\in V$. Lemma \ref{entro3eq} implies that we have for each pair of discontinuities an equation $m=F(\theta)$ or $f(\theta)=0$. Denote the set $D$ by $A_0,\dots,A_n,\dots$. Either there exist discontinuities $A_{i_0},A_{i_1},A_{i_2},A_{i_3}$, such that the equations related to $(A_{i_0},A_{i_j})$, for all $j\leq 3$, are of the form $m=F(\theta)$ or not. In the following we will assume, for simplicity, that these three discontinuities (if they exist) are denoted by $A_0,A_1,A_2,A_3$. $\bullet$ First assume it is not the case. Then for any subset of $D\setminus\{A_0,A_1,A_2,A_3\}$ two elements give equations of the form $f(\theta)=0$. By Remark \ref{entrorem} all the discontinuities in the set $D\setminus\{A_0,A_1,A_2,A_3\}$ are in a single hyperplane. Thus all the discontinuities of $D$ are in a finite union of hyperplanes. We do the same thing for $Disc(v,a)$ and $Disc(v,b)$. We conclude $v\in X$. $\bullet$ Now we treat the case where we obtain at least three equations of the form $m=F(\theta)$ for some choice of $(m,\theta)$. Corollary \ref{entro3inde} shows that two such equations are different since the discontinuities are not in the union of surfaces. Thus consider the three first equations $m=F(\theta)=G(\theta)=H(\theta)$. It gives two equations $(F-G)(\theta)=(F-H)(\theta)=0$. Those two equations are different by Corollary \ref{entro3inde}, since $F,G,H$ are different. We deduce that the direction $\theta$ is solution of a system of two independant equations, thus it is unique. We remark that the vertices which appear in unfolding have their coordinates in a countable set $\mathcal{C}$. Indeed we start from a finite number of points corresponding to the vertices and at each step of the unfolding we reflect them over some faces of $P$. Thus at each step there are a finite set of vertices. Moreover the coefficients of the edges are obtained by difference of coordinates of vertices. By the same argument the coefficients of cartesian equations of the hyperplanes which contains faces live in a countable set $\mathcal{C}$. There are only a countable collection of functions $m=F(\theta)$ which arise. Thus the solution $\theta$ corresponding to the equations $m=F(\theta)=G(\theta)=H(\theta)$ lives in a countable set. It determines $(m,\theta)$. The number of words associated to the orbit of $(m,\theta)$ is countable by Lemma \ref{3Ka}. Thus the set of such words is countable. \end{proof} \begin{figure}[t] \begin{center} \includegraphics[width= 4cm]{hyper.pdf} \caption{Coding of a word} \label{entrofighyper} \end{center} \end{figure} \section{Proof of Theorem \ref{entro}} \begin{lemma}\label{3entrosur} Suppose that $\mu$ is an ergodic measure supported in $\Sigma \backslash \phi(E_0)$ such that $\mu(X) = 1$. Then $h_{\mu}(S) = 0$. \end{lemma} \begin{proof} By Lemma \ref{entroprobpaq} we can assume there is a constant $L \ge 0$ such that $L(\sigma_v^-) = L$. Suppose first that $L > 0$. Suppose $v \in \hbox{support}(\mu)$. This implies that $Disc(v,int)$ is contained in a single plane. 
If $w \in \hbox{support}(\mu)$ satisfies $w_i = v_i$ for $i \ge 0$ then $Disc(w,int)$ is contained in the same plane. Each trajectory in $\phi(E_0)$ which approximates the future of $v$ cuts this plane in a single point. Consider the sequence of approximating trajectories which converges to $(m,\theta)$. The limit of these trajectories cuts the surface at one (or zero) points. The point where it cuts the surface determines the backwards unfolding, and thus the backwards code. Thus, if we ignore for the moment the boundary discontinuities, knowing the future $v_0,v_1,v_2,\dots$ determines $O(n)$ choices of the past $v_{-n},\dots,v_{-1}$. The boundary discontinuities and the case $L(\sigma_v^-) = 0$ are treated analogously. Let $(m,\theta) = \sigma_v^-$ (or one of the boundary points of $\sigma_v^-$ in the case above). By Lemma \ref{entroprobpaq} we can assume that $Disc(v,m)$ is contained in $N$ planes, and that if $w \in \hbox{support}(\mu)$ satisfies $w_i = v_i$ for $i \ge 0$ then $Disc(w,int)$ is contained in the same planes. Arguing as above, the point where an approximating orbit cuts these planes determines the past. Thus the future $v_0,v_1,v_2,\dots$ determines $O(n^N)$ choices of the past $v_{-n},\dots,v_{-1}$. Since $\displaystyle\lim_{n\rightarrow+\infty}\frac{\log{n^N}}{n}=0$ we deduce the result. \end{proof} The preceding lemma and proposition allow us to conclude: \begin{corollary}\label{supmes2} Let $\mu$ be an ergodic measure with support in $\Sigma\setminus\phi(E_0)$, then $$h_{\mu}(S)=0.$$ \end{corollary} \begin{proof} This follows immediately from Lemma \ref{3entrosur} and Proposition \ref{entroscruc}. \end{proof} Lemma \ref{entrodef} reduces the problem to the computation of $h_{top}(S|\Sigma)$. Moreover we have $$h_{top}(S|\Sigma)=\max\left(\sup_{\substack{\mu \\ ergo, \\ supp(\mu)\subset\phi(E_0)}}h_{\mu}(S|\Sigma),\ \sup_{\substack{\mu \\ ergo, \\ supp(\mu)\subset\Sigma\setminus\phi(E_0)}}h_{\mu}(S|\Sigma)\right),$$ then Corollaries \ref{supmes1} and \ref{supmes2} imply: $$h_{top}(S|\Sigma)=0.$$ \bibliographystyle{alpha}
\section{Introduction} \subsection{Summary of our model's theoretical foundations} We have set forth in the first part of this work \cite{Guevel2019a} the theoretical foundations of CPFM. This is an extended PFM based on a non-equilibrium thermodynamic framework, contact thermodynamics, incorporating for now chemo-mechanical coupling. The mechanical effect is based on elasticity triggering the production of weak phase, similarly to dissolution. The chemical effect allows the opposite reaction, the production of strong phase in the zones away from the ones with large mechanical loading. The precise discrimination of the mechanical response within the system is ensured by the PFM capturing the actual interfaces. For this reason, we will apply this model in the present second part to microstructures with complex geometries, those of geomaterials. The novelty of our extended PFM resides in the term $\mu \Delta\dot\phi$ ($\phi$ being the order parameter), added to the usual term $\dot\phi$. We claim that the latter characterizes the normal variations of the interface curvature and the former its changes of orientation, i.e. tangential variations. Thus $\mu$, which we call the PFM viscosity, quantifies the resistance of a rough geometry to smoothing. In that sense, $\mu$ encapsulates the kinetics of microstructural changes and could be described with the different activation energies of the CI effects associated with the main process, such as temperature. The influence of the main ingredients of our model, the bulk energy input $\chi$ and the PFM viscosity $\mu$, is first benchmarked. Then, in order to thoroughly exhibit our model's capabilities, we numerically model PSC at the grain scale. The MG of geomaterials is input into our model via the digitalization of micro CT scans. Let us now introduce the main facets of PSC, and in particular how the influence of the MG is treated in the literature. \subsection{Pressure solution creep} The concept of PSC finds its origin in the realization that the deformation of materials in the presence of fluids cannot be accounted for on mechanical grounds alone, but only in combination with chemical effects. Fluid-bearing materials like geomaterials fall under this consideration. The term "pressure solution" was originally coined by the geologist Sorby, in application to geomaterials, for processes in which "mechanical force is resolved into chemical action" \cite{Sorby1863}. PSC is a major factor in the dynamics of Earth's crust, mainly in the upper crust, as in lithogenesis (\cite{Heald1956}), tectonics (\cite{Schwarz1996}), fault reactivation and earthquakes (\cite{Sleep1992}, \cite{Renard2000}). An outstanding observable example of PSC is the formation of stylolites (see for instance the review in \cite{Gratier2013a}). Finally, PSC can play a role in the underground storage of nuclear waste, both in the case of rock salt cavities \cite{Urai1986} and in that of the bentonite buffer, into which the metallic container could sink \cite{Shin2017}. The process happening to geomaterials' grains can be compared to the one happening to salt or sugar in humid air, where grains tend to stick together under a sufficient amount of loading (due to gravity) and humidity. PSC is usually the first degradation process to occur, provided the system is given enough time, at least before plasticity and breakage \cite{DeBoer1977}. Experimental results indeed corroborate that diagenesis takes place chiefly by pressure solution rather than by grain straining or crushing (see \cite{Lowry1956, Renton1969, Schwarz1996} e.g.).
It is thus primordial to take PSC into account when there are inter-granular fluids, inasmuch as it can predetermine other processes. Pressure solution creep (PSC) is a serial stress-driven mass transfer process that can be described in four stages: (1) dissolution at grain impingements, (2) diffusion of the solute out of the contact, (3) deposition/precipitation on the pore surface and (4) potential diffusion of the solute to other pores (see fig.1 below, adapted from \cite{Gundersen2002}). \begin{figure}[h!] \centering \includegraphics[scale=0.4]{sketch.pdf} \caption{4 main PSC processes and 2 main models for grain contacts} \end{figure} The corresponding indicators to look for in experiments are mainly interpenetration and overgrowth. The main drive is mechanical (constant) loading, triggering dissolution in stressed regions and allowing precipitation in relatively unstressed, sheltered regions (after solute diffusion), as first experimentally inferred by Griggs \cite{Griggs1940}. Since it is a serial (en s\'erie) process, the rate of the slowest stage governs the rate of the overall process. Different limiting processes correspond to different creep laws. A key process is the diffusion through the compressed thin layer of trapped fluid between the grains triggering the dissolution. Two main models have been used to explain it: the fluid layer model \cite{Weyl1959, Renard1997} and the island-channel network model \cite{Raj1981}. Note that the diffusivity of the trapped fluid is much lower than the diffusivity of the free fluid in the pores or fractures. The main factors influencing PSC are the stress, the temperature, the geometry (grain size e.g.), the porosity and the diffusion (and advection, if any) coefficients.\\ We believe that the existing PSC models present two main limitations. Firstly, there is no clear consensus as to the rate-limiting process (i.e. the slowest process that will control the overall creep law) \cite{Croize2013}. Therefore PSC cannot presently be described with a unique law \cite{Raj1982, Gratier2013a}. Secondly, the actual MG is typically averaged out by regular packings of mono-dispersed spherical particles. This approximation neglects the highly irregular MG of geomaterials and its continuous variations during compaction. Even though the grain boundary structures can be modeled and taken into account in the creep laws by assuming a thin-film or island-channel model, it seems that the grain size distribution can be misleading. This can cause major discrepancies between model and experiment \cite{Niemeijer2009,Croize2013}. The variation of the grain distribution in the actual MG can potentially localize the deformations and lead to spatio-temporal variations in pressure solution rates. Thus overlooking the actual MG can mean passing over potential microscopic instabilities (at the grain scale) that can create major disruptive macroscopic events like earthquake ruptures \cite{Niemeijer2009}. It is clear that those two limitations are closely related, insofar as PSC is essentially a grain-scale process and therefore characterizing it amounts to adequately modeling the MG. In a first approach, we will address this impediment by modeling the MG with our CPFM.
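Before setting up the problem, note one bookkeeping consequence of the serial nature of PSC recalled above: the characteristic times of the successive stages add up, so the slowest stage dominates the overall kinetics. A minimal numerical illustration, with purely hypothetical stage rates chosen only for the sake of the example, is:
\begin{verbatim}
# Toy illustration (hypothetical rates, in 1/s): in a serial process the
# characteristic times add, so the slowest stage controls the kinetics.
rates = {"dissolution": 1e-2, "contact diffusion": 1e-5,
         "precipitation": 1e-3, "pore diffusion": 1e-1}

t_total = sum(1.0 / r for r in rates.values())   # times add in series
r_eff = 1.0 / t_total
print(r_eff, min(rates.values()))  # effective rate is close to the slowest rate
\end{verbatim}
Which stage actually limits PSC in a given situation is precisely the open question recalled above.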
\section{Problem set-up} Let us recall the problem's equations for PFM with chemo-mechanical coupling in dimensionless form: \begin{equation} \begin{cases} -\mu\Delta\dot{\phi} + \dot{\phi} = \alpha\Delta\phi - f(\phi,\mat\epsilon,c) \\ \tau^*\dot{c} = D^*\Delta{c} - \tau^*\dot\phi - \hat\beta^*(\phi) \end{cases} \end{equation} with $\mu=\frac{\tau_2}{\tau_1 l_0^2}$ (the phase-field viscosity), $\alpha=\frac{\Gamma}{G l_0^2}$, $f(\phi,\mat\epsilon,c)=g'(\phi)+(\chi(\mat\epsilon)-\beta^* c)h'(\phi)=g'(\phi)+\hat\chi(\mat\epsilon,c)h'(\phi)$, $\chi(\mat\epsilon) = \frac{1}{2}\mat\epsilon.(\mat{C_B}-\mat{C_A})\mat\epsilon$, $\tau^*=\tau_3/\tau_1$, $D^*=\frac{D}{G l_0^2}$ and $\hat\beta^*(\phi) = \frac{\beta}{G} \left[\frac{\int_V(1-h(\phi))dV}{\int_V dV}+1-h(\phi) \right]$. We recall that the mechanics is simply solved as follows: \begin{equation} \begin{cases} \nabla.\mat\sigma=0 \\ \mat\sigma = \overline{\mat{C}}(\phi)\mat{\epsilon} + \tau_3 \mat{\dot{\mat\epsilon}} \end{cases} \end{equation} with $\overline{\mat{C}}(\phi)= \mat{C}^A (1-h(\phi)) + \mat{C}^B h(\phi)$ the homogenized elastic tensor. $A$ is the weak phase ($\phi=0$) and $B$ the strong phase ($\phi=1$). For the sake of simplicity, we assume in the present work that $\tau_3=0$. We will work in 2D plane strain unless mentioned otherwise; the elastic energy of phase $K$ then reads: \begin{equation} H_K=\frac{1}{2}\lambda_K({\epsilon_{xx}^{K}}^2+{\epsilon_{yy}^{K}}^2)+\mu_K({\epsilon_{xx}^{K}}^2+{\epsilon_{yy}^{K}}^2+2{\epsilon_{xy}^{K}}^2) \end{equation} with $\lambda_K$ and $\mu_K$ the Lam\'e parameters of phase $K$. The reference time scale is fixed to $t_0=\frac{\tau_1}{G}$, corresponding to the relaxation of the normal variations of the interface. The reference length scale $l_0$ shall be fixed with respect to the problem's dimensions at stake. All the model's details can be found in \cite{Guevel2019a}. From the linear stability analysis therein, one should choose $\alpha$ significantly smaller than the problem's dimension, so that only the two phases at stake are stable (and observed). This also separates the diffusion of the solute from the interface diffusion ($D^* \gg \alpha$). Different reaction diffusivities are characteristic of reaction-diffusion systems. We will see that this is not always practical to achieve.\\ \section{Numerical implementation and benchmarks} In the following numerical examples, we intend to illustrate the roles played by the characteristic parameters $\chi$ and $\mu$, and more generally how including the MG impacts the material's response. It seems that $\chi$ plays the role of a phase change catalyst whereas $\mu$ plays the role of a phase change inhibitor. Note however that $\chi$ encapsulates a static effect whereas $\mu$ encapsulates a dynamic effect. The former is illustrated by displaying weak phase nucleation and the latter by considering geometrical effects. \subsection{Multiphysics Object-Oriented Simulation Environment} The Multiphysics Object Oriented Simulation Environment (MOOSE \footnote{http://mooseframework.org}) used here is a finite-element-based code dedicated to multiphysics nonlinear problems.
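Before detailing the implementation, the structure of the first equation above can be illustrated with a minimal one-dimensional finite-difference sketch, independent of MOOSE: at each time step the Laplacian rate term turns the update into a linear solve of Helmholtz type for $\dot\phi$. The quartic double well $g(\phi)=\phi^2(1-\phi)^2$, the interpolation $h(\phi)=\phi^2(3-2\phi)$ and the frozen, spatially uniform driving term $\hat\chi$ used below are illustrative assumptions and not necessarily the exact forms used in this work.
\begin{verbatim}
# Minimal 1D sketch of -mu*Lap(phidot) + phidot = alpha*Lap(phi) - f(phi),
# assuming g(phi) = phi^2(1-phi)^2, h(phi) = phi^2(3-2phi) and a frozen,
# uniform driving term chi_hat (illustrative choices only).
import numpy as np

n, dx, dt = 200, 0.01, 1e-3
mu, alpha, chi_hat = 1.0, 0.01, 0.5

def laplacian_matrix(n, dx):
    # 1D finite-difference Laplacian with zero-flux (Neumann) boundaries
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    L[0, 1] = L[-1, -2] = 2.0   # mirror ghost nodes
    return L / dx**2

def f(phi):
    dg = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)   # g'(phi)
    dh = 6.0 * phi * (1.0 - phi)                       # h'(phi)
    return dg + chi_hat * dh

L = laplacian_matrix(n, dx)
phi = np.clip(0.9 + 0.1 * np.random.rand(n), 0.0, 1.0)  # noisy strong phase

for step in range(1000):
    rhs = alpha * (L @ phi) - f(phi)
    phidot = np.linalg.solve(np.eye(n) - mu * L, rhs)   # (I - mu*Lap) phidot = rhs
    phi = phi + dt * phidot
\end{verbatim}
Increasing $\mu$ stiffens the operator $(1-\mu\Delta)$ and hence smooths the spatial variations of $\dot\phi$; this is one way to read the delaying role of the phase-field viscosity exhibited in the benchmarks below.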
The efficiency of MOOSE comes, inter alia, from the fact that it is object-oriented, hence easily reusable, fully coupled, fully implicit and automatically parallel, that it uses unstructured meshes, and, more practically, that it is open source and thus benefits from a large and helpful user community.\\ Following \cite{Tonks2012}, the system is discretized in space with a FEM formulation and solved in time using the Jacobian-Free Newton Krylov (JFNK) method. The equations are solved by minimizing the residual of the weak form of the dimensionless CPFM obtained above. The weighted integral residual projection reads: \begin{equation} -\left(\mu\Delta\dot{\phi},\varphi_m \right) + \left(\dot{\phi},\varphi_m \right) - \left(\Delta\phi,\varphi_m \right) + \left(f_\chi(\phi,\mat\epsilon),\varphi_m \right)=0 \end{equation} where $\varphi_m$ is a test function and $\left(.,. \right)$ is the usual integral inner product. Integrating the first and third terms by parts, and denoting the boundary terms by $\left<.,.\right>$, yields: \begin{equation} \left(\mu\nabla\dot{\phi},\nabla\varphi_m \right) - \left<\mu\nabla\dot{\phi}.\vec{n},\varphi_m \right>+ \left(\dot{\phi},\varphi_m \right) + \left(\nabla\phi,\nabla\varphi_m \right) - \left<\nabla\phi.\vec{n},\varphi_m \right> + \left(f_\chi(\phi,\mat\epsilon),\varphi_m \right)=0 \end{equation} Likewise, the macro-force balance $\nabla.\mat\sigma=0$ is implemented using the weak form: \begin{equation} \left(\mat\sigma,\nabla\varphi_m \right) - \left<\mat\sigma.\vec{n},\varphi_m \right> = 0 \end{equation} Each term $\left(.,. \right)$ inherits from a C++ base class, usually pre-implemented, called a kernel in MOOSE. The existing kernels have been customized to fit our model. \subsection{Role of $\chi$: phase change activation} We first focus on the influence of $\chi$, characterizing the source energy for phase change, in our case elastic energy. The role of the system's energy input is to destabilize its stable double-well organization and therefore to initiate phase changes. This can be visualized with the graph of the bulk energy term $B(\phi,\epsilon)=Gg(\phi)+\bar{H}(\epsilon,\phi)$ with $\bar{H}(\epsilon,\phi)=H_B(\epsilon)h(\phi) + H_A(\epsilon)(1-h(\phi))$. We consider phase A to be the weak phase and B the strong phase, meaning A is much more deformable than B. Note that it is consistent with Landau's theory to take an order parameter increasing with the (microscopic) "order" of the material considered. Here and in the next examples $\phi=0$ corresponds physically to pores (filled with liquid or not) and $\phi=1$ to a solid state. If the energy input $\bar{H}(\epsilon,\phi)$ is low enough, the system has a stable double-well organization (blue graph). If $\bar{H}(\epsilon,\phi)$ is high enough, the double well is tilted, which favors the least energy-demanding organization, i.e. an expansion of phase A at the expense of phase B (orange graph). This destabilization has been determined in \cite{Guevel2019a} to occur for any value of $\chi(\mat\epsilon) = \frac{1}{2}\mat\epsilon.(\mat{C_B}-\mat{C_A})\mat\epsilon$ larger than $\chi_0 \approx 1/3$ (with the proviso that the perturbation characteristic length is much larger than the interface characteristic width). \begin{figure}[h!]
\centering \includegraphics[scale=0.7]{double_well_mechanical.pdf} \caption{Graph of the double-well potential $Gg(\phi) \approx B(\phi)$ with $G=10$ in blue and of the tilted double-well potential $B(\phi,\epsilon)=Gg(\phi)+\bar{H}(\epsilon,\phi)$ with $G=10$, $H_A(\epsilon)=0.1$ and $H_B(\epsilon)=10$ in orange} \end{figure} Thus we want to observe the nucleation of phase A under mechanical loading. Let us consider the extreme case where there is no phase A at the beginning, with initial conditions randomly fluctuating in phase B, i.e. $\phi \in [0.79,1]$. We remind that, as explained in \cite{Guevel2019a}, we consider the pure phases A and B for respectively $\phi \in [0,0.21]$ and $\phi \in [0.79,1]$, while the interface corresponds to the spinodal interval $\phi \in [0.21,0.79]$. The boundary conditions are those of a uniform compression, i.e. boundaries constantly moving towards the center, and null Neumann conditions for the order parameter. We choose a displacement-controlled loading directly proportional to the simulation time ($1*t$). Now let us set up some numerical values for the parameters. In all our numerical studies, we will work with the following consistent unit system: length in $mm$, mass in $kg$, time in $ks$, energy in $J$, pressure/stress in $GPa$. The dimensionless interfacial coefficient $\alpha=\frac{\gamma l_i}{G l_0^2}$ can be estimated by choosing a surface tension $\gamma=0.1 Pa.m = 10^{-7}J/mm^2$ (for a solid-fluid interface, cf. Leroy 2001 e.g.), an interface width $l_i$ of $1nm=10^{-3}mm$, a problem length scale $l_0$ of $1mm$ and a double-well barrier $G$ of $1 J/mm^3$; then $\alpha=10^{-12}$. As often in PFM, $\alpha$ is an "epsilon" term. Yet, for numerical purposes, this value is chosen as small as possible while maintaining good convergence. We choose $\alpha=0.01$ in the present case. The mechanical properties are chosen to represent a solid (strong phase B) containing pore fluid (weak phase A). As a first approximation, we model the pore phase as a solid as well, as in \cite{Kassner2001}. As such, the pore fluid (air and/or liquid) is then taken as a shear-free solid much more deformable than the matrix phase. Thus we choose for instance $\lambda_A=1 GPa$, $\mu_A=0$, $\lambda_B=\mu_B=30 GPa$ ($\lambda_K$ and $\mu_K$ being the first and second Lam\'e parameters of phase $K$ respectively). We keep $\mu=0$ for now (not to be confused with the second Lam\'e parameter). Finally, we choose the reference length $l_0=1mm$ (side of the initial square). We observe below the initial, softening and final stages of our simulation for two different values of $\chi$ (a function of the strain and the Lam\'e parameters). Also shown is the (averaged) stress measured at the top surface versus the vertical displacement, or shortening (vs). All the following stress/displacement curves will be given similarly. \begin{figure}[h!]
\centering \includegraphics[scale=0.4]{weak_phase_nucleation.pdf} \caption{Crack nucleation in the solid phase under isotropic compression at the initial time, at the onset of softening ($vs=4.48\%$) and at the final time ($vs=5.23\%$) ($\lambda_A=1 GPa$, $\mu_A=0$, $\lambda_B=\mu_B=30 GPa$ and $\lambda_B=\mu_B=60 GPa$) with a mesh of $100*100$ triangular elements, and associated stress/shortening curves measured on the top boundary (in orange, a material twice as stiff as the one in blue).} \end{figure} We observe the nucleation of the blue weak phase under mechanical loading, a result of the conversion of the initial fluctuations from phase B to phase A (see fig.2). It is interesting to look at the mechanical response associated with the phase change. It corresponds to the onset of mechanical softening (decrease of stress for increase of displacement). Indeed the appearance of the weak phase A accelerates the compression of the material. As expected, since the elastic moduli determine the tilt of the double well under mechanical loading, the stiffer the material (orange curve) the faster the phase change and the onset of softening. \subsection{Role of $\mu$: phase change CI} We perform an oedometric compression of the REV of two microstructures, from the most basic to a more realistic one. Following our postulate that the Laplacian rate term's CI effects operate by controlling the variations of the interface curvatures, we first apply our model to a circle, i.e. a constant-curvature shape. In the following part we will upscale our study to geomaterials' CT scans. \subsubsection{Circle-shaped inclusion} We similarly perform a displacement-controlled compression of a circle-shaped weak phase A, embedded in the strong phase B, this time in oedometric conditions (only the top boundary can move). This can represent the compression of a pore. Thus the initial conditions are as follows:\\ \begin{figure}[h!] \centering \includegraphics[scale=0.2]{initial_conditions_circle_inclusion.pdf} \caption{Initial conditions for the oedometric compression of the circle-shaped inclusion (mesh with $100*100$ quadrilateral elements)} \end{figure} As per the previous dimensionless form of the equation, the model is fully parametrized by choosing $\mu$ and $\chi$. $\chi$ is fixed by choosing the elastic moduli $\lambda_A=1$, $\mu_A=0$ for the weak phase and $\lambda_B=30$, $\mu_B=30$ for the strong phase, and $G=1$. $\mu$ will be varied to study its influence. We visualize its influence through the aspect of the output, especially the interface curvatures, and through the stress/strain curves. The following results are obtained by varying $\mu$ over the values $0$, $1$, $10$. \begin{figure}[h!] \centering \includegraphics[scale=0.4]{circle_inclusion.pdf} \caption{Oedometric compression of a circle inclusion of weak phase (blue), just after softening, for different values of $\mu$ (from left to right $\mu=0$,$\mu=1$,$\mu=5$), and associated stress/strain curves showing a top-right translation with increasing $\mu$.} \end{figure} \subsection{Role of the free energy endothermic term: phase change inhibition} In this last benchmark, we present the effect of the chemical coupling term $\beta^* c h'(\phi)$ that allows the production of the strong phase B ($\phi=1$). As explained previously, this chemical coupling term allows the change of sign of $f(\phi,\mat\epsilon,c)$ (and hence of $\dot\phi$). This can again be visualized as a tilting of the double well, this time in the opposite direction compared with the weak phase production (part 3.2). \begin{figure}[h!]
\centering \includegraphics[scale=0.7]{double_well_chemical.pdf} \caption{Graph of the double-well potential $Gg(\phi) \approx B(\phi)$ with $G=10$ in blue and of the tilted double-well potential $B(\phi,\epsilon)=Gg(\phi)+\bar{H}(\epsilon,\phi)$ with $G=10$, $H_A(\epsilon)=0.1$ and $H_B(\epsilon)=10$ in orange} \end{figure} \subsubsection{Setup} The initial setup consists of two half-circles separated by a thin layer of weak phase (representing the pore fluid) under a constant stress of $200MPa$ and oedometric conditions: \begin{figure}[h!] \centering \includegraphics[scale=0.2]{psc_benchmark_ini.pdf} \caption{Initial conditions for the benchmark of the PSC model: two ideal spherical grains separated by a thin film of "fluid"} \end{figure} Whereas the thickness of the fluid film in between the grains is reportedly at most a few $nm$ \cite{Renard1997}, it is limited in our simulation by the mesh resolution. Indeed a quick calculation shows that our film thickness is approximately $12 \mu m$ (grain diameter $\approx 300 \mu m$, for a $50*50$-element mesh and a 2-element-thick film). In order to have one mesh element measure say $1nm$, one should have a mesh of $3\times10^5*3\times10^5$ elements ($300/0.001=3\times10^5$), which is clearly numerically unrealistic. We use the set of parameters $\alpha=0.1$, $\lambda_A=1$, $\mu_A=0$, $\lambda_B=\mu_B=30$, $\tau_4=\tau_1=1$, $D^*=10$. As explained in more detail in the next part, we consider two solid grains (geomaterials e.g.) separated by a shear-free, much more deformable solid representing the liquid pore phase. Keeping that in mind, we perform different simulations with varying $\beta$ (see fig.8). It seems that the higher its value, the higher the chemical coupling (i.e. precipitation rate), translating into a slower compression (rightward translation of the displacement vs time curve). It is not clear how to quantify $\beta$, but we use the following rule of thumb: $\beta$ should be high enough to counterbalance the mechanically-induced dissolution (i.e. allow the reverse tilting of the double well); $\beta$ should not be too high, in order to preserve coherent values of the concentration $c$, as appeared in the simulations. In that sense, we choose for instance $\beta=0.05$ in the present case. Indeed we observe in fig.8 that for too low values of $\beta$ (say $\beta<0.01$) the system's response is close to the case without chemical coupling ($\beta=0$). \begin{figure}[h!] \centering \includegraphics[scale=0.5]{different_betas.pdf} \caption{Influence of $\beta$ on the system's mechanical response: the higher its value the later the failure} \end{figure} \subsubsection{Mesh dependency} Although the coarse $50*50$ mesh allows us to clearly visualize the dissolution at the contact zone, it seems that the jump in displacement is mostly dependent on the mesh resolution and therefore does not necessarily bear a physical meaning. We shall thus check the mesh dependency. We measure the time of the jump for finer and finer meshes. It appears that the finer the mesh, the less accentuated the jump. In the light of the mesh convergence, we can choose a mesh of $100*100$ elements. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{mesh_convergence.pdf} \caption{Mesh convergence: the measured failure time converges approximately for a mesh with at least $100*100$ elements} \end{figure} However, it appears that even for a converged mesh, the jump persists. We can attribute it to the discrete nature of our numerical modeling (FEM).
Indeed, there will always be a last pair of elements (one from each grain) facing each other to be dissolved before the two flattened surfaces hit each other. In reality, we expect the dissolution to be smoother, or at least that the jump induced by the last micro-particle would not be perceptible at the experimental scale. A crucial difference is also that in reality the microstructure is an assembly of grains that can ensure load restoration for each other when a contact between two grains gets flattened. Nonetheless, a possible physical meaning can still be attributed to this jump, viz. that the quasi-static phase before the jump is a sign of tertiary creep, where the system does not at first receive enough energy to dissolve the contact zone continuously. \subsubsection{Numerical description of the PSC process} Let us show the details of the process as they appear in our simulations for a given set of parameters, in particular $\beta=0.05$. We keep a coarse $50*50$ mesh to have a better visualization of the processes at stake, knowing that the mesh should be at least $100*100$; in that case the jump in vertical shortening (failure) appears earlier but the output is qualitatively the same. Our pressure solution model for the ideal case of two grains displays two clear stages: (1) the dissolution of the high-strain zone and (2) contact between the two flattened grains (see numerical results below in fig.9). The rounded grain surfaces are dissolved at the contact zone from the sides to the center until they become flat and there is no more support for the upper grain. Then a failure phase is characterized by a sudden increase in vertical displacement and the two flattened grains come into contact, still separated by a thin layer of fluid. Successive dissolution stages can be triggered on the flattened surface should the system be given enough energy (i.e. time, in the present case of constant loading). One could associate those two stages respectively with the island-channel (IC) representation (\cite{Dysthe2003}, first in \cite{Raj1981}) and the thin-film (TF) model (\cite{Weyl1959}). \begin{figure}[h!] \centering \includegraphics[scale=0.4]{PSC_benchmark.pdf} \caption{PSC of the two-grain benchmark during dissolution, until the rounded surfaces get flattened and failure occurs at $t=24.3 ks$. Then the new contact surfaces are available for a new stage of dissolution. The concentration of volumetric strain and the precipitation rate (i.e. $\dot\phi_+$) are shown at $t=3.43 ks$. These numerical images should be related to the green curve in fig.8.} \end{figure} We will see in part 4.2 that the conclusions of the present benchmark are not as clear-cut for an irregular microstructure, since the response depends on the specific MG. \section{Results for chemo-mechanical degradation of microstructures} After benchmarking the different features of our model, we apply it to more realistic microstructures, those of geomaterials. We will use as input digitalized CT-scan images of a sandpack from \cite{Dong2009}. We refer to the website https://www.imperial.ac.uk/earth-science/research/research-groups/perm/research/pore-scale-modelling/micro-ct-images-and-networks/sand-pack-lv60a. Unless mentioned otherwise, we use layer 159 therein and consider a 2D problem.
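For illustration, a minimal sketch of how such a binary CT-scan slice could be turned into an initial order-parameter field is given below; the file name, threshold and target resolution are assumptions made for the example, whereas the actual runs rely on MOOSE, as described in the next subsection.
\begin{verbatim}
# Minimal sketch (independent of MOOSE): turn one binary CT-scan slice into an
# initial order-parameter field phi in {0,1} on a coarser grid. The file name,
# threshold and 300x300 -> 70x70 coarsening are illustrative assumptions.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("lv60a_slice_159.png").convert("L"), dtype=float)
solid = img > 127.0                     # grains -> True, pores -> False

def downsample(mask, m):
    # block-average an (N, N) boolean mask onto an (m, m) grid
    n = mask.shape[0]
    blocks = mask[: n - n % m, : n - n % m].reshape(m, n // m, m, n // m)
    return blocks.mean(axis=(1, 3))

phi0 = (downsample(solid, 70) > 0.5).astype(float)  # phi = 1 in grains, 0 in pores
np.savetxt("phi_initial_70x70.txt", phi0, fmt="%.1f")
\end{verbatim}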
\subsection{Digital modeling of geomaterials} We now consider the two phases to be the rock matrix or grains (phase B, $\phi=1$) and the pores (phase A, $\phi=0$). As a first approximation, we model the pore phase as a solid as well, similarly to \cite{Kassner2001}. As such, the pore fluid (air and/or liquid) is then taken as a shear-free solid much more deformable than the matrix phase. Ideally the mechanics should be coupled with hydrodynamics, but we will restrict our model to this assumption for now. This requires the elastic moduli to fulfil $\mu_A=0$ and $\lambda_A \ll \lambda_B$. Note that then $\lambda_A=K_A$, with $K$ the bulk modulus. The elastic energy of each phase now reads, in 2D (plane strain): \begin{equation} H_A=\frac{1}{2}\lambda_A({\epsilon_{xx}^{A}}^2+{\epsilon_{yy}^{A}}^2) \end{equation} \begin{equation} H_B=\frac{1}{2}\lambda_B({\epsilon_{xx}^{B}}^2+{\epsilon_{yy}^{B}}^2)+\mu_B({\epsilon_{xx}^{B}}^2+{\epsilon_{yy}^{B}}^2+2{\epsilon_{xy}^{B}}^2) \end{equation} We remind that we choose to work in the following consistent set of units: mm, ks, J, GPa, kg.\\ Let us consider the pore phase A as saturated with water, so that $\lambda_A=K_A \approx 1 GPa$ (and $\mu_A=0$). As for the grain phase B, we cannot take the mechanical characteristics of sand or sandstone as such, but must consider them independently of the pores. In that sense, the grain phase can be considered as a rock with very low porosity, like granite. Hence we choose $\lambda_B \approx 30 GPa$ and $\mu_B \approx 30 GPa$, which are values close to what can be found in the standard literature. We apply an oedometric displacement-controlled compression directly proportional to the time ($1*t$). The initial conditions are obtained by digitalizing the CT-scan binary image (left) into an image usable numerically in MOOSE (using the function ImageReader). We use a $70*70$ mesh for an initial CT scan whose resolution is $300*300$. Obviously, should the resolution be preserved, we should use a mesh of $300*300$, but this is unnecessarily computationally expensive for our qualitative study. However, to model the exact microstructure and obtain realistic quantitative results one should use a $300*300$ mesh. The dimensions of the CT scan are $3*3*3 mm^3$. Therefore, the reference length for the CT-scan simulations will be $l_0=3 mm$. As for the reference length for the benchmarks at the grain scale, we take $l_0=0.3 mm$ e.g., obtained from the granulometry of the same sand available in \cite{Talabi2009}. \begin{figure}[h!] \centering \includegraphics[width=0.7\linewidth]{initial_digitalization.pdf} \caption{Digitalization of an LV60A sandpack's CT scan obtained from \cite{Dong2009}} \end{figure} The parameter $\alpha$ contains the squared length scale $l_0^2$ and therefore should be divided by $100$ as compared with the simulations on grains, since for grains $l_0=0.3mm$ and for a CT scan $l_0=3mm$ (assuming the same material). So we should have here $\alpha=0.001$, but, as mentioned before, when $\alpha$ is too small the numerical results are not satisfying. In particular, the values of $\phi$ drift too far out of the range $[0,1]$. We thus stick to $\alpha=0.1$ (and $\alpha=0.01$ in part 4.2). We now look at the influence of our new coefficient $\mu$ (more exactly $\tau_2$), the phase-field viscosity.
As we assumed previously and showed analytically in \cite{Guevel2019a}, the coefficient multiplying $\Delta\dot\phi$ characterizes the change of interface curvature and the phase change kinetics, and more generally the convergence to equilibrium. It then makes sense to associate this term with phase changes inducing a change of interface curvature, which are a priori all non purely volumetric changes, like most degradation processes (dissolution e.g.). We verify this postulate by comparing the configuration of the present CT scan at the same time of deformation for different values of $\mu$: \begin{figure}[h!] \centering \includegraphics[width=1.2\linewidth]{ctscan.pdf} \caption{Simulation outputs of the CT-scan simulations at the same vertical shortening of $16.49\%$ for different values of $\mu$ ($\mu=0$ bottom left, $\mu=10$ bottom right) and (b) associated stress/vs curves, with a dotted vertical line placed at $vs=16.49\%$} \end{figure} Visually, as expected, increasing values of $\mu$ delay the change of curvatures, and as a result for low $\mu$ the grains appear more "mixed" than for higher values. In terms of mechanical response, $\mu$ controls the onset of softening, i.e. the decrease in stress for an increase in strain. Thus the onset of softening indeed corresponds to microstructural phase change, as is usually assumed.\\ We conclude this discussion on the role of $\mu$ by showing its implications for the dissipation, a quantity at the heart of our model's derivation and reasoning \cite{Guevel2019a}. We display below the dissipation $D=D_n+D_t=\tau_1 \dot\phi^2 + \tau_2||\nabla\dot\phi||^2$ (integrated over the digitalized CT scan domain) for $\mu=0$ (left) and $\mu=1$ (right). We associate with it the maximum mean stress calculated on the same domain. A drop in the maximum mean stress (softening) corresponds to a major phase change. Those drops are accompanied by a peak of dissipation. \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{ctscan_dissipation.pdf} \caption{Maximum mean stress and dissipation for $\mu=0$ and $\mu=1$. The dissipation has two components $D=D_n+D_t=\tau_1 \dot\phi^2 + \tau_2||\nabla\dot\phi||^2$ ($D_t=0$ for the case $\mu=0$).} \end{figure} Note that the dissipation components are calculated assuming $\tau_1=1$ (as everywhere in the present numerical simulations) and $\tau_2=\mu \tau_1 l_0^2$. Interestingly, the evolution of the system's dissipation is significantly different depending on whether the Laplacian rate term is activated or not. In the latter case ($\mu=0$), the dissipation is more irregular, whereas in the former case ($\mu=1$) the dissipation evolution consists of two peaks, mostly due to the tangential component, corresponding to the significant phase changes of the process. \\ Let us now combine the previous model with chemical coupling and thus get some insights on PSC. \subsection{Application to PSC} Since we are using in this work a sand pack as CT-scan input, let us fix the pressure solution reaction by considering the dissolution reaction of silica (${SiO_2}_{(s)} + {2H_2 O}_{(l)} \rightleftarrows {Si(OH)_4}_{(aq)}$). Thus the grain phase is silica, the pore phase is water and the solute (of concentration $c$) is silicic acid. However, as is the case throughout the present work, we focus on a preliminary qualitative understanding of the processes rather than on obtaining quantitative estimates. \subsubsection{Parameters setup} Let us first choose the parameter $\beta$.
As observed in part 3.4, it seems that the higher its value, the higher the precipitation rate, translating into a slower compression (rightward translation of the displacement vs time curve). As for the two-grain benchmark, $\beta$ should be high enough to counterbalance the mechanically-induced dissolution (i.e. allow the reverse tilting of the double well) but not so high that the concentration $c$ takes incoherent values; in that sense, we choose $\beta=0.1$ in the present case. We can see that $\beta$ has the anticipated effect, but not as clearly as in the benchmark case of two ideal grains. Indeed we expect the MG to have a significant effect in the CT-scan simulations. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{different_betas.pdf} \caption{Influence of $\beta$ on the microstructure's response: the higher its value the slower the compression. However this is less obvious for the present microstructure than for the ideal two-grain benchmark} \end{figure} Then we choose $\alpha=0.01$ and $D^*=0.1$ to keep $\alpha<D^*$. \subsubsection{Chemo-mechanical response} As expected from the model's equations, dissolution should happen in high stress/strain zones ($\hat\chi(\mat\epsilon,c)>0$) and conversely precipitation should happen in low stress/strain zones ($\hat\chi(\mat\epsilon,c)<0$). The numerical results below are shown at $t=3.41 ks$, at the end of a major phase change, i.e. a jump in vertical shortening (cf. fig.14), corresponding to the dissolution of a supporting bridge in the center of the CT scan (to be compared with the initial state in fig.11). To illustrate the pressure dissolution process, the volumetric strain, solute concentration and order parameter rate are displayed as well. As observed previously in the case without chemical coupling, dissolution is favored in the strain localization zones (negative values of $\dot\phi$ in the bottom right corner picture), accompanied by the production of solute (red zones in the bottom left corner picture). In addition, in the present case of chemo-mechanical coupling, the solute is allowed to precipitate in the low-strain zones (dark red dots in the bottom right corner picture). However, it is not always clear whether the production of solid phase ($\dot\phi>0$) is due to precipitation (due to the term $\hat\chi(\mat\epsilon,c)$ when negative) or to grain boundary diffusion (due to the term $\alpha\Delta\phi$). This is why one should be careful not to choose $\alpha$ too large. We assume that the "red spots", corresponding to pore closure (or pore collapse), are mostly due to grain boundary diffusion. Nonetheless, the central zone we are focusing on (black circle) clearly displays precipitation and not closure. The solute available after dissolution (red zone in the center of the bottom left corner picture) precipitates on the wall of the pore (zoom-in picture). This seems favored by the highly localized strain (see the black circle in the top right corner picture), allowing nearby precipitation as the strain decreases quickly around it. The production of solid phase most likely results from a combination of grain boundary diffusion and precipitation. \begin{figure}[h!] \centering \includegraphics[scale=0.4]{PSC_ctscan.pdf} \caption{Visualization of the microstructure's state at $t=3.41 ks$ at the end of the major phase change. Top right: order parameter. Top left: volumetric strain. Bottom right: solute concentration. Bottom left: order parameter rate.
Zoom in: only positive values of the order parameter rate, showing the dissolution/precipitation interaction. The particular MG drives the strain concentration, which drives the dissolution/precipitation. } \end{figure} \subsubsection{Influence of the microstructure's geometry (MG)} Since PSC results from stress-induced mass transport, the primordial process is stress/strain concentration, which depends strongly on the MG. We therefore expect to have a different response for different geometries. The results below are obtained from different layers of the same digitalized sand pack used previously \cite{Dong2009}. As expected, even though they come from the same specimen of geomaterial, different parts of the specimen respond significantly differently. Since we can visualize the evolution of the MG thanks to PFM, we can observe the evolution of the grains and its correspondence with the stress/strain curves. For instance, the geometry of MG1 is less prone to stress/strain concentration and thus exhibits a longer phase of strain hardening in order to load more energy, corresponding to grain reorganization. MG2 displays a faster phase change (at $t \approx 4 ks$ vs $t \approx 7.5 ks$), having an initial MG more favorable to strain concentration than to grain reorganization. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{andrade_fitting.pdf} \caption{Dynamic Andrade creep laws ($\epsilon(t) \sim t^{1/3}$), separated by weakening events (different MGs have different primitive processes) } \end{figure} Furthermore, we fit power laws in the cube root of time, the so-called Andrade creep law, with good agreement. The adequacy of Andrade creep (originally from metallurgy) for PSC has been shown in \cite{Dysthe2002} and \cite{Dysthe2003}, with good agreement with experiments on salt. \subsubsection{A universal creep law?} The quest for a unique creep law has not come to an end just yet. In particular, it is not clear which is the rate-limiting process. It seems however that significant progress has been achieved by analogy with metallurgy. It is argued in \cite{Dysthe2002} that the characteristic length scale of the contact between two grains in PSC may grow as the cube root of time, similarly to the Andrade creep law. In a related work \cite{Dysthe2003}, it is inferred, via ideal spherical geometries and a constitutive description of the contact, that the vertical shortening may likewise follow an Andrade creep law. A key argument in \cite{Dysthe2002,Dysthe2003} is the acknowledgement that PSC is a transient process, whence the use of dynamic contact laws. This is corroborated by high-precision experiments on salt. The vertical shortening seems to follow such a law upon a change in loading stress. Interestingly, it seems that such an Andrade fitting is relevant to our numerical results, even though we dropped the ideal spherical packing for a more accurate representation of the MG. However, we tend to see variations of the fitting from one Andrade law to another, even with a constant stress. The jump occurs upon weakening events, i.e. significant phase changes. We would thus argue that, rather than a universal creep law for PSC, the response seems to follow an adaptive Andrade creep law, strongly dependent on the particular MG's dynamics. A variation of Andrade creep along the PSC process makes sense inasmuch as the fitting has been shown in the works above to depend, among other things, on the film layer thickness in between the grains.
\subsubsection{Physical meaning of $\mu$} As showcased previously, the term in $\mu$ represents the phase-field viscosity, i.e. it controls the kinetics of the phase change (the higher its value the more delayed the process). It can encapsulate a priori any CI effects on the main process without having to model them explicitly. It can be associated with the activation energies $Q_i$ of the different catalytic effects $i$ in the form $\mu=\sum_{i} A_i e^{-\frac{Q_i}{k_B T}}$. For instance, for PSC, two main such effects can be the temperature \cite{Niemeijer2002} and the presence of clays \cite{Renard2001}, both assumed to enhance the process. We get the following responses for a fixed value of $\beta=0.1$: \begin{figure}[h!] \centering \includegraphics[scale=0.4]{psc_ctscan_different_mus.pdf} \caption{Influence of $\mu$ on the microstructure's response: no qualitative change (grain reorganization followed by a major phase change) but delayed as $\mu$ increases} \end{figure} A variation in $\mu$ corresponds to a vertical translation of the vertical shortening (constant slope of the Andrade creep fitting), similarly a priori to an increase of temperature \cite{Niemeijer2002} or clay content \cite{Renard2001}. \section{Conclusion} We have studied numerically the different features of our previously developed CPFM and applied it to geomaterials' MGs undergoing chemo-mechanical degradation. Two main conclusions should be emphasized. Firstly, our new Laplacian rate term is shown to control the variations of the interface curvatures and as such acts as a CI for degradation processes. For PSC, that could correspond to temperature or clay content, both enhancing the process. Secondly, the tracking of the MG's dynamics thanks to PFM is proven to have a significant influence on the system's behavior at the upper scale. This is particularly interesting for PSC modeling, for which the MG modeling is usually restricted to ideal spherical packings. Our results corroborate the already existing observation that microstructurally-driven processes like PSC are the result of transient interacting instabilities at the grain scale, but also show that a dynamic Andrade creep law seems to prevail for PSC. Our results thus provide preliminary insights towards a better understanding of the influence of the MG on a system's response.
{'timestamp': '2019-07-02T02:30:24', 'yymm': '1907', 'arxiv_id': '1907.00698', 'language': 'en', 'url': 'https://arxiv.org/abs/1907.00698'}
{'timestamp': '2019-04-17T02:27:27', 'yymm': '1904', 'arxiv_id': '1904.07615', 'language': 'en', 'url': 'https://arxiv.org/abs/1904.07615'}
\section{Introduction}\label{sec:intro} Neutron stars are promising compact objects for obtaining information on the equation of state of dense matter and for investigating the physics of strong gravity. The direct detections of gravitational waves from merging compact objects opened up a new window to access such information through observation (\citealp{PhysRevLett.116.061102,PhysRevLett.119.161101,Abbott_2021}). A neutron star is born in a core-collapse supernova~(for a review see \citealp{Janka2012,Burrows:2020qrp}). A successful explosion leaves a proto-neutron star~(PNS) in its supernova remnant, decoupled from the expanding ejecta. At its birth, the PNS is hot and proton-rich (that is why it is called a proto-neutron star) and has a radius a few times larger than the typical size, $10 \mathrm{km}$, of neutron stars. Neutrinos and gravitational waves carry away energy from the PNS and cool it down to a neutron star on a diffusion timescale~(a few tens of seconds) much longer than the dynamical timescale~(\citealp{Prakash:1996xs}). It is numerically very costly, and in practice impossible, to follow the whole evolution of the PNS cooling with a dynamical simulation in multi-dimensions. Even recent core-collapse supernova simulations covered only the first $10$ \textrm{seconds} at most~(\citealp{Muller:2018utr,Nakamura:2019snn,Burrows:2019zce,Nagakura:2017mnp}). Thanks to the large separation of the two timescales, this long, quasi-static evolution of the PNS cooling may be approximated as a sequence of equilibrium configurations with gradually changing thermal and lepton contents. In fact, this was the common strategy in the past~(\citealp{Burrows:1986me,Keil:1996ab,Pons:1998mm}). Note that this was made possible by the fact that the Lagrangian formulation is easily employed in spherical symmetry. It is not a trivial task, however, to adopt the same strategy in multi-dimensions for rotating stars. One of the big issues to be resolved is to devise a Lagrangian formulation that enables us to construct the equilibrium configurations of rotating stars. To the best of our knowledge, there has been no such method so far except for our own previous attempt in Newtonian gravity~(\citealp{Yasutake:2015,Yasutake:2016}), in which we employed a triangular Lagrangian grid. It turned out, unfortunately, that this scheme was neither robust nor very efficient. We hence need to build a new numerical scheme suitable for the study of the secular evolution of relativistic rotating stars such as PNSs; to construct stationary rotating stars, we have to solve two major issues: (i) to conceive a Lagrangian formulation in general relativity and (ii) to implement it robustly and efficiently\footnote{ A relativistic smoothed-particle-hydrodynamics formulation has been recently considered for dynamical problems in~\citet{Rosswog:2020kwm,Diener:2022hui}, whereas the Lagrangian formulation for boundary value problems has not been investigated in general relativity so far.}. The latter is rather technical but turns out to be crucial as we explain later in detail.
The equilibrium configurations of relativistic rotating stars have been extensively investigated in the Eulerian formulation so far~(see \citealp{Paschalidis:2016vmz} for a review): Rotating solutions in general relativity were discussed by \cite{Hartle:1967} and \cite{Butterworth:1976}, and differentially rotating stars were successfully constructed by~\citet{Komatsu1989b,Cook:1992} \footnote{This scheme was implemented in the RNS code by~\cite{Stergioulas1995}, which is now in the public domain.}; higher accuracy was attained by spectral solvers in~\cite{Bonazzola:1993} and ~\cite{Ansorg:2002}. More recent progress now allows triaxial stars and magnetized stars~(\citealp{Uryu:2011ky,Zhou:2017xhf,Uryu:2019ckz}). Note, however, that all these works made two assumptions \textit{a priori}: (i) the barotropic condition, i.e., the pressure depends only on the density, and (ii) a rotation law that assumes a functional relation between specific angular momentum and angular velocity, i.e., $F=F(\Omega)$~(\citealp{Komatsu1989b}) or $\Omega=\Omega (F)$~(\citealp{Uryu:2017obi}). Under these assumptions, one can analytically integrate the Euler equation and the main task is to solve the Einstein equations. Non-barotropic stars in general relativity have also been built, though. For example, \cite{Camelio:2019rsz} proposed a restrictive formulation by introducing $p-\Omega$ coordinates, whereas there have been more works in Newtonian gravity~(\citet{Uryu:1994,Uryu:1995,Roxburgh2006,Espinosa-Lara2007,Espinosa-Lara2013,Yasutake:2015,Yasutake:2016,Fujisawa:2015}). Based on the above Eulerian formulation of rotating stars, many authors have attempted to describe rotating PNSs~(\citet{Goussard:1996dp,Goussard:1997bn,Sumiyoshi1999,Strobel:1999vn,Villain:2003ey,Camelio:2016fan}). Thanks to the sophistication of numerical relativity together with the rapid increase in computational power, on the other hand, some groups have started to use dynamical simulations~(\citet{Camelio:2019rsz,Zhou:2021upu,Fujibayashi:2021wvv}), although they are limited to rather short periods. When the dynamical simulations are put aside, the Eulerian formulation faces a difficulty when applied to the evolution of rotating stars: the angular momentum distribution \textit{in space} is not known a priori~(see assumption (ii) above) as a function of time, even if there is no angular momentum transfer in the star. This is because in such a situation the specific angular momentum is conserved for each individual fluid element but the angular momentum distribution in space changes in time. This problem is automatically solved if one employs a Lagrangian formulation, although that is highly nontrivial. That is our core idea in this paper. The organization of the paper is as follows. In Sec.~\ref{sec:method}, we present this Lagrangian formulation of our devising. We then explain rather in detail how this formulation is implemented as a numerical scheme, since this part is actually crucial. We describe the models constructed in this paper to demonstrate the capability of our new method in Sec.~\ref{sec:model} and show the results in Sec.~\ref{sec:result}. Finally, we summarize our findings and give future prospects in Sec.~\ref{sec:conclusion}. Throughout this paper, geometrized units, i.e., $c=G=1$, are used unless otherwise noted.
\section{Method}\label{sec:method} We start with a brief review of the ordinary numerical construction of axisymmetric rotating stars in general relativistic gravity, as given in e.g.~\citet{Komatsu:1989,Cook:1992,Cook:1993qj}. Under axisymmetry and stationarity without circulation flows\footnote{The metric ansatz for the case with circulation flows is discussed in \cite{Birkl:2010hc}. In this case, there are eight metric functions instead of four.}, the line element is given in general as \begin{equation} {\rm d} s^2 = -N^{2} {\rm d} t^2 +A^{-2}\left({\rm d} r^2 +r^2{\rm d} \theta^2\right) +B^{-2}r^2\sin^2\theta \left({\rm d}\varphi -\omega{\rm d} t\right)^2, \nonumber\\ \end{equation} where $N(r,\theta), A(r,\theta), B(r,\theta), $ and $\omega(r,\theta)$ are functions of the radius and the zenith angle. Note that the metric components are not written in the exponential form of~\cite{Komatsu1989b} but in the $3+1$ style of~\cite{Bonazzola:1993}. The matter is assumed to be a perfect fluid and its energy momentum tensor is given as \begin{eqnarray} T_{\mu\nu} = \left(\varepsilon+P\right)u_{\mu}u_{\nu} +Pg_{\mu\nu}, \end{eqnarray} where $u^{\mu}$ is the 4-velocity and $\varepsilon \equiv \rho +\rho_{th}$ is the energy density composed of the rest mass density~$\rho$ and the internal energy density~$\rho_{th}$. The four-velocity is given as \begin{eqnarray} u^{\mu}=\left(\frac{1}{N\sqrt{1-v^2}}, 0, 0, \frac{\Omega}{N\sqrt{1-v^2}}\right)^{T}, \end{eqnarray} where $v=\left(\Omega-\omega\right)N^{-1}B^{-1}r\sin\theta$ and $u^{\mu}u_{\mu}=-1$. The angular velocity observed by the Zero Angular Momentum Observer~(ZAMO) is denoted by $\Omega(r,\theta)$. The four metric functions~$N(r,\theta), A(r,\theta), B(r,\theta)$ and $\omega(r,\theta)$ are determined by solving the Einstein equation~$\displaystyle G_{\mu\nu}=8\pi T_{\mu\nu}$. The energy-momentum conservation equation, or the relativistic Euler equation, is given by $\nabla_{\nu}T_{\mu}^{\ \nu}=0$. The explicit differential forms of all these equations are presented in App.~\ref{sec:diffeqs}. If the matter is barotropic, i.e., its pressure depends only on the density~$P=P(\rho)$, the equation of state~(EOS) closes the system of equations. If the matter is baroclinic, i.e., the pressure depends on another thermodynamic quantity, say the specific entropy~$s$ as $P=P(\rho,s)$\footnote{More generally, the pressure may depend on yet another thermodynamic quantity such as the electron fraction: $P=P(\rho,s,Y_e)$.}, then the system is augmented with the equation for energy. One often needs to solve the radiation transport equation as well~(\cite{Pons:1998mm}). There is a successful strategy common to the methods proposed in the literature to numerically construct general relativistic rotating stars~(see ~\cite{Paschalidis:2016vmz} for a recent review): (i) the Einstein equation is first solved for a given matter configuration; fixing the metric so obtained, we solve the Euler equation for the density; replacing the matter configuration employed in the first step with the density distribution obtained in the second step, we iterate these two steps until convergence is achieved, as in the self-consistent field method for Newtonian rotating stars of~\cite{Hachisu:1986}; (ii) the Euler equation is analytically integrated in advance, which is actually possible for the barotropic case: $\varepsilon=\varepsilon(P)$ or $P=P(\varepsilon)$ with $F(\Omega)= u^{t}u_{\varphi}$, where $F(\Omega)$ is an arbitrary function.
In fact, the Euler equation under those assumptions: \begin{eqnarray} \frac{1}{\varepsilon+P}{\rm d} P -{\rm d} \ln u^{t} +u^{t}u_{\varphi}{\rm d}\Omega = 0,\label{eq:Euler} \end{eqnarray} leads to the first integral as follows: \begin{eqnarray} \mathcal{H}(P) -\ln u^{t} +\mathcal{F}(\Omega) = \mathcal{C}, \end{eqnarray} where $\mathcal{H}$ and $\mathcal{F}$ are given as \begin{eqnarray} \mathcal{H}(P) \equiv \int^{P} \frac{{\rm d} P'}{\varepsilon(P')+P'},\quad \mathcal{F}(\Omega) \equiv \int^{\Omega}F(\Omega'){\rm d}\Omega', \end{eqnarray} and $\mathcal{C}$ is an integration constant, which may be determined at the pole. Although the methods based on this strategy have been successful in the numerical construction of rotational configurations for a given functional form of the angular velocity~$F(\Omega)$, they are not suited for the study of secular evolutions of rotating stars. There are in fact two problems: firstly, these methods are in principle applicable only to the barotropic case; secondly, the angular velocity distribution should be given in advance, which is normally impossible. The latter point will be understood if one considers the secular thermal evolution with the specific angular momentum being conserved in each fluid element. In fact, the spatial distribution of the angular velocity changes in time in this case even though there is no angular momentum transfer. This last point motivated us to develop a Lagrangian method in this paper, with which such a change of the angular velocity distribution can be solved for automatically. Note also that it is not necessary to use the first integral in our formulation~(see Eqs.~\ref{eq:Euler_r} and \ref{eq:Euler_th}). \subsection{Lagrangian method} In the Eulerian formulation, one first introduces the coordinates rather arbitrarily according to the convenience of the application; once chosen, they are unchanged; we then solve for the density (or pressure) and the metric as functions of the coordinates so that they satisfy the Euler equation and the Einstein equation, respectively. As mentioned earlier, the angular velocity profile, or equivalently the angular momentum distribution, should normally be given as input. Note also that the mass and angular momentum that characterize a rotating star are normally obtained only after the solution is found. In sharp contrast, in the Lagrangian formulation we solve the Euler equation to obtain the coordinates, which are attached to fluid elements. More specifically, assigning the mass, specific angular momentum and specific entropy to each fluid element\footnote{The specific angular momentum and specific entropy are hence regarded as functions of the fluid elements. We may also give the electron fraction~$Y_e$.}, we look for the positions of the fluid elements that satisfy the Euler equation. In so doing, the density is a functional of the coordinate configuration. The pressure at each fluid element is derived with the equation of state once the density is known there, since the specific entropy is also known \textit{a priori}. The spatial distribution of the angular velocity, or equivalently the angular momentum, is obtained once the coordinate configuration is determined. It should be apparent that the mass and angular momentum are automatically conserved in this formulation. In the following we explain in more detail how these procedures are realized and implemented in the numerical construction of rotating stars.
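To make this bookkeeping concrete, the following minimal sketch (in Python; illustrative only, not the actual implementation, and with variable names of our own choosing) shows the data carried by each fluid element in such a Lagrangian scheme: the conserved labels are fixed, and only the coordinates are unknowns of the force-balance problem.
\begin{verbatim}
# Minimal sketch (illustrative, not the production code): each fluid element
# carries fixed labels (mass, specific angular momentum, specific entropy,
# electron fraction); only its coordinates (r, theta) are unknowns.
from dataclasses import dataclass

@dataclass
class FluidElement:
    dm: float      # mass assigned to the element (conserved)
    j_phi: float   # specific angular momentum (conserved if no transfer)
    s: float       # specific entropy (label)
    Y_e: float     # electron fraction (label)
    r: float       # radial coordinate  -- to be solved for
    theta: float   # zenith angle       -- to be solved for

# a small set of elements; an equilibrium solver would update only r and theta
grid = [FluidElement(dm=1.0e-4, j_phi=1.0e-3 * k, s=1.0, Y_e=0.3,
                     r=0.5 + 0.1 * k, theta=0.7) for k in range(5)]
print(grid[2])
\end{verbatim}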
\subsubsection{Density in a Finite Element} \begin{figure} \psfrag{x}{$x$} \psfrag{z}{$z$} \psfrag{r11}{\small \hspace{-5mm}$(r_{11},\theta_{11})$} \psfrag{r12}{\small \hspace{0mm}$(r_{12},\theta_{12})$} \psfrag{r13}{\small \hspace{0mm}$(r_{13},\theta_{13})$} \psfrag{r14}{\small \vspace{-5mm}$(r_{14},\theta_{14})$} \psfrag{r21}{\small \hspace{0mm}$(r_{21},\theta_{21})$} \psfrag{r22}{\small \vspace{5mm}$(r_{22},\theta_{22})$} \psfrag{r23}{\small \hspace{0mm}$(r_{23},\theta_{23})$} \psfrag{r24}{\small \hspace{-3mm}$(r_{24},\theta_{24})$} \psfrag{rho11}{\small \hspace{0mm}$\boxed{\bar\rho_{11}}$} \psfrag{rho12}{\small \hspace{0mm}$\boxed{\bar\rho_{12}}$} \psfrag{rho13}{\small \hspace{0mm}$\boxed{\bar\rho_{13}}$} \psfrag{rho21}{\small \hspace{0mm}$\boxed{\bar\rho_{21}}$} \psfrag{rho22}{\small \hspace{0mm}$\boxed{\bar\rho_{22}}$} \psfrag{rho23}{\small \hspace{0mm}$\boxed{\bar\rho_{23}}$} \includegraphics[width=7.5cm]{Figs/def_FE2.eps} \caption{Schematic picture of finite elements. The circles denote the Lagrange nodes and the evaluation points for the Euler equation. Other physical variables are defined at the crosses and the Einstein equation is solved there. } \label{fig:def_FE} \end{figure} In our new Lagrangian formulation of axisymmetric stars in permanent rotation, the starting point is to express the density in terms of the coordinate configuration. For its numerical realization on finite elements~(FEs), we approximate each fluid element in the meridian section by a quadrilateral FE and assign to it the mass~$\Delta m$, specific angular momentum~$\Delta j$, specific entropy~$\Delta s$, electron fraction~$\Delta Y_e$, etc. The configuration of an FE is specified by the coordinates~$(x_{jk}, y_{jk})$, or specifically $(r_{jk},\theta_{jk})$ in axisymmetric stars, of its four corners as shown in Fig.~\ref{fig:def_FE}, and its area may be calculated conveniently with the isoparametric formulation as follows~(see, e.g., \cite{bathe2006finite}). In this formulation, we introduce the natural coordinates~$(\alpha,\beta)$, $-1\le \alpha, \beta \le 1$, which specify an arbitrary point in the FE as \begin{eqnarray} x(\alpha,\beta) &=& \sum_{j=1}^{2}\sum_{k=1}^{2} \hat{N}_{j}(\alpha)\hat{N}_{k}(\beta)x_{jk},\nonumber\\ y(\alpha,\beta) &=& \sum_{j=1}^{2}\sum_{k=1}^{2} \hat{N}_{j}(\alpha)\hat{N}_{k}(\beta)y_{jk},\label{eq:map_FE1} \end{eqnarray} where $\hat{N}_{j} (j=1,2)$ are called the shape functions; $x_{jk}$ and $y_{jk}$ are the original coordinates at the four corners. Equation~\eqref{eq:map_FE1} is actually an interpolation formula and the linear shape functions are given as \begin{eqnarray} \hat{N}_{1}(\alpha) = \frac{1-\alpha}{2},\quad \hat{N}_{2}(\alpha) = \frac{1+\alpha}{2}. \end{eqnarray} In App.~\ref{sec:isoparametric}, we explain the basic idea underlying this formulation. The coordinate transformation from $(x,y)$ to $(\alpha,\beta)$ is characterized by the Jacobian matrix \begin{eqnarray} J = \begin{pmatrix} \displaystyle \frac{\partial x}{\partial \alpha} & \displaystyle \frac{\partial y}{\partial \alpha}\\ & \\ \displaystyle \frac{\partial x}{\partial \beta} & \displaystyle \frac{\partial y}{\partial \beta} \end{pmatrix}, \end{eqnarray} the elements of which are given, for instance, as \begin{eqnarray} \frac{\partial x}{\partial \alpha} = \sum_{j=1}^{2}\sum_{k=1}^{2} \frac{{\rm d} \hat{N}_{j}}{{\rm d} \alpha}(\alpha)\hat{N}_{k}(\beta)x_{jk}. \end{eqnarray}
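As a concrete illustration of this machinery, here is a minimal numerical sketch (in Python, with made-up corner coordinates; not the actual code) of the linear shape functions, the mapping~\eqref{eq:map_FE1}, and the Jacobian determinant that enters the element volume introduced below.
\begin{verbatim}
# Minimal sketch (illustrative corner values): linear shape functions,
# the map (alpha, beta) -> (x, y) from the corner values x_jk, y_jk,
# and the Jacobian determinant of an isoparametric quadrilateral element.
import numpy as np

def N(a):              # linear shape functions N_1, N_2 on [-1, 1]
    return np.array([(1.0 - a) / 2.0, (1.0 + a) / 2.0])

def dN(a):             # their derivatives with respect to the argument
    return np.array([-0.5, 0.5])

def jacobian(alpha, beta, x, y):
    """x, y: 2x2 arrays of corner values x_jk, y_jk."""
    dx_da = dN(alpha) @ x @ N(beta);  dy_da = dN(alpha) @ y @ N(beta)
    dx_db = N(alpha) @ x @ dN(beta);  dy_db = N(alpha) @ y @ dN(beta)
    return np.array([[dx_da, dy_da], [dx_db, dy_db]])

# corner coordinates of one element (made-up numbers)
x = np.array([[0.00, 0.10], [0.02, 0.12]])
y = np.array([[0.00, 0.01], [0.20, 0.22]])
J = jacobian(0.0, 0.0, x, y)      # evaluated at the element centre
print(abs(np.linalg.det(J)))      # |det J| enters the element volume
\end{verbatim}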
Adopting $x_{jk}=r_{jk}^3/3$ and $y_{jk}=\cos\theta_{jk}$ in the current case, we define the volume of the FE in flat space as \begin{eqnarray} \Delta V = 2\pi |\det J|. \end{eqnarray} Accounting for the curvature of the space, the baryonic density for the FE is given by \begin{eqnarray} \bar{\rho} = \frac{\Delta m}{\sqrt{-g}\Delta V}, \label{eq:density_def} \end{eqnarray} where $g$ is the determinant of the spacetime metric. Since we evaluate the Einstein equation at each cell boundary, as marked with crosses in Fig.~\ref{fig:def_FE}, we need the densities at these boundaries. In this paper, they are given as a simple arithmetic mean of the densities of the adjacent FEs, $\rho_{jk} \equiv \left(\bar{\rho}_{jk} + \bar{\rho}_{jk-1}\right)/2$. Note again that the density itself is the variable to be solved for in the Eulerian formulation, whereas the coordinates are to be solved for in the Lagrangian formulation. \subsubsection{Isoparametric interpolation and differentiation} In order to solve the Einstein and Euler equations in their differential forms, Eqs.~\eqref{eq:Euler_r}-\eqref{eq:Etph}, in the Lagrangian formulation, we need to evaluate not only the values of variables but also their derivatives at an arbitrary position. This is not a difficult task in the FE description. Since we need to evaluate second-order derivatives in the Einstein equation, we employ second-order interpolation, in which we use quadratic shape functions and the values not of four but of nine nearby points. Then the coordinates are expressed in terms of the natural coordinates~$\alpha$ and $\beta$ as \begin{eqnarray} x(\alpha,\beta) &=& \sum_{j=1}^{3}\sum_{k=1}^{3} \hat{M}_{j}(\alpha)\hat{M}_{k}(\beta)x_{jk},\label{eq:xiso}\\ y(\alpha,\beta) &=& \sum_{j=1}^{3}\sum_{k=1}^{3} \hat{M}_{j}(\alpha)\hat{M}_{k}(\beta)y_{jk},\label{eq:yiso} \end{eqnarray} where the shape functions are given by \begin{eqnarray} \hat{M}_{1}(\alpha) &=& -\frac{\alpha}{2}(1-\alpha),\nonumber\\ \hat{M}_{2}(\alpha) &=& (1+\alpha)(1-\alpha),\nonumber\\ \hat{M}_{3}(\alpha) &=& \frac{\alpha}{2}(1+\alpha). \end{eqnarray} We expand any function~$\phi(x(\alpha,\beta),y(\alpha,\beta))$ with the same shape functions as \begin{eqnarray} \phi (\alpha,\beta) &=& \sum_{j=1}^{3}\sum_{k=1}^{3} \hat{M}_{j}(\alpha)\hat{M}_{k}(\beta)\phi_{jk}. \label{eq:fiso} \end{eqnarray} It is straightforward to evaluate the derivatives of such a function with the derivatives of the shape functions: \begin{eqnarray} \frac{{\rm d}\hat{M}_{1}}{{\rm d} \alpha}(\alpha) &=& -\frac{1}{2} +\alpha,\nonumber\\ \frac{{\rm d}\hat{M}_{2}}{{\rm d} \alpha}(\alpha) &=& -2\alpha,\nonumber\\ \frac{{\rm d}\hat{M}_{3}}{{\rm d} \alpha}(\alpha) &=& \frac{1}{2} +\alpha.
\end{eqnarray} For instance, the first derivative with respect to $x$ can be obtained at any point as \begin{eqnarray} \frac{\partial \phi}{\partial x} &=& \sum_{j,k=1}^{3} \left\{ \frac{\partial\alpha}{\partial x}\frac{{\rm d} \hat{M}_{j}}{{\rm d} \alpha}\hat{M}_{k} +\frac{\partial\beta}{\partial x}\hat{M}_{j}\frac{{\rm d} \hat{M}_{k}}{{\rm d} \beta} \right\}\phi_{jk}, \end{eqnarray} where the coefficients~$\displaystyle\frac{\partial\alpha}{\partial x}$ and~$\displaystyle\frac{\partial\beta}{\partial x}$ are computed straightforwardly from Eq.~\eqref{eq:xiso} as the elements of the inverse of the Jacobian matrix: \begin{eqnarray} J^{-1} = \begin{pmatrix} \displaystyle \frac{\partial \alpha}{\partial x} & \displaystyle \frac{\partial \beta}{\partial x}\\ & \\ \displaystyle \frac{\partial \alpha}{\partial y} & \displaystyle \frac{\partial \beta}{\partial y} \end{pmatrix} = \frac{1}{\det J} \begin{pmatrix} \displaystyle \frac{\partial y}{\partial \beta} & \displaystyle -\frac{\partial y}{\partial \alpha}\\ & \\ \displaystyle -\frac{\partial x}{\partial \beta} & \displaystyle \frac{\partial x}{\partial \alpha} \end{pmatrix}. \end{eqnarray} \subsubsection{Angular momentum} There are different definitions of the specific angular momentum employed in the literature. For example, in a stationary and axisymmetric spacetime, the following specific angular momentum~$\ell$ is introduced: \begin{eqnarray} \label{eq:lmom} \ell = -\frac{u_{\varphi}}{u_t} = \frac{\left(\Omega-\omega\right) r^2\sin^2\theta} {N^2B^2+\omega\left(\Omega-\omega\right)r^2\sin^2\theta}, \end{eqnarray} which is conserved along a stream line~(\cite{Birkl:2010hc}). From the equation of motion, which can be written as \begin{eqnarray} \nabla^{\nu} T_{\mu\nu} &=& u^{\nu}\nabla_{\nu}\left[\left(\varepsilon+P\right)u_{\mu}\right] +\left(\varepsilon+P\right)u_{\mu}\nabla_{\nu}u^{\nu} +\nabla_{\mu}P \nonumber\\ &=& u^{\nu}\nabla_{\nu}\left[\frac{\left(\varepsilon+P\right)}{\rho}u_{\mu}\rho\right] -\frac{\left(\varepsilon+P\right)}{\rho}u_{\mu}u^{\nu}\nabla_{\nu}\rho +\nabla_{\mu}P\nonumber\\ &=& \rho u^{\nu}\nabla_{\nu}\left[\frac{\left(\varepsilon+P\right)}{\rho}u_{\mu}\right] +\nabla_{\mu}P = 0, \end{eqnarray} where the continuity equation, $\rho\nabla_{\nu}u^{\nu} = -u^{\nu}\nabla_{\nu}\rho$, is substituted, one finds that for a spacetime with the asymptotically timelike and axial Killing vectors there are actually two quantities conserved along each stream line: \begin{eqnarray} j_{t} = \frac{\varepsilon+P}{\rho} u_{t}\quad \mathrm{and}\quad j_{\varphi} = \frac{\varepsilon+P}{\rho} u_{\varphi}. \end{eqnarray} The specific angular momentum defined in Eq.~\eqref{eq:lmom} is nothing but the ratio of these two:~$\ell=-j_{\varphi}/j_{t}$. Since we are interested in a formulation that can be applied to the evolution of rotating stars, the existence of the timelike Killing vector cannot be assumed. It should be noted that $j_{\varphi}$ is still conserved along the stream line even in this case as long as the spacetime is axisymmetric. We will hence employ this specific angular momentum and assume that it is conserved for each fluid element unless some mechanism to exchange angular momenta between fluid elements is in operation. Note, however, that the specific angular momentum~$\ell$ is still very convenient from a numerical point of view.
In fact, it is simple to convert $\ell$ to the angular velocity~$\Omega$ \begin{eqnarray} F\left(\Omega\right) = u^tu_{\varphi} = \frac{\ell}{1-\Omega\ell}, \label{eq:Fomegaell} \end{eqnarray} where $u_{\mu}u^{\mu}=-1$ and $u^{\varphi}=\Omega u^{t}$. \subsection{Diagnostics} The following global quantities are useful for characterizing rotational equilibria. Following ~(\citet{Cook:1992,Nozawa1998,Paschalidis:2016vmz}), we define the baryon mass, proper mass and gravitational mass, respectively, as \begin{eqnarray} M_b &=& 2\pi\int \frac{\rho }{A^2B\sqrt{1-v^2}} r^2\sin\theta{\rm d} r{\rm d}\theta,\\ M_p &=& 2\pi\int \frac{\epsilon}{A^2B\sqrt{1-v^2}} r^2\sin\theta{\rm d} r{\rm d}\theta,\\ M &=& 2\pi\int \frac{1}{A^2B^2} \left[ NB\left\{ \frac{\left(\epsilon +P\right)\left(1+v^2\right)}{\left(1-v^2\right)} +2P\right\}\right.\nonumber\\ &&\left. \quad\quad\quad\quad\quad +2rv\omega\sin\theta \frac{\left(\epsilon +P\right)}{1-v^2} \right] r^2\sin\theta{\rm d} r{\rm d}\theta. \end{eqnarray} On the other hand, the quantities employed to measure how fast the rotation is are the total angular momentum, rotational energy and gravitational energy, respectively, as \begin{eqnarray} J &=& 2\pi\int \frac{\left(\epsilon +P\right)v}{A^2B^2\left(1-v^2\right)} r^3\sin^2\theta{\rm d} r{\rm d}\theta,\\ T &=& 2\pi\int \frac{\left(\epsilon +P\right)v\Omega}{A^2B^2\left(1-v^2\right)} r^3\sin^2\theta{\rm d} r{\rm d}\theta,\\ W &=& M_p +T -M. \end{eqnarray} \subsubsection{Isoparametric integration} The integrations above are evaluated in each FE and summed as follows: \begin{eqnarray} S &\equiv& \int\!\!\!\!\int\!\!\! \phi(x,y){\rm d} x{\rm d} y = \int\!\!\!\!\int\!\!\! \phi\left(x(\alpha,\beta),y(\alpha,\beta)\right) |\det J|{\rm d}\alpha{\rm d}\beta\nonumber\\ &=& \int\!\!\!\!\int\!\!\! \phi \left|\frac{\partial x}{\partial\alpha}\frac{\partial y}{\partial\beta} -\frac{\partial x}{\partial\beta}\frac{\partial y}{\partial\alpha}\right| {\rm d}\alpha{\rm d}\beta\nonumber\\ &=& \sum_{i,j,k,l,m,n}^{2}\!\!\!\!\!\! \frac{\phi_{ij} x_{kl} y_{mn}}{4}\left|(-1)^{k+n}\mathcal{N}_{im}\mathcal{N}_{jl} -(-1)^{m+l}\mathcal{N}_{ik}\mathcal{N}_{jn} \right|,\nonumber\\ \end{eqnarray} where the shape functions are analytically integrated over the ranges $-1\leq \alpha \leq 1$ and $-1\leq \beta \leq 1$ in advance as \begin{eqnarray} \mathcal{N}_{ij} = \frac{1}{6} \begin{pmatrix} 1 & 2\\ 2 & 1 \end{pmatrix}. \end{eqnarray} \subsection{Strategy to solve the system} To solve the whole system, we adopt the traditional iterative scheme~(\citealp{Hachisu:1986,Komatsu:1989}). Specifically, it consists of two parts: (i) solving the Einstein equation, Eqs.~\eqref{eq:Ett}-\eqref{eq:Etph}, with the matter quantities unchanged and (ii) solving the Euler equation, Eqs.~\eqref{eq:Euler_r} and~\eqref{eq:Euler_th}, with the metric functions fixed. Both equations in their differential forms are discretized on the FE grid and the resultant nonlinear algebraic equations are solved alternately until the changes in the solutions for both equations become smaller than certain values. As may be understood from the fact that many efforts have been made to choose a nice set of equations even in the Eulerian formulation, these nonlinear equations are difficult to solve and, it turns out, the original Newton-Raphson method does not work for our equations. 
In order to tackle this problem, we deploy two new schemes of our own devising: (i) for the Einstein equation, the W4IX method is applied; as described in App.~\ref{app:W4IX}, it is an extension of the original W4 method and requires only $\mathcal{O}(N^2)$ calculations to obtain a solution, in contrast to $\mathcal{O}(N^3)$ operations in the original W4 method with either the UL or LH decomposition ~(see \citet{Okawa:2018smx,Fujisawa:2018dnh} for details); (ii) for the Euler equation on the other hand, we find that a particular iteration scheme dubbed the slice shooting is very powerful to obtain convergence; as demonstrated in App.~\ref{app:sliceshooting}, the Jacobian matrix for the nonlinear force-balance equations in the $\theta$-direction is especially ill-conditioned as the size of the matrix becomes bigger; we find it better to solve a single radial slice at a time with all the other slices fixed; we repeat this for all the slices, starting from the axis, proceeding to the equator and going back to the axis; this cycle is repeated until the convergence is obtained globally. These two schemes turn out to be very successful. They are robust and also efficient and actually the key ingredients of our new formulation. \section{Models}\label{sec:model} In this section, we describe some models we employ in this paper to demonstrate the capability of our new method. As explained already, in this formulation we first assign the mass and specific angular momentum to the grid points and find their configuration that satisfies the Euler equation, or force-balance equations, as well as the spacetime metric that is consistent with the matter distribution by solving the Einstein equation. The spatial profiles of the density and angular velocity, respectively, are obtained from Eqs.~\eqref{eq:density_def} and~\eqref{eq:lmom}, respectively, after the equilibrium configurations of the matter and spacetime are derived. Having an application to PNSs in mind, we also allocate the electron fraction to the Lagrange grid points. The ultimate goal of this project is to study secular evolutions of relativistic rotating stars. The models are hence divided into two groups: \begin{itemize} \item Stationary rotating stars constructed either with a barotropic EOS or with a baroclinic EOS and their accuracies investigated closely with a couple of diagnostic quantities; a comparison made with an Eulerian code, \item mock evolutionary sequences of rotating stars considered for some scenarios: cooling, mass-loss, and mass-accretion; they are meant for demonstrations. \end{itemize} Each model is described more in detail below. \subsection{Stationary models} Here we adopt the so-called $j-$const law~(\citealp{Komatsu1989b}). This is actually the easiest case for our formulation, in which the specific angular momentum is assigned to the grid points and fixed. Then the angular velocity profile is derived after the equilibrium configuration is obtained. We begin with a barotropic EOS. In this case, as mentioned in introduction, we analytically obtain a first integral of the Euler equation, which gives for the present case the relation that the angular velocity should satisfy as \begin{eqnarray} F(\Omega) &\equiv& u^{t}u_{\varphi}\nonumber\\ &=& A_c^2(\Omega_c-\Omega) = \frac{(\Omega-\omega)r^2\sin^2\theta } {N^2B^2-(\Omega-\omega)^2r^2\sin^2\theta },\nonumber\\ \label{eq:law_diffrot} \end{eqnarray} where $A_c$ and $\Omega_c$ are constants. 
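As an illustration, a minimal root-finding sketch (in Python, with made-up values for $N$, $B$, $\omega$; the bracket $[\omega,\Omega_c]$ is our assumption, not a general prescription) that recovers $\Omega$ pointwise from this relation reads:
\begin{verbatim}
# Minimal sketch (made-up values): solve A_c^2 (Omega_c - Omega) = F(Omega)
# for Omega at one point, where
# F(Omega) = (Omega - omega) R^2 / (N^2 B^2 - (Omega - omega)^2 R^2),
# with R = r sin(theta). Bracketing Omega in [omega, Omega_c] is an assumption.
from scipy.optimize import brentq

def omega_from_jconst(N, B, omega, r, sin_th, A_c, Omega_c):
    R2 = (r * sin_th) ** 2
    def residual(Om):
        F = (Om - omega) * R2 / (N**2 * B**2 - (Om - omega)**2 * R2)
        return A_c**2 * (Omega_c - Om) - F
    return brentq(residual, omega, Omega_c)

# illustrative metric values at one grid point
print(omega_from_jconst(N=0.9, B=0.95, omega=0.01, r=5.0, sin_th=1.0,
                        A_c=1.0, Omega_c=0.05))
\end{verbatim}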
Once the metric functions are obtained, we can derive the angular velocity as a function of $r$ and $\theta$, which should be compared with the profile derived directly from the equilibrium configuration. Note that this relation reduces to a cylindrical rotation law in the non-relativistic limit. As mentioned repeatedly, our new formulation can accommodate any EOS, which may depend not only on density and entropy but also on other quantities such as the electron fraction and the mass fractions of various nuclei. In this paper, we employ a polytropic type of EOS for simplicity: \begin{eqnarray} P(\rho, s(r,\theta)) = K(r,\theta) \rho^{1+\frac{1}{N}}, \label{eq:eos} \end{eqnarray} where $N$ is the polytropic index; note that $K$, which is normally a constant or a function of the entropy alone, is assumed here to be a function of $r$ and $\theta$ as \begin{eqnarray} K(r,\theta) \equiv \hat{K}\left(K_0 +\epsilon_1\frac{r^2}{R_e^2}\sin^2\theta +\epsilon_2\frac{r^2}{R_e^2}\cos^2\theta +\epsilon_3\frac{r^2-R_e^2}{R_e^2} \right), \label{eq:eosK} \end{eqnarray} where $R_e$ is the equatorial surface radius of the star; $K_0, \epsilon_1, \epsilon_2, \epsilon_3$ are constants. The polytropic index is set to $N=1$ in the following. For the stationary models, we set $K_0=1$ as well. Note that we use geometrized units~$c=G=1$ and put $\hat{K}=1$ in actual calculations, following~\cite{Cook:1992}. It should be apparent that the baroclinicity is introduced by the non-constancy of $K$ in this EOS. In fact, we set all $\epsilon$'s to $0$ for the barotropic case above. In the baroclinic case, we solve the Einstein and Euler equations keeping the specific angular momentum attached to the grid points the same as that of the reference model. We have to update the value of $K$ at each grid point according to its current position and Eq.~\eqref{eq:eosK}. \begin{figure*} \begin{tabular}{cc} \includegraphics[width=7.cm]{./Figs/codeRELUME.eps} & \includegraphics[width=7.cm]{./Figs/codeRNS.eps}\\ (a) & (b) \end{tabular} \caption{Density contours of the uniformly rotating star computed (a) by our new code and (b) by the RNS code.} \label{fig:comparison} \end{figure*} \begin{table*} \centering \begin{tabular}{cccccc} Code & $N_r\times N_{\theta}$ & $\rho_c$ & $r_p/r_e$ & $M$ & $M_b$\\\hline our code & $32\times 17$ & $0.0562$ & $0.874$ & $0.0958$ & $0.100$\\ RNS code & $65\times 129$ & $0.0562$ & $0.870$ & $0.0967$ & $0.101$ \\\hline \end{tabular} \caption{Parameters for comparison with the public RNS code by \citet{Stergioulas1995}.} \label{table:comparison} \end{table*} \subsection{Evolutionary models} In its real evolution, the PNS cools down through neutrino emission and experiences a sequence of quasi-equilibria during a period of about a minute. If it rotates, as normally expected, these are rotational equilibria with different thermal and lepton contents but with the same specific angular momentum if the angular momentum transfer is negligible. To understand such secular evolutions of the PNS, it is necessary to construct a series of configurations of rotational equilibria and to compute the neutrino transfer on top of them. The main aim of this paper is to provide a new numerical tool to treat the first step, i.e., the building of a sequence of rotational equilibria that are supposed to represent the secular evolution of the rotating PNS. The purpose of the models considered here is to demonstrate the capability of our new Lagrangian formulation in full general relativity with some mock evolutions.
The application of the method to more realistic evolutions is currently underway and will be presented elsewhere in the near future. For the mock evolutions, we consider the following three scenarios (a minimal sketch of the corresponding update rules is given after the list). We stress that in our formulation we need to give an angular velocity distribution only at the beginning of each sequence; it is automatically derived thereafter during the evolution\footnote{We need to specify the specific angular momentum of the accreting matter in scenario~(iii).}. \begin{enumerate} \item \underline{Cooling model}: The first model is intended to mimic a cooling evolution by the neutrino emission, in which the PNS shrinks as the thermal energy is carried away by neutrinos. In realistic simulations~(\citet{Pons:1998mm,Villain:2003ey}), we need to calculate the entropy~(and electron fraction) evolution with the neutrino transfer. For the demonstrative purpose here, it suffices to prescribe the time-dependence of $K$ in the polytropic EOS by hand so that it mimics the cooling. \item \underline{Wind model}: The second model mimics the evolution of a rotating star via mass-loss from the surface as a wind. The neutrino-driven wind from the PNS has been considered in~\citet{Meyer:1992zz,Witti:1992fn,Woosley:1994ux,Otsuki:1999kb,Sumiyoshi:1999rh,Terasawa:2001wn,Wanajo:2001pu,Panov:2008tr}. Such a stellar wind carries away not only mass but also angular momentum. In this toy model, we emulate the mass-loss process by taking a certain fraction of the mass and angular momentum away from the grid points near the stellar surface at a certain rate. \item \underline{Accretion model}: In the third model, we consider the evolution induced by the accretion of matter, the process opposite to the one in model~(ii). Recent core-collapse supernova simulations~(e.g.~\citet{Muller:2018utr,Burrows2019,Nakamura:2019snn,Janka:2021deg}) indicate that asymmetric accretion flows continue to exist for a few seconds after the revival of the stalled shock wave and may affect the angular momentum of the PNS~(\citet{Blondin2007,Fernandez2010,Wongwathanarat:2012zp,Guilet:2013bxa,Kazeroni:2017fup}). In this model, we simply increase the masses and angular momenta of the grid points near the surface at a certain rate, to consider the spin-up evolution of the PNS. \end{enumerate}
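The following minimal sketch (in Python; toy update rules with placeholder fractions, not the production code) summarizes how the three scenarios act on fluid-element records carrying a mass \texttt{dm} and a specific angular momentum \texttt{j\_phi}, as in the sketch given in the method section. After each such update, a new equilibrium configuration is computed as described above.
\begin{verbatim}
# Minimal sketch (toy update rules, placeholder fractions): the three mock
# evolution scenarios, acting on fluid-element records with attributes
# dm (mass) and j_phi (specific angular momentum).
def cooling_step(K0, factor=0.95):
    # Cooling: rescale the polytropic "constant" K0 by hand (e.g. 1 -> 0.95).
    return K0 * factor

def wind_step(surface_elements, loss_fraction=0.05):
    # Wind: remove a fraction of mass and angular momentum near the surface.
    for el in surface_elements:
        el.dm    *= (1.0 - loss_fraction)
        el.j_phi *= (1.0 - loss_fraction)

def accretion_step(surface_elements, gain_fraction=0.05):
    # Accretion: add mass and angular momentum to the surface elements.
    for el in surface_elements:
        el.dm    *= (1.0 + gain_fraction)
        el.j_phi *= (1.0 + gain_fraction)
\end{verbatim}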
\begin{table*} \centering \begin{tabular}{cccccccc} Model & Number of Grids ($N_r\times N_{\theta}$) & $R_p/R_e$ & $M_b$ & $M$ & $J$ & $T/|W|$ & GRV \\\hline Reference & $32\times 17$ & $0.893$ & $0.0999$ & $0.0961$ & $3.91\times 10^{-3}$ & $6.02\times 10^{-2}$ & $0.976\times 10^{-3}$ \\ Middle-Resolution & $16\times 9$ & $0.891$ & $0.0999$ & $0.0958$ & $3.77\times 10^{-3}$ & $5.61\times 10^{-2}$ & $4.51\times 10^{-2}$ \\ Low-Resolution & $8\times 5$ & $0.885$ & $0.0986$ & $0.0950$ & $1.93\times 10^{-3}$ & $3.66\times 10^{-2}$ & $1.22\times 10^{-1}$ \\ EOS-a & $32\times 17$ & $0.874$ & $0.0994$ & $0.0946$ & $3.95\times 10^{-3}$ & $5.14\times 10^{-2}$ & $2.07\times 10^{-3}$ \\ EOS-b & $32\times 17$ & $0.911$ & $0.100$ & $0.0956$ & $3.87\times 10^{-3}$ & $5.34\times 10^{-2}$ & $3.78\times 10^{-3}$ \\ EOS-c & $32\times 17$ & $0.894$ & $0.0999$ & $0.0954$ & $3.91\times 10^{-3}$ & $5.32\times 10^{-2}$ & $2.91\times 10^{-3}$ \\ Middle-Rotation & $32\times 17$ & $0.812$ & $0.0997$ & $0.0951$ & $5.79\times 10^{-3}$ & $9.88\times 10^{-2}$ & $6.75\times 10^{-3}$ \\ Cooling & $32\times 17$ & $0.895$ & $0.0989$ & $0.0955$ & $3.85\times 10^{-3}$ & $6.47\times 10^{-2}$ & $8.02\times 10^{-3}$ \\ Wind & $32\times 17$ & $0.890$ & $0.0980$ & $0.0943$ & $3.80\times 10^{-3}$ & $5.90\times 10^{-2}$ & $7.27\times 10^{-3}$ \\ Accretion & $32\times 17$ & $0.895$ & $0.102$ & $0.0981$ & $4.22\times 10^{-3}$ & $6.71\times 10^{-2}$ & $4.74\times 10^{-3}$ \end{tabular} \caption{ List of models: The first column shows the model name. The second gives the number of grid points, followed by the ratio of the polar radius to the equatorial radius of the star, the baryon mass, the gravitational mass, the total angular momentum, $T/|W|$, and the relativistic Virial relation~(GRV2) of~\citet{Nozawa1998}. } \label{table:model} \end{table*} \section{Results}\label{sec:result} In this section, starting from the comparison of our solution with that of the public RNS code, we show barotropic stationary models and baroclinic stationary models, followed by the three mock evolutionary models, namely, the cooling, wind, and accretion models. \subsection{Uniformly rotating stars} Fortunately, it is possible to directly compare our result in the Lagrangian formulation with the solution given by the well-known public RNS code for uniformly rotating stars~(\cite{Stergioulas1995}). In Fig.~\ref{fig:comparison}, we show the density contours of a uniformly rotating star computed (a) by our new code in the Lagrangian formulation and (b) by the RNS code from the literature. Since the input parameters used to construct equilibrium configurations are different in the Eulerian and Lagrangian formulations, we finetune those parameters to obtain rotating stars that are as close to each other as possible; their global quantities are compared in Table~\ref{table:comparison}. Note also that the relativistic Virial relation of our solution is $8.94\times 10^{-3}$, which indicates the accuracy of the solution~(\cite{Nozawa1998}). \subsection{Stationary models with barotropic EOS} \begin{figure} \includegraphics[width=8.5cm]{./Figs/DiffRotModel.eps} \caption{Stationary barotropic rotating stars with the differential rotation law. The shape of the reference model with the red circles is compared with those of the spherical model with the green lines and the middle-rotation model with the black circles.
} \label{fig:diffrot} \end{figure} \begin{figure} \begin{tabular}{c} \includegraphics[width=8.5cm]{./Figs/ResolutionTest.eps}\\ (a) \\ \includegraphics[width=8.5cm]{./Figs/ResolutionTest_surface.eps}\\ (b) \end{tabular} \caption{(a) Shape of the rotating stars in the reference model with different resolutions. The lowest-, middle- and highest-resolution solutions are displayed with the green crosses, the red open circles, and the black filled circles, respectively. (b) Surface radii as a function of the number of radial meshes. The polar radii are shown by the purple squares, while the equatorial radii are shown by the red triangles; both are fitted by $R=\alpha N_r^{-\beta} +\gamma$ curves.} \label{fig:resolution} \end{figure} We first focus on the barotropic rotating stars with $\epsilon_1=\epsilon_2=\epsilon_3=0$ and the EOS described in Eq.~\eqref{eq:eos}. The faster the angular velocity, the more oblate the star. In Fig.~\ref{fig:diffrot}, the spherical and middle-rotation models are compared with the reference model. Furthermore, the resolution dependence is investigated in Fig.~\ref{fig:resolution}. The solution converges as the resolution increases. As shown in Fig.~\ref{fig:resolution}~(b), the convergence order is 1.5, which is expected because we employ the first-order FE scheme to calculate the baryon density while we use the second-order FE scheme to evaluate the derivatives. In this work, we adopt the model with $N_r\times N_{\theta}=32\times 17$ as the reference model for the other computations. Note that our solutions with this resolution~($N_r\times N_{\theta}=32\times 17$) typically have a Virial relation of $\mathcal{O}(10^{-3})$ in Table~\ref{table:model}. \subsection{Stationary models with baroclinic EOS} Next, we investigate the effect of the baroclinicity on the equilibrium configuration of rotating stars. In Fig.~\ref{fig:baroclinic}, we observe the baroclinic feature, namely the misalignment between the pressure and density gradients, as shown in Figs.~\ref{fig:baroclinic}~(b), (d) and (f). To clearly see the difference between the barotropic and baroclinic models, we show the angular velocity as a function of the specific angular momentum in Figs.~\ref{fig:baroclinic}~(a), (c) and (e), as done in~\cite{Camelio:2019rsz}. Isentropic rotating stars in general relativity satisfy the condition $\ell = \ell(\Omega)$, which follows from Eqs.~\eqref{eq:Fomegaell} and~\eqref{eq:law_diffrot} and is expressed as the black solid line. Although a type of shellular rotation has been considered as the initial rotation in core-collapse supernova simulations~(e.g.~\citet{Yamada:1994,Harada:2018ubo,Iwakami:2021pwo}), self-consistent stationary solutions with baroclinicity have been obtained only in Newtonian gravity so far~(\citet{Roxburgh2006,Fujisawa:2015}). Here we obtain for the first time such a self-consistent stationary solution with baroclinicity in general relativity, shown in Fig.~\ref{fig:baroclinic}~(d). Since the model~(EOS-c) does not satisfy the H{\o}iland criterion in the Newtonian limit and may be dynamically unstable~(\cite{Tassoul:1978}), we leave the detailed investigation of the dynamical stability of baroclinic rotating stars to a future study. \begin{figure*} \begin{tabular}{cc} \includegraphics[width=8.5cm]{./Figs/Comp_Baroclinic_model1.eps} & \includegraphics[width=8.5cm]{./Figs/Baroclinic_eos1.eps}\\ (a) $\Omega$ and $\ell$ of the model EOS-a & \begin{minipage}{0.5\textwidth} (b) Model EOS-a.
(Left) Contours of pressure and density and (Right) color map of $K$ and contour of $\Omega$ \end{minipage} \\ \includegraphics[width=8.5cm]{./Figs/Comp_Baroclinic_model2.eps} & \includegraphics[width=8.5cm]{./Figs/Baroclinic_eos2.eps} \\ (c) $\Omega$ and $\ell$ of the model EOS-b & \begin{minipage}{0.5\textwidth} (d) Model EOS-b. (Left) Contours of pressure and density and (Right) color map of $K$ and contour of $\Omega$ \end{minipage} \\ \includegraphics[width=8.5cm]{./Figs/Comp_Baroclinic_model3.eps} & \includegraphics[width=8.5cm]{./Figs/Baroclinic_eos3.eps} \\ (e) $\Omega$ and $\ell$ of the model EOS-c & \begin{minipage}{0.5\textwidth} (f) Model EOS-c. (Left) Contours of pressure and density and (Right) color map of $K$ and contour of $\Omega$ \end{minipage} \end{tabular} \caption{Baroclinic rotating stars in general relativity. The top, middle and bottom panels are the results of the models EOS-a ($\epsilon_1=0.15, \epsilon_2=\epsilon_3=0$), EOS-b ($\epsilon_2=0.15, \epsilon_3=\epsilon_1=0$) and EOS-c ($\epsilon_3=0.05, \epsilon_1=\epsilon_2=0$) in the entropy configuration, i.e., Eq.~\eqref{eq:eosK}, respectively. The left column shows the angular velocity as a function of the specific angular momentum for the baroclinic rotating stars, together with that of the barotropic ones shown by the black line. The right column displays the misalignment between the density~(dashed) and pressure~(solid) contours in its left panel, which directly indicates the baroclinicity. In the right panel of the right column, the color map of $K(r,\theta)$ is shown as well as the contours of the angular velocity. } \label{fig:baroclinic} \end{figure*} \subsection{Cooling of rotating stars} As an evolution test by cooling, we vary the polytropic constant~$K_0$ while keeping the other parameters constant. In Fig.~\ref{fig:coolingtest}~(a), we compare the cooling model~($K_0=0.95$) with the reference model~($K_0=1$). Rotating stars shrink so that the nonlinear balance between the pressure and the gravity remains satisfied. We emphasize that we do not impose any rotational law explicitly but keep the specific angular momentum~$j_{\varphi}$ of each FE. As mentioned, barotropic rotating stars satisfy the relation~$\ell=\ell(\Omega)$ between $\ell$ and $\Omega$ for any equilibrium configuration. In fact, Fig.~\ref{fig:coolingtest}~(b) shows that the angular velocity of the cooling model as a function of the specific angular momentum still falls on such a line, which is highly non-trivial in this Lagrangian formulation. \subsection{Wind and accretion models} For the wind model, we assume mass-loss from the outer layer of the rotating star. Specifically, we decrease the specific masses of the outer two layers by hand as a toy model and compute a new equilibrium configuration. For the accretion model, on the other hand, we assume for simplicity that the accreting matter adds $5\%$ of both the mass and the angular momentum of the outer layers of the rotating star. Through the mass-loss or accretion, the total mass and total angular momentum are expected to change accordingly. In Fig.~\ref{fig:masschange}, we show $T/|W|$ as a function of the mass change.
\begin{figure} \begin{tabular}{c} \includegraphics[width=8.5cm]{./Figs/Cooling_evolution.eps}\\ (a)\\ \includegraphics[width=8.5cm]{./Figs/Cooling_Omegalphi.eps}\\ (b) \end{tabular} \caption{(a) Rotating stars for the cooling model~($K_0=0.95$) compared to the reference model~($K_0=1$). (b) Angular velocity as a function of the specific angular momentum for the reference and cooling models. } \label{fig:coolingtest} \end{figure} \begin{figure} \begin{tabular}{c} \includegraphics[width=8.5cm]{./Figs/EvolutionModel.eps} \end{tabular} \caption{$T/|W|$ as a function of the mass variation by the wind and accretion. } \label{fig:masschange} \end{figure} \section{Conclusions}\label{sec:conclusion} We have proposed a new scheme to construct the equilibrium configurations of general relativistic rotating stars in the Lagrangian formulation, which maintains the mass and angular momentum automatically when no angular momentum transfer exists, in contrast to the Eulerian formulation. We employed the traditional iterative scheme~(\cite{Hachisu:1986}) to solve the whole system derived from discretizing the Einstein and Euler equations on the finite-element grid, which consists of two new schemes of our own devising: for the Einstein equation, the W4IX method of App.~\ref{app:W4IX} is applied, and for the Euler equation, the slice-shooting scheme of App.~\ref{app:sliceshooting} is adopted. Our formulation needs an angular velocity profile only at the initial time and can employ any equation of state. It enables us to find the evolutionary sequence by keeping the mass, specific angular momentum, specific entropy, electron fraction and mass fractions of various nuclei of each fluid element fixed. In order to demonstrate the capability of our new formulation, we first compare our result for uniformly-rotating stars with that given by the public RNS code~(\cite{Stergioulas1995}). Next, we present the stationary rotating stars with the barotropic and baroclinic equations of state. Finally, we consider the mock evolutionary sequences for cooling, wind and accretion as toy models and show those results. In this paper, we adopt simple models for finding the evolutionary sequence, since our focus is on the way of constructing the equilibrium configuration in the Lagrangian formulation. The application of our new method to more realistic situations is currently underway and will be presented in the near future. \section*{Acknowledgements} We would like to thank K-i. Maeda for helpful comments. This work was supported by JSPS KAKENHI Grant Numbers 20K03951, 20K03953, 20K14512, 20H04728, 20H04742, and by the Waseda University Grant for Special Research Projects (Project Number: 2019C-640). \section*{Data Availability} The data underlying this paper will be available from the corresponding author on reasonable request. \bibliographystyle{mnras}
{'timestamp': '2022-04-22T02:20:58', 'yymm': '2204', 'arxiv_id': '2204.09943', 'language': 'en', 'url': 'https://arxiv.org/abs/2204.09943'}
\section{Introduction} \noindent Future experimental programs at the High-Luminosity Large Hadron Collider (HL-LHC)~\cite{ATLAS:2013hta, CMS:2013xfa} and the International Linear Collider (ILC)~\cite{Baer:2013cma} aim to measure precisely the properties of the Higgs boson, the top quark and the vector bosons, both for uncovering the nature of the Higgs sector and for finding the effects of physics beyond the standard model. In order to match the high precision of experimental data in the near future, theoretical predictions including higher-order corrections are required. In this framework, detailed evaluations of one-loop multi-leg and higher-loop integrals at general scale and mass assignments are necessary. One-loop Feynman integrals in general space-time dimension play a crucial role for several reasons. Within the general framework for computing two-loop or higher-loop corrections, higher-order terms in the $\varepsilon$-expansion (with $\varepsilon =2-d/2$) of one-loop integrals are necessary as building blocks. Moreover, one-loop integrals in $d>4$ may be taken into account in the reduction of tensor one-loop \cite{Davydychev:1991va, Fleischer:2010sq}, two-loop and higher-loop integrals~\cite{IBP}. Many calculations of scalar one-loop functions in general dimension $d$ have been available~\cite{Boos:1990rg, Davydychev:1990cq,Davydychev:1997wa, Anastasiou:1999ui,Suzuki:2003jn,Abreu:2015zaa, Phan:2017xsj}. However, not all of these calculations cover the general $\varepsilon$-expansion at general scale and internal mass assignments. Furthermore, a recurrence relation in $d$ for Feynman loop integrals has been proposed \cite{Tarasov:1996br} and solved for scalar one-loop integrals, which have been expressed in terms of generalized hypergeometric series \cite{Fleischer:2003rm}. However, the general solutions for arbitrary kinematics have not been found, as pointed out in \cite{Bluemlein:2015sia}. More recently, scalar one-loop Feynman integrals as meromorphic functions in general space-time dimension, for arbitrary kinematics, have been presented in~\cite{Bluemlein:2017rbi,Phan:2018cnz}. In the present paper, new analytic formulas for one-loop three-point Feynman integrals in general space-time dimension are reported, following an alternative approach. The analytic results are expressed in terms of $_2F_1$, $_3F_2$ and Appell $F_1$ hypergeometric functions. The evaluations are performed for a general configuration of internal masses and external momenta. Last but not least, our results are cross-checked against other papers in several special cases where results have been available. The layout of the paper is as follows: In section $2$, we present in detail the method for evaluating scalar one-loop three-point functions. In this section, we first introduce the notation used in this work. We next consider the case of the one-loop triangle diagram with two light-like external momenta and then generalize the calculation to the general case. Finally, tensor one-loop three-point integrals are discussed. Conclusions and outlook are presented in section $3$. Several useful formulas applied in this calculation can be found in the appendices. \section{Analytic formulas} Detailed evaluations of one-loop three-point integrals are presented in this section. \subsection{Definitions} We introduce the notation used for the calculations in this subsection.
Feynman integrals of scalar one-loop three-point functions are defined as: \begin{eqnarray} \label{feynmanintegral} &&\hspace{-1.5cm} J_3(d; \{\nu_1,\nu_2,\nu_3\}) \equiv J_3(d; \{\nu_1,\nu_2,\nu_3\} ;p_1^2,p_2^2,p_3^2; m_1^2,m_2^2,m_3^2) =\\ &=& \int \frac{d^d k}{i\pi^{d/2}} \dfrac{1}{[(k+q_1)^2 -m_2^2 + i\rho]^{\nu_1} [(k+q_2)^2 -m_3^2 + i\rho]^{\nu_2} [(k+q_3)^2 -m_1^2 + i\rho]^{\nu_3}}. \nonumber \end{eqnarray} We hereafter refer to $J_3 \equiv J_3(d; \{1,1,1\})$. In this definition, the term $i\rho$ is Feynman's prescription and $d$ is the space-time dimension. The internal (loop) momentum is $k$ and the external momenta are $p_1$, $p_2$, $p_3$. They are taken to be inward, as described in Fig.~\ref{j3diagram}. We use the momenta $q_1 = p_1$, $q_2 =p_1+p_2$ and $q_3 = p_1+p_2+p_3 =0$, following momentum conservation. The internal masses are $m_1$, $m_2$ and $m_3$. $J_3$ is a function of $p_1^2$, $p_2^2$, $p_3^2$ and $m_1^2$, $m_2^2$, $m_3^2$. It is known that an algebraically compact expression and a numerically stable representation for Feynman diagrams can be obtained by using kinematic variables such as the determinants of the Cayley and Gram matrices~\cite{kajantie}. The expression also reflects the symmetry of the corresponding topologies. \begin{figure}[ht] \begin{center} \begin{pspicture}(-3, -3)(3.5, 3) \psset{linewidth=0.1pt} \psset{unit = 0.6} \psline(-2.5,-2)(0,2) \psline{->}(-2.5, -2)(-1.25, 0) \psline(0,2)(2.5,-2) \psline{->}(0, 2)(1.25, 0) \psline(-2.5,-2)(2.5,-2) \psline{->}(2.5, -2)(0, -2) \psline(-2.5,-2)(-5,-4) \psline{->}(-5, -4)(-3.8, -3) \psline(2.5,-2)(5,-4) \psline{->}(5,-4)(3.8,-3) \psline(0,2)(0,5) \psline{->}(0,5)(0,3) \rput(0.8, 3.5){$p_2^2$} \rput(-4.2,-2.5){$ p_1^2$} \rput(4.2, -2.5){$ p_3^2$} \rput(0, -2.8){$m_3^2$} \rput(-2.5,0){$m_1^2$} \rput(2.5, 0){$ m_2^2$} \psset{dotsize=3pt} \psdots(-2.5, -2)(2.5, -2)(0, 2) \end{pspicture} \end{center} \caption{\label{j3diagram}One-loop triangle diagrams.} \end{figure} We hence review the above-mentioned kinematic variables in the following paragraphs. The determinant of the Cayley matrix of one-loop triangle diagrams is given by \begin{eqnarray} \label{deltan} S_3 &=& \left| \begin{array}{ccc} 2m_1^2& -p_1^2 +m_1^2 +m_2^2 & -p_3^2 +m_1^2 +m_3^2 \\ -p_1^2 +m_1^2 +m_2^2 & 2m_2^2 & -p_2^2 +m_2^2 +m_3^2 \\ -p_3^2 +m_1^2 +m_3^2 & -p_2^2 +m_2^2 +m_3^2 & 2 m_3^2 \\ \end{array} \right|. \end{eqnarray} In the same manner, we define the Cayley determinants of the one-loop two-point Feynman diagrams which are obtained by shrinking a propagator in the three-point integrals. The determinants are written explicitly as: \begin{eqnarray} S_{12}&=& \left| \begin{array}{cc} 2m_1^2& -p_1^2 +m_1^2 +m_2^2 \\ -p_1^2 +m_1^2 +m_2^2 & 2m_2^2 \\ \end{array} \right| = -\lambda(p_1^2, m_1^2,m_2^2), \\ S_{13}&=& \left| \begin{array}{cc} 2m_1^2 & -p_3^2 +m_1^2 +m_3^2 \\ -p_3^2 +m_1^2 +m_3^2 & 2 m_3^2 \\ \end{array} \right| =-\lambda(p_3^2, m_1^2,m_3^2), \\ S_{23} &=&\left| \begin{array}{cc} 2m_2^2 & -p_2^2 +m_2^2 +m_3^2 \\ -p_2^2 +m_2^2 +m_3^2 & 2 m_3^2 \\ \end{array} \right| =-\lambda(p_2^2, m_2^2,m_3^2), \end{eqnarray} where $ \lambda(x, y,z) = x^2 +y^2 +z^2 -2 xy -2 xz -2 yz$ is the so-called K\"allen function. Next, the determinant of the Gram matrix of one-loop three-point functions is given by \begin{eqnarray} G_3 = -8 \left| \begin{array}{cc} p_1^2 & p_1p_2 \\ p_1p_2 & p_2^2 \end{array} \right| = 2 \lambda(p_1^2,p_3^2,p_2^2). \end{eqnarray} With the same definitions as above, we also obtain the Gram determinants of the two-point functions as \begin{eqnarray} G_{12} = - 4p_1^2, \quad G_{13} = - 4p_3^2, \quad G_{23} = -4p_2^2. \end{eqnarray}
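As a quick numerical cross-check of these definitions (for an arbitrary, made-up kinematic point; the variable names are ours), the identities $S_{12}=-\lambda(p_1^2,m_1^2,m_2^2)$ and $G_3 = 2\lambda(p_1^2,p_3^2,p_2^2)$ can be verified as follows.
\begin{verbatim}
# Minimal numerical check (arbitrary made-up kinematic point) of the Cayley
# and Gram determinants defined above.
import numpy as np

def kallen(x, y, z):
    return x**2 + y**2 + z**2 - 2*x*y - 2*x*z - 2*y*z

p1s, p2s, p3s = 1.0, 4.0, 2.5      # external invariants p_i^2 (made up)
m1s, m2s, m3s = 0.3, 0.7, 1.1      # internal masses squared (made up)

Y = lambda ps, ma, mb: -ps + ma + mb   # off-diagonal Cayley entries
S3 = np.linalg.det(np.array([
    [2*m1s,            Y(p1s, m1s, m2s), Y(p3s, m1s, m3s)],
    [Y(p1s, m1s, m2s), 2*m2s,            Y(p2s, m2s, m3s)],
    [Y(p3s, m1s, m3s), Y(p2s, m2s, m3s), 2*m3s]]))
print("S_3 =", S3)

S12 = np.linalg.det(np.array([[2*m1s, Y(p1s, m1s, m2s)],
                              [Y(p1s, m1s, m2s), 2*m2s]]))
print(np.isclose(S12, -kallen(p1s, m1s, m2s)))    # True

# Gram determinant, using p1.p2 = (p3^2 - p1^2 - p2^2)/2 from q_3 = 0
p1p2 = (p3s - p1s - p2s) / 2
G3 = -8 * (p1s * p2s - p1p2**2)
print(np.isclose(G3, 2 * kallen(p1s, p3s, p2s)))  # True
\end{verbatim}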
\end{eqnarray} With the same definitions, we also obtain the Gram determinants of the two-point functions: \begin{eqnarray} G_{12} = - 4p_1^2, \quad G_{13} = - 4p_3^2, \quad G_{23} = -4p_2^2. \end{eqnarray} In this work, the analytic formulas for scalar one-loop three-point integrals are expressed as functions whose arguments are ratios of the above kinematic determinants. Therefore, it is worth introducing the following variables \begin{eqnarray} M_{3} &=& -\dfrac{S_3}{G_3}, \quad \text{for} \quad G_3 \neq 0, \\ M_{ij} &=& -\dfrac{S_{ij}}{G_{ij}}, \quad \text{for} \quad G_{ij} \neq 0, \quad \text{with} \quad i,j =1,2,3. \end{eqnarray} Introducing Feynman parameters, we then integrate over the loop momentum. After carrying out one of the Feynman parameter integrations, the resulting integral reads \begin{eqnarray} \label{feynj3} \dfrac{J_3}{\Gamma\left (\frac{6-d}{2} \right)} &=& - \int \limits_0^1 dx\int \limits_0^{1-x} dy \dfrac{1}{(Ax^2 + By^2+ 2C xy+ Dx + Ey + F - i\rho)^{\frac{6-d}{2}} }. \end{eqnarray} The corresponding coefficients $A,B,C,\;\cdots,F$ are given by: \begin{eqnarray} A &=& p_1^2,\quad\quad\quad \quad \quad \quad \quad \quad D \;=\;-(p_1^2+m_1^2-m_2^2), \\ B &=& p_3^2, \quad \quad \quad \quad \quad \quad \quad \quad E \;=\; -(p_3^2+m_1^2-m_3^2),\\ C &=& -p_1p_3,\quad \quad \quad \quad \quad \quad\; F \;=\; m_1^2. \end{eqnarray} A detailed calculation of $J_3$ in (\ref{feynj3}) is presented in the next subsections. \subsection{Two light-like momenta} We first consider the simple case of two light-like external momenta. Without loss of generality, we can take $p_1^2 =0,~p_3^2 =0$. The Feynman parameter integral in Eq.~(\ref{feynj3}) is then cast into the simpler form \begin{eqnarray} \label{twolight-like} J_3 &=& -\Gamma\left (\frac{6-d}{2} \right) \int \limits_0^1 dx \int \limits_0^{1-x}dy \dfrac{1} {(2Cxy + Dx + Ey+ F - i\rho)^{\frac{6-d}{2}} }. \end{eqnarray} One finds that the denominator of the integrand in (\ref{twolight-like}) depends linearly on both $x$ and $y$; hence the integral can be evaluated easily. After performing the $y$-integration, the resulting integral reads \begin{eqnarray} \label{twolight-like1} \dfrac{J_3 } {\Gamma\left (\frac{4-d}{2} \right)} &=& \int \limits_0^1 dx \; \dfrac{\left[ (m_2^2-m_1^2)x + m_1^2 -i\rho \right]^{\frac{d-4}{2}} - \left[\; p_2^2x^2 -(p_2^2 +m_3^2 -m_2^2)x +m_3^2 -i\rho \right]^{\frac{d-4}{2} } } { p_2^2 x - m_3^2 +m_1^2 }. \nonumber\\ \end{eqnarray} Both integrands are singular at $x_1 = (m_3^2-m_1^2)/p_2^2$. However, it is verified that \begin{eqnarray} p_2^2x_1^2 -(p_2^2 +m_3^2 -m_2^2)x_1 + m_3^2 = (m_2^2-m_1^2) x_1 + m_1^2 =M_{3}. \end{eqnarray} This means that the residue contributions at this pole cancel between the two integrals in Eq.~(\ref{twolight-like1}). As a result, $J_3$ stays finite at this point. The first integral in Eq.~(\ref{twolight-like1}) can be formulated by means of Appell $F_1$ functions. For the second integral in Eq.~(\ref{twolight-like1}), we can apply the master-integral formula, Eq.~(\ref{K2}). 
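As a quick numerical sanity check of the kinematic determinants and of the identity above, one may evaluate $S_3$, $G_3$, $S_{23}$, $G_{23}$ and the pole position $x_1$ directly; the following Python sketch (not part of the derivation, with a purely hypothetical numerical point) illustrates this:
\begin{verbatim}
# Sketch: check the Cayley/Gram determinants and the identity at
# x1 = (m3^2 - m1^2)/p2^2 for two light-like momenta (hypothetical point).
import numpy as np

def kallen(x, y, z):
    return x**2 + y**2 + z**2 - 2*x*y - 2*x*z - 2*y*z

def cayley_S3(p1s, p2s, p3s, m1s, m2s, m3s):
    S = np.array([[2*m1s,            -p1s + m1s + m2s, -p3s + m1s + m3s],
                  [-p1s + m1s + m2s,  2*m2s,           -p2s + m2s + m3s],
                  [-p3s + m1s + m3s, -p2s + m2s + m3s,  2*m3s]])
    return np.linalg.det(S)

p1s, p3s = 0.0, 0.0                      # two light-like external momenta
p2s, m1s, m2s, m3s = 5.0, 1.0, 2.0, 3.0  # arbitrary test values

S3, G3   = cayley_S3(p1s, p2s, p3s, m1s, m2s, m3s), 2*kallen(p1s, p3s, p2s)
S23, G23 = -kallen(p2s, m2s, m3s), -4*p2s
M3, M23  = -S3/G3, -S23/G23

x1  = (m3s - m1s)/p2s
lhs = p2s*x1**2 - (p2s + m3s - m2s)*x1 + m3s
rhs = (m2s - m1s)*x1 + m1s
print(M3, M23, lhs, rhs)   # lhs = rhs = M3 up to rounding
\end{verbatim}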
The result for $J_3$ then reads \begin{eqnarray} \label{twolightlike1} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2}\right)} &=& -\dfrac{ (m_1^2)^{\frac{d-4}{2}} }{m_3^2-m_1^2} \; F_1 \left( 1; 1,\frac{4-d}{2}; 2; \frac{p_2^2}{m_3^2-m_1^2 }, \frac{m_1^2-m_2^2 +i\rho}{m_1^2} \right) \\ &&\hspace{-1.8cm} + \left(\dfrac{\partial_1 S_3}{p_2^2\;G_{23}} \right) \Bigg[ \left(\dfrac{\partial_2 S_{23}}{G_{23}} \right) \dfrac{(m_3^2)^{\frac{d-4}{2} } }{M_{3} -m_3^2} \; F_1\left(1; 1,\frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{23} -m_3^2}{ M_{3} -m_3^2}, 1-\frac{M_{23}}{m_3^2} \right)\nonumber\\ &&\hspace{0cm} + \left( \dfrac{\partial_3 S_{23}}{G_{23} } \right) \dfrac{(m_2^2)^{\frac{d-4}{2} } }{M_{3} -m_2^2} \; F_1\left(1; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{23} -m_2^2}{ M_{3} -m_2^2}, 1-\frac{M_{23}}{m_2^2} \right) \Bigg] \nonumber \\ &&\hspace{-1.8cm} + \Bigg[ \dfrac{M_{23} - m_3^2}{2p_2^2(M_{3} -m_3^2)} \left(m^2_{3} \right)^{\frac{d-4}{2} } \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{23} -m_3^2}{ M_{3} -m_3^2}, 1-\frac{M_{23}}{m_3^2} \right) \nonumber\\ &&\hspace{-1.3cm} - \dfrac{M_{23} - m_2^2}{2p_2^2(M_{3} -m_2^2)} \left(m^2_{2} \right)^{\frac{d-4}{2}} \; F_1\left(1; 1,\frac{4-d}{2}; 2; \dfrac{M_{23} -m_2^2}{ M_{3} -m_2^2}, 1-\frac{M_{23}}{m_2^2} \right) \Bigg]. \nonumber \end{eqnarray} Another representation for $J_3$ can be obtained by using Eq.~(\ref{K1}) \begin{eqnarray} \label{twolightlike2} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2}\right)} &=& -\dfrac{ (m_1^2)^{\frac{d-4}{2}} }{m_3^2-m_1^2} \; F_1 \left( 1; 1, \frac{4-d}{2}; 2; \frac{p_2^2}{m_3^2-m_1^2 }, \frac{m_1^2-m_2^2 +i\rho}{m_1^2} \right) \\ &&\hspace{-1.8cm} + \left( \dfrac{\partial_1 S_3}{p_2^2\;G_{23}} \right) \Bigg[ \left( \dfrac{\partial_2 S_{23}}{G_{23}} \right) \dfrac{(M_{23} -i\rho)^{\frac{d-4}{2}} }{M_{3} -M_{23}} \; F_1\left(\frac{1}{2}; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{23} -m_3^2}{ M_{23} -M_{3} }, 1-\frac{m_3^2}{M_{23}} \right)\nonumber\\ &&\hspace{0.1cm} + \left( \dfrac{\partial_3 S_{23}}{G_{23} } \right) \dfrac{(M_{23}-i\rho)^{\frac{d-4}{2}} }{M_{3} -M_{23}} \; F_1\left(1; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{23} -m_2^2}{ M_{23} -M_{3}}, 1-\frac{m_2^2}{M_{23}} \right) \Bigg] \nonumber \\ &&\hspace{-1.8cm} + \Bigg[ \dfrac{ \left(M_{23} - i\rho\right)^{\frac{d-4}{2}} } {2(M_{3} -M_{23} )} \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{23} -m_3^2} { M_{3} -M_{23} }, 1-\frac{m_3^2}{M_{23}} \right) \nonumber\\ &&\hspace{0cm}- \dfrac{\left(M_{23}-i\rho \right)^{\frac{d}{2} -2} } {2p_2^2(M_{3} -M_{23} )} \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{23} -m_2^2} {M_{3} -M_{23} }, 1-\frac{m_2^2}{M_{23}} \right) \Bigg]. \nonumber \end{eqnarray} The results are shown in Eqs.~(\ref{twolightlike1}, \ref{twolightlike2}) are new hypergeometric representations for scalar one-loop three-point functions in general space-time dimension. In the next paragraphs, we consider $J_3$ in several special cases. \begin{enumerate} \item $\underline{m_1^2 = m_2^2 = m_3^2=0:}$ \\ One-loop triangle diagrams with all massless internal lines are considered. In this case, the integral $J_3$ in (\ref{twolight-like}) becomes \begin{eqnarray} J_3 &=& -\Gamma\left(\frac{6-d}{2} \right) \int \limits_0^1 dx \int \limits_0^{1-x} dy \dfrac{1}{\left(-p_2^2\; x\; y -i\rho \right)^{\frac{6-d}{2} }} \\ &=& \dfrac{\Gamma\left(\frac{4-d}{2} \right) \Gamma\left(\frac{d-4}{2}\right) \Gamma\left(\frac{d-2}{2}\right) } {\Gamma(d-3)} \left(-p_2^2 -i\rho \right)^{\frac{d-6}{2}}. 
\label{case11} \end{eqnarray} It is important to note that we use $-p_2^2 x y -i\rho \equiv (-p_2^2-i\rho ) x y $. Therefore, for analytic continuation the result when $p_2^2>0$, we should keep $i\rho$-term together with external momentum like $p_2^2 \rightarrow p_2^2 +i\rho$. The result in (\ref{case11}) shows full agreement with Eq.~$(4.5)$ in \cite{Ellis:2007qk}. \\ \item $\underline{m_2^2 = m_3^2 =0, m_1^2=m^2:}$ \\ If internal mass configuration takes $m_2^2 = m_3^2 =0, m_1^2=m^2$, $J_3$ becomes \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} &=& \int \limits_0^1 dx \; \dfrac{[m^2(1-x)]^{ \frac{d-4}{2}}}{p_2^2 x + m^2} -\int \limits_0^1 dx \; \dfrac{[p^2_2x^2 - p_2^2 x -i\rho] ^{ \frac{d-4}{2} }}{p_2^2 x + m^2} \\ &&\hspace{-2.2cm} = \dfrac{\Gamma\left(\frac{d-2}{2}\right)} {\Gamma\left( \frac{d}{2} \right)} (m^2)^{ \frac{d-6}{2}} \Fh21\Fz{1, 1}{\frac{d}{2} }{\dfrac{-p_2^2}{m^2}} -\dfrac{\Gamma\left(\frac{d-2}{2}\right)^2 } {\Gamma(d-2)} \dfrac{(-p_2^2 -i\rho)^{\frac{d-4}{2}} }{m^2} \Fh21\Fz{1, \frac{d-2}{2}}{d-2}{\dfrac{-p_2^2}{m^2}}. \nonumber \\ \end{eqnarray} This result is in agreement with Eq.~($B2$) in Ref.~\cite{Abreu:2015zaa}. In the limit of $p_2^2 \rightarrow -m^2<0$, we arrive at \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} &=& -\left\{ \dfrac{\Gamma\left( \frac{d-2}{2}\right) \Gamma\left(\frac{d-4}{2} \right) } {\Gamma\left(d-3\right) } -\dfrac{\Gamma\left(\frac{d-4}{2}\right) } {\Gamma\left( \frac{d-2}{2}\right) } \right\}(m^2)^{ \frac{d-6}{2}}. \end{eqnarray} When $\mathcal{R}$e$\left(d-6\right)>0$ and $m^2 \rightarrow 0$, $J_3 \rightarrow 0$.\\ \item $\underline{m_1^2 = m_3^2:}$\\% We are also interested in the case of $m_1^2 = m_3^2 $. In such case, one should exchange the order of integration in Eq.~(\ref{twolight-like}). The $x$-integration can be taken first. Subsequently, we arrive at \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} &=& -\int \limits_0^1 dy \; \dfrac{[p^2_2y^2 - (p_2^2+m_2^2-m_1^2 )y +m_2^2 -i\rho]^{ \frac{d-4}{2}} -(m_1^2)^{ \frac{d-4}{2}} } {p_2^2 y -m_2^2 + m_1^2}. \end{eqnarray} It is easy to find out that \begin{eqnarray} \left(p^2_2y^2 - (p_2^2+m_2^2-m_1^2 )y +m_2^2 \right)|_{y=(m_2^2 -m_1^2)/p_2^2} = m_1^2. \end{eqnarray} As previous explanation, the integral $J_3$ in this case also stays finite at $y=(m_2^2 -m_1^2)/p_2^2$. 
Using the analytical solution for the master integral, Eq.~(\ref{K2}) in appendix $A$, the result reads \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right) } &=& -\dfrac{(m_1^2)^{ \frac{d-4}{2} }}{m_2^2-m_1^2} \Fh21\Fz{1, 1}{2}{-\dfrac{p_2^2}{m_2^2-m_1^2}} \\ &&\hspace{-1.8cm} + \left( \dfrac{\partial_3 S_3}{p_2^2\; G_{23}} \right) \Bigg[ \left( \dfrac{\partial_3 S_{23}}{G_{23}} \right) \dfrac{(m_2^2)^{\frac{d-4}{2}} }{M_{3} -m_2^2} \; F_1\left(1; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{23} -m_2^2}{ M_{3} -m_2^2}, 1-\frac{M_{23}}{m_2^2} \right)\nonumber\\ &&\hspace{0cm} + \left(\dfrac{\partial_2 S_{23}}{G_{23} } \right) \dfrac{(m_1^2)^{\frac{d-4}{2}} }{M_{3} -m_1^2} \; F_1\left(1; 1,\frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{23} -m_1^2}{ M_{3} -m_1^2}, 1-\frac{M_{23}}{m_2^2} \right) \Bigg] \nonumber \\ &&\hspace{-1.8cm} + \Bigg[ \dfrac{M_{23} - m_2^2}{2p_2^2(M_{3} -m_2^2)} \left(m^2_{2} \right)^{\frac{d-4}{2}} \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{23} -m_2^2}{ M_{3} -m_2^2}, 1-\frac{M_{23}}{m_2^2} \right) \nonumber\\ &&\hspace{-0.5cm} -\dfrac{M_{23} - m_1^2}{2p_2^2(M_{3} -m_1^2)} \left(m^2_{1} \right)^{\frac{d-4}{2} } \; F_1\left(1; 1,\frac{4-d}{2}; 2; \dfrac{M_{23} -m_1^2}{ M_{3} -m_1^2}, 1-\frac{M_{23}}{m_1^2} \right) \Bigg], \nonumber \end{eqnarray} provided that $\left|\frac{M_{23} -m_{1,2}^2} { M_{3} -m_{1,2}^2} \right|<1$, $\left| 1-\frac{M_{23}}{m_{1,2}^2} \right|<1$.\\ \item $\underline{m_1^2 = m_2^2=0, m_3^2 =m^2, p_2^2 =q^2:}$\\ This case has been calculated in~\cite{Abreu:2015zaa}. In this particular configuration, $J_3$ becomes \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=& - \int \limits_0^1 dx \; \dfrac{\left[ q^2x^2 -(q^2 +m^2)x +m^2 -i\rho \right]^{\frac{d-4}{2}}} {q^2 x - m^2 }. \end{eqnarray} It is confirmed that \begin{eqnarray} q^2x^2 - (q^2 +m^2)x + m^2|_{x= m^2/q^2}=0. \end{eqnarray} Owing to this fact, $J_3$ stays finite at $x= m^2/q^2$. By shifting $x \rightarrow 1-x$, the resulting integral reads \begin{eqnarray} \dfrac{J_3} {\Gamma\left (\frac{4-d}{2} \right)} &=& \left(m^2-q^2-i\rho\right)^{\frac{d-6}{2} } \int \limits_0^1 dx \; x^{\frac{d-4}{2} } \left(1-\dfrac{q^2}{q^2 -m^2}x\right)^{\frac{d-4}{2}}. \end{eqnarray} Following Eq.~(\ref{gauss-int}) in appendix $B$, we express this integral in terms of Gauss hypergeometric functions \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=&\dfrac{\Gamma\left(\frac{d-2}{2}\right) } {\Gamma\left(\frac{d}{2}\right)} \left(m^2-q^2-i\rho \right)^{\frac{d-6}{2}}\; \Fh21\Fz{\frac{6-d}{2},\frac{d-2}{2} } {\frac{d}{2}} {\dfrac{q^2}{q^2 - m^2}} \nonumber\\ &=&\dfrac{\Gamma\left(\frac{d-2}{2}\right) } {\Gamma\left(\frac{d}{2}\right)} \left(m^2\right)^{\frac{d-6}{2}}\; \Fh21\Fz{1,\frac{6-d}{2}}{ \frac{d}{2}} {\dfrac{q^2}{ m^2}}, \end{eqnarray} provided that $\left|\frac{q^2}{m^2} \right|<1$ and $\mathcal{R}$e$\left(d-2\right) >0$. This is in full agreement with Eq.~($B7$) in Ref.~\cite{Abreu:2015zaa}. In the limit of $q^2 \rightarrow m^2$, one arrives at \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right) } & = &\dfrac{\Gamma\left(d-4\right) } { \Gamma\left(d-3\right)}\; \left( m^2 \right)^{\frac{d-6}{2}}, \end{eqnarray} provided that $\mathcal{R}$e$\left(d-4\right) >0$. In addition, if $\mathcal{R}$e$\left(d-6\right)>0$ and $m^2 \rightarrow 0$, then $J_3 \rightarrow 0$.\\ \item $\underline{m_1^2 = m_2^2 = m_3^2 =m^2:}$ \\ This case has been considered in Ref.~\cite{Davydychev:2003mv}. One confirms that $M_{3} = m^2$ in this kinematic configuration. 
We derive the analytic formula for $J_3$ as follows \begin{eqnarray} \label{rijkM} \dfrac{J_3}{\Gamma\left (\frac{6-d}{2} \right)} = -\int \limits_0^1 dx\; \int \limits_0^{1-x} dy\; \dfrac{1}{( p_2^2 \;x\;y+m^2 - i\rho)^{\frac{6-d}{2}} }. \end{eqnarray} Using the Mellin-Barnes relation, one has \begin{eqnarray} J_3 = -\frac{1}{2\pi i} \int\limits_{-i \infty}^{i\infty} ds \; \Gamma\left(-s\right) \Gamma\left(\frac{6-d+2s}{2}\right) \int\limits_0^1 dx\; \int \limits_0^{1-x} dy\; \dfrac{[ (p_2^2 -i\rho) \;x\;y]^s }{ (m^2)^{3-\frac{d}{2} +s} }. \end{eqnarray} After performing the $x,y$-integrations, the contour integral takes the form \begin{eqnarray} J_3 &=& - \frac{\sqrt{\pi}}{2\pi i} ~(m^2)^{\frac{d-6}{2}} \int\limits_{-i \infty}^{i\infty} ds \; \dfrac{ \Gamma\left(-s\right) \Gamma\left(\frac{6-d+2s}{2}\right) \Gamma(s+1)^2}{4\;\Gamma(s+2) \Gamma\left(s+\frac{3}{2} \right)} \left(- \dfrac{p_2^2 -i\rho}{4m^2}\right)^s. \nonumber\\ \end{eqnarray} By closing the integration contour to the right of the imaginary axis in the complex $s$-plane, we take into account the residues of the sequence of poles of $\Gamma(-s)$. The result is expressed in terms of a generalized hypergeometric function \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{6-d}{2} \right)} &=&-\dfrac{(m^2)^{\frac{d-6}{2} }}{2}\; \Fh32\Fz{1,1,\frac{6-d}{2} }{ 2, \frac{3}{2}}{ \dfrac{p_2^2 -i\rho}{4m^2}}, \end{eqnarray} provided that $\left|\frac{p_2^2 -i\rho}{4m^2} \right|<1$. This result is in agreement with Ref.~\cite{Davydychev:2003mv}. \end{enumerate} \subsection{One light-like momentum} We now proceed to the case of one light-like momentum. Without loss of generality, we can choose $p_3^2=0$. Let us use $q^2 = -2p_1p_3$. Applying the same procedure as before, one obtains the Feynman parameter integral \begin{eqnarray} \label{onelightlike} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right) } &=& \int \limits_0^1 dx\; \dfrac{\left[p_2^2x^2 -(p_2^2 +m_3^2 -m_2^2)x +m_3^2 -i\rho \right]^{\frac{d-4}{2} }} { q^2 x + m_3^2 -m_1^2 } \\ &&- \int \limits_0^1 dx\; \dfrac{\left[p_1^2x^2 -(p_1^2 +m_1^2 -m_2^2)x + m_1^2-i\rho \right]^{\frac{d-4}{2}} } { q^2 x + m_3^2 -m_1^2 }. \nonumber \end{eqnarray} We find that the two integrands have the same pole at $x= (m_1^2-m_3^2)/q^2$. It is easy to verify that the residue contributions from this pole cancel out. As a result, $J_3$ stays finite at this point. The analytic result for $J_3$ can be presented in a compact form: \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} = J_{123} +J_{231}, \end{eqnarray} where the terms $J_{123}$, $J_{231}$ are obtained by using Eq.~(\ref{K2}). 
These terms are written in terms of $F_1$ as follows \begin{eqnarray} \label{onelightlike1} J_{123} &=& -\dfrac{\partial_3 S_3}{4(p_2^2-p_1^2)^2} \left(\dfrac{\partial_2S_{12}}{G_{12}} \right) \dfrac{(m_1^2)^{\frac{d-4}{2} }}{M_{3} -m_1^2} \; F_1\left(1; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{12} -m_1^2}{ M_{3} -m_1^2}, 1-\frac{M_{12}}{m_1^2} \right) \nonumber\\ &&-\dfrac{\partial_3 S_3}{4(p_2^2-p_1^2)^2} \left(\dfrac{\partial_1S_{12}}{G_{12}} \right) \dfrac{(m_2^2)^{\frac{d-4}{2} } }{M_{3} -m_2^2} \; F_1\left(1; 1,\frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{12} -m_2^2}{ M_{3} -m_2^2}, 1-\frac{M_{12}}{m_2^2} \right) \nonumber\\ && +\left(\dfrac{M_{12} -m_1^2 }{2(M_{3} -m_1^2)} \right) \dfrac{(m_1^2)^{\frac{d-4}{2}} }{p_1^2-p_2^2} \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{12} -m_1^2}{ M_{3} -m_1^2}, 1-\frac{M_{12}}{m_1^2} \right) \nonumber\\ &&-\left(\dfrac{M_{12} -m_2^2 }{2(M_{3} -m_2^2) }\right) \dfrac{(m_2^2)^{\frac{d-4}{2} }}{p_1^2-p_2^2} \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{12} -m_2^2}{ M_{3} -m_2^2}, 1-\frac{M_{12}}{m_2^2} \right), \end{eqnarray} and \begin{eqnarray} \label{onelightlike12} J_{231} &=& -\dfrac{\partial_1 S_3}{4(p_2^2-p_1^2)^2} \left(\dfrac{\partial_2S_{23}}{G_{23}} \right) \dfrac{(m_3^2)^{\frac{d-4}{2} } }{M_{3} -m_3^2} \; F_1\left(1; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{23} -m_3^2}{ M_{3} -m_3^2}, 1-\frac{M_{23}}{m_3^2} \right) \nonumber\\ &&-\dfrac{\partial_1 S_3}{4(p_2^2-p_1^2)^2} \left(\dfrac{\partial_3S_{23}}{G_{23}} \right) \dfrac{(m_2^2)^{\frac{d-4}{2} } }{M_{3} -m_2^2} \; F_1\left(1; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{23} -m_2^2}{ M_{3} -m_2^2}, 1-\frac{M_{23}}{m_2^2} \right) \nonumber\\ && +\left(\dfrac{M_{23} -m_3^2 }{2(M_{3} -m_3^2)} \right) \dfrac{(m_3^2)^{\frac{d-4}{2} }}{p_2^2-p_1^2} \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{23} -m_3^2}{ M_{3} -m_3^2}, 1-\frac{M_{23}}{m_3^2} \right) \nonumber\\ &&-\left(\dfrac{M_{23} -m_2^2 }{2(M_{3} -m_2^2)}\right) \dfrac{(m_2^2)^{\frac{d-4}{2} }}{ p_2^2-p_1^2} \; F_1\left(1; 1,\frac{4-d}{2}; 2; \dfrac{M_{23} -m_2^2}{ M_{3} -m_2^2}, 1-\frac{M_{23}}{m_2^2} \right). \end{eqnarray} It is important to note that the result for $J_3$ in (\ref{onelightlike1}, \ref{onelightlike12}) are only valid if the absolute value of the arguments of {the Appell $F_1$ functions} in this presentation are less than $1$. If these kinematic variables do not meet this condition. One has to perform analytic {continuations} for Appell $F_1$ functions~\cite{olsson}. We can get another representation for $J_{123}, J_{231}$ by using a transformation for $F_1$ (seen appendix $B$). 
Taking $J_{123}$ as an example, one has \begin{eqnarray} \label{onelightlike2} J_{123}&=& \dfrac{\partial_3 S_3}{4(p_2^2-p_1^2)^2} \left(\dfrac{\partial_2S_{12}}{G_{12}} \right) \dfrac{(M_{12}-i\rho)^{\frac{d-4}{2} } }{M_{12}-M_{3}} \; F_1\left(\frac{1}{2}; 1,\frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{12} -m_1^2}{ M_{12} -M_{3}}, 1-\frac{m_1^2}{M_{12}} \right) \nonumber\\ &&\hspace{-0.5cm} +\dfrac{\partial_3 S_3}{4(p_2^2-p_1^2)^2} \left(\dfrac{\partial_1 S_{12}}{G_{12}} \right) \dfrac{(M_{12} -i\rho )^{\frac{d-4}{2} } }{M_{12} -M_{3}} \; F_1\left(\frac{1}{2}; 1,\frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{12} -m_2^2}{ M_{12} -M_{3} }, 1-\frac{m_2^2}{M_{12}} \right) \nonumber\\ && \hspace{-0.5cm} + \left(\dfrac{M_{12} -m_1^2 }{2(M_{3} -M_{12} )}\right) \dfrac{(M_{12} -i\rho)^{\frac{d-4}{2} }}{p_2^2-p_1^2} \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{12} -m_1^2}{ M_{12} -M_{3} }, 1-\frac{m_1^2}{M_{12}} \right) \nonumber\\ &&\hspace{-0.5cm} -\left(\dfrac{M_{12} -m_2^2 }{2(M_{3} -M_{12}) }\right) \dfrac{(M_{12} -i\rho)^{\frac{d-4}{2} }}{p_2^2-p_1^2} \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{12} -m_2^2}{ M_{12} -M_{3}}, 1-\frac{m_2^2}{M_{12}} \right). \end{eqnarray} From Eqs.~(\ref{onelightlike1}, \ref{onelightlike2}), we can analytically continue the result to the limits $M_{ij}= 0, m_i^2, m_j^2$ (for $i,j=1,2,3$) and $M_{3} =0$, etc. This can be worked out by applying the transformations for Appell $F_1$ functions, see appendix $B$. The results in Eqs.~(\ref{onelightlike1}, \ref{onelightlike2}) are new hypergeometric representations for scalar one-loop three-point functions for this case in general $d$. Several special cases for $J_3$ are considered in the next paragraphs. \begin{enumerate} \item $\underline{m_1^2 = m_2^2 =m_3^2=0:}$\\ We first consider the case of all massless internal lines. In this case, Eq.~(\ref{onelightlike}) becomes \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} &=& -\int \limits_0^1 dx\; \dfrac{\left[p_2^2x^2 - p_2^2x -i\rho \right]^{\frac{d-4}{2} } } { (p_2^2-p_1^2)x } + \int \limits_0^1 dx\; \dfrac{\left[p_1^2x^2 -p_1^2x -i\rho \right]^{\frac{d-4}{2} } } { (p_2^2-p_1^2) x} \\ &=&\sum\limits_{k=1}^2 \dfrac{(-1)^k} {p_1^2-p_2^2} \int \limits_0^1 dx\; (-p_k^2 +i\rho)^{\frac{d-4}{2} } x^{\frac{d-6}{2}} (1-x)^{\frac{d-4}{2}} \\ &=&\dfrac{\Gamma\left (\frac{d-4}{2}\right) \Gamma\left (\frac{d-2}{2}\right)} { \Gamma(d-3)}\sum\limits_{k=1}^2 (-1)^k\;\dfrac{ (-p_k^2 +i\rho)^{\frac{d-4}{2}}}{ p_1^2-p_2^2}. \end{eqnarray} We have already used $p_2^2x^2 -p_2^2x -i\rho = (-p_2^2 -i\rho) x(1-x) $. This result coincides with Eq.~($4.6$) in Ref.~\cite{Ellis:2007qk}. In the limit of $p_2^2 \rightarrow p_1^2$, one arrives at \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} &=& \dfrac{ \Gamma^2\left (\frac{d-4}{2}\right)}{\Gamma(d-3)} \; (-p_2^2 -i\rho)^{ \frac{d-6}{2}}. \end{eqnarray} If $\mathcal{R}$e$\left(d-6\right)>0$ and $p_2^2 \rightarrow 0$, the integral $J_3 \rightarrow 0 $. \\ \item $\underline{m_1^2 =m_2^2 =0, m_3^2 = m^2:}$\\ We now consider the case in which the internal masses are $m_1^2 =m_2^2 =0, m_3^2 = m^2$. Noting that $q^2 = 2p_1p_3$, Eq.~(\ref{onelightlike}) now takes the form \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} &=& - \int \limits_0^1 dx\; \dfrac{\left[p_2^2x^2 -(p_2^2 +m^2 )x + m^2 -i\rho \right]^{\frac{d-4}{2} } -\left[p_1^2x^2 -p_1^2 x -i\rho \right]^{\frac{d-4}{2}} } {q^2x - m^2 }. \nonumber\\ \end{eqnarray} We change variables as $x \rightarrow 1-x$ in the first integral. 
Subsequently, it is presented in terms of Appell $F_1$ functions. While the second integral is formulated by mean of Gauss hypergeometric functions. The result is shown in concrete as follows \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=& -\dfrac{\Gamma\left (\frac{d-2}{2} \right)^2 } {\Gamma\left (d-2\right)} \dfrac{ (-p_1^2 -i\rho)^{ \frac{d-4}{2}}}{m^2} \Fh21\Fz{1, \frac{d-2}{2}}{d-2}{ \dfrac{q^2}{m^2} } \\ && \hspace{-1.8cm} + \dfrac{ \Gamma\left (\frac{d-2}{2} \right)} {\Gamma\left (\frac{d}{2} \right)} \dfrac{(m^2-p_2^2 -i\rho)^{\frac{d-4}{2} } } {q^2-m^2} F_1\left(\frac{d-2}{2}; 1,\frac{4-d}{2}; \frac{d}{2}; \dfrac{q^2}{q^2-m^2}; \dfrac{p_2^2}{p_2^2 -m^2+i\rho} \right) \nonumber \end{eqnarray} provided that $\left|q^2/m^2 \right|<1$ and $\mathcal{R}$e$\left(d-2\right)>0$. One finds another representation for $J_3$ by applying Eq.~(\ref{f1relation1}) in appendix $B$ \begin{eqnarray} \label{j300m} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=& -\dfrac{\Gamma\left(\frac{d-2}{2} \right)^2 } {\Gamma\left (d-2\right)} \dfrac{ (-p_1^2 -i\rho)^{ \frac{d-4}{2}}}{m^2} \Fh21\Fz{1, \frac{d-2}{2}}{d-2}{ \dfrac{q^2}{m^2} } \\ && - \dfrac{ \Gamma\left (\frac{d-2}{2} \right)} {\Gamma\left (\frac{d}{2} \right)} (m^2)^{\frac{d-6}{2} } \; F_1\left(1; 1,\frac{4-d}{2}; \frac{d}{2}; \dfrac{q^2}{m^2}; \dfrac{p_2^2}{m^2-i\rho} \right). \nonumber \end{eqnarray} This representation gives agreement result with Eq.~($C6$) in \cite{Abreu:2015zaa}. In the limit of $q^2 \rightarrow m^2$, one gets \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=&-\dfrac{\Gamma\left(\frac{d-4}{2} \right)} {\Gamma\left (\frac{d-2}{2} \right)} (m^2)^{\frac{d-6}{2} } \; \Fh21\Fz{1,\frac{4-d}{2} }{ \frac{d-2}{2}} { \dfrac{p_2^2}{m^2-i\rho} } \nonumber\\ && -\dfrac{\Gamma\left (\frac{4-d}{2} \right) \Gamma\left (\frac{d-2}{2} \right)^2 \Gamma\left (\frac{d}{2} \right)^2 } {\Gamma\left (d-3\right)} \dfrac{ (-p_1^2 -i\rho)^{\frac{d-4}{2}}}{m^2}. \end{eqnarray} When $p_2^2 \rightarrow m^2$, the result in (\ref{j300m}) reads \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=&-\dfrac{\Gamma\left (d-3\right)} {\Gamma\left (d-2 \right)} (m^2)^{\frac{d-6}{2} } \;\Fh21\Fz{1, 1}{d-2}{ \dfrac{q^2}{m^2} } \\ &&-\dfrac{ \Gamma\left (\frac{d-2}{2} \right)^2 } {\Gamma\left (d-2\right)} \dfrac{ (-p_1^2 -i\rho)^{ \frac{d-4}{2}} }{m^2} \Fh21\Fz{1, \frac{d-2}{2}}{d-2} { \dfrac{q^2}{m^2} }. \nonumber \end{eqnarray} \item $\underline{m_1^2 = m_3^2 =0, m_2^2 =m^2:}$\\ This case has been performed in Ref~\cite{Fleischer:2003rm}. 
The Feynman parameter integral for $J_3$ in this case reads \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right) } &=&\int \limits_0^1 dx\; \dfrac{\left[p_2^2x^2 -(p_2^2 -m^2)x -i\rho \right]^{\frac{d-4}{2}} - \left[p_1^2x^2 -(p_1^2 -m^2)x -i\rho \right] ^{\frac{d-4}{2}} }{ -2p_1p_3 \; x }\nonumber\\ && \\ &=&\sum\limits_{k=1}^2 (-1)^k \dfrac{(m^2 - p_k^2 -i\rho) ^{\frac{d-4}{2} }}{p_1^2 -p_2^2}\; \int \limits_0^1 dx\; x^{\frac{d-6}{2}} \left[1- \dfrac{p_k^2}{p_k^2 -m^2 +i\rho} \;x \right]^{ \frac{d-4}{2}} \nonumber\\ && \\ &=& \dfrac{ \Gamma\left(\frac{d-4}{2}\right) } { \Gamma\left( \frac{d-2}{2}\right)} \; \sum\limits_{k=1}^2 (-1)^k \frac{(m^2 - p_k^2 -i\rho)^{\frac{d-4}{2}}}{p_1^2 -p_2^2}\; \Fh21\Fz{\frac{4-d}{2}; \frac{d-4}{2} }{\frac{d-2}{2}} {\dfrac{p_k^2}{p_k^2 -m^2 +i\rho} } \nonumber\\ && \\ &=&\dfrac{\Gamma\left(\frac{d-4}{2}\right) } { \Gamma\left( \frac{d-2}{2}\right)} \; \sum\limits_{k=1}^2 (-1)^k \frac{(m^2)^{\frac{d-4}{2}}}{p_1^2 -p_2^2}\; \Fh21\Fz{1;\frac{4-d}{2}} {\frac{d-2}{2}}{\dfrac{p_k^2}{m^2-i\rho} }. \end{eqnarray} This gives a perfect agreement with Ref~\cite{Fleischer:2003rm}. In the limit of $p_2^2\rightarrow p_1^2$, one first uses \begin{eqnarray} \dfrac{d}{dp_2^2} \left\{\Fh21\Fz{1;\frac{4-d}{2} } {\frac{d-2}{2}}{\dfrac{p_2^2}{m^2-i\rho} } \right\} = -\dfrac{1}{m^2}\dfrac{d-4}{d-2} \;\Fh21\Fz{2;\frac{6-d}{2} }{\frac{d}{2} } { \dfrac{p_2^2}{m^2-i\rho} }. \end{eqnarray} The result reads \begin{eqnarray} J_3 &=& 2 \dfrac{\Gamma\left(\frac{4-d}{2} \right) } {2-d} \; (m^2)^{\frac{d-6}{2} } \Fh21\Fz{2;\frac{6-d}{2} }{\frac{d}{2}} {\dfrac{p_2^2}{m^2-i\rho} }. \end{eqnarray} \item $\underline{m_2^2 = m_3^3 =0, m_1^2=m^2:}$ \\ Let us note that $q^2 = 2p_1p_3$, the integral $J_3$ in (\ref{onelightlike}) is written by \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} &=& \int \limits_0^1 dx\; \Bigg\{ \dfrac{\left[p_2^2x^2 - p_2^2 x -i\rho \right]^{\frac{d-4}{2}}} {-q^2 x -m^2 } - \dfrac{\left[p_1^2x^2 -(p_1^2 -m^2)x -i\rho \right]^{\frac{d-4}{2} }} { q^2\; x -q^2 -m^2 } \Bigg\}. \nonumber\\ \end{eqnarray} Here we have already performed a shift $x \rightarrow 1-x $ for the second integral. It is then expressed in terms of Appell $F_1$ functions. While the first integral is presented in terms of Gauss hypergeometric functions. Combining all these terms, analytic result for $J_3$ reads \begin{eqnarray} \dfrac{J_3}{\Gamma\left(\frac{4-d}{2} \right)} &=& \dfrac{\Gamma\left (\frac{d-2}{2} \right) } {\Gamma\left (\frac{d}{2} \right)} \dfrac{(m^2-p_1^2 -i\rho)^{\frac{d-4}{2} }} {q^2+m^2} \times\\ && \hspace{1cm}\times F_1\left(\dfrac{d-2}{2}; 1,\frac{4-d}{2}; \frac{d}{2}; \dfrac{q^2}{q^2 + m^2}; \dfrac{p_1^2}{p_1^2 -m^2+i\rho} \right) \nonumber\\ &&- \dfrac{\Gamma\left(\frac{d-2}{2} \right)^2 } {\Gamma\left(d-2\right)} \dfrac{(-p_2^2 -i\rho)^{ \frac{d-4}{2} } }{m^2} \Fh21\Fz{1, \frac{d-2}{2} }{d-2}{ -\dfrac{q^2}{m^2} }. \nonumber \end{eqnarray} Another representation for $J_3$ is derived by using Eq.~(\ref{f1relation1}) in appendix $B$. It is \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=&\dfrac{\Gamma\left (\frac{d-2}{2} \right) } {\Gamma\left(\frac{d}{2} \right)}\; (m^2)^{\frac{d-6}{2} } F_1\left(1; 1, \frac{4-d}{2}; \frac{d}{2}; -\dfrac{q^2}{m^2}; \dfrac{p_1^2}{m^2-i\rho} \right) \nonumber\\ &&- \dfrac{\Gamma\left (\frac{d-2}{2} \right)^2 } {\Gamma\left (d-2\right)} \dfrac{ (-p_1^2 -i\rho)^{ \frac{d-4}{2} } }{m^2} \Fh21\Fz{1, \frac{d-2}{2} }{d-2}{ -\dfrac{q^2}{m^2} }. 
\end{eqnarray} From this representation, one can analytically continue the result to the limits $q^2 \rightarrow -m^2$ and $p_1^2 \rightarrow m^2$. \\ \item $\underline{p_1^2=p_2^2 \neq 0:}$ \\ We now consider the interesting case $p_1^2=p_2^2 \neq 0$. In this case, one recognizes that $G_3=0$. We can present $J_3$ in terms of two scalar one-loop two-point functions as follows \begin{eqnarray} \label{j3gram} J_3 &=& -\Gamma\left (\frac{4-d}{2} \right) \int \limits_0^1 dx\; \Bigg\{ \dfrac{\left[p_2^2x^2 -(p_2^2 +m_3^2 -m_2^2)x +m_3^2 -i\rho \right]^{\frac{d-4}{2} }}{m_3^2 -m_1^2 } \\ &&\hspace{3cm} + \dfrac{\left[p_1^2x^2 -(p_1^2 +m_1^2 -m_2^2)x +m_1^2 -i\rho \right]^{\frac{d-4}{2} }}{m_1^2 -m_3^2 } \Bigg\} \nonumber\\ &=&-\Gamma\left (\frac{4-d}{2}\right) (J_{231} +J_{123}). \end{eqnarray} Both terms on the right-hand side of Eq.~(\ref{j3gram}) are given by Feynman parameter integrals of scalar one-loop two-point functions. They are calculated in detail as follows. Let us consider $J_{231}$ as an example. We can rewrite $J_{231}$ in the following form: \begin{eqnarray} J_{231} &=&\dfrac{1}{m_3^2 -m_1^2} \int \limits_0^1 dx\; \left[p_2^2(x -x_{23})^2 +M_{23} -i\rho \right]^{\frac{d-4}{2} }. \end{eqnarray} The integral is worked out by applying the Mellin-Barnes relation, which gives \begin{eqnarray} J_{231} &=& \dfrac{2}{m_3^2 -m_1^2}\; \frac{1}{2\pi i} \int\limits_{-i\infty}^{i\infty} ds\; \dfrac{\Gamma(-s) \Gamma\left(\frac{4-d+2s}{2} \right) \Gamma(s+\frac{1}{2})} {\Gamma\left(\frac{4-d}{2} \right)\Gamma(s+\frac{3}{2})} \\ && \hspace{0cm}\times \left( M_{23} -i\rho \right)^{\frac{d-4}{2} } \left[ x_{23} \left( \dfrac{p_2^2x_{23}^2}{M_{23} -i\rho}\right)^s + x_{32} \left( \dfrac{p_2^2x_{32}^2}{M_{23} -i\rho}\right)^s \;\; \right] \nonumber\\ &=& \dfrac{\left( M_{23}-i\rho\right)^{\frac{d-4}{2}} }{ m_3^2 -m_1^2} \left\{ x_{23}\; \Fh21\Fz{\frac{4-d}{2}; \frac{1}{2}}{\frac{3}{2}}{ \dfrac{p_2^2x_{23}^2}{M_{23} -i\rho} } +(2 \leftrightarrow 3) \right\}. \end{eqnarray} $J_3$ is now cast into the form \begin{eqnarray} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=&\Bigg\{ \left(\dfrac{\partial_2 S_{23} }{G_{23}}\right) \dfrac{(M_{23} -i\rho)^{ \frac{d-4}{2}}}{m_3^2-m_1^2} \Fh21\Fz{\frac{4-d}{2}; \frac{1}{2} } {\frac{3}{2}}{\dfrac{M_{23}-m_3^2}{M_{23} -i\rho} } +(2\leftrightarrow3) \Bigg\} \nonumber \\ && \hspace{-0.2cm} +\Bigg\{ \left(\dfrac{\partial_2 S_{12} }{G_{23}}\right) \dfrac{(M_{12} -i\rho)^{ \frac{d-4}{2} }} {m_1^2-m_3^2} \Fh21\Fz{\frac{4-d}{2}; \frac{1}{2} }{\frac{3}{2}} {\dfrac{M_{12}-m_1^2}{M_{12} -i\rho} } +(2\leftrightarrow 1) \Bigg\}, \nonumber\\ \end{eqnarray} provided that $\left|\frac{M_{12}-m_{1,3}^2}{M_{12} -i\rho}\right|<1$. \end{enumerate} \subsection{General case} We now generalize the method to the general case in which $p_i^2 \neq 0$ for $i=1,2,3$. Following an idea in~\cite{'tHooft:1978xw}, we first apply the Euler transformation $y \rightarrow y - \beta x$, after which the polynomial in $x,y$ in the integrand of $J_3$ becomes \begin{eqnarray} &&\hspace{-0.5cm} A x^2+ By^2+2Cxy+ Dx+Ey+F- i\rho =\\ &&= (B\beta^2 - 2C\beta + A) x^2 + B y^2 +2(C-B\beta) xy + (D - E\beta)x + E y + F -i\rho. 
\nonumber \end{eqnarray} By choosing $\beta$ to be one of the roots of the equation \begin{eqnarray} \label{alpha} B \beta^2 - 2C \beta + A =0, \quad \text{or} \quad \beta= \dfrac{C \pm \sqrt{C^2 - AB}}{B}, \end{eqnarray} the integral $J_3$ is cast into \begin{eqnarray} &&\dfrac{J_3}{\Gamma\left (\frac{6-d}{2} \right)} = \int \limits_0^1 dx\int \limits_{\beta x}^{1-(1-\beta)x} dy\; \left\{ \Big[2(C-B\beta)y + D - E\beta \Big] x + By^2+ Ey+ F - i\rho \right\}^{\frac{d-6}{2}} . \nonumber\\ \end{eqnarray} It is also important to note that the final result is independent of $\beta$ in Eq.~(\ref{alpha}); this means that we are free to choose either root $\beta$ in Eq.~(\ref{alpha}). As a result, the integrand is linear in $x$, and hence the $x$-integration can be evaluated first. In order to work out the $x$-integration, we split the integration as follows \begin{eqnarray} \int \limits_0^1 dx\; \int \limits_{\beta x}^{1-(1-\beta)x} dy =\left\{ \int \limits_0^1 dy \; \int \limits_{0}^{1} dx - \int \limits_0^{\beta}dy\; \int \limits_{\frac{y}{\beta}}^{1} dx - \int \limits_{\beta}^1 dy\; \int \limits_{\frac{y-1}{\beta-1}}^{1} dx \right\}. \end{eqnarray} \begin{figure}[ht] \begin{center} \begin{pspicture}(-3, -2.5)(3, 2.5) \psset{linewidth=.1pt} \psset{unit = 0.9} \psline{->}(-4,-2.5)(-4,2.5) \psline(-4, 2)(-1.5, 0) \rput(-4.5, 2){$1$} \rput(-1.5, -0.5){$1$} \pspolygon[fillstyle=crosshatch, fillcolor=gray, linestyle=none, hatchcolor=lightgray](-4,2)(-4,0)(-1.5,0) \psline{->}(4,-2.5)(4,2.5) \psline{->}(-7,0)(-1,0) \psline{->}(1,0)(7,0) \psline(4,2)(6.5,1) \psline(4,0)(6.5,1) \psline[linestyle=dashed](4,1)(6.5,1) \psline[linestyle=dashed](4,2)(6.5,2) \psline[linestyle=dashed](6.5,0)(6.5,2) \pspolygon[fillstyle=crosshatch, fillcolor=gray, linestyle=none, hatchcolor=lightgray](4,2)(4,0)(6.5,1) \rput(-3.8, 2.6){$y$} \rput(6.5, -0.5){$1$} \rput(3.5, 2){$1$} \rput(3.5, 1){$\beta$} \rput(4.2, 2.6){$y$} \rput(-1.2,-0.2){$x$} \rput(7,-0.2){$x$} \rput(0., 0.6){$y' = y+ \beta x$} \psline{->}(-0.3,0)(0.3,0) \end{pspicture} \end{center} \caption{\label{region} The integration region after the shift $y \rightarrow y - \beta x $.} \end{figure} To achieve a more symmetric form, we make the further transformations $y \rightarrow \beta (1 - y )$ in the second integral and $y \rightarrow 1 -(1 - \beta )y$ in the third integral, respectively. This brings some order into the arguments of the integrands. The denominators are all of the linear form $\sim (y - y_i)$ for $i = 1, 2, 3$. It is also easy to confirm that all the $y_i$ satisfy the equations \begin{eqnarray} p_i^2 (y_k -x_{ij})^2 +M_{ij} -i\rho = M_3 -i\rho, \end{eqnarray} for $i,j,k =1,2,3$. Finally, following an idea in~\cite{'tHooft:1978xw}, we add extra terms, whose sum is zero, in order to cancel the residues of the poles at $y_i$. 
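To make the role of the root $\beta$ explicit, one can verify symbolically that the shift removes the $x^2$ term and reproduces the remaining coefficients quoted above; a minimal sketch (with arbitrary numerical coefficients, purely for illustration) is:
\begin{verbatim}
# Sketch: check that y -> y - beta*x, with beta a root of B b^2 - 2 C b + A = 0,
# linearizes the x-dependence of the quadratic form (arbitrary test coefficients).
import sympy as sp

x, y = sp.symbols('x y')
A, B, C, D, E, F = 3, 2, 4, -1, 5, 7
beta = (C + sp.sqrt(C**2 - A*B))/B

Q  = A*x**2 + B*y**2 + 2*C*x*y + D*x + E*y + F
Qs = sp.expand(Q.subs(y, y - beta*x))

print(sp.simplify(Qs.coeff(x, 2)))                               # 0: x^2 term removed
print(sp.simplify(Qs.coeff(x, 1).coeff(y, 1) - 2*(C - B*beta)))  # 0
print(sp.simplify(Qs.coeff(x, 1).coeff(y, 0) - (D - E*beta)))    # 0
\end{verbatim}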
The result reads \begin{eqnarray} &&\hspace{-0.5cm} \dfrac{ \lambda^{1/2}(p_1^2, p_2^2, p_3^2) } {\Gamma\left (\frac{4-d}{2} \right)} \; J_3 = \\ &&= -\int\limits_0^1 dy \frac{\left[p_1^2 y^2 -(p_1^2 + m_1^2 -m_2^2)y +m_1^2 -i\rho \right] ^{\frac{d-4}{2} }}{y-y_3} + \int\limits_0^1 dy \frac{(M_{3} -i\rho)^{\frac{d-4}{2} }}{y-y_3} \nonumber\\ && \hspace{0cm} \quad -\int\limits_0^1 dy \frac{\left[p_2^2 y^2 -(p_2^2 + m_2^2 -m_3^2)y +m_2^2 -i\rho \right] ^{\frac{d-4}{2} }}{y-y_1} + \int\limits_0^1 dy \frac{(M_{3} -i\rho)^{\frac{d-4}{2} }}{y-y_1} \nonumber \\ && \hspace{0cm} \quad -\int\limits_0^1 dy \frac{\left[p_3^2 y^2 -(p_3^2 + m_3^2 -m_1^2)y +m_3^2 -i\rho \right] ^{\frac{d-4}{2} }}{y-y_2} + \int\limits_0^1 dy \frac{(M_{3} -i\rho)^{\frac{d-4}{2} }}{y-y_2}. \nonumber \end{eqnarray} The analytic result for $J_3$ can be written in a compact form \begin{eqnarray} \label{j3final} \dfrac{J_3}{\Gamma\left (\frac{4-d}{2} \right)} &=& \;\left\{ -\dfrac{ (M_{3} -i\rho)^{\frac{d-4}{2} } } {\lambda^{1/2}(p_1^2, p_2^2, p_3^2)} \cdot J^{(d=4)}_{123} + \dfrac{1} {\lambda^{1/2}(p_1^2, p_2^2, p_3^2)} \cdot J^{(d)}_{123} \right\} \nonumber\\ && \nonumber\\ && + \left\{ (1,2,3) \leftrightarrow (2,3,1) \right\} + \left\{ (1,2,3) \leftrightarrow (3,1,2) \right\}, \end{eqnarray} with \begin{eqnarray} J_{ijk}^{(d)}&=& - \int\limits_0^1 dy \frac{\left[p_i^2 y^2 -(p_i^2 + m_i^2 -m_j^2)y +m_i^2 -i\rho \right] ^{\frac{d-4}{2} }}{y-y_k}, \end{eqnarray} for $i,j,k =1,2,3.$ Where the integrand's poles are given \begin{eqnarray} y_1 &=& 1-\dfrac{D - E\beta +2(C - B\beta)}{2(1-\beta) (C - B\beta)}, \; y_2 = 1+ \dfrac{D - E\beta}{2(C - B\beta)}, \; y_3 = - \dfrac{D - E\beta }{2\beta(C - B\beta)}. \end{eqnarray} Applying the formula for master integral in appendix $A$, we will present the result of $J_{ijk}$ in terms of Appell $F_1$ functions. For instant, one takes $J_{123}$ for an example. This term is expressed as follows \begin{eqnarray} \label{j123general1} J_{123} &=& -\left(\dfrac{\partial_3 S_3 }{\sqrt{8G_3 } } \right) \Bigg[ \left(\dfrac{\partial_2 S_{12} }{G_{12} } \right) \dfrac{ (m_1^2)^{\frac{d -4}{2} } }{M_{3}-m_1^2 } F_1\left(1; 1,\frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{12} -m_1^2}{M_{3} -m_1^2}, 1-\dfrac{M_{12} }{m_1^2} \right)\nonumber\\ &&\hspace{1.7cm}+ \left(\dfrac{\partial_1 S_{12} }{G_{12} } \right) \dfrac{ (m_2^2)^{\frac{d-4}{2} }}{M_{3}-m_2^2 } F_1\left(1; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{12} -m_2^2}{M_{3} -m_2^2}, 1-\dfrac{M_{12} }{m_2^2} \right) \Bigg] \nonumber\\ &&+\Bigg[ \dfrac{M_{12} -m_1^2} {2(M_{3}-m_1^2)} (m_1^2)^{\frac{d-4}{2}} F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{12} -m_1^2}{M_{3} -m_1^2}, 1-\dfrac{M_{12} }{m_1^2} \right) \\ && \hspace{1.4cm} -\dfrac{M_{12} -m_2^2}{2(M_{3}-m_2^2)} (m_2^2)^{\frac{d-4}{2} } F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{M_{12} -m_2^2}{M_{3} -m_2^2}, 1-\dfrac{M_{12} }{m_2^2} \right) \;\;\Bigg], \nonumber \end{eqnarray} provided that $\left|\frac{M_{12} -m_{1;2}^2} {M_{3} -m_{1;2}^2}\right|, \;\left|1-\frac{M_{12} }{m_{1;2}^2}\right|<1$. 
One finds another representation for $J_{123}$ by applying Eq.~(\ref{K1}) in appendix $A$: \begin{eqnarray} \label{j123general2} J_{123} &=& -\left(\dfrac{\partial_3 S_3 }{\sqrt{8G_3 } } \right) \Bigg[ \left(\dfrac{\partial_2 S_{12} }{G_{12} } \right) \dfrac{ (M_{12} -i\rho)^{\frac{d-4}{2} } }{M_{3}-M_{12} } F_1\left(\frac{1}{2}; 1,\frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{12} -m_1^2}{M_{12} -M_{3} }, 1-\dfrac{m_1^2}{M_{12} } \right)\nonumber\\ &&\hspace{1.5cm}+ \left(\dfrac{\partial_1 S_{12} }{G_{12} } \right) \dfrac{ (M_{12} -i\rho)^{\frac{d-4}{2} }}{M_{3}-M_{12} } F_1\left(\frac{1}{2}; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{M_{12} -m_2^2}{M_{12} -M_{3} }, 1-\dfrac{m_2^2}{M_{12} } \right) \Bigg] \nonumber\\ &&+\Bigg[ \dfrac{M_{12} -m_1^2}{2(M_{12}-M_{3}) } (M_{12} -i\rho)^{\frac{d-4}{2} } F_1\left(1; 1,\frac{4-d}{2}; 2; \dfrac{M_{12} -m_1^2}{M_{12} -M_{3} }, 1-\dfrac{m_1^2}{M_{12} } \right) \\ && \hspace{1.0cm} -\dfrac{M_{12} -m_2^2}{2(M_{12}-M_{3}) } (M_{12} -i\rho)^{\frac{d-4}{2} } F_1\left(1; 1,\frac{4-d}{2}; 2; \dfrac{M_{12} -m_2^2}{ M_{12} -M_{3} }, 1-\dfrac{m_2^2}{M_{12} } \right) \;\;\Bigg], \nonumber \end{eqnarray} provided that $\left|\frac{M_{12} -m_{1;2}^2}{ M_{12} -M_{3} } \right|,\; \left|1-\frac{m_{1;2}^2}{M_{12} }\right|<1$. It is important to note that the results in (\ref{j123general1}, \ref{j123general2}) are only valid if the absolute values of the arguments of the Appell $F_1$ functions in these representations are less than $1$. If these kinematic variables do not satisfy this condition, one has to perform analytic continuations; we refer to the work of~\cite{olsson} for the Appell $F_1$ function. The results shown in Eqs.~(\ref{j123general1}, \ref{j123general2}) are also new hypergeometric representations for scalar one-loop three-point functions in general space-time dimension. From these representations, one can perform analytic continuation of the result in the cases of $M_{ij}=0, m_i^2, m_j^2$, $M_3=0$ (for $i,j,k=1,2,3$), $G_3=0$, the massless case, etc. This can be done by applying transformations for Appell $F_1$ functions (see appendix $B$). As an example, we consider the case of $G_3=0$. In this case, $\beta = C/B$; repeating the calculation, we arrive at \begin{eqnarray} \label{g123J30} J_3 = -\sum\limits_{k=1}^3 \left(\dfrac{\partial_k S_3}{2S_3} \right) {\bf k^{-} } J_3(d; \{p_i^2\}, \{m_i^2\} ). \end{eqnarray} The operator ${\bf k^{-} }$ is defined in such a way that it reduces the three-point integrals to two-point integrals by shrinking a propagator in the integrand of $J_3$. This equation is equivalent to Eq.~($46$) in Ref.~\cite{Devaraj:1997es}. Note that we have already arrived at this relation in (\ref{j3gram}) of the previous subsection. 
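Before turning to tensor integrals, we note that all of the representations above can be cross-checked numerically against the two-fold Feynman parameter form (\ref{feynj3}); at a Euclidean point no $i\rho$ prescription is needed. A minimal sketch (hypothetical kinematic point, for illustration only):
\begin{verbatim}
# Sketch: direct numerical evaluation of J_3 from Eq. (feynj3) at a Euclidean
# point (all p_i^2 < 0), to be compared with the hypergeometric representations.
from mpmath import mp, gamma, quad

mp.dps = 30
d = 3.7                                  # generic non-integer dimension
p1s, p2s, p3s = -2.0, -3.0, -5.0         # hypothetical Euclidean invariants
m1s, m2s, m3s = 1.0, 2.0, 3.0

p1p3 = (p2s - p1s - p3s)/2               # from (p1 + p3)^2 = p2^2
A, B, C = p1s, p3s, -p1p3
D, E, F = -(p1s + m1s - m2s), -(p3s + m1s - m3s), m1s

f = lambda x, y: (A*x**2 + B*y**2 + 2*C*x*y + D*x + E*y + F)**((d - 6)/2)
J3 = -gamma((6 - d)/2)*quad(lambda x: quad(lambda y: f(x, y), [0, 1 - x]), [0, 1])
print(J3)
\end{verbatim}
Comparing such direct integrations with the analytic expressions evaluated at the same point provides a simple consistency check of the formulas.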
\subsection{Tensor one-loop three-point integrals} Following tensor reduction method in Ref.~\cite{Davydychev:1991va}, tensor one-loop three-point integrals with rank $M$ can be presented in terms of scalar ones with the shifted space-time dimension: \begin{eqnarray} && \hspace{-1cm} J^{(3)}_{\mu_1\mu_2\cdots\mu_M} (d;\{\nu_1,\nu_2,\nu_3\}) = \\ &=& \int \frac{d^d k}{i\pi^{d/2}} \dfrac{k_{\mu_1}k_{\mu_2}\cdots k_{\mu_M}} {[(k+q_1)^2 -m_2^2 + i\rho]^{\nu_1} [(k+q_2)^2 -m_3^2 + i\rho]^{\nu_2} [(k+q_3)^2 -m_1^2 + i\rho]^{\nu_3}} \nonumber\\ &=&\sum\limits_{\lambda, \kappa_1, \cdots, \kappa_3} \left(-\frac{1}{2}\right)^{\lambda} \Big\{[g]^\lambda [q_1]^{\kappa_1} [q_2]^{\kappa_2} [q_3]^{\kappa_3} \Big\}_ {\mu_1\mu_2\cdots\mu_M} \\ &&\hspace{1.4cm} \times (\nu_1)_{\kappa_1}(\nu_2)_{\kappa_2}(\nu_3)_{\kappa_3} \;\; J_3\Big(d+2(M-\lambda); \{\nu_1+\kappa_1,\nu_2+\kappa_2,\nu_3+\kappa_3\} \Big). \nonumber \end{eqnarray} In this formula, the condition for the indices $\lambda, \kappa_1, \kappa_2$ and $\kappa_3$ is $2\lambda +\kappa_1+\kappa_2+\kappa_3=M$. Moreover, these indices also follow the constrain $0 \leq \kappa_1, \kappa_2, \kappa_3 \leq M$ and $0 \leq \lambda \leq [M/2]$ (integer of $M/2$). The notation $(a)_\kappa = \Gamma(a+\kappa)/\Gamma(a)$ is the Pochhammer symbol. The structure of tensor $\{[g]^\lambda [q_1]^{\kappa_1} [q_2]^{\kappa_2} [q_3]^{\kappa_3} \}_{\mu_1\mu_2\cdots \mu_M} $ is symmetric with respect to $\mu_1, \mu_2,\cdots, \mu_M$. This tensor is constructed from $\lambda$ of metric $g_{\mu\nu}$, $\kappa_1$ of momentum $q_1$, $\cdots$, $\kappa_3$ of momentum $q_3$. The $J_3(d+2(M-\lambda);\{\nu_1+\kappa_1, \nu_2+\kappa_2,\nu_3+\kappa_3\})$ are scalar one-loop three-point functions with the shifted space-time dimension $d+ 2(M-\lambda)$, raising powers of propagators $\{\nu_i+\kappa_i\}$ for $i=1,2,3$. For examples, we first take the simplest case $M=1$. In this case, one has $\lambda =0$. Subsequently, we get \begin{eqnarray} J^{(3)}_{\mu} (d;\{\nu_1,\nu_2,\nu_3\}) &=& \sum\limits_{k=1}^{3} \nu_k\; q_{k\mu} \; J_3(d+2;\{\nu_1 +\delta_{1k}, \nu_2+ \delta_{2k},\nu_3+ \delta_{3k}\}), \end{eqnarray} with $\delta_{jk} $ is the Kronecker symbol. We next consider $M=2$. One has \begin{eqnarray} J^{(3)}_{\mu_1\mu_2} (d; \{\nu_1,\nu_2,\nu_3\}) &=& -\frac{1}{2} g_{\mu_1\mu_2}\; J_3 (d+2;\{\nu_1, \nu_2, \nu_3\}) \\ && \hspace{-2cm} + \sum\limits_{k=1}^{3} q_{k\mu_1} \; q_{k\mu_2} \; (\nu_k)_2 ~ J_3 (d+4; \{\nu_1 +2\delta_{1k}, \nu_2 +2\delta_{2k}, \nu_3 +2\delta_{3k}\}) \nonumber\\ &&\hspace{-2cm} + \sum\limits_{k=1}^{3}\sum\limits_{k'>k}^{3} (q_{k\mu_1} q_{k'\mu_2} + q_{k\mu_2} q_{k'\mu_1}) \; (\nu_k)_1 \;(\nu_{k'})_1 \nonumber\\ &&\hspace{-2cm}\times J_3 (d+4;\{\nu_1 +\delta_{1k}+\delta_{1k'}, \nu_2 +\delta_{2k}+\delta_{2k'}, \nu_3 +\delta_{3k}+\delta_{3k'}\} ). \nonumber \end{eqnarray} In the next step, the scalar integrals $J_3\Big(d+2(M-\lambda); \{\nu'_1,\nu'_2, \nu'_3\} \Big)$ will be reduced to subset of master integrals by using integration-by-part method (IBP)~\cite{IBP}. By applying the operator $\frac{\partial}{\partial k} \cdot k$ to the integrand of $J_3\left(d; \{\nu_1, \nu_2, \nu_3 \} \right)$ and choosing $k$ to be the momentum of three internal lines ($k\equiv\{k+q_1, k+q_2, k+q_3\}$). 
One arrives at the following system of equations: \begin{eqnarray} \label{j30} \begin{cases} 2 \nu_1 m_1^2 \mathbf{1^+} - \nu_2 Y_{12} \mathbf{2^+} - \nu_3 Y_{13} \mathbf{3^+} = (d- 2 \nu_1 -\nu_2-\nu_3) \mathbf{1} -\nu_2 \mathbf{1^-}\mathbf{2^+} -\nu_3 \mathbf{1^-}\mathbf{3^+}, \\ - \nu_1 Y_{12} \mathbf{1^+} + 2 \nu_2 m_2^2 \mathbf{2^+} - \nu_3 Y_{23} \mathbf{3^+} = (d- \nu_1 - 2 \nu_2-\nu_3) \mathbf{1} - \nu_1 \mathbf{1^+}\mathbf{2^-} -\nu_3 \mathbf{2^-}\mathbf{3^+}, \\ - \nu_1 Y_{13} \mathbf{1^+} - \nu_2 Y_{23} \mathbf{2^+} +2 \nu_3 m_3^2 \mathbf{3^+} = (d- \nu_1 -\nu_2-2 \nu_3) \mathbf{1} -\nu_1 \mathbf{1^+}\mathbf{3^-} -\nu_2 \mathbf{2^+}\mathbf{3^-}, \end{cases} \end{eqnarray} where we have used the following notation: \begin{eqnarray} \begin{cases} \mathbf{1} = J_3 (d; \{\nu_1,\nu_2,\nu_3\}), \\ \mathbf{1^\pm }= J_3 (d; \{\nu_1\pm 1,\nu_2,\nu_3\}), \\ \mathbf{2^\pm }= J_3 (d; \{\nu_1,\nu_2\pm 1,\nu_3\}), \\ \mathbf{3^\pm }= J_3 (d; \{\nu_1,\nu_2,\nu_3\pm 1\}). \end{cases} \end{eqnarray} In order to solve the system of equations (\ref{j30}), one first considers the following matrix: \begin{eqnarray} Y_{3} = \begin{bmatrix} \nu_1 Y_{11} &&& -\nu_2 Y_{12} &&& -\nu_3 Y_{13} \\ -\nu_1 Y_{12} &&& \nu_2 Y_{22} &&& -\nu_3 Y_{23} \\ -\nu_1 Y_{13} &&& -\nu_2 Y_{23} &&& \nu_3 Y_{33} \\ \end{bmatrix} \text{with}\; Y_{ij}&=&-(q_i-q_j)^2+m_i^2+m_j^2. \end{eqnarray} If det$(Y_{3}) \neq 0$, one can then express $J_3 (d; \nu_{123}+1)$ in terms of $J_3 (d; \nu_{123})$, with $\nu_{123}=\nu_1+\nu_2+\nu_3$. Proceeding recursively in this way \cite{Laporta:2001dd}, we arrive at the following integrals: $J_2(d; \{\nu_1, \nu_2\})$ and $J_3(d; \{1,1,1\})$. By applying IBP once again to the former integrals, we arrive at master integrals such as $J_1(d; \{\nu\})$ and $J_2(d; \{1,1\})$, which can be found in \cite{Fleischer:2003rm}, and $J_3(d;\{1,1,1\})$, which is given in this paper. \section{Conclusions} \noindent New analytic formulas for one-loop three-point Feynman integrals in general space-time dimension have been presented in this paper. The results are expressed in terms of Appell $F_1$ functions, covering all cases of internal mass and external momentum assignments. We have also cross-checked the analytic results of this work against other papers in several special cases. \\ \noindent {\bf Acknowledgment:}~This research is funded by Vietnam National University (VNU-HCM), Ho Chi Minh City under grant number C$2019$-$18$-$06$???. \section*{Appendix $A$: Evaluating the master integral} We consider the master integral \begin{eqnarray} \mathcal{K} = \int\limits_0^1 dx\; \dfrac{\left[ p^2_ix^2 - (p_i^2 +m_i^2-m_j^2)x +m_i^2 -i\rho \right]^{\frac{d-4}{2} }}{x - x_k}, \end{eqnarray} with $p_i^2 \neq 0$, where $|x_k|>1$ or $-1 \leq x_k <0$, and $x_k$ satisfies the equation \begin{eqnarray} p^2_ix_k^2 - (p_i^2 +m_i^2-m_j^2)x_k +m_i^2 -i\rho = M_3 -i\rho, \end{eqnarray} for $i,j,k=1,2,3$. We discuss the method to evaluate this integral under the above conditions. In the case of $0<x_k<1$, one performs an analytic continuation of the result for all master integrals which appear in the general formula for $J_3$; this is carried out explicitly in section $2$. In order to work out the master integral, one writes the polynomial in $x$ appearing in the numerator of the integrand of $\mathcal{K}$ as follows \begin{eqnarray} p^2_ix^2- (p_i^2 +m_i^2-m_j^2)x +m_i^2 -i\rho = p^2_i(x-x_{ij})^2 +M_{ij} -i\rho. 
\end{eqnarray} We have introduced the kinematic variables \begin{eqnarray} x_{ij} &=& \dfrac{p_i^2 +m_i^2-m_j^2 }{2p_i^2}, \\ x_{ji} &=&1-x_{ij}=\dfrac{p_i^2 -m_i^2+m_j^2 }{2p_i^2}, \end{eqnarray} for $p_i^2 \neq 0$. From the conventions, we subsequently verify the below relations \begin{eqnarray} \label{rij-rijk} p_i^2 x_{ij}^2 &=& m_i^2 - M_{ij},\\ p_i^2 x_{ji}^2 &=& m_j^2 - M_{ij},\\ p_i^2 (x_k-x_{ij})^2 &=& M_3 -M_{ij}, \\ p_i^2 (1-x_k-x_{ji})^2 &=& M_3 -M_{ij}, \end{eqnarray} for $i,j =1,2,3$. Using Mellin-Barnes relation~\cite{MB} we then decompose the integrand as \begin{eqnarray} \label{MB1} \mathcal{K} = \frac{1}{2\pi i} \int\limits_{-i\infty}^{i\infty} ds\; \dfrac{\Gamma(-s) \Gamma\left(\frac{4-d}{2} +s\right)} {\Gamma\left(\frac{4-d}{2}\right)} \left( \dfrac{1}{M_{ij} -i\rho }\right)^{\frac{4-d}{2} } \int\limits_0^1 \dfrac{dx}{x-x_{k}} \left[\dfrac{p_i^2(x-x_{ij})^2}{M_{ij} -i\rho} \right]^s. \end{eqnarray} With the help of this relation, the Feynman parameter integral will be casted into the simpler form. It will be calculated in terms of Gauss hypergeometric functions. In particular, we have \begin{eqnarray} \mathcal{L} = \int\limits_0^1\dfrac{dx}{x-x_{k}} \left[\dfrac{p_i^2(x-x_{ij})^2}{M_{ij} -i\rho} \right]^s = \left\{ \int\limits_0^{x_{ij}} dx + \int\limits_{x_{ij}}^1 dx \right\} \dfrac{1}{x-x_{k}} \left[\dfrac{p_i^2(x-x_{ij})^2}{M_{ij} -i\rho} \right]^s. \end{eqnarray} One makes a shift $x = x_{ij}\; x'$ (and $x = 1- x_{ji}\; x'$) for the first integral (second integral) respectively. The result reads \begin{eqnarray} \mathcal{L} &=& \left(- \dfrac{x_{ij}}{x_k} \right) \left(\dfrac{p_i^2 x_{ij}^2}{M_{ij} -i\rho} \right)^s \int\limits_0^1 dx \; \dfrac{(1-x)^{2s}} {1-\left( \dfrac{x_{ij}}{x_k}\right) x} - \Big\{x_{ij} \leftrightarrow x_{ji};\; x_k \leftrightarrow 1-x_k\Big\}-\mathrm{term}. \nonumber\\ \end{eqnarray} We apply a further transformation like $\xi = (1-x)^2 \geqslant 0$. The Jacobian of the shift is $dx = -\dfrac{d\xi}{2\sqrt{\xi}}$ and the integration domain is now $[1,0]$. As a result of this shift, we arrive at \begin{eqnarray} \mathcal{L} &=& -\left(\dfrac{x_{ij}}{2} \right) \left(\dfrac{p_i^2 x_{ij}^2}{M_{ij} -i\rho} \right)^s \int\limits_0^1 d\xi \; \dfrac{\xi^{s}}{\sqrt{\xi} \left(x_k -x_{ij} +x_{ij}\sqrt{\xi} \right) } \\ && \nonumber\\ && - \Big\{x_{ij} \leftrightarrow x_{ji};\; x_k \leftrightarrow 1-x_k\Big\}-\mathrm{term} \nonumber\\ &=& -\dfrac{x_{ij}}{2(x_k -x_{ij} )}\; \left(\dfrac{p_i^2 x_{ij}^2}{M_{ij} -i\rho} \right)^s \int\limits_0^1 d\xi \; \dfrac{\xi^{s-\frac{1}{2}}}{1- \dfrac{x_{ij}^2 }{ (x_k - x_{ij})^2} \xi }\nonumber\\ &&\nonumber\\ && + \dfrac{x_{ij}^2 }{2(x_k -x_{ij} )^2}\; \left(\dfrac{p_i^2 x_{ij}^2}{M_{ij} -i\rho} \right)^s \int\limits_0^1 d\xi \; \dfrac{\xi^{s} }{1- \dfrac{x_{ij}^2 }{ (x_k - x_{ij})^2} \xi }\\ &&\nonumber\\ && - \Big\{x_{ij} \leftrightarrow x_{ji};\; x_k \leftrightarrow 1-x_k\Big\}-\mathrm{term}. 
\nonumber\\ \nonumber \end{eqnarray} Following Eq.~(\ref{gauss-int}) in appendix $B$, we can present this integral in terms of Gauss hypergeometric functions \begin{eqnarray}\quad \mathcal{L} &=& -\dfrac{x_{ij}}{2(x_k -x_{ij} )} \;\left(\dfrac{p_i^2 x_{ij}^2}{M_{ij} -i\rho} \right)^s \dfrac{\Gamma\left(s+\frac{1}{2} \right)} { \Gamma\left(s+\frac{3}{2} \right)} \Fh21\Fz{s + \frac{1}{2}, 1}{s+ \frac{3}{2}} { \dfrac{x_{ij}^2 }{ (x_k - x_{ij})^2} } \nonumber\\ && +\dfrac{x_{ij}^2 }{2(x_k -x_{ij} )^2} \; \left(\dfrac{p_i^2 x_{ij}^2}{M_{ij} -i\rho} \right)^s \dfrac{\Gamma\left(s+1 \right)}{ \Gamma\left(s+2 \right)} \Fh21\Fz{s + 1, 1}{s+ 2}{\dfrac{x_{ij}^2 }{ (x_k - x_{ij})^2} } \\ &&\nonumber\\ &&-\Big\{x_{ij} \leftrightarrow x_{ji};\; x_k \leftrightarrow 1-x_k\Big\}-\mathrm{term}, \nonumber \end{eqnarray} provided that $\left|\frac{x_{ij}^2 }{ (x_k - x_{ij})^2}\right|<1$ for $i,j,k=1,2,3$ and $\mathcal{R}$e$\left(s+\frac{1}{2}\right)>0$. Putting this result into Eq.~(\ref{MB1}), we are going to evaluate the following Mellin-Barnes integral \begin{eqnarray} \label{MB2} \dfrac{1}{2\pi i} \int\limits_{-i\infty}^{+i\infty}ds \; \dfrac{\Gamma(-s)\;\Gamma(a +s) \;\Gamma(b+s)}{\Gamma(c+s) } (-x)^s \; \Fh21\Fz{a+s,b'}{c+s}{y}, \end{eqnarray} with $|\mathrm{Arg}(-x)|<\pi$ and $|x|<1$ and $|y|<1$. Under these conditions, one could close the integration contour to the right side of the imaginary axis in the $s$-complex plane. Subsequently, we take into account the residua of the sequence poles of $\Gamma(-s)$. The result is presented as a series of Appell $F_1$ functions~\cite{Slater} \begin{eqnarray} \label{MB3} \dfrac{\Gamma(a)\Gamma(b)}{\Gamma(c)} \sum\limits_{m=0}^{\infty} \dfrac{(a)_m (b)_m}{(c)_m} \dfrac{x^m}{m!} \Fh21\Fz{a+m,b'}{c+m}{y} =\dfrac{\Gamma(a)\Gamma(b)}{\Gamma(c)}\; F_1 (a; b, b';c; x,y), \nonumber \end{eqnarray} provided that $|\mathrm{Arg}(-x)|<\pi$ and $|x|<1$ and $|y|<1$. Finally, the result for $\mathcal{K}$ reads \begin{eqnarray} \label{K1} \mathcal{K} &=& -\dfrac{x_{ij}}{(x_k -x_{ij} )}\; \left(M_{ij} -i\rho\right)^{\frac{d-4}{2} } \; F_1\left(\frac{1}{2}; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{x_{ij}^2 }{ (x_k - x_{ij})^2}, -\dfrac{p_i^2 x_{ij}^2}{M_{ij} -i\rho} \right)\nonumber\\ &&+ \dfrac{x^2_{ij}}{2 (x_k -x_{ij} )^2}\; \left(M_{ij} -i\rho\right)^{\frac{d-4}{2} } \; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{x_{ij}^2 }{ (x_k - x_{ij})^2}, -\dfrac{p_i^2 x_{ij}^2}{M_{ij}-i\rho } \right) \nonumber \\ &&\nonumber \\ &&-\Big\{x_{ij} \leftrightarrow x_{ji}; \; x_k \leftrightarrow 1-x_k\Big\}-\mathrm{term}, \end{eqnarray} provided that $\left|\frac{x_{ij}^2 }{ (x_k - x_{ij})^2} \right| <1$ and $ \left|-\frac{p_i^2 x_{ij}^2}{M_{ij}} \right| <1$ for $i,j,k=1,2,3$. It is confirmed that Arg$\left( \frac{x_{ij}^2 }{ (x_k - x_{ij} )^2 }\right)<\pi$ and Arg$\left(-\frac{p_i^2 x_{ij}^2}{M_{ij}}\right)<\pi$ due to the $i\rho$-term. In other words, these kinematic variables are never on the negative real axis. 
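The resummation into an Appell $F_1$ function used above can also be checked numerically; a short sketch with arbitrary illustrative parameters (the use of {\tt mpmath} is an assumption for illustration, not part of the paper) is:
\begin{verbatim}
# Sketch: numerical check of the resummation
#   sum_m (a)_m (b)_m/(c)_m x^m/m! 2F1(a+m, b'; c+m; y) = F1(a; b, b'; c; x, y),
# for arbitrary illustrative parameters with |x|, |y| < 1.
from mpmath import mp, rf, factorial, hyp2f1, appellf1, nsum, inf

mp.dps = 25
a, b, bp, c = 0.5, 1.0, 0.3, 1.5
x, y = 0.2, -0.4

series = nsum(lambda m: rf(a, m)*rf(b, m)/rf(c, m)*x**m/factorial(m)
                        *hyp2f1(a + m, bp, c + m, y), [0, inf])
print(series, appellf1(a, b, bp, c, x, y))   # the two values should agree
\end{verbatim}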
Applying the transformation for Appell $F_1$ functions (see Eq.~(\ref{f1relation1}) in appendix $B$, one finds another representation for integral $\mathcal{K}$ as \begin{eqnarray} \label{K2} \mathcal{K} &=& \dfrac{x_{ij} (x_{ij}-x_k) \left(m_i^2 \right)^{\frac{d-4}{2} } }{ (x_k -x_{ij} )^2 - x_{ij}^2 } \; F_1\left(1; 1, \frac{4-d}{2}; \frac{3}{2}; \dfrac{-x_{ij}^2}{ (x_k -x_{ij} )^2 -x_{ij}^2 }, \frac{p_i^2 x_{ij}^2 }{p_i^2 x_{ij}^2+M_{ij} -i\rho} \right) \nonumber\\ && + \dfrac{x^2_{ij} \left(m^2_{i} \right)^{\frac{d-4}{2}} }{2 \left[ (x_k -x_{ij} )^2 -x_{ij}^2\right] }\; F_1\left(1; 1, \frac{4-d}{2}; 2; \dfrac{-x_{ij}^2}{ (x_k -x_{ij} )^2 -x_{ij}^2 }, \frac{p_i^2 x_{ij}^2 }{p_i^2 x_{ij}^2 +M_{ij}-i\rho } \right) \nonumber \\ &&\nonumber\\ &&-\Big\{x_{ij} \leftrightarrow x_{ji};\; x_k \leftrightarrow 1-x_k\Big\}-\mathrm{term}, \end{eqnarray} provided that $ \left|-\frac{x_{ij}^2} { (x_k -x_{ij} )^2 -x_{ij}^2 } \right| <1$ and $ \left| \frac{p_i^2 x_{ij}^2 } {p_i^2 x_{ij}^2 +M_{ij} } \right| <1$ for $i,j,k=1,2,3$. \section*{Appendix $B$: Generalized hypergeometric functions} The Gauss hypergeometric series are given (see Eq.~($1.1.1.4$) in Ref.~\cite{Slater}) \begin{eqnarray} \label{gauss-series} \Fh21\Fz{a,b}{c}{z} = \sum\limits_{n=0}^{\infty} \dfrac{(a)_n (b)_n}{(c)_n} \frac{z^n}{n!}, \end{eqnarray} provided that $|z|<1$. Here, the pochhammer symbol, \begin{eqnarray} (a)_n =\dfrac{\Gamma(a+n)}{\Gamma(a)}, \end{eqnarray} is taken into account. The integral representation for Gauss hypergeometric functions is (see Eq.~(1.6.6) in Ref.~\cite{Slater}) \begin{eqnarray} \label{gauss-int} \Fh21\Fz{a,b}{c}{z} =\dfrac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int\limits_0^1 du \; u^{b-1} (1-u)^{c-b-1} (1-zu)^{-a}, \end{eqnarray} provided that $|z|<1$ and Re$(c)>$Re$(b)>0$. The series of Appell $F_1$ functions are given (see Eq.~(8.13) in Ref.~\cite{Slater}) \begin{eqnarray} \label{appell-series} F_1(a; b, b'; c; x, y) = \sum\limits_{m=0}^{\infty}\sum\limits_{n=0}^{\infty} \dfrac{(a)_{m+n} (b)_m (b')_n}{(c)_{m+n}\; m! n!} x^m y^n, \end{eqnarray} provided that $|x|<1$ and $|y|<1$. The single integral representation for $F_1$ is (see Eq.~(8.25) in Ref.~\cite{Slater}) \begin{eqnarray} \label{appell-int} F_1(a; b, b'; c; x, y) = \dfrac{\Gamma(c)}{\Gamma(c-a)\Gamma(a)} \int\limits_0^1 du\; u^{a-1} (1-u)^{c-a-1} (1-xu)^{-b}(1-yu)^{-b'}, \end{eqnarray} provided that Re$(c)$ $>$ Re$(a)>0$ and $|x|<1$, $|y|<1$. \subsection*{Transformations for Gauss $_2F_1$ hypergeometric functions} Basic linear transformation formulas for Gauss $_2F_1$ hypergeometric functions collected from Ref.~\cite{Slater} are listed as follows \begin{eqnarray} \Fh21\Fz{a,b}{c}{z} &=& \Fh21\Fz{b,a}{c}{z} \label{tran2F1a} \\ &=& (1-z)^{c-a-b} \Fh21\Fz{c-a,c-b}{c}{z} \label{tran2F1b} \\ &=& (1-z)^{-a} \Fh21\Fz{ a,c-b}{c}{\frac{z}{z-1} } \label{tran2F1c} \\ &=& (1-z)^{-b} \Fh21\Fz{ b,c-a}{c}{\frac{z}{z-1} } \label{tran2F1d}\\ &=& \frac{\Gamma(c) \Gamma(c-a-b)}{\Gamma(c-a) \Gamma(c-b)} \Fh21\Fz{a,b}{a+b-c+1}{1-z} \nonumber \\ &+& (1-z)^{c-a-b} \frac{\Gamma(c) \Gamma(a+b-c)}{\Gamma(a) \Gamma(b)} \Fh21\Fz{c-a,c-b}{c-a-b+1}{1-z}\\ &=& \frac{\Gamma(c) \Gamma(b-a)}{\Gamma(b) \Gamma(c-a)} (-z)^{-a} \Fh21\Fz{a,1-c+a}{1-b+a}{\frac{1}{z} } \nonumber \\ &+& \frac{\Gamma(c) \Gamma(a-b)}{\Gamma(a) \Gamma(c-b)} (-z)^{-b} \Fh21\Fz{b,1-c+b}{1-a+b}{\frac{1}{z} }\label{z-}. \end{eqnarray} \subsection*{Transformations for Appell $F_1$ hypergeometric functions} We collect all transformations for Appell $F_1$ functions from Refs.~\cite{Slater}. 
The first relation for $F_1$ reads \begin{eqnarray} \label{f1relation1} F_1\Big(a;b,b';c;x,y\Big)=(1-x)^{-b}(1-y)^{-b'} F_1\Big(c-a;b,b';c;\frac x{x-1},\frac y{y-1}\Big). \end{eqnarray} For $b'=0$, this reduces to the well-known Pfaff--Kummer transformation for $_2F_1$. More generally, one has \begin{eqnarray} F_1\Big(a;b,b';c;x,y\Big)=(1-x)^{-a} F_1\!\left(a;-b-b'+c,b';c;\frac x{x-1},\frac{y-x}{1-x}\right). \end{eqnarray} Furthermore, if $c=b+b'$, one then obtains \begin{eqnarray} F_1\Big(a;b,b';b+b';x,y\Big)&=&(1-x)^{-a} \Fh21\Fz{a,b'}{b+b'}{\frac{y-x}{1-x}} \\ &=& (1-y)^{-a}\Fh21\Fz{a,b}{b+b'}{\frac{x-y}{1-y}}. \end{eqnarray} Similarly, \begin{eqnarray} F_1\left(a;b,b';c;x,y\right)&=&(1-y)^{-a} F_1\!\left(a;b,c-b-b';c;\frac{x-y}{1-y},\frac y{y-1}\right), \\ F_1\left(a;b,b';c;x,y\right) &=& {\mbox{\small $(1-x)^{c-a-b}(1-y)^{-b'}$}} F_1\!\left(c-a;c-b-b',b';c;x,\frac{x-y}{1-y}\right), \\ F_1\left(a;b,b';c;x,y\right) &=& {\mbox{\small $(1-x)^{-b}(1-y)^{c-a-b'}$}} F_1\!\left(c-a;b,c-b-b';c;\frac{y-x}{1-x},y\right). \end{eqnarray}
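As a practical remark, the transformation formulas collected in this appendix are easily spot-checked numerically; the following sketch (arbitrary parameters inside the convergence region, again assuming {\tt mpmath}) verifies the Euler transformation for $_2F_1$ and Eq.~(\ref{f1relation1}) for $F_1$:
\begin{verbatim}
# Sketch: spot-check of two transformation formulas (illustrative parameters).
from mpmath import mp, hyp2f1, appellf1

mp.dps = 25
a, b, bp, c = 0.7, 1.2, 0.4, 2.3
z, x, y = 0.3, 0.25, -0.35

# Euler transformation for 2F1: both differences should be ~ 0
print(hyp2f1(a, b, c, z) - (1 - z)**(c - a - b)*hyp2f1(c - a, c - b, c, z))

# Transformation (f1relation1) for the Appell F_1 function
print(appellf1(a, b, bp, c, x, y)
      - (1 - x)**(-b)*(1 - y)**(-bp)*appellf1(c - a, b, bp, c, x/(x - 1), y/(y - 1)))
\end{verbatim}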
\section{Introduction} Supermassive black holes (SMBHs) in active galactic nuclei (AGNs) are known to have luminous accretion discs (Shakura \& Sunyaev 1973; Kato et al. 2008 and references therein). Various pieces of evidence for mass outflows from a luminous disc are frequently observed, but their origins and dynamics are not yet well clarified. For example, 10-15\% of quasars are classified as BAL (broad absorption line) quasars, which show broad, blue-shifted and strong absorption lines of highly ionized atoms such as N{\footnotesize V}, C{\footnotesize I\hspace{-1pt}V}, and Si{\footnotesize I\hspace{-1pt}V} in optical or UV observations (Weymann et al. 1991; Hamann et al. 1993; Gibson et al. 2009; Allen et al. 2011). It is supposed that, from the centers of BAL quasars, accretion disc winds of $10000-30000 ~\rm km~s^{-1}$ (0.01--$0.1c$) are blowing off from the black hole vicinity. The column density of these outflows is estimated to be $10^{23}$--$10^{24}~\rm cm^{-2}$. The existence of high-speed winds is also found by X-ray observations. In about 40\% of AGNs, the absorption lines of Fe{\footnotesize XXV} and Fe{\footnotesize XXI\hspace{-1pt}V} are found in the outflow. Their corresponding velocities are 0.1--$0.3c$ and these are called ultra-fast outflows (UFOs) {(e.g., Tombesi et al. 2010, 2011, 2012, 2013, 2014)}. The column density is estimated to be $10^{22}$--$10^{23}\rm cm^{-2}$. UFOs are thought to have energies comparable to those of energetic jets, which may have a significant impact on AGN feedback processes such as star formation and SMBH growth in the bulge {(e.g., Nayakshin 2010; Wagner et al. 2013; King \& Muldrew 2016; Longinotti 2018)}. Theoretically, spherically symmetric optically-thick winds driven by radiation pressure under general relativity have been investigated by several researchers (Lindquist 1966; Castor 1972; Cassinelli \& Hartmann 1975; Ruggles \& Bath 1979; Mihalas 1980; Quinn \& Paczy\'{n}ski 1985; Paczy\'{n}ski 1986, 1990; Paczy\'{n}ski \& Pr\'{o}szy\'{n}ski 1986; Turolla et al. 1986; Nobili et al. 1994; Akizuki \& Fukue 2008, 2009), using radiation hydrodynamical equations under the moment formalism (Thorne 1981; Park 2006; Takahashi 2007; see also Kato \& Fukue 2020). Many of these studies adopted the equilibrium diffusion approximation, where the radiation temperature is equal to the gas one, in order to close the moment equations. However, the use of the equilibrium diffusion approximation in moving media is physically questionable, since it permits thermal pulses to travel faster than the speed of light and is notoriously acausal, as was stated by Thorne et al. (1981). Furthermore, under the equilibrium diffusion approximation, the transonic points are proved to be always nodal in the nonrelativistic regime (Fukue 2014). Instead of the equilibrium diffusion approximation, several studies adopted the nonequilibrium diffusion approximation, where the radiation temperature is not equal to the gas one, or the radiation pressure is not expressed by the gas temperature (Nobili et al. 1994; Akizuki \& Fukue 2008, 2009). In this case, some closure relation is necessary, and the Eddington approximation is usually adopted. However, a simple Eddington approximation with the Eddington factor of 1/3 in the nonrelativistic regime is known to produce pathological behavior in the relativistic regime (e.g., Turolla \& Nobili 1988; Nobili et al. 1991; Turolla et al. 1995; Dullemond 1999; Fukue 2005). 
Namely, the moment equations under the simple Eddington approximation in the relativistic regime have singular points, which are purely mathematical artifacts of the moment expansion (Dullemond 1999). Hence, instead of the simple Eddington approximation, the variable Eddington factor has been used (Nobili et al. 1994; Akizuki \& Fukue 2008, 2009). In Akizuki and Fukue (2008, 2009), for example, a velocity-dependent variable Eddington factor was used. In their studies, however, the gas pressure was dropped for simplicity. Thus, in this paper, in order to revisit transonic black hole winds, we investigate general relativistic radiation hydrodynamical winds under the nonequilibrium diffusion approximation with the help of a variable Eddington factor $f(\tau,\beta)$, which depends both on the optical depth $\tau$ and on the flow speed $v$ ($=\beta c$) (Akizuki \& Fukue 2008, 2009), and examine the topological nature of the critical points of the black hole winds in detail. In the next section we describe the basic equations for general relativistic winds driven by radiation pressure in the spherically symmetric case. In section 3 we derive the wind equations from the basic equations. In section 4 we show the loci of critical points and examine their types, while in section 5 we solve for and present transonic solutions with typical parameters. The final section is devoted to concluding remarks. \section{Basic equations} In this section, we describe the basic equations for the present spherically symmetric, optically-thick, steady wind driven by radiation pressure from the vicinity of the central black hole of mass $M$. General relativistic radiation hydrodynamical equations have been derived in several studies (Lindquist 1966; Anderson \& Spiegel 1972; Thorne 1981; Udey \& Israel 1982; Nobili et al. 1993; {Park 1993, 2006}; Takahashi 2007; see also Kato et al. 2008; Kato and Fukue 2020). For gas, the continuity equation is \begin{equation} \label{eq-conti} 4\pi r^2 \rho c u =\dot{M}, \end{equation} where $\rho$ is the proper gas density, $c$ the speed of light, $u$ the radial component of the four velocity, and $\dot{M}$ the constant mass-loss rate. Using the proper three velocity $v$ and $\beta$ ($\equiv v/c$), the four velocity $u$ is expressed as $u=y\beta$, where $y=\gamma\sqrt{g_{00}}$, $\gamma=1/\sqrt{1-\beta^2}$, $g_{00}=1-r_{\rm S}/r$, $r_{\rm S}$ being the Schwarzschild radius ($r_{\rm S}=2GM/c^2$). The equation of motion is \begin{equation} \label{eq-motion} u\frac{d u}{dr}+\frac{r_{\rm S}}{2r^2}+\frac{y^2}{\varepsilon +p}\frac{d p}{dr} =\frac{y}{\varepsilon +p}\frac{\rho \overline{\kappa}_{\rm F}}{c} F_{0}, \end{equation} where $p$ is the gas pressure, $\overline{\kappa}_{\rm F}$ ($=\kappa+\sigma$) the frequency-integrated flux-mean opacity for absorption $\kappa$ and scattering $\sigma$, and $F_0$ the radiative flux in the comoving frame. The Lorentz transformation between the radiation moments in the fixed (fiducial observer) frame and those in the comoving one is expressed as \begin{equation} F_0=\gamma^2[(1+\beta^2)F-\beta(cE+cP)], \end{equation} where $E$ is the radiation energy density, $F$ the radial component of the radiative flux, and $P$ the $rr$ component of the radiation stress tensor in the fixed frame. In contrast to Akizuki and Fukue (2009), in the present study we include the gas pressure term, which creates the transonic points in the flow.
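As an illustration (not part of the formulation itself), the kinematic factors and the frame transformation above can be evaluated with a short Python sketch; the function names and the use of the normalized radius $\hat{r}=r/r_{\rm S}$ are merely illustrative choices.
\begin{verbatim}
import numpy as np

def lorentz_gamma(beta):
    # gamma = 1 / sqrt(1 - beta^2)
    return 1.0 / np.sqrt(1.0 - beta**2)

def g00(r_hat):
    # g_00 = 1 - r_S / r, with r measured in units of r_S
    return 1.0 - 1.0 / r_hat

def y_factor(beta, r_hat):
    # y = gamma * sqrt(g_00); the radial four velocity is u = y * beta
    return lorentz_gamma(beta) * np.sqrt(g00(r_hat))

def comoving_flux(F, cE, cP, beta):
    # F_0 = gamma^2 [ (1 + beta^2) F - beta (cE + cP) ]
    g = lorentz_gamma(beta)
    return g**2 * ((1.0 + beta**2) * F - beta * (cE + cP))
\end{verbatim}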
Instead of the radiative equilibrium assumed in Akizuki and Fukue (2009), we use the full form of the energy equation for gas: \begin{equation} \label{eq-energy} \frac{c}{r^2}\frac{d}{dr}[ r^2(\varepsilon -\rho c^2)u]+c\frac{p}{r^2}\frac{d}{dr}(r^2 u)=-\rho (j_{0}-\overline{\kappa}_{\rm E}cE_{0} ), \end{equation} where $\varepsilon$ is the gas internal energy including the rest-mass energy, and expressed as \begin{equation} \label{EoS} \varepsilon=\rho c^2 +\frac{p}{\Gamma-1}, \end{equation} $\Gamma$ being the ratio of specific heats. Furthermore, $\overline{\kappa}_{\rm E}$ ($=\kappa$) is the frequency-integrated energy-mean absorption opacity. Moreover, $E_0$ is the radiation energy density in the comoving frame, and the Lorentz transformation is \begin{equation} cE_0=\gamma^2[cE-2\beta F+\beta^2cP]. \end{equation} In addition, $j_0$ is the frequency-integrated emissivity, and can be written under the local thermodynamic equilibrium (LTE) condition as \begin{equation} \label{LTE} j_0=4\pi \overline{\kappa}_E B(T), \end{equation} where $B$ is the frequency-integrated blackbody intensity, $B(T)=\sigma_{\rm SB}T^4/\pi$, $\sigma_{\rm SB}$ being the Stefan-Boltzmann constant, and $T$ the gas temperature. The equation of state is \begin{equation} p=\rho \frac{\mathcal{R}}{\mu} T, \end{equation} where $\mathcal{R}$ is the gas constant, $\mu$ ($=0.5$, fully ionized hydrogen plasma) the mean molecular weight. We assume that the black hole winds are sufficiently hot, and the gas is fully ionized. The adiabatic sound speed, $c_{\rm s}$, is defined as \begin{equation} c_{s}^2=c^2\left(\frac{dp}{d\varepsilon}\right)_{\rm adiabatic}=c^2\frac{\Gamma p}{\varepsilon+p}. \end{equation} For radiation, the 0-th moment equation is \begin{equation}\label{0th-moment} \frac{d}{dr}(4 \pi r^2 g_{00}F)=4\pi r^2 \rho y\left(j_0-\overline{\kappa}_{\rm E}cE_0-\beta \overline{\kappa}_{\rm F} F_{0}\right), \end{equation} while the first moment equation is \begin{eqnarray} \label{1st-moment} \frac{d}{dr}(4 \pi r^2 g_{00}cP)&=&4\pi r\left(1-\frac{3r_{\rm S}}{2r}\right)(cE-cP)\nonumber \\ &&-4\pi r^2 \rho y\bigl[\overline{\kappa}_{\rm F} F_{0} -\beta\left(j_0-\overline{\kappa}_{\rm E}cE_0\right)\bigr].\nonumber \\ && \end{eqnarray} The $rr$ component of the radiation stress tensor, $P_0$, in the comoving frame is written as \begin{equation} cP_0=\gamma^2[\beta^2cE-2\beta F+cP]. \end{equation} In the present study, we do not assume the equilibrium diffusion approximation, where $P_0$ is expressed in terms of the gas temperature as $P_0 = aT^4/3$, but assume the nonequilibrium diffusion approximation, where $P_0$ or the radiation temperature is an independent variable, and some closure relation is necessary. As a closure relation, we adopt the Eddington approximation: \begin{equation}\label{edd-app} P_0=f(\beta,\tau)E_0, \end{equation} where $f(\beta,\tau)$ is the variable Eddington factor, which depends both on the optical depth and the flow speed (Tamazawa et al. 1975; Abramowicz et al. 1991; Akizuki \& Fukue 2008, 2009): \begin{equation} f(\beta,\tau)=\frac{\gamma(1+\beta)+\tau}{\gamma(1+\beta)+3\tau}. \end{equation} Finally, we introduce the optical depth variable $\tau$ by \begin{eqnarray} \label{def-tau} d\tau &=& -\rho\overline{\kappa}_{\rm F}\gamma(1-\beta \cos\theta)\sqrt{g_{11}} dr \nonumber \\ &=& -\rho(\kappa+\sigma)\gamma(1-\beta) \frac{dr}{\sqrt{g_{00}}} , \end{eqnarray} where $\theta$ is the angle between the velocity and the line-of-sight, which is set to $\theta=0$ in the present wind case (cf. Abramowicz et al.
1991; Nied\'{z}wiecki \& Zdziarski 2006; Fukue 2011). Using equations (\ref{eq-conti}), (\ref{eq-motion}), (\ref{0th-moment}), and (\ref{1st-moment}), we derive an additional equation, the Bernoulli equation, for the present case: \begin{equation} \label{bel-0} \dot{M}\frac{\varepsilon +p}{\rho}y+4\pi r^2 g_{00}F =\dot{E} , \end{equation} where $\dot{E}$ is the Bernoulli constant. In the relativistic regime, where the flow velocity becomes comparable to the speed of light, advection terms appear in this Bernoulli equation when the radiative flux $F$ in the inertial frame is converted to the comoving-frame flux $F_0$. Since we treat the spherically symmetric flow, instead of the linear flux $F$ and radiation pressure $P$, we use the spherical variables $L$ and $Q$ defined by \begin{eqnarray} L &\equiv& 4\pi r^2 g_{00} F, \\ Q &\equiv& 4\pi r^2 g_{00} cP, \end{eqnarray} where $L$ is the luminosity measured by an observer at infinity. Moreover, with the black hole winds in mind, we define and use nondimensional variables: \begin{equation} \label{non-d-variable} \hat{r}\equiv \frac{r}{r_{\rm S}},~~~~\beta \equiv \frac{v}{c},~~~~\alpha_{\rm s} \equiv \frac{c_{\rm s}}{c},~~~~\hat{L} \equiv \frac{L}{L_{\rm E}},~~~~\hat{Q} \equiv \frac{Q}{L_{\rm E}}, \end{equation} and nondimensional parameters: \begin{equation} \label{non-d-parameter} m\equiv\frac{M}{M_{\odot}},~~~~\dot{m}\equiv\frac{\dot{M}c^2}{L_{\rm E}},~~~~\dot{e}\equiv\frac{\dot{E}}{L_{\rm E}}, \end{equation} where $L_{\rm E}$ ($\equiv 4\pi cGM/\overline{\kappa}_{\rm F})$ is the Eddington luminosity, and $\dot{M}_{\rm E}$ ($\equiv {L_{\rm E}}/{c^2})$ the critical mass-loss rate. In a previous observational study of UFOs, the mean mass-loss rate was estimated to be $\dot{M} \sim 0.01$--$1~\rm M_{\odot}~yr^{-1}$ (Tombesi et al. 2012). Specifically, the quasar PG1211+143 has an SMBH with a mass of $\sim 10^8 \rm M_\odot$ at its center, and in terms of the nondimensional mass-loss rate this corresponds to $\dot{m}\sim 1$--$10^2$. \section{Wind equations} Eliminating the gas-pressure gradient from equations (\ref{eq-motion}) and (\ref{eq-energy}), and the density using equation (\ref{eq-conti}), we obtain the wind equation for the velocity: \begin{eqnarray}\label{wind-beta} \frac{d\beta}{dr}&=&\frac{1}{y^2(\beta^2-\alpha ^2_{\rm s})}\Biggl\{\beta\Biggl[-\frac{r_{\rm S}}{2r^2}\nonumber\\ &&+\frac{2}{r}\left(1-\frac{3r_{\rm S}}{4r}\right)\alpha ^2_{\rm s}+\frac{\delta^2y}{\gamma^2c^2}\overline{\kappa}_{\rm F}\frac{F_0}{c}\Biggr] \nonumber \\ &&+\frac{\delta^2y}{\gamma^2c^2}\frac{\Gamma-1}{c}\left(j_0-c\overline{\kappa}_{\rm E}E_0\right)\Biggr\} , \end{eqnarray} where \begin{equation} \delta^2\equiv \frac{\rho c^2}{\varepsilon +p}=1-\frac{\alpha^2_{\rm s}}{\Gamma-1}. \end{equation} On the other hand, the gas-pressure gradient is expressed in terms of the density and the adiabatic sound speed as \begin{equation} \label{pressure-grad} \frac{dp}{dr}=\frac{\varepsilon+p}{\Gamma}\left(\frac{\alpha_{\rm s}^2}{\rho}\frac{d\rho}{dr}+\frac{\Gamma-1}{\Gamma-1-\alpha_{\rm s}^2}\frac{d\alpha_{\rm s}^2}{dr}\right) .
\end{equation} Hence, eliminating the gas-pressure gradient from equations (\ref{eq-motion}) and (\ref{pressure-grad}), and the density using equation (\ref{eq-conti}), we obtain the wind equation for the adiabatic sound speed \footnote{ If we take the nonrelativistic limit ($g_{00}\rightarrow1$, $\gamma\rightarrow 1$, $\delta\rightarrow 1$) in the wind equations (\ref{wind-beta}) and (\ref{wind-alpha}), these coincide with the wind equations derived by Fukue (2014). }: \begin{eqnarray} \frac{d\alpha^2_{\rm s}}{dr}&=&-\frac{(\Gamma-1)\delta^2}{y^2(\beta^2-\alpha^2_{\rm s})}\Biggl[\alpha ^2_{\rm s}\Biggl(-\frac{r_{\rm S}}{2r^2}+\frac{2}{r}y^2\beta^2+\frac{\delta^2y}{c^2}\overline{\kappa}_{\rm F}\frac{F_0}{c}\Biggr) \nonumber \\ &&+\frac{\delta^2y}{c^2}\frac{\Gamma \beta^2-\alpha^2_{\rm s}}{c\beta}\left(j_0-c\overline{\kappa}_{\rm E}E_0\right)\Biggr] . \nonumber \end{eqnarray} \begin{equation} \label{wind-alpha} ~ \end{equation} As was stated, we assume the LTE condition (\ref{LTE}) in this paper. Then, equations (\ref{wind-beta}), (\ref{wind-alpha}), (\ref{0th-moment}), (\ref{1st-moment}), (\ref{def-tau}), and (\ref{bel-0}) are respectively rewritten as follows: \begin{eqnarray}\label{wind-beta-re} \frac{d\beta}{dr}&= &\frac{1}{y^2(\beta^2-\alpha ^2_{\rm s})}\Biggr\{\beta\Biggl[-\frac{r_{\rm S}}{2r^2}+\frac{2}{r}\left(1-\frac{3r_{\rm S}}{4r}\right)\alpha ^2_{\rm s}\nonumber \\ &&+\frac{\delta^2y}{4\pi r^2g_{00}\gamma^2}\frac{\overline{\kappa}_{F}}{c^3}\frac{(f+\beta^2 )L-(1+f)\beta Q}{f-\beta ^2}\Biggr]\nonumber \\ &&+\frac{\delta^2y}{4\pi r^2g_{00}\gamma^2}(\Gamma-1)\frac{\overline{\kappa}_E}{c^3}\nonumber \\ &&\times\left[16\pi^2g_{00} r^2B-\frac{(1+\beta^2)Q-2\beta L}{f-\beta^2}\right]\Biggr\}, \end{eqnarray} \begin{eqnarray} \label{wind-alpha-re} \frac{d\alpha^2_{\rm s}}{dr}&= &-\frac{(\Gamma-1)\delta^2}{y^2(\beta^2-\alpha^2_{\rm s})}\Biggl\{\alpha ^2_{\rm s}\Biggl[-\frac{r_{\rm S}}{2r^2}+\frac{2}{r}y^2\beta^2\nonumber\\ &&+\frac{\delta^2y}{4\pi r^2g_{00}}\frac{\overline{\kappa}_{F}}{c^3}\frac{(f+\beta^2 )L-(1+f)\beta Q}{f-\beta ^2}\Biggr]\nonumber\\ &&+\frac{\delta^2y}{4\pi r^2g_{00}\beta}(\Gamma \beta^2-\alpha^2_{\rm s})\frac{\overline{\kappa}_{E}}{c^3}\nonumber\\ &&\times\Biggl[16\pi^2g_{00} r^2B-\frac{(1+\beta^2)Q-2\beta L}{f-\beta^2}\Biggr]\Biggr\}, \end{eqnarray} \begin{eqnarray} \frac{dL}{dr}&= &\frac{\dot{M}}{4\pi r^2g_{00}c\beta}\Biggl\{\overline{\kappa}_{\rm E}\Biggl[16\pi^2g_{00}r^2B\nonumber\\ &&-\frac{(1+\beta^2)Q-2\beta L}{f-\beta^2}\Biggr]\nonumber\\ &&-\overline{\kappa}_{\rm F}\beta\frac{(f+\beta^2)L-(1+f)\beta Q}{f-\beta^2}\Biggr\}, \nonumber \end{eqnarray} \begin{equation} \label{0th-moment-re} ~ \end{equation} \begin{eqnarray} \frac{dQ}{dr}&= &\frac{1}{rg_{00}}\left(1-\frac{3r_{\rm S}}{2r}\right)\frac{(1-f)(1+\beta^2)Q-(1-f)2\beta L}{f-\beta^2}\nonumber\\ &&+\frac{\dot{M}}{4\pi r^2g_{00}c\beta}\Biggl\{\beta\overline{\kappa}_{\rm E}\Biggl[16\pi^2g_{00}r^2B\nonumber\\ &&-\frac{(1+\beta^2)Q-2\beta L}{f-\beta^2}\Biggr]\nonumber\\ &&-\overline{\kappa}_{\rm F}\frac{(f+\beta^2)L-(1+f)\beta Q}{f-\beta^2}\Biggr\}, \nonumber \end{eqnarray} \begin{equation} \label{1st-moment-re} ~ \end{equation} \begin{equation} \label{tau-re} \frac{d\tau}{dr}=-\overline{\kappa}_{\rm F}\frac{(1-\beta)}{4\pi r^2 g_{00} c\beta}\dot{M}, \end{equation} \begin{equation} \label{bel-re} \frac{\dot{M}c^2}{\delta ^2}y+L=\dot{E}. \end{equation} Here, this Bernoulli equation is not redundant, but is used to determine the adiabatic sound speed, as follows.
\footnote{ Nonrelativistic limit of this Bernoulli equation is expressed \begin{displaymath} \dot{M}c^2\left(1+\frac{1}{2}\beta^2+\frac{\alpha_{\rm s}^2}{\Gamma-1}-\frac{r_{\rm s}}{2r}\right)+L=\dot{E}, \end{displaymath} when we plug $y\simeq 1-\frac{r_{\rm S}}{2r}+\frac{1}{2}\beta^2$, $\delta^{-2}\simeq 1+\frac{\alpha_{\rm s}^2}{\Gamma-1}$ in equation (\ref{bel-re}). If we include the constant term $\dot{M}c^2$ on the right side, it has the same form as in Fukue (2014). } Next, we normalize these equations in terms of the nondimensional variables (\ref{non-d-variable}) and parameters (\ref{non-d-parameter}). Then, equations (\ref{wind-beta-re}), (\ref{wind-alpha-re}), (\ref{0th-moment-re}), (\ref{1st-moment-re}), (\ref{tau-re}), and (\ref{bel-re}) are respectively normalized as follows: \begin{eqnarray}\label{df1} \frac{d\beta}{d\hat{r}}&= &\frac{1}{y^2(\beta^2-\alpha ^2_{\rm s})}\Biggr\{\beta\Biggl[-\frac{1}{2\hat{r}^2}+\frac{2}{\hat{r}}\left(1-\frac{3}{4\hat{r}}\right)\alpha ^2_{\rm s}\nonumber\\ &&+\frac{\delta^2y}{2\hat{r}^2g_{00}\gamma^2}\frac{(f+\beta^2 )\hat{L}-(1+f)\beta \hat{Q}}{f-\beta ^2}\Biggr]\nonumber\\ &&+\frac{\delta^2y}{2\hat{r}^2g_{00}\gamma^2}(\Gamma-1)\epsilon\nonumber \\ &&\times\left[\mathcal{B}g_{00} \hat{r}^2-\frac{(1+\beta^2)\hat{Q}-2\beta \hat{L}}{f-\beta^2}\right]\Biggr\}, \end{eqnarray} \begin{eqnarray} \label{df2} \frac{d\alpha^2_{\rm s}}{d\hat{r}}=& &-\frac{(\Gamma-1)\delta^2}{y^2(\beta^2-\alpha ^2_{\rm s})}\Biggl\{\alpha ^2_{\rm s}\Biggl[-\frac{1}{2\hat{r}^2}+\frac{2}{\hat{r}}y^2\beta^2\nonumber\\ &&+\frac{\delta^2y}{2\hat{r}^2g_{00}}\frac{(f+\beta^2 )\hat{L}-(1+f)\beta \hat{Q}}{f-\beta ^2}\Biggr]\nonumber\\ &&+\frac{\delta^2y}{2\hat{r}^2g_{00}\beta}(\Gamma \beta^2-\alpha^2_{\rm s})\epsilon\nonumber\\ &&\times\Biggl[\mathcal{B}g_{00} \hat{r}^2-\frac{(1+\beta^2)\hat{Q}-2\beta \hat{L}}{f-\beta^2}\Biggr]\Biggr\}, \end{eqnarray} \begin{eqnarray} \label{df3} \frac{d\hat{L}}{d\hat{r}}&= &\frac{\dot{m}}{2\hat{r}^2g_{00}\beta}\Biggl\{\epsilon\left[\mathcal{B} g_{00}\hat{r}^2-\frac{(1+\beta^2)\hat{Q}-2\beta \hat{L}}{f-\beta^2}\right]\nonumber\\ &&-\beta\frac{(f+\beta^2)\hat{L}-(1+f)\beta \hat{Q}}{f-\beta^2}\Biggr\}, \end{eqnarray} \begin{eqnarray} \label{df4} \frac{d\hat{Q}}{d\hat{r}}&= &\frac{1}{\hat{r}g_{00}}\left(1-\frac{3}{2\hat{r}}\right)\frac{(1-f)(1+\beta^2)\hat{Q}-(1-f)2\beta \hat{L}}{f-\beta^2}\nonumber\\ &&+\frac{\dot{m}}{2\hat{r}^2g_{00}\beta}\Biggl\{\beta\epsilon\left[\mathcal{B} g_{00}\hat{r}^2-\frac{(1+\beta^2)\hat{Q}-2\beta \hat{L}}{f-\beta^2}\right]\nonumber\\ &&-\frac{(f+\beta^2)\hat{L}-(1+f)\beta \hat{Q}}{f-\beta^2}\Biggr\}, \end{eqnarray} \begin{equation} \label{df5} \frac{d\tau}{d\hat{r}}=-\frac{\dot{m}(1-\beta)}{2\hat{r}^2 g_{00}\beta}, \end{equation} \begin{equation} \label{bel} \dot{e}=\frac{\dot{m}}{\delta ^2}y+\hat{L}. \end{equation} Here, $\mathcal{B}$ and $\epsilon$ are the nondimensional blackbody intensity and photon destruction probability, respectively: \begin{eqnarray} \mathcal{B} &=&\frac{16\pi^2r_{\rm S}^2 }{L_{\rm E}}B(T)=1.700\times10^{21}\frac{m}{\delta^8\Gamma^4}\alpha_{\rm s}^8\left(1+\frac{\kappa}{\sigma}\right),\\ \frac{\kappa}{\sigma}&=&1.838\times10^{-27}\frac{\dot{m}}{m}\frac{\delta^7\Gamma^{3.5}}{\hat{r}^2 y\beta\alpha_{\rm s}^7},\\ \epsilon &=&\frac{\overline{\kappa}_{\rm E}}{\overline{\kappa}_F}=\frac{\kappa}{\kappa+\sigma}=\frac{\frac{\kappa}{\sigma}}{1+\frac{\kappa}{\sigma}}. 
\end{eqnarray} In addition, the adiabatic sound speed is expressed by using the nondimensional Bernoulli equation (\ref{bel}): \begin{equation} \label{alpha} \alpha_{\rm s}^2=(\Gamma-1)\left(1-\frac{\dot{m}y}{\dot{e}-\hat{L}}\right). \end{equation} In order to determine the adiabatic sound speed, we use this equation (\ref{alpha}), instead of (\ref{df2}), for simplicity. \section{Critical points} The distance ($\hat{r}=\hat{r}_{\rm c}$) at which the flow speed ($\beta=\beta_{\rm c}$) is equal to the adiabatic sound speed ($\alpha_s=\alpha_{\rm s,c}$) is called the {\it transonic point} ($\beta_{\rm c}=\alpha_{\rm s,c}$) (the subscript c means `critical'). Wind equation (\ref{df1}) shows that the transonic point is also a {\it critical point}, since at $\hat{r}=\hat{r}_{\rm c}$ the denominator of equation (\ref{df1}) vanishes. Hence, in order for the transonic solution to exist, the numerator of equation (\ref{df1}) must vanish at the critical point simultaneously ({\it regularity condition}). In this section, we first obtain and determine all the variables at $\hat{r}_{\rm c}$, calculate $d\beta/d\hat{r}|_{\rm c}$ by using L'Hopital's rule in equation (\ref{df1}) at $\hat{r}=\hat{r}_{\rm c}$, and examine the topology of the transonic/critical points. Equation (\ref{df1}) is reexpressed as \begin{equation} \label{N1-D} \frac{d\beta}{d\hat{r}}=\frac{\mathcal{N}_1}{\mathcal{D}}, \end{equation} where \begin{equation} \label{D} \mathcal{D} \equiv \beta ^2-\alpha _{\rm s}^2, \end{equation} \begin{eqnarray} \label{N1} \mathcal{N} _1\nonumber&\equiv &f_1\nonumber\\ &\equiv & \frac{1}{y^2}\Biggl\{\beta\Biggl[-\frac{1}{2\hat{r}^2}+\frac{2}{\hat{r}}\left(1-\frac{3}{4\hat{r}}\right)\alpha ^2_{\rm s}\nonumber\\ &&+\frac{\delta^2y}{2\hat{r}^2g_{00}\gamma^2}\frac{(f+\beta ^2)\hat{L}-(1+f)\beta \hat{Q}}{f-\beta ^2}\Biggr]\nonumber\\ &&+\frac{\delta^2y}{2\hat{r}^2g_{00}\gamma^2}(\Gamma-1)\epsilon\nonumber \\ &&\times\left[\mathcal{B}g_{00} \hat{r}^2-\frac{(1+\beta^2)\hat{Q}-2\beta \hat{L}}{f-\beta^2}\right]\Biggr\}. \end{eqnarray} At the critical points, $\mathcal{D}$ and $\mathcal{N}_1$ must vanish simultaneously, as was stated. Firstly, we derive a relation among the quantities at the critical point. That is, imposing the condition of $\mathcal{D}|_{\rm c}=0$ ($\beta_{\rm c}=\alpha_{\rm s,c}$) on the Bernoulli equation (\ref{bel}), we can express $\beta_{\rm c}$ in terms of other quantities as \begin{displaymath} \beta_{\rm c}^6-(2\Gamma-1)\beta_{\rm c}^4+(\Gamma^2-1)\beta_{\rm c}^2-(\Gamma-1)^2\left[1-\frac{\dot{m}^2g_{00}}{(\dot{e}-\hat{L}_{\rm c})^2}\right]=0.
\end{displaymath} \begin{equation} \label{cp-eq} ~ \end{equation} In order to simplify the expression, we introduce \begin{equation} C\equiv 1-\frac{\dot{m}^2g_{00}}{(\dot{e}-\hat{L}_{\rm c})^2} = 1-\frac{\dot{m}^2}{(\dot{e}-\hat{L}_{\rm c})^2} \left( 1 - \frac{1}{\hat{r}_{\rm c}} \right), \end{equation} and solve equation (\ref{cp-eq}) to yield \begin{eqnarray} \beta_{\rm c}^2&=&\alpha_{\rm s,c}^2\nonumber\\ &=&\frac{2\Gamma-1}{3}-\frac{1}{3}\Biggl\{-\frac{27C}{2}(\Gamma-1)^2+(1-2\Gamma)^3\nonumber\\ &&-\frac{9(1-2\Gamma)(\Gamma^2-1)}{2}+\frac{1}{2}\Bigl[-4\left((1-2\Gamma)^2-3(\Gamma^2-1)\right)^3\nonumber\\ &&+\Bigl(-27C(\Gamma-1)^2+2(1-2\Gamma)^3\nonumber\\ &&-9(1-2\Gamma)(\Gamma^2-1)\Bigr)^2\Bigr]^{\frac{1}{2}}\Biggr\}^{\frac{1}{3}}\nonumber\\ &&-\frac{(1-2\Gamma)^2-3(\Gamma^2-1)}{3}\Biggl\{-\frac{27C}{2}(\Gamma-1)^2+(1-2\Gamma)^3\nonumber\\ &&-\frac{9(1-2\Gamma)(\Gamma^2-1)}{2}+\frac{1}{2}\Bigl[-4\left((1-2\Gamma)^2-3(\Gamma^2-1)\right)^3\nonumber\\ &&+\Bigl(-27C(\Gamma-1)^2+2(1-2\Gamma)^3\nonumber\\ &&-9(1-2\Gamma)(\Gamma^2-1)\Bigr)^2\Bigr]\Biggr\}^{-\frac{1}{3}}.\nonumber \end{eqnarray} \begin{equation} \label{beta-c-gamma} ~ \end{equation} For example, when $\Gamma=4/3$, equation (\ref{beta-c-gamma}) becomes \begin{eqnarray} \beta_{\rm c}^2&=&\alpha_{\rm s, c}^2\nonumber\\ &=&\frac{5}{9}-\frac{1}{3}\left[-\frac{3C}{2}+\frac{1}{2}\sqrt{\left(\frac{65}{27}-3C\right)^2-\frac{256}{729}}+\frac{65}{54}\right]^{\frac{1}{3}}\nonumber\\ &&-\frac{4}{27}\left[-\frac{3C}{2}+\frac{1}{2}\sqrt{\left(\frac{65}{27}-3C\right)^2-\frac{256}{729}}+\frac{65}{54}\right]^{-\frac{1}{3}},\nonumber \end{eqnarray} \begin{equation}\label{beta-c-sol4/3} ~ \end{equation} and when $\Gamma=5/3$, it is \begin{eqnarray} \beta_{\rm c}^2&=&\alpha_{\rm s, c}^2\nonumber\\ &=&\frac{7}{9}-\frac{1}{3}\left[-6C +\frac{1}{2}\sqrt{\left(\frac{322}{27}-12C\right)^2-\frac{4}{729}}+\frac{161}{27}\right]^{\frac{1}{3}}\nonumber \\ &&-\frac{1}{27}\left[-6C +\frac{1}{2}\sqrt{\left(\frac{322}{27}-12C\right)^2-\frac{4}{729}}+\frac{161}{27}\right]^{-\frac{1}{3}}. \nonumber \end{eqnarray} \begin{equation}\label{beta-c-sol5/3} ~ \end{equation} It should be stressed that equations (\ref{beta-c-gamma}), (\ref{beta-c-sol4/3}) {and (\ref{beta-c-sol5/3})} can have physical solutions only if $0\leq C\leq 1$. If $C=0$, $\beta_{\rm c}=0$, and if $C=1$, $\beta_{\rm c}=\sqrt{\Gamma-1}$. In particular, at the horizon, where $g_{00}=0$, $C=1$, therefore $\beta_{\rm c}=\sqrt{\Gamma-1}$. Another condition for the quantities at the critical point is obtained from the regularity condition: $\mathcal{N}_1|_{\rm c}=0$. Other relations for, e.g., radiation pressure $\hat{Q}_{\rm c}$ and optical depth $\tau_{\rm c}$ can be obtained from the remained equations as below. 
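As a numerical cross-check, equation (\ref{cp-eq}) is simply a cubic in $\beta_{\rm c}^2$ and can also be solved directly instead of through the closed forms (\ref{beta-c-gamma})--(\ref{beta-c-sol5/3}). The following Python sketch is illustrative only; the selection of the branch with $0\leq\beta_{\rm c}^2\leq\Gamma-1$ is an assumption made to match the limits for $C=0$ and $C=1$ quoted above.
\begin{verbatim}
import numpy as np

def beta_c_squared(C, Gamma):
    # Roots of x^3 - (2*Gamma-1)*x^2 + (Gamma^2-1)*x - (Gamma-1)^2*C = 0,
    # where x = beta_c^2 = alpha_{s,c}^2 and 0 <= C <= 1 (equation cp-eq).
    coeffs = [1.0, -(2.0 * Gamma - 1.0), Gamma**2 - 1.0,
              -((Gamma - 1.0)**2) * C]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-6].real
    # keep the assumed physical branch 0 <= beta_c^2 <= Gamma - 1
    phys = real[(real >= -1e-9) & (real <= Gamma - 1.0 + 1e-6)]
    return float(np.clip(phys.min(), 0.0, Gamma - 1.0))

# the two limits quoted in the text
print(beta_c_squared(0.0, 4.0 / 3.0))   # -> 0     (C = 0 gives beta_c = 0)
print(beta_c_squared(1.0, 4.0 / 3.0))   # -> ~1/3  (C = 1 gives beta_c^2 = Gamma - 1)
\end{verbatim}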
Next, applying L'Hopital's rule in equation (\ref{N1-D}) at $\hat{r}=\hat{r}_{\rm c}$, $\left.d\beta/d\hat{r}\right|_{\rm c}$ is written as \begin{eqnarray} &&\left.\frac{d\beta}{d\hat{r}}\right|_{\rm c} \nonumber\\ &&=\frac{\left.\frac{\partial \mathcal{N}_1}{\partial \hat{r}}\right|_{\rm c} +\left.\frac{\partial \mathcal{N}_1}{\partial \beta }\right|_{\rm c}\left.\frac{d\beta}{d\hat{r}}\right|_{\rm c} +\left.\frac{\partial \mathcal{N}_1}{\partial \hat{L} }\right|_{\rm c}\left.\frac{d\hat{L}}{d\hat{r}}\right|_{\rm c} +\left.\frac{\partial \mathcal{N}_1}{\partial \hat{Q}}\right|_{\rm c}\left.\frac{d\hat{Q}}{d\hat{r}}\right|_{\rm c} +\left.\frac{\partial \mathcal{N}_1}{\partial \tau }\right|_{\rm c}\left.\frac{d\tau}{d\hat{r}}\right|_{\rm c}} {\left.\frac{\partial \mathcal{D}}{\partial \hat{r}}\right|_{\rm c} +\left.\frac{\partial \mathcal{D} }{\partial \beta }\right|_{\rm c}\left.\frac{d\beta}{d\hat{r}}\right|_{\rm c} +\left.\frac{\partial \mathcal{D} }{\partial \hat{L} }\right|_{\rm c}\left.\frac{d\hat{L}}{d\hat{r}}\right|_{\rm c} +\left.\frac{\partial \mathcal{D} }{\partial \hat{Q}}\right|_{\rm c}\left.\frac{d\hat{Q}}{d\hat{r}}\right|_{\rm c} +\left.\frac{\partial \mathcal{D} }{\partial \tau }\right|_{\rm c}\left.\frac{d\tau}{d\hat{r}}\right|_{\rm c}}.\nonumber \end{eqnarray} \begin{equation} \label{dbeta-c} ~ \end{equation} In order to calculate equation (\ref{dbeta-c}), equations (\ref{df2}), (\ref{df3}), (\ref{df4}), and (\ref{df5}) are also transformed in the same way: \begin{equation} \label{N2-D} \frac{d\alpha^2_{\rm s}}{d\hat{r}}=\frac{\mathcal{N}_2}{\mathcal{D}} \end{equation} \begin{equation} \frac{d\hat{L}}{d\hat{r}}=\frac{\mathcal{N}_3}{\mathcal{D}}, \end{equation} \begin{equation} \frac{d\hat{Q}}{d\hat{r}}=\frac{\mathcal{N}_4}{\mathcal{D}}, \end{equation} \begin{equation} \frac{d\tau}{d\hat{r}}=\frac{\mathcal{N}_5}{\mathcal{D}}, \end{equation} where \begin{eqnarray} \label{N2} \mathcal{N} _2&\equiv&f_2\nonumber\\ &\equiv &-(\Gamma-1)\frac{\delta^2}{y^2}\Biggl\{\alpha ^2_{\rm s}\Biggl[-\frac{1}{2\hat{r}^2}+\frac{2}{\hat{r}}y^2\beta^2\nonumber\\ &&+\frac{\delta^2y}{2\hat{r}^2g_{00}}\frac{(f+\beta^2 )\hat{L}-(1+f)\beta \hat{Q}}{f-\beta ^2}\Biggr]\nonumber\\ &&+\frac{\delta^2y}{2\hat{r}^2g_{00}}\frac{1}{\beta}(\Gamma \beta^2-\alpha^2_{\rm s})\epsilon \nonumber \\ &&\times \Biggl[\mathcal{B}g_{00} \hat{r}^2-\frac{(1+\beta^2)\hat{Q}-2\beta \hat{L}}{f-\beta^2}\Biggr]\Biggr\}, \end{eqnarray} \begin{eqnarray} \mathcal{N} _3&\equiv&\mathcal{D}f_3\nonumber\\ &\equiv &\frac{(\beta ^2-\alpha _{\rm s}^2)\dot{m}}{2\hat{r}^2g_{00}\beta}\Biggl\{\epsilon\Biggl[\mathcal{B} g_{00}\hat{r}^2-\frac{(1+\beta^2)\hat{Q}-2\beta \hat{L}}{f-\beta^2}\Biggr]\nonumber\\ &&-\beta\frac{(f+\beta^2)\hat{L}-(1+f)\beta \hat{Q}}{f-\beta^2}\Biggr\}, \end{eqnarray} \begin{eqnarray} \mathcal{N} _4&\equiv &\mathcal{D}f_4\nonumber\\ &\equiv &(\beta ^2-\alpha _{\rm s}^2)\Biggl\{\frac{1}{\hat{r}g_{00}}\left(1-\frac{3}{2\hat{r}}\right)\nonumber\\ &&\times\frac{(1-f)(1+\beta^2)\hat{Q}-(1-f)2\beta \hat{L}}{f-\beta^2}\nonumber\\ &&+\frac{\dot{m}}{2\hat{r}^2g_{00}\beta}\Biggl[\epsilon\left(\mathcal{B} g_{00}\hat{r}^2-\frac{(1+\beta^2)\hat{Q}-2\beta \hat{L}}{f-\beta^2}\right)\nonumber\\ &&-\frac{(f+\beta^2)\hat{L}-(1+f)\beta \hat{Q}}{f-\beta^2}\Biggr]\Biggr\}, \end{eqnarray} \begin{eqnarray} \mathcal{N} _5&\equiv &\mathcal{D}f_5\nonumber\\ &\equiv &-(\beta ^2-\alpha _{\rm s}^2)\frac{\dot{m}(1-\beta)}{2\hat{r}^2 g_{00}\beta}, \end{eqnarray} although in this paper we do not use equation (\ref{N2-D}), but use the Bernoulli
equation (\ref{bel}). Now, we define the eigenvalue matrix $\Lambda$ as follows: \begin{eqnarray}\label{matrix} \Lambda &\equiv& \left( \begin{array}{ccccc} \lambda _{11} &\lambda _{12} &\lambda _{13} &\lambda _{14} &\lambda _{15} \\ \lambda _{21} &\lambda _{22} &\lambda _{23} &\lambda _{24} &\lambda _{25} \\ \lambda _{31} &\lambda _{32} &\lambda _{33} &\lambda _{34} &\lambda _{35} \\ \lambda _{41} &\lambda _{42} &\lambda _{43} &\lambda _{44} &\lambda _{45} \\ \lambda _{51} &\lambda _{52} &\lambda _{53} &\lambda _{54} &\lambda _{55} \end{array} \right)\nonumber\\ &\equiv& \left( \begin{array}{ccccc} \left.\frac{\partial \mathcal{D}}{\partial \hat{r}}\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \beta}\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \hat{L}}\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \hat{Q}}\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \tau}\right|_{\rm c} \\ \left.\frac{\partial \mathcal{N}_1 }{\partial \hat{r}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_1 }{\partial \beta}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_1 }{\partial \hat{L}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_1 }{\partial \hat{Q}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_1 }{\partial \tau}\right|_{\rm c} \\ \left.\frac{\partial \mathcal{N}_3 }{\partial \hat{r}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_3 }{\partial \beta}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_3 }{\partial \hat{L}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_3 }{\partial \hat{Q}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_3 }{\partial \tau}\right|_{\rm c} \\ \left.\frac{\partial \mathcal{N}_4 }{\partial \hat{r}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_4 }{\partial \beta}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_4 }{\partial \hat{L}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_4 }{\partial \hat{Q}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_4 }{\partial \tau}\right|_{\rm c} \\ \left.\frac{\partial \mathcal{N}_5 }{\partial \hat{r}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_5 }{\partial \beta}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_5 }{\partial \hat{L}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_5 }{\partial \hat{Q}}\right|_{\rm c} & \left.\frac{\partial \mathcal{N}_5 }{\partial \tau}\right|_{\rm c} \end{array} \right).\nonumber\\ && \end{eqnarray} After simple but lengthy calculations, this matrix is found to be expressed as \begin{eqnarray} \Lambda &=&\left( \begin{array}{ccccc} \left.\frac{\partial \mathcal{D}}{\partial \hat{r}}\right|_{\rm c}& \left.\frac{\partial \mathcal{D}}{\partial \beta}\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \hat{L}}\right|_{\rm c}& 0 & 0 \\ \left.\frac{\partial f_1 }{\partial \hat{r}}\right|_{\rm c} & \left.\frac{\partial f_1 }{\partial \beta}\right|_{\rm c} & \left.\frac{\partial f_1 }{\partial \hat{L}}\right|_{\rm c} & \left.\frac{\partial f_1 }{\partial \hat{Q}}\right|_{\rm c} & \left.\frac{\partial f_1 }{\partial \tau}\right|_{\rm c} \\ \left.\frac{\partial \mathcal{D}}{\partial \hat{r}}\right|_{\rm c}\left.f_3\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \beta}\right|_{\rm c}\left.f_3\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \hat{L}}\right|_{\rm c}\left.f_3\right|_{\rm c} & 0 & 0 \\ \left.\frac{\partial \mathcal{D}}{\partial \hat{r}}\right|_{\rm c}\left.f_4\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \beta}\right|_{\rm c}\left.f_4\right|_{\rm c} & 
\left.\frac{\partial \mathcal{D}}{\partial \hat{L}}\right|_{\rm c}\left.f_4\right|_{\rm c} & 0 & 0 \\ \left.\frac{\partial \mathcal{D}}{\partial \hat{r}}\right|_{\rm c}\left.f_5\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \beta}\right|_{\rm c}\left.f_5\right|_{\rm c} & \left.\frac{\partial \mathcal{D}}{\partial \hat{L}}\right|_{\rm c}\left.f_5\right|_{\rm c} & 0 & 0 \end{array} \right)\nonumber\\ &=&\left( \begin{array}{ccccc} \lambda _{11} &\lambda _{12} &\lambda _{13} & 0 & 0\\ \lambda _{21} &\lambda _{22} &\lambda _{23} &\lambda _{24} &\lambda _{25} \\ \lambda _{11}\left.f_3\right|_{\rm c} &\lambda _{12}\left.f_3\right|_{\rm c} & \lambda _{13}\left.f_3\right|_{\rm c} & 0 & 0\\ \lambda _{11}\left.f_4\right|_{\rm c} &\lambda _{12}\left.f_4\right|_{\rm c} & \lambda _{13}\left.f_4\right|_{\rm c} & 0 & 0\\ \lambda _{11}\left.f_5\right|_{\rm c} &\lambda _{12}\left.f_5\right|_{\rm c} & \lambda _{13}\left.f_5\right|_{\rm c} & 0 & 0 \end{array} \right).\nonumber \end{eqnarray} \begin{equation}\label{matrix2} ~ \end{equation} Thus, fortunately, the eigenvalue equation reduces to a quadratic one, as shown below. Using the components of matrix $\Lambda$, equation (\ref{dbeta-c}) is further rewritten as \begin{equation} \left.\frac{d\beta}{d\hat{r}}\right|_{\rm c} =\frac{\lambda_{21}+\lambda_{22}\left.\frac{d\beta}{d\hat{r}}\right|_{\rm c} +\lambda_{23}\left.f_3\right|_{\rm c}+\lambda_{24}\left.f_4\right|_{\rm c} +\lambda_{25}\left.f_5\right|_{\rm c}} {\lambda_{11}+\lambda_{12}\left.\frac{d\beta}{d\hat{r}}\right|_{\rm c} +\lambda_{13}\left.f_3\right|_{\rm c}}, \end{equation} and finally solved as \begin{eqnarray} \label{dbeta-dr-c} \left.\frac{d\beta}{d\hat{r}}\right|_{\rm c}&=& \frac{1}{2\lambda_{12}}\Biggl[-\left(\lambda_{11}-\lambda_{22}+\lambda_{13}\left.f_3\right|_{\rm c}\right)\nonumber\\ &&\pm\sqrt{\left(\lambda_{11}-\lambda_{22}+\lambda_{13}\left.f_3\right|_{\rm c}\right)^2+4\lambda_{12}\xi}\Biggr], \end{eqnarray} where \begin{equation} \xi \equiv \lambda_{21}+\lambda_{23}\left.f_3\right|_{\rm c}+\lambda_{24}\left.f_4\right|_{\rm c}+\lambda_{25}\left.f_5\right|_{\rm c}. \end{equation} When equation (\ref{dbeta-dr-c}) has a real solution, it gives a linearly approximated solution in the vicinity of the critical point. Thirdly, we determine the types of critical points by the eigenvalues of the matrix $\Lambda$. The eigenvalue equation of $\Lambda$ is now \begin{displaymath} \lambda^2-(\lambda_{11}+\lambda_{22}+\lambda_{13}\left.f_{3}\right|_{\rm c})\lambda +(\lambda_{11}+\lambda_{13}\left.f_{3}\right|_{\rm c})\lambda_{22}-\xi\lambda_{12}=0. \end{displaymath} \begin{equation} \label{eigenvalue-eq} ~ \end{equation} Depending on the nature of its solutions, we can determine the type of a critical point. If equation (\ref{eigenvalue-eq}) has two real solutions with different signs, the critical point is of the {\it saddle type}; if it has two real solutions with the same sign, of the {\it nodal type}; and if it has two complex solutions, of the {\it spiral type}. Since the nodal type is always a deceleration solution and the spiral type is not a physical solution in the present case, the most suitable transonic solution is the one that passes through the critical point of the saddle type.
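For reference, this classification amounts to inspecting the two roots of the quadratic (\ref{eigenvalue-eq}). A short illustrative Python sketch follows; it assumes that the $\lambda_{ij}$ and $f_i|_{\rm c}$ have already been evaluated at the critical point from the expressions above, and it is not the code used to produce the figures.
\begin{verbatim}
import numpy as np

def classify_critical_point(lam11, lam12, lam13, lam21, lam22,
                            lam23, lam24, lam25, f3, f4, f5):
    # quadratic eigenvalue equation:
    # lambda^2 - (lam11 + lam22 + lam13*f3) lambda
    #          + (lam11 + lam13*f3) lam22 - xi*lam12 = 0,
    # with xi = lam21 + lam23*f3 + lam24*f4 + lam25*f5
    xi = lam21 + lam23 * f3 + lam24 * f4 + lam25 * f5
    b = -(lam11 + lam22 + lam13 * f3)
    c = (lam11 + lam13 * f3) * lam22 - xi * lam12
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return "spiral"      # complex conjugate pair
    lam_p = 0.5 * (-b + np.sqrt(disc))
    lam_m = 0.5 * (-b - np.sqrt(disc))
    if lam_p * lam_m < 0.0:
        return "saddle"      # real roots of opposite sign
    return "nodal"           # real roots of the same sign
\end{verbatim}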
\begin{figure} \includegraphics[width=80mm,clip]{figure1a.jpg} \includegraphics[width=80mm,clip]{figure1b.jpg} \includegraphics[width=80mm,clip]{figure1c.jpg} \caption{ Critical curves between $\hat{r}_{\rm c}$, $\beta_{\rm c}$, and $\hat{L}_{\rm c}$; (a) $\dot{m}=2$, $\dot{e}=3$, $m=10^8$, $\Gamma=4/3$, $\hat{Q}_{\rm c}=1$, $\tau_{\rm c}=1$, (b) $\dot{m}=10$, $\dot{e}=11.5$, $m=10^8$, $\Gamma=4/3$, $\hat{Q}_{\rm c}=1$, $\tau_{\rm c}=5$, (c) $\dot{m}=100$, $\dot{e}=101$, $m=10^8$, $\Gamma=4/3$, $\hat{Q}_{\rm c}=1$, $\tau_{\rm c}=5$. Black thick curves denote $\beta_{\rm c}$, while red thin ones $\hat{L}_{\rm c}$. Solid curves mean saddle type, dotted ones nodal type, and dashed ones spiral type. } \end{figure} \begin{figure} \includegraphics[width=81.2mm,clip]{figure2a.jpg} \includegraphics[width=81.2mm,clip]{figure2b.jpg} \caption{ Critical curves between $\hat{r}_{\rm c}$, $\beta_{\rm c}$, and $\hat{L}_{\rm c}$; (a) $\dot{m}=2$, $\dot{e}=3$, $m=10^8$, $\Gamma=5/3$, $\hat{Q}_{\rm c}=1$, $\tau_{\rm c}=1$, (b) $\dot{m}=10$, $\dot{e}=11.5$, $m=10^8$, $\Gamma=5/3$, $\hat{Q}_{\rm c}=1$, $\tau_{\rm c}=5$. Black thick curves denote $\beta_{\rm c}$, while red thin ones $\hat{L}_{\rm c}$. Solid curves mean saddle type, dotted ones nodal type, and dashed ones spiral type. } \end{figure} In Fig. 1, the loci of critical points and their types are shown for several typical parameters; Fig. 1a for $\dot{m}=2$, $\dot{e}=3$, $m=10^8$, $\Gamma=4/3$, $\hat{Q}_{\rm c}=1$, $\tau_{\rm c}=1$, Fig. 1b for $\dot{m}=10$, $\dot{e}=11.5$, $m=10^8$, $\Gamma=4/3$, $\hat{Q}_{\rm c}=1$, $\tau_{\rm c}=5$, and Fig. 1c for $\dot{m}=100$, $\dot{e}=101$, $m=10^8$, $\Gamma=4/3$, $\hat{Q}_{\rm c}=1$, $\tau_{\rm c}=5$. Black thick curves denote $\beta_{\rm c}$, while red thin ones $\hat{L}_{\rm c}$. Solid curves mean saddle type, dotted ones nodal type, and dashed ones spiral type. When the mass-loss rate $\dot{m}$ is large (Fig. 1c), the flow velocity at critical points becomes generally low, since the loaded mass is large. In such a case, the loci and types of critical points resemble those obtained in the nonrelativistic regime (Fukue 2014). That is, the saddle type appears in the inner branch, while the types in the outer region are nodal or spiral. In particular, if we drop the radiation drag term and adopt the same mass-loss rates as those of Fukue (2014), say $\dot{m}=10^5$ for nova winds or $\dot{m}=10^3$ for neutron star winds, the critical curves almost coincide with those of Fukue (2014) in the nonrelativistic regime. However, far from the center, the velocity at the critical points diverges in the nonrelativistic case, while in the general relativistic case it approaches the relativistic limit of the adiabatic sound speed, $\beta_{\rm c} = \alpha_{\rm s, c} \rightarrow 1/\sqrt{3}$. When the mass-loss rate becomes small (Figs. 1a and 1b), the flow velocity at critical points becomes high. In such cases, the general behavior is similar to that in the nonrelativistic regime, but the loci and types are rather different. That is, the loci of critical points move inward, and, furthermore, the nodal and spiral types disappear. In any case, the loci and types of critical points, and the velocity and luminosity there, depend on the various parameters (see Fig. 3 below). Before that, we briefly mention the case of $\Gamma=5/3$. As was shown in, e.g., Holzer and Axford (1970), in the nonrelativistic regime the critical points do not exist for $\Gamma=5/3$. Hence, in Fukue (2014) and in the present study we set $\Gamma=4/3$.
However, in contrast to the nonrelativistic regime, in the present relativistic regime the critical points do exist even for $\Gamma=5/3$, as is shown in Fig. 2, which presents the critical curves for the same parameters as in Figs. 1a and 1b, but with $\Gamma=5/3$. As is seen in Fig. 2, the loci and types of critical points are somewhat similar to those in Fig. 1, although they are quantitatively different. The existence of critical points for $\Gamma=5/3$ is due to relativistic effects, especially the relativistic gravity. In the nonrelativistic case (Holzer \& Axford 1970), the case of $\Gamma=5/3$ is marginal, and the critical points exist for $\Gamma<5/3$. Hence, if the Newtonian gravity is slightly modified, critical points can exist. \begin{figure} \includegraphics[width=83mm,clip]{figure3a-revision.jpg} \includegraphics[width=83mm,clip]{figure3b-revision.jpg} \includegraphics[width=83mm,clip]{figure3c-revision.jpg} \caption{ Types of critical points in the $\hat{r}_{\rm c}$-$\dot{e}$ parameter space. Blue crosses mean saddle type, green open circles nodal type, and yellow open triangles spiral type. No critical points appear in the unmarked regions. Parameters are (a) $\dot{m}=2$, $\hat{L}_{\rm c}=1$, $\tau_{\rm c}=1$, $m=10^8$, $\Gamma=4/3$, (b) $\dot{m}=10$, $\hat{L}_{\rm c}=2$, $\tau_{\rm c}=10$, $m=10^8$, $\Gamma=4/3$, (c) $\dot{m}=100$, $\hat{L}_{\rm c}=2$, $\tau_{\rm c}=20$, $m=10^8$, $\Gamma=4/3$. } \end{figure} We should also mention the limitation of the constant-$\Gamma$ assumption. In the vicinity of the center, the ratio of specific heats should approach $4/3$ due to the high temperature of the gas, and the velocity at the critical point should also approach the relativistic limit of the sound speed ($\alpha_{\rm s}=1/\sqrt{3}$). As is seen in Fig. 2, however, the velocity at the critical points near the center (and far from the center) exceeds this limit of $1/\sqrt{3}$, since we fixed $\Gamma=5/3$ in this calculation. Hence, in order to treat and solve this problem, we must use a variable $\Gamma(T)$, which depends on the gas temperature (e.g., Kato et al. 2008). We now summarize the types of critical points in the $\hat{r}_{\rm c}$-$\dot{e}$ parameter space. In Fig. 3, blue crosses mean saddle type, green open circles nodal type, and yellow open triangles spiral type. No critical points appear in the unmarked regions. Parameters are (a) $\dot{m}=2$, $\hat{L}_{\rm c}=1$, $\tau_{\rm c}=1$, $m=10^8$, $\Gamma=4/3$, (b) $\dot{m}=10$, $\hat{L}_{\rm c}=2$, $\tau_{\rm c}=10$, $m=10^8$, $\Gamma=4/3$, (c) $\dot{m}=100$, $\hat{L}_{\rm c}=2$, $\tau_{\rm c}=20$, $m=10^8$, $\Gamma=4/3$. As is seen in Fig. 3, the types of critical points roughly align from the inner region (saddle), via the middle one (nodal), to the outer one (spiral). The division pattern is similar, although $\dot{e}$ becomes large as $\dot{m}$ becomes large. For example, the saddle-type critical points appear relatively close to the central black hole. As a result, the transonic flows are accelerated in the vicinity of the central black hole, as shown in the next section. In addition, when the value of $\dot{e}$ is large, the flow is always supersonic, while it is always subsonic when the value of $\dot{e}$ is small. This is the reason why no critical points appear in the high and low $\dot{e}$ regions. Finally, we should note that the existence of a saddle-type critical point does not always mean that a physically reasonable solution can be obtained.
For example, if the radiation pressure and radiation drag are too large at large distances, the flow may be decelerated to subsonic speeds. Hence, it is important to choose appropriate parameters at the critical point. \section{Transonic solutions} For appropriate parameters, starting from the critical point $\hat{r}_{\rm c}$, we integrate equations (\ref{df1}), (\ref{df3}), (\ref{df4}), and (\ref{df5}), together with equation (\ref{alpha}) instead of (\ref{df2}), inward and outward using the fourth-order Runge-Kutta method to obtain transonic solutions. In this section we show several examples of transonic solutions. As appropriate parameters, we restrict several parameters as mentioned in section 2, bearing UFOs in mind, especially PG1211+143 (Tombesi et al. 2012). Namely, the mass of the central supermassive black hole is assumed to be $m=10^8$. The nondimensional mass-loss rate is set to be $\dot{m} \sim 1$--100. Furthermore, the typical outflow velocity of UFOs is $\beta_{\infty}\sim 0.1$--0.3, and the typical luminosity is assumed to be on the order of the Eddington luminosity, $\hat{L}_{\infty}\sim 1$. Under these restricted parameters, the Bernoulli equation (\ref{bel-re}) is estimated at infinity as \begin{eqnarray} \label{dote-infty} \dot{e} &\sim& \dot{m}\gamma_{\infty}+\hat{L}_{\infty} \nonumber \\ &\sim& \dot{m}\left( 1+\frac{1}{2}\beta^2_{\infty} \right)+\hat{L}_{\infty} \nonumber \\ &\sim& (1.005{\rm -}1.05)\dot{m}+1. \end{eqnarray} This relation restricts the range of the parameter $\dot{e}$. For example, when $\dot{m}=2$, $\dot{e}\sim 3.01$--3.1. \begin{figure} \includegraphics[width=80mm,clip]{figure4-revision.jpg} \caption{ Typical transonic solutions. Parameters are $\dot{m}=100$, $\dot{e}=102$, $m=10^8$, $\Gamma=4/3$, $\hat{r}_{\rm c}=20$, $\hat{L}_{\rm c}=1.118$, $Q_{\rm c}=2.363$, $\tau_{\rm c}=17.9$. A blue thick solid curve means $\beta$, a red solid one $\alpha_{\rm s}$, a green dashed one $\hat{L}/5$, a yellow chain-dotted one $\hat{Q}/5$, a black dotted one $\tau/20$, and a cyan thick dotted one $f(\beta,\tau)$. } \end{figure} Fig. 4 shows a typical example of transonic solutions. Parameters are $\dot{m}=100$, $\dot{e}=102$, $m=10^8$, $\Gamma=4/3$, $\hat{r}_{\rm c}=20$, $\hat{L}_{\rm c}=1.118$, $Q_{\rm c}=2.363$, $\tau_{\rm c}=17.9$. A blue thick solid curve means $\beta$, a red solid one $\alpha_{\rm s}$, a green dashed one $\hat{L}/5$, a yellow chain-dotted one $\hat{Q}/5$, a black dotted one $\tau/20$, and a cyan thick dotted one $f(\beta,\tau)$. As is seen in Fig. 4, the wind is mainly accelerated around the critical point and approaches a terminal speed, while the luminosity becomes almost constant at $\hat{L}\sim 0.457$. In this case the wind terminal speed is about $0.163~c$, which is suitable for UFOs, as assumed. It should be noted that in this case the optical depth vanishes at around $r \sim 500~r_{\rm S}$, which is the wind top, and the Eddington factor approaches unity there. Note also that the flow becomes optically thin at large distances $r$, since the gas density decreases. Although the variable Eddington factor can be applied in the optically thin regime, the diffusion approximation itself becomes inappropriate there. Hence, the present transonic solution is not appropriate at large $r$. In addition, in realistic black hole winds, such as UFOs, there may exist a luminous accretion disc surrounding the black hole. As a result, except for the inner optically thick flow, the outer optically thin part would be affected by the disc radiation.
Hence, the present spherically symmetric model would again need to be modified in the region of large $r$. The parameter dependence of transonic solutions is depicted in Figs. 5 and 6. \begin{figure} \includegraphics[width=80mm,clip]{figure5a-revision.jpg} \includegraphics[width=80mm,clip]{figure5b-revision.jpg} \includegraphics[width=80mm,clip]{figure5c-revision.jpg} \caption{ Mass-loss rate dependence of transonic solutions. The critical radius is fixed at $\hat{r}_{\rm c}=3$, while other parameters are (a) $\dot{m}=2$, $\dot{e}=3.06$, $m=10^8$, $\Gamma=4/3$, $\hat{L}_{\rm c}=1.28$, $\hat{Q}_{\rm c}=1.37$, $\tau_{\rm c}=1.6$, (b) $\dot{m}=8$, $\dot{e}=9.574$, $m=10^8$, $\Gamma=4/3$, $\hat{L}_{\rm c}=2.723$, $\hat{Q}_{\rm c}=4.681$, $\tau_{\rm c}=8.9$, (c) $\dot{m}=20$, $\dot{e}=22$, $m=10^8$, $\Gamma=4/3$, $\hat{L}_{\rm c}=5.37$, $\hat{Q}_{\rm c}=16.49$, $\tau_{\rm c}=32$. Blue thick solid curves mean $\beta$, red solid ones $\alpha_{\rm s}$, green dashed ones $\hat{L}/5$ for (a) and (b), and $\hat{L}/10$ for (c), yellow chain-dotted ones $\hat{Q}/5$ for (a) and (b), and $\hat{Q}/10$ for (c), black dotted ones $\tau/5$ for (a), $\tau/10$ for (b), and $\tau/20$ for (c), cyan thick dotted ones $f(\beta,\tau)$. } \end{figure} Fig. 5 shows the mass-loss rate dependence of transonic solutions. The critical radius is fixed at $\hat{r}_{\rm c}=3$, while other parameters are (a) $\dot{m}=2$, $\dot{e}=3.06$, $m=10^8$, $\Gamma=4/3$, $\hat{L}_{\rm c}=1.28$, $\hat{Q}_{\rm c}=1.37$, $\tau_{\rm c}=1.6$, (b) $\dot{m}=8$, $\dot{e}=9.574$, $m=10^8$, $\Gamma=4/3$, $\hat{L}_{\rm c}=2.723$, $\hat{Q}_{\rm c}=4.681$, $\tau_{\rm c}=8.9$, (c) $\dot{m}=20$, $\dot{e}=22$, $m=10^8$, $\Gamma=4/3$, $\hat{L}_{\rm c}=5.37$, $\hat{Q}_{\rm c}=16.49$, $\tau_{\rm c}=32$. Blue thick solid curves mean $\beta$, red solid ones $\alpha_{\rm s}$, green dashed ones $\hat{L}/5$ for (a) and (b), and $\hat{L}/10$ for (c), yellow chain-dotted ones $\hat{Q}/5$ for (a) and (b), and $\hat{Q}/10$ for (c), black dotted ones $\tau/5$ for (a), $\tau/10$ for (b), and $\tau/20$ for (c), cyan thick dotted ones $f(\beta,\tau)$. As is seen in Fig. 5, the transonic solutions are roughly similar to those in Fig. 4. In all cases, the gas is quickly accelerated in the vicinity of the black hole since $\hat{r}_{\rm c}=3$. In addition, $\hat{L}$ and $\hat{Q}$ take their maxima near the center and vanish at the horizon. This behaviour is easily understood from the definitions of $\hat{L}$ and $\hat{Q}$. As the mass-loss rate $\dot{m}$ increases, the optical depth also increases, as expected. As the radius increases, the optical depth decreases, while the Eddington factor increases. Near the horizon, on the other hand, the optical depth diverges, and therefore the Eddington factor approaches 1/3. Near the horizon, furthermore, $\alpha_{\rm s}$ reaches a relativistic limit of $\alpha_{\rm s}\rightarrow 1/\sqrt{3}$, regardless of the parameters. In other words, the gas is quite hot near the center, and the constant-$\Gamma$ assumption would be violated, as was stated. As a result, scattering is dominant in the vicinity of the center, and absorption would be almost ineffective. \begin{figure} \includegraphics[width=80mm,clip]{figure6a-revision.jpg} \includegraphics[width=80mm,clip]{figure6b-revision.jpg} \includegraphics[width=80mm,clip]{figure6c-revision.jpg} \caption{ Critical radius dependence of transonic solutions.
Parameters are fixed as $\dot{m}=2$, $\dot{e}=3.06$, $m=10^8$, $\Gamma=4/3$, while the critical radius is (a) $\hat{r}_{\rm c}=2$, (b) $\hat{r}_{\rm c}=3$, (c) $\hat{r}_{\rm c}=4$. Blue thick solid curves mean $\beta$, red solid ones $\alpha_{\rm s}$, green dashed ones $\hat{L}/5$, yellow chain-dotted ones $\hat{Q}/5$, black dotted ones $\tau$, cyan thick dotted ones $f(\beta,\tau)$. } \end{figure} Fig. 6 shows the critical radius dependence of transonic solutions. Parameters are fixed as $\dot{m}=2$, $\dot{e}=3.06$, $m=10^8$, $\Gamma=4/3$, while the critical radius is (a) $\hat{r}_{\rm c}=2$, (b) $\hat{r}_{\rm c}=3$, (c) $\hat{r}_{\rm c}=4$. Blue thick solid curves mean $\beta$, red solid ones $\alpha_{\rm s}$, green dashed ones $\hat{L}/5$, yellow chain-dotted ones $\hat{Q}/5$, black dotted ones $\tau$, cyan thick dotted ones $f(\beta,\tau)$. In general, from the Bernoulli equation (\ref{bel}), the terminal velocity $\beta_\infty$ of the winds increases as $\dot{e}$ increases, while it decreases as $\dot{m}$ increases. However, the luminosity $\hat{L}_\infty$ also affects the value of the terminal velocity. Indeed, the values of $\dot{m}=2$ and $\dot{e}=3.06$ are the same in Fig. 6, but there are differences in the velocity and luminosity at the wind top ($\hat{r}=60$); for (a) $\beta_{\rm top}\sim 0.148$, $\hat{L}_{\rm top}\sim 1.05$, for (b) $\beta_{\rm top}\sim 0.283$, $\hat{L}_{\rm top}\sim 0.976$, for (c) $\beta_{\rm top}\sim 0.139$, $\hat{L}_{\rm top}\sim 1.05$. That is, when $\hat{L}$ is large, $\beta$ becomes small. \section{Concluding remarks} We have examined the general relativistic radiatively-driven spherical wind under the nonequilibrium diffusion approximation with the help of the variable Eddington factor, $f(\tau,\beta)$, focusing our attention on the topological nature of critical points and on the application to UFOs. We found that there appear three types of critical points (loci): saddle, nodal, and spiral types. The nodal type always admits a deceleration solution, and the spiral type is unphysical. Hence, only the saddle type is reasonable for a transonic accelerated solution. Furthermore, in order for the terminal speed to be 0.1--0.3$~c$, the saddle-type critical points should be located relatively close to the black hole, as is easily expected. As a result, the gas is accelerated in the vicinity of the center and passes through the transonic point. In the nonrelativistic case, it is known that the critical point does not exist for $\Gamma=5/3$. In the present general relativistic case, however, the critical point is found to exist even in the $\Gamma=5/3$ case, due to relativistic effects, although there are non-physical solutions near and far from the center that exceed the relativistic limit of the sound speed. In the present study, we also bear in mind the applicability of the radiatively driven model to ultra-fast outflows (UFOs). As was stated, when we calculate transonic solutions, we use the parameters ($\dot{e},~\dot{m}$) favorable for UFOs (e.g., Tombesi et al. 2012). In addition, the luminosity at the wind top was also set to be about the Eddington luminosity. We thus obtained transonic solutions that are consistent with UFOs. We also found that near the center the gas is so hot that scattering is dominant and absorption is almost ineffective. This result is in good agreement with the previous observations, and suggests that the radiatively-driven model via continuum radiation is a plausible model for the acceleration of UFOs.
In the present paper, the calculations were made under the simple assumptions of a spherically symmetric one-dimensional flow and a constant mass-loss rate. Naturally, the gas cannot be supplied from the black hole itself, but should be supplied from the very vicinity of the horizon. One of the main sources of the gas is the accretion disc surrounding the black hole, especially, in the present case, a supercritical accretion disc. If this is the case, the black hole winds would be realized as funnel jets (e.g., Lynden-Bell 1978; Fukue 1982; Vyas et al. 2015; Vyas \& Chattopadhyay 2017, 2018). Another source of the gas could be pair creation via magnetic fields around rotating black holes. However, in the present case we implicitly assumed the black hole winds to consist of baryonic matter. We further assumed a constant $\Gamma$ in this study for simplicity. Since the flow temperature becomes quite high in the vicinity of the center, the value of the ratio of specific heats $\Gamma$ should vary there, and a temperature-dependent $\Gamma$ as well as the relativistic equation of state should be used for a precise calculation (cf. Kato et al. 2008). It may be useful to use the approximate formulae for $\Gamma(T)$ and the equation of state (e.g., Chattopadhyay \& Ryu 2009). As was stated, we expected the gas to be sufficiently hot near the center and assumed that the gas is fully ionized. As a result, we ignored the line-driven force. However, in the outer region, where the flow temperature decreases, the line-driven mechanism could work. If the line force dominates the continuum one, the flow could be further accelerated in the outer region. These considerations are left for future work. \section*{Acknowledgements} The author would like to thank an anonymous referee for valuable comments, which improved the original manuscript. This work has been supported in part by a Grant-in-Aid for Scientific Research (18K03701) of the Ministry of Education, Culture, Sports, Science and Technology. \section*{Data availability} No new data were generated or analysed in support of this research.
\section{Introduction} \label{sec:Introduction} Dealing with renewable energy sources requires internalizing their stochasticity into optimization and market-clearing tools used in power system operations. To this end, stochastic \cite{shiina2004stochastic} and robust \cite{jabr2013adjustable} optimization methods have been employed and demonstrated to improve power system cost efficiency and reliability. Consider a decision-making problem, such as an optimal power flow or unit commitment problem $\min_{\substack{x\in\set{X}(\omega)}} {C}(x,\omega)$, where $C(\cdot)$ and $x$ are the objective (cost) function and the vector of decision variables (e.g. generator outputs) constrained by feasible solution space $\set{X}$, and $\omega \in \Omega$ is a vector of uncertain parameters (e.g. net load or renewable injections) affecting both the objective function and the feasible region. In stochastic approaches, this problem is often solved by representing $\Omega$ as a set of discrete scenarios $\{\omega_s\}$ with probabilities $\pi_s$ and minimizing the expected cost across all scenarios, i.e., $\min_{\substack{x\in\set{X}(\omega_s)}} \sum_s \pi_s C(x,\omega_s)$. However, besides the high computational requirements that limit the number of scenarios that can be considered \cite{dupavcova2003scenario}, the accuracy of the scenario-based method depends strongly on how well the chosen scenarios capture both the range and the correlation structures of the uncertain parameters. Ideally, the scenarios used are historical realizations of the uncertain parameters, which ensures the most accurate representation. However, relying on historical samples raises two challenges. First, historical samples may be scarce. For example, if a power system operator wants to analyze the impact of a wind farm that is under construction, there are no historical data points available to use as scenarios. Thus, scenarios with credible statistical properties need to be \textit{synthesized}. Second, historical samples may not include events that are relevant in the future, i.e., considering the most adversarial historical event to ensure system reliability may not be sufficient if a new and worse event materializes. The first challenge can be addressed through novel data-driven approaches that are powerful in catering to specific requirements based on the underlying patterns learned from historical data. Specifically, a machine learning framework named \textit{generative adversarial network} (GAN) proposed by Goodfellow \textit{et al.} \cite{goodfellow2014generative} has been shown to efficiently synthesize data samples that fit a given empirical distribution with very high credibility. Further, a modification of GANs introduced by Mirza and Osindero \cite{mirza2014conditional}, called conditional GAN (cGAN), makes it possible to \textit{condition} the generated data sets on predefined labels, thus allowing the generated synthetic scenarios to be tuned to the specific needs of their applications. GANs and cGANs have been successfully applied to power system problems. For example, Chen \textit{et al.} \cite{chen2018model} used cGAN to generate scenarios for wind and solar injections.
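For concreteness, the scenario-based formulation $\min_{\substack{x\in\set{X}(\omega_s)}} \sum_s \pi_s C(x,\omega_s)$ introduced above can be illustrated with a toy single-bus dispatch problem. The sketch below is purely illustrative (all numbers are made up, and it is not part of the proposed method): a here-and-now schedule $p$ is co-optimized with a per-scenario recourse $r_s$ that balances the uncertain net load.
\begin{verbatim}
import cvxpy as cp
import numpy as np

c_gen, c_rec = 20.0, 100.0               # $/MWh for scheduled and recourse energy
p_max, r_max = 80.0, 30.0                # MW limits
net_load = np.array([60.0, 75.0, 90.0])  # scenarios omega_s
pi = np.array([0.3, 0.4, 0.3])           # probabilities pi_s

p = cp.Variable(nonneg=True)             # first-stage dispatch
r = cp.Variable(len(net_load))           # recourse in each scenario

expected_cost = c_gen * p + pi @ (c_rec * cp.abs(r))
constraints = [p <= p_max,
               cp.abs(r) <= r_max,
               p + r == net_load]        # balance in every scenario
cp.Problem(cp.Minimize(expected_cost), constraints).solve()
print(p.value, r.value)
\end{verbatim}
In the setting considered in this paper, the fixed net-load scenarios of such a toy model would be replaced by (c)GAN-synthesized samples.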
Further power system applications of (c)GANs include Wang \textit{et al.} \cite{wang2020modeling}, who applied cGANs to generate load scenarios, Zhang \textit{et al.} \cite{zhang2020typical}, who studied wind power injection scenarios with a focus on the spatio-temporal correlation between multiple wind farms in the system, and Wang \textit{et al.} \cite{wang2019generative}, who used GANs to improve short-term forecasting of renewable injections. While the approaches in \cite{chen2018model,wang2020modeling,zhang2020typical,wang2019generative} can successfully synthesize the statistical properties of the historical data, they do \textit{not} consider the impact of a (c)GAN-generated data sample on the decision-making problem $\min_{\substack{x\in\set{X}(\omega)}} {C}(x,\omega)$ at hand; thus, they do not address the second challenge of scenario generation. Traditionally, this challenge has been addressed by generating robust scenarios that may not be statistically credible but constitute a worst-case outcome for the decision-making task, i.e., $ \min_{\substack{x\in\set{X}(\omega)}} \sup_{\substack{\omega \in \set{U}}} {C}(x,\omega)$, where $\set{U}$ is a predefined uncertainty set. Such robust decisions are usually overly conservative and, therefore, costly. Additionally, an analytical and/or computationally tractable solution to the inner maximization problem may be difficult to obtain for some $\set{U}$ and, thus, approximations are often required, which further add to the solution conservatism. To address these two scenario generation challenges simultaneously, we propose a modified cGAN that can generate scenarios that are adversarial for the decision-making task but, at the same time, remain statistically credible. Unlike the previous work in \cite{chen2018model,wang2020modeling,zhang2020typical,wang2019generative}, we internalize the decision-making task, in our case a DC optimal power flow (DC-OPF) problem, into the cGAN training phase, thus rendering it \textit{operation adversarial} (OA-cGAN). We use the proposed OA-cGAN to generate forecast errors of the real-time net load (i.e., demand minus renewable injections) considered during a day-ahead planning stage. We derive the necessary training method and demonstrate the ability of the proposed OA-cGAN to generate statistically credible forecast errors that, at the same time, improve system reliability. Our implementation and experiments use real-world data from the NYISO system. \section{Preliminaries} \label{sec:Basic_model} \subsection{Conditional generative adversarial networks (cGANs)} \label{subsec:cGANs} \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{figures/cGAN.png} \caption{A typical structure of the cGAN model.} \label{fig:cGAN} \end{figure} Fig.~\ref{fig:cGAN} illustrates a basic cGAN model with one generator ($G$) and one discriminator ($D$), both of which are non-linear mapping functions, such as neural networks. Generator $G$ and discriminator $D$ are defined by sets of parameters $\theta_g$ and $\theta_d$, respectively, which must be trained. Specifically, $G$ and $D$ are trained alternately in a zero-sum game. The training objective is to tune $\theta_g$ such that $G$ transforms data samples drawn from some distribution $\mathbb{P}_z$ into new data points that follow a target data distribution $\mathbb{P}_{data}$.
The training objective of $D$, on the other hand, is to tune $\theta_d$ such that $D$ can distinguish real data samples drawn from $\mathbb{P}_{data}$ and synthetic data points generated by $G$ with high accuracy. The additional input is label $y$, which conditions the training of $G$ and $D$ to specific data features, hence the naming convention ``conditional'' GAN. The \textit{adversarial} competition between the objectives of $G$ and $D$ will push each model to improve its performance until a Nash equilibrium is reached, i.e., the samples produced by $G$ cannot be distinguished from the original data by $D$. The full training objective function of cGANs can then be formalized as: \begin{align} \min_{\substack{\theta_g}}\max_{\substack{\theta_d}}\ &\mathbb{E}_{x \sim \mathbb{P}_{data}}[\log \left( D(x,{\theta_d}|y) \right)] \nonumber \\ &+ \mathbb{E}_{z \sim \mathbb{P}_z}[\log \left(1 - D(G(z,{\theta_g}|y),{\theta_d}|y)\right)], \label{cGAN} \end{align} where $x\sim{\mathbb{P}_{data}}$ is data from the real distribution, $z\sim{\mathbb{P}_z}$ is randomly generated data (e.g. from a Gaussian distribution), $G(z,{\theta_g}|y)$ is the output of $G$, i.e., the generated data based on the noise input (denoted as $z$) and label $y$, and $D(x,{\theta_d}|y)$ is the output of $D$, i.e., the probability that $x$ is from real data distribution $\mathbb{P}_{data}$ conditioned by label $y$ ($D(x,{\theta_d}|y) \in [0,1]$). Operators $\mathbb{E}_{x \sim \mathbb{P}_{data}}$ and $\mathbb{E}_{z \sim \mathbb{P}_z}$ compute the expectation with respect to distributions $\mathbb{P}_{data}$ and $\mathbb{P}_z$, respectively. Training objective \eqref{cGAN} is achieved by alternately tuning $\theta_g$ such that $G$ maximizes the probability that the currently trained $D$ identifies its synthetic data $G(z,{\theta_g}|y)$ as real: \begin{align} \max_{\substack{\theta_g}}\ \mathbb{E}_{z \sim \mathbb{P}_z}\left[ {\log \left( {D(G(z,{\theta_g}|y), {\theta_d}|y)} \right)} \right], \label{GAN_G} \end{align}% and tuning $\theta_d$ such that $D$ maximizes its judgement accuracy, i.e., achieving high values $D(x,{\theta_d}|y)$ for real data and low values $D(G(z,{\theta_g}|y),{\theta_d}|y)$ for synthetic data: \begin{align} \max_{\substack{\theta_d}}\ &\mathbb{E}_{x \sim \mathbb{P}_{data}}\left[ {\log \left(D(x,{\theta_d}|y) \right)} \right] \nonumber \\ &+\mathbb{E}_{z \sim \mathbb{P}_z}\left[ {\log \left( {1-D(G(z,{\theta_g}|y), {\theta_d}|y)} \right)} \right]. \label{GAN_D} \end{align}% \subsection{Power system operation model} \label{subsec:DCOPF} We consider a standard DC optimal power flow (DC-OPF) problem to model power system operations. The DC-OPF minimizes the operating cost of supplying the system net load (i.e., load minus renewable injections) with respect to physical limits of generators and transmission lines: \allowdisplaybreaks \begin{subequations} \begin{align} &\min_{\substack{\{P_{g,t}\}_{g\in \set{G}, t\in \set{T}},\\ \{\theta_{i,t}\}_{i\in \set{I}, t\in \set{T}}}}\ C = \sum\limits_{t\in \set{T}} \sum\limits_{g\in \set{G}} {(c_{0g} + c_{1g}P_{g,t} + c_{2g}P_{g,t}^2)} \label{DCOPF_objective}\\ & \text{s.t. 
} \nonumber \\ &(\lambda_{i,t}):\sum\nolimits_{g\in \set{G}_i}P_{g,t} - \sum\nolimits_{j \in \mathcal{N}_i } B_{i,j} (\theta_{i,t} - \theta_{j,t}) = d_{i,t} \nonumber \\ & \hspace{1.1cm} \forall{i}\in\set{I},\ \forall{t}\in\set{T} \label{DCOPF_power_balance}\\ &(\rho_{g,t}^{-},\rho_{g,t}^{+}): 0 \le P_{g,t} \le P_{g}^{\max} \quad \forall{g}\in\set{G} \label{DCOPF_Pmax}\\ & (\beta_{i,j,t}^{-},\beta_{i,j,t}^{+}): -S_{i,j} \le B_{i,j} (\theta_{i,t}-\theta_{j,t}) \le S_{i,j} \nonumber \\ & \hspace{1cm}\forall{i}\in\set{I},\ \forall{j}\in \mathcal{N}_i,\ \forall{t}\in\set{T} \label{DCOPF_power_flow}\\ & (\eta_t): \theta_{ref,t} = 0 \quad \forall{t}\in\set{T}, \label{DCOPF_ref} \end{align}% \label{DCOPF}% \end{subequations}% \allowdisplaybreaks[0]% where $\set{I}$ is the set of nodes in the transmission network indexed by $i$, $\set{T}$ is the set of time steps in the planning horizon indexed by $t$, $d_{i,t}$ is the net load at node $i$ and time $t$, $P_{g,t}$ is the (active) power output of generator $g$ at time $t$, $\set{G}_i$ is the set of generators connected to node $i$, $\mathcal{N}_i $ is the set of nodes adjacent to $i$, $\theta_{i,t}$ is the voltage angle at node $i$ at time $t$, $B_{i,j}$ is the susceptance of the line between node $i$ and $j$, and $S_{i,j}$ is the thermal capacity of the line between node $i$ and $j$. Objective~\eqref{DCOPF_objective} minimizes system cost using a quadratic cost model of each generator given by parameters $c_{0g}$, $c_{1g}$, $c_{2g}$. Eq.~\cref{DCOPF_power_balance} enforces the power balance at each node. Eqs.~\cref{DCOPF_Pmax} and \cref{DCOPF_power_flow} limit the output of generators and the power flow on each line to their technical limits. Eq.~\cref{DCOPF_ref} sets the voltage angle at the reference node ($i=ref$) to 0. Greek letters in parentheses in \cref{DCOPF_power_balance,DCOPF_Pmax,DCOPF_power_flow,DCOPF_ref} denote dual multipliers of the respective constraints. \section{Operation-Adversarial cGAN model} \label{sec:Proposed_model} \subsection{Training objective} \label{subsec:Objective} \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{figures/operation_adversarial_cGAN.png} \caption{The proposed structure of the OA-cGAN model.} \label{fig:operation_adversarial_cGAN} \end{figure} The structure of the proposed operation-adversarial cGAN model (OA-cGAN) is shown in Fig.~\ref{fig:operation_adversarial_cGAN}. Compared with the traditional cGAN in Fig.~\ref{fig:cGAN}, another player (i.e., the DC-OPF model) joins the game between $G$ and $D$ and becomes part of the training process of $G$. Note that the training process of $D$ remains the same as in the traditional cGAN model. Thus, the objective of $D$ in the OA-cGAN model is still maximizing its judgment accuracy as shown in \cref{GAN_D}. As a first step, we reformulate the training objectives of $G$ and $D$ as equivalent minimization problems to achieve consistency with the cost-minimizing DC-OPF formulation: \begin{align} \min_{\substack{\theta_d}}\ loss_D &= \mathbb{E}_{x \sim \mathbb{P}_{data}}\left[ {\log \left(1-D(x,{\theta_d}|y) \right)} \right] \nonumber \\ &+\mathbb{E}_{z \sim \mathbb{P}_z}\left[ {\log \left( {D(G(z,{\theta_g}|y), {\theta_d}|y)} \right)} \right] \label{model_D}.
\end{align}% Next, the training objective of generator $G$ receives an additional component to capture the operational model: \begin{align} \min_{\substack{\theta_g}}\ loss_G =& k \overbrace{ \mathbb{E}_{z \sim \mathbb{P}_z}\left[ {\log \left( {1-D(G(z,{\theta_g}|y), {\theta_d}|y)} \right)} \right]}^{loss_{G1}} \nonumber \\ & +(1-k) \underbrace{ \mathbb{E}_{z \sim \mathbb{P}_z}\left[ -{C^{*} \left( G(z,{\theta_g}|y) \right)} \right]}_{loss_{G2}}. \label{objective_G} \end{align}% The first part of \cref{objective_G} (denoted as $loss_{G1}$) maximizes the probability that the generated data is recognized as real data by $D$, i.e., playing against $D$ as in \eqref{GAN_G}, and the second part (denoted as $loss_{G2}$) maximizes the expected operating cost based on the generated net load, i.e., playing against the DC-OPF. The two objectives are weighted against each other using factor $k\in[0,1]$. When $k=1$, the OA-cGAN becomes a traditional cGAN. Term $C^{*} \left( G(z,{\theta_g}|y) \right)$ in \cref{objective_G} is interpreted as the scaled optimal operating cost based on generated load $G(z,{\theta_g}|y)$, i.e.: \begin{align} C^*= \sum\limits_{t\in \set{T}} \sum\limits_{g\in \set{G}} \frac {(c_{0g} + c_{1g}P_{g,t}^* + c_{2g}{P_{g,t}^*}^2)-\delta_{shift}} {\delta_{scale}} \label{C_star} \end{align} where $P_{g,t}^*$ is the optimal power output of generator $g$ at time $t$ obtained by solving the DC-OPF based on generated load $G(z,{\theta_g}|y)$. Since $loss_{G1}$ is computed from a \textit{probability} and therefore takes values on a limited scale, we use constants $\delta_{shift}$ and $\delta_{scale}$ to project the operating cost onto a comparable interval. This allows for trading off the two parts of the objective using weight $k$. Fig.~\ref{fig:three_players} illustrates the relationship between the three objectives ($\min \ loss_{D}$, $ \min \ loss_{G1}$, and $\min \ loss_{G2}$) of the OA-cGAN. First, objectives $loss_{G1}$ and $loss_{D}$ capture the competition between the credibility of scenario generation from $G$ and the detection accuracy of $D$. Second, objectives $loss_{G1}$ and $loss_{G2}$ determine the success of $G$ in working against either $D$ or the DC-OPF, respectively. Depending on weight $k$, $G$ prioritizes the former or latter objective. Specifically, by focusing on minimizing $loss_{G1}$, the generated data becomes more statistically credible, while by minimizing $loss_{G2}$, the generated data becomes operation-adversarial. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{figures/three_players.png} \caption{Relationships between three objectives in OA-cGAN.} \label{fig:three_players} \end{figure} \subsection{Training data preparation} \label{subsec:Training} In this paper, we describe the training process for operation-adversarial scenarios of the \textit{net load forecast errors}, i.e., the difference between the forecast net load during a day-ahead (DA) planning stage and the realized net load in real-time (RT). We note that the proposed OA-cGAN can be adapted to other scenario parameters, e.g., renewable injections. It is also assumed that the planning horizon is one day with a resolution of 1 hour, i.e., $\mathcal{T}=\{1,...,24\}$.
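As a purely illustrative aside, the weighting in \cref{objective_G} and the scaling in \cref{C_star} can be sketched in a few lines of Python; the array names, and the application of the shift/scale to the total DC-OPF cost rather than term by term, are assumptions made for this sketch and do not reflect the reference implementation.
\begin{verbatim}
# Illustrative sketch of the weighted generator loss; not the reference code.
import numpy as np

def scaled_cost(C_opt, delta_shift, delta_scale):
    # shift and scale the optimal DC-OPF cost so that its magnitude is
    # comparable to the log-probability term loss_G1
    return (C_opt - delta_shift) / delta_scale

def generator_loss(d_on_fake, C_opt, k, delta_shift, delta_scale):
    # d_on_fake: discriminator outputs D(G(z|y)|y) for the mini-batch
    # C_opt:     optimal DC-OPF costs obtained for the same generated samples
    loss_G1 = np.mean(np.log(1.0 - d_on_fake))                        # play against D
    loss_G2 = np.mean(-scaled_cost(C_opt, delta_shift, delta_scale))  # play against the DC-OPF
    return k * loss_G1 + (1.0 - k) * loss_G2
\end{verbatim}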
The training process requires a suitably prepared training data set, which we create as follows: \noindent \underline{\textit{Step 1}}: Obtain historical data for DA and RT net loads for each sample day $s$ in the training and testing data sets defined as $\{DA^{real}_s, RT^{real}_s\}_{\ s \in \set{S}_{train} \cup \set{S}_{test}}$, where $DA^{real}_{i,t,s}$ and $RT^{real}_{i,t,s}$ are the DA load forecast and the actual RT load for each node $i\in\set{I}$ and $t\in\set{T}$ in sample day $s$. Each sample $s$ receives label $y_s$, which denotes attributes of interest such as the day of the week, month, season, and weather conditions on that day. \noindent \underline{\textit{Step 2}}: For each $s$ and $i$, calculate the minimum, average, and maximum DA load, denoted as $DA^{\min}_{i,s}$, $DA^{ave}_{i,s}$, and $DA^{\max}_{i,s}$. \noindent \underline{\textit{Step 3}}: For each $s$, $i$ and $t$, calculate a normalized DA and RT load ($DA^{norm}_{i,t,s}$ and $RT^{norm}_{i,t,s}$) as: \begin{align} DA^{norm}_{i,t,s} &= \left({DA^{real}_{i,t,s} {\rm{-}} DA^{ave}_{i,s}}\right)/ \left({DA^{\max}_{i,s} {\rm{-}} DA^{\min}_{i,s}}\right) \label{normalize_DA}\\ RT^{norm}_{i,t,s} & = \left({RT^{real}_{i,t,s} {\rm{-}} DA^{ave}_{i,s}}\right) /\left({DA^{\max}_{i,s} {\rm{-}} DA^{\min}_{i,s}}\right). \label{normalize_RT} \end{align} \noindent \underline{\textit{Step 4}}: For each $s$, $i$ and $t$, calculate the normalized net load forecast error $\varepsilon^{norm}_{i,t,s}$ as \begin{align} \varepsilon^{norm}_{i,t,s} = DA^{norm}_{i,t,s}-RT^{norm}_{i,t,s}. \label{normalize_error} \end{align} The normalization in \textit{Step 3} ensures that $\varepsilon^{norm}$ exhibits a statistically meaningful pattern. Using $\varepsilon^{norm}$ as training data, the OA-cGAN will generate data ($\varepsilon^{gen}$) that follows the statistical characteristics of $\varepsilon^{norm}$, while maximizing the operating cost in the DC-OPF model. Note that the synthetic errors $\varepsilon^{gen}$ have to be transformed into an RT load value (``denormalized'') as: \begin{align} d_{i,t,s} = \varepsilon^{gen}_{i,t,s} \left(DA^{\max}_{i,s} - DA^{\min}_{i,s} \right)+ DA^{real}_{i,t,s}, \label{denormalize} \end{align} where $d_{i,t,s}$ is the generated RT net load based on the generated forecast error and the real DA net load forecast.
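For a single node and sample day, the data preparation above can be summarized by the following Python sketch; the per-day DA and RT profiles are assumed to be length-24 arrays, and all names are hypothetical.
\begin{verbatim}
# Illustrative sketch of Steps 2-4 and the denormalization step above.
import numpy as np

def normalized_error(DA, RT):
    # Step 2: per-day statistics of the DA net load forecast
    DA_min, DA_ave, DA_max = DA.min(), DA.mean(), DA.max()
    span = DA_max - DA_min
    # Step 3: normalized DA and RT net loads
    DA_norm = (DA - DA_ave) / span
    RT_norm = (RT - DA_ave) / span
    # Step 4: normalized net load forecast error
    return DA_norm - RT_norm

def denormalize(eps_gen, DA):
    # map a generated (normalized) error back to an RT net load profile
    return eps_gen * (DA.max() - DA.min()) + DA
\end{verbatim}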
\subsection{Training process} \label{subsec:Gradient} \begin{algorithm}[t] \small \SetAlgoLined \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{$\{\varepsilon^{norm}_s\}_{s \in \set{S}_{train}}$\\ } \Output{$\{\varepsilon^{gen}_s\}_{s \in \set{S}_{test}}$, $\theta_g$, $\theta_d$\\ } \Begin{ Initialize $\theta_g$ and $\theta_d$; $epoch \leftarrow 0$\\ \While{$epoch < epoch^{\max}$ }{ \For{$\set{B} \in \set{S}_{train}$} {Input $\{\varepsilon^{norm}_s\}_{s \in \set{B}}$ to OA-cGAN;\\ Obtain output of $G$ as $\{\varepsilon^{gen}_s\}_{s \in \set{B}}$;\\ Calculate $\{loss_{D,s}\}_{s \in \set{B}}$ with $\{\varepsilon^{gen}_s\}_{s \in \set{B}}$;\\ Update $\theta_d$ with $\{loss_{D,s}\}_{s \in \set{B}}$ using SGD; \\ Calculate $\{loss_{G1,s}\}_{s \in \set{B}}$ with $\{\varepsilon^{gen}_s\}_{s \in \set{B}}$;\\ Update $\theta_g$ with $\{loss_{G1,s}\}_{s \in \set{B}}$ using SGD; \\ \For{$s \in \set{B}$} {Run DC-OPF \cref{DCOPF} based on $\varepsilon^{gen}_s$;\\ Obtain $C^*_s$ based on \cref{C_star};\\ Calculate $loss_{G2,s}$ based on \cref{objective_G};\\ } Update $\theta_g$ with $\{loss_{G2,s}\}_{s \in \set{B}}$ using SGD based on the gradient in \cref{gradient_final}; \\ } $epoch \leftarrow epoch+1$ } Obtain $\{\varepsilon^{gen}_s\}_{s \in \set{S}_{test}}$ based on fully-trained $G$\\ \KwRet{$\{\varepsilon^{gen}_s\}_{s \in \set{S}_{test}}$, $\theta_g$, $\theta_d$} } \caption{Training process of OA-cGAN} \label{alg:OA_cGAN} \end{algorithm} Standard cGANs are trained using gradient-based methods. In particular, stochastic gradient descent (SGD), which uses an estimated gradient calculated from a randomly selected subset of the training data (a so-called ``mini-batch''), is the most common choice because it facilitates training over very large training data sets and exhibits superior convergence properties \cite{pmlr-v97-qian19b}. Hence, the OA-cGAN can also rely on SGD to iteratively update (``train'') parameters $\theta_g$ and $\theta_d$. Nevertheless, the additional term $loss_{G2}$ in \cref{objective_G}, which depends on the solution of another optimization problem (i.e., the DC-OPF), complicates the direct use of SGD in the OA-cGAN. Thus, a suitable training method for the OA-cGAN needs to be designed. Since the training process of $D$ is the same as in traditional cGANs, it can be achieved by off-the-shelf functions that are readily implemented in many machine learning packages (e.g., TensorFlow, PyTorch, or Flux). Therefore, this section focuses on the update method for the parameters of $G$ ($\theta_g$). Since the two components of $loss_G$ are linearly additive, the gradient of $loss_G$ can be calculated by combining the gradients of $loss_{G1}$ and $loss_{G2}$. Thus, the resulting update rule for $\theta_g$ is: \begin{align} \theta_g^{r+1} & {\rm{=}} \theta_g^r {\rm{-}} \frac {\alpha}{N_b} \sum\limits_{s \in \set{B}} \left( k \frac{{\partial loss_{G1,s}}}{{\partial \theta_g}} + (1{\rm{-}}k) \frac{{\partial loss_{G2,s}}}{{\partial \theta_g}} \right), \label{SGD} \end{align} where $r$ denotes the training iteration, $\alpha$ is the learning rate, $\set{B}$ is the mini-batch of data from the training data set $\set{S}_{train}$, $N_b$ is the number of samples in mini-batch $\set{B}$, and $loss_{G1,s}$ and $loss_{G2,s}$ are the losses of $G$ associated with sample $s$ in $\set{B}$. Training progress in SGD is measured in epochs, where one epoch means one complete pass of the training data set through the parameter update process.
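A minimal Python sketch of the combined update rule in \cref{SGD}, with per-sample gradients assumed to be flattened arrays of equal length, could look as follows; it is illustrative only and does not reflect the reference implementation.
\begin{verbatim}
# Illustrative sketch of the mini-batch update of theta_g; arrays are hypothetical.
def update_theta_g(theta_g, grads_G1, grads_G2, k, alpha):
    # grads_G1[s], grads_G2[s]: gradients of loss_{G1,s} and loss_{G2,s}
    N_b = len(grads_G1)
    step = sum(k * g1 + (1.0 - k) * g2 for g1, g2 in zip(grads_G1, grads_G2))
    return theta_g - (alpha / N_b) * step
\end{verbatim}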
The gradient of $loss_{G1,s}$ is the same as the loss gradient of $G$ in traditional cGANs and, as for $D$, can be inferred using off-the-shelf implementations. Since $loss_{G2}$ does not explicitly contain $\theta_g$, to calculate ${\partial loss_{G2,s}}/{\partial {\theta_g}}$ we need to express the relationship between $loss_{G2}$ and $\theta_g$ through intermediate variables. Analytically, for each sample $s$, $\theta_g$ determines the output of $G$ ($\{\varepsilon^{gen}_{i,t,s}\}_{i \in \set{I}, t \in \set{T}}$), which affects the generated net load ($\{d_{i,t,s}\}_{i \in \set{I}, t \in \set{T}}$). In turn, $\{d_{i,t,s}\}_{i \in \set{I}, t \in \set{T}}$ affects the optimal output of generators ($\{P_{g,t,s}^*\}_{g \in \set{G}, t \in \set{T}}$), which directly affects the optimal operating cost $C^*_s$ and thus $loss_{G2,s}$. Therefore, we can derive the gradient of $loss_{G2,s}$ using the chain rule as: \begin{align} \frac{{\partial loss_{G2,s}}}{{\partial {\theta_g}}} = - \sum\limits_{t \in \set{T}} \sum\limits_{g \in \set{G}} {\frac{{\partial {C_s^*}}} {\partial P_{g,t,s}^*} \sum\limits_{i \in \set{I}} \frac{\partial P_{g,t,s}^*}{\partial d_{i,t,s}} \frac{\partial d_{i,t,s}}{\partial \varepsilon^{gen}_{i,t,s}} \frac{\partial \varepsilon^{gen}_{i,t,s}}{\partial \theta_g}}. \label{gradient_loss2} \end{align} In the following, we derive each term in \cref{gradient_loss2}. Term ${\partial {C_s^*}}/{\partial P_{g,t,s}^*}$ captures the marginal change of cost when changing the output of generator $g$ in the optimal solution of the DC-OPF at time $t$ in sample $s$. For generators with binding constraints \cref{DCOPF_Pmax} we have ${{\partial {C_s^*}}} /{\partial P_{g,t,s}^*} = 0$. For generators with non-binding constraints \cref{DCOPF_Pmax} (i.e., marginal generators), ${{\partial {C_s^*}}} /{\partial P_{g,t,s}^*} = \lambda_{i,t,s}/ {\delta_{scale}}$, where $\lambda_{i,t,s}$ is the locational marginal price at node $i$ and time $t$ in sample day $s$, i.e., the dual multiplier of \cref{DCOPF_power_balance}, and ${\delta_{scale}}$ is the constant introduced in \cref{C_star}. Note that $\lambda_{i,t,s}$ can be obtained directly from most numerical solvers after solving \cref{DCOPF}. Therefore, we obtain: \begin{align} \sum\limits_{g \in \set{G}} \frac{{\partial {C_s^*}}} {\partial P_{g,t,s}^*} =\sum\limits_{i \in \set{I}} \sum\limits_{g \in \set{G}_i} \frac{{\partial {C_s^*}}} {\partial P_{g,t,s}^*} = \sum\limits_{i \in \set{I}} \frac{\lambda_{i,t,s}}{{\delta_{scale}}}. \label{gradient_c_g} \end{align} Next, as per \cref{DCOPF_power_balance}, it follows that: \begin{align} \frac {\partial P_{g,t,s}^*}{\partial d_{i,t,s}} = -1. \label{gradient_g_d} \end{align} Similarly, according to \cref{denormalize}, we obtain: \begin{align} \frac{\partial d_{i,t,s}}{\partial \varepsilon^{gen}_{i,t,s}} = DA^{\max}_{i,s} - DA^{\min}_{i,s}. \label{gradient_d_e} \end{align} Finally, since $\varepsilon^{gen}_{i,t,s}$ is the output of $G$, ${\partial \varepsilon^{gen}_{i,t,s}}/{\partial \theta_g}$ can, again, be calculated by off-the-shelf implementations. As a result, we can recast \cref{gradient_loss2} for SGD training as: \begin{align} \frac{{\partial loss_{G2,s}}}{{\partial {\theta_g}}} = \sum\limits_{t \in \set{T}} \sum\limits_{i \in \set{I}} \frac{\lambda_{i,t,s} (DA^{\max}_{i,s} - DA^{\min}_{i,s})}{{\delta_{scale}}} \frac{\partial \varepsilon^{gen}_{i,t,s}}{\partial \theta_g}. \label{gradient_final} \end{align} Algorithm~\ref{alg:OA_cGAN} summarizes the OA-cGAN training process.
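The role of the dual multipliers in this gradient can be illustrated with a small, self-contained Python sketch using the cvxpy modeling package: a toy single-period, two-node DC-OPF (with made-up data and the quadratic cost terms omitted) is solved, the power-balance duals are read out, and the contribution of $loss_{G2}$ is assembled as in \cref{gradient_final}. This is a sketch under stated assumptions, not the authors' JuMP/Gurobi implementation.
\begin{verbatim}
# Illustrative toy example only; all numerical data are made up.
import cvxpy as cp
import numpy as np

B = np.array([[0., 5.], [5., 0.]])      # line susceptances
S = np.array([[0., 1.], [1., 0.]])      # line thermal limits
Pmax = np.array([2.0, 1.5])             # one generator per node
c1 = np.array([10.0, 30.0])             # linear generation costs
d = np.array([1.2, 0.8])                # nodal net loads

P = cp.Variable(2, nonneg=True)
theta = cp.Variable(2)
balance = [P[i] - sum(B[i, j] * (theta[i] - theta[j]) for j in range(2)) == d[i]
           for i in range(2)]
cons = balance + [P <= Pmax, theta[0] == 0]
for i in range(2):
    for j in range(2):
        if B[i, j] > 0:
            cons.append(cp.abs(B[i, j] * (theta[i] - theta[j])) <= S[i, j])
prob = cp.Problem(cp.Minimize(c1 @ P), cons)
prob.solve()

# nodal prices: duals of the power balance constraints
# (sign conventions can differ between solvers)
lam = np.array([float(balance[i].dual_value) for i in range(2)])

# assemble the gradient contribution of loss_G2 as in the final expression above
DA_span = np.array([3.0, 2.5])          # DA_max - DA_min per node (made up)
delta_scale = 1.0
jac = np.random.randn(2, 5)             # placeholder for d eps_gen / d theta_g
grad_loss_G2 = (lam * DA_span / delta_scale) @ jac   # one entry per parameter
\end{verbatim}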
\section{Numerical Experiments} \label{sec:Case_study} We apply the proposed OA-cGAN to improve power system reliability through more accurate DA decisions. \subsection{Reserve in the DA model} \label{ssec:reserve_testing} To accommodate changes between the DA load forecast and the actual RT load, some generators need to provide reserves that are sufficient to offset the forecast error and that are deliverable through the transmission network, i.e., they can be deployed without violating transmission constraints. At the same time, these reserves should be allocated in the least-cost manner. Assume a given set of forecast errors $\{\varepsilon_{i,t}\}_{i\in \set{I}, t\in \set{T}}$. The optimal reserve allocation can be calculated through the following modified DC-OPF formulation: \allowdisplaybreaks \begin{subequations} \begin{align} &\min_{\substack{\{P_{g,t}^{DA}\}_{g\in \set{G}, t\in \set{T}}, \\ \{r_{g,t}^{+},r_{g,t}^{-}\}_{g\in \set{G}, t\in \set{T}}, \\ \{\theta_{i,t}\}_{i\in \set{I}, t\in \set{T}}}}\ C = \sum\limits_{t\in \set{T}} \sum\limits_{g\in \set{G}} {(c_{0g} + c_{1g}P_{g,t}^{DA} + c_{2g}{P_{g,t}^{DA}}^2)} \label{reserve_DCOPF_objective}\\ &\text{s.t. } \sum\nolimits_{g\in \set{G}_i} P_{g,t}^{DA} - \sum\nolimits_{j \in \mathcal{N}_i} B_{i,j} (\theta_{i,t} - \theta_{j,t}) = DA^{real}_{i,t} \nonumber \\ & \hspace{0.5cm} \forall{i}\in\set{I},\ \forall{t}\in\set{T} \label{reserve_DCOPF_power_balance}\\ &\hspace{0.5cm}\sum\nolimits_{g\in \set{G}_i}(r_{g,t}^{+}-r_{g,t}^{-}) - \sum\nolimits_{j \in \mathcal{N}_i} B_{i,j} (\bar \theta_{i,t} - \bar \theta_{j,t}) = \varepsilon_{i,t} \nonumber \\ & \hspace{0.5cm} \forall{i}\in\set{I},\ \forall{t}\in\set{T} \label{reserve_DCOPF_reserve_balance}\\ & \hspace{0.5cm} P_{g,t}^{DA} + r_{g,t}^{+} - r_{g,t}^{-} \le P_{g}^{\max} \quad \forall{g}\in\set{G} \label{reserve_DCOPF_Pmax}\\ & \hspace{0.5cm} -S_{i,j} \le B_{i,j} (\theta_{i,t} + \bar \theta_{i,t} - \theta_{j,t} - \bar \theta_{j,t}) \le S_{i,j} \nonumber \\ & \hspace{0.5cm}\forall{i}\in\set{I},\ \forall{j}\in \mathcal{N}_i,\ \forall{t}\in\set{T} \label{reserve_DCOPF_power_flow}\\ &\hspace{0.5cm} \theta_{ref,t} = 0, \ \bar \theta_{ref,t} = 0 \quad \forall{t}\in\set{T} \label{reserve_DCOPF_ref} \\ & \hspace{0.5cm} P_{g,t}^{DA} \ge 0, \ r_{g,t}^{+} \ge 0, \ r_{g,t}^{-} \ge 0 \quad \forall{g}\in\set{G},\ \forall{t}\in\set{T}, \end{align}% \label{reserve_DCOPF}% \end{subequations}% \allowdisplaybreaks[0]% where $r_{g,t}^{+}$ and $r_{g,t}^{-}$ are the upward and downward reserves provided by generator $g$ at time $t$, and $\theta_{i,t}$ and $\bar \theta_{i,t}$ are the voltage angles at node $i$ and time $t$ considering only $DA^{real}_{i,t}$ and $\varepsilon_{i,t}$, respectively. Since the DC-OPF power flow constraints are linear, the voltage angles can be superimposed such that $\theta_{i,t} + \bar \theta_{i,t}$ is the voltage angle at node $i$ and time $t$ considering both $DA^{real}_{i,t}$ and $\varepsilon_{i,t}$. Objective \cref{reserve_DCOPF_objective} minimizes the operating cost using the same quadratic cost model as in \cref{DCOPF}. Eqs.~\cref{reserve_DCOPF_power_balance} and \cref{reserve_DCOPF_reserve_balance} are the nodal power balance constraints, where \cref{reserve_DCOPF_power_balance} ensures the DA forecast net load is served by the active output power of generators, and \cref{reserve_DCOPF_reserve_balance} ensures the DA net load forecast error is compensated for by the reserve provided by generators.
Constraints \cref{reserve_DCOPF_Pmax}-\cref{reserve_DCOPF_ref} ensure deliverability of both scheduled generation $P_{g,t}^{DA}$ and reserves $r_{g,t}^{+}$, $r_{g,t}^{-}$. If forecast error $\varepsilon$ were known exactly, then \cref{reserve_DCOPF} would yield the optimal least-cost dispatch and reserve allocation. In practice, however, $\varepsilon$ is unknown and must be estimated. Hence, we can use the OA-cGAN to estimate forecast errors that are statistically credible but particularly ``stressful'', i.e., corresponding to a relatively large operating cost for the system. In the experiments below, we evaluate the generated forecast errors using two metrics. The first evaluation metric is the total DA operating cost $C_{total}$ for all sample days in the testing set $\set{S}_{test}$, and the second evaluation metric is the RT upward (or downward) security level $I^+$ (or $I^-$), which determines the percentage of cases in which the error $\varepsilon$ considered in the DA stage induced the procurement of sufficient upward (or downward) reserves to balance the \textit{actual} RT net load ($I^+, I^- \in [0, 100\%]$). Algorithm~\ref{alg:evaluation} summarizes the proposed evaluation method. \begin{algorithm}[t] \small \SetAlgoLined \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{$\{DA^{real}_s, RT^{real}_s, \varepsilon_s\}_{s \in \set{S}_{test} }$;\\ number of samples in $\set{S}_{test}$ ($N_{test}$) } \Output{$C_{total}$, $I^+$, $I^-$ } \Begin{ $C_{total} \leftarrow 0$, $I^+ \leftarrow 0$, $I^-\leftarrow 0$; \\ \For{$s \in \set{S}_{test}$} {Run DC-OPF \cref{reserve_DCOPF} with $DA^{real}_s$ and $\varepsilon_s$, obtain $C_s$, $\{P_{g,t,s}^{DA}, r_{g,t,s}^{+}, r_{g,t,s}^{-}\}_{ g \in\set{G}, \ t\in\set{T}}$; $C_{total} \leftarrow C_{total} + C_s$;\\ Run DC-OPF \cref{DCOPF} with $RT^{real}_s$ and obtain $\{P_{g,t,s}\}_{g \in\set{G}, \ t\in\set{T}}$;\\ Calculate the reserve required in RT operation $R_{g,t,s} \leftarrow P_{g,t,s} - P_{g,t,s}^{DA} \quad \forall{g}\in\set{G}, \ \forall{t}\in\set{T}$; \\ \If{$R_{g,t,s} \le r_{g,t,s}^{+} \ \forall{g}\in\set{G}, \ \forall{t}\in\set{T}$} {$I^+ \leftarrow I^+ +1/N_{test}$} \If{$R_{g,t,s} \ge r_{g,t,s}^{-} \ \forall{g}\in\set{G}, \ \forall{t}\in\set{T}$} {$I^- \leftarrow I^- +1/N_{test}$} } \KwRet{$C_{total}$, $I^+$, $I^-$} } \caption{Evaluation of Given Error $\varepsilon$} \label{alg:evaluation} \end{algorithm} \subsection{Test system and data} \label{subsec:System} We conduct our numerical experiments using a zonal representation of the New York Independent System Operator (NYISO) system, as shown in Fig.~\ref{fig:zones}. Following the NYISO market structure, the full system is aggregated into an 11-zone system. (We note that this 11-zone representation is used in real-world operations for computing locational marginal prices for load charges). The hourly DA net load forecasts and actual RT net loads for each zone are available from NYISO in \cite{NYISO_load}. The system is populated with 362 generators and 33 wind farms, whose locations and parameters have been estimated from publicly available databases \cite{NYISO_Gold_Book, Wind_Turbine_Database}. All computations were carried out in Julia v1.5 \cite{Julia-2017}. The neural networks in the OA-cGAN were built and trained using the Flux package \cite{innes2018}, and the DC-OPF problems were implemented in JuMP \cite{DunningHuchetteLubin2017} and solved using the Gurobi solver \cite{gurobi}. Our implementation and data are publicly available at \cite{oacgan_code}.
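For readers who prefer code to pseudocode, the bookkeeping of Algorithm~\ref{alg:evaluation} can be sketched in a few lines of Python. The array shapes and the explicit minus sign in the downward check (which treats $r^-$ as a nonnegative quantity) are assumptions made for this illustration, not the reference implementation.
\begin{verbatim}
# Illustrative sketch of the evaluation bookkeeping; shapes and names are assumed.
import numpy as np

def evaluate(samples, N_test):
    # each sample: (C_s, P_DA, r_plus, r_minus, P_RT), arrays of shape (G, 24)
    C_total, I_plus, I_minus = 0.0, 0.0, 0.0
    for (C_s, P_DA, r_plus, r_minus, P_RT) in samples:
        C_total += C_s
        R = P_RT - P_DA                   # reserve actually needed in RT
        if np.all(R <= r_plus):           # upward reserves were sufficient
            I_plus += 1.0 / N_test
        if np.all(R >= -r_minus):         # downward reserves were sufficient
            I_minus += 1.0 / N_test
    return C_total, I_plus, I_minus
\end{verbatim}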
\begin{figure}[!t] \centering \includegraphics[width=0.7\linewidth]{figures/11_zones.png} \caption{11-zone representation of the NYISO system \cite{NYISO_zone_map}.} \label{fig:zones} \end{figure} We obtained the hourly DA and RT net load from January 1st, 2018 to January 4th, 2021 (1100 days in total) from NYISO and randomly split the data into 1000 days for training the OA-cGAN, as described in Sections~\ref{subsec:Training} and \ref{subsec:Gradient}, and 100 days for testing as described in Section~\ref{ssec:reserve_testing}. Fig.~\ref{fig:real_load} shows the DA and RT load profiles for four selected days (Jan. 1st, Apr. 10th, Jul. 20th, Oct. 30th 2018) in each zone, drawn from each quarter of the year. It can be seen that for each zone and each quarter the forecast errors exhibit distinct seasonal characteristics. Thus, we use quarters as labels in the OA-cGAN training, i.e., the loads in the first (Jan.--Mar.), second (Apr.--Jun.), third (Jul.--Sep.), and fourth (Oct.--Dec.) quarters are labeled as 0, 1, 2, and 3, respectively. Note that this labeling system is used in this paper for simplicity of illustration. To generate errors with more specific properties, one can use more complicated labeling systems, which include more information about the target day, such as the daily temperature or the precipitation. The normalized errors in the 11 zones and the whole NYISO system with the same label (label=0) are shown in Fig.~\ref{fig:error_12_zones}, where the blue lines are the real normalized errors in 2018, the red line in the middle of each sub-figure is the average of the blue lines in the same sub-figure, and the light blue areas indicate the possible distribution area of the errors according to historical data. Fig.~\ref{fig:error_12_zones} displays differences among the normalized errors in the 11 zones. For example, in Zone 3, the errors are distributed approximately evenly between $-0.5$ and $0.5$ and have an average value close to 0, while in another zone, all the errors are positive and the maximum error is around 2. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{figures/real_load.png} \caption{Day-ahead and real-time loads in 11 NYISO zones. ``Total'' shows the sum over all 11 zones.} \label{fig:real_load} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{figures/Normalized_error_12_zones.png} \caption{Historical normalized errors in 11 NYISO zones and the total sum for Label=0.} \label{fig:error_12_zones} \end{figure} \subsection{Training Results} \label{subsec:Training_Results} We test the OA-cGAN using seven different values of $k$ between $0.5$ and $1$. Recall that $k$ determines the weight between the two objectives of $G$ in Eq.~\cref{objective_G}, i.e., the objective of $G$ is to generate statistically credible errors (minimizing $loss_{G1}$) that are also operation-adversarial (minimizing $loss_{G2}$). The greater $k$ is, the more important the first objective is. Both $G$ and $D$ in the OA-cGAN in this numerical experiment are three-layer convolutional neural networks which use rectified linear units (ReLU) as the activation function. The size of the mini-batch during the training process is 100, and the reference constants for scaling in \cref{C_star} are $\delta_{shift}=2 \cdot 10^8$ and $\delta_{scale}=8\cdot 10^5$. The losses of $G$ and $D$ during the first 30 epochs of the training process are shown in Fig.~\ref{fig:loss_G_D}.
According to Fig.~\ref{fig:loss_G_D}(a), the overall trend of $loss_D$ during the training process is a rapid decrease over the first 10 epochs followed by gradual stabilization. In contrast, according to Fig.~\ref{fig:loss_G_D}(b), the overall trend of $loss_G$ during the training process is a slow increase at first followed by gradual stabilization. Based on the pattern of each curve in Fig.~\ref{fig:loss_G_D}, we can divide the seven cases with the seven different values of $k$ into two groups, i.e., the three cases with $k \ge 0.9$ form one group and the four cases with $k \le 0.8$ form another. When $k \ge 0.9$, $loss_D$ converges to 1, indicating that the discriminator cannot reliably identify whether the input data are original or generated; when $k \le 0.8$, $loss_D$ converges to $0$, indicating that the discriminator can almost completely distinguish the generated from the original data and that the credibility of the generated data is poor. With $k=0.9$, $0.95$ and $1$, $loss_G$ converges to three close positive values between $1$ and $1.5$. However, if $k \le 0.8$, $loss_G$ converges to four equally spaced negative values. To further explain the results of $loss_G$, we plot the values of $loss_{G1}$ and $loss_{G2}$ during the training process separately in Fig.~\ref{fig:G_loss_epoch}(a) and (b). According to Fig.~\ref{fig:G_loss_epoch}(b), when $k=1$, $loss_{G2}$ oscillates between $-0.5$ and $0$. When $k<1$, $loss_{G2}$ at the beginning of the training process is around $-0.7$, and only in the cases with $k \ge 0.9$ does $loss_{G2}$ deviate from this initial value during training and start to oscillate. When $k \le 0.8$, $loss_{G2}$ does not change during the training process. Moreover, in the cases with $k=0.9$ (or $0.95$), there is an obvious turning point at epoch 10 (or epoch 7) on the curves of $loss_{G1}$, $loss_{G2}$, and $loss_D$, but there is no turning point on the curve of $loss_{G}$. These turning points reflect the changes in the relative influence of the two objectives of $G$ during the training process. For example, when $k=0.9$, the objective of minimizing $loss_{G2}$ controls the training of $G$ before epoch 10. Thus, during this period, $loss_{G2}$ remains at a low level, while $loss_{G1}$ keeps increasing and $loss_{D}$ keeps decreasing because $D$ becomes increasingly able to recognize the generated data. After the turning point, the influence of minimizing $loss_{G1}$ exceeds the influence of minimizing $loss_{G2}$, so $loss_{G1}$ starts to decrease and $loss_{G2}$ starts to increase. In the cases with $k \le 0.8$, $loss_{G1}$ keeps increasing during the whole training process, which means that the objective of minimizing $loss_{G2}$ is always more influential than that of minimizing $loss_{G1}$. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{figures/loss_G_D.png} \caption{Values of loss function terms $loss_{G}$ (a) and $loss_{D}$ (b) during the training process.} \label{fig:loss_G_D} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{figures/G_loss_epoch.png} \caption{Values of loss function terms $loss_{G1}$ (a) and $loss_{G2}$ (b) during the training process.} \label{fig:G_loss_epoch} \end{figure} \subsection{Testing Results} \label{subsec:Testing_Results} In this section, we present the testing results of the fully trained OA-cGAN. According to the training results in Section \ref{subsec:Training_Results}, the data generated by the OA-cGAN for $k \le 0.8$ have similar characteristics.
Thus, we only study the testing results for $k$ equal to $0.8$, $0.9$, or $1$. Errors generated by the trained OA-cGAN are shown in Fig.~\ref{fig:generated_error}. We notice that the greater $k$, the greater the variance of the generated errors. Specifically, the generated errors for the 11 zones and the whole system when $k=0.8$ are always straight lines, which correspond to the most costly case in each zone. Note that the maximum values of the generated errors in the 11 zones differ, which corresponds to the historical error distributions of the individual zones shown in Fig.~\ref{fig:error_12_zones}. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{figures/generated_error.png} \caption{Selected generated errors for the fully trained OA-cGAN for all zones and label=0.} \label{fig:generated_error} \end{figure} We then evaluate the performance of the generated errors using Algorithm~\ref{alg:evaluation}. We compare the evaluation results of the errors generated by the OA-cGAN ($\varepsilon^{gen}$) and of robust errors ($\varepsilon^{robust}$), which are assumed to be proportional to the real DA load ($DA^{real}$) as: \begin{align} \varepsilon^{robust}_{i,t} = r DA_{i,t}^{real}, \quad \forall{i}\in\set{I},\ \forall{t}\in\set{T} \label{robust_error} \end{align} where $r$ is the robustness level. We evaluate the performance of errors in nine cases and summarize the results in Table~\ref{tab:evaluation}. In Case 1, we do not consider any errors in DA scheduling, so no reserve is procured for RT operation. In Cases 2-4, we consider errors generated by the OA-cGAN with different values of $k$. In Cases 5-9, we consider robust errors generated according to \cref{robust_error} with different values of $r$. It can be seen from Table~\ref{tab:evaluation} that from Case 1 to Case 9, the total operating cost $C_{total}$ and the upward security level ($I^+$) increase monotonically, while the downward security level ($I^-$) decreases monotonically. Meanwhile, the overall security level ($I^+ + I^-$) remains roughly the same in Cases 2-9. Thus, we can conclude that the errors generated by the OA-cGAN lead to a more economical dispatch and reserve schedule while maintaining the same high level of reliability. \begin{table}[!t] \centering \caption{Evaluation results of each testing case} \begin{tabular}{ccccc} \toprule No. & Error & $C_{total}$ (million \$) & $I^{+}$ & $ I^{-}$ \\ \midrule 1 &No error & 470.16 & 0 & 0\\ 2 &Generated ($k=1$) & 470.71 &53.45\% &67.88\% \\ 3 &Generated ($k=0.9$) & 475.56 &54.37\% &67.00\%\\ 4 &Generated ($k=0.8$) & 480.17 &56.36\% &63.61\% \\ 5 &Robust ($r=0.1$) & 482.68 &56.86\% &64.22\%\\ 6 &Robust ($r=0.3$) & 512.06 &62.62\% &58.11\%\\ 7 &Robust ($r=0.5$) & 545.59 &64.05\% &56.18\% \\ 8 &Robust ($r=0.7$) & 590.82 &65.83\% &55.23\% \\ 9 &Robust ($r=0.9$) & 648.71 &66.16\% &53.83\%\\ \bottomrule \end{tabular}% \label{tab:evaluation} \end{table} \section{Conclusion} In this paper, we developed a modified conditional generative adversarial network (cGAN) that internalizes a DC optimal power flow model to generate statistically credible net load scenarios that are particularly stressful, i.e., expensive, to the system operation. We derived the necessary training objectives and methods using stochastic gradient descent (SGD) and tested the proposed operation-adversarial cGAN (OA-cGAN) on a realistic zonal model of NYISO.
The numerical experiments illustrated the proposed training method and demonstrated that net load forecast errors produced by the OA-cGAN lead to generator dispatch and reserve allocation decisions that are more cost-effective than, but as reliable as, a robust benchmark. \bibliographystyle{IEEEtran}
\section{Introduction} Following the isolation of single layer graphene,~\cite{NovoselovNat2007} studies on the electrical,~\cite{RevModPhys.81.109,Novoselov10222004,ISI:000248194700033,ISI:000233133500043,cresti} optical,~\cite{ISI:000274338800035,ISI:000256441100035} thermal,~\cite{Seol04092010, doi:10.1021/nl0731872,ISI:000279014300016, ISI:000277444900021, geim_acsnano,Balandin-NM-Rev} and mechanical~\cite{ISI:000257713900044, ISI:000253764000041} properties of this low-dimensional material have revealed its potential for many technological applications.~\cite{doi:10.1021/nl070133j,PhysRevLett.100.206802, ISI:000244558400015,PhysRevLett.98.206805} This in turn has triggered interest in isomorphs of graphene, namely \textit{h}-BN~\cite{PhysRevB.49.5081,ISI:000278888600004, ISI:000275858200036, ISI:000282727600056, ISI:000254669900075, ISI:000284990900046, ISI:000268652700007, ISI:000276905600060} and hybrid \textit{h}-BN/graphene structures. Recently, fabrication of both random immersions of \textit{h}-BN in graphene~\cite{C1NR11387A,han:203112} and well-defined clusters of \textit{h}-BN in graphene with possible kinetically controllable domain sizes~\cite{ISI:000276953500024} has intensified this interest. In particular, such hybrid systems have a considerable compositional and structural diversity that translates into greater freedom for tuning the physical properties. Both experimental and density functional theory (DFT) studies have shown that the physical properties of these materials can be significantly modified by simply varying the relative amount of \textit{h}-BN to graphene.~\cite{doi:10.1021/nl2011142,lam:022101,qiu:064319} For instance, Ci~\textit{et al.}~\cite{ISI:000276953500024} have experimentally shown that decreasing the relative amount of \textit{h}-BN to graphene increases the electrical conductivity, which has been supported by DFT studies where increasing BN concentration and cluster size results in band gap opening.~\cite{xu:073711,PhysRevB.84.125401} It was recently shown that the details of the bonding at the \textit{h}-BN/graphene interface can change the type of intrinsic doping of the system.~\cite{PhysRevB.84.205412} As further examples of how the chemical and structural diversity of this low-dimensional hybrid system enables control over magnetic properties, zigzag edges in ribbons have been suggested to lead to ferromagnetic behavior,~\cite{PhysRevLett.87.146803} while more complex interfaces, like those present in \textit{h}-BN clusters embedded in graphene, can be antiferromagnetic.~\cite{PhysRevB.84.075405} Thermal transport in graphene with embedded \textit{h}-BN quantum dots has been studied recently using a real-space Kubo approach.~\cite{PhysRevB.84.205444} This study showed that decreasing the dot size considerably decreases the phonon mean free path (MFP) of both in-plane and out-of-plane modes. However, limited variation in MFP has been observed by changing the dot concentration at the smallest dot sizes. In another study, the effect of BN nanodots on the heat current in graphene nanoribbons has been investigated by using non-equilibrium Green's functions and nonequilibrium (direct method) molecular dynamics.~\cite{refId0} The authors claimed that there is an inverse linear relationship between the number of atoms at the interface and the heat current.
Although these studies provide valuable insight into thermal transport, the thermal conductivity of graphene/\textit{h}-BN nanostructured systems has not been investigated systematically by considering superlattices with different nano-morphologies. The objective of this study is to investigate the influence of the chemical and structural diversity present in hexagonal boron nitride (\textit{h}-BN) and graphene hybrid nanostructures on thermal transport, and to test possible pathways for tuning the thermal conductivity of these low-dimensional hybrid structures. In particular, we investigate the variation of the thermal conductivity of hybrid graphene/\textit{h}-BN nanostructures for: 1) stripe superlattice geometries, while varying geometric parameters and composition, and 2) BN (graphene) dots embedded in graphene (BN), as a function of dot diameter and composition. The theoretical findings aim at providing a basis for potential thermal management applications in miniaturized devices. We have previously calculated the lattice thermal conductivities of nanotubes, graphene, and \textit{h}-BN based nanostructures~\cite{Che_2000_2,JustinACS,Cem-BN-PRB,cem-bn-szotop,doi:10.1021/nl2029333} with considerable accuracy and compared the results with available experimental data.~\cite{Balandin-NM-Rev} In this study, we implement an accurate model for C-B and C-N interactions by employing DFT calculations in addition to our previous \textit{h}-BN potential. Using these Tersoff interatomic potentials, we calculated the lattice thermal conductivity of several possible graphene/\textit{h}-BN hybrid structures. The rest of the report is organized as follows: First, the model utilized to develop the potential and the calculation methods for thermal conductivity are described. Then, the validity of our potentials for studying hybrid nanostructures is demonstrated. This is followed by a detailed description of the considered hybrid nanostructures and a discussion of the effect of structure and composition on lattice thermal transport properties. \section{Method} Equilibrium molecular dynamics simulations can be utilized to obtain the instantaneous heat current ($\bm{J}$) or energy moment ($\bm{R}$) as a function of time. Subsequently, the thermal conductivity, $\kappa$, can be evaluated by using either the heat current autocorrelation function (Green-Kubo method)~\cite{Green_1954,Kubo_1957,Zwanzig_1965} or the mean square displacement of the energy moment (Einstein relation)~\cite{Zwanzig_1965} as discussed in detail in our earlier studies.~\cite{JustinACS,Cem-BN-PRB,cem-bn-szotop,Alper,JustinNanotech} Here, the thermal conductivity is evaluated from the Einstein relation (the mean square displacement of the energy moment, denoted \emph{h}MSD) as given by~\cite{Weitz_1989} \begin{equation} \frac{\big\langle[R_\mu(t)-R_\mu(0)]^2\big\rangle}{2V k_BT^2} = \kappa_{\mu\mu} [t+\tau(e^{-t/\tau}-1)]. \label{eq:Reinstein} \end{equation} Here, $V$ is the volume, $T$ is the temperature, and $k_B$ is the Boltzmann constant. The energy moment along direction $\mu$ is denoted by $R_\mu$. The right-hand side of Eq.~\ref{eq:Reinstein} becomes linear in time for times ($t$) much larger than the decay time ($\tau$). The long-time behavior corresponds to the diffusive regime of heat transport. For short times, on the other hand, the average energy propagation is ballistic, resulting in a non-linear time dependence of \emph{h}MSD.
Given sufficient time, a bulk system reaches diffusive behavior at elevated temperatures, and we are therefore primarily interested in this regime. Computationally, we eliminate the non-linear portion of the relationship by discarding the first 100 ps of \emph{h}MSD and then fitting the rest to a linear function, i.e., \emph{h}MSD $=2V k_BT^2\kappa_{\mu\mu}t$, in order to obtain the thermal conductivity. In this study, we investigate the thermal conductivity of graphene/\textit{h}-BN superlattices in the form of stripes and dots/``anti''dots, see Fig.~\ref{fig1}. The stripe superlattices are discussed in two general categories. In the first case, equal periods ($l_{\mathrm{G}}$ = $l_{\mathrm{BN}}$) of graphene and \textit{h}-BN stripes are simulated, and in the second, unequal periods ($l_{\mathrm{G}}$ $\neq$ $l_{\mathrm{BN}}$). The stripes of graphene and \textit{h}-BN sublattices are connected in two different orientations, resulting in either a zigzag or an armchair interface. For all structures, approximately 60$\times$60~$nm^2$ periodic domains are considered. Previously, we showed that such large systems are required for the proper convergence of thermal conductivity in equilibrium MD calculations of ribbon-like systems.~\cite{JustinACS} For the equal period simulations, in each orientation, five different period thicknesses ranging from $\sim$1.25 to $\sim$30 $nm$ are constructed. The atomistic details of these systems are given in Table~\ref{table1} in the Appendix. For the unequal period simulations, again five different configurations are created for the armchair and zigzag interface systems, where the thicknesses of the BN sublattices change from $\sim$3 to $\sim$57 $nm$ and the sum of $l_{\mathrm{BN}}$ and $l_{\mathrm{G}}$ is set to $\sim$60 $nm$, see Table~\ref{table2} in the Appendix for details. As a second type of nanostructure, dots of~\textit{h}-BN are embedded in graphene with a close-packed arrangement as shown in Fig.~\ref{fig1}. We select three different radii (4.95 \AA, 12.38 \AA, and 24.76 \AA) for these ordered dots. Ordered graphene dots in~\textit{h}-BN, so-called anti-dots, are created with a radius of 12.38 \AA. Random configurations of antidots are also considered, with radii of 6.19 and 12.38 \AA. The details of these structures are provided in Table~\ref{table3} in the Appendix. \begin{figure}[!h] \includegraphics[width=14.0cm]{fig1.ps} \caption{\label{fig1}(Color online) The hybrid structures considered in this work, \emph{viz.}, stripe superlattices and dots embedded in a sheet matrix.} \end{figure} Molecular dynamics simulations are performed in the microcanonical (NVE) ensemble with a time step of 1.0~fs to conserve energy and a simulation length of 5~ns to obtain an acceptable ensemble average of \textit{h}MSD. Each data point for $\kappa$ is therefore obtained by averaging the results of a minimum of five distinct simulations with different initial velocity distributions. The error in the $\kappa$ value is calculated from the standard deviation of these independent calculations. The volumes of the two-dimensional structures are defined as $lw\Delta$, where $w$ and $l$ are the width and the length of the simulated structures, and $\Delta$ (0.335 $nm$) is the mean Van der Waals thickness for \textit{h}-BN and graphene. Finally, we did not consider isotopic disorder explicitly in the thermal conductivity calculations. Instead, a single natural-abundance mass is used for each element.
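As an illustration of this fitting procedure, the following short Python sketch extracts $\kappa$ from a given \emph{h}MSD trace by discarding the first 100~ps and fitting the remaining, diffusive portion to a straight line; the units and array names are assumptions made for the example and do not correspond to any particular code used in this work.
\begin{verbatim}
# Illustrative sketch of extracting kappa from the Einstein relation.
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K

def kappa_from_hmsd(t, hmsd, V, T, t_cut=100e-12):
    # t in s, hmsd = <[R(t)-R(0)]^2> in (J m)^2, V in m^3, T in K
    mask = t > t_cut                               # discard the ballistic transient
    slope = np.polyfit(t[mask], hmsd[mask], 1)[0]  # linear fit to the diffusive tail
    return slope / (2.0 * V * kB * T**2)           # thermal conductivity in W/(m K)
\end{verbatim}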
We have previously developed a Tersoff-type potential for \textit{h}-BN systems.~\cite{Cem-BN-PRB} Also, a Tersoff parametrization for graphene is given by Lindsay and Broido.~\cite{PhysRevB.81.205441} Both potentials have been optimized to reproduce the DFT phonon dispersions for their respective material, which is necessary for ensuring accurate lattice thermal conductivities. In order to simulate the interfaces, one needs to further develop interaction potentials for all possible element pairs coupling at the interface. We have used DFT calculations to generate the data needed for the interfaces. These calculations have been performed with the Vienna \textit{ab initio} simulation package (VASP),~\cite{vasp1,vasp2} which is based on density functional theory. The projector augmented wave (PAW)~\cite{PAW1, PAW2} pseudopotential formalism was employed with the Perdew-Burke-Ernzerhof (PBE)~\cite{GGA1} form of the generalized gradient approximation (GGA). Using DFT energetics to condition empirical potentials has been previously motivated for both pure graphene and \textit{h}-BN. The PAW-PBE formalism, in particular, produces accurate structures for our systems of interest, with the calculated lattice parameters for graphene and \textit{h}-BN being 2.45\AA{} and 2.51\AA{}, respectively. Long horizontal strips of the structures in Fig.~\ref{fig2} are used with periodic boundary conditions in order to avoid spurious interface-interface interactions. Depending on the basic repeating unit of the given structures, in-plane dimensions of 29.95~\AA{} $\times$2.47~\AA{} (structures 2 and 3), 30.22~\AA{} $\times$2.49~\AA{} (structures 1 and 4), or 24.8~\AA{} $\times$4.30~\AA{} (structure 5) were used with 2$\times$16$\times$1, 2$\times$16$\times$1, and 2$\times$10$\times$1 Monkhorst-Pack $k$-point grids, respectively. A plane-wave energy cut-off of 400~eV is selected to achieve energy convergence. \section{Results and Discussions} \subsection{Optimization of C-BN Parameters} \begin{figure}[!h] \includegraphics[width=12.0cm]{fig2.ps} \caption{\label{fig2}(Color online) The change in the total energy, as given by both the Tersoff potential and DFT, per interface area as a function of interface separation, d. The corresponding structures for the interfaces are given with ball-and-stick representations (C = yellow, N = small blue, B = big red atom).} \end{figure} As pointed out in the previous section, reliable potentials for C-C and B-N interactions have appeared in the literature. To simulate the hybrid structures of interest, we must then only define the interactions between B-C and N-C. Since the structure and vibrational spectrum of \textit{h}-BN and graphene are similar, we opt to employ the mixing rules and fitting procedure put forth by Tersoff for Si-Ge and Si-C,~\cite{PhysRevB.39.5566} which approximate the parameters as a mixture of the existing BN and C parameters modified by two arbitrary values, $\chi_{\mathrm{B-C}}$ and $\chi_{\mathrm{N-C}}$. These parameters adjust the contribution from the attractive term to the potential. We have obtained $\chi_{\mathrm{B-C}}$ and $\chi_{\mathrm{N-C}}$ by requiring the potential to reproduce the DFT energetics of all probable \textit{h}-BN/graphene interfaces shown in Fig.~\ref{fig2}.
In these graphs, $\Delta\gamma$ is the change in total energy per interface area (width $\times$ Van der Waals thickness) as the interface separation, d, changes from the equilibrium value, d$_0$, under the condition that the bond lengths in the graphene and \textit{h}-BN regions are held fixed. The corresponding interfaces are also shown in Fig.~\ref{fig2}. Note that the interface separation parameter, d, accounts for both bond length and angle variations. Parameter fitting is accomplished by minimizing the differences between the DFT-derived and force-field-derived $\Delta\gamma$ values for each displacement and each structure simultaneously by updating the force field parameters using a genetic algorithm. The fitted parameters for $\chi_{B-C}$ and $\chi_{N-C}$ (0.886777 and 1.013636, respectively), along with the parameters obtained from the mixing rule, have produced MD energies in good agreement with DFT results. Moreover, the error in the calculated equilibrium B-C and N-C bond lengths of all structures is no larger than 1.5\%. A full list of parameters, along with the description of the interaction potential function and the mixing rules, is given in Table~\ref{table4} of the Appendix. \subsection{Stripe Superlattices with Equal Periods} In all striped superlattice structures, we calculated the lattice thermal conductivities parallel, $\kappa_{\shortparallel}$, and perpendicular, $\kappa_{\perp}$, to the superlattice orientation. The chosen interfaces are shown in Figs.~\ref{fig3} and~\ref{fig4} with the associated thermal conductivity values. The ball-and-stick structure in Fig.~\ref{fig3}a and in Fig.~\ref{fig4}a is the same interface given in Fig.~\ref{fig2} as structure 5, essentially one armchair ribbon connected to the other two in a symmetrical fashion through B-C and N-C bonds. The structure represented in Fig.~\ref{fig3}b and Fig.~\ref{fig4}b, on the other hand, can be thought of as one zigzag ribbon connected to two others on one side by B-C bonds and on the other by N-C bonds. These interfaces correspond to structures 2 and 3 in Fig.~\ref{fig2}. The effective stiffness at the interface, obtained by fitting the $\Delta\gamma$ to a quadratic function, shows that the C-N bond is stronger than the C-B bond. This is expected considering that both interactions are mainly covalent and that the bond strength increases as more electrons are involved in the bonding. Fig.~\ref{fig3} shows how the thermal conductivity of the aforementioned superlattice interfaces behaves when the periods, constrained by $l_{\mathrm{G}}$ = $l_{\mathrm{BN}}$, are varied. The transport coefficients in the parallel direction, however, behave differently, depending on the type of interface. The superlattice with the armchair interface has a smaller thermal conductivity than the one with the zigzag interface over the studied period range. As the period thickness increases, the stripe structures appear to become less sensitive to interface effects on the parallel thermal conductivity for both interfaces, approaching $\sim$1050--1200 $W/mK$. This is close to the midpoint of the thermal conductivity values of pristine \textit{h}-BN, 450 $W/mK$, and graphene, 2300 $W/mK$. This behavior agrees quantitatively with what is expected from treating the striped structure as the combination of two independent nanoribbons.
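As a quick numerical check of the two limiting estimates used here and in the discussion that follows, the pristine values quoted above ($\kappa_{\mathrm{graphene}}\approx 2300$ $W/mK$ and $\kappa_{\textit{h}\mathrm{-BN}}\approx 450$ $W/mK$) give the following; the snippet is simple arithmetic based on the numbers in the text.
\begin{verbatim}
# Quick check of two limiting estimates for an equal-period stripe superlattice.
kappa_G, kappa_BN = 2300.0, 450.0   # W/(m K), pristine values quoted above

# parallel to the stripes: area-weighted arithmetic mean for equal periods
kappa_par = 0.5 * (kappa_G + kappa_BN)   # = 1375 W/(m K)

# perpendicular to the stripes: series (harmonic-mean) combination, the bound
# invoked below when boundary resistance is neglected
kappa_perp = 2.0 * kappa_G * kappa_BN / (kappa_G + kappa_BN)   # ~ 752.7 W/(m K)
print(kappa_par, kappa_perp)
\end{verbatim}
The simulated large-period parallel values of $\sim$1050--1200 $W/mK$ lie somewhat below this simple arithmetic average, as expected when some interface scattering remains.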
Previously, it was shown that zigzag ribbons have better thermal transport properties than armchair ribbons at small widths because the latter have a higher atomic line density at the edge.~\cite{JustinACS,Cem-BN-PRB} Thicker ribbons have more transport channels, and the difference in scattering behavior at the edges becomes less significant. Thus, it is sensible for striped structures combined through zigzag interfaces to have larger transport coefficients at smaller periods. Another apparent observation is that the thermal transport coefficients perpendicular to the different interfaces behave similarly, gradually increasing from 200-250 $W/mK$ at $l = 1.5~nm$ to 350-400 $W/mK$ at $l = 30~nm$. The perpendicular thermal transport is strongly controlled by the lower thermal conductivity component (\textit{h}-BN) and the interface phonon scattering even at a 30~nm thickness. If one assumes that the periods of the stripes are longer than the phonon mean free path and the boundary resistance is negligible, then $\kappa_{\perp}$ of the stripe system of equal periods is bounded by $2(\kappa_{\mathrm{graphene}}\times\kappa_{\mathrm{\textit{h}-BN}})/(\kappa_{\mathrm{graphene}}+\kappa_{\mathrm{\textit{h}-BN}})$. For the calculated superlattices this equation gives 752.7 $W/mK$. The actual physics of the simulated systems, on the other hand, does not resemble this idealized picture. First, the system has a finite thermal boundary resistance that depends on the acoustic mismatch of the stripes and the intrinsic properties of the boundary. The effect of boundary structure on $\kappa_{\perp}$ is less pronounced when the results from Fig.~\ref{fig3}(a) and (b) are compared; $\kappa_{\perp}$ is almost independent of whether the interface is zigzag or armchair. Second, some of the systems have period lengths of only a few nanometers, which is very short compared to the MFP of the relevant phonons. Thermal conductivity perpendicular to the interface increases slowly as the period size grows; however, the ideal value will not be reached because of the limiting effect of the thermal boundary resistance, which will be present even in systems with period sizes longer than the characteristic MFP. \begin{figure}[!h] \includegraphics[width=14.0cm]{fig3.ps} \caption{\label{fig3}(Color online) The thermal transport coefficients parallel and perpendicular to the two different graphene/\textit{h}-BN interfaces are shown in (a) and (b). The period lengths of both graphene and \textit{h}-BN are constrained to be equal. The atomistic details for the calculated structures are given in Table~\ref{table1} in the Appendix.} \end{figure} \subsection{Stripe Superlattices with Unequal Periods} Using the same interfaces, we remove the constraint of equal-size periods and only require the sum of $l_{\mathrm{BN}}$ and $l_{\mathrm{G}}$ to be 60 $nm$. We note here that the variation of the period lengths also enables us to see the influence of concentration. When \textit{h}-BN has a small concentration (or a small period), the parallel component of thermal transport increases toward the limiting value of graphene as seen in Fig.~\ref{fig4}. On the other hand, the perpendicular component does not exceed 700 $W/mK$. Again, the zigzag interfaces have higher parallel thermal transport coefficients (35\% larger) than the armchair interfaces in almost all configurations.
When the period of BN is small, the reduction in $\kappa_{\perp}$ from the pristine graphene value is mainly due to interfacial phonon scattering; systems with larger $l_{\mathrm{BN}}$ drive $\kappa_{\perp}$ toward the pure \textit{h}-BN value but are still limited by the influence of interfacial scattering. The effect of the atomic bonding at the interface on conduction is most clearly seen when $l_{\mathrm{BN}}/l_{\mathrm{total}}=0.05$: the conductivity perpendicular to the boundary in the armchair-interfaced sample is noticeably higher than in the zigzag sample. This is most probably caused by enhanced scattering from the alternating types of interface bonding in zigzag boundaries.

\begin{figure}[!h]
\includegraphics[width=14.0cm]{fig4.ps}
\caption{\label{fig4}(Color online) The thermal transport coefficients parallel and perpendicular to the two different graphene/\textit{h}-BN interfaces are shown in (a) and (b). The sum of the period lengths of graphene and \textit{h}-BN is constrained to be 60 nm. The atomistic details for the calculated structures are given in Table~\ref{table2} in the Appendix.}
\end{figure}

\subsection{Dot and Anti-dot Superlattices}
We now turn to the investigation of the thermal conductivity of ordered and random distributions of \textit{h}-BN dots embedded in graphene. Fig.~\ref{fig5} shows the influence of dot size and concentration on $\kappa$. From Fig.~\ref{fig5} we see that larger dot sizes lead to higher thermal transport coefficients. At the lowest BN concentration (2\%), the system with the largest dots has a 20\% larger transport coefficient than the other sizes. This can be understood from the fact that larger dots have a smaller boundary-to-bulk ratio at the same concentration. As more dots are introduced, this interface effect is suppressed and the $\kappa$ of all systems converges to 250 $W/mK$ at 40\% \textit{h}-BN. Interestingly, this large-concentration limit is similar to the perpendicular conductivity of stripe superlattices with periods similar to the diameter of the dots (see Fig.~\ref{fig3}). It is likely that at large concentrations the \textit{h}-BN dots isotropically limit the thermal transport in the same manner that the stripes limit the transport perpendicular to the boundary. In addition to ordered BN dots, we have modeled ordered and random distributions of graphene dots in \textit{h}-BN. The thermal conductivity values of these systems are also presented in Fig.~\ref{fig5}. A decrease in thermal conductivity is also observed in these systems as the number of graphene dots increases. It is surprising that graphene, the higher-$\kappa$ component, does not enhance the thermal conductivity of \textit{h}-BN. This can be attributed to the relatively small size of the dots and the large \textit{h}-BN/graphene interface-to-area ratio, which leads to interfacial phonon scattering dominating $\kappa$. At the lowest C concentration, the ordered dot system has a higher thermal conductivity than the bulk value of \textit{h}-BN. It is not clear whether this is an actual physical phenomenon or an averaging artifact, since the error bars are large enough to include the bulk value. In creating the random dot configurations, we keep the mean dot separation similar to that of the ordered configurations with the same concentration. For each concentration, the initial conditions of the simulations are varied not only in the atom velocities but also in the distribution of the dots.
The thermal conductivities of the structures with ordered and random dots are not significantly different for the same dot sizes and concentrations (see the inset of Fig.~\ref{fig5}). Again, the smaller dots lead to lower $\kappa$ when the concentration of C is kept constant.

\section{Summary and Concluding Remarks}
We have characterized the lattice thermal transport properties of hybrid graphene and \textit{h}-BN structures: graphene-white graphene stripes and dot/antidot superlattices. The $\kappa_{\perp}$ of striped nanostructures with large periods is limited by the less conductive component, \textit{h}-BN. The parallel transport, on the other hand, attains a value close to the average of the two components. As the periods of the stripes are reduced, interface scattering effects become more prevalent, with zigzag interfaces resulting in higher $\kappa$ than armchair interfaces. The thermal conductivity of the dot systems can be tailored by both dot diameter and concentration, with small dot concentrations and large dot diameters leading to larger conductivities. Moreover, the transport properties of nanosystems with high dot concentrations are independent of dot size, approaching the $\kappa_{\perp}$ of the small-period striped superlattices.

\begin{figure}[!h]
\includegraphics[width=14.0cm]{fig5.ps}
\caption{\label{fig5}(Color online) The thermal transport properties of graphene with embedded \textit{h}-BN dots and \textit{h}-BN with embedded graphene dots. Three different radii, 4.95 \AA, 12.38 \AA, and 24.76 \AA, are used for the \textit{h}-BN dots. The number of dots in the systems is varied such that the BN concentration ranges from $\sim$2-98\%. Two different radii, 6.19 \AA\ and 12.38 \AA, are employed for the graphene dots. The superscript ``d'' indicates the disordered dot arrangement. The BN concentration on the horizontal axis is calculated as the percent ratio of the total number of boron and nitrogen atoms to the total number of atoms. The inset graph has the same axis units as the outer graph. The atomistic details for these systems are given in Table~\ref{table3} in the Appendix.}
\end{figure}

\section{Acknowledgements}
We acknowledge support from NSF (DMR 0844082) to the International Institute of Materials for Energy Conversion at Texas A\&M University as well as from AFRL. Parts of the computations were carried out by the Laboratory of Computational Engineering of Nanomaterials, supported by ARO, ONR and DOE grants. We would also like to thank the Supercomputing Center of Texas A\&M University for a generous time allocation for this project. CS acknowledges support from the Scientific and Technological Research Council of Turkey (TUBITAK) for his research at Anadolu University.

\clearpage
\section{Appendix}
Structural details of the stripe and dot superlattices are given in Table~\ref{table1}, Table~\ref{table2} and Table~\ref{table3}.

\begin{scriptsize}\begin{table}[!h]
\caption{\label{table1} Simulation details for the stripe superlattices where graphene and \textit{h}-BN have equal period thicknesses. Thermal conductivities of these structures are given in Fig.~\ref{fig3}.
The total sizes of the systems are given by Length$_{arm}$ and Length$_{zig}$, where the subscripts indicate whether the length is measured along the armchair or the zigzag direction.}
\begin{ruledtabular}
\begin{tabular}{lccccc}
Boundary $\backslash$ Period ($nm$) & \# of B & \# of N & \# of C & Length$_{arm}$ ($nm$)& Length$_{zig}$ ($nm$)\\\hline
Armchair $\backslash$ $l=1.246854$ & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\
Armchair $\backslash$ $l=7.481121$ & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\
Armchair $\backslash$ $l=9.974828$ & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\
Armchair $\backslash$ $l=14.9622$ & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\
Armchair $\backslash$ $l=29.92449$ & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\
Zigzag $\backslash$ $l=1.2957698$ & 32982 & 32982 & 65964 & 59.60541 & 59.59960 \\
Zigzag $\backslash$ $l=5.1830792$ & 34560 & 34560 & 69120 & 62.19695 & 59.84897 \\
Zigzag $\backslash$ $l=9.934235$ & 32982 & 32982 & 65964 & 59.60541 & 59.59960 \\
Zigzag $\backslash$ $l=15.1173$ & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\
Zigzag $\backslash$ $l=30.234625$ & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\\hline
\end{tabular}
\end{ruledtabular}
\end{table}\end{scriptsize}
\newpage
\begin{scriptsize}\begin{table}[!h]
\caption{\label{table2} Simulation details for the stripe superlattices where graphene and \textit{h}-BN have different period thicknesses. Thermal conductivities of these structures are given in Fig.~\ref{fig4}. The total sizes of the systems are given by Length$_{arm}$ and Length$_{zig}$, where the subscripts indicate whether the length is measured along the armchair or the zigzag direction.}
\begin{ruledtabular}
\begin{tabular}{lccccc}
Boundary $\backslash$ $l_{\mathrm{BN}}/l_{\mathrm{total}}$ & \# of B & \# of N & \# of C & Length$_{arm}$ ($nm$)& Length$_{zig}$ ($nm$)\\\hline
Armchair $\backslash$ 0.05 & 3360 & 3360 & 127680 & 60.46925 & 59.84897 \\
Armchair $\backslash$ 0.25 & 16800 & 16801 & 100800 & 60.46925 & 59.84897 \\
Armchair $\backslash$ 0.50 & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\
Armchair $\backslash$ 0.75 & 50400 & 50400 & 33600 & 60.46925 & 59.84897 \\
Armchair $\backslash$ 0.95 & 63840 & 63840 & 6720 & 60.46925 & 59.84897 \\
Zigzag $\backslash$ 0.05 & 3360 & 3360 & 127680 & 60.46925 & 59.84897 \\
Zigzag $\backslash$ 0.25 & 16800 & 16800 & 100800 & 60.46925 & 59.84897 \\
Zigzag $\backslash$ 0.50 & 33600 & 33600 & 67200 & 60.46925 & 59.84897 \\
Zigzag $\backslash$ 0.75 & 50400 & 50400 & 33600 & 60.46925 & 59.84897 \\
Zigzag $\backslash$ 0.95 & 63840 & 63840 & 6720 & 60.46925 & 59.84897 \\\hline
\end{tabular}
\end{ruledtabular}
\end{table}\end{scriptsize}
\newpage
\begin{scriptsize}\begin{table}[!h]
\caption{\label{table3} Simulation details for graphene with embedded \textit{h}-BN dots and \textit{h}-BN with embedded graphene anti-dots.
Thermal conductivities of these structures are given in Fig.~\ref{fig5}.}
\begin{ruledtabular}
\begin{tabular}{lccccc}
Radius ($nm$) & \# of B & \# of N & \# of C & Length$_{X}$ ($nm$)& Length$_{Y}$ ($nm$)\\\hline
& 6000 & 4800 & 18000 & 29.71200 & 25.73130\\
& 3840 & 3072 & 25856 & 31.69280 & 27.44680\\
r$_{\mathrm{BN}}=$ 0.495& 2016 & 1728 & 31104 & 32.68320 & 28.30446\\
& 960 & 768 & 31040 & 31.69280 & 27.44676\\
& 540 & 432 & 44028 & 37.14000 & 32.16417\\\hline
& 12384 & 12960 & 39456 & 44.56800 & 38.59704\\
& 5504 & 5760 & 45184 & 41.59680 & 36.02388\\
r$_{\mathrm{BN}}=$ 1.238& 3096 & 3240 & 58464 & 44.56800 & 38.59701\\
& 1376 & 1440 & 53632 & 41.59680 & 36.02388\\
& 348 & 360 & 34140 & 32.68320 & 28.30448\\\hline
& 13032 & 12744 & 39024 & 44.56800 & 38.59701\\
& 5792 & 5664 & 44992 & 41.59680 & 36.02388\\
r$_{\mathrm{BN}}=$ 2.476& 1448 & 1416 & 25936 & 29.71200 & 25.73135\\
& 1448 & 1416 & 53584 & 41.59680 & 36.02389\\
& 1448 & 1416 & 136528 & 65.36640& 56.60896\\\hline
& 14000 & 13500 & 17500 & 37.14000 &32.16420 \\
& 22784 & 22464 & 11200 & 41.59680 & 36.02388 \\
r$_{\mathrm{C}}=$ 1.238& 29340 & 29160 & 6300 & 44.56800 & 38.59701 \\
& 26864 & 26784 & 2800 & 41.59680 & 36.02388 \\
& 68320 & 68256 & 2816 & 65.36640 & 56.60896 \\\hline
r$^{\mathrm{d}}_{\mathrm{C}}=$ 1.238 & 26750 & 26820 & 5950 & 39.88552 & 40.15493 \\
& 29150 & 29145 & 1225 & 39.87575 & 40.14509 \\\hline
r$^{\mathrm{d}}_{\mathrm{C}}=$ 0.619 & 26795 & 26765 & 5960 & 39.89808 & 40.16757 \\
& 29160 & 29160 & 1200 & 39.88407 & 40.15347 \\\hline
\end{tabular}
\end{ruledtabular}
\end{table}\end{scriptsize}
\newpage
The potential used in this study was developed by Tersoff.~\cite{PhysRevB.39.5566}
\begin{eqnarray}
V_{ij} &=& f_{C}(r_{ij})\left[f_{R}(r_{ij})+b_{ij}f_{A}(r_{ij})\right]\nonumber\\
f_{C}(r_{ij}) &=& \left\{\begin{array}{rcl} 1 & : &r_{ij} < R_{ij} \\ \frac{1}{2}+\frac{1}{2}\cos\left(\pi\frac{r_{ij}-R_{ij}}{S_{ij}-R_{ij}}\right) & : &R_{ij} < r_{ij} < S_{ij} \\ 0 & : &r_{ij} > S_{ij} \end{array}\right. \nonumber\\
f_{R}(r_{ij}) &=& A_{ij} \exp\left(-\lambda^{I}_{ij}r_{ij}\right) \nonumber\\
f_{A}(r_{ij}) &=& -B^{'}_{ij} \exp\left(-\lambda^{II}_{ij}r_{ij}\right) ,\;\;\;\;\;\; B^{'}_{ij} = B_{ij}\chi_{ij} \nonumber\\
b_{ij} &=&\left(1+\beta_{i}^{n_{i}}\zeta_{ij}^{n_{i}}\right)^{-\frac{1}{2n_{i}}} \nonumber\\
\zeta_{ij} &=& \sum_{k\neq i,j} f_{C}(r_{ik}) g(\theta_{ijk}) \nonumber \\
g(\theta_{ijk}) &=& \left(1+\frac{c_{i}^{2}}{d_{i}^{2}}-\frac{c_{i}^{2}}{\left[d_{i}^{2}+(\cos\theta_{ijk}-h_{i})^{2}\right]}\right)\nonumber
\end{eqnarray}
In this description, the lower indices $i$, $j$ and $k$ mark the atoms, and the $i$-$j$ bond is modified by a third atom $k$. The potential parameters and their corresponding values are given in Table~\ref{table4}. The parameter $\chi_{ij}$ was used as a fitting parameter in our study. For the mixing of parameters, the geometric mean is calculated for the multiplicative parameters and the arithmetic mean for the exponential parameters. These rules are given below.
\begin{eqnarray}
\lambda^{I}_{ij} &=& \left(\lambda^{I}_{i}+\lambda^{I}_{j}\right)/2,\;\;\;\; \lambda^{II}_{ij} \;=\; \left(\lambda^{II}_{i}+\lambda^{II}_{j}\right)/2,\;\;\;\; A_{ij} \;=\;\left(A_{i}A_{j}\right)^{(1/2)}\nonumber\\
B_{ij} &=& \left(B_{i}B_{j}\right)^{(1/2)},\;\;\;\; R_{ij} \;=\;\left(R_{i}R_{j}\right)^{(1/2)},\;\;\;\; S_{ij} \;=\;\left(S_{i}S_{j}\right)^{(1/2)}\nonumber
\end{eqnarray}
It should be mentioned that $\chi_{ij}$ modifies $B_{ij}$, which is obtained as a result of the mixing procedure.
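As an illustration of how these mixing rules operate, the short Python sketch below combines two sets of per-element parameters into pair parameters and applies the fitted $\chi$. The carbon values are read off the C C X column of Table~\ref{table4}; the boron values in the sketch are placeholders only (they are not parameters of this work) and serve purely to show the mechanics of the rules.
\begin{verbatim}
import math

# Per-element parameters for carbon, read off the C C X column of Table 4.
C = {"A": 1393.6, "B": 430.0, "lam1": 3.4879, "lam2": 2.2119,
     "R": 1.80, "S": 2.10}

# Placeholder per-element parameters for boron -- NOT values from this work,
# included only to demonstrate the mixing rules.
B_el = {"A": 270.0, "B": 180.0, "lam1": 2.0, "lam2": 1.9, "R": 1.9, "S": 2.1}

def mix(p1, p2, chi=1.0):
    """Arithmetic mean for the exponential parameters (lam1, lam2),
    geometric mean for the multiplicative ones (A, B, R, S);
    chi rescales the mixed attractive prefactor B, as described above."""
    return {
        "lam1": 0.5 * (p1["lam1"] + p2["lam1"]),
        "lam2": 0.5 * (p1["lam2"] + p2["lam2"]),
        "A": math.sqrt(p1["A"] * p2["A"]),
        "B": chi * math.sqrt(p1["B"] * p2["B"]),
        "R": math.sqrt(p1["R"] * p2["R"]),
        "S": math.sqrt(p1["S"] * p2["S"]),
    }

# Sanity check: mixing carbon with itself must reproduce the carbon parameters.
assert abs(mix(C, C)["A"] - C["A"]) < 1e-9

# B-C pair parameters with the fitted chi_{B-C} = 0.886777 applied to B.
print(mix(B_el, C, chi=0.886777))
\end{verbatim}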
Here, we also note that the developed potential is not parameterized to represent N-N or B-B interactions, as can be seen from Table~\ref{table4}.
\newpage
\begin{scriptsize}\begin{table}[!h]
\caption{\label{table4} The parameters of the Tersoff potential optimized for C-BN interactions. The atom X represents the bond-modifying element; all parameters are exactly the same whether X is C, B or N.}
\begin{ruledtabular}
\begin{tabular}{lccccccc}
Parameters& C B X & C C X & C N X & B C X & B N X & N B X & N C X\\\hline
$A$ (eV)&1386.78&1393.6 &1386.78 &1386.78 & 1380.0 & 1380.0 & 1386.78 \\
$B^{'}$ (eV)&339.06891&430.0 &387.575152 &339.068910 & 340.0 & 340.0 & 387.575152 \\
$\lambda^{I}$ (\AA$^{-1}$)&3.5279 &3.4879 &3.5279 & 3.5279 & 3.568 & 3.568 & 3.5279 \\
$\lambda^{II}$ (\AA$^{-1}$)&2.2054 &2.2119 &2.2054 & 2.2054 & 2.199 & 2.199 & 2.2054 \\
$n$&0.72751&0.72751 &0.72751&0.72751&0.72751&0.72751&0.72751 \\
$\beta$ (10$^{-7}$)&1.5724 &1.5724 &1.5724 & 1.25724 & 1.25724 & 1.25724 & 1.25724 \\
$c$&38049&38049 &38049 &25000 & 25000 & 25000 & 25000 \\
$d$&4.3484&4.3484 &4.3484 &4.3484 &4.3484 &4.3484 &4.3484 \\
$h$&-0.93&-0.93 &-0.93 & -0.89 & -0.89 & -0.89 & -0.89 \\
$R$ (\AA)&1.85&1.80 &1.85 &1.85 & 1.90 & 1.90 & 1.85 \\
$S$ (\AA)&2.05&2.10 &2.05 &2.05 & 2.00 & 2.00 & 2.05 \\\hline
\end{tabular}
\end{ruledtabular}
\end{table}\end{scriptsize}
\clearpage
\section{INTRODUCTION}
Optical and X-ray studies (e.g. Geller \& Beers\markcite{14} 1982; Forman {et~al.}~\markcite{12} 1981; Dressler \& Shectman 1988; Jones \& Forman\markcite{19} 1992; Mohr et al. 1994; Bird 1994; Slezak et al. 1994) have shown that galaxy clusters are dynamically evolving systems exhibiting a variety of substructure. Thus, we expect to see key identifiers of the merging process in the temperature maps of galaxy clusters with significant substructure. In particular, in the early stages of an unequal merger, i.e., one subcluster larger than the other, simulations show significant heating of the smaller subcluster above the temperature of the local ICM (Evrard\markcite{9} 1990a and\markcite{10} b; Schindler \& M\"{u}ller\markcite{24} 1993) as well as the development of a shock located between the two subclusters. Abell 1367 has previously been identified as having significant optical and X-ray substructure (Bechtold\markcite{3} et al. 1983; Grebenev\markcite{16} et al. 1995). The X-ray emission is elongated along a southeast-northwest axis, and contains small, localized ``clumps''. The cluster has a relatively cool gas temperature and a high spiral fraction (see Bahcall\markcite{2} 1977 and Forman \& Jones\markcite{12a} 1982), typical of what is expected for a dynamically young system.

In this paper we report on the analysis of the structure of the A1367 galaxy cluster as mapped by the X-ray emission observed with the ROSAT PSPC and the ASCA detectors. In Section~\ref{sec:obs}, we apply two independent methods which account for the energy-dependent ASCA Point-Spread-Function (PSF) and produce moderate spatial resolution ($\sim 4$$^\prime$) temperature maps. This is an extension of the work presented by Churazov et al. (1996a). We also describe our surface brightness analysis of the ROSAT PSPC data. Section~\ref{sec:results} presents the results of the temperature determinations and our estimates of the mass of each subcluster. Finally, Section~\ref{sec:conclusion} briefly summarizes our results. All distance-dependent quantities assume $\rm H_o= 50\ km\ s^{-1}\ Mpc^{-1}$, $q_o=0.5$, all coordinates are given in the J2000 system, and unless otherwise noted all error bars are 67\% confidence level (1$\sigma$) errors.

\section{OBSERVATIONS \& METHODS}
\label{sec:obs}
\subsection{ROSAT Analysis}
The ROSAT PSPC observed A1367 from 29 November to 2 December 1991 (RP800153); observing details are listed in Table~\ref{tab:pointings}. We corrected the PSPC image for telescope vignetting and removed time intervals with high solar/particle backgrounds, using the standard procedures outlined by Snowden\markcite{25} (1994; also Snowden\markcite{26} et al. 1994). By combining only the data from Snowden bands 4 through 7 (0.44-2.04 keV), we excluded the lower energies, which are likely to have higher background X-ray contamination. The background was not subtracted during this stage of the processing; instead, it was included as a constant component in our fits of the surface brightness, as described below. The central area of the PSPC image is shown in Figure~\ref{fig:pspcimage}, after having been smoothed with a 30$^{\prime\prime}$\ Gaussian. \placetable{tab:pointings} \placefigure{fig:pspcimage} The extended cluster emission appears peaked in two locations, a primary peak near $11^h\ 44.8^m$ $+$19$^{\circ}$ 42$^\prime$ (hereafter the SE subcluster) and a secondary peak towards the northwest at $11^h\ 44.4^m$ $+$19$^{\circ}$ 52$^\prime$ (hereafter the NW subcluster).
The temperature distribution and previous radio results-- both discussed later in this paper-- suggest that we are observing the merger of two subclusters. Because of this, we performed a surface brightness analysis, deriving the X-ray surface brightness profile and core radius of each region independently, assuming a standard $\beta$-model and excluding the other region. These data also were used as a surface brightness model for one of the spectroscopic analyses of the ASCA data (Method B). To aid in the location of potential point sources contaminating the field, we smoothed the PSPC image on scales from 30$^{\prime\prime}$\ to 8$^\prime$. The smaller scales were used to locate point sources near the center of the image, while the larger scales were used near the edges where the distortions of the PSF are large. Potential point sources were identified by eye, and these areas were excluded from further analysis of the surface brightness distribution. To accurately locate the peaks of the two subclusters, we masked off one peak as well as the point sources, and centroided the other. We then repeated the process for the other peak. The equatorial coordinates (J2000) of the two peaks in the ROSAT image coordinate frame are: SE- $\alpha =11^h44^m50^s\ \delta =19^\circ 41^\prime 44^{\prime\prime}$, NW- $\alpha =11^h44^m22^s\ \delta =19^\circ 52^\prime 27^{\prime\prime}$. These appear to be offset $+0.175^s$ in RA and $+11$$^{\prime\prime}$\ in Dec from the true sky position based upon the location of NGC 3862 (the bright X-ray point source in the southeast). This is roughly consistent with typical pointing errors for ROSAT.

To generate radial profiles, we defined 1$^\prime$\ wide annuli, with inner radii from 0$^\prime$\ to 46$^\prime$, centered on each peak. To avoid contamination of one subcluster by the other, we excluded the third of each annulus which was on the side towards the other subcluster. For the SE subcluster the excluded azimuths ranged from 255$^{\circ}$ to 15$^{\circ}$ and for the NW subcluster from 75$^{\circ}$ to 195$^{\circ}$ (with the angle measured counter-clockwise from North). We then measured the average surface brightness ($\rm{cts\ s^{-1}\ arcmin^{-2}}$) in each annulus, and fit the resultant surface brightness profiles with a standard hydrostatic, isothermal $\beta$-model:
\begin{equation}
\Sigma(r)=\Sigma_0\left[1+\left( \frac{r}{R_c}\right)^2\right]^{-(3\beta -\frac{1}{2})}
\label{eq:sb}
\end{equation}
(Cavaliere \& Fusco-Femiano\markcite{7} 1976). Because of A1367's low redshift ($z=0.0215$), the cluster emission overfills even the PSPC's large field of view. This makes direct measurement of the background difficult. Therefore, we included a constant background component in our model. The best-fit backgrounds were $(1.79\pm 0.17)\times 10^{-4}$ and $(1.29\pm 0.29)\times 10^{-4}$ $\rm cts\ s^{-1}\ arcmin^{-2}$ for the SE and NW subclusters respectively, which are both consistent with typical PSPC backgrounds. While the best-fit background for the SE subcluster is slightly larger, possibly indicating a small amount of contamination by the NW subcluster, the errors are consistent with a single constant background. The results of our fitting are given in Table~\ref{tab:data} and the fits themselves are shown in Figure~\ref{fig:surfbright}.
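To make the fitting procedure concrete, the following short Python sketch fits a $\beta$-model plus constant background (Eq.~\ref{eq:sb}) to an azimuthally averaged radial profile. It is our own illustration of the approach rather than the actual analysis pipeline; the profile values are synthetic placeholders standing in for the measured annular surface brightnesses.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def beta_model(r, sigma0, r_c, beta, bkg):
    """Isothermal beta-model surface brightness plus a constant background:
    Sigma(r) = Sigma0 * [1 + (r/r_c)^2]^(-3*beta + 1/2) + bkg."""
    return sigma0 * (1.0 + (r / r_c) ** 2) ** (-3.0 * beta + 0.5) + bkg

# Placeholder profile: centres of the 1-arcmin annuli (arcmin) and synthetic
# mean surface brightnesses (cts/s/arcmin^2); the real values come from the
# PSPC image with point sources and the other subcluster masked out.
r = np.arange(0.5, 46.0, 1.0)
rng = np.random.default_rng(0)
sb = beta_model(r, 2.0e-3, 11.0, 0.7, 1.8e-4) + rng.normal(0.0, 5.0e-5, r.size)

popt, pcov = curve_fit(beta_model, r, sb, p0=[1e-3, 10.0, 0.6, 1e-4])
sigma0, r_c, beta, bkg = popt
print(f"r_c = {r_c:.1f} arcmin, beta = {beta:.2f}, "
      f"background = {bkg:.2e} cts/s/arcmin^2")
\end{verbatim}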
\placetable{tab:data} \placefigure{fig:surfbright}
\subsection{ASCA Analysis}
To correctly characterize the temperature from ASCA data at some location for an extended source, we must account for the extended and energy-dependent PSF of the telescope (Takahashi et al. 1995). The primary difficulty for spatially resolved spectroscopy is caused by the outer part, or wings, of the PSF. Failing to correct for this component can lead to spurious temperature and abundance gradients, although for A1367, due to its relatively low temperature and large spatial extent, these effects are not expected to be very strong. We have employed two independent methods which account for the broad, energy-dependent ASCA Point-Spread-Function (PSF) to construct temperature maps for A1367. Method A (Churazov et al. 1997; Gilfanov et al. 1997) provides a rapid approximate correction of the extended wings of the PSF. In contrast, Method B (Markevitch et al. 1997) performs an exact convolution of a surface brightness model with the PSF and the effective area of the telescope to generate model spectra which are then compared to the data.

ASCA observed A1367 on 4-5 December 1993, with four pointings, one pair centered on the northwest region and the other centered on the southeast. Within each pair, the two pointings were offset from each other by 1.89$^\prime$\ along a roughly SE-NW axis. This offset allowed an evaluation of any systematic effects. Details of the ASCA observations also are given in Table~\ref{tab:pointings}. Prior to correcting the PSF, the ASCA data were ``cleaned'' with standard processing tools (Arnaud\markcite{1} 1993). A cutoff rigidity of 8 GeV/c, minimum Earth elevation angles of 5$^{\circ}$\ for the GIS and 20$^{\circ}$\ for the SIS, and a maximum count rate of 50 cts/s in the radiation belt monitor were used. The GIS background was generated from an appropriately weighted combination of background maps for all rigidities; for the SIS, all hot-pixel events were removed and the background maps were scaled by the total exposure.

\subsubsection{Approximate Fitting of the Wings of the ASCA PSF-- Method A}
Method A approximates the ASCA PSF as having a core and broader wings (see Churazov et al. 1997 and Gilfanov et al. 1997 for details). The core PSF is corrected explicitly, while the energy-dependent PSF of the wings is corrected using a Monte Carlo algorithm. After the approximate subtraction of the scattered flux in the wings of the PSF, the temperature is determined using one of two approaches. The first fits the spectrum in each 15$^{\prime\prime}$\ pixel with a linear combination of two fiducial single-temperature spectra and then smooths the result to reduce the noise (Churazov et al. 1996b). This approach yields a continuous (unbinned) temperature map of the cluster. Central to this method is the fact that thermal spectra having typical cluster temperatures ($\gtrsim 2$ keV) can be approximated as a linear combination of two spectra bounding the temperature range in the cluster (Churazov et al. 1997). For A1367 we used fiducial spectra with kT=2 and 6 keV. The results for the GIS data for A1367 are shown in Figure~\ref{fig:tmap}, along with intensity contours from the ASCA data. Only temperatures with $\frac{T+\sigma_T}{T-\sigma_T}< 1.5$ are shown. Results for the SIS are similar, although with a smaller detector field of view.
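As a schematic illustration of this two-spectrum decomposition, the sketch below expresses an observed pixel spectrum as a non-negative linear combination of two fiducial spectra and converts the coefficients into a temperature estimate. The spectral shapes and the coefficient-to-temperature mapping are deliberately crude placeholders; the published method uses properly folded thermal models and a calibrated mapping.
\begin{verbatim}
import numpy as np

# Placeholder "fiducial" spectra standing in for kT = 2 and 6 keV thermal
# models folded through the instrument response (not real model spectra).
energies = np.linspace(1.5, 11.0, 200)          # keV
S2 = np.exp(-energies / 2.0)
S6 = np.exp(-energies / 6.0)

def decompose(observed, s_lo, s_hi):
    """Least-squares coefficients (a, b) with observed ~ a*s_lo + b*s_hi."""
    basis = np.column_stack([s_lo, s_hi])
    coef, *_ = np.linalg.lstsq(basis, observed, rcond=None)
    return np.clip(coef, 0.0, None)             # keep the combination physical

# Fake pixel spectrum: a 70/30 mixture of the fiducials plus noise.
rng = np.random.default_rng(1)
obs = 0.7 * S2 + 0.3 * S6 + rng.normal(0.0, 0.005, energies.size)
a, b = decompose(obs, S2, S6)

# One simple coefficient-to-temperature mapping: a weighted interpolation
# between the fiducial temperatures (the real method calibrates this step).
kT = (2.0 * a + 6.0 * b) / (a + b)
print(f"a = {a:.2f}, b = {b:.2f}, kT ~ {kT:.2f} keV")
\end{verbatim}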
\placefigure{fig:tmap} The second temperature fitting approach proceeds by defining a series of regions, which we chose to straddle the line connecting the two subclusters (see Figure~\ref{fig:tmap}). Each region was 5$^\prime$\ in width and approximately 16$^\prime$\ in length. We excluded a region 5$^\prime$\ in diameter around the bright point source (NGC 3862) near the SE subcluster, so as to prevent its signal from contaminating our temperature fits. This diameter was chosen to be large enough to more than contain the central part of the PSF. In addition, we estimate that the point source contributes only about 33\% of the total flux in this excluded region, which gives us confidence that there is little contamination in the neighboring regions (\#1 and \#2). All of the data in each region were binned together, and then output as composite spectra. These spectra were then fit with a standard Raymond-Smith model using the XSPEC package, utilizing the data from 1.5-2.0 and 2.5-11.0 keV. These limits were chosen to exclude the poorly calibrated region near the gold edge at 2.2 keV, and the extreme low energies for which the PSF remains poorly determined.

\subsubsection{Multiple Region Simultaneous Spectral Fitting-- Method B}
Method B employs a model surface brightness distribution which is convolved with the mirror effective area and the ASCA PSF (Takahashi et al. 1995) to produce model spectra for a set of user defined regions in the ASCA detector planes. The spectra from the desired regions are then fit simultaneously. A more detailed description, including a discussion of systematic uncertainties, can be found in Markevitch\markcite{22} et al. (1996 and 1997). Integral to this method is the use of a detailed surface brightness model. To this end we used the ROSAT PSPC image from our surface brightness analysis, blocked, rotated and shifted to coincide with the ASCA image. We chose the regions used in this analysis to be identical to those defined in Method A. To perform the actual temperature fitting, the pulse height data were binned in energy to achieve an adequate signal to noise ratio. The same energy range used in Method A was utilized, with the bins defined to be 1.5-2.0, 2.5-3.5, 3.5-5.5 and 5.5-11.0 keV. All of the processing so far was performed independently for each detector (SIS-0, SIS-1, GIS-2, and GIS-3) for each pointing. To determine the temperatures in each of the model regions, all of the SIS data-- from both the SIS-0 and SIS-1 detectors-- for all four pointings, for all the regions and for all of the energy bands were fit simultaneously. A similar procedure was followed for the GIS. To estimate the errors in our temperature solutions, we measured the standard deviation of the distribution of fit temperatures from 200 simulated spectra. Each spectrum was constructed by performing a Monte-Carlo simulation of the counts in each energy band, assuming a Gaussian distribution about the observed number of counts in that energy band in the data. The systematic errors were added to the data and model as appropriate.

\section{RESULTS \& DISCUSSION}
\label{sec:results}
\subsection{Temperature Structure}
Figure~\ref{fig:tmap} shows the continuous temperature map produced by Method A and indicates a temperature gradient across the cluster, increasing from 3.0 keV in the SE to 4.3 keV in the NW.
The regions defined earlier were designed to assess the significance of this trend, spanning the cluster along an axis from the SE to the NW (see Figure~\ref{fig:pspcimage} and Figure~\ref{fig:tmap}). Both spectral analysis methods (A \& B) discussed above were applied to the data and the results of the fitting are given in Table~\ref{tab:results} and Figure~\ref{fig:tplot}. \placetable{tab:results} \placefigure{fig:tplot} The first feature of note in Figure~\ref{fig:tplot} is the excellent agreement between the two methods for both the GIS and SIS detectors. This gives us confidence that the results of our fitting procedure are correct, at least to our knowledge of the ASCA PSF. The next feature to note in Figure~\ref{fig:tplot} is that the temperature variation appears to be abrupt rather than smooth, with a jump occurring in region \#4. The spectral fits for the SE subcluster (regions \#1-3) are consistent with a constant temperature of $3.2 \pm 0.1$ keV, and although the data for the NW subcluster (regions \#5-7) have larger error bars, they are consistent with a constant, but higher, temperature of $4.2\pm 0.3$ keV, spanning the bulk of the emission.

To study the nature of the transition, and as a consistency check, we shifted the regions by one half of a box width (2.5$^\prime$) along the SE-NW axis and reapplied Method A to the GIS data. For the shifted regions that overlapped with the original regions \#1--\#3, i.e. the SE subcluster, the fit temperatures were unchanged. Similarly the temperatures for the NW subcluster, regions \#5-\#7, were unchanged. However, the abruptness of the transition from SE to NW became more pronounced. Focusing on the regions near the transition-- \#3, \#4 and \#5-- the original GIS Method A temperatures were 3.3, 3.6 and 3.8 keV, respectively. After the shift, the region to the southeast of the middle of \#4 had a temperature of 3.4 keV while the temperature of the region to the northwest was 3.9 keV. If the extent of the transition were large, the shifted regions would, by their partial inclusion of intermediate gas in {\it two} shifted regions, have had a higher temperature than before on the southeastern side and a lower temperature on the northwestern side. Instead, the intermediate region effectively disappeared. Region \#4 appears to have been intermediate because it contained nearly equal amounts of the SE and NW subclusters. This abrupt temperature change, indicated by the temperature fits with and without the shift, is strongly suggestive of a shock located nearly at the midpoint of region \#4, generated during a collision between the two galaxy subclusters. In fact the temperature distribution and intensity contours are very similar in nature to the cross-sectional temperature and projected surface brightness profiles shown in Figures 3b-c and 5b-c, respectively, in Schindler \& M\"{u}ller\markcite{24} (1993). In these simulations, at 0.95 Gyr after the beginning of the merger, the smaller subcluster shows extensive heating in its core relative to its initial state as well as a strong gradient with radius. By 2.66 Gyr the cores are nearly in contact and the thermal gradient in the smaller subcluster has dissipated considerably. At the same time the temperature of the core of the larger member of the merger has begun to increase and develop a thermal gradient. In A1367, the measured gas temperatures of both the SE and NW subclusters are hotter than expected from the luminosity-temperature relation for clusters (e.g.
Edge \& Stewart 1991; David et al. 1993). Although there is moderate dispersion in the $\rm L_x$-T relation, it is possible that the SE subcluster gas also has begun to be heated due to the merger. This potential heating of the SE subcluster and apparent lack of thermal gradients in both subclusters, as well as the clear separation of the two peaks, suggest that we are observing the merger at a stage intermediate to those shown in the figures of Schindler \& M\"{u}ller, at approximately 1.8 Gyr after the onset. Observations by Gavazzi\markcite{13} et al. (1995) also suggest that A1367 is currently undergoing a merger. They find three head-tail radio galaxies in the NW subcluster that have extremely large radio/IR flux ratios as well as an extreme excess of giant HII regions on their leading edges, all of which are pointed towards the SE subcluster. From simulations, Roettiger\markcite{23} et al. (1996) find that the expected shock generated from a merger event could ``induce a burst of star formation'' as well as help to generate head-tail morphology.

Finally, in an effort to generate a detailed map of the cluster, we divided each of the rectangular regions into three roughly square (5$^\prime\times 5.3^\prime$) subregions, and re-applied Method A. By combining the temperature fitting for the GIS and SIS data we attempted to improve the statistics in each region and produce composite temperatures. The results are shown in Figure~\ref{fig:2dmap}. We also added four regions to the northeast of regions \#1-\#4 to examine the cool feature that appears in Figure~\ref{fig:tmap}. We see a trend of decreasing temperature from west to east across the subregions around the SE subcluster (\#41 to \#14). This may indicate that the merger is slightly oblique rather than head-on; however, we note that the variations are not highly significant. \placefigure{fig:2dmap} Figure~\ref{fig:2dmap} also helps to rule out the possibility that we are viewing the NW subcluster through a hot isothermal shell. In that case, we would expect the temperature to decline with decreasing projected radius, which is opposite to the trend found in our temperature maps.

\subsection{Density Profiles}
Typical values of the core radius for relaxed clusters range up to 0.6 Mpc, with the peak of the distribution around 0.2 Mpc (Jones \& Forman 1984). The core radii of the SE and NW subclusters are 0.42 and 0.49 Mpc respectively, and lie significantly toward the high end of the distribution. Simulations by Roettiger et al. (1996) show that during a merger the core radius of the gas can increase by a factor of two or more due to the increase in the central entropy of the gas through shocks. Further, the core radius of the NW subcluster is larger than that of the SE subcluster, indicating that the NW subcluster is even farther from a relaxed state than the SE subcluster. Similarly, the best-fit values of $\beta$ are 0.73 and 0.66 for the SE and NW respectively, which are relatively large compared to relaxed clusters with temperatures similar to those of A1367. In a relaxed cluster $\beta$ generally lies in the range of 0.4 to 0.8 and increases with the gas temperature. The typical value for a 4 keV cluster is between 0.5 and 0.6 (Jones \& Forman 1997).

\subsection{Abundances}
For Method A, where we fit the spectrum for each region independently, we allowed the abundance to be a free parameter in the fit. The results with error bars are given in Table~\ref{tab:results} and in Figure~\ref{fig:abun}.
Even though the error bars are quite large, there is some suggestion from these results that the NW subcluster has a lower abundance than the SE subcluster. To test this, we defined two regions on the GIS data, one surrounding each subcluster, and again applied Method A. The results from these fits gave abundances relative to solar of $0.26\pm 0.06$ and $0.11\pm 0.05$ for the SE and NW subclusters respectively. We note, however, that large changes in the abundance, e.g. assuming an abundance of 0.3 solar for the NW subcluster, had no significant effect on the fit temperatures. In Method B, where the abundances were held fixed during the fits, we determined the temperatures twice, once with an abundance of 0.3 solar and once with an abundance of 0.4 solar. The differences in the temperature solutions were inconsequential. Because the temperatures are relatively insensitive to abundance, for Method B we have presented only the results using the typical abundance of 0.3 solar. \placefigure{fig:abun}

\subsection{Mass Estimates}
We made two estimates of the mass of each subcluster by applying the equation of hydrostatic equilibrium with the same density profiles, but two independent temperatures. In the case of a merger and the subsequent heating of the ICM gas, the mass estimates, though very uncertain, do provide a means of comparing the masses of the two subclusters. The first estimate uses the temperature we derived from our spectral fitting; the second comes from applying the observed relation between temperature and X-ray luminosity found by David\markcite{5} et al. (1993). Independent of these calculations we also estimated the masses of the X-ray emitting gas in each subcluster. The assumption of hydrostatic equilibrium and spherical symmetry gives a simple equation for the total mass of the emitting system which, when combined with a density profile given by an isothermal $\beta$-model, reduces to:
\begin{equation}
M(r)=1.13\times 10^{15}\beta \left(\frac{T}{\rm 10\ keV}\right)\left(\frac{r}{\rm Mpc}\right) \frac{(r/R_c)^2}{1+(r/R_c)^2}M_\odot.
\label{eq:hydro2}
\end{equation}
To test the assertion of isothermality, we constructed three semi-circular annuli-- radii 0-6$^\prime$, 6-12$^\prime$, and 12-18$^\prime$-- centered on each subcluster, such that each sampled the region {\em away} from the other subcluster (in order to exclude the region of the shock and the other subcluster), and applied Method A to the data. Due to the limited coverage of the SIS in the outer areas, we used only the GIS data. A region 5$^\prime$\ in diameter centered on NGC 3862 was once again excluded from the analysis. The summed spectra for each region were fit with a Raymond-Smith model to determine the temperatures. For the SE subcluster annuli, Method A applied to the GIS data alone gives temperatures of $3.1\pm 0.1$ keV, $3.1\pm 0.2$ keV and $3.6\pm 0.4$ keV for the inner, intermediate and outer annuli respectively. For the NW subcluster the corresponding GIS-only temperatures were $4.9\pm 0.5$ keV, $4.7\pm 0.4$ keV and $4.6\pm 0.7$ keV for the inner, intermediate and outer annuli respectively. We note that the large errors in the outermost annuli are due to their close proximity to the edge of the GIS imaging area. Both profiles are consistent with isothermal conditions and with the previously measured temperatures from the appropriate rectangular regions. We know from our previous results that there is temperature structure in the cluster, namely the shock.
However, to derive an estimate of the masses, we assume isothermality throughout each subcluster. We used the weighted average temperatures ($\rm T_c$) of 3.2 keV and 4.2 keV applied to Equation~\ref{eq:hydro2} to calculate the mass within 0.5 Mpc. These include the results from both Method A and Method B for the GIS and SIS detectors. We also have extrapolated the observed mass to that within 1 Mpc for ease of comparison with previous work. The temperature of 4.2 keV-- instead of those listed above-- was selected for the NW subcluster due to the consistently higher results found for it (regions \#6, \#7 and \#8) using Method A with the GIS data (see Figure~\ref{fig:tplot}). The resultant mass estimates are given in Table~\ref{tab:data}.

The second hydrostatic equilibrium mass estimate uses the same gas density profile as the first, but with a different temperature. We have used the empirical relation between temperature and luminosity from David\markcite{5} et al. (1993),
\begin{equation}
kT_{eff}=10^{-0.72}\left(\frac{L_{bol}}{10^{40}}\right)^{0.297}
\label{eq:tlum}
\end{equation}
to estimate an effective temperature ($\rm T_{eff}$) for a similarly luminous, but undisturbed, cluster. We determined the total flux within the ROSAT band-- from 0.5 to 2.0 keV-- from each of the previously defined annuli. Applying azimuthal symmetry, we found the total flux within a radius of 18$^\prime$\ (0.66 Mpc) for each subcluster, and then calculated the bolometric luminosity (David et al. 1997). Finally, the gas mass was estimated by inverting the formula from David\markcite{4} et al. (1990),
\begin{eqnarray}
\lefteqn{L(r)=\frac{2\pi n_e n_H \Lambda_0 a^3}{(1-3\beta)}\times}\nonumber\\
& \int\limits^{\infty}_{0}\left\{ \left[ 1+s^2+\left(\frac{r}{a}\right)^2\right]^{-3\beta +1}-\left(1+s^2\right)^{-3\beta+1}\right\} ds
\label{eq:lumden}
\end{eqnarray}
and solving for the central density. We then integrated the density distribution to find the gas mass,
\begin{equation}
M_{gas}=4\pi\rho_o\int\limits^r_0 s^2\left(1+\left[\frac{s}{R_c}\right]^2\right)^{-\frac{3\beta}{2}}ds,
\label{eq:mass}
\end{equation}
using the density distribution corresponding to the surface brightness given in Equation~\ref{eq:sb} for an isothermal gas with $\rho_o = \mu_e n_e m_p$, where $\mu_e$ is the mean molecular weight per electron. The resultant gas masses are given in Table~\ref{tab:data} and yield, when compared to the mass estimates (using $\rm T_{eff}$ for the NW subcluster), gas mass fractions of $\sim$12\% at 0.5 Mpc and $\sim$16\% at 1 Mpc in each subcluster, which are typical of rich clusters. Although, as stated above, there is considerable dispersion in the $\rm L_x-T$ relation, the temperature estimates from the luminosity for both subclusters are lower than the measured values. For the SE subcluster the difference is moderate, with an estimated temperature-- and thus mass-- 31\% lower than the measured values. However, for the NW subcluster the difference is nearly twice as large. We measure a luminosity of $\rm 0.29\times 10^{44}\ ergs\ s^{-1}$ and thus estimate a gas temperature of 2.0 keV, which is 52\% lower than the measured value of 4.2 keV. Again the results for the masses within 0.5 Mpc and 1.0 Mpc are given in Table~\ref{tab:data}. As discussed above, the estimates of the masses of the two subclusters have considerable uncertainty associated with them due to the perturbations from the merger.
However, the two subclusters appear to be experiencing similar changes to their density profiles, thus allowing a {\it relative} comparison of their masses. While the masses derived from the measured temperatures are essentially equal, the NW subcluster appears to have been heated more than the SE subcluster, suggesting that it is actually less massive than the SE subcluster. This is further supported by the calculated gas mass.

\section{CONCLUSION}
\label{sec:conclusion}
We have analyzed the ROSAT PSPC and ASCA SIS and GIS observations of A1367. For the ASCA data, we have applied two different analysis techniques for measuring the intracluster gas temperature and find excellent agreement between them. Our analysis indicates that we are observing the early stages of a slightly unequal merger between the two subclusters, occurring along an SE-NW axis nearly in the plane of the sky. We find evidence for a shock-like feature along the merger axis between the two subclusters, as well as heating of the gas throughout both subclusters, with the smaller NW subcluster being heated more than the SE subcluster. This is in excellent agreement with predictions from the merger simulations by Evrard\markcite{10}\markcite{11} (1990a and b) and Schindler \& M\"{u}ller\markcite{24} (1993). We also note that the surface brightness profile of the NW subcluster is ``puffing out'', as indicated by its larger core radius. This is similar to the effects identified by Roettiger\markcite{23} et al. (1996) as a signature of a merger event. Our detailed temperature map of the cluster suggests that the merger may be occurring slightly obliquely, with the cluster cores passing each other traveling north-south, but the statistics of the data are not sufficient to support any firm conclusion on this point. Future studies of A1367 and other merging clusters will provide a clearer picture of the detailed interactions which occur as clusters form. With the launch of AXAF in 1998, we can expect to obtain much higher angular resolution temperature maps with which to study the merger process and the structure of the shocks which are produced.

\acknowledgments RHD, MM, WF, CJ and LPD acknowledge support from the Smithsonian Institution and NASA contract NAS8-39073.
\section{Summary}
\input{sections/7.summary}
\section*{Acknowledgements}
This research received funding from the Flemish Government under the ``Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen'' programme. The authors thank Maurice Bruynooghe for his review of the drafts of this paper.
\bibliographystyle{packages/kr}
\section{Introduction}
\newcommand{{\mathcal{I}}}{{\mathcal{I}}}
\newcommand{\ensuremath{\mathit{hasFever}}}{\ensuremath{\mathit{hasFever}}}
\newcommand{\ensuremath{\mathit{sneezes}}}{\ensuremath{\mathit{sneezes}}}
\newcommand{\ensuremath{\mathit{coughs}}}{\ensuremath{\mathit{coughs}}}
\ignore{ With the advent of strong solver technologies from SAT, CP, SMT, ASP, \dots, it becomes possible for those with interest in the nature of knowledge in all its complexity and variation, to study real software applications from a Knowledge Representation and Reasoning (KRR) point of view, to analyze the sort of knowledge available in them, to study the sorts of inferences that would help to solve problems arising in those domains, and to implement such solutions. Often such experiments pose interesting new challenges on the KR level suggesting new language constructs or sometimes a new role for concepts that were introduced before in philosophical logic papers, of which the relevance to software problems was unclear so far. Being able to implement the language constructs makes a big difference: we can evaluate the solution. \todo{remove duplication in next paragraph. Or remove both per Joost.} Thanks to the development of powerful solvers in SAT, SMT, ASP, CP, and MILP, the field of Knowledge Representation (KR) has entered a new phase: the age of computational Knowledge Representation, where such tools are used as reasoning engines to perform tasks and help solve problems by applying the suitable form of reasoning on some knowledge base expressed in an expressive KR language. \ignore{Piere:I suspect that our use of `inference engine` is not standard: it usually refers to the engine behind rule-based systems. I prefer "reasoning engine". It also matches better with KRR = Knowledge Representation and *Reasoning* } }

The power of a KR language for the compact expression of knowledge lies to an important extent in the way it allows us to abstract over certain types of objects, e.g., to quantify, count, or sum over them. Classical First Order logic (FO) is a much stronger KR language than Propositional Calculus (PC) because it allows quantification over domain objects. Still, the abstraction power of FO is limited: e.g., it does not allow counting or summing over domain objects satisfying some condition. Many first-order modeling languages are therefore extended to support \emph{aggregates}. Likewise, one can only quantify over individuals in the domain, not over relations and functions. This is resolved in Second Order logic (SO). In this paper, we argue that in some KR applications, we want to abstract (i.e., quantify, count, sum, \dots) not over relations and functions, but over certain {\em concepts} that are available in the knowledge base vocabulary, as the following example will show. \ignore{ These concepts may be zero order, i.e., object concepts such ``earth'' or ``moon'', or first order, such as sets of patients exhibiting specific symptoms of a disease.
\ignore{``\st{symptoms of diseases that different patients may show.}'' was a bit confusing, because it seems to describe a predicate `Patient(symptoms)', whereas it actually describes a group of `symptom(Patient)' predicates!} }
\begin{example}
Consider a corona testing protocol. A person is to be tested if she shows at least 2 of the following symptoms: fever, coughing and sneezing. The set of persons showing a symptom is represented by a unary predicate ranging over persons, i.e., $\ensuremath{\mathit{hasFever}}{}/1$, $\ensuremath{\mathit{coughs}}{}/1$, and $\ensuremath{\mathit{sneezes}}{}/1$. Leaving first order behind us, assume that the set of symptoms is represented by the predicate $Symptom/1$. Testing is expressed as a unary predicate $\mathit{test1}/1$, ranging over persons. Informally, this predicate is to be defined along the following lines (while freely extending the syntax of FO with the ``$\#$'' cardinality operator on a set):
\begin{equation} \label{eq1}
\forall p : test1(p) \Leftrightarrow 2\leq \# \{ x | Symptom(x) \land x(p)\}
\end{equation}
Alternatively, assume another test is to be taken iff the patient shows all 3 symptoms. Now the definition is along the following lines:
\begin{equation} \label{eq2}
\forall p : test2(p) \Leftrightarrow \forall x (Symptom(x) \Rightarrow x(p))
\end{equation}
\end{example}
The question is: what is the nature of the variable $x$ and the predicate $Symptom$? Over what sorts of values does $x$ range? At first sight, one might think that $x$ ranges over sets of persons and, hence, that it is a second order variable. \ignore{That, at least, was our first impression in this and similar examples. In retrospect, it is not difficult to see that this is \emph{incorrect}.} An easy `possible world' analysis refutes this idea. Consider a state of affairs where everybody has all 3 symptoms. Formally, in a structure ${\mathcal{I}}$ abstracting such a state, the interpretations of the symptoms are identical: $\ensuremath{\mathit{hasFever}}^{\mathcal{I}}=\ensuremath{\mathit{coughs}}^{\mathcal{I}}=\ensuremath{\mathit{sneezes}}^{\mathcal{I}}$. The set in Eq.~\ref{eq1},
\begin{equation*}
\{x|x=\ensuremath{\mathit{hasFever}}^{\mathcal{I}} \lor x=\ensuremath{\mathit{coughs}}^{\mathcal{I}} \lor x=\ensuremath{\mathit{sneezes}}^{\mathcal{I}}\}
\end{equation*}
is then a singleton, and the condition for the first test is not satisfied for any $p$, not even for poor Bob, who has all three symptoms. \ignore{ Thus, in rule (1), $p$ cannot be a second order variable, abstracting over sets of domain elements. Instead, it abstracts over the intension of the $Symptom$ symbols. It so happens that, in rule (2), $p$ can be seen either as a second order variable or as an intensional variable: in both cases, the formula has the same models. In later sections, we will see that second order variables are not subsumed in any way by intensional variables, rather, they tackle different problems. Both are of value, but they are not interchangable, and we need to learn to distinguish them. }
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{images/Intensions.png}
\caption{We extend structures in classical logic with objects representing the intension of symbols in the ontology}
\label{fig:interpretation}
\end{figure*}
A different idea is that $x$ is ranging over {\em symbols}: the symbols \ensuremath{\mathit{hasFever}}{}, \ensuremath{\mathit{coughs}}{}, \ensuremath{\mathit{sneezes}}{}.
This sort of variable is generally found in meta programming, of the kind investigated in, e.g., Logic Programming (\cite{DBLP:journals/logcom/BarklundDCL00}) or Hilog~(\cite{chen1993hilog}). Again, it is not difficult to see that this, too, is not really the case. Assume, for the sake of insight, that the team working on this application is international, and the French group has introduced the predicate $\mathit{estFievreux}/1$, French for ``has fever''. Thus, \ensuremath{\mathit{hasFever}}{} and $\mathit{estFievreux}$ are {\em synonymous}. Clearly, $\mathit{John}$, whose only symptom is fever, does not need to be tested. However, because $\mathit{estFievreux}$ and \ensuremath{\mathit{hasFever}}{} are different symbols (and both would be declared in $Symptom$), the meta programming view of the first definition would define $\mathit{test1}$ to erroneously hold for $\mathit{John}$. \ignore{ Pierre:I'd rather use 2 synonyms in the example (e.g. sneezing and sternutation), rather than take a translation. Or use antonyms, e.g. short and tall ?} \ignore{Marc: I like fever and fievre.}

The view that we elaborate in this paper is that the abstraction in both examples is over {\em concepts}. For context, consider that the first phase of a rigorous approach to building a KR specification is the selection of a formal ontology $\voc$ of symbols representing relevant {\em concepts} of the application field. These concepts can be identified with the user's \emph{informal interpretations} of the symbols in the ontology of the domain. The informal interpretation is in practice a crucial notion in all knowledge representation applications: it is the basis of all acts of formal knowledge representation, and of all acts of interpreting formal results of computation in the application domain. Connecting concepts to informal interpretations provides a good intuitive explanation of what concepts are and how they are relevant in a KR context. A concept is not a value, but takes on a particular value in any particular state of affairs. Since the informal meaning of symbols is known, the synonymity relation $\sim_s$ is known too, and it is an equivalence relation on $\voc$. Synonymous symbols have the same arity and type.

Formally, in the logic, an ontology symbol $\sigma$ has one \emph{intension} and, for each structure ${\mathcal{I}}$, one \emph{extension} (Fig. \ref{fig:interpretation}). Synonymous symbols $\sigma_1$, $\sigma_2$ have the same intension and, in every structure ${\mathcal{I}}$, the same extension. The extension of $\sigma$ in a structure is its interpretation, aka value, in that structure. It is a relation or function, of a precise arity, over the universe of discourse; in formulae, it is denoted simply by $\sigma$. In classical FO logic, a structure only has the extensions of symbols. For our purpose, we extend a structure to also contain the intensions of symbols. This intension is \emph{rigid} in the sense that it is the same in every extended structure. In formulae, the intension of $\sigma$ is denoted by $\deref{\sigma}$. Finally, we introduce the \emph{value functor} $\$(\cdot)$: when applied to an intension, it returns its interpretation in the current structure.
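To illustrate the difference between the three candidate readings, the following small Python sketch (our own illustration, not the logic itself, with names chosen freely) evaluates the testing rule over explicit structures in which intensions are shared objects and extensions are assigned to intensions.
\begin{verbatim}
# Illustrative sketch: three candidate readings of the Symptom rule.
# Synonymous symbols (hasFever / estFievreux) share one intension.
INTENSION = {"hasFever": "FEVER", "estFievreux": "FEVER",
             "coughs": "COUGH", "sneezes": "SNEEZE"}
SYMPTOM_CONCEPTS = {"FEVER", "COUGH", "SNEEZE"}

def test1_concepts(ext, p):
    """Concept reading: count concepts whose value contains p."""
    return sum(p in ext[c] for c in SYMPTOM_CONCEPTS) >= 2

def test1_sets(ext, p):
    """Second-order reading: count distinct sets; equal extensions collapse."""
    return sum(p in s for s in {frozenset(ext[c]) for c in SYMPTOM_CONCEPTS}) >= 2

def test1_symbols(ext, p):
    """Meta-programming reading: count symbols; synonyms are counted twice."""
    return sum(p in ext[INTENSION[s]] for s in INTENSION) >= 2

# Structure 1: everybody (here just Bob) shows all three symptoms.
ext1 = {"FEVER": {"Bob"}, "COUGH": {"Bob"}, "SNEEZE": {"Bob"}}
print(test1_concepts(ext1, "Bob"), test1_sets(ext1, "Bob"))      # True False

# Structure 2: John's only symptom is fever.
ext2 = {"FEVER": {"John"}, "COUGH": set(), "SNEEZE": set()}
print(test1_concepts(ext2, "John"), test1_symbols(ext2, "John")) # False True
\end{verbatim}
Only the concept reading both tests Bob when all extensions coincide and refrains from testing John in the presence of the synonym $\mathit{estFievreux}$.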
\ignore{
\begin{figure}[h]
\centering
\begin{tikzpicture} [roundnode/.style={draw=black, ellipse, minimum width=70pt, align=center},]
\node (syntax) {Syntax};
\node (semantics) [right = 4 cm of syntax] {Semantics};
\node[roundnode] (symbol1) [below =0.3 cm of syntax] {$\deref{\sigma}$};
\node[roundnode] (concept) [below =0.3 cm of semantics] {Concept};
\node[roundnode] (symbol) [below =0.6 cm of symbol1] {$\sigma$\\ $\$(\deref{\sigma})$};
\node[roundnode] (value) [below =0.5 cm of concept] {Relation or\\function};
\draw[->] (symbol1.east) -- (concept.west) node[midway, above]{interpretation};
\draw[->] (symbol.north east) -- (concept.south west) node[midway, below, rotate=13]{intension};
\draw[->] (concept.south) -- (value.north) node[midway, right=0.1cm]{value};
\draw[->] (symbol.east) -- (value.west) node[midway, below]{interpretation};
\end{tikzpicture}
\caption{Intension and interpretation}
\label{fig:interpretation}
\end{figure}
}
\ignore{
\begin{figure}[h]
\centering
\begin{tikzpicture} [roundnode/.style={draw=black, ellipse, minimum width=70pt, align=center},]
\node[roundnode] (symbol) {Symbol $\sigma$};
\node[roundnode] (concept) [above right = 0.55cm and 1 cm of symbol] {Concept $\deref{\sigma}$};
\node[roundnode] (value) [below right = 0.35cm and 1 cm of concept] {Set or\\function};
\draw[->] (symbol.east) -- (value.west) node[midway, below]{interpretation};
\draw[->] (symbol.north east) -- (concept.west) node[midway, left=0.1cm]{intension (`)};
\draw[->] (concept.east) -- (value.north west) node[midway, right=0.1cm]{interpretation (\$)};
\end{tikzpicture}
\caption{Intension and interpretation}
\label{fig:interpretation}
\end{figure}
}
In the example, the intensions of the symbols $\ensuremath{\mathit{hasFever}}$ and $\mathit{estFievreux}/1$ are the same object, denoted $\deref{\ensuremath{\mathit{hasFever}}}$ (or $\deref{\mathit{estFievreux}}$), which is the abstraction of the concept of people having fever. The symbol $Symptom$ denotes the set containing the intensions $\deref{\ensuremath{\mathit{hasFever}}}, \deref{\ensuremath{\mathit{coughs}}}$ and $\deref{\ensuremath{\mathit{sneezes}}}$. In any particular structure, the $\ensuremath{\mathit{hasFever}}{}$ symbol has an interpretation, which is a set of elements abstracting the set of people having fever in that state of affairs. This set is also the value of the $\deref{\ensuremath{\mathit{hasFever}}}$ intension. Suppose we want to count the concepts in $Symptom$ whose value (a set) contains person $p$. The following formula expresses the testing protocol:
\begin{equation} \label{eq:rule1}
\forall p : test1(p) \Leftrightarrow 2\leq \# \{ x | Symptom(x) \land \$(x)(p)\}
\end{equation}
\ignore{ remove next 2 paragraphs and expand in contribution. However, formula~(\ref{eq:rule1}) is incorrect because $x$ quantifies not only over concepts, but also over domain elements, while the value function $\$$ has meaning only for arguments that are concepts. Furthermore, when the value of a concept $x$ is not a unary relation, $\$(x)(p)$ makes no sense. Under strict evaluation, the sentence would make no sense either. Thus, one of the challenges of defining the logic is to find a flexible mechanism to ensure that a formula makes sense. The mechanism should be defined at the syntactical level: it should not depend on the particular interpretation of symbols, such as $\mathit{Symptom}$, to avoid performance issues. We introduce such a mechanism, and show that its complexity is linear with the length of the formula.
} In the current state of the art of Knowledge Representation, one might consider, as we once did, applying the common KR technique of \emph{reification}, rather than laboriously extending KR languages as we do. In this particular case, one could introduce object symbols $\mathit{Fever}, \mathit{Sneezing}, \mathit{Coughing}$ (under the Unique Name Assumption), and a predicate, $\mathit{Has}$. We can now represent that Bob has fever by the atom $\mathit{Has}(\mathit{Bob},\mathit{Fever})$. The testing protocol can then be formalized (in a proper extension of FO) as: \begin{gather*} \begin{split} \forall x : &\mathit{Symptom}(x) \Leftrightarrow \\ &x=\mathit{Fever} \lor x=\mathit{Sneezing} \lor x=\mathit{Coughing} \end{split}\\ \begin{split} \forall p : \mathit{test1}(p) \Leftrightarrow 2\leq \# \{ x | \mathit{Symptom}(x) \land \mathit{Has}(p,x)\} \end{split} \end{gather*} Still, we believe that our study is valuable for two reasons. First, we improve the scientific understanding of the problem, thus helping to avoid KR encoding tricks that merely circumvent it. Second, our logic is more \emph{elaboration tolerant}. Per \cite{mccarthy1998elaboration}, ``a formalism is elaboration tolerant to the extent that it is convenient to modify a set of facts expressed in the formalism to take into account new phenomena or changed circumstances''. Elaboration tolerance is a desirable feature of formalisms used in AI applications, and in particular in knowledge representation. \ignore{The simplest kind of elaboration is the addition of new formulas.} Consider a knowledge base that, initially, does not contain rule~(\ref{eq1}). Per standard KR practice, it would use predicates such as $\ensuremath{\mathit{hasFever}}{}/1$ to encode symptoms. Then, circumstances change, and rule~(\ref{eq1}) must be introduced. A formalism that requires rewriting the knowledge base using reified symptoms is not elaboration tolerant. Our formalism, by contrast, would not require any change to the initial knowledge base. \ignore{ There are two reasons why we do not follow this road here. First, reification is a useful KR encoding technique but such techniques do not give insight in the nature of the underlying knowledge. We believe it is important to look at the problem from a more theoretical angle. Second, reification has also downsides: Imagine a large on-going project, with many users and many knowledge components partially based on the same vocabulary. Assume at some point, some group of symbols needs to be reified to express a proposition. Unless care is taken, this has impact on all components of the applications that use the symbols being reified, it may confuse users, reduce the readability of the knowledge representation, and it may easily introduce errors. It seems to us that future KR systems might better be equipped with language constructs that support such applications without the need for reification.\dmar{I think we can add here the problem that occurred in my pre-dock work on FM. Namely, imagine that you have a set of unary functions, but their type does not have to match. If we introduce objects representing these functions (funs) and want to introduce function val(value, funs)->value, you can not do it, since what is the type of value.} } To summarize, the main contributions of the paper are as follows. We argue that in some KR applications, it is useful to abstract over concepts in quantification and aggregates.
We analyse the connection to the well-known field of intensional logic (Section~\ref{sec:intensional-logic}), and explain how to distinguish this from second order quantification and meta-programming (Section~\ref{sec:related}). We explain how to extend a simple predicate logic so that every ontology symbol has an intensional and an extensional component~(Section~\ref{sec:FoConcept}). We show that this may lead to nonsensical sentences, and we restrict our syntax to avoid them. In Section~\ref{sec:FoInt}, we extend FO($\cdot$), a model-based knowledge representation language, and illustrate the benefits of the extension in different applications~(Section~\ref{sec:examples}). \ignore{We also briefly report on a model expansion system, called IDP-Z3, which supports FO($\cdot$) with such extension.} \section{``Concepts'' in Logic} \label{sec:intensional-logic} The notions of a ``concept'' and of its ``value'' in a particular state of affairs that we attempt to study here are closely related, if not identical, to the broadly studied distinction between \emph{intension} and \emph{extension}, \emph{meaning} and \emph{designation}, or \emph{sense} and \emph{reference} as studied in philosophical and intensional logic. The close relation between the two urges us to detail the role of concepts or ``intensions'' in intensional logic, and to make the comparison with our approach. In philosophical logic, the concepts of \emph{intension} and \emph{extension} have been investigated at least since Frege (for a historical overview see \cite{sep-logic-intensional}). A prototypical example concerns the ``morning star'' and ``evening star'', i.e., the star visible in the east around sunrise, and in the west around sunset. These words have two different intensions (i.e., they refer to two different concepts), but as it happens, in the current state of affairs, their extensions (i.e., their interpretation) are the same: the planet also known as Venus. Early key contributors in this study are Frege, Church, Carnap and Montague. The intensional logics of Montague, of Tich\'y, and of Gallin, are typed modal logics based on \emph{Kripke semantics} in which expressions at any level of the type hierarchy have associated intensions. They provide notations to access the intension and the extension of expressions. They also provide modal operators to talk about the different values that expressions may have in the \emph{current} and in \emph{accessible} worlds. The principles of intensions of higher order objects are similar to those of the base level (domain objects), leading Fitting in several papers (\citeyear{fitting2004first,sep-logic-intensional}) to develop a simplified logic FOIL where only expressions at the base level of the type hierarchy have intensions. For example, in the logic FOIL~(\cite{fitting2004first}), the intension of $\mathit{EveningStar}$ is the mapping from possible worlds to the extension of $\mathit{EveningStar}$ in that world. We can express that the \emph{extension} of $\mathit{MorningStar}$ in the current world equals the \emph{extension} of $\mathit{EveningStar}$ in a (possibly different) \emph{accessible} world: \begin{equation} [\lambda x(\diamond (\mathit{EveningStar}=x))] (\mathit{MorningStar}) \end{equation} Here the lambda expression binds variable $x$ to the extension of $\mathit{MorningStar}$ in the current world, and the ``$\diamond$'' modal operator is used to indicate an accessible world.
This statement would be true, e.g., in the view of a scientist who may not have evidence that both are the same but \emph{accepts it as a possibility}. A comparison between the work presented in this paper and the intensional logics in philosophical logic is not easy. Even though the notion of intension versus extension underlies the problems that we study here, the focus in our study is on very different aspects of intensions than in philosophical logic. As a result, the logic developed here differs quite strongly from intensional logics. An important difference is the lack of modal logic machinery in our logic to analyse the difference between intensions and extensions. In our logic, a symbol has an extension (aka interpretation) in different possible worlds, but there are no modal operators to ``talk'' about the extension in other worlds than the current one. On the other hand, our study is on aspects that occur more often in daily KR applications. This is why we stress the link between intensions and the pragmatically important concept of informal interpretation of symbols. In such KR applications, problems such as the symptom example easily occur (see Section~\ref{sec:examples}). They can be analyzed, demonstrated and solved in a logic much closer to standard logic. Another important difference is that we provide an abstraction mechanism through which we can quantify over, and count, intensional objects (aka concepts). Compared to FOIL, we associate intensions with predicate and function symbols of any arity, even though, as hinted earlier, this poses additional challenges to ensure the syntactical correctness of formulae. We will address these by introducing guards. By contrast, FOIL associates intensions only with intensional objects of arity 0, like $\mathit{MorningStar}$, avoiding the issue. Last but not least, our research also includes the extension of an existing reasoning engine for the proposed logic, so that statements such as (3) can be written and used to solve problems. \section{FO with Concepts} \label{sec:FoConcept} Below, we describe how first order logic, in its simplest form, can be extended to support quantification over concepts. The language, FO(Concept), is purposely simple, to explain the essence of our ideas. We present the syntax~(\ref{sec:syntax}), the semantics~(\ref{sec:semantics}), and alternative formulations of the extension~(\ref{sec:alt}). \subsection{Syntax} \label{sec:syntax} We want to extend the FO syntax of terms over vocabulary \voc{} with the following four new construction rules: \begin{itemize} \item[+] $\numeral{n}$ is a term if $\numeral{n}$ is a numeral, i.e., a symbol denoting an integer; \item[+] $\#\{x_1,\dots,x_n:\phi\}$ is a term if $x_1,\dots,x_n$ are variables and $\phi$ a formula; \item[+] $\deref{\sigma}$ is a term if $\sigma \in \voc$; \item[+] $\val{x}(t_1,\ldots,t_n)$ is a term if $x$ is a variable, and $t_1, \ldots, t_n$ are terms over \voc{}. \end{itemize} The last rule is problematic, however. In a structure ${\mathcal{I}}$ extended as we propose, $x$ ranges not only over the domain of ${\mathcal{I}}$, but also over the set of concepts corresponding to the symbols in \voc{}: when the value of $x$ is such a concept, $\val{x}$ is its extension; but when $x$ is not such a concept, $\val{x}$ is undefined. Furthermore, the value assigned to $x$ may be the concept of a predicate symbol: in that case, $\val{x}(t_1,\ldots,t_n)$ is not a term.
Finally, the value of $x$ may be the concept of a function of arity $m \neq n$: in that case, $\val{x}(t_1,\ldots,t_n)$ is not a well-formed term. For example, $\val{x}()$ is not a well-formed term in the following cases (among others): \begin{itemize} \item $[x=1]$ where $1$ denotes an element of the domain of discourse; \item $[x=\deref{p}]$ where $p$ is a predicate symbol; \item $[x=\deref{f}]$ where $f$ is a function symbol of arity 1. \end{itemize} Essentially, whereas in FO the arities of function and predicate symbols are known from the vocabulary and are taken into account in the definition of well-formed composite terms and atoms (by requiring that the number of arguments matches the arity), this is no longer possible here, due to the lack of information about $\val{x}$. Thus, to define well-formed terms, we need more information about the variables occurring in them. We formalize that information in a typing function. We call $\gamma$ a \emph{typing function} if it maps certain variables $x$ to pairs $(k,n)$ where $k$ is either \predi{} or \funci{}, and $n$ is a natural number. Informally, when a variable $x$ is mapped to $(k,n)$ by $\gamma$, we know that $x$ is a concept, of kind $k$ and arity $n$. For a given $\gamma$, we define $\gamma[x:(k,n)]$ to be the function $\gamma'$ identical to $\gamma$ except that $\gamma'(x)=(k,n)$. This allows us to define the notion of well-formed term and formula: \begin{definition} We define that a string $e$ is a \emph{well-formed term} over $\voc$ given a typing function $\gamma$ (denoted $\gamma \guards{t} e$) by induction: \begin{itemize} \item $\gamma \guards{t} x$ if $x$ is a variable; \item $\gamma \guards{t} f(t_1,\dots,t_n)$ if $f$ is an $n$-ary function symbol of $\voc$ and for each $i$, $\gamma\guards{t} t_i$; \item[+] $\gamma \guards{t} \numeral{n}$ if $\numeral{n}$ is a numeral, a symbol denoting the corresponding integer $n$; \item[+] $\gamma \guards{t} \#\{x_1,\dots,x_n:\phi\}$ if $x_1,\dots,x_n$ are variables and $\gamma\guards{f} \phi$; \item[+] $\gamma \guards{t} \deref{\sigma}$ if $\sigma \in \voc$; \item[+] $\gamma \guards{t} \$(x)(t_1,\dots,t_n)$ if $x$ is a variable, $\gamma(x)=(\funci{}, n)$ and for each $i \in [1,n]$, $\gamma\guards{t} t_i$. \end{itemize} \end{definition} (The four rules with a ``+'' bullet are those we add to FO's classical definition of terms.)
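As an informal illustration of these rules (a sketch only: the tuple encoding of terms, the symbol tables and all names below are our own and are not part of the logic), the term rules can be checked by a single recursive pass, given a typing function $\gamma$:
\begin{lstlisting}[language=Python]
# Sketch of the well-formed-term check. Terms are encoded as nested tuples;
# gamma maps variables to ("pred" | "func", arity), as in the definition above.
FUNCS = {"motherOf": 1}                    # assumed n-ary function symbols of the vocabulary
VOC   = {"hasFever", "coughs", "motherOf"} # assumed vocabulary

def wf_term(t, gamma):
    kind = t[0]
    if kind == "var":                      # a variable is always a well-formed term
        return True
    if kind == "num":                      # a numeral
        return isinstance(t[1], int)
    if kind == "apply":                    # f(t1,...,tn) with f in the vocabulary
        _, f, args = t
        return FUNCS.get(f) == len(args) and all(wf_term(a, gamma) for a in args)
    if kind == "intension":                # `sigma with sigma in the vocabulary
        return t[1] in VOC
    if kind == "value":                    # $(x)(t1,...,tn): gamma must type x as an n-ary function concept
        _, x, args = t
        return gamma.get(x) == ("func", len(args)) and all(wf_term(a, gamma) for a in args)
    return False                           # cardinality aggregates recurse into the formula check (omitted)

t = ("value", "x", [("var", "p")])         # the term $(x)(p)
assert wf_term(t, {"x": ("func", 1)}) and not wf_term(t, {})
\end{lstlisting}
Each sub-term is inspected exactly once, so the check is linear in the size of the term, in line with the complexity analysis below.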
\begin{definition} We define that a string $\phi$ is a \emph{well-formed formula} over $\voc$ given $\gamma$ (denoted $\gamma \guards{f} \phi$) by induction: \begin{itemize} \item $\gamma\guards{f} \Tr$, $\gamma\guards{f}\Fa$; \item $\gamma \guards{f} p(t_1,\dots,t_n)$ if $p$ is an n-ary predicate of $\voc$ and \\ for each $i \in [1,n]$, $\gamma\guards{t} t_i$; \item $\gamma \guards{f} (\lnot \phi)$ if $\gamma \guards{f} \phi$; \item $\gamma \guards{f} (\phi \lor \psi)$ if $\gamma \guards{f} \phi$ and $\gamma \guards{f} \psi$; \item $\gamma \guards{f} \exists x: \phi$ if $x$ is a variable and $\gamma \guards{f} \phi$; \item $\gamma \guards{f} t_1=t_2$ if $\gamma\guards{t} t_1$ and $\gamma\guards{t} t_2$; \item[+] $\gamma \guards{f} t_1 \leq t_2$ if $\gamma\guards{t} t_1$ and $\gamma\guards{t} t_2$\\ and $t_1$ and $t_2$ are numerals or cardinality aggregates; \item[+] $\gamma \guards{f} (\ite{x::k/n}{\phi}{\psi})$ if $x$ is a variable, \\ $k$ is either \predi{} or \funci{}, $n$ is a natural number, \\ $\gamma[x:(k,n)] \guards{f} \phi$ and $\gamma\guards{f} \psi$; \item[+] $\gamma \guards{f} \$(x)(t_1,\dots,t_n)$ if $x$ is a variable, \\ $\gamma(x)=(\predi{},n)$ and for each $i \in [1,n]$, $\gamma\guards{t} t_i$. \end{itemize} \end{definition} Notice that, because of the aggregate term rule, these definitions are mutually recursive. Also, due to their constructive nature, well-formed terms and formulae can be arbitrarily large, but always finite. Formulae of the form \begin{equation*} \phi \land \psi, \phi\Rightarrow\psi, \phi\Leftrightarrow\psi, \forall x : \phi \end{equation*} are defined through their usual equivalences \begin{equation*} \lnot(\lnot \phi \lor \lnot \psi), \neg\phi\lor\psi, (\phi \land \psi) \lor (\neg\phi \land \neg \psi), \neg\exists x : \neg \phi \end{equation*} and are not further discussed. The other comparison operators, $<, >, \ge$, can be defined similarly. Let $\emptycontext$ be the typing function with empty domain. We say that $\phi$ is a \emph{well-formed formula} over $\voc$ if $\emptycontext\guards{f} \phi$. \begin{example} Here is a well-formed version of the cardinality sub-formula of Equation~\ref{eq:rule1}: \begin{align*} \# \{ x | &\ite{x::\predi{}/1}{Symptom(x) \land \$(x)(p)}{\\&false}\} \end{align*} \end{example} We say that $Symptom(x) \land \$(x)(p)$ is \emph{guarded} by the $x::\predi{}/1$ condition, and that the formula is \emph{well-guarded}. Note that, in this logic, a value $\val{x}$ cannot occur in a formula without immediately being applied to a tuple of arguments. \paragraph{Complexity} The determination of the well-formedness of a formula can be performed by backward reasoning, applying the appropriate inductive rule at each step based on the syntactic structure of the formula under consideration. Notice that the quantifications occurring in the formula are eliminated by the reasoning. Assuming a unit cost for executing each inductive step, the complexity of the computation is thus linear with the number of sub-formulae, i.e., with the length of the formula. \subsection{Semantics} \label{sec:semantics} Let an \emph{ontology} \Ont{} be a pair of a vocabulary $\voc$ and a synonymity relation $\synrel$ . Let $\eqcl{\sigma}$ be the equivalence class of symbol $\sigma$ in \Ont{} (or any equivalent witness of this equivalence class), and $\intdom{}$ be the set of such equivalence classes (or witnesses). Recall that all synonymous symbols have the same type and arity. 
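Purely as an illustration of this notion of ontology (the Python encoding and the names are our own, not part of the formalism), the concept domain $\intdom{}$ can be obtained by collapsing the vocabulary under the synonymity relation, keeping one witness per equivalence class:
\begin{lstlisting}[language=Python]
# Sketch: build the concept domain from a vocabulary and assumed synonym pairs
# (the reflexive, symmetric and transitive closure is implicit in the union-find).
VOC = ["hasFever", "estFievreux", "coughs", "sneezes"]
SYNONYM_PAIRS = [("hasFever", "estFievreux")]

def concept_domain(voc, pairs):
    parent = {s: s for s in voc}
    def find(s):                         # follow parent links to the class witness
        while parent[s] != s:
            s = parent[s]
        return s
    for a, b in pairs:
        parent[find(a)] = find(b)        # merge the two equivalence classes
    return {find(s) for s in voc}        # one witness per class

print(concept_domain(VOC, SYNONYM_PAIRS))  # e.g. {'estFievreux', 'coughs', 'sneezes'}
\end{lstlisting}
Synonymous symbols thus contribute a single element to $\intdom{}$, which is what allows the coherent structures introduced below to give them the same value.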
In the philosophical papers about intensional logic, the class of concepts is open. Here, however, we want to quantify only over the concepts that are interpretations of symbols in \voc{}, not over arbitrary concepts. In essence, we want to quantify over $\intdom{}$. Hence, to define the semantics of FO(Concept), we extend the notion of a structure to include $\intdom{}$ and an additional mapping from concepts to their values. \begin{definition} A \emph{(total) structure} \struct{} over ontology \Ont{} consists of: \begin{itemize} \item an object domain $D$ containing $\mathds{N}$, \item a (total) mapping from predicate symbols $p/n$ in \voc{} to $n$-ary relations $p^{\mathcal{I}}$ over $D \cup \intdom{}$, \item a (total) mapping from function symbols $f/n$ in \voc{} to $n$-ary functions $f^{\mathcal{I}}$ over $D \cup \intdom{}$, \item[+] a (total) mapping $\$^\struct{}$ from concepts in \intdom{} to relations and functions over $D \cup \intdom{}$. \end{itemize} \end{definition} $\$^\struct{}$ interprets the value function ``$\$(\cdot)$''. We now define a notion of \emph{coherency} for structures $\struct{}$, expressing that synonyms have the same value or, more formally, that $\$(\deref{\sigma}) = \sigma$ for any $\sigma$. \begin{definition} A total structure $\struct{}$ over \Ont{} is \emph{coherent} iff \begin{itemize} \item for every predicate symbol $p \in \voc{}$, $\$^\struct{}(\eqcl{p}) = p^{\mathcal{I}}$; \item for every function symbol $f \in \voc{}$, $\$^\struct{}(\eqcl{f}) = f^{\mathcal{I}}$. \end{itemize} \end{definition} From now on, we consider only coherent structures. We define a \emph{variable assignment} $\nu$ as a mapping of variables to elements in $D \cup \intdom{}$. A variable assignment extended so that the mapping of $x$ is $d$ is denoted $\nu[x:d]$. We introduce the ternary \emph{valuation} function, which maps a term $t$ (resp. formula $\phi$), a structure \struct{} and a variable assignment $\nu$ to a value $v \in D \cup \intdom$ (resp. to a boolean). \begin{definition} \label{def:term_value} We partially define the value $v$ of well-formed $t$ in (\struct{}, $\nu$) (denoted $\eval{t} = v$) by induction: \begin{itemize} \item $\eval{x}=\nu(x)$ if $x$ is a variable in the domain of $\nu$; \item $\eval{f(t_1, \ldots, t_n)}=f^{\mathcal{I}}(\eval{t_1}, \ldots, \eval{t_n})$ if \\ $f$ is an $n$-ary function symbol and \\ $\eval{t_1}, \ldots, \eval{t_n}$ are defined; \item[+] $\eval{\numeral{n}}=n$ if $n$ is the integer denoted by $\numeral{n}$; \item[+] $\eval{\#\{x_1,\dots,x_n:\phi\}}=m$ if $\llbracket \phi \rrbracket^{\mathcal{I}}_{\nu[x_1:d_1]\dots[x_n:d_n]}$ \\ is defined for every $d_1,\dots, d_n \in D \cup \intdom{}$,\\ $m$ is an integer, and\\ $m = \#\{(d_1,\dots,d_n) \in (D \cup \intdom{})^n : \llbracket \phi \rrbracket^{\mathcal{I}}_{\nu[x_1:d_1]\dots[x_n:d_n]} = \Tr\}$; \item[+] $\eval{\deref{\sigma}}=\eqcl{\sigma}$; \item[+] $\eval{\$(x)(t_1,\ldots, t_n)}=\$^\struct{}(\eval{x})(\eval{t_1}, \ldots, \eval{t_n})$ if\\ $\eval{x}$ is a concept in \intdom{} mapped by $\$^\struct{}$ to an $n$-ary function, and $\eval{t_1}, \ldots, \eval{t_n}$ are defined; \item[+] $\eval{t}$ is undefined in all the other cases.
\end{itemize} \end{definition} \begin{definition} \label{def:form_value} We partially define the truth value $v$ of well-formed $\phi$ in (\struct{}, $\nu$) (denoted $\eval{\phi} = v$) by induction: \begin{itemize} \item $\eval{\Tr} = \Tr$; $\eval{\Fa} = \Fa$; \item $\eval{p(t_1, \ldots, t_n)}=p^{\mathcal{I}}(\eval{t_1}, \ldots, \eval{t_n})$ if\\ $p$ is an $n$-ary predicate symbol and\\ $\eval{t_1}, \ldots, \eval{t_n}$ are defined; \item $\eval{\lnot \phi} = \lnot \eval{\phi}$ if $\eval{\phi}$ is defined; \item $\eval{\phi \lor \psi} = \eval{\phi} \lor \eval{\psi}$ if $\eval{\phi}$ and $\eval{\psi}$ are defined; \item $\eval{\exists x: \phi} = (\exists d \in D \cup \intdom{}: \llbracket \phi \rrbracket^{\mathcal{I}}_{\nu[x:d]})$ if \\ $\llbracket \phi \rrbracket^{\mathcal{I}}_{\nu[x:d]}$ is defined for every $d\in D \cup \intdom{}$; \item $\eval{t_1 = t_2} = (\eval{t_1}=\eval{t_2})$ if \\ $\eval{t_1}$ and $\eval{t_2}$ are defined; \item $\eval{t_1 \le t_2} = (\eval{t_1}\le\eval{t_2})$ if \\ $\eval{t_1}$ and $\eval{t_2}$ are defined and are integers; \item[+] $\eval{\ite{x::k/n}{\phi}{\psi}} = \eval{\phi}$ if $\eval{x}$ is a concept in \intdom{} mapped by $\$^\struct{}$ to an $n$-ary predicate or function according to $k$, and $\eval{\phi}$ is defined; \item[+] $\eval{\ite{x::k/n}{\phi}{\psi}} = \eval{\psi}$ if $\eval{x}$ is not a concept in \intdom{} mapped by $\$^\struct{}$ to an $n$-ary predicate or function according to $k$, and $\eval{\psi}$ is defined; \item[+] $\eval{\$(x)(t_1,\dots,t_n)} = \$^\struct{}(\eval{x})(\eval{t_1}, \ldots, \eval{t_n})$ if\\ $\eval{x}$ is a concept in \intdom{} mapped by $\$^\struct{}$ to an $n$-ary predicate, and $\eval{t_1}, \ldots, \eval{t_n}$ are defined; \item[+] $\eval{\phi}$ is undefined in all the other cases. \end{itemize} \end{definition} \begin{theorem} The truth value $\eval{\phi}$ is defined for every well-formed formula $\phi$ over \voc{}, every total structure $\struct{}$ over (\voc{}, \synrel), and every variable assignment $\nu$ that assigns a value to all free variables of $\phi$. \end{theorem} This can be proven by parallel induction over the definitions of well-formed formulae (and terms) and of their values, applying the properties of $\struct{}$ and $\nu$ when needed. Indeed, the conditions in the inductive rules of the valuation function match the conditions in the inductive rules of well-formed formulae (and terms). A \emph{sentence} is a well-formed formula without free variables. We say a total structure \struct{} \emph{satisfies} sentence $\phi$ iff $\eval{\phi} = \Tr$ for any $\nu$. This is also denoted $\struct \models \phi$. Coherent structures \struct{} that satisfy sentence $\phi$ are called \emph{models} of $\phi$. The complexity of decision and search problems in FO(Concept) is the same as in FO. This is because the domain of structures is extended only with a fixed, constant-sized set of elements (and not with higher order objects). For example, deciding the existence of a model of an FO(Concept) theory with an input domain $D$ is an NP problem (measured in the size of $D$). \subsection{Alternative formulations} \label{sec:alt} FO could be extended to support quantification over concepts in other ways than the one above. Instead of the $\ite{..}{..}{..}$ construct, we could use $\lor$ (and $\land$) with non-strict evaluation. Well-guarded quantifications would then be written as follows: \begin{align} \exists x: (x::\predi{}/1) \land \$(x)(p).\\ \forall x: (x::\predi{}/1) \implies \$(x)(p).
\end{align} Also, instead of giving \emph{partial} definitions of values (Def.~\ref{def:term_value} and~\ref{def:form_value}), we could give \emph{total} definitions by assigning an arbitrary value when the term (resp. formula) is undefined. We would then show that this arbitrary value does not matter in well-formed formulae. \section{Extending \fodot with intensions} \label{sec:FoInt} We now discuss how we extended the Knowledge Representation language \fodot to support quantifications over intensions. \fodot knowledge bases encode knowledge of a particular problem domain. Such knowledge bases are used to perform a variety of reasoning tasks, using generic methods provided by reasoning engines, such as IDP-Z3~(\cite{IDP-Z3paper}). \fodot is First Order logic extended with the following features: \begin{itemize} \item[+] types: the vocabulary may include custom types, in addition to the built-in types (e.g., \lstinline|Int, Bool|). Each $n$-ary symbol has a type signature of the form \begin{lstlisting}[language=IDP] T1**...**Tn->T\end{lstlisting} specifying its domain and range. T is \lstinline|Bool| for predicates. Formulae must be well-typed, i.e., predicates and functions must be applied to arguments of the correct type. \item[+] equality: \lstinline|t1=t2| is a formula, where \lstinline|t1| and \lstinline|t2| are terms. \item[+] arithmetic over integers and rationals: arithmetic operators (\lstinline|+,-,*,/|) and comparisons (e.g., \lstinline|=<|) are interpreted functions. \item[+] binary quantification: \lstinline|?x \in P: p(x).| (where \lstinline|P| is a type or predicate) is equivalent to \lstinline|?x: P(x) & p(x).| \item[+] aggregates, such as \lstinline|#{x \in P: p(x)}| (count of \lstinline|x| in \lstinline|P| satisfying \lstinline|p|) and \lstinline|sum(lambda x \in P: f(x))| (sum of \lstinline|f(x)| over \lstinline|P|). \item[+] $\ite{..}{..}{..}$ formulae; \item[+] (inductive) definitions~(\cite{DBLP:journals/tocl/DeneckerT08}): an \fodot theory consists of a set of formulas as described above and a set of (potentially inductive) definitions. Such a definition is represented as a set of rules of the form: \[\forall x_1\in T_1,\dots,x_n\in T_n: p(t_1,\dots,t_n) \leftarrow \psi\] where $\psi$ is a \fodot formula. \end{itemize} \ignore{ In \fodot, identifiers, i.e., nullary symbols denoting the same elements of $D \cup \mathcal{I}$ in all interpretations are not followed by $()$.} The reference manual of \fodot is available online\footnote{\url{http://docs.idp-z3.be/en/latest/FO-dot.html}}. To allow reasoning about concepts, we have extended \fodot with the `` \lstinline|`.| '' operator (to refer to the intension of a symbol) and the ``\lstinline|$(.)|'' operator (to refer to the interpretation of a concept), as described in Section~\ref{sec:FoConcept} for FO(Concept). An issue arises in expressions of the form \begin{lstlisting}[language=IDP] $(x)(t1,...,tn) \end{lstlisting} Here, $x$ must be a \lstinline|Concept|, the arguments \lstinline|t1,...,tn| must be of appropriate types and number for the predicate or function \lstinline|$(x)|, and \lstinline|$(x)(t1,...,tn)| must be of the type expected by its parent expression. To address this issue and to support the writing of well-guarded formulae, we have introduced types for the concepts having a particular type signature: \begin{lstlisting}[language=IDP] Concept[T1**..**Tn -> T] \end{lstlisting} The interpretation of this type is the set of concepts with signature \lstinline|T1**..**Tn -> T|.
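For illustration, such a conceptual type can be thought of as a signature-based filter over the concept domain. The following Python sketch (our own encoding, with assumed signatures; this is not how IDP-Z3 represents types internally) computes the interpretation of a conceptual type:
\begin{lstlisting}[language=Python]
# Sketch: the interpretation of Concept[T1**..**Tn -> T] is the set of concepts
# whose symbol is declared with exactly that signature.
SIGNATURES = {                       # assumed declarations of vocabulary symbols
    "hasFever": (("Patient",), "Bool"),
    "coughs":   (("Patient",), "Bool"),
    "sneezes":  (("Patient",), "Bool"),
    "highRisk": (("Patient",), "Bool"),
    "severity": (("Patient",), "Int"),
}

def concept_type(domain_types, range_type):
    return {"`" + s for s, sig in SIGNATURES.items()
            if sig == (domain_types, range_type)}

print(concept_type(("Patient",), "Bool"))
# -> the interpretation of Concept[Patient -> Bool], e.g. {'`hasFever', '`coughs', '`sneezes', '`highRisk'}
\end{lstlisting}
A quantification over such a conceptual type then ranges over this finite set of concepts.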
Note that \lstinline|T1,..,Tn, T| themselves can be conceptual types. The well-formedness and semantics of quantifications over a conceptual type : \begin{lstlisting}[language=IDP] ?x\in Concept[T1**..**Tn -> T]: $(x)(t1, .., tn). \end{lstlisting} is defined by extending the concept of guards (Section~\ref{sec:syntax}), and considering the following equivalent statement with guards: \begin{lstlisting}[language=IDP] ?x: if x::[T1**..**Tn -> T] then $(x)(t1,..,tn) else false. \end{lstlisting} where \lstinline|x::[T1**..**Tn -> T]| is a guard specifying the types of the arguments of $x$, and the type of \lstinline|$(x)(t1,...,tn)|. The definitions of the typing function $\gamma$ (Section~\ref{sec:syntax}), and of well-formed formulae and their semantics are updated accordingly. The syntax of \fodot allows applying the ``\lstinline|$(.)|'' operator to expressions (not just to variables), e.g., in atom \lstinline|$(expr)()|. The type of \lstinline|expr| must be a conceptual type with appropriate signature. An example can be found in Section~\ref{sec:Intl} below. \ignore{ OLD \ignore{ Notice that we denote the intension of a symbol by \lstinline|`symbol| (and not \lstinline|`(symbol)|). We made the choice to not support synonyms of concepts, to reduce the language complexity. $\alpha$ is thus a bijection between ontology symbols and concepts. In \fodot, \lstinline|`symbol| is actually a concept identifier, i.e., a nullary function that uniquely identifies a concept; identifiers are denoted without parenthesis (thus, \lstinline|`symbol|, not \lstinline|`symbol()|). } The notion of guard is thus extended with conditions of the form: \begin{lstlisting}[language=IDP] x::[T1**..**Tn -> T] \end{lstlisting} where \lstinline|T1,...,Tn| specifies the types of the arguments of the extension of $x$, and \lstinline|T| the type of \lstinline|$(x)(t1,...,tn)|. The definition of the typing function $\gamma$ is updated accordingly. To further simplify the writing of well-guarded formula, we introduced ``subtypes'' of the \lstinline|Concept| type: \begin{lstlisting}[language=IDP] Concept[T1**..**Tn -> T] \end{lstlisting} The extension of this subtype is the set of concepts with that particular type signature. These subtypes can occur where types occur: in symbol declarations and in quantifications. The syntax of \fodot is extended to allow applying the ``\lstinline|$(.)|'' operator to expressions (not just to variables), e.g., in atom \lstinline|$(expr)()|. An example can be found in Section~\ref{sec:Intl} below. When \lstinline|expr| is properly guarded to ensure its definedness, the atom can be rewritten as \lstinline[language=IDP]|?y in T: y=expr & $(y)()|, where \lstinline|T| is the type of \lstinline|expr|. } \ignore{ Finally, the \lstinline|\$(.)| operator can be used in quantification, e.g., \begin{lstlisting}[language=IDP] !c in Concept[()->T]: ?x in $(output_domain(c)): p()=x. \end{lstlisting} } \paragraph{Implementation} We have updated the IDP-Z3 reasoning engine~(\cite{IDP-Z3}) to support such quantification over concepts. IDP-Z3 transforms \fodot theories into the input language of the Z3 SMT solver~(\cite{de2008z3}) to perform various reasoning tasks. A quantification \begin{lstlisting}[language=IDP] ? x in Concept[T1**..**Tn->T]: expr(x). \end{lstlisting} is transformed into a disjunction of \lstinline|expr(c)| expressions, where \lstinline|c| is a \lstinline|Concept[T1**..**Tn->T]|. 
Occurrences of \begin{lstlisting}[language=IDP] $(c)(t1,..tn) \end{lstlisting} within \lstinline|expr(c)| are transformed into \lstinline|p(t1,..tn)| where \lstinline|p| is the symbol denoting concept \lstinline|c| (Our implementation currently does not support synonymous concepts). The resulting formula is an FO sentence that can be submitted to Z3. \section{Examples in \fodot} \label{sec:examples} In this section, we explore a few examples where quantifications over concepts proved essential to accurately model the knowledge available within a domain in an elaboration tolerant way, i.e., without reification. These examples come from a broad range of applications. The examples can be run online\footnote{\url{https://tinyurl.com/Intensions}} using the IDP-Z3 reasoning engine. \subsection{Symptoms Analysis} In symptoms analysis, we model a policy for deciding whether to perform additional testing of patients, based on a set of symptoms. The core of the knowledge can be expressed through the vocabulary shown in Listing~\ref{lst:symptoms-voc}. This ontology introduces, among others, a type \lstinline|Patient|, four predicates designating the `risks', i.e., symptoms (\lstinline|hasFever|, \lstinline|coughs|, and \lstinline|sneezes|) and alarming factors (\lstinline|highRisk|), and a predicate over intensions called \lstinline|riskFactor|. Finally, it introduces the \lstinline|severity| function and the \lstinline|test| predicate which should hold for those patients needing subsequent testing. \begin{lstlisting}[language=IDP, caption={Vocabulary for the Symptoms Analysis example}, label={lst:symptoms-voc}] vocabulary V { type Patient hasFever : Patient -> Bool coughs : Patient -> Bool sneezes : Patient -> Bool highRisk : Patient -> Bool riskFactor: Concept[Patient->Bool]->Bool severity : Patient -> Int test : Patient -> Bool } \end{lstlisting} \lstinline|riskFactor| is defined as: \begin{lstlisting}[language=IDP] riskFactor := {`hasFever, `coughs, `sneezes, `highRisk}. \end{lstlisting} \lstinline|severity| is defined as the number of risk factors that the patient exhibits: \begin{lstlisting}[language=IDP] !x\inPatient: severity(x) = #{rf\inriskFactor: $(rf)(x)}. \end{lstlisting} This statement is well-formed thanks to the guards implicit in the quantification over \lstinline|Concept[Patient -> Bool]|. We can then express the constraint that a patient needs testing if he has at least three risk factors: \begin{lstlisting}[language=IDP] !x\inPatient: test(x) <=> 3=<severity(x). \end{lstlisting} This approach is used to solve the more elaborate May 2021 DM Community challenge\footnote{\url{https://dmcommunity.org/challenge/challenge-may-2021/}} about Covid\footnote{\url{https://tinyurl.com/Intensions}}. \ignore{ Note that considering \lstinline|rf| a second-order logic variable is unsatisfactory as a solution. \begin{lstlisting}[language=IDP] // second-order !x\in Patient: severity(x) = #{rf\in riskFactor: rf(x)}. \end{lstlisting} \todo{quantification ?} Indeed, if everybody have all the symptoms, the set of relations would contain only one element, and nobody would be tested. If instead, we were to use reification, we would run into trouble if one of the symptoms would depend on constraints while another is defined using rules. \todo{exclude ?} } \subsection{International law} \label{sec:Intl} Since 1990, the European Union has adopted legislation to fight against money laundering and terrorist financing. 
It creates various obligations for the parties in a business relationship, such as verifying the identity of the counter-party. The member states have to transpose the directive into national laws. The national laws must meet the minimum obligations set forth in the EU directive. In our simplified example, the EU directive requires the verification of identity in any transaction with a value above 1M€; a national law might set the threshold at 500K€ instead. Similarly, the EU directive might require a bank to send a report to its authority at least quarterly, but a country might require a monthly report. Our goal is then to express the requirement that the national obligations are stricter than the EU ones. We choose an ontology in which ``has a lower value'' means ``stricter''. We also use a mapping from the parameters of the national laws to their equivalent parameter in the EU law. \begin{lstlisting}[language=IDP, caption={Theory for the International law example}, label={lst:intl}] vocabulary { type Country threshold, period: Country -> Int obligation: Concept[Country->Int]->Bool thresholdEU, periodEU: () -> Int mapping: Concept[Country->Int]-> Concept[()->Int] } theory { obligation := {`threshold, `period} mapping := { `threshold ->`thresholdEU, `period -> `periodEU} // each national obligation must be stricter than the European obligation. !o\inobligation: !c\inCountry: $(o)(c)=<$(mapping(o))(). } \end{lstlisting} This example could be written in \fodot without the machinery of concepts: the last axiom could be instantiated for every relevant obligation. However, our formalism is more concise. Notice that the value operator is applied to an expression (not a variable): \lstinline|$(mapping(o))()|. Because \lstinline|o| is a \lstinline|Concept[Country->Int]| by quantification, it is in the domain of \lstinline|mapping|, and \lstinline|mapping(o)| has type \lstinline|Concept[()->Int]|. Its value is thus a nullary function of range \lstinline|Int|. \subsection{Word disambiguation} A law may be ambiguous. For example, in ``Parents must feed their children'', the word ``children'' may mean either ``biological'' or ``legal'' children. This statement has two meanings, each inducing its own class of possible states of affairs. In the theory below, the ambiguity is captured by the intensional constant \lstinline|childConcept()| that could refer to either the biological or the legal child concept. An assignment to this constant corresponds to a disambiguation of the sentence. \begin{lstlisting}[language=IDP, caption={Ontology for Word disambiguation}, label={lst:disamb}] vocabulary V { type Person, Word biologicalChildof, legalChildOf, feeds: Person ** Person -> Bool child: Word childConcept: ()->Concept[Person**Person->Bool] } \end{lstlisting} The theory has these axioms: \begin{lstlisting}[language=IDP, caption={Theory for Word disambiguation}, label={lst:disambT}] childConcept() \in {`biologicalChildof, `legalChildOf}. !p1, p2\inPerson: $(childConcept())(p1, p2) => feeds(p1, p2). \end{lstlisting} Each structure that satisfies the axioms will include an interpretation of \lstinline|childConcept| that is consistent with the interpretation of \lstinline|feeds|.
Imagine the legislator has provided a set of good examples (structures) of situations where the intended law holds and some examples contain a parent feeding only the legal but not the biological children, then, in those examples, our solver can exclude ~\lstinline|`biologicalChildof| as a possible interpretation and we can resolve the ambiguity. \subsection{Templates} Intensional objects can be used to define templates. The following example serves to define the transitive closure of all binary predicates in $\voc$ over type \lstinline|Node|. \begin{lstlisting}[language=IDP, caption={Theory for transitive closure}, label={lst:TransClos}] vocabulary { type Node graph1, graph2: Node ** Node -> Bool TransClos: Concept[Node**Node->Bool] ** Node ** Node -> Bool } theory { {!r\inConcept[Node**Node->Bool]:!x,y\inNode: TransClos(r, x, y) <- $(r)(x,y). !r\inConcept[Node**Node->Bool]:!x,z\inNode: TransClos(r, x, z) <- (?y\inNode:TransClos(r,x,y) &TransClos(r,y,z)). } } \end{lstlisting} \ignore{ Notice that the domain of \lstinline|TransClos| is declared in the vocabulary using the \lstinline|Concept[Node**Node->Bool]| sub-type instead of the \lstinline|Concept| type. In the latter case, the rules in the definition would have to be quantified over \lstinline|Concept|~(to fully cover the domain of \lstinline|TransClos|), and appropriate guards would have to be added to their body using the $\ite{..}{..}{..}$ construct, to make them well-formed. This would make the rules considerably more complex to write. } Contrary to the previous examples, this example could be implemented using second order logic and predicates: the first argument of \lstinline|TransClos| would range over all binary relations with a \lstinline|Node**Node->Bool| type signature. This is the approach used by, e.g., ~(\cite{dasseville2015semantics}). A second order definition of \lstinline|TransClos| would be more general (but also computationally more complex): it would not only define the transitive closure of all binary relations with a \lstinline|Node**Node->Bool| type signature in $\voc$ but also of {\em any} such relation that exists in the structure. Sometimes, that kind of expressivity is needed, as discussed in the next section. \ignore{ \subsection{\set{} Game} Our next example is a single-player version of the \set{} game, as described in~\cite{davis2003card}. It is played with a deck of cards. Every card in this deck features a shape\footnote{Diamond, squiggle, or oval}, of a certain color\footnote{Red, green, or purple.}, with a certain fill pattern\footnote{Solid, striped, or open.} occurring between one and three times, leading to 81 ($3^4$) unique cards. Number, fill, color and shape itself are called the four attributes of a card. See \textbf{Figure}~\ref{fig:set} for an example of three \set{} cards, each differing with one another on all four attributes. The aim of the \set{} game is for the player to select, out of twelve randomly drawn cards, a \emph{set} of three card such that, for each of the four attributes, the three selected cards either have all the same value, or all different values. \textbf{Figure}~\ref{fig:set} thus forms a valid \emph{set}, as do the three cards of \textbf{Figure}~\ref{fig:alt-set}, where the cards have the same shape and fill but differ in color and number. 
As a possible modelling, we suggest the following vocabulary: \begin{lstlisting}[language=IDP] vocabulary { type Card := { 1..12 } //The set of cards //Possible values of the features type Color := {red, green, purple} type Number := {one, two, three} type Fill := {solid, striped, open} type Shape := {diamond, squiggle, oval} attributes : (Concept) -> Bool color : (Card) -> Color number : (Card) -> Number fill : (Card) -> Color shape : (Card) -> Shape set : (Card) alldifferent : (Concept) -> Bool same : (Concept) -> Bool } \end{lstlisting} Using this vocabulary, we can model the knowledge in the \set{} game, including a justification for every set \emph{why} exactly it forms a valid set, i.e., which attributes are all different and which attributes are shared by all three cards in the set. \begin{lstlisting}[language=IDP] attributes('color) & attributes('number) & attributes('fill) & attributes('shape). ! attribute\in attributes : if isFunc(attribute) & #(attribute, 1) then alldifferent(attribute) <=> #{v : ?c\in set : attribute(c) = v} = 3 else ~alldifferent(attribute). ! attribute\in attributes : if isFunc(attribute) & #(attribute, 1) then same(attribute) <=> #{v : ?c\in set : attribute(c) = v} = 1 else ~same(attribute). #{ att\in attributes : alldifferent(att) | same(att)} = 4. #{ x: set(x) } = 3. \end{lstlisting} \begin{figure*} \caption{\todo[inline]{Create missing figure}Three example set cards.\label{fig:set}} \end{figure*} \begin{figure*} \caption{\todo[inline]{Create missing figure}Three example set cards.\label{fig:other-set}} \end{figure*} } \section{Related Work} \label{sec:related} We discussed the relation between intensional logic and our work in Section~\ref{sec:intensional-logic}. We now discuss the relation with other work. \ignore{ \marc{Marcs introduction introduces FOIL hence, first FOIL has to be discussed, then HILOG. Also, by now, FOIL was presented well enough by the intro, so that this section can be moved to the end of the paper. Did HILOG intensionally want to have intensional predicates? I dont think so, it wanted to have some form of higher order predicates. We analyse HILOG from the perspective of intensional logic, and explain that it can be seen as a very special form of intensional logic extension of Prolog, where with any predicate intensional symbol and any arity, a separate concept is attached. In the semantics of HILOG, any occurrence of the intensional symbol is mapped to the concept of the right arity based on the number of arguments that syntactically occur with the symbol.} \marc{ Hier volgt een soort opsomming van hoe onze logica ineen zit. Niet compleet. Dat moet ergens staan, lijkt me. Maar het hoeft niet in related work natuurlijk. In the view of this paper, ontology symbols have an intension and, in every state of affairs, an extension. Informally, the intension associated with a symbol is the concept that is the users informal interpretation of the symbol in the application domain. Formally, it is an atomic object, determined by the symbol itself modulo the synonym table. It has associated attributes and the value function returning the extension, the designation, the reference, the value, of the intension in the contextual possible world. The intension itself is independent of the structure. Informally, the extension of an intension depends on the state of affairs and is its value in it. Formally, the extension in some structure is the value of the symbol in the structure. 
In logic expressions, occurrences of a symbol are interpreted as references to the extension of the symbol. The operator '$\cdot$ is used to refer to the intension of the symbol itself. Vice versa, the $\$(\cdot)$ operator returns the extension of an intensional object in the contextual structure. } In philosophical logic, the concepts of \emph{intension} and \emph{extension} (or, meaning and designation, or sense and reference) have been investigated at least since Frege (for a historical overview see \cite{sep-logic-intensional}). Early key contributions were by Frege, Church, Carnap and Montague. The intensional logics of Montague, of Tich\'y, and of Gallin, are modal logics where intensions are generalized to all types in simple type theory and expressions at any level of the type hierarchy have associated intensions. However, the principles of intensions of higher order objects are similar to those of the base level (domain objects) leading Fitting in several papers (\cite{fitting2004first,sep-logic-intensional}) to develop a simplified logic FOIL where only intensional objects of the base level of the type hierarchy are present. Our main goal was to demonstrate other aspects of intensions that are relevant in the context of certain applications of knowledge representation and automated reasoning. As such, the logic developed here differs quite strongly from intensional logics. An important difference is the lack of modal logic machinery to analyse the difference between intensions and extensions. In intensional logics, modal contexts are used for analysing situations where symbols have other extensions than in the contextual state of affairs. E.g., to express that the morning star is the evening star although not necessarily so: \[MorningStar=EveningStar \land \neg \square (MorningStar=EveningStar) \] In our logic, an intension has a different value in different structures, but there are no modal operators to ``talk'' about the extensions in other states of affairs than in the contextual one. \todo{Evt. verduidelijken verschil actual vs contextual} \todo[inline]{Hier lijkt me een goede plek om je mail van 10/09 te addresseren:} \mvdh{ As a result, it is natural for intensional logics such as FOIL to assume a given Kripke structure, and subsequently analyse it with FOIL formulae. Seen descriptively, most FOIL theories have infinitely many Kripke structures that correspond to a (consistent) FOIL model. An important contribution of our logic, however, is that it allows solving concrete computational problems through inferences on one of its theories. Consequently, it is very important that the formulae in our logic are of a descriptive nature: they describe very specific models that can be derived through inference. Although our resulting logic lacks the ability to perform modal analysis, we note that many real world problems can be described without modals. } \todo{Echt als bijdrage formuleren, inclusief evt. nadelen zoals het gebrek aan modale analyses} Compared to intensional logics in philosophical logic, the focus in this study is on different aspects of intensions, aspects that can be analyzed and demonstrated in a logic much closer to standard logic (extended with aggregate symbols). One contribution of this paper is that on the informal level, we identify the notion of intension of a symbol with the user's informal interpretation of the symbol in the application domain. The informal interpretation is in practice a crucial concept in all knowledge representation applications. 
Connecting intensions to informal interpretation provides a good intuitive explanation of what intensions are and how they are relevant in a KR context. At the formal level, significant differences between our logic and intensional logics are: the introduction of $Concept$ \todo[inline]{Dit is momenteel niet zo (we beperken ons tot untyped), en in de getypeerde versie noemden we dat niet $Concept$, al is de naam misschien beter} as a separate type collecting all intensions, the introduction of attributes describing aspects of the concept (e.g., arity and type of arguments making up the type of the concept), and the introduction of the explicit operators $'(\cdot)$ and $\$(\cdot)$. The second one replaces the expressions $(\lambda x: t)(\sigma)$ used in, e.g., FOIL to select the extension of an intension $\sigma$ in the contextual structure, rather than $\$(\sigma)$ in our logic. But these small changes have a big impact on the language. Indeed, while in intensional logics in general, every intension occurs on a well-typed argument position allowing to detect its type and its arity, this is no longer the case in our logic. To ensure syntactical correctness of formulae, we introduced \emph{guards}. } \paragraph{Second Order quantification and relations} A contribution of this work is to clarify the utility of being able to quantify over the concepts in the vocabulary, and to show how it differs from quantifying over sets or functions as in second order quantification. Certainly in the case of predicate or function intensions, our own experience is that it is easy to confuse the two. We showed in the introduction the need for quantification over intensions as opposed to quantification over second or higher order objects (relations or functions). However, quantification over intensions cannot replace second order quantification. A clear-cut example of second order quantification and of a second order relation occurs in the \emph{graph mining problem} (\cite{DBLP:journals/amai/HallenPJD19}). \begin{example} An occurrence of a graph, called the \emph{pattern} $p$, in a graph $g$ corresponds to a homomorphism, a function $h$ from nodes of $p$ to nodes of $g$ that preserves edges. The following expression is intended to define $\mathit{hom/1}$ as the collection of occurrences of $p$ in $g$. \[\forall h: hom(h) \Leftrightarrow (\forall x\forall y: p(x,y) \Leftrightarrow g(h(x),h(y)))\] \end{example} The question is: over what sorts of values does $h$ range (in the context of a structure ${\mathcal{I}}$)? Over the set of intensions of unary function symbols in $\voc$? There may be none! Or over the set of all functions from the nodes of the pattern to the nodes of the graph in ${\mathcal{I}}$? Clearly, over the latter. Hence, $h$ is a second order variable and $\mathit{hom/1}$ a second order predicate symbol.
Skimming over the details of HiLog's definition, what is important is that a HiLog structure associates with every symbol both a so-called ``intension'' (in practice, the symbol itself) and a class of ``extensional'' values: for every number n, an n-ary relation and an n-ary total function in the domain of the structure. As such, one cannot associate a clear-cut relational or functional concept to a HiLog symbol; it is more like a collection of relational and functional concepts, one for each arity n. That makes it incomparable to our logic and to most intensional logics. In HiLog, because a symbol can be applied to tuples of any length and any type, any formula is well-formed. Thus, the concept of guards is unnecessary. The developer of a knowledge base in HiLog cannot benefit from the automatic detection of the syntactical errors that our method provides. \ignore{ templates HiLogs main goal is to offer generalized predicate definitions. For example, one can define the transitive closure $Transclos(g)$ of the binary relation associated with the symbol $g$: \[ \begin{array}{l} TransClos(g)(x,y) \leftarrow g(x,y).\\ TransClos(g)(x,z) \leftarrow TransClos(g)(x,y), TransClos(g)(y,z) \end{array} \] This facility is not offered in our language, but vice versa, none of the examples of our logic can be modelled in HiLog. \mvdh{Hier is zeker een woordje extra nodig} \marc{Hilog is not a logic to represent absence of knowledge of p and q} \pierre{It is easy to transform the above definition into a definition of $TransClos(g, x, y)$ in our language. The problem is with the signature of $TransClos$: what is the type of the second and third argument ? If we introduced a built-in type $Object$, whose extension is the whole domain of discourse, we could declare $TransClos : Symbol \times Object \times Object \rightarrow Bool$ as a partial function.} \todo{Maurice: at this point you have to confront HiLog with our template example and point out the difference.} } HiLog is a programming language: an HiLog program is meant to be run to perform only one form of reasoning: querying. By contrast, our implementation in IDP-Z3 allows many forms of reasoning, such as model expansion or propagation (\cite{IDP-Z3paper}). Unlike HiLog, IDP-Z3 supports the computation of aggregates. \ignore{ HiLog is a rule-based language, it does not extend FO like our logic does. Strictly speaking it cannot represent a disjunction $p\lor q$ without adding extraneous symbols. None of the examples of our logic can be modelled in HiLog. \todo{This statement does not seem correct} HiLog is implemented in Flora-2~\cite{yang2005flora} and partially in XSB~(\cite{swift2012xsb}) systems. } \ignore{ We identify two major lines of related work; firstly, the work around HiLog~(\cite{chen1993hilog}), that ``combines advantages of higher-order syntax with the simplicity of first-order semantics''. Secondly, the work on intensions in the context of modal logics, as studied by Fitting~\cite{fitting2004first}. In HiLog, as in our approach, every (parameter) symbol is associated with an \emph{intension}. \marc{"intension": is that our terminology or theirs?}\todo{Should we stress their use of parameter vs our use of function / predicate symbols here?} Their higher-order syntax allows variables that range over these intensions, and allows these intensions, represented by the symbol name itself, to occur as arguments in predicate heads or predicate calls. 
Although the resulting logic bears resemblance to ours, important differences can be found in syntax, in semantics, and in spirit. Specifically, HiLog refers to these intensions using the symbol itself, thereby recuperating the higher-order syntax, while we propose a dereference operator that maps every symbol onto its intension. While this may seem trivial, we argue there are multiple shortcomings of this approach when working out a clear semantic meaning of intensions in \emph{Knowledge Representation}: Using standard higher-order syntax for intensional occurrences impedes the unification of intensional knowledge and higher-order\footnote{We also refer to this as extensional knowledge.} knowledge within a single system. Such a unified system is worthwhile as, while overlap exists, there are clear examples for both, as shown in \secref{}. It is of course possible to distinguish extensional occurrences syntactically instead, but we see two reasons advising against this approach: First, the syntax of higher-order logic has been standard for years, and repurposing it should not be done lightly. Second, not distinguishing intensional occurrences impoverishes the distinction between intensional and extensional occurrences; consider, for example, the occurrence of the symbol $P$ in its application $P(t)$, which is clearly extensional but not distinguished as such in HiLog. \todo[inline]{Argument $P(t)$ is extensional: If it's not, then a higher order quantification cannot use the regular application (), as it would expect something intensional, e.g., no higher-order sentence \lstinline[mathescape]{!P : P(x)}} Semantically, it is important to note that HiLog considers parameter symbols to have a single \emph{intension} but multiple \emph{extensions}; a single symbol is associated with different extensions according to the different roles it can assume in a sentence: as a constant, a function or a predicate, the latter two with any arbitrary arity. Any symbol can thus occur in any role and is assigned an extension for any such role. This is in stark contrast to our approach, where a symbol can occur extensionally in only a single role and with a single arity. The semantic difference described above leads not only to an important difference in semantic structures, but also in spirit: as intensions, to us, are associated to symbols from the ontology, we have encountered no examples of \emph{intensions} in knowledge representation that permit multiple extensions of differing arity. \todo{Express this in a better way...} A final distinction between HiLog and our system can be found in the foundations of HiLog as a Logic Program, with only a single Herbrand model, where choice and search must be encoded separately, as oppposed to a knowledge base system such as ours, which has multiple models and permits a number of different inferences such as satisfiability checking, model expansion or minimization on the same knowledge specification. A different study of intensionality, specific to the context of modal logics, can be found in \cite{fitting2004first}. Here, a first-order modal logic called \FOIL{} is introduced. In this logic, \emph{intensions} allow distinguishing on some level $the morning star$ from the $the evening star$, even though, in a typical frame of reference, both refer to the same object, i.e., Venus. 
In much the same way, our logic allows the distinction between the people who are feverish, represented by the predicate $fever$, and those who are sneezing, represented by $sneezing$, to be made, even in structures where both are assigned the exact same extension. However, some fundamental differences arise when we consider that in the semantics of \FOIL{}, the mapping of symbols to intensions is dependent on the \emph{possible worlds} of the associated modal logic, while in our logic, the mapping of symbols to intensions is rigid. Furthermore, their logic effectively allows only constants to have an intension. } \section{Summary} As our examples illustrate, it is often useful in knowledge-intensive applications to quantify over concepts, i.e., over the intensions of the symbols in an ontology. An intension is an atomic object representing the informal interpretation of the symbol in the application domain. First Order logic and \fodot can be extended to allow such quantifications in an elaboration tolerant way. Appropriate guards should be used to ensure that formulae in such extensions are well-formed: we propose a method to verify such well-formedness with a complexity that is linear with the length of the formula. While related to modal logic, the logic introduced here differs quite strongly from it. It has no modal operators to talk about the extension of a symbol in other worlds than the contextual one, but it offers mechanisms to quantify and count intensional objects. \ignore{ \dmar{I have written a few sentences for the summary. Please check these, add or change it :)} Knowledge representation languages should be expressive in the sense that they can encode many classes of knowledge. Besides expressivity, language should provide constructs that allow the natural and compact representation of knowledge. We notice in various real applications that a set of properties should hold for distinct concepts. Sometimes we can get over this problem by reifying these concepts into explicit objects. This technique is practical in many cases. However, as soon as reified function symbols have different codomains, this approach fails. Therefore, sometimes it is convenient to talk about concepts rather than their value. Allowing intensional objects is a straightforward approach for this problem. The plain syntax and semantics proposed in this paper make it natural and unequivocally. Extending language with intensional objects results in language constructs that allow great compactness. It is relevant to notice that there is no difference in the expressivity of the language.\pierre{I would dispute that. It is "polynomially" more expressive.} \dmar{I don't understand. What is polynomially more expressive. I think there is no statement you can express with FO(io) and can not with FO(). I have a feeling that we are mismatching what is expressivity.} One can express the same theory with or without the use of intensional objects. However, in many cases, one can significantly reduce the size of a formalization. \marc{I think that expressivity is a dangerous term here. Lets think deeper and be more precise. In each example, we have in mind a range of instantiations and a property that holds in this range of instantiations. We have have a language taht can express this property, once and for all. And then we may notice that without this langauge feature, we are forced to introduce instantiation-specific axioms: different axioms for different instantiations. 
Moreover, the different axioms may grow polynomially in size with some measure of the instantiation. Can we prove this? In any case, from a KR point of view, there is undeniably value in this sort of language primitive. I think we should say this: `` there is undeniable KR value in such a language primitive". } \marc{But what I have missed so far is the comparative analysis with higher order logic. } } \section{Implementation} \todo{update} We have extended our $FO(\cdot)$ implementation to support intensional objects as proposed in this paper. Its source code is available on-line~\cite{IDP-Z3}. It is written in Python and uses the Z3 Satisfiability Modulo Theories (SMT) solver as a back-end~\cite{de2008z3}. The engine uses a ground-and-solve approach: formulae quantified over a finite set of values are instantiated for each value in the set, starting from the outermost quantifiers. This approach applies to formulae quantified over $Symbol$, the set of intensional symbols automatically created by analysis of the vocabulary. Crucially, the built-in introspection functions, such as $arity$, are immediately evaluated after instantiation with a particular symbol. Thus, they do not appear in grounded formulae. Furthermore, the expressions in which they appear are grounded in a non-strict manner, as explained in the formal semantics section. This non-strict grounding can be used by the knowledge expert to ensure that $\$(s)$ is applied to appropriate tuples of arguments, by guarding $\$(s)$ with appropriate applications of introspection functions. For example, in \begin{align*} \forall s \in Symbol:&arity(s) = 0 \land output\_domain(s)=Bool \\ & \land \$(s)(). \end{align*} $\$(s)()$ is grounded only with propositional symbols. The inference of the type of a quantified variable is performed just before grounding the quantified formula: at that time, any outer quantification is already grounded. For example, in \begin{align*} \forall s \in Symbol: & arity(s) = 1 \land output\_domain(s)=Bool \\ & \land \forall x: \$(s)(x). \end{align*} the type of $x$ is inferred for each unary predicate. \maurice{implementation uses "\$" instead of "'" for dereferencing? Ground the dereferencing formula of your running example to illustrate the grounding} \section{Introduction} A logic is in no small part characterized by the things that can be quantified over within the language. \todo{`In propositional logic, no quantification is possible'?} In \emph{first order} logic, we can quantify over domain elements. Turning to \emph{higher order} logic, we can additionally quantify over \emph{sets} of \emph{domain elements}. Our study of logic as a knowledge representation (KR) language, however, has led us to believe that logic needs another form of quantification in order to be a great KR language. \todo{For readability, drop either `as a representation language' or `to be a great KR language'.} In a KR application of logic, a \emph{vocabulary}, sometimes also referred to as \emph{ontology}, introduces symbols for \emph{concepts} that exist within the knowledge domain; exactly these concepts from the knowledge domain are of interest to the new type of quantification that we suggest. We often find ourselves wanting to identify, quantify or count concepts, for example: \begin{itemize} \item In a medically oriented knowledge base, we introduce the predicates $\mathit{Sneezing}$, $\mathit{Coughing}$ and $\mathit{Fever}$. Later on, we want to write a constraint that counts the number of symptoms that patients exhibit.
\item In a knowledge base detailing a card game, functions such as $\mathit{Color}$, $\mathit{Shape}$ and $\mathit{Fill}$ describe properties of each of the cards. One of the game's rules says that for a valid selection of cards, every property must either be of \emph{identical} or \emph{unique} value; for example, four cards with identical shape and fill but of all different colors form a valid selection. \item One more example would be nice \ldots \end{itemize} \noindent If quantification over concepts is not supported directly, constraints such as those suggested above are often expressed using \emph{reification}~\cite{} or \emph{higher order} logic. In \textbf{Section}~\ref{}, we give an overview of these approaches and detail the benefits we believe that direct support for quantification over concepts has to offer. \todo[inline]{Somewhere here we need to state what exactly *our* approach is, but I struggle with the placement \& the required level of detail. Do we stay superficial, and give detail in a later section where we can go deeper into intension, synonyms and the material from the figure (\ref{fig:fig1}) below? `We propose two new language constructs: one language construct that, for a given vocabulary \emph{symbol}, refers to the \emph{concept} it represents, also called its \emph{intension}; the second construct refers for a given \emph{concept} to its \emph{value} in the interpretation.'} We refer to the \emph{concept} \st{from the knowledge domain} that a vocabulary symbol represents as the symbol's \emph{intension}; this is in contrast to a symbol's \emph{extension}, which refers to a symbol's value within a \emph{model} or \emph{interpretation}. This distinction has previously been studied in linguistics and modal logics~\ref{}. In these studies, sentences such as ``the morning star is the evening star'' illustrate that human knowledge is sensitive to \emph{intensions}, i.e., it is not \emph{extensional}. To see this, reflect on the knowledge the previous sentence offers compared to a sentence such as ``the morning star is the morning star''. By considering quantification over intensions rather than quantification over symbols, as a meta-programming approach would do, we hope to one day synthesise these lines of research. \todo{I think a sentiment like this should be included, but should be expressed better?} \textbf{Section}~\ref{} discusses in more detail the notion of `intension' in related literature. In \textbf{Section}~\ref{}, we introduce the semantics for first order logic extended with intensions. In \textbf{Section}~\ref{}, we detail the implementation of a knowledge base system supporting the extension. \begin{figure} \includegraphics[width=\linewidth]{Images/SecondDraftDiagram} \caption{The interaction of the new $'\cdot$ and $\$(\cdot)$ language constructs, w.r.t. straightforward occurrence of a vocabulary symbol. \label{fig:fig1}} \end{figure} \section{Motivation} \marc{I find this a fairly well-written motivation. I can see that it has much in common with the introduction that I wrote. But I think I wrote it in a more direct way. Here, the structure is by providing a problem and then exploring different erroneous ways how to do this; leading to the solution. But that is a difficult road. My introduction assumes we know basically quite well what we say: Test somebody if she has at least two symptoms. We just have to analyse it. So, I think this motivation should go.
} Over the years, first order logic has benefited from many extensions: inductive definitions to formalize definitional knowledge, arithmetic including aggregates, uninterpreted and partial functions, \ldots . For each of these extensions, one or more example problem statements can be given that depend on knowledge that is either difficult or even impossible to express in standard first order logic without the extension. In this paper, we introduce the \emph{symptoms triage} problem, and claim it supports our need for a language extension we call \emph{intensional objects}: \begin{example}[Symptoms Triage] A number of patients present themselves to a triage center for diagnosis, as they present one or more symptoms of a disease. The symptoms in question are coughing, sneezing and fever. The triage system decides who has to undergo subsequent testing, based on the \emph{number of symptoms} each patient presents; specifically, they test anyone who presents \emph{at least} two symptoms. \end{example} A straightforward representation of the knowledge in this problem could introduce the predicates $\mathit{Coughing}$, $\mathit{Sneezing}$ and $\mathit{Fever}$. Naively, the following first order constraint could model the condition that (only) patients who exhibit at least two symptoms are kept for testing ($\mathit{Test}$). \[ \forall x . \mathit{Test}(x) \Leftrightarrow \left(\begin{array}{c} (\mathit{Coughing}(x) \land \mathit{Sneezing}(x))\ \lor \\ (\mathit{Coughing}(x) \land \mathit{Fever}(x))\ \lor \\ (\mathit{Sneezing}(x) \land \mathit{Fever}(x)) \\ \end{array}\right) \] However, it soon becomes clear that this approach does not scale well and is fragile with respect to changing requirements, such as taking into account additional symptoms or changing the number of symptoms that patients must present. Consider the amount of work to be done if the list of possible symptoms were to be extended with `loss of smell' and `loss of taste', and/or if the threshold for testing was raised to \emph{three} symptoms. While this approach shows the knowledge underlying the problem can clearly be expressed using first order logic, the resulting theory is definitely not elaboration tolerant~\cite{}. \mvdh{Add ref to elaboration tolerance (McCarthy)! + Some sentence about elaboration tolerance, and a statement it is a property to be desired by any knowledge specification / KR language?} A second approach would be to use the same ontology of predicates $\mathit{Coughing}$, $\mathit{Sneezing}$ and $\mathit{Fever}$, and to try and express the testing condition using an aggregate. The underlying concept of an aggregate is a set expression $\{ \overline{x} \mid \phi \}$ where $\overline{x}$ is a tuple of fresh variables, and $\phi$ expresses a condition for $\overline{x}$ to be in the set; we can then count the number of elements in the set using the cardinality aggregate function ($\#$). For the threshold constraint, this would give us: \begin{gather*} \forall x . \mathit{Test}(x) \Leftrightarrow \\ \#\left\{ s \left| \begin{array}{c} s \in \{\mathit{Coughing}, \mathit{Sneezing}, \mathit{Fever} \} \land \\ s(x) \end{array} \right. \right\} \geq 2 \end{gather*} \noindent These aggregate expressions can easily be extended w.r.t. the symptoms in the set to be counted and the threshold the count must reach.
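To make the contrast concrete, the following small Python sketch (purely illustrative and outside the logic itself; the symptom names and the \texttt{naive\_threshold} helper are ours) generates the naive disjunctive encoding for an arbitrary list of symptoms and threshold, showing how it grows while the single aggregate constraint above stays fixed.
\begin{lstlisting}[language=Python]
from itertools import combinations

def naive_threshold(symptoms, k):
    # Naive FO encoding of "at least k of these symptoms hold for x":
    # one disjunct per k-subset of the symptom predicates.
    disjuncts = [" & ".join(f"{s}(x)" for s in subset)
                 for subset in combinations(symptoms, k)]
    return "Test(x) <=> (" + ") | (".join(disjuncts) + ")"

print(naive_threshold(["Coughing", "Sneezing", "Fever"], 2))   # 3 disjuncts
extended = ["Coughing", "Sneezing", "Fever", "LossOfSmell", "LossOfTaste"]
print(len(list(combinations(extended, 3))))                    # 10 disjuncts
\end{lstlisting}
The number of disjuncts is $\binom{n}{k}$, so every elaboration of the symptom list or of the threshold forces the constraint to be rewritten, whereas the aggregate formulation is unaffected.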
However, a closer assessment of the expression above suggests that we have left first order logic behind us, and have resorted to using higher order logic when expressing the set of symptoms to be counted, both when expressing the condition for symptoms to be present in the aggregate argument set ($s \in \{\mathit{Coughing},\ldots\}$), as well as in the aggregate argument set itself. Even worse, and surprisingly, the constraint suggested above is not even semantically correct! Specifically, due to the extensional semantics of higher order logic and sets, we can only conclude that in, for example, a model where the people who sneeze are exactly those who cough and have a fever as well, the higher order set \emph{contains just a single value}. Slightly foreshadowing our eventual proposal, we do note that resorting to multi-sets, which have an intrinsic ability to differentiate elements that have the same value, offers some solution to the semantic problem. \mvdh{So why don't we? `We claim, however, that our eventual solution offers a more rigorous and principled approach?' or something of the likes?} \marc{In my opinion, using a multi-set based solution is basically a fake solution, because it does not identify the nature of the objects that we quantify over. It could be technically ok, but it quite explicitly avoids to talk about the nature of the abstract objects: they are set occurrences that can occur multiple times, but what determines how many occurrences there are of a set? Suppose Coughing, Sneezing and Fever have the same value, is this 1, 2 or 3 occurrences and why? Or if we use $Symptom(x) \land Symptom(y) \land x\neq y$ what is $Symptom$ ranging over? What does it mean that $x\neq y$ if $x, y$ range over sets? It does not explain *at all* the link with intensional objects. I think/hope that this was done well in my proposed introduction.} One final, radically different approach to the problem would be to perform a \emph{reification} of the different symptoms, leading to an ontology that consists of just a single \emph{binary} symbol $\mathit{Exhibits}$ to represent all knowledge about patients and their symptoms, considering a domain that contains elements not only for every patient but also for every possible symptom: the elements $\mathit{coughing}$, $\mathit{sneezing}$ and $\mathit{fever}$. The threshold constraint can now be expressed as: \[ \forall x . \mathit{Test}(x) \Leftrightarrow \#\left\{s \left| \begin{array}{c} s\in \{\mathit{coughing}, \mathit{sneezing}, \mathit{fever}\} \\ \land\ \mathit{Exhibits}(s,x) \end{array} \right. \right\} \geq 2 \] This constraint is first order, and, thanks to its use of a set expression and cardinality function, is elaboration tolerant. However, a major drawback of this approach is that we have resorted to a form of meta-representation that permeates our entire knowledge specification, having chosen a much more involved reified ontology over the straightforward ontology used previously. While the resulting knowledge specification is now elaboration tolerant w.r.t. 
additional symptoms or different thresholds, it is highly likely that if the constraint of needing at least two symptoms was already an elaboration of an earlier problem, the road leading to this specification was fraught with changes and reformulations due to a change in ontology from the simple and straightforward representation to the reified meta-representation.\mvdh{Barring the approach of two different representations side by side supported using channeling constraints.} Moreover, the reified ontology presents a further drawback when considering that some symptoms \mvdh{(or more generally, relations/criteria)} are specified by \emph{definitional knowledge}. Consider, for example, an elaboration of the problem where we track confirmed disease cases ($\mathit{Sick}$), a binary close contact relation ($\mathit{Exposed}$) and summarize that in an additional symptom called $\mathit{HighRisk}$: \begin{align*} &\{\\ &\forall x . HighRisk(x) \leftarrow \exists y . \mathit{Exposed}(x,y) \land (\mathit{Sick}(y) \lor \mathit{HighRisk}(y)).\\ &\} \end{align*} It is straightforward to see that in a reified ontology, this definitional knowledge regarding the `high risk' symptom can only be expressed using auxiliary symbols, as the relation $\mathit{Exhibits}$ can only be a defined symbol if we can provide definitional rules for \emph{every} symptom. \marc{I remember there was a good point here, but it is technical.} We have given an overview of attempts to express the required knowledge of the \emph{symptoms triage} problem in a clear, simple and elaboration tolerant way, and discussed their drawbacks. We can now propose our solution, which in some way can be seen as a synthesis of these earlier attempts.\mvdh{, based on the concept of \emph{intensions}.} Note that in our ontologies, every symbol corresponds to a specific \emph{intension}. This intension is the \emph{meaning} of a symbol, its formal definition. This formal definition is there as soon as we compose our ontology; it is known without any notion of a domain, and thus strictly separate from its value, also referred to as \emph{its extension}, which is an element of the domain. For an archetypical example of the importance of intensions, consider the two symbols ``morning star'' and ``evening star''. The intension of the morning star is \emph{the first star you see in the morning}, while that of the evening star is \emph{the last star you see in the evening}. The sentence \emph{``the morning star is the evening star''} holds informational content, whereas the sentence \emph{``the morning star is the morning star''} conveys no information at all. The difference, however, does not derive from a different value for both symbols, as the ``morning star'' and ``evening star'' both denote the planet Venus, extensionally. Thus, we must conclude that (a) the intensions of both symbols are what lends the first sentence informational content, and (b) two symbols can share the same \emph{extension} without having the same \emph{intension}. We argue that, in the approach that combines higher order logic with multisets, the way that multisets disambiguate between the different predicate symbols in a multiset, even when they designate the same (set) values, actually takes into account the \emph{intension} of each of the symbols.
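As an aside, the reified reading sketched above can be mimicked in a few lines of Python (a sketch of our own, with hypothetical data and helper names): symptoms become ordinary domain elements, all facts sit in the single binary relation $\mathit{Exhibits}$, and the recursive $\mathit{HighRisk}$ rule corresponds to a least fixpoint computation. The sketch does not address the issue raised above, namely that in the reified ontology $\mathit{Exhibits}$ itself would have to become the defined symbol.
\begin{lstlisting}[language=Python]
# Reified reading: symptoms are ordinary domain elements (hypothetical data).
exhibits = {("coughing", "ann"), ("fever", "ann"), ("sneezing", "bob")}
sick = {"carl"}
exposed = {("ann", "carl"), ("bob", "ann")}

def test(x, threshold=2):
    # Count the symptoms x exhibits, as in the reified aggregate constraint.
    return sum(1 for (s, y) in exhibits if y == x) >= threshold

# The inductive HighRisk definition denotes a least fixpoint.
high_risk, changed = set(), True
while changed:
    changed = False
    for (x, y) in exposed:
        if (y in sick or y in high_risk) and x not in high_risk:
            high_risk.add(x)
            changed = True

print(test("ann"), sorted(high_risk))   # True ['ann', 'bob']
\end{lstlisting}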
Likewise, in the approach that modifies the ontology through reification, the domain elements introduced by the reification are, in fact, representatives of the \emph{intensions} of the original symbols, injected into the domain. Thus, from a practical point of view, it would suffice to extend first order logic with a framework for reasoning with \emph{intensions}, including predicates and variables over intensions and language constructs that transform a symbol into its intension and vice versa. This also makes for a much more satisfactory approach from a theoretical, knowledge representation point of view. To illustrate our approach on the \emph{symptoms triage} puzzle, it suffices to know that our framework introduces the dereferencing language operator $'$ that maps every symbol to its intension\mvdh{or `intensional object'?}, and the apply language operator $\$$ that allows referring to the extension of any intensional object.\footnote{\mvdh{While we do not consider such cases here, note that the existence of the apply language operator (correctly) implies that when two symbols share a single \emph{intension} (i.e., when they are synonyms), they also have the same \emph{extension}}} We can then formalize the threshold constraint as: \begin{gather*} \forall x . \mathit{Test}(x) \Leftrightarrow \\ \#\left\{ s \left| \begin{array}{c} s \in \{\mathit{'Coughing}, \mathit{'Sneezing}, \mathit{'Fever}\}\ \land\\ \$(s)(x) \end{array} \right.\right\} \geq 2 \end{gather*} \noindent where it is important to note the use of the two new operators: $'$ in the set containing the \emph{intensions} of $\mathit{Coughing}$, $\mathit{Sneezing}$ and $\mathit{Fever}$, and $\$$ in the check that $x$ is contained by the extension associated with $s$. In this paper, we ... \todo[inline]{\mvdh{continue with an overview of the different sections, and what we achieve in each of them...}} \todo[inline]{ Gerda: ``after that, the difference can then be explained between a domain element, a higher-order element and an intensional object'':\\ \mvdh{ Domain elements are the possible values of symbols, as determined by the domain. A higher-order element is a relation (or function) over the domain. Intensional objects are the intensions of the symbols, fixed by the vocabulary, (and `injected as elements' into every domain of structures over this vocabulary? This may make things complicated, but it does seem necessary for consistency, if we have predicates over intensions...) } } \section{Joost's Introduction} Whenever we use classical first-order logic (FO) to represent some piece of knowledge, we first have to introduce \emph{symbols} to represent the \emph{domain concepts} that we wish to talk about. The formal semantics of FO will then assign a \emph{value} to each of these symbols. For instance, suppose we want to represent the information ``the actor Nicolas Cage has won an Oscar''. \begin{itemize} \item The concept ``the actor Nicolas Cage'' can be represented by, e.g., the constant $NickCage$ and an interpretation $S$ will assign to this constant a domain element $NickCage^S \in dom(S)$ as its value; \item The concept ``winning an Oscar'' can be represented by a unary predicate $OscarWinner/1$ and an interpretation $S$ will assign to this predicate a unary relation $OscarWinner^S \subseteq dom(S)$.
\end{itemize} Using these symbols, we can represent our knowledge by the following (atomic) formula $\varphi$: \[ OscarWinner(NickCage) \] which of course has the formal semantics that: $S \models \varphi$ if and only if $NickCage^S \in OscarWinner^S$. It is clear, here, that whenever we use symbols (e.g., $NickCage$ or $OscarWinner$) in our formulas, these symbols actually refer to their \emph{values} in a certain structure $S$ (e.g., the domain element $d = NickCage^S$ or the unary relation $OscarWinner^S$). This is of course the same as what also happens with symbols in almost any programming language: in an expression such as \verb|x + 9 * y|, the symbols \verb|x| and \verb|y| refer to their values. Some programming languages include an operator that allows one to ``quote'' a symbol, producing an expression that refers to the symbol itself, rather than to its value. For instance, the LISP expression \verb|(equal x y)| tests whether the values of \verb|x| and \verb|y| are equal (which may or may not be the case), whereas the expression \verb|(equal 'x 'y)| tests whether the symbols \verb|x| and \verb|y| themselves are equal (which they are not). In this paper, we introduce the same $'\sigma$ operator into classical FO and explore its uses from a knowledge representation perspective. In particular, we examine possible uses of this operator to quote not only constants, but also functions and relations. This allows us to do a kind of higher-order quantification, which is nevertheless---as we will show---quite different from classical higher-order quantification. We will also consider an extension of FO with aggregates, allowing the use of terms such as $\#\{ x : \varphi(x) \}$, representing the number of $x$ for which $\varphi(x)$ holds, since this will reveal a number of interesting use cases. Throughout this paper, we assume that we construct our FO vocabularies in such a way that each domain concept is represented by a unique symbol. In other words, we do not consider situations in which the vocabulary contains ``synonyms'' (two distinct symbols that nevertheless represent the same concept, such as $Oscar$ and $AcademyAward$ might). This considerably simplifies the setting, since we can therefore, e.g., count concepts by simply counting symbols. \section{Quoting constants} The difference between a symbol's \emph{intension} (the concept that the symbol represents) and its \emph{extension} (the value of the symbol) has been well-studied in philosophy. Kripke famously discussed the example of Hesperus and Phosphorus being names for the concepts ``the evening star'' (= the intension of Hesperus) and ``the morning star'' (the intension of Phosphorus), whose extension is the same in the world in which we live (namely, as it turns out, the planet Venus). In our logic, we can introduce the vocabulary $\Sigma = \{ Hesperus, Phosphorus \}$ containing the two constants. The world in which we live can then be represented as the structure $S$ such that $dom(S) = \{ Venus \}$ and $Hesperus^S = Venus = Phosphorus^S$. It is then the case that $S \models Hesperus = Phosphorus$. However, no model exists for the sentence $'Hesperus = 'Phosphorus$, since both symbols are different. In logic programming, it is common to make use of Herbrand interpretations, in which each constant is interpreted as itself.
This restriction is often imposed on a meta-level, but our logic allows to express that a constant $C$ is a Herbrand constant by the following formula $H(C)$ \[ C = 'C.\] For each structure $S$, $S \models H(C)$ if and only if $C^S = ('C)^S = C$, i.e., if $C$ is indeed a Herbrand constant in $S$. \section{Syntax and semantics} A \emph{vocabulary} $\Sigma$ is a set of \emph{symbols} $\upsigma$ and their associated signature, that satisfy these inductive rules: \begin{itemize} \item \U and $\B$ are symbols with signature $\U \rightarrow \B$; \item a \emph{base type} $t$ is a symbol with signature $\U \rightarrow \B$ (e.g., \U, $\B$, $\mathds{N}$, the \emph{intension type} $\In$, or any custom type); \item a \emph{typed symbol} $\sigma$ is a symbol with signature $\bar{\bt} \rightarrow \bt$, where $\bar{\bt}$ is a tuple of base types, and $\bt$ a base type. \end{itemize} Notice that we consider base types as symbols, so that they (and their intension) can occur in logic formula. For each symbol $\upsigma$, $\tau(\upsigma)$ is its signature. The \emph{arity} of $\upsigma$ is 1 for base types, and the length of the n-tuple in $\tau(\upsigma)$ for typed symbols. The arity of a typed symbol can be 0. A \emph{universe of discourse} $\mathscr{U}$ is a tuple $\langle \U^\mathscr{U}, \mathscr{U}_{int}, \mathscr{U}_{v}, \rangle$, such that: \todo{or "domain of discourse" ?} \begin{itemize} \item a set $\U^\mathscr{U}$, that includes the set of $\{\Tr, \Fa\}$ booleans and the set of \emph{intensions}; \item $\mathscr{U}_{int}$ is a total mapping from base types $\bt$ \pierre{and some typed symbols ?} to their \emph{intensions}, noted $`\upsigma^{\mathscr{U}}$, i.e., a partial function from $\Sigma$ to the set of intensions; \item $\mathscr{U}_v$ is a total mapping from base types $\bt$ to their \emph{value}, $\bt^{\mathscr{U}}$, where $\bt^{\mathscr{U}}$ is a function from $\U^\mathscr{U}$ to the set of booleans; the value of a base type specifies which elements of $\U^\mathscr{U}$ belong to the base type. \item $\mathscr{U}_v$ defines a partition of $\U^\mathscr{U}$, with one subset associated to each base type. \end{itemize} For example, the boolean function $\B^{\mathscr{U}}$ specifies the set $\{\Tr, \Fa\}$ of booleans, and $\In^{\mathscr{U}}$ specifies the set of intensions, distinct from $\B^{\mathscr{U}}$. By abuse of language, we may call the "value of a base type" the set itself. Each element of $\U^\mathscr{U}$ belongs to one base type. The \emph{domain} $\bar{\bt}^\mathscr{U}$ is the cross-product of the sets associated with each type in tuple $\bar{\bt}$. \ignore{ A \emph{value of type} $\tau$ with signature $\bar{\bt} \rightarrow \bt$, in $\mathscr{U}$, is a total function from $\bar{\bt}^{\mathscr{U}}$ to $\bt^{\mathscr{U}}$. } A $\Sigma$-interpretation $\mathscr{I}$ is a tuple $\langle \mathscr{U}, \mathscr{I}_{int}, \mathscr{I}_v \rangle$, such that: \begin{itemize} \item $\mathscr{U}$ is a universe of discourse for $\Sigma$; \item $\mathscr{I}_{int}$ is a total, surjective mapping from symbols $\upsigma$ to their intensions, noted $`\upsigma^{\mathscr{I}}$, i.e., a total surjective function from $\Sigma$ to $\In^{\mathscr{U}}$; \item $\mathscr{I}_v$ is a total mapping from symbols $\upsigma$ (with signature $\bar{\bt} \rightarrow \bt$) to their value, $\upsigma^{\mathscr{I}}$, where $\upsigma^{\mathscr{I}}$ is a total function from $\bar{\bt}^{\mathscr{U}}$ to $\bt^{\mathscr{U}}$; \item the intension (resp. value) of a base type in $\mathscr{I}$ is its intension (resp. 
value) in $\mathscr{U}$: $`t^{\mathscr{I}}=`t^{\mathscr{U}}$ and $t^{\mathscr{I}}=t^{\mathscr{U}}$; \item if any 2 symbols have identical intensions, then they have identical values. \todo{TODO: shuffling of arguments} \end{itemize} $\eval{\phi}{\mathscr{I}[x:d]}$ is an interpretation $\mathscr{I}$ where $\mathscr{I}_v$ is extended with the mapping from $x$ to $d$. The symbol(s) mapped to intension $i$ in $\mathscr{I}$ are the \emph{denoting symbols} of $i$. \ignore{An intension $i \in {\mathcal{I}}$ is \emph{denoted} in $\mathscr{I}$ iff it has at least one denoting symbol in $\mathscr{I}$.} The (unique) \emph{value of the intension}, noted $\symb(i)^{\mathscr{I}}$, is the value of anyone of its denoting symbols, noted $\symb(i)$. When an intension has only one denoting symbol, $\upsigma$, we have $\symb(`\upsigma) = \upsigma.$ \subsection{FO(Intension)} $\bar{\Sigma}$ is the vocabulary $\Sigma$ extended with these symbols: \begin{itemize} \item $`\upsigma: () \rightarrow \In$ for every symbol $\upsigma \in \Sigma$; \item $\m{arity}: \In \rightarrow \mathds{N}$; \item $\m{input}: (\In, \mathds{N}) \rightarrow \In$; \item $\m{output}: \In \rightarrow \In$; \item $\m{type}: \U \rightarrow \In$ \end{itemize} $FO_{Int}(\Sigma)$ is the language $FO(\bar{\Sigma})$ augmented with the special function application $\$(i)(\bar{d})$, where $i$ is a formula, and $\bar{d}$ is a tuple of formulae. More precisely, the set of formulae $FO_{Int}(\Sigma)$ is defined inductively as follows (beginning with the rules of FO): \begin{itemize} \item $\m{true}$ and $\m{false}$ are formulae; \item a \emph{function application}, $\upsigma(\bar{d})$ is a formula if $\upsigma$ is a symbol in $\bar{\Sigma}$ and $\bar{d}$ a tuple of formulae; \item $\lnot(\phi$), $(\phi) \lor (\psi)$ and $(\phi) = (\psi)$ are formulae if $\phi$ and $\psi$ are formulae; \item a \emph{variable} $x$ is a formula; \item an \emph{existentially quantified formula}, $\exists x \in t:(\phi)$ is a formula if $x$ is a variable, $t$ a base type, and $\phi$ a formula; \item an \emph{aggregate} $\#\{x \in t: \upsigma(x)\}$ is a formula \ignore{\item a symbol $`\sigma$ in $\bar{\Sigma} \setminus \Sigma$ is a formula;} \item $\$(i)(\bar{d})$ is a formula when $i$ is a formula, and $\bar{d}$ a tuple of formulae. \end{itemize} This minimal set of constructs can be extended with syntactic abbreviations such as $(\phi) \land (\psi$), $(\phi) \Rightarrow (\psi)$, and $\forall x: (\phi)$. An $\m{if} (\phi) \m{then} (a) \m{else} (b)$ construct can also be added easily. A \emph{free variable} $x$ is a variable that does not occur in the scope of a quantifier that declares $x$. A formula $\phi$ is \emph{well-formed} iff it has no free variable. We call $\delta_{\upsigma}$ the \emph{domain predicate} of symbol $\upsigma$: it associates the boolean value $\Tr$ to tuples that are in the domain of $\evali{\upsigma}$ (as specified by $ \tau(\upsigma)$ and \U), and $\Fa$ to any other tuples. 
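Before giving the evaluation rules, the following minimal Python sketch (ours; it deliberately ignores guards, $\bot$ and base types, and is not the formal semantics) may help fix the distinction the interpretation draws between a symbol's intension and its value, and the intended roles of $`\upsigma$ and $\$(i)(\bar{d})$.
\begin{lstlisting}[language=Python]
from dataclasses import dataclass

@dataclass(frozen=True)
class Intension:
    name: str      # one intension per symbol (no synonyms, as assumed earlier)
    arity: int

# A toy interpretation: every symbol gets an intension and a (unary) extension.
intension = {"Fever": Intension("Fever", 1), "Sneezing": Intension("Sneezing", 1)}
extension = {"Fever": {"ann"}, "Sneezing": {"ann"}}    # identical extensions

def deref(symbol):        # the `sigma construct: symbol -> its intension
    return intension[symbol]

def apply(i, *args):      # the $(i)(d) construct, evaluated via a denoting symbol
    return args[0] in extension[i.name]

print(deref("Fever") == deref("Sneezing"))           # False: intensions differ
print(apply(deref("Fever"), "ann"))                  # True: same value as Fever(ann)
print(extension["Fever"] == extension["Sneezing"])   # True: extensions coincide
\end{lstlisting}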
\ignore{ Given an interpretation $\mathscr{I}$, we define the \emph{type of a well-formed formula} as: \begin{itemize} \item $\evali{\m{type}(\upsigma(\bar{d}))} = t \text{ where } \tau(\upsigma)= \bar{t} \rightarrow t$ \item $\evali{\m{type}(\$(i)(\bar{d}))} = t \text{ where } \tau(\symb(i))= \bar{t} \rightarrow t$ \item $\evali{\m{type}(\phi)} = `\B$ otherwise \end{itemize} } Given an interpretation $\mathscr{I}$, we define the value $\evali{\phi} (\in \U \cup \{\bot\})$ of a well-formed formula $\phi$ inductively, based on the syntactical structure of $\phi$:\footnote{$\uptau, \bar{d}, \phi, \psi, x, i$ are placeholders for sub-formula of the implied type)} \footnotetext[1]{the rules above have precedence over this rule} \begingroup \allowdisplaybreaks \begin{align*} \evali{true} &= \Tr\\ \evali{false} &= \Fa\\ \ignore{ \vals{t(\bar{d})} &= \begin{cases} d \in t^{\mathscr{I}}, &\text{if } t \text{ is a type symbol } \land \bar{d} \cong (d),\\ \bot, &\text{otherwise}. \\ \end{cases}\\ } \vals{`\upsigma(\bar{d})} &= \begin{cases} `\upsigma^{\mathscr{I}}, \text{i.e., the intension of } \upsigma \text{ in } \mathscr{I}, \text{ if } \bar{d} \cong ()\\ \bot, \text{otherwise}. \\ \end{cases}\\ \vals{arity(\bar{d})} &= \begin{cases} \text{the arity of } \upsigma, \text{ if } \evali{\bar{d}} \cong (`\upsigma^{\mathscr{I}}) \text{ and } `\upsigma^{\mathscr{I}} \in \In^{\mathscr{I}}\\ \bot, \text{otherwise}. \\ \end{cases}\\ \vals{input(\bar{d})} &= \begin{cases} `t_j^{\mathscr{I}}, &\text{ if } \evali{\bar{d}} \cong (`\upsigma^{\mathscr{I}}, j), `\upsigma^{\mathscr{I}} \in \In^{\mathscr{I}}, j \in \mathds{N} \\ & \text{ and } \tau(\upsigma) \cong (t_1, ..., t_j, ..., t_n) \rightarrow t\\ \bot, &\text{otherwise}. \\ \end{cases}\\ \vals{output(\bar{d})} &= \begin{cases} `t^{\mathscr{I}} , &\text{ if } \evali{\bar{d}} \cong (`\upsigma^{\mathscr{I}}), `\upsigma^{\mathscr{I}} \in \In^{\mathscr{I}} \\ & \text{ and } \tau(\upsigma) \cong (t_1, ..., t_n) \rightarrow t\\ \bot, &\text{otherwise}. \\ \end{cases}\\ \vals{type(\bar{d})} &= \begin{cases} `t^{\mathscr{I}} , &\text{ if } \bar{d} \cong (x) \text{ and $x$ in the scope of } \exists x \in t\\\ `t^{\mathscr{I}} , &\text{ if } \bar{d} \cong (\upsigma(\bar{d})) \text{ and } \tau(\upsigma)\cong \bar{t} \rightarrow t\\ `t^{\mathscr{I}} , &\text{ if } \bar{d} \cong (\$(i)(\bar{d})) \text{ and } \tau(\symb(i))\cong \bar{t} \rightarrow t\\ `\B^{\mathscr{I}}, &\text{otherwise}. \\ \end{cases}\\ \evali{\upsigma(\bar{d})} &= \begin{cases} \text{if none of the above rules apply:}\\ \upsigma^{\mathscr{I}}(\evali{\bar{d}}), \text{if } \delta_\upsigma(\evali{\bar{d}}) = \Tr \\ \bot, \text{otherwise}. 
\\ \end{cases}\\ \vals{\neg(\phi)} &= \begin{cases} \Tr, & \text{if } \vals{\phi} = \Tr;\\ \Fa, & \text{if } \vals{\phi} = \Fa;\\ \bot, & \text{otherwise}.\\ \end{cases}\\ \vals{(\phi) \lor (\psi)} &= \begin{cases} \Tr, & \text{if } (\vals{\phi} = \Tr \land \evali{type(\psi)}=`\B) \text{ or } (\vals{\phi} = \Fa \text{ and }\vals{\psi} = \Tr);\\ \Fa, & \text{if } \vals{\phi} = \Fa \text{ and } \vals{\psi} = \Fa;\\ \bot, & \text{otherwise.} \end{cases}\\ \vals{(\phi) = (\psi)} &= \begin{cases} \Tr, & \text{if } \vals{\phi} \neq \bot \text{ and } \evali{\m{type}(\phi)}=\evali{\m{type}(\psi)} \text{ and } \vals{\phi} = \vals{\psi};\\ \Fa, & \text{if } \vals{\phi} \neq \bot \text{ and }\evali{\m{type}(\phi)}=\evali{\m{type}(\psi)} \text{ and } \vals{\psi} \neq \bot;\\ \bot, & \text{otherwise.} \end{cases}\\ \vals{\exists x \in t:\phi} &= \begin{cases} \Tr, &\text{if, for some } d \in t^\U, \eval{\phi}{\mathscr{I}[x:d]} = \Tr;\\ \Fa, &\text{if, for all } d \in t^\U, \eval{\phi}{\mathscr{I}[x:d]} = \Fa;\\ \bot, & \text{otherwise.}\\ \end{cases}\\ \vals{\#\{x \in t: \upsigma(x)\}} &= \begin{cases} \text{number of $x$ in $t$ such that } \evali{\upsigma(x)} = \Tr, \text{if } \forall x \in t: \evali{\B(\upsigma(x))}=\Tr;\\ \bot, \text{otherwise.}\\ \end{cases}\\ \evali{\$(i)(\bar{d})} &= \begin{cases} \evali{f(\bar{d})} & \text{where f is } \symb(\evali{i}), \text{if } \tcc{i} \text{ and } \evali{i} \in \In\\ \bot, & \text{otherwise.} \end{cases}\\ \ignore{ \evali{\delta(i)(\bar{d})} &= \begin{cases} \delta_{\sigma}(\evali{\bar{d}}) & \text{where } \sigma = \symb(\evali{i}), \text{if } \evali{i} \in \In \\ \bot, & \text{otherwise.} \end{cases} } \end{align*} \endgroup \todo{definitions ?} Notice that: \begin{itemize} \item a disjunction is asymetric: to be defined, its first operand must be defined; this reduces the computational complexity of checking definedness; \item $\evali{\$(\m{type}(\phi))(\phi)} = \Tr$, i.e., $\phi$ is a member of the value of the type of $\phi$; \item $\evali{\$(`\sigma)(\bar{d})} = \evali{\sigma(\bar{d})}$ \end{itemize} \ignore{, and $\evali{\delta(`\sigma)(\bar{d})}$ is $\delta_\sigma(\evali{\bar{d}})$} \subsection{Definedness} We say that a formula $\phi$ is \emph{well-defined} iff $\evali{\phi}$ is defined (i.e., $\evali{\phi} \neq \bot$) for any $\mathscr{I}$. We follow mathematical tradition by requiring sentences to be well-defined formula. Following the approach presented in \cite{DBLP:journals/entcs/BerezinBSCGD05}, we define the \emph{well-defined condition} of a formula $\phi$, noted $\tcc{\phi}$, as follows: \begin{equation*} \begin{split} \tcc{true} &\equiv true\\ \tcc{false} &\equiv true\\ \tcc{`\sigma(\bar{d})} &\equiv \bar{d} = () \\ \tcc{\m{arity}(\bar{d})} &\equiv \bar{d} \cong (i) \land \In(i)\\ \tcc{\m{input}(\bar{d})} &\equiv \bar{d} \cong (i, n) \land \In(i) \land \mathds{N}(n) \land 1 < n < \text{arity}(i)\\ \tcc{\m{output}(\bar{d})} &\equiv \bar{d} \cong (i) \land \In(i)\\ \tcc{\sigma(\bar{d})} &\equiv \ignore{\tcc{d_1} \land ... 
\land \tcc{d_n} \land} \delta_{\sigma}(\bar{d}) \\ \tcc{\lnot(\phi)} &\equiv \B(\phi) \\ \tcc{(\phi) \lor (\psi)} &\equiv \B(\phi) \land (\m{type}(\phi)=`\B) \land (\phi \lor \B(\psi)) \\ \tcc{(\phi) = (\psi)} &\equiv \tcc{\phi} \land \tcc{\psi} \land (\m{type}(\phi)=\m{type}(\psi))\\ \tcc{\exists x \in t : \phi} &\equiv (\exists x \in t : \B(\phi) \land \phi) \lor (\forall x \in t : \B(\phi)) \\ \tcc{\#\{x \in t: \upsigma(x)\}} &\equiv \forall x \in t: \B(\upsigma(x)) \\ \tcc{\$(i)(\bar{d})} &\equiv \In(i) \land \tcc{(\symb(i)) (\bar{d})} \\ \ignore{ \tcc{\delta(i)(\bar{d})} &\equiv \In(i) \land \tcc{(\symb(i)) (\bar{d})} \\ } \end{split} \end{equation*} \ignore{ The \emph{co-domain condition for $\Sigma$}, $\m{range}(\Sigma)$, states that the output-domain of every symbol is specified by its signature $\bar{t}_\upsigma \rightarrow t_\upsigma$: $\forall \upsigma \in \Sigma: \forall \bar{x} \in \bar{t}_\upsigma: \upsigma(\bar{x}) \in t_\upsigma$. } \paragraph{Theorem} $\phi$ is well-defined iff $\evali{\tcc{\phi}} = \Tr$ for any $\mathscr{I}$. \paragraph{Proof} Each rule of the well-defined condition is constructed by \emph{abstracting} (\cite{DBLP:conf/plilp/CousotC92}) the corresponding rule in the evaluation of a formula in $\mathscr{I}$: the abstract value is \Fa if the concrete value is $\bot$, \Tr otherwise. Therefore, $\evali{\tcc{\phi}}$ is true whenever $\phi$ is defined. The condition in the theorem expresses the requirement that $\phi$ be defined in every interpretation. \paragraph{Example} Consider $\Sigma = \{\U: \U \rightarrow \B, \B: \U \rightarrow \B, \In: \U \rightarrow \B, f: \In \rightarrow \In \}$. $\bar{\Sigma}$ is $\Sigma \cup \{`\U, `\B, `\In, `f\}$. \ignore{ The co-domain condition is \begin{equation*} \begin{split} &(\forall o \in \U: \U(o) \in \B) \\ \land &(\forall o \in \U: \B(o) \in \B) \\ \land &(\forall o \in \U: \In(o) \in \B) \\ \land &(\forall o \in \In: f(o) \in \In) \\ \land &(`\U() \in \In) \land (`\B() \in \In) \\ \land &(`{\mathcal{I}}() \in \In) \land (`f() \in \In) \end{split} \end{equation*} } For $\phi \equiv (\exists x \in \In: \$(x)(x)=x)$, the well-defined condition $\tcc{\phi}$ is \[ (\exists x \in \In: \tcc{\$(x)(x)=x} \land \$(x)(x)=x) \lor (\forall x \in \In: \tcc{\$(x)(x)=x}) \] One can show that $\evali{\tcc{\phi}}$ is \Fa, e.g., by considering $x=`\U()$. However, by adding an appropriate ``guard'', $\phi$ can be made well-defined: \[ \phi' \equiv \exists x \in \In: arity(x)=1 \land input(x,1)=output(x) \land \$(x)(x)=x \] By grounding, $\phi'$ reduces to $f(`f())=`f()$ whose well-defined condition $\In(`f())$ is \Tr in every $\mathscr{I}$. Indeed, $\phi'$ is well-defined: it has a value in every interpretation. \section{Use cases/examples} \maurice{Why not enumerate the elements of type symbol? I understand, every function is a symbol. On the other hand, there is no need to dereference every function, so making explicit which ones will be used as concepts in the application adds clarity to the application code?} Earlier, we showed the ``Symptoms Triage'' example, which demonstrates the power of intensional objects in one single statement. Now we will show a few more complex examples. \subsection{Set game} Our first example is a simplified\footnote{This game is played in multiple rounds where players consecutively make their moves.
Since the dynamics of the game do not have an impact on what we want to demonstrate, we will keep it simple and limit our example to one move of one player.} version of the Set game, described in~\cite{davis2003card}. Twelve cards are dealt from a deck of 81 different cards. Each card has four distinct features: a shape (diamond, squiggle, oval), a number (one, two, or three), a fill (solid, striped, or open), and a color (red, green, or purple). The player's goal is to find a \textit{magic set} among these 12 dealt cards. A \textit{magic set} consists of exactly three cards such that, for each feature, the three cards either all have the same value or all have different values. We propose the following vocabulary for modelling this problem: \begin{lstlisting}[language=idptest] vocabulary { type Card := { 1..12 } //The set of cards //Possible values of the features type Color := {red, green, purple} type Number := {one, two, three} type Fill := {solid, striped, open} type Shape := {diamond, squiggle, oval} //Differentiating symbols feature : (Symbol) -> Bool //Value of the feature for each card colorOf : (Card) -> Color numberOf : (Card) -> Number fillOf : (Card) -> Fill shapeOf : (Card) -> Shape sel : (Card) -> Bool //Selected cards } \end{lstlisting} The input of the problem is a partial structure that interprets the functions $colorOf$, $numberOf$, $fillOf$, and $shapeOf$. The desired output is a full expansion of that structure that satisfies the goal of the game, giving the interpretation of $sel$ and $feature$. It is clear that our statement of the goal of the game in natural language ranges over the features. In FO, we cannot quantify a statement over the set of features. To express that for each feature, the 3 selected cards must have either the same value, or all different values, we have to write a separate sentence for each feature. With the intensional extension of FO however, we can quantify over vocabulary symbols and write these constraints as one statement. Here is the theory that expresses the game with such an extension: \begin{lstlisting}[language=idptest] theory { //The set of features feature := {`colorOf, `fillOf, `numberOf, `shapeOf} //All cards are different ! x , y in Card: x~=y => (? s in Symbol: feature(s) & $(s)(x) ~= $(s)(y)). //Selected cards constraint ! s in Symbol : feature(s) => ( (! x in sel, y in sel: x ~= y => $(s)(x) = $(s)(y)) | (! x in sel, y in sel: x ~= y => $(s)(x) ~= $(s)(y)) ). //Three cards are selected #{ x in Card: sel(x) } = 3. } \end{lstlisting} \subsection{Dangerous temperatures} \pierre{I wish this example was more realistic.} Assume we want to model a controller for a dynamic system of electrical devices in one house. There are many different properties of devices, and their value can be dependent on other devices. The house controller receives the values of the properties of all devices at intervals. The controller should ensure that the house is safe at any time point. Among many different properties, let us focus on the temperature and up-time of the devices. Knowing the dangerous temperatures and up-time of each device, we would certainly like to have a property that no device exceeds the critical point at any moment. For example, the laptop temperature should always be less than 100 degrees Celsius, while for an oven it should not exceed 400. Additionally, no device should work for more than 10 time units (time is abstract in our example) except for the light bulb, which can work indefinitely.
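Before turning to the formalization, the intended safety requirement can be restated in a few lines of plain Python (an illustration of our own, with invented sample readings; the limit values mirror the enumerations used in the theory below): every property of every device must stay within the device-specific limit at every time point.
\begin{lstlisting}[language=Python]
# Device-specific limits, one map per property (cf. maxTemp / maxUpTime below).
max_temp   = {"laptop": 100, "oven": 350, "lightbulb": 150}
max_uptime = {"laptop": 10,  "oven": 10,  "lightbulb": 100}
limits     = {"temp": max_temp, "upTime": max_uptime}  # property -> its limit map

# Sample readings: (property, time, device) -> observed value.
readings = {("temp", 1, "laptop"): 65, ("upTime", 1, "laptop"): 3,
            ("temp", 1, "oven"): 180,  ("upTime", 1, "oven"): 2}

safe = all(value <= limits[prop][device]
           for (prop, _, device), value in readings.items())
print(safe)    # True for these sample readings
\end{lstlisting}
The \texttt{limits} dictionary plays the role of the $limits$ function below, mapping each property to its device-specific maximum.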
Important for this example is that parameters have different ordered codomains. So even if we abstract over parameters the problem remains: we need separate statements per property. But using intensions we can define functions that map symbols to symbols, which is sufficient to overcome this problem. We use the following simplified vocabulary to express this example: \begin{lstlisting}[language=idptest] vocabulary V { type Time := {1..100} type Temp := Int type Device := {laptop, oven, lightbulb} //Properties temp : (Time ** Device) -> Temp upTime : (Time ** Device) -> Time //Maximum of property per device maxTemp : (Device) -> Temp maxUpTime : (Device) -> Time //Properties to their maximums limits : (Symbol) -> Symbol //Set of properties property : (Symbol) -> Bool } \end{lstlisting} We will omit the specification of a dynamic system as it is not in the primary scope of this paper. The following formalization represents the described example: \begin{lstlisting}[language=idptest] theory T : V { //Action theory ... //Maximum temperatures and up-time maxTemp := {laptop -> 100, oven -> 350, lightbulb -> 150} maxUpTime := {lightbulb -> 100} else 10 //Enumerations of property and limits property := {`temp, `upTime} limits := {`temp -> `maxTemp, `upTime -> `maxUpTime} //No device at any time point exceeds the critical value for any parameter ! s in Symbol : property(s) => (! t in Time : ! d in Device : $(s)(t,d) =< $(limits(s))(d)). } \end{lstlisting} It is important to note how we abstract over functions with different codomains using another function that maps symbols to symbols. \marc{I find this inherently a good example, but it took me too long to understand the point. There is a set of attributes, each value ranging over some ordered codomain, and each attribute having some minimal and maximal safe value. Can we express this property in one statement? Notice that I would also abstract over the order of the ordered domain of an attribute. } \pierre{ A possible application in the legal field is in reasoning about degrees of relationship for the calculation of the inheritance tax (https://www.kbcbrussels.be/retail/en/family/inheritance-tax.html). The IDP theory would define what is a parent, child, grand-parent, spouse, aunt and uncle, ... : these are relationships. Now, some relationships are 1st order, 2nd order, ... The order determines, among other things, the inheritance tax rate. To find the heirs and their tax rate, you would quantify over relationship types. } \marc{Yes, that is a nice example. Why don't you give it a try, Pierre? E.g., tax on inheritance for family of the 2nd level is double that of family of the first level.} \section{Related Work} The term \textit{intensional objects} is associated with intensional logic \cite{sep-logic-intensional}. An overview of first-order intensional logic is available in~\cite{fitting2004first}. Our approach mostly diverges from intensional logic in the sense of focusing on what objects are designating. Thus, concepts such as ``synonymy'' are not present in our logic. \marc{So, our approach is characterised by being a subset of Fitting's? There must also be something we can do that he can't. And if not, then this paper should not be written. Do you have a clear understanding of this? I don't yet. Maybe it is my responsibility to find out. If somebody knows, please write a note on this here. What can we do that is not possible in Fitting's logic, and vice versa?
} The motivation that drives us into the direction of intensional objects has a strong connection with concepts as higher-order logics, macros, or templates. While higher-order semantics is different from intensional semantics, some logics combine their distinct features. For instance, HiLog~\cite{chen1993hilog} is a logic with higher-order syntax, while its model theory is first-order. Variables in HiLog range over a set of intensions, so it has intensional semantics. This idea has a strong relationship with ours. Treatment of HiLog predicate and function symbols depends on their place in a formula: when applied to arguments, they evaluate through their extension; when not applied, they are terms (intensions). HiLog syntax fits nicely in the context of logic programming. It is implemented in Flora-2~\cite{yang2005flora} and partially in XSB~\cite{swift2012xsb} systems. By contrast, our syntax allows the knowledge expert to explicitly distinguish references to the value and to the concept itself. \pierre{ remove ? On the other hand, extending FO($\cdot$) language with higher-order syntax would naturally lead to higher-order semantics. Thus, we have to discriminate between these two syntaxes. } \section{Formal semantics} \pierre{This section is kept for historical reason} \maurice{I am missing the semantics of arity, argdom and val. Maybe, you should show the structure for your "running" example so that the reader has a better feeling of how this deviates from a standard structure? This is supposed to be the core of the paper(?) It looks a bit tiny} A structure \struct{} over a vocabulary \voc{} consists of: \begin{itemize} \item a non-empty domain $D$, and an assignment of a disjoint set of domain elements for every type \type. Specifically, the type associated with $\mathbb{B}$ must be the set $\{True, False\}$, and the set \SymbolType{} must contain exactly one element for every function symbol in \voc{} except \arity{}, \argdom{} and \val{}. \mvdh{If we keep functions and predicates separated, we should add an additional rule} \item an assignment of an $n$-ary function over the domain for every function symbol $f \in \voc{}$, \item an assignment of a domain element $d$ to every variable symbol $x$, and \item an assignment of an element $f_{d}$ from the set assigned to \SymbolType{} for every function symbol $f$ \end{itemize} We restrict structures based on the information in the type system: the function over $D$ assigned to every function $f$, and the domain element assigned to every variable $x$ must respect their respective types. The assignment by \struct{} of functions over the domain $D$ to functions $f$ in \voc{} is called the \emph{valuation} function of \voc{}. The value assigned to $f$ by this valuation function is written as $f^{\struct{}}$. \marc{There is no notion of a model Yet? } \section{Formal syntax} \pierre{This section is kept for historical reason} \pierre{ A {\em definition} of a function $\sigma$ is a set of rules of the form $\forall \overline{x}: \forall y: \sigma(\overline{x})=y \leftarrow \phi(\overline{x},y).$, where $\overline{x}$ denotes a tuple of variables and $y$ a variable. Definitions that have several rules can be reduced to definitions with one rule by 1) renaming the head variables to a unique tuple $\overline{x}$ and $y$ and 2) joining the bodies of the rules by a disjunction. In the most general case, definitions may be recursive, containing rules that define a function in terms of the itself. 
The interpretation of a definition is then given by an iterative construction process of a lower- and upper-bound value for each element of the domain of $\sigma$ that reaches a fixed point corresponding to the value of the function for that element of the domain. In most legal applications however, definitions are not (co-)recursive: the dependency graph between function symbols implied by the definitions has no loop. In that case, and when a definition has only one rule, it simply states an equivalence between the head and the body of the (unique) rule. This interpretation of rules was implicit already in Clark's work on completion semantics For the sake of brevity, we'll limit our presentation to this simpler case. } Before we introduce the syntax of \langu{}, we sketch a simple type system, consisting of a set $\Delta$ of base types $\delta$ and their composite types, i.e. any type $(\delta_{1} \times \ldots \times \delta_{n}) \rightarrow \delta$. Within the set of base types, we identify the types \SymbolType{} and \Bool{}. The type \Bool{} is reserved for the well-known type of boolean values, while the type \SymbolType{} serves as the type of intensional objects. A vocabulary $\voc$ consists of an infinite supply of typed variable symbols, and a finite set of typed symbols $f/n$, where $n \geq 0$ and $n$ is known as the \emph{arity} of the symbol. Without loss of generality, we can refer to a symbol $f/n \in V$ as simply $f$, specifically when the arity is irrelevant or clear from context. \marc{Variable symbols are often considered as logical symbols; the role of the vocabulary is mostly to be the set of concepts that we enrich logic with, to have a language to talk about the application domain.} We note that symbols of the type $(\delta_{1} \times \ldots \times \delta_{n}) \rightarrow \Bool{}$ are commonly referred to as \emph{predicates}. We will refer to variable symbols of the \SymbolType{} type as \emph{intensional} variables.\marc{What is the symbol type? \SymbolType{}? Then say so.} In every vocabulary \voc{}, we also include the built-in function symbols $\arity/1$ and $\argdom/2$, and the built-in higher-order function $\val/1$. These functions allow access to information about intensional objects, such as the arity or the range of definition for each of their arguments. The function $\val/1$ will define the connection between intensional objects and their extensional value. \marc{This is verbose, redundant. Try to be direct and precise: ``The function $\val/1$ will define ... '' $\rightarrow$ give directly its specification.} A term $t$ is either: \begin{itemize} \item a variable $x \in \voc$. \item a symbol $f/n \in \voc$ applied to $n$ terms $t_1 \ldots t_n$, written as $f(t_1, \ldots, t_n)$. \item the intension of a non built-in symbol $S \in \voc$, written as $\deref{S}$, or \marc{Mathemtical language gets very unprecise here. I dont think any reader would understand what we mean here. It is absolutely necessary to specify that in logical sentences, we need to refer to the intension of a symbol at some occasions and to the extension at other occations, and that a language mechanism is necessary to make clear to what we want to refer. probably we should already have introduced an example showing the distinction. 
} \item an application of the built-in higher-order function $\val/1$ to a term $t$, subsequently applied to a number of terms $t_1,\ldots,t_n$, written as $\val(t)(t_1,\ldots,t_n)$. \end{itemize} \todo{use compactitem from paralist package?} Writing \deref{S} instead of $S$, we are referring to the symbol $S$'s intension, an object of type \SymbolType{}, rather than its extension. We refer to this as \emph{dereferencing} $S$. Note that we exclude the \emph{built-in} functions $\arity$, $\argdom$ and $\val$ from being dereferenced. \marc{I think it is me who introduced the term dereferencing. Did you check dictionaries? It is probably the wrong term. What we want to say is: in a sentence we want to talk about the intensional value of a symbol or of the extensional value. So, a symbol refers to one or the other, and we must be able to know which one it is. So I think ``referencing'' is a more correct term. } A formula $\phi$ over a vocabulary \voc{} is defined inductively as \begin{compactitem} \item a term $t$, \item $\lnot \phi$, $\phi \land \psi$, $\phi \lor \psi$, $\phi \Rightarrow \psi$ and $\phi \Leftrightarrow \psi$ where $\phi$ and $\psi$ are formulas, \item $\forall x \in \type{} : \phi$ and $\exists x \in \type{} : \phi$ where $\phi$ is a formula and $x$ is a variable of \voc{}. \item A definition, consisting of a set of rules of the form $\forall \overline{x}: \forall y: \sigma(\overline{x})=y \leftarrow \phi(\overline{x},y).$ \end{compactitem} \mvdh{What about binary quants? Where does T come from? Unary predicate over Bool or a specific type.} \maurice{I find it a bit weird to read that a term is a formula and then having to imagine what the negation, $\land$, $\lor$, ... of terms is (unless you mean a term of type boolean). Show your running example in this syntax.}
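A possible rendering of the running \emph{symptoms triage} constraint in this syntax (our sketch; it assumes a type $\mathit{Person}$ and borrows the cardinality aggregate of the earlier sections, which is not part of the inductive definition above) is:
\begin{gather*}
\forall x \in \mathit{Person} : \mathit{Test}(x) \Leftrightarrow \\
\#\{\, s \mid s \in \{\deref{Coughing}, \deref{Sneezing}, \deref{Fever}\} \land \val(s)(x) \,\} \geq 2
\end{gather*}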
\section{Introduction} The Bekenstein-Hawking entropies of the BPS saturated black holes have been studied in four-, five- and ten-dimensions from the nonperturbative aspects of string theory \cite{CY,CT,SV,HS,MS,Tsey,GKT,BB}. The black holes in ten-dimensions are constructed by the intersecting D-branes. The black holes are dual to each other under the T-duality transformation. The four- and five-dimensional black holes are obtained by dimensional reduction from the ten-dimensional black holes. The method to obtain the non-extremal black holes from the extremal black holes has been studied. Their entropies are interpreted in terms of the microscopic states. The effect of the graybody factor on the blackbody spectrum of the Hawking radiation is also derived from the D-branes. On the other hand, the Euler numbers of the black holes are investigated in four-dimensions~\cite{HHR,GK,LP}. The Euler number $\chi$ of an $n$-dimensional manifold is the sum of the Betti numbers $B_p$: \begin{eqnarray*} \chi = \sum_{p=0}^{n}(-1)^p B_p. \end{eqnarray*} The Betti numbers are integers. Therefore the Euler numbers are also integers. We can also calculate the Euler numbers by using the $n$-dimensional Gauss-Bonnet action, which integrates the curvatures $R_{ij}$ of the black holes. The $n$-dimensional Gauss-Bonnet action is \begin{eqnarray*} \chi &=& \frac{1}{(4\pi)^{n} n!} \int dx^n \epsilon^{i_1i_2 \cdots i_n} R_{i_1i_2}R_{i_3i_4} \cdots R_{i_{n-1}i_n} . \end{eqnarray*} The black holes in four-dimensions have a period in the Euclidean time coordinate. The period is defined by the inverse of the surface gravity. By using this relation, we can obtain the integer-valued Euler numbers. In the same way as in four-dimensions, we can calculate the Euler numbers of the black holes in ten-dimensions using the ten-dimensional Gauss-Bonnet action. The Euler numbers of the black holes in ten-dimensions have not been studied so far. We consider the Euler numbers of the black holes which are realized by the intersecting D-branes in ten-dimensions. The black holes of the intersecting D-branes in ten-dimensions also have periods with respect to the compactification radii. The radii are usually taken to be arbitrary. For generic radii, the Euler numbers calculated by the ten-dimensional Gauss-Bonnet action are non-integers. However, the Euler numbers must be integers by definition. To avoid this difficulty, we propose a way to constrain the compactification radii. We reflect on the physical reasons why the values of the compactification radii must be fixed. We treat the black holes in ten-dimensions. These have six compactified directions. The entropies of the black holes in ten-dimensions are expressed by the compactification radii and the charges. If we rewrite the entropies using the quantized charges and the Newton's constant, the entropies are independent of the compactification radii. Therefore it seems unnecessary to fix the values of the compactification radii. However, the four-dimensional Newton's constant $G_{(4)}$ depends on the values of the compactification radii, \begin{eqnarray*} G_{(4)} &=& G_{(10)}/L_1L_2L_3L_4L_5L_6 , \end{eqnarray*} where $L_i$ are the compactification radii. $G_{(10)}$ is the ten-dimensional Newton's constant, \begin{eqnarray*} G_{(10)} = 8\pi^6 g^2 \end{eqnarray*} with $\alpha' = 1$. 
We find that the compactified radii depend on the radius of the horizon ($\mu$) and the charges, using the method of the surface gravities of the compactified directions, discussed in section 3. Then the four-dimensional Newton's constant can also be rewritten in terms of $\mu$ and the charges. We are interested in the behavior of the four-dimensional Newton's constant in the BPS limit, $\mu \to 0$. As a result, we find that the Newton's constant vanishes in the BPS limit. Another physical reason to fix the compactification radii is to avoid the singular effects of the horizon in the compactified directions. We recall the way to define the period in the time direction. If no singular effects of the horizon exist in the $t-r$ directions, then the topology of these directions is ${\bf R^2}$, and the Euler number of these directions is 1. Using the Gauss-Bonnet theorem, we find that the Euclidean time coordinate has a period which is the inverse of the surface gravity. Therefore we need to take this period in the time direction to avoid the singular effects of the horizon. Similarly, we need to fix the compactification radii to avoid the singular effects of the horizon in the compactified directions. We find that the compactification radii are the inverses of the surface gravities in the compactified directions using the Gauss-Bonnet theorem. We discuss this point in section 3. The purpose of this paper is to constrain the compactification radii in order to obtain the integer-valued Euler numbers for the non-extremal black holes of the intersecting D-branes in ten-dimensions. We further study the microscopic interpretations of their entropies. We consider the black holes of the intersecting D-branes which have some compactification radii in ten-dimensions~\cite{CT,Tsey,GKT}. We assume that the metrics only depend on $r = \sqrt{x^2+y^2+z^2}$. In four-dimensions, the black holes have a period in the Euclidean time coordinate. The period is defined by the inverse of the surface gravity. We extend this construction to constrain the compactified radii in ten-dimensions. We use the inverse of the surface gravities in the compactified directions to constrain these radii. As a result, we are able to obtain the integer Euler numbers. We recall that the temperatures are defined by the surface gravities in the time coordinates. The relation between the radii and the surface gravities generalizes the temperatures of the black holes. Using these results, we further obtain the entropies of the non-extremal intersecting D-branes. These entropies are invariant under T-duality transformation. In the BPS limit, we obtain the finite and non-vanishing entropies only for two intersecting D-branes. We observe that they can be interpreted as the product of charges of each D-brane. We then study the entropies of the boosted metrics. We obtain the well-known relation between the entropies and the quantized D-brane charges and the internal momenta of the intersecting D-branes. These relations are the same as those of the microscopic D-brane picture~\cite{SV,HS,MS}. The organization of this paper is as follows. In section 2, we review the way to construct the intersecting D-branes of the type IIA and IIB superstrings. We explain how to obtain the non-extremal black holes from the extremal black holes. In addition, we define the boost transformation of the metrics. In section 3, we consider the surface gravities in the compactified directions to constrain the compactification radii. 
As a result, we obtain the Euler numbers which are integers. In section 4, we calculate the entropies of the non-extremal intersecting D-branes and study them in the BPS limit. We further discuss the interpretation of these entropies in terms of the microscopic states. \section{Non-extremal Intersecting D-branes in ten-dimensions} In this section we review the way to construct the intersecting D-branes~\cite{Tsey}. We then explain how to obtain non-extremal black holes from the extremal black holes and to obtain the boosted metrics. We treat the intersecting D-branes in ten-dimensions with the three-dimensional transverse directions with respect to the D-branes. This means that the harmonic functions of these metrics only depend on $r = \sqrt{x^2+y^2+z^2}$. We introduce the metrics and the field strengths in association with the D-branes. The metrics of the D$n$-branes wrapped on the $i_1, i_2, \cdots i_n$- directions are \begin{eqnarray} ds^2_{10} = H^{1/2} \big[ H^{-1}(-dt^2 + dx_{i_1}^2 + \cdots + dx_{i_n}^2 ) + dr^2 + r^2 d\Omega^2_{10-n-2} \big] . \end{eqnarray} $H$ is the harmonic function. The field strengths are \begin{eqnarray*} {\cal F} = \left \{ \begin{array}{@{\,}ll} dt \wedge dH^{-1} \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_n} & (n\le 3; \ electric) \\ \ast(dt \wedge dH \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_n}) & (n\ge 3; \ magnetic) , \end{array} \right. \end{eqnarray*} where $\ast$ is the Hodge dual in ten-dimensions. In $n=3$, the field strength is self-dual. Then they have the field strengths both of electric and magnetic nature. For the intersecting D-branes, ${\cal F}$ are the sum of the field strengths of each D-brane. We obtain the metrics of the type IIA string theory from the intersecting M-branes by dimensional reductions. The intersecting M-branes with the three-dimensional transverse directions with respect to the M-branes are $(2,2,5,5)_M$, $(5,5,5)_M$, $(2,5,5)_M$, $(2,2,5)_M$, and $(5,5)_M$, where $(5,5)_M$ denote two intersecting M5-branes, and so on~\cite{CT,Tsey,GKT}. We subsequently review the metrics of three intersecting D-branes. In ten-dimensions, we can obtain the (4,4,4) metric from the intersecting M-branes $(5,5,5)_M$ upon dimensional reduction along a common direction of three intersecting M5-branes : \begin{eqnarray} ds^2_{10} &=& (H_1 H_2 H_3)^{1/2} \big[ - H_1^{-1} H_2^{-1} H_3^{-1}dt^2 \nonumber \\ && + H_1^{-1} H_2^{-1} (dx_1^2 + dx_2^2) + H_1^{-1} H_3^{-1} (dx_3^2 + dx_4^2) \nonumber \\ && + H_2^{-1} H_3^{-1} (dx_5^2 + dx_6^2) + dr^2 + r^2 d\Omega^2_2 \big] . \end{eqnarray} The field strength ${{\cal F}_4}_{(4,4,4)}$ and the dilaton $\phi$ are \begin{eqnarray} {{\cal F}_4}_{(4,4,4)} &=& (\partial_r H_1 \ d\theta \wedge d\phi \wedge dx_6 \wedge dx_7 \nonumber \\ && + \partial_r H_2 \ d\theta \wedge d\phi \wedge dx_4 \wedge dx_5 + \partial_r H_3 \ d\theta \wedge d\phi \wedge dx_1 \wedge dx_2), \\ e^{-2\phi} &=& (H_1H_2H_3)^{1/2}, \end{eqnarray} where $H_i = 1+ \frac{Q_i}{r}$. We denote this metric as the following~\cite{BB} : \begin{eqnarray} (4,4,4) &=& \mbox{ {\scriptsize $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & \times & \times & - & - & - & - & - & \\ \times & \times & \times & - & - & \times & \times & - & - & - & \\ \times & - & - & \times & \times & \times & \times & - & - & - & . \end{array} \right. 
$} } \label{444} \end{eqnarray} Similarly we can obtain the $(2,4,4)$ metric, from $(2,5,5)_M$ upon dimensional reduction along a direction of two intersecting M5-branes, \begin{eqnarray} (2,4,4) &=& \mbox{ {\scriptsize $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & - & - & - & - & - & - & - & \\ \times & \times & - & \times & \times & \times & - & - & - & - & \\ \times & - & \times & \times & \times & - & \times & - & - & - & . \end{array} \right. $} } \end{eqnarray} The field strength ${{\cal F}_4}_{(2,4,4)}$ and the dilaton $\phi$ are \begin{eqnarray} {{\cal F}_4}_{(2,4,4)} &=& ( dt \wedge \partial_r H_1^{-1} \ dr \wedge dx_1 \wedge dx_2 \nonumber \\ && + \partial_r H_2 \ d\theta \wedge d\phi \wedge dx_2 \wedge dx_6 + \partial_r H_3 \ d\theta \wedge d\phi \wedge dx_1 \wedge dx_5), \\ e^{-2\phi} &=& (H_1^{-1}H_2H_3)^{1/2}. \end{eqnarray} Along a direction of the M5-brane, we can obtain the $(2,2,4)$ metric, \begin{eqnarray} (2,2,4) &=& \mbox{ {\scriptsize $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & - & - & - & - & - & - & - & \\ \times & - & - & \times & \times & - & - & - & - & - & \\ \times & \times & - & \times & - & \times & \times & - & - & - & . \end{array} \right. $} } \end{eqnarray} The field strength ${{\cal F}_4}_{(2,2,4)}$ and the dilaton $\phi$ are \begin{eqnarray} {{\cal F}_4}_{(2,2,4)} &=& (dt \wedge \partial_r H_1^{-1} \ dr \wedge dx_1 \wedge dx_2 \nonumber \\ && + dt \wedge \partial_r H_2^{-1} \ dr \wedge dx_3 \wedge dx_4 + \partial_r H_3 \ d\theta \wedge d\phi \wedge dx_2 \wedge dx_4), \\ e^{-2\phi} &=& (H_1H_2H_3^{-1})^{-1/2}. \end{eqnarray} In these constructions, we have regarded the D2-branes as electric and the D4-branes as magnetic objects. Moreover, we can obtain the remaining metrics by T-duality transformation from these metrics in the $i$-th direction~\cite{GKT} : \begin{eqnarray*} g'_{ii} = 1/g_{ii}, \quad \exp(-2\phi') = g_{ii} \exp(-2\phi) . \end{eqnarray*} In our notation, the T-duality transformation is realized by interchanging $\times$ and $-$. For example, in the three intersecting D-branes case, \begin{eqnarray} \mbox{ {\scriptsize $\left\{ \begin{array}{cc} \times & \\ \times & \\ - & \end{array} \right. $} } \leftrightarrow \mbox{ {\scriptsize $\left\{ \begin{array}{cc} - & \\ - & \\ \times & . \end{array} \right. $} } \end{eqnarray} If we apply the T-duality transformation on the metric $(4,4,4)$ (in (\ref{444})) in the first direction, this metric is changed to $(3,3,5)$. Similarly, we can obtain the following metrics : \begin{eqnarray*} && (4,4,4) \leftrightarrow (3,3,5) \leftrightarrow (2,4,4) \ or \ (2,2,6) \\ && \leftrightarrow (3,3,3) \ or \ (1,3,5) \leftrightarrow (2,2,4) \ or \ (0,4,4) \\ && \leftrightarrow (1,3,3) \leftrightarrow (2,2,2) . \end{eqnarray*} The metrics of the intersecting even-branes are the metrics of type IIA. The metrics of the intersecting odd-branes are the metrics of type IIB. For four intersecting D-branes, we obtain the following metric: \begin{eqnarray} (2,2,4,4) &=& \mbox{ {\scriptsize $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & - & - & - & - & - & - & - & \\ \times & - & - & \times & \times & - & - & - & - & - & \\ \times & \times & - & \times & - & \times & \times & - & - & - & \\ \times & - & \times & - & \times & \times & \times & - & - & - & . \end{array} \right. $} } \end{eqnarray} from $(2,2,5,5)_M$. 
Applying the T-duality transformation, we can obtain the following metrics : \begin{eqnarray*} (3,3,3,3) \leftrightarrow (2,2,4,4) \leftrightarrow (1,3,3,5) \leftrightarrow (0,4,4,4) . \end{eqnarray*} For two intersecting D-branes, we obtain the $(4,4)$ metrics: \begin{eqnarray*} (4,4) &=& \mbox{ {\scriptsize $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & \times & \times & - & - & - & - & - & \\ \times & \times & \times & - & - & \times & \times & - & - & - & , \end{array} \right. $} } \end{eqnarray*} from $(5,5)_M$. Applying the T-duality transformation, we can obtain following metrics : \begin{eqnarray*} (4,4) \leftrightarrow (3,5) \leftrightarrow (2,6) , \\ (3,3) \leftrightarrow (2,4) \leftrightarrow (1,5) , \\ (2,2) \leftrightarrow (1,3) \leftrightarrow (0,4) , \end{eqnarray*} in a non-intersecting direction, and \begin{eqnarray*} (4,4) \leftrightarrow (3,3) \leftrightarrow (2,2) , \end{eqnarray*} in an intersecting direction. We have assumed that these metrics have the three-dimensional transverse directions. Therefore all harmonic functions in these metrics only depend on $r = \sqrt{x^2+y^2+z^2}$. Note that our notation is different from~\cite{BB}. These metrics can also be obtained from the metrics of three intersecting D-branes. For example, we find the following relation: \begin{eqnarray*} (3,3,5) \to (3,3) , \end{eqnarray*} by assuming the third D5-brane's charge to vanish. For single D-brane, the T-duality transformation implies the following relation \begin{eqnarray*} (6) \leftrightarrow (5) \leftrightarrow \cdots \leftrightarrow (0) . \end{eqnarray*} In order to obtain non-extremal metrics, we modify the above metrics as follows~\cite{CT} : (1) We make the following replacements in the transverse space-time part of the metric: \begin{equation} dt^2 \to f(r) dt^2 , \quad dr^2 \to f^{-1} (r) dr^2 , \quad f(r) = 1 - \frac{\mu}{r} . \end{equation} We also use the harmonic functions, \begin{equation} H_i = 1 + \frac{{{\cal Q}}_i}{r} , \quad {{\cal Q}}_i= \mu\sinh^2 \delta_i , \end{equation} for the constituent D2-branes, and \begin{equation} H_i = 1 + \frac{{{\cal P}}_i}{r} , \quad {{\cal P}}_i= \mu\sinh^2 \gamma_i , \end{equation} for the constituent D4-branes. (2) In the expression for the field strength ${\cal F}_4$ of the three-form field, we make the following replacements: \begin{equation} H'_i \to H'_i = 1 + \frac{Q_i}{r + {{\cal Q}}_i - Q_i} =\big[ 1- \frac{Q_i}{r} H_i^{-1} \big]^{-1} , \quad Q_i =\mu\sinh\delta_i\cosh\delta_i , \end{equation} for the electric (D2-brane) part, and \begin{equation} H_i \to H'_i = 1 + \frac{P_i}{r} , \quad P_i =\mu\sinh\gamma_i\cosh\gamma_i\ , \end{equation} for the magnetic (D4-brane) part. Here $Q_i$ and $P_i$ are the electric and magnetic charges respectively. The BPS limit is realized when $\mu \to 0 ,\ \delta_i\to \infty$, and $\gamma_i\to \infty$, while the charges $Q_i$ and $P_i$ are kept fixed. In this case ${{\cal Q}}_i=Q_i$ and ${{\cal P}}_i =P_i$, so that $H'_i=H_i$. We note that the metric of two intersecting D-branes are not extremal in the BPS limit. (3) In the case when the solution has a null isometry, i.e. the intersecting D-branes have a common string along some direction $y$, one can add momentum along $y$ by applying the coordinate transformation \begin{equation} t'= \cosh \sigma \ t - \sinh \sigma \ y , \quad y'= - \sinh \sigma \ t + \cosh \sigma \ y , \label{sigma} \end{equation} to the non-extremal background which is obtained according to the above two steps. 
Then we obtain \begin{eqnarray*} - f(r) dt^2 + dy^2\ &\to& - f(r) dt'^2 + dy'^2 \\ &=& - dt^2 + dy^2 + \frac{\mu}{r} (\cosh \sigma \ dt - \sinh \sigma \ dy)^2, \end{eqnarray*} where $\sigma$ is a boost parameter. This transformation is called the boost transformation. \section{Surface Gravity and Compactification} In ten-dimensions~\cite{CT,Tsey,GKT}, the metrics have the compactification radii. The radii are usually taken to be arbitrary. However, for generic radii, we find that the Euler numbers of the metrics are non-integers. In this section, we consider the surface gravities of the compactified dimensions. We constrain the compactification radii using the surface gravity. As a result, we obtain the integer Euler numbers. We consider a way to constrain the radii in the compactified directions. We first consider the radius in the Euclidean time direction. We treat the Euclidean time coordinate as the polar angle in the $t-r$ directions. We define the deficit angle $\delta$ in the $t-r$ directions as \begin{eqnarray*} \frac{1}{2}\int_0^\beta dt \int_{r_H}^\infty dr \sqrt{g_{tt}g_{rr}} R_{(2)} = 2\pi - \delta, \end{eqnarray*} where $R_{(2)}$ is the scalar curvature in the $t-r$ directions. $\beta$ is the period of the time coordinate. $r_H$ is the radius of the event horizon, which satisfies $ g_{tt}\bigg|_{r = r_H} = 0 $. The metric in the $t-r$ directions is \begin{eqnarray*} ds_{(2)}^2 = g_{tt} dt^2 + g_{rr} dr^2. \end{eqnarray*} The scalar curvature $R_{(2)}$ and the extrinsic curvature $K$ are \begin{eqnarray*} R_{(2)} = - \frac{1}{\sqrt{g_{tt}g_{rr}}} \partial_r\bigg[\frac{1}{2}\frac{\partial_r g_{tt}} {\sqrt{g_{tt}g_{rr}}}\bigg], \quad K = \frac{1}{\sqrt{g_{rr}}} \bigg[\frac{1}{2}\frac{\partial_rg_{tt}}{g_{tt}}\bigg] . \end{eqnarray*} We define the Euler number $\chi$ from the Gauss-Bonnet theorem as \begin{eqnarray} 2\pi \chi &=& \frac{1}{2}\int_0^\beta dt \int_{r_H}^\infty dr \sqrt{g_{tt}g_{rr}}R_{(2)} - \int_0^{\beta} dt \sqrt{g_{tt}}K \bigg|_{r=\infty} \nonumber \\ &=& \frac{\beta}{2}\frac{\partial_r g_{tt}} {\sqrt{g_{tt}g_{rr}}} \bigg|_{r=r_H} . \label{chi} \end{eqnarray} If no singular effect of the event horizon exists, then the topology in the $t-r$ directions is ${\bf R^2}$ and the Euler number is 1. Therefore, from (\ref{chi}) the radius $\beta$ in the time direction is \begin{eqnarray*} \beta = \frac{2\pi}{\kappa_t}, \quad \kappa_t &\equiv& \frac{1}{2}\frac{\partial _r g_{tt}} {\sqrt{g_{rr}g_{tt}}}\bigg|_{r = r_H}, \end{eqnarray*} where $\kappa_t$ is the surface gravity and $r_H$ is the radius of the black hole event horizon. We then obtain that the deficit angle is $0$ and the Euler number is 1. Thus, if we use the radius $\beta$ in the time direction, the singular effect of the event horizon $r_H$ is absent. Therefore we take this radius in the time direction to obtain the smooth geometry around the event horizon. For example, we find that the Euler numbers of the Schwarzschild metric, the Kerr-Newman metric, and the $U(1)$ dilaton metric are integers by using the relation between the surface gravity and the compactification radius (the inverse temperature). We extend this idea to the ten-dimensional manifolds in order to obtain the integer Euler numbers. We next consider the compactification radii of the black holes in ten-dimensions. We treat the black holes which have six compactified directions. These compactified directions are treated in the same way as the Euclidean time direction, so the topology is ${\bf R^8} \times {\bf S^2}$. 
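As a quick cross-check of the relation $\beta = 2\pi/\kappa_t$ before turning to the compactified directions, the following short symbolic computation reproduces the familiar period $\beta = 8\pi M$ of the Euclidean Schwarzschild metric (a sketch only; the choice of the Schwarzschild example, units with $G=1$, and the use of {\tt sympy} are ours and not part of the original construction): \begin{verbatim}
import sympy as sp

r, M = sp.symbols('r M', positive=True)
g_tt = 1 - 2*M/r            # Euclidean Schwarzschild, G = 1
g_rr = 1/(1 - 2*M/r)

# surface gravity kappa_t = (1/2) d_r g_tt / sqrt(g_rr g_tt), evaluated at r_H = 2M
kappa = sp.simplify(sp.Rational(1, 2)*sp.diff(g_tt, r)/sp.sqrt(g_rr*g_tt))
kappa_H = sp.simplify(kappa.subs(r, 2*M))   # -> 1/(4*M)
beta = sp.simplify(2*sp.pi/kappa_H)         # -> 8*pi*M
print(kappa_H, beta)
\end{verbatim}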
The metric $g_{ii}$ of the compactified directions does not depend the variables of the compactified directions. Therefore we can choose one of the compactified directions and consider the Euler numbers in the $i-r$ directions partly. We consider the compactification radii in the $i-r$ directions, where $i$ is one of the compactified directions. The topologies of the $i-r$ directions are ${\bf R^2}$. We define the deficit angle $\delta_i$ in the $i-r$ directions as \begin{eqnarray*} \frac{1}{2} \int_0^{\beta_i} dx^i \int_{r_H}^\infty dr \sqrt{g_{ii}g_{rr}} R_{(2)} = 2\pi - \delta_i, \end{eqnarray*} where $R_{(2)}$ is the scalar curvature in the $i-r$ directions. $\beta_i$ are the periods of the $i$-th coordinate. The metric in the $i-r$ directions is \begin{eqnarray*} ds_{(2)}^2 = g_{ii} dt^2 + g_{rr} dr^2. \end{eqnarray*} The scalar curvature $R_{(2)}$ and the extrinsic curvature $K$ are \begin{eqnarray*} R_{(2)} = - \frac{1}{\sqrt{g_{ii}g_{rr}}} \partial_r\bigg[\frac{1}{2}\frac{\partial_r g_{ii}} {\sqrt{g_{ii}g_{rr}}}\bigg], \quad K = \frac{1}{\sqrt{g_{rr}}} \bigg[\frac{1}{2}\frac{\partial_rg_{ii}}{g_{ii}}\bigg] . \end{eqnarray*} We define the Euler number $\chi$ from the Gauss-Bonnet theorem as \begin{eqnarray} 2\pi \chi &=& \frac{1}{2}\int_0^{\beta_i} dx^i \int_{r_H}^\infty dr \sqrt{g_{ii}g_{rr}}R_{(2)} - \int_0^{\beta_i} dx^i \sqrt{g_{ii}}K \bigg|_{r=\infty} \nonumber \\ &=& \frac{\beta}{2}\frac{\partial_r g_{ii}} {\sqrt{g_{ii}g_{rr}}} \bigg|_{r = r_H} . \label{chi2} \end{eqnarray} If no singular effect of the event horizon exists, then the topology in $i-r$ directions is ${\bf R^2}$, and the Euler number is 1. Therefore, from (\ref{chi2}) we suppose that the compactification radius $\beta_i$ in the $i$-th direction is defined by the following "surface gravity $\kappa_i$": \begin{eqnarray*} \beta_i(r_H) &=& \frac{2\pi}{\kappa_i}, \quad \kappa_{i}(r_H) \equiv \frac{1}{2}\frac{\partial _r g_{ii}} {\sqrt{g_{rr}g_{ii}}}\bigg|_{r = r_H}, \\ && (i=1 \cdots 6) . \end{eqnarray*} We obtain that the vanishing deficit angles in the compactified directions. Therefore we use this radius $\beta_i$ in the compactified directions to obtain the smooth geometry around the event horizon. We obtain the integer Euler numbers for the ten-dimensional manifolds using these compactification radii. For example, let us consider the following metric: \begin{eqnarray*} (4,4) &=& \mbox{ {\scriptsize $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & \times & \times & - & - & - & - & - & \\ \times & \times & \times & - & - & \times & \times & - & - & - & . \end{array} \right. $} } \end{eqnarray*} In the non-extremal version of this metric, the event horizon is located at \begin{eqnarray*} r_H = \mu . \end{eqnarray*} In the third direction, the compactification radius is \begin{eqnarray*} \beta_{3}(r_H) = 4 \pi (\mu + {\cal M}_1)(\mu + {\cal M}_2)^{3/2}/({\cal M}_1-{\cal M}_2)\sqrt{\mu} . \\ \end{eqnarray*} Using this definition, we indeed find that the Euler number of this example to be an integer as follows: \begin{eqnarray*} \chi &=& \frac{1}{(4\pi)^{10} 10!} \int dx^{10} \epsilon^{abcdefghij}R_{ab}R_{cd}R_{ef}R_{gh}R_{ij} \\ &=& \frac{1}{(4\pi)^{10}} \int_{r_H}^{\infty} dr \int_0^{\beta_t(r)} dt \int_0^{2\pi} d\phi \int_0^{\pi} d\theta \sin \theta \\ && \times \prod_{i=1}^{6} \int_0^{\beta_i(r)} dx_i R_{tr}R_{\theta\phi}R_{12}R_{34}R_{56} \\ &=& 2 . 
\end{eqnarray*} We further obtain the same Euler number for all the other metrics which are related to this metric by T-duality as it will be explained in the next section. We emphasize again that the Euler numbers are non-integers if we consider the compactification radii as arbitrary constants. \section{Black Hole Entropies and T-duality} In this section, we study the entropies of the intersecting D-branes using the compactification radii as defined in the previous section. The entropies are calculated semiclassically by using $S=A/4G$ in this section. Here $A$ is the area of the event horizon, and $G$ is the Newton's constant. We obtain the entropies which are T-duality invariant. In the BPS limits ($\mu \to 0 $), we obtain the finite and non-vanishing entropies only for two intersecting D-branes. They are found to be proportional to the product of the charges of each D-brane. We further study the entropies of boosted metrics. We obtain the relations between the entropies and the internal momenta of the intersecting D-branes. These relations are the same as that of the microscopic D-brane picture~\cite{SV,HS,MS}. We first study the entropies of the metric with the boost parameter $\sigma = 0$ in (\ref{sigma}). We define the proper length in the $i$-th direction as: \begin{eqnarray*} L_i(r) &\equiv& \bigg|\int_{0}^{\beta_{i}(r)}\sqrt{g_{ii}}dx_i \bigg| = 4\pi \bigg| \frac{\sqrt{g_{rr}}}{\partial_r (\ln g_{ii})} \bigg| , \\ && ( i= 1,\cdots, 6 ) . \end{eqnarray*} Using these definitions, the entropies of the black holes are \begin{eqnarray} S = A_8/4G_{10} \bigg|_{r=\mu} = L_1L_2L_3L_4L_5L_6A_{\theta\phi}/4G_{10} \bigg|_{r=\mu}. \end{eqnarray} Here $A_8$ is the area of the event horizon, $A_{\theta\phi}$ is the area in the $\theta-\phi$ directions. For example, we consider the following metric : \begin{eqnarray*} (4,4) &=& \mbox{ {\scriptsize $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & \times & \times & - & - & - & - & - & \\ \times & \times & \times & - & - & \times & \times & - & - & - & . \end{array} \right. $} } \end{eqnarray*} In the third direction, the proper length is \begin{eqnarray*} L_3 = \int_{0}^{\beta_{3}}\sqrt{g_{33}}dx_3 = 4 \pi (r + {\cal M} _1)^{5/4} (r + {\cal M} _2)^{5/4} /({\cal M} _1-{\cal M} _2)\sqrt{r}, \end{eqnarray*} where ${\cal M}_i$ are ${\cal Q}_i$ or ${\cal P}_i$. With this definition, we find the following relations : \begin{eqnarray} L_3 = L_4 = L_5 = L_6 &=& 4\pi(r + {\cal M}_1)^{5/4} (r + {\cal M}_2)^{5/4}/({\cal M}_1-{\cal M}_2)\sqrt{r} \nonumber \\ &\equiv& L_a, \nonumber \\ L_1 = L_2 &=& 4 \pi (r + {\cal M}_1)^{1/4} (r + {\cal M}_2)^{1/4}\sqrt{r} \nonumber \\ &\equiv& L_b. \label{pre} \end{eqnarray} Let us consider the proper lengths of the metrics which are dual to each other. If we apply the T-duality transformation to the fourth-direction of (4,4) type metric, we obtain the (3,5) type metric, \begin{eqnarray*} (3,5) &=& \mbox{ {\scriptsize $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & \times & - & - & - & - & - & - & \\ \times & \times & \times & - & \times & \times & \times & - & - & - & . \end{array} \right. $} } \end{eqnarray*} For this metric, we also obtain the proper lengths which are the same with (\ref{pre}). Then the entropies of (3,5) type metrics are the same with (4,4) type metrics. We also obtain the identical proper lengths for all the other metrics which are dual to each other. Therefore the entropies of the T-dual metrics are the same. 
We can also obtain the analogous relations with the different numbers of the D-branes. The area in the $\theta-\phi$ directions is \begin{eqnarray*} A_{\theta\phi} = \int_{0}^{\pi} d\theta \int_{0}^{2\pi} d\phi \sin\theta \sqrt{g_{\theta\theta}g_{\phi\phi}} = 4 \pi (r + {\cal M}_1)^{1/2} (r + {\cal M}_2)^{1/2}r. \end{eqnarray*} Then we find that the entropy of two intersecting D-branes is given by \begin{eqnarray*} S &=& A_8/4G_{10} \bigg|_{r=\mu} = L_a^4L_b^2A_{\theta\phi}/4 \bigg|_{r=\mu} \\ &=& (4\pi)^7(\mu+{\cal M}_1)^8(\mu+{\cal M}_2)^8 \\ && /\bigg[({\cal M}_1-{\cal M}_2)^4[\mu({\cal M}_1+{\cal M}_2)+2{\cal M}_1{\cal M}_2]^2 \bigg] , \end{eqnarray*} where $G_{10}$ is the ten-dimensional Newton's constant. For three intersecting D-branes, the entropy is given by \begin{eqnarray*} S &=& A_8/4G_{10} \bigg|_{r=\mu} \\ &=& (4\pi)^7((\mu+{\cal M}_1)^8(\mu+{\cal M}_2)^8(\mu+{\cal M}_3)^8\mu^2 \\ && /\bigg[[\mu^2({\cal M}_1+{\cal M}_2-{\cal M}_3) + 2\mu {\cal M}_1{\cal M}_2 +{\cal M}_1{\cal M}_2{\cal M}_3] \\ && \times [\mu^2({\cal M}_2+{\cal M}_3-{\cal M}_1) + 2\mu {\cal M}_2{\cal M}_3 +{\cal M}_1{\cal M}_2{\cal M}_3] \\ && \times [\mu^2({\cal M}_3+{\cal M}_1-{\cal M}_2) + 2\mu {\cal M}_3{\cal M}_1 +{\cal M}_1{\cal M}_2{\cal M}_3] \bigg]^2 . \end{eqnarray*} For four intersecting D-branes, the entropy is \begin{eqnarray*} S &=& A_8/4G_{10} \bigg|_{r=\mu} \\ &=& (4\pi)^7((\mu+{\cal M}_1)^8(\mu+{\cal M}_2)^8(\mu+{\cal M}_3)^8(\mu+{\cal M}_4)^8 \\ && /\mu^6\bigg[ [\mu^2({\cal M}_1+{\cal M}_2-{\cal M}_3-{\cal M}_4) +2\mu({\cal M}_1{\cal M}_2-{\cal M}_3{\cal M}_4) \\ && \ + ({\cal M}_1+{\cal M}_2){\cal M}_3{\cal M}_4-({\cal M}_3+{\cal M}_4){\cal M}_1{\cal M}_2] \\ && \times [\mu^2({\cal M}_1+{\cal M}_3-{\cal M}_2-{\cal M}_4) +2\mu({\cal M}_1{\cal M}_3-{\cal M}_2{\cal M}_4) \\ && \ + ({\cal M}_1+{\cal M}_3){\cal M}_2{\cal M}_4-({\cal M}_2+{\cal M}_4){\cal M}_1{\cal M}_3] \\ && \times [\mu^2({\cal M}_1+{\cal M}_4-{\cal M}_2-{\cal M}_3) +2\mu({\cal M}_1{\cal M}_4-{\cal M}_2{\cal M}_3) \\ && \ + ({\cal M}_1+{\cal M}_4){\cal M}_2{\cal M}_3-({\cal M}_2+{\cal M}_3){\cal M}_1{\cal M}_4] \bigg] ^2 . \\ \end{eqnarray*} We also list the entropy formula for single brane: \begin{eqnarray*} S = A_8/4G_{10} \bigg|_{r=\mu} = (4\pi)^7\mu^6(\mu+{\cal M}_1)^8/{\cal M}_1^6 . \end{eqnarray*} If we consider the BPS limit (namely $\mu \to 0$), we can obtain the finite and non-vanishing entropies only for the metrics of two intersecting D-branes. In order to interpret these entropies, we define the quantized D-brane charges in the wrapping directions except for the intersecting directions. They are the integer numbers. In the case of $(4,4)$ case, we define the quantized magnetic charges of the D4-branes as \begin{eqnarray*} N_1 &=& \frac{L_a^2}{4\pi} \int {\cal F}_{\theta\phi12} d\Omega \bigg|_{r=1} \\ &=& (4\pi)^2{\cal M}_1({\cal M}_1{\cal M}_2)^{5/2}/({\cal M}_1-{\cal M}_2)^2, \\ N_2 &=& \frac{L_a^2}{4\pi} \int {\cal F}_{\theta\phi12} d\Omega \bigg|_{r=1} \\ &=& (4\pi)^2{\cal M}_2({\cal M}_1{\cal M}_2)^{5/2}/({\cal M}_1-{\cal M}_2)^2 . \end{eqnarray*} Using these charges, we find the entropies as \begin{eqnarray} S = (4\pi)^3 N_1 N_2 /4 . \label{44} \end{eqnarray} In the T-dual cases of this metric, (3,5) and (2,6), we define \begin{eqnarray*} N = \left\{ \begin{array}{ll} \frac{L_a^2}{4\pi}\int *{\cal F} & \mbox{(electric)}, \\ \frac{L_a^2}{4\pi}\int {\cal F} & \mbox{(magnetic)}, \end{array} \right. \end{eqnarray*} where $\ast$ is Hodge dual in four-dimensions. 
Then we find that the entropies for all metrics of two intersecting D-branes as the same with (\ref{44}). Therefore we observe that the entropies of two intersecting D-branes are the product of the quantized charges of each D-brane. Next we consider the case of the large boost parameter ($\sigma \gg 1$). In this case, the metric becomes \begin{eqnarray*} - f(r) dt^2 + dy^2\ &\to& - f(r) dt'^2 + dy'^2 \\ &=& - dt^2 + dy^2 + \frac{\mu}{r} (\cosh \sigma \ dt - \sinh \sigma \ dy)^2 \\ &\sim& - dt^2 + dy^2 + \frac{\mu\cosh^2 \sigma}{r} (dt-dy)^2 . \end{eqnarray*} Here we have introduced $\mu' = \mu \cosh^2 \sigma$. Then the event horizon is at $r_H= \mu'$. The proper length in the $y$- direction is \begin{eqnarray*} L_y = 4\pi[({\cal M}_1+r)({\cal M}_2+r)]^{5/4} \sqrt{r} /(2r^2-2{\cal M}_1{\cal M}_2) . \end{eqnarray*} For $(3,3), (2,4)$ and $(1,5)$ cases, we find the following entropy formula as \begin{eqnarray*} S = L_a^4L_bL_yA_{\theta\phi}/4\bigg|_{r=\mu'}. \end{eqnarray*} In the $(4,4), (3,5)$ and $(2,6)$ cases, we have two isometric directions. If we consider the case that only one of these directions is boosted, the entropy is given by \begin{eqnarray*} S = L_a^4L_bL_yA_{\theta\phi}/4\bigg|_{r=\mu'}. \end{eqnarray*} We may further consider the case that the metric is boosted in two different directions. Let us assume that the $y_2$ direction is boosted first and then the $y_1$ direction is boosted. After such a process, we obtain the following metric as \begin{eqnarray*} - f(r) dt^2 + dy_1^2 + dy_2^2 \ &\to& - f(r) dt'^2 + dy_1'^2 + dy_2'^2 \\ &=& - dt^2 + dy_1^2 + dy_2^2 \\ && + \frac{\mu}{r} (\cosh^2 \sigma_1\cosh^2 \sigma_2\ dt - \sinh \sigma_1\ dy_1 - \cosh^2 \sigma_1\sinh^2 \sigma_2 dy_2)^2 \\ &\sim& - dt^2 + dy_1^2 +dy_2^2 + \frac{\mu\cosh^2 \sigma_1 \cosh^2 \sigma_2}{r} (dt-dy_2)^2. \end{eqnarray*} Therefore we obtain the same metric with the boost in the single direction. As a result, we find that the entropies of the boosted metrics are the same. According to the definitions in~\cite{SV,HS,MS}, we define the quantized internal momentum $p$ and the charge $n$ in the direction of $y$. Here $n$ is a number of the state level of the effective conformal field theory in the D-brane picture. The momentum $p$ is quantized in term of the surface area of the event horizon $A_8$. We define the charge and the momentum as \begin{eqnarray*} n &=& L_y p ,\\ n &=& \int dA_8 \mu'/r \bigg|_{r=\mu'}\\ &=& L_a^4L_bL_yA_{\theta\phi} \\ &\sim& ({\cal M}_1{\cal M}_2)^6 /({\cal M}_1-{\cal M}_2)^4. \end{eqnarray*} Using these quantities, we obtain the entropies in the BPS limit with $\sigma \gg 1$ as follows, \begin{eqnarray*} S \sim \sqrt{N_1N_2n}. \end{eqnarray*} This entropy formula of the boosted metrics is the same with that of the microscopic D-brane picture~\cite{SV,HS,MS}. \section{Conclusion} We have studied the Euler numbers and the entropies of the non-extremal black holes which are constructed by the intersecting D-branes in ten-dimensions. The Euler numbers are generally integers. In ten-dimensions, the metrics have the arbitrary compactification radii. However for the generic radii, the Euler numbers of the metrics are non-integers. To avoid these difficulties, we need the way to constrain the compactification radii. The another physical reasoning that we need to fix the compactification radii is in order to avoid the singular effects of the horizon in the compactified directions. We have discussed this point in the section 3 . 
We recall the way to define the period in the time direction. If no singular effects of the horizon exist in the $t-r$ directions, then the topology of these directions is ${\bf R^2}$, and the Euler number of these directions is 1. Using the Gauss-Bonnet theorem, we find that the Euclidean time coordinate has a period which is the inverse of the surface gravity. Therefore we need to take this period in the time direction to avoid the singular effects of the horizon. Similarly, we need to fix the compactification radii to avoid the singular effects of the horizon in the compactified directions. Using the Gauss-Bonnet theorem, we find that the compactification radii must be the inverses of the surface gravities in the compactified directions. We identify the inverses of the surface gravities with the compactification radii in order to obtain the integer Euler numbers. The entropies of the black holes are T-duality invariant when we use the compactification radii which are defined by the surface gravities. In the BPS limit, we have the finite and non-zero entropies only with two intersecting D-branes. We have introduced the quantized D-brane charges $N_1$ and $N_2$. We find the common relation of these entropies as $S \sim N_1N_2$ when the boost parameter vanishes. Therefore we observe that the entropies are proportional to the product of the charges of each D-brane. In the case of the large boost parameter, we also obtain the entropy formula for the black holes constructed by two intersecting D-branes. The entropies of the boosted black holes are also T-duality invariant. We have introduced the quantum number $n$. Here $n$ is the number of the state level of the effective conformal field theory in the D-brane picture. The entropies are $S \sim \sqrt{N_1N_2n}$. The entropies of the boosted black holes can be interpreted as the entropies of the microscopic states in the D-brane picture. We need to consider the microscopic interpretation of the entropies in association with the non-boosted black holes in the D-brane picture. The entropies of the non-boosted black holes reduce to $S \sim N_1N_2$. The black holes have no momenta $n$ in this limit. On the other hand, the entropies in the microscopic D-brane picture $S \sim \sqrt{N_1N_2n}$ are obtained for the large momenta. Therefore the entropy formula for the non-boosted black holes could be entirely different from that for the boosted black holes, which agrees with the microscopic D-brane picture. The microscopic entropies in the D-brane picture with small or zero momenta have not been discussed so far. We further need to study the microscopic interpretation of the entropies with the momenta $n \sim 0$ in the D-brane picture. We have proposed a method to constrain the compactification radii of the non-extremal black holes of the intersecting D-branes. We have identified the inverses of the surface gravities with the compactification radii. We have correctly obtained the integer Euler numbers of the black holes. Although we do not know another way to constrain the compactification radii, we have not shown that this way to constrain the compactification radii is unique. Therefore another way to constrain the compactification radii might be found. We further need to study the way and the physical reasoning to constrain the compactification radii. \acknowledgements We thank Y. Kitazawa for discussions and for carefully reading the manuscript and suggesting various improvements.
\section{Introduction} In a cold dark matter (CDM) universe, dark matter haloes form hierarchically through accretion and merging (for a recent review, see \citet{Frenk2012}). Many rigorous and reliable predictions for the halo mass function and subhalo mass functions in CDM are provided by numerical simulations \citep[e.g.][]{springel2009, Gao2011, Colin2000, Hellwing2015, Bose2016}. When small haloes merge into larger systems, they become subhaloes and suffer from environmental effects such as tidal stripping, tidal heating and dynamical friction that tend to remove mass from them and can even disrupt them \citep{Tormen1998, Taffoni2003, Diemand2007, Hayashi2003, gao2004, springel2009, xie2015}. At the same time, the satellite galaxies that reside in the subhaloes also experience environmental effects. Tidal stripping and ram pressure can remove the hot gas haloes of satellite galaxies, which in turn cuts off their supply of cold gas and quenches star formation \citep{Balogh2000,Kawata2008,McCarthy2008,Wang2007,Guo2011,Wetzel2013}. Satellite galaxies in some cases also experience a mass loss in the cold gas component and stellar component during the interaction with the host haloes \citep{Gunn1972,Abadi1999,Chung2009, Mayer2001, Klimentowski2007, Kang2008, Chang2013}. Overall, the subhaloes preferentially lose their dark mass rather than the luminous mass, because the mass distribution of satellite galaxies is much more concentrated than that of the dissipationless dark matter particles. Simulations predict that the mass loss of infalling subhaloes depends inversely on their halo-centric radius \citep[e.g.][]{Springel2001, DeLucia2004, gao2004, xie2015}. Thus, the halo mass to stellar mass ratio of satellite galaxies should increase as a function of halo-centric radius. Mapping the mass function of subhaloes from observations can provide important constraints on this galaxy evolution model. In observations, dark matter distributions are best measured with gravitational lensing. For dark matter subhalos, however, such observations are challenging due to their relatively low mass compared to that of the host dark matter halo. The presence of subhalos can cause flux-ratio anomalies in multiply imaged lensing systems \citep{Mao1998,Metcalf2001,Mao2004,xu2009, Nierenberg2014}, perturb the image positions and change the number of images \citep{kneib1996,Kneib2011}, and disturb the surface brightness of extended Einstein rings/arcs \citep{Koopmans2005,Vegetti2009a,Vegetti2009b,Vegetti2010,Vegetti2012}. However, due to the limited number of high quality images and the rareness of strong lensing systems, only a few subhalos have been detected through strong lensing observations. Moreover, strong lensing effects can only probe the central regions of dark matter haloes \citep{Kneib2011}. Therefore, through strong lensing alone, it is difficult to draw a comprehensive picture of the co-evolution between subhalos and galaxies. Subhalos can also be detected in individual clusters through weak gravitational lensing, or through weak lensing combined with strong lensing \citep[e.g.][]{Natarajan2007,Natarajan2009, Limousin2005,Limousin2007,Okabe2014}. In \citet{Natarajan2009}, Hubble Space Telescope images were used to investigate the subhalo masses of $L^*$ galaxies in the massive lensing cluster Cl0024+16 at $z = 0.39$, and to study the subhalo mass as a function of halo-centric radius. \citet{Okabe2014} investigated subhalos in the very nearby Coma cluster with imaging from the Subaru telescope. 
The deep imaging and the large apparent size of the cluster allowed them to measure the masses of subhalos selected by shear alone. They found 32 subhalos in the Coma cluster and measured their mass function. However, this kind of study requires very high quality images of massive nearby clusters, making it hard to extend such studies to large numbers of clusters. A promising alternative way to investigate the satellite-subhalo relation is through a stacking analysis of galaxy-galaxy lensing with large surveys. Different methods have been proposed in previous studies \citep[e.g.][]{Yang2006, Li2013,Pastor2011,Shirasaki2015}. Although the tangential shear generated by a single subhalo is small, by stacking thousands of satellite galaxies the statistical noise can be suppressed and the mean projected density profile around satellite galaxies can be measured. \citet{Li2014} selected satellite galaxies in the SDSS group catalog of \citet{Yang2007} and measured the weak lensing signal around these satellites with a lensing source catalog derived from the CFHT Stripe82 Survey \citep{Comparat2013}. This was the first galaxy-galaxy lensing measurement of subhalo masses in galaxy groups. However, the uncertainties of the measured subhalo masses were too large to investigate the satellite-subhalo relation as a function of halo-centric radius. In this paper, we apply the same method as \citet{Li2013,Li2014} to measure the galaxy-galaxy lensing signal for satellite galaxies in the SDSS redMaPPer cluster catalog \citep{Rykoff2014, Rozo2014}. Unlike the group catalog of \citet{Yang2007}, which is constructed using SDSS spectroscopic galaxies, the redMaPPer catalog relies on photometric cluster detections, allowing it to go to higher redshifts. As a result, there are more massive clusters in the redMaPPer cluster catalog. Therefore, we expect the signal-to-noise ratio of the satellite galaxy lensing signals to be higher, enabling us to derive better constraints on subhalo properties. The paper is organized as follows. In section \ref{sec:data}, we describe the lens and source catalogs. In section \ref{sec:model}, we present our lens model. In section \ref{sec:res}, we show our observational results and our best-fit lens model. The discussions and conclusions are presented in section \ref{sec:sum}. Throughout the paper, we adopt a $\Lambda$CDM cosmology with parameters given by the WMAP-7-year data \citep{komatsu2010} with $\Omega_L=0.728$, $\Omega_{M}=0.272$ and $h \equiv H_0/(100 {\rm km s^{-1} Mpc^{-1}}) = 0.73$. In this paper, stellar mass is estimated assuming a \cite{Chabrier2003} IMF. \section{Observational Data} \label{sec:data} \subsection{Lens selection and stellar masses} \label{sec:lens_select} We use satellite galaxies in the redMaPPer clusters as lenses. The redMaPPer cluster catalog is extracted from photometric galaxy samples of the SDSS Data Release 8 \citep[DR8,][]{Aihara2011} using the red-sequence Matched-filter Probabilistic Percolation cluster finding algorithm \citep{Rykoff2014}. The redMaPPer algorithm uses the 5-band $(ugriz)$ magnitudes of galaxies with a magnitude cut $i<21.0$ over a total area of 10,000 deg$^2$ to photometrically detect galaxy clusters. redMaPPer uses a multi-color richness estimator $\lambda$, defined to be the sum of the membership probabilities over all galaxies. In this work, we use clusters with richness $\lambda>20$ and photometric redshift $z_{\rm cluster}<0.5$. In the overlapping region with the CFHT Stripe-82 Survey, we have a total of 634 clusters. 
For each redMaPPer cluster, member galaxies are identified according to their photometric redshift, color and their cluster-centric distance. To reduce the contamination induced by fake member galaxies, we only use satellite galaxies with membership probability $P_{\rm mem}>0.8$. The redMaPPer cluster finder identifies 5 central galaxy candidates for each cluster, each with an estimate of the probability $P_{\rm cen}$ that the galaxy in question is the central galaxy of the cluster. We remove all central galaxy candidates from our lens sample. For more details about redMaPPer cluster catalog, we refer the readers to \citet{Rykoff2014, Rozo2014}. Stellar masses are estimated for member galaxies in the redMaPPer catalog using the Bayesian spectral energy distribution (SED) modeling code {\tt iSEDfit} \citep{Moustakas2013}. Briefly, {\tt iSEDfit} determines the posterior probability distribution of the stellar mass of each object by marginalizing over the star formation history, stellar metallicity, dust content, and other physical parameters which influence the observed optical/near-infrared SED. The input data for each galaxy includes: the SDSS $ugriz$ {\tt model} fluxes scaled to the $r$-band {\tt cmodel} flux; the 3.4- and 4.6-$\micron$ ``forced" WISE \citep{Wright2010} photometry from \citet{Lang2014}; and the spectroscopic or photometric redshift for each object inferred from redMaPPer. We adopt delayed, exponentially declining star formation histories with random bursts of star formation superposed, the Flexible Stellar Population Synthesis (FSPS) model predictions of \citet{Conroy2009, Conroy2010}, and the \citet{Chabrier2003} initial mass function (IMF) from 0.1-100 $M_{\odot}$. For reference, adopting the \citet{Salpeter1955} IMF would yield stellar masses which are on average 0.25 dex (a factor of 1.8) larger. We apply a stellar mass cut of $M_{\rm star}>10^{10} \rm {h^{-1}M_{\odot}}$ to our satellite galaxy sample. In Fig.\ref{fig:mz}, we show the $M_{\rm star}$--$z_l$ distribution for our lens samples, where $z_l$ is the photometric redshift of the satellite galaxy assigned by the redMaPPer algorithm. The low stellar mass satellite galaxies are incomplete at higher redshift, but they will not affect the conclusion of this paper. \begin{figure} \includegraphics[width=0.4\textwidth]{fig1.pdf} \caption{The $M_{\rm star}$--$z_l$ distribution of lens galaxies, where $z_l$ is the photometric redshift, and $M_{\rm star}$ is in units of $M_{\odot}$. } \label{fig:mz} \end{figure} \subsection{The Source Catalog} The source catalog is measured from the Canada--France--Hawaii Telescope Stripe 82 Survey (CS82), which is an $i-$band imaging survey covering the SDSS Stripe82 region. With excellent seeing conditions --- FWHM between 0.4 to 0.8 arcsec --- the CS82 survey reaches a depth of $i_{\rm AB}\sim24.0$. The survey contains a total of 173 tiles, 165 of which from CS82 observations and 8 from CFHT-LS Wide \citep{Erben2013}. The CS82 fields were observed in four dithered observations with 410s exposure. The 5$\sigma$ limiting magnitude is $i_{\rm AB}\sim 24.0$ in a 2 arcsec diameter aperture. Each CFHTLenS science image is supplemented by a mask, indicating regions within which accurate photometry/shape measurements of faint sources cannot be performed, e.g. due to extended haloes from bright stars. According to \citet{Erben2013}, most of the science analysis are safe with sources with MASK$\le1$. 
After applying all the necessary masks and removing overlapping regions, the effective survey area reduces from 173 deg$^2$ to 129.2 deg$^2$. We also require source galaxies to have FITCLASS $=$ 0, where FITCLASS is the flag that describes the star/galaxy classification. Source galaxy shapes are measured with the lensfit method \citep{Miller2007, Miller2013}, closely following the procedure in \citet{Erben2009, Erben2013}. The shear calibration and systematics of the lensfit pipeline are described in detail in \citet[][]{Heymans2012}. The specific procedures that are applied to the CS82 imaging are described in Erben et al. (2015, in preparation). Since the CS82 survey only provides the $i$-band images, the CS82 collaboration derived source photometric redshifts using the $ugriz$ multi-color data from the SDSS co-add \citep{Annis2014}, which reaches roughly 2 magnitudes deeper than the single epoch SDSS imaging. The photometric redshifts (photo-$z$) of the background galaxies were estimated using a Bayesian photo-$z$ code \citep[BPZ,][]{Benitez2000, Bundy2015}. The effective weighted source galaxy number density is 4.5 per arcmin$^{2}$. Detailed systematic tests for this weak lensing catalog are described in Leauthaud et al. 2015 (in prep). \subsection{Lensing Signal Computation} In a galaxy-galaxy lensing analysis, the excess surface mass density, $\Delta\Sigma$, is inferred by measuring the tangential shear $\gamma_t(R)$: \begin{equation}\label{eq:ggl} \Delta\Sigma(R)=\gamma_t(R)\Sigma_{\rm crit}={\overline\Sigma}(<R)-\Sigma(R)\, , \end{equation} where ${\overline\Sigma}(<R)$ is the mean surface mass density within $R$, $\Sigma(R)$ is the average surface density at the projected radius $R$, and $\Sigma_{\rm crit}$ is the lensing critical surface density \begin{equation} \Sigma_{\rm crit}=\frac{c^2}{4\pi G}\frac{D_s}{D_l D_{ls}}\, , \end{equation} where $D_{ls}$ is the angular diameter distance between the lens and the source, and $D_l$ and $D_s$ are the angular diameter distances from the observer to the lens and to the source, respectively. We select satellite galaxies as lenses and stack lens-source pairs in physical radial distance $R$ bins from $0.04$ to $1.5\, {\rm h^{-1}Mpc}$. To avoid contamination from foreground galaxies, we remove lens-source pairs with $z_{s}-z_{l} < 0.1$, where $z_s$ and $z_l$ are the source and lens redshifts, respectively. We have also tested the robustness of our results by varying the selection criteria for source galaxies. We find that selecting lens-source pairs with $z_{s}-z_{l} > 0.05$ or $z_{s}-z_{l} > 0.15$ only changes the final lensing signal by less than 7\%, well below our final errors. For a given set of lenses, $\Delta\Sigma(R)$ is estimated using \begin{equation} \Delta\Sigma(R)=\frac{\sum_{l}\sum_{s}w_{ls}\gamma_t^{ls}\Sigma_{\rm crit}(z_l,z_s)}{\sum_{l}\sum_{s}w_{ls}}\, , \end{equation} where \begin{equation} w_{ls}=w_n\Sigma_{\rm crit}^{-2}(z_l,z_s)\, \end{equation} and $w_n$ is a weight factor defined by Eq.\, (8) in \citet{Miller2013}. $w_n$ is introduced to account for the intrinsic ellipticity distribution and shape measurement uncertainties. In the lensfit pipeline, a calibration factor for the multiplicative error $m$ is estimated for each galaxy based on its signal-to-noise ratio and size. 
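Before turning to the multiplicative-bias correction described next, the weighted estimator above can be assembled schematically as in the following sketch (an illustration only: the cosmological parameter values follow the paper, but the function names and array layout are our own and this is not the CS82 pipeline): \begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u
from astropy import constants as const

cosmo = FlatLambdaCDM(H0=73.0, Om0=0.272)   # WMAP7-like values used in the paper

def sigma_crit(z_l, z_s):
    """Critical surface density c^2/(4 pi G) * D_s/(D_l * D_ls), in Msun/Mpc^2."""
    D_l = cosmo.angular_diameter_distance(z_l)
    D_s = cosmo.angular_diameter_distance(z_s)
    D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)
    sig = const.c**2 / (4*np.pi*const.G) * D_s / (D_l * D_ls)
    return sig.to(u.Msun / u.Mpc**2).value

def stacked_delta_sigma(gamma_t, w_n, z_l, z_s):
    """Weighted estimator sum(w_ls*gamma_t*Sigma_crit)/sum(w_ls), with
    w_ls = w_n * Sigma_crit^(-2); inputs are arrays over lens-source pairs."""
    sc = np.array([sigma_crit(zl, zs) for zl, zs in zip(z_l, z_s)])
    w_ls = w_n / sc**2
    return np.sum(w_ls * gamma_t * sc) / np.sum(w_ls)
\end{verbatim}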
Following \citet{Miller2013}, we account for these multiplicative errors in the stacked lensing by the correction factor \begin{equation} 1+K(R)=\frac{\sum_{l}\sum_{s} w_{ls}(1+m_s)}{\sum_{l}\sum_{s} w_{ls}} \end{equation} The corrected lensing measurement is as: \begin{equation} \Delta\Sigma(R)^{\rm corrected}=\frac{\Delta\Sigma(R)}{1+K(R)} \end{equation} In this paper, we stack the tangential shears around satellite galaxies binned according to their projected halo-centric distance $r_p$, and fit the galaxy-galaxy lensing signal to obtain the subhalo mass of the satellite galaxies. We describe our theoretical lens models below. \section{The Lens Model} \label{sec:model} The surface density around a lens galaxy $\Sigma(R)$ can be written as: \begin{equation}\label{xi_gm} \Sigma(R)=\int_{0}^{\infty}\rho_{\rm g, m} \left(\sqrt{R^2+\chi^2} \right)\, {\rm d}\chi\, ; \end{equation} and the mean surface density within radius $R$ is \begin{equation}\label{xi_gm_in} \Sigma(< R)=\frac{2}{R^2} \int_0^{R} \Sigma(u) \, u \, {\rm d}u , \end{equation} where $\rho_{g, m}$ is the density profile around the lens, and $\chi$ is the physical distance along the line of sight. The excess surface density around a satellite galaxy is composed of three components: \begin{equation} \Delta\Sigma(R)=\Delta\Sigma_{\rm sub}(R)+\Delta\Sigma_{\rm host}(R, r_p)+\Delta\Sigma_{\rm star}(R)\, , \end{equation} where $\Delta\Sigma_{\rm sub}(R)$ is the contribution of the subhalo in which the satellite galaxy resides, $\Delta\Sigma_{\rm host}(R, r_p)$ is the contribution from the host halo of the cluster/group, where $r_p$ is the projected distance from the satellite galaxy to the center of the host halo, and $\Delta\Sigma_{\rm star}(R)$ is the contribution from the stellar component of the satellite galaxy. We neglect the two-halo term, which is the contribution from other haloes on the line-of-sight, because this contribution is only important at $R>3 {\rm h^{-1}Mpc}$ for clusters \citep[see][]{Shan2015}. At small scales where the subhalo term dominates, the contribution of the 2-halo term is at least an order of magnitude smaller than the subhalo term \citep{Li2009}. \subsection{Host halo contribution} We assume that host haloes are centered on the central galaxies of clusters, with a density profile following the NFW \citep{NFW97} formula: \begin{equation} \rho_{\rm host}(r)=\frac{\rho_{\rm 0,host}}{(1+r/r_{\rm s,host})^2(r/r_{\rm s,host})} \,, \end{equation} where $r_{\rm s,host}$ is the characteristic scale of the halo and $\rho_{\rm 0,host}$ is a normalization factor. Given the mass of a dark matter halo, its profile then only depends on the concentration parameter $c\equiv r_{\rm 200}/r_{\rm s,host}$, where $ r_{\rm 200}$ is a radius within which the average density of the halo equals to 200 times the universe critical density, $\rho_{\rm crit}$. The halo mass $M_{\rm 200}$ is defined as $M_{\rm 200}\equiv 4\pi/3r_{\rm 200}^3(200 \rho_{\rm crit})$. Various fitting formulae for mass-concentration relations have been derived from N-body numerical simulations \citep[e.g.][]{Bullock2001,Zhao2003,Dolag2004,Maccio2007,Zhao2009, Duffy2008, Neto2007}. These studies find that the concentration decreases with increasing halo mass. Weak lensing observations also measure this trend, but the measured amplitude of the mass-concentration relation is slightly smaller than that in the simulation \citep[e.g.][]{Mandelbaum2008, Miyatake2013, Shan2015}. 
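For orientation, the $M_{\rm 200}$--$r_{\rm 200}$ conversion implied by the definition above can be evaluated numerically as in the following sketch (the function name and the example halo mass and redshift are ours, chosen only for illustration): \begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u

cosmo = FlatLambdaCDM(H0=73.0, Om0=0.272)

def r200_from_m200(m200_msun, z):
    """Invert M_200 = (4 pi/3) r_200^3 * 200 * rho_crit(z); returns r_200 in Mpc."""
    rho_crit = cosmo.critical_density(z).to(u.Msun / u.Mpc**3).value
    return (3.0 * m200_msun / (4.0*np.pi * 200.0 * rho_crit))**(1.0/3.0)

print(r200_from_m200(1e14, 0.3))   # a 1e14 Msun halo at z = 0.3
\end{verbatim}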
Since there is almost no degeneracy between the subhalo mass and the concentration \citep{Li2013,George2012}, we expect that the exact choice of the mass-concentration relation should not have a large impact on our conclusions. Throughout this paper, we adopt the mass concentration relation from \citet{Neto2007}: % \begin{equation} c=4.67(M_{\rm 200}/10^{14} \rm {h^{-1}M_{\odot}})^{-0.11} \end{equation} % We stack satellite galaxies with different halo-centric distance $r_p$, and in host halos with different mass. Thus the lensing contribution from the host halo is an average of $\Delta{\Sigma}_{\rm host}$ over $r_p$ and host halo mass $M_{\rm 200}$. The host halo profile is modeled as follows. For each cluster, we can estimate its mass via the richness-mass relation of \citet{Rykoff2012}: \begin{equation} \label{eq:mass_rich} \ln{\left( \frac{M_{\rm 200}}{h_{\rm 70}^{-1}10^{14}M_{\odot}} \right)} = 1.48 + 1.06\ln(\lambda/60) \end{equation} In the redMaPPer catalog, each cluster has five possible central galaxies, each with probability $P_{\rm cen}$. We assume that the average $\Delta\Sigma$ contribution from the host halo to a satellite can be written as: \begin{equation} \Delta\bar{\Sigma}_{\rm host}(R)=A_0 \sum_{i}^{N_{\rm sat}}\sum_{j}^5\Delta\Sigma_{\rm host}(R|r_{\rm p, j}, M_{\rm 200}) P_{\rm cen, j}\, , \end{equation} where $r_{\rm p, j}$ is the projected distance between the satellite and the jth candidate of the host cluster center, $P_{\rm cen, j}$ is the probability of jth candidate to be the true center, and $N_{\rm sat}$ is the number of stacked satellite galaxies. $A_0$ is the only free parameter of the host halo contribution model. It describes an adjustment of the lensing amplitude. If the richness-mass relation is perfect, the best-fit $A_0$ should be close to unity. Note that, the subhalo mass determination is robust against the variation of the normalization in richness-mass relation. If we decrease the normalization in Eq. \ref{eq:mass_rich} by 20\%, the best-fit subhalo mass changes only by 0.01 dex, which is at least 15 times smaller than the $1\sigma$ uncertainties of $M_{\rm sub}$ (see table \ref{tab:para_nfw}). \subsection{Satellite contribution} In numerical simulations, subhalo density profiles are found to be truncated in the outskirts \citep[e.g.][]{Hayashi2003,springel2009, gao2004, xie2015}. In this work, we use a truncated NFW profile \citep[][tNFW, hereafter]{Baltz2009, Oguri2011} to describe the subhalo mass distribution: \begin{equation}\label{eq:rhosub} \rho_{\rm sub}(r)=\frac{\rho_{0}}{r/r_s(1+r/r_s)^2}\left(\frac{r_t^2}{r^2+r_t^2}\right)^2 \, \end{equation} where $r_t$ is the truncation radius of the subhalo, $r_s$ is the characteristic radius of the tNFW profile and $\rho_0$ is the normalization. The enclosed mass with $x\equiv r/r_s$ can be written as:% \begin{eqnarray} M(<x)&=&4\pi\rho_0 r_s^3 \frac{\tau^2}{2(\tau^2+1)^3(1+x)(\tau^2+x^2)}\nonumber\\ &&\hspace*{-16mm}\times\Bigl[(\tau^2+1)x\left\{x(x+1)-\tau^2(x-1)(2+3x) -2\tau^4\right\}\nonumber\\ &&\hspace*{-16mm}+\tau(x+1)(\tau^2+x^2)\left\{2(3\tau^2-1) {\rm arctan}(x/\tau)\right.\nonumber\\ &&\hspace*{-16mm}\left.+\tau(\tau^2-3)\ln(\tau^2(1+x)^2/(\tau^2+x^2))\right\} \Bigr], \label{eq:mbmo_nodim} \end{eqnarray} where $\tau\equiv r_t/r_s$. We define the subhalo mass $M_{\rm sub}$ to be the total mass within $r_t$. Given $M_{\rm sub}$, $r_s$ and $r_t$, the tangential shear $\gamma_t$ can be derived analytically (see the appendix in \citet{Oguri2011}). 
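The closed form for $M(<x)$ quoted above is straightforward to implement; the following sketch (with arbitrary, purely illustrative parameter values) also integrates Eq.~(\ref{eq:rhosub}) directly, which provides a numerical cross-check of the transcription: \begin{verbatim}
import numpy as np
from scipy.integrate import quad

def rho_tnfw(r, rho0, rs, rt):
    """Truncated NFW density profile, Eq. (eq:rhosub)."""
    x = r / rs
    return rho0 / (x * (1 + x)**2) * (rt**2 / (r**2 + rt**2))**2

def mass_closed_form(x, rho0, rs, rt):
    """Enclosed 3D mass M(<x), x = r/rs, as quoted in the text."""
    tau = rt / rs
    pre = 4*np.pi*rho0*rs**3 * tau**2 / (2*(tau**2+1)**3*(1+x)*(tau**2+x**2))
    term1 = (tau**2+1)*x*(x*(x+1) - tau**2*(x-1)*(2+3*x) - 2*tau**4)
    term2 = tau*(x+1)*(tau**2+x**2)*(2*(3*tau**2-1)*np.arctan(x/tau)
            + tau*(tau**2-3)*np.log(tau**2*(1+x)**2/(tau**2+x**2)))
    return pre*(term1 + term2)

def mass_numeric(r, rho0, rs, rt):
    """Direct integral 4*pi int_0^r rho(r') r'^2 dr' for comparison."""
    val, _ = quad(lambda rp: 4*np.pi*rho_tnfw(rp, rho0, rs, rt)*rp**2, 0.0, r)
    return val

rho0, rs, rt = 1.0, 0.02, 0.10   # illustrative values (lengths in h^-1 Mpc)
r = 0.05
print(mass_closed_form(r/rs, rho0, rs, rt), mass_numeric(r, rho0, rs, rt))
\end{verbatim}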
Previous studies have sometimes used instead the pseudo-isothermal elliptical mass distribution (PIEMD) model derived by \citet{Kassiola1993} for modeling the mass distribution around galaxies \citep[e.g.][]{Limousin2009,Natarajan2009,Kneib2011}. The surface density of the PIEMD model can be written as: \begin{equation}\label{eq:PIEMD} \Sigma(R)=\frac{\Sigma_0 R_0}{1-R_0/R_t}\left(\frac{1}{\sqrt{R_0^2+R^2}} - \frac{1}{\sqrt{R_t^2+R^2}} \right)\, , \end{equation} where $R_0$ is the core radius, $R_t$ is the truncation radius, and $\Sigma_0$ is a characteristic surface density. The projected mass enclosed within radius $R$ is \begin{equation} M(<R)=\frac{2\pi \Sigma_0 R_0 R_t}{R_t-R_0}\left[ \sqrt{R_0^2+R^2} - \sqrt{R_t^2+R^2}+(R_t-R_0)\right]\, , \end{equation} and the subhalo mass is defined as $M_{\rm sub}\equiv M(<R_t)$. In this paper, we will fit the data with both tNFW and PIEMD models. Finally, the lensing contribution from the stellar component is usually modeled as a point mass: \begin{equation} \Delta\Sigma_{\rm star}(R)=\frac{ M_{\rm star}}{\pi R^2}\,, \end{equation} where $M_{\rm star}$ is the total stellar mass of the galaxy. \section{Results} \label{sec:res} \subsection{Dependence on the projected halo-centric radius} To derive the subhalo parameters, we calculate the $\chi^2$ defined as: \begin{equation} \chi^{2}=\sum_{ij}\left(\Delta\Sigma(R_i)-\Delta\Sigma(R_i)^{obs}\right)\widehat{C_{ij}^{-1}}\left(\Delta\Sigma(R_j)-\Delta\Sigma(R_j)^{obs} \right)\,, \label{chi2} \end{equation} where $\Delta\Sigma(R_i)$ and $\Delta\Sigma(R_i)^{obs}$ are the model and the observed lensing signals in the $i$th radial bin. The matrix $C_{ij}$ is the covariance matrix of the data errors between different radial bins. Even if the ellipticities of different sources are independent, the off-diagonal terms of the covariance matrix can still be non-zero, because some source galaxies are used more than once \citep[e.g.][]{Han2015}. The covariance matrix can be reasonably estimated with a bootstrap method using the survey data themselves \citep{Mandelbaum2005}. We divide the CS82 survey area into 45 equal-area sub-regions, generate 3000 bootstrap samples by resampling these sub-regions, and calculate the covariance matrix from the bootstrap samples. The likelihood function is then given by: \begin{equation} L\propto \exp\left(-\frac{1}{2}\chi^2\right). \label{likelihood} \end{equation} We select satellite galaxies as described in section~\ref{sec:lens_select} and measure the stacked lensing signal around satellites in three $r_p$ bins: $0.1<r_p<0.3$, $0.3<r_p<0.6$ and $0.6<r_p<0.9$ (in units of ${\rm h^{-1}Mpc}$). For each bin, we fit the stacked lensing signal with a Markov Chain Monte Carlo (MCMC) method. For the tNFW subhalo case, we have 4 free parameters: $M_{\rm sub}$, the subhalo mass; $r_s$, the tNFW profile scale radius; $r_t$, the tNFW truncation radius; and $A_0$, the normalization factor of the host halo lensing contribution. We adopt flat priors with broad boundaries for these model parameters. We set the upper boundaries for $r_t$ and $r_s$ to be the values of the virial radius and the scale radius of an NFW halo of $10^{13}\rm {h^{-1}M_{\odot}}$. We choose these values because the subhalo masses of satellite galaxies in clusters are expected to be much smaller than $10^{13}\rm {h^{-1}M_{\odot}}$ \citep{Gao2012}. For the PIEMD case, we also have 4 free parameters: $M_{\rm sub}$, $R_0$, $R_t$ and $A_0$. We set the upper boundary of $R_t$ to be the same as $r_t$ in the tNFW case.
We set the upper boundary of $R_0$ to be $20\,{\rm kpc}/h$, which is much larger than the typical size of $R_0$ found in observations \citep{Limousin2005, Natarajan2009}. We believe our choice of priors is very conservative. The detailed choices of priors are listed in table~\ref{tab:prior}. \begin{table} \begin{center} \caption{Flat priors for model parameters. $M_{\rm sub}$ is in units of $\rm {h^{-1}M_{\odot}}$ and distances are in units of ${\rm h^{-1}Mpc}$.} \begin{tabular}{|c|c|c|} \hline &&\\ &lower bound& upper bound \\ &&\\ \hline $A_{\rm 0}$ & 0.3& 2 \\ \hline $\log{M_{\rm sub}} $ & 9 & 13 \\ \hline $r_t$ (tNFW) & 0 & 0.35 \\ \hline $r_s$ (tNFW) & 0& 0.06 \\ \hline $R_0$ (PIEMD) & 0 & 0.02\\ \hline $R_t$ (PIEMD) & 0 & 0.35\\ \hline \end{tabular} \label{tab:prior} \end{center} \end{table} In Fig.~\ref{fig:gglensing}, we show the observed galaxy-galaxy lensing signal. Red dots with error bars represent the observational data. The vertical error bars show 1$\sigma$ errors estimated by bootstrap resampling of the lens galaxies. Horizontal error bars show the range of each radial bin. The lensing signals clearly show the characteristic shape described in figure 2 of \citet{Li2013}. The lensing signal from the subhalo term dominates the central part. On the other hand, the contribution from the host halo is nearly zero on small scales, and decreases to negative values at intermediate scales. This is because $\Sigma_{\rm host}(R)$ becomes increasingly large compared to $\Sigma_{\rm host}(<R)$ at intermediate $R$. At radii where $R>r_p$, $\Delta\Sigma_{\rm host}(R)$ increases with $R$ again. At large scales where $R\gg r_p$, $\Delta\Sigma_{\rm host}(R,r_p)$ approaches $\Delta\Sigma_{\rm host}(R,0)$. The solid lines show the best-fit results of the tNFW model. Dashed lines of different colors show the contributions from the different components. The best-fit model parameters are listed in table~\ref{tab:para_nfw}. Note that the value of the first point in the $[0.6, 0.9]$ $r_p$ bin is very low, which may be due to the relatively small number of source galaxies in the innermost radial bin. We exclude this point when deriving our best-fit model. For comparison, we also show the best-fit parameters including this point in table~\ref{tab:para_nfw}. Fig.~\ref{fig:2dcontour} shows an example of the MCMC posterior distribution of the tNFW model parameters for satellites in the $[0.3, 0.6]{\rm h^{-1}Mpc}$ $r_p$ bin. The constraints on the subhalo mass $\log{M_{\rm sub}}$ are tighter than those in \citet{Li2014} ($\sim\pm 0.7$). The amplitude $A_0$ is slightly smaller than unity, implying that the clusters may be less massive than predicted by the mass-richness relation. However, we are still unable to obtain significant constraints on the structural parameters of the subhalos. In principle, the density profile cut-off caused by tidal effects can be measured with the tangential shear. However, the galaxy-galaxy lensing measurement stacks many satellites, leading to smearing of the signal. With the data used here, the tidal radius cannot be constrained when left as a free parameter. Some galaxy-galaxy lensing investigations have introduced additional constraints in order to estimate the tidal radius. For example, \citet{Gillis2013b} assumed that galaxies of the same stellar mass but in different environments have similar subhalo density profiles except for the cut-off radius. With this additional assumption, they obtained $r_{\rm tidal}/r_{200}=0.26\pm0.14$.
During the review process of our paper, \citet{Sifon2015} posted a similar galaxy-galaxy lensing measurement of satellite galaxies using the KiDS survey, and they also found that their data cannot distinguish models with or without tidal truncation. In Fig.~\ref{fig:piemd}, we show the fitting results of the PIEMD model. The best-fit parameters are listed in table~\ref{tab:para_piemd}. For reference, we also over-plot the theoretical prediction of the best-fit tNFW model with blue dashed lines. Both models provide a good description of the data. The best-fit $M_{\rm sub}$ and $A_0$ of the two models agree well with each other, showing the validity of our results. In numerical simulations, subhalos that are close to host halo centers are subject to strong mass stripping \citep{Springel2001, DeLucia2004, gao2004, xie2015}. The mass loss fraction of subhalos increases from $\sim 30\%$ at $r_{\rm 200}$ to $\sim 90\%$ at 0.1$r_{\rm 200}$ \citep{gao2004,xie2015}. From the galaxy-galaxy lensing in this work, we also find that the $M_{\rm sub}$ of the $[0.6, 0.9]$ $r_p$ bin is much larger than that in the $[0.1, 0.3]$ $r_p$ bin (by a factor of 18). In Fig.~\ref{fig:ML}, we plot the subhalo mass-to-stellar mass ratio for the three $r_p$ bins. The $M_{\rm sub}/M_{\rm star}$ ratio in the $[0.6, 0.9]{\rm h^{-1}Mpc}$ $r_p$ bin is about 12 times larger than that of the $[0.1, 0.3]{\rm h^{-1}Mpc}$ bin. If we include the first point in the $[0.6, 0.9]{\rm h^{-1}Mpc}$ bin, the $M_{\rm sub}/M_{\rm star}$ of the tNFW model decreases by 40\%. For reference, we over-plot the $M_{\rm sub}/M_{\rm star}$--$r_p$ relation predicted by the semi-analytical model of \citet{Guo2011}. We adopt the mock galaxy catalog generated with the \citet{Guo2011} model using the Millennium simulation \citep{Springel2006}. We select mock satellite galaxies with stellar masses $M_{\rm star}>10^{10}\rm {h^{-1}M_{\odot}}$ from clusters with $M_{\rm 200}>10^{14}\,\rm {h^{-1}M_{\odot}}$. The median $M_{\rm sub}/M_{\rm star}$--$r_p$ relation is shown in Fig.~\ref{fig:ML} with a black solid line. The green shaded region represents the parameter space where 68\% of the mock satellites distribute. We see that the semi-analytical model predicts an increasing $M_{\rm sub}/M_{\rm star}$ with $r_p$, but with a flatter slope than in our observations. Note that we have not attempted to reproduce our detailed observational procedure here, so source and cluster selection might potentially explain this discrepancy. Particularly relevant here is the fact that our analysis relies on a probability cut $P_{\rm mem}>0.8$ for satellite galaxies, which implies that $\sim 10\%$ of our satellite galaxies may not be true members of the clusters, but rather galaxies along the line of sight. This contamination is difficult to eliminate completely in galaxy-galaxy lensing, because the uncertainties in the line-of-sight distances are usually larger than the sizes of the clusters themselves. In \citet{Li2013}, we used mock catalogs constructed from $N$-body simulations to investigate the impact of interlopers. It is found that 10\% of the galaxies identified as satellites are interlopers, and this introduces a contamination of $15\%$ in the lensing signal. We expect a comparable level of bias here, as shown in Appendix A. It should be noted, however, that the average membership probability of our satellite sample does not change significantly with $r_p$, implying that the contamination by fake member galaxies is similar at different $r_p$.
We therefore expect that the contamination by interlopers will not lead to any qualitative changes in the shapes of the density profiles. A detailed comparison between observations and simulations, taking into account the impact of interlopers, will be carried out in a forthcoming paper. \begin{figure} \includegraphics[width=0.4\textwidth]{fig2.pdf} \caption{Observed galaxy-galaxy lensing signal for satellite galaxies as a function of radius. The top, middle and bottom panels show results for satellites with $r_p$ in the range $[0.1, 0.3]{\rm h^{-1}Mpc}$, $[0.3, 0.6]{\rm h^{-1}Mpc}$ and $[0.6, 0.9]{\rm h^{-1}Mpc}$, respectively. Red points with error bars represent the observational data. The vertical error bars show the 1$\sigma$ bootstrap error. The horizontal error bars show the bin size. Black solid lines show the best-fit tNFW model prediction. Dashed lines of different colors show the contributions from the subhalo, the host halo, and the stellar mass, respectively.} \label{fig:gglensing} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{fig3.pdf} \caption{The contours show 68\% and 95\% confidence intervals for the tNFW model parameters, $M_{\rm sub}$, $r_s$, $r_t$ and $A_0$. Results are shown for satellites with $r_p=[0.3, 0.6]{\rm h^{-1}Mpc}$. The last panel of each row shows the 1D marginalized posterior distributions. Note that the plotting range for $r_s$ and $r_t$ is exactly the same as the limits of our prior, so we do not have strong constraints on these two parameters, except that high values are slightly disfavored for both.} \label{fig:2dcontour} \end{figure*} \begin{figure} \includegraphics[width=0.4\textwidth]{fig4.pdf} \caption{This figure is similar to Fig.~\ref{fig:gglensing}. Solid lines represent the best-fit PIEMD model prediction and blue dashed lines are the predictions of the tNFW model.} \label{fig:piemd} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{fig5.pdf} \caption{The subhalo mass-to-stellar mass ratio for galaxies in different $r_p$ bins. Vertical error bars show the 68\% confidence interval. Empty circles show the best-fit $M_{\rm sub}/M_{\rm star}$ values for the $[0.6, 0.9]{\rm h^{-1}Mpc}$ $r_p$ bin including the innermost observational point (see the bottom panel in Fig.~\ref{fig:gglensing}). Horizontal error bars show the bin range of $r_p$. The green shaded region represents the parameter space where 68\% of the semi-analytical satellite galaxies distribute. The semi-analytical model also predicts an increasing $M_{\rm sub}/M_{\rm star}$ with $r_p$, but with a flatter slope.} \label{fig:ML} \end{figure} \begin{table*} \begin{center} \caption{The best-fit values of the tNFW model parameters for the stacked satellite lensing signal in different $r_p$ bins. $\log{M_{\rm sub}}$ and $A_{\rm 0}$ are the best-fit values of the subhalo mass and the normalization factor. We do not show the best-fit values of $r_s$ and $r_t$ as they are poorly constrained. $N_{\rm sat}$ is the number of satellite galaxies in each bin. $\langle \log M_{\rm star}\rangle$ is the average stellar mass of satellites. All errors indicate the 68\% confidence intervals. Masses are in units of $\rm {h^{-1}M_{\odot}}$. In our fiducial fitting process, we exclude the first point in the $[0.6,0.9]{\rm h^{-1}Mpc}$ $r_p$ bin as an outlier (see Fig.~\ref{fig:gglensing}). For comparison, the bottom row of the table shows the fitting results including this first point.
} \begin{tabular}{l|c|c|c|c|c|c} \hline &&&&\\ $r_p$ range & $\langle \log(M_{\rm star}) \rangle$ & $\log{M_{\rm sub}} $ & $A_{\rm 0}$ & $M_{\rm sub}/ M_{\rm star}$ &$N_{\rm sat}$ & $\langle z_l \rangle$\\ &&&&\\ \hline & & && \\ $[0.1, 0.3]$ & 10.68 & $ 11.37 ^{+ 0.35 }_{- 0.35}$& $ 0.80^{+ 0.01}_{- 0.01}$ & $ 4.43^{+ 6.63}_{- 2.23}$ & 3963 &0.33 \\ & & && \\ \hline &&&&\\ $[0.3, 0.6]$ & 10.72 & $ 11.92 ^{+ 0.19 }_{- 0.18}$& $ 0.86^{+ 0.02}_{- 0.02}$ & $ 17.23^{+ 6.98}_{- 6.84} $ &2507 & 0.29\\ & & & &\\ \hline &&&&\\ $[0.6, 0.9]$ & 10.78 & $ 12.64 ^{+ 0.12 }_{- 0.11}$& $ 0.81^{+ 0.04}_{- 0.04}$ & $ 75.40^{+ 19.73}_{- 19.09} $&577 &0.24\\ & & & &\\ \hline &&&&\\ $[0.6, 0.9]^*$ & 10.78 & $ 12.49 ^{+ 0.13 }_{- 0.13}$& $ 0.81^{+ 0.04}_{- 0.04}$ & $ 54.64^{+ 15.58}_{- 15.80} $&577 &0.24\\ &&&&\\ \hline \end{tabular} \label{tab:para_nfw} \end{center} \end{table*} \begin{table} \begin{center} \caption{Best-fit parameters of the PIEMD model. Columns are as in table~\ref{tab:para_nfw}.} \begin{tabular}{l|c|c|c|c} \hline &&&\\ $r_p$ range & $\langle \log(M_{\rm star}) \rangle$ & $\log{M_{\rm sub}} $ & $A_{\rm 0}$ & $M_{\rm sub} /M_{\rm star}$ \\ &&&\\ \hline & & & \\ $[0.1, 0.3]$ & 10.68 & $ 11.30 ^{+ 0.55 }_{- 0.57}$& $ 0.80^{+ 0.01}_{- 0.01}$ & $ 6.72^{+ 7.84}_{- 5.59} $\\ & & & \\ \hline &&&\\ $[0.3, 0.6]$ & 10.72 & $ 11.76 ^{+ 0.33 }_{- 0.32}$& $ 0.86^{+ 0.01}_{- 0.01}$ & $ 14.40^{+ 9.01}_{- 9.16} $\\ & & & \\ \hline &&&\\ $[0.6, 0.9]$ & 10.78 & $ 12.80 ^{+ 0.15 }_{- 0.15}$& $ 0.79^{+ 0.04}_{- 0.04}$ & $ 110.98^{+ 35.76}_{- 36.49} $\\ &&&\\ \hline &&&\\ $[0.6, 0.9]^*$ & 10.78 & $ 12.84 ^{+ 0.13 }_{- 0.13}$& $ 0.79^{+ 0.04}_{- 0.04}$ & $ 119.74^{+ 34.34}_{- 35.47} $\\ &&&\\ \hline \end{tabular} \label{tab:para_piemd} \end{center} \end{table} \subsection{Dependence on satellite stellar mass} In the CDM structure formation scenario, satellite galaxies with larger stellar mass tend to occupy larger haloes at infall time \citep[e.g.][]{Vale2004, Conroy2006,Yang_etal2012,Lu_etal2014}. In addition, massive haloes may retain more mass than lower mass ones at the same halo-centric radius \citep[e.g.][]{Conroy2006,Moster2010,Simha2012}. For these two reasons, we expect that satellite galaxies with larger stellar mass reside in more massive subhalos. To test this prediction, we select all galaxies with $r_p$ in the range $[0.3, 0.9]$ ${\rm h^{-1}Mpc}$, and split them into two subsamples: $10<\log(M_{\rm star}/\rm {h^{-1}M_{\odot}})<10.5$ and $11<\log(M_{\rm star}/\rm {h^{-1}M_{\odot}})<12$. The galaxy-galaxy lensing signals of the two satellite samples are shown in Fig.~\ref{fig:mstar}. It is clear that at small scales, where subhalos dominate, the lensing signals are larger around the more massive satellites. The best-fit subhalo masses for the low-mass and high-mass satellites are $\log(M_{\rm sub}/\rm {h^{-1}M_{\odot}})=11.14 ^{+ 0.66 }_{- 0.73}$ ($M_{\rm sub}/M_{\rm star}=19.5^{+19.8}_{-17.9}$) and $\log(M_{\rm sub}/\rm {h^{-1}M_{\odot}})=12.38 ^{+ 0.16 }_{- 0.16}$ ($M_{\rm sub}/M_{\rm star}=21.1^{+7.4}_{-7.7}$), respectively. \begin{figure} \includegraphics[width=0.5\textwidth]{fig6.pdf} \caption{Observed galaxy-galaxy lensing signal for satellite galaxies in different stellar mass bins at fixed $r_p$. The legend shows the $\log(M_{\rm star}/\rm {h^{-1}M_{\odot}})$ range for different data points.
The solid lines are the best-fit tNFW models.} \label{fig:mstar} \end{figure} \section{Discussion and conclusion} \label{sec:sum} In this paper, we present measurements of the galaxy-galaxy lensing signal of satellite galaxies in redMaPPer clusters. We select satellite galaxies from massive clusters (richness $\lambda>20$) in the redMaPPer catalog. We fit the galaxy-galaxy lensing signal around the satellites using a tNFW profile and a PIEMD profile, and obtain the subhalo masses. We bin satellite galaxies according to their projected halo-centric distance $r_p$ and find that the best-fit subhalo mass of satellite galaxies increases with $r_p$. The best-fit $M_{\rm sub}$ for satellites in the $r_p=[0.6, 0.9]{\rm h^{-1}Mpc}$ bin is larger than that in the $r_p=[0.1, 0.3]{\rm h^{-1}Mpc}$ bin by a factor of 18. The $M_{\rm sub}/M_{\rm star}$ ratio is also found to increase as a function of $r_p$, by a factor of 12. We also find that satellites with more stellar mass tend to populate more massive subhalos. Our results provide evidence for the tidal stripping of the subhalos of red sequence satellite galaxies, as expected in the CDM hierarchical structure formation scenario. Many previous studies have tried to test this theoretical prediction using gravitational lensing. Most of these previous studies focus on measuring subhalo masses in individual rich clusters. For example, \citet{Natarajan2009} report the measurement of dark matter subhalos as a function of projected halo-centric radius for the cluster Cl0024+16. They found that the mass of dark matter subhalos hosting early type $L^{*}$ galaxies increases by a factor of 6 from a halo-centric radius $r<0.6$ ${\rm h^{-1}Mpc}$ to $r\sim 4$ ${\rm h^{-1}Mpc}$. In recent work, \citet{Okabe2014} present the weak lensing measurement of 32 subhalos in the very nearby Coma cluster. They also found that the mass-to-light ratio of subhalos increases as a function of halo-centric radius: the $M/L_{\rm i'}$ of subhalos increases from 60 at 10$'$ to about 500 at 70$'$. Our work is complementary to these results. In this paper, we measure the galaxy-galaxy lensing signal of subhalos in a statistical sample of rich clusters. Our results lead to similar conclusions and show evidence for the effects of tidal stripping. However, due to the statistical noise of the current survey, there are still large uncertainties in the measurement of $M_{\rm sub}/M_{\rm star}$. Theoretically, the galaxy-halo connection can be established by studying how galaxies of different properties reside in dark matter halos of different masses, through models such as halo occupation models \citep[e.g.][]{Jing_etal1998, PeacockSmith2000, BerlindWeinberg2002}, conditional luminosity function models \citep[e.g.][]{Yang_etal2003}, (sub)halo abundance matching \citep[e.g.][]{Vale2004, Conroy2006, Wang2006, Yang_etal2012, Chaves-Montero2015}, and empirical models of star formation and mass assembly of galaxies in dark matter halos \citep[e.g.][]{Lu_etal2014, Lu_etal2015}. In most of these models, the connection is made between galaxy luminosities (stellar masses) and halo masses before a galaxy becomes a satellite, i.e. before a galaxy and its halo have experienced halo-specific environmental effects. The tidal stripping effects after a halo becomes a subhalo can be followed in dark matter simulations.
This, together with the halo-galaxy connection established through the various models, can be used to predict the subhalo-galaxy relation at the present day. During the revision of this paper, \citet{Han2016} posted a theoretical work on subhalo spatial and mass distributions. In Sec. 6.4 of their paper, they showed how our lensing measurements can be derived theoretically from a subhalo abundance matching point of view. With future data, the galaxy-galaxy lensing measurements of subhalos associated with satellites are expected to provide important constraints on galaxy formation and evolution in dark matter halos. The results of this paper also demonstrate the promise of next generation weak lensing surveys. In \citet{Li2013}, it is shown that the constraints on subhalo structure and $M_{\rm sub}/M_{\rm star}$ can be improved dramatically with next generation galaxy surveys such as LSST, due to the increase in both sky area (17000 deg$^2$) and depth (40 gal/arcmin$^2$), which is crucial for constraining the co-evolution of satellite galaxies and subhalos. Space missions such as Euclid will survey a similar area of the sky (20000 deg$^2$) but with much better image resolution (FWHM $0.1''$ versus $0.7''$ for LSST), which is expected to provide even better measurements of galaxy-galaxy lensing. The method described here can readily be extended to these future surveys. \section*{Acknowledgements} Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. The Brazilian partnership on CFHT is managed by the Laborat\'orio Nacional de Astrof\'isica (LNA). This work made use of the CHE cluster, managed and funded by ICRA/CBPF/MCTI, with financial support from FINEP and FAPERJ. We thank the support of the Laborat\'orio Interinstitucional de e-Astronomia (LIneA). We thank the CFHTLenS team for their pipeline development and verification, upon which much of this survey's pipeline was built. LR acknowledges NSFC (grants No. 11303033 and 11133003) and support from the Youth Innovation Promotion Association of CAS. HYS acknowledges support from the Marie-Curie International Fellowship (FP7-PEOPLE-2012-IIF/327561), the Swiss National Science Foundation (SNSF), and NSFC of China under grant 11103011. HJM acknowledges support of NSF AST-0908334, NSF AST-1109354 and NSF AST-1517528. JPK acknowledges support from the ERC advanced grant LIDA and from CNRS. TE is supported by the Deutsche Forschungsgemeinschaft through the Transregional Collaborative Research Centre TR 33 `The Dark Universe'. AL is supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. BM acknowledges financial support from the CAPES Foundation grant 12174-13-0. MM is partially supported by CNPq (grant 486586/2013-8) and FAPERJ (grant E-26/110.516/2-2012).
\section{Introduction} Very High Energy (VHE) $\gamma$-ray astronomy \cite{ong} is still in its infancy. Operating in the energy range from $\sim$\,100 GeV to 30 TeV and beyond, this subfield of astronomy represents an exciting, relatively unsampled region of the electromagnetic spectrum, and a tremendous challenge. To develop more fully, this field needs an instrument capable of performing continuous systematic sky surveys and detecting transient sources on short timescales without a priori knowledge of their location. These primary science goals require a telescope with a wide field of view, a high duty cycle, and excellent source location and background rejection capabilities: an instrument that complements both existing and future ground- and space-based $\gamma$-ray telescopes. To be viable, VHE astronomy must overcome a number of fundamental difficulties. Since the flux of VHE photons is small, telescopes with large collecting areas ($>10^3\,\mathrm{m}^2$) are required to obtain statistically significant photon samples; telescopes of this size can, currently, only be located on the Earth's surface. However, VHE photons do not readily penetrate the $\sim\,28$ radiation lengths of the Earth's atmosphere ($1030\,\mathrm{g}/\mathrm{cm}^2$ thick at sea-level) but instead interact with air molecules to produce secondary particle cascades, or extensive air showers. Another difficulty of VHE astronomy is the large background of hadronic air showers, induced by cosmic-ray primaries (primarily protons), that cannot be vetoed. In this paper we describe the conceptual design of an instrument that builds upon traditional extensive air shower methods; however, unlike typical extensive air shower arrays, the detector design utilizes unique imaging capabilities and fast timing to identify (and reject) hadronic cosmic-ray backgrounds and achieve excellent angular resolution, both of which lead to improved sensitivity. In the following sections we briefly motivate the need for such an instrument (Section 2), discuss in detail the telescope design parameters with emphasis on their optimization (Section 3), describe the conceptual design of a VHE telescope and the simulations used in this study (Section 4), and evaluate the capabilities of such a detector in terms of source sensitivity (Section 5). Finally, the results of this study are summarized and compared to both current and future VHE telescopes. \section{Motivation} VHE $\gamma$-ray astronomy has evolved dramatically in the last decade with the initial detections of steady and transient sources, both galactic and extragalactic. To date, 7 VHE $\gamma$-ray sources have been unambiguously detected \cite{crab,mrk421,mrk501,1ES2344,p1706,vela,sn1006}; this contrasts dramatically with the number of sources detected in the more traditional regime of $\gamma$-ray astronomy at energies below $\sim$\,20 GeV. The EGRET instrument aboard the Compton Gamma-Ray Observatory, for example, has detected pulsars, supernova remnants, gamma-ray bursts, active galactic nuclei (AGN), and approximately 50 unidentified sources in the 100 MeV-20 GeV range \cite{agn1,agn2}. The power-law spectra of many EGRET sources show no sign of an energy cutoff, suggesting that they may be observable at VHE energies. The 4 Galactic VHE objects, all supernova remnants, appear to have $\gamma$-ray emission that is constant in both intensity and spectrum. The 3 extragalactic VHE sources are AGN of the blazar class.
Although AGN have been detected during both quiescent and flaring states, it is the latter that produce the most statistically significant detections. During these flaring states the VHE $\gamma$-ray flux has been observed to be as much as 10 times that of the Crab nebula, the standard candle of TeV astronomy \cite{hegra_flare}. Although observed seasonally since their initial detections, the TeV sources detected to date have never been monitored continuously over long periods, nor has there ever been a systematic survey of the VHE sky. This is primarily due to the fact that all VHE source detections to date have been obtained with air-Cherenkov telescopes. Because they are optical instruments, air-Cherenkov telescopes only operate on dark, clear, moonless nights, giving a $\sim\,5-10\,\%$ duty cycle for observations; these telescopes also have relatively narrow fields of view ($\sim\,10^{-2}\,\mathrm{sr}$). Although they are likely to remain unsurpassed in sensitivity for detailed source observations, these telescopes have limited usefulness as transient monitors and would require over a century to complete a systematic sky survey. The identification of additional VHE sources would contribute to our understanding of a range of unsolved astrophysical problems such as the origin of cosmic rays, the cosmological infrared background, and the nature of supermassive black holes. Unfortunately, the field of VHE astronomy is data starved; new instruments capable of providing continuous observations and all-sky monitoring with a sensitivity approaching that of the air-Cherenkov telescopes are therefore required. A VHE telescope with a wide field of view and high duty cycle could also serve as a high-energy early warning system, notifying space- and ground-based instruments of transient detections quickly for detailed multi-wavelength follow-up observations. Its operation should coincide with the launch of next-generation space-based instruments such as GLAST \cite{glast}. \section{Figure of Merit Parameters} A conceptualized figure of merit is used to identify the relevant telescope design parameters. This figure of merit, also called the signal to noise ratio, can be written as \begin{equation} \left(\frac{signal}{noise}\right) \propto \frac{R_\gamma Q \sqrt{A_{eff}~T}}{\sigma_\theta} \label{equation1} \end{equation} \begin{table} \caption{\label{table1} Figure of merit parameter definitions.} \vskip 0.5cm \small \centerline{ \begin{tabular}{|l l l|} \hline {\em Parameter} & {\em Units} & {\em Definition} \\ \hline \hline $A_{eff}$ & $\mathrm{m}^2$ & (effective) detector area \\ $T$ & sec & exposure \\ $\sigma_\theta$ & $^o$ & angular resolution \\ $R_\gamma$ & - & $\gamma$/hadron relative trigger efficiency \\ $Q$ & - & $\gamma$/hadron identification efficiency \\ \hline \end{tabular}} \end{table} \normalsize where the various parameters are defined in Table\,\ref{table1}. Ultimately, source sensitivity is the combination of these design parameters. Although a more quantitative form of the figure of merit is used to estimate the performance of the conceptual telescope design (see Equation\,\ref{equation4}), we use Equation\,\ref{equation1} to address specific design requirements.
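As a simple illustration of the scaling implied by Equation\,\ref{equation1} (the numbers below are purely illustrative and are not detector predictions), halving the angular resolution buys the same sensitivity gain as quadrupling the effective area, because the area enters only through its square root:
\begin{verbatim}
def figure_of_merit(r_gamma, q, a_eff, t, sigma_theta):
    # Relative signal-to-noise from Eq. (1); only ratios between
    # configurations are meaningful.
    return r_gamma * q * (a_eff * t) ** 0.5 / sigma_theta

base        = figure_of_merit(1.0, 1.0, 150.0**2, 1.0, 0.6)
better_psf  = figure_of_merit(1.0, 1.0, 150.0**2, 1.0, 0.3)
bigger_area = figure_of_merit(1.0, 1.0, 4 * 150.0**2, 1.0, 0.6)
print(better_psf / base, bigger_area / base)   # both equal 2.0
\end{verbatim}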
\subsection{$R_\gamma$} \label{sec_alt} Air showers induced by primary particles in the 100\,GeV to 10\,TeV range reach their maximum particle\footnote{Throughout the rest of the paper the generic term ``particles'' will refer to $\gamma, e^{\pm}, \mu^{\pm}$, and hadrons unless otherwise noted.} number typically at altitudes between 10 and 15\,km above sea level (a.s.l.). An earth-bound detector therefore samples the cascade at a very late stage of its development, when the number of shower particles has already dropped by an order of magnitude from its value at shower maximum. Figure\,\ref{longi} shows the result of computer simulations\footnote {Here and in the following analysis, the CORSIKA 5.61 \cite{corsika} code is used for air-shower simulation in the atmosphere. It is briefly described in the next section.} of the longitudinal profile of air showers induced in the Earth's atmosphere by proton and $\gamma$-primaries with fixed energies and zenith angles $0^{\mathrm o}\le\theta\le45^{\mathrm o}$. The small number of particles reaching 2500m detector altitude sets severe limits for observations at these altitudes. In addition, the number of particles in proton showers actually exceeds the number of particles in $\gamma$-showers at low altitudes (Figure\,\ref{r_gamma}). This implies that the trigger probability, and thus the effective area, of the detector is larger for proton than for $\gamma$-showers, an unfavorable situation which leads to an $R_{\gamma}$ (the ratio of $\gamma$-ray to proton trigger efficiency) less than 1. At 4500\,m, however, the mean number of particles exceeds the number at 2500\,m by almost an order of magnitude at all energies. Therefore, telescope location at an altitude $\geq$\,4000\,m is important for an air shower array operating at VHE energies not only because of the larger number of particles, and hence the lower energy threshold, but also because of the intrinsic $\gamma$/hadron-separation available, due to the relative trigger probabilities, that exists at higher altitudes. \begin{figure} \epsfig{file=rmiller_fig1.eps,width=14.0cm} \caption{\label{longi} Mean number of particles ($\gamma, e^{\pm}, \mu^{\pm}$, hadrons) vs. altitude for proton- and $\gamma$-induced air showers with primary energies 100\,GeV, 500\,GeV, 1\,TeV, and 10\,TeV. The low energy cutoff of the particle kinetic energy is 100\,keV ($\gamma, e^{\pm}$), 0.1\,GeV ($\mu^{\pm}$), and 0.3\,GeV (hadrons).} \end{figure} \begin{figure} \epsfig{file=rmiller_fig2.eps,width=14.0cm} \caption{\label{r_gamma} Ratio of particle numbers in $\gamma$- and proton-induced showers vs. altitude.} \end{figure} \begin{figure} \epsfig{file=rmiller_fig3.eps,width=14.0cm} \caption{\label{n_muon} For a $150\times 150\,\mathrm{m}^2$ detector area and cores randomly distributed over the detector area, (a) shows the mean number of $\mu^{\pm}$ in proton-induced showers as a function of the energy of the primary particle, and (b) shows the fraction $f$ of proton showers without $\mu^{\pm}$ as a function of the energy of the primary particle. The solid line is the actual value of $f$, the dashed line is the expected value assuming the number of $\mu^{\pm}$ follows a Poisson distribution.} \end{figure} \subsection{$Q$} \begin{figure} \epsfig{file=rmiller_fig4.eps,width=14.0cm} \caption{\label{wavelet} Shower image (spatial particle distribution reaching ground level) for a typical TeV $\gamma$- and proton shower (top). 
Event image after convolution with ``Urban Sombrero'' smoothing function (middle), and after significance thresholding (bottom). (0,0) is the center of the detector.} \end{figure} The rate of VHE $\gamma$-ray induced showers is significantly smaller than that of showers produced by hadronic cosmic rays\footnote{At 1 TeV the ratio of proton- to $\gamma$-induced showers from the Crab Nebula is approximately 10$^4$, assuming an angular resolution of 0.5 degrees.}. Therefore, rejecting this hadronic background and improving the signal to noise ratio is crucial to the success of any VHE $\gamma$-ray telescope. The effectiveness of a background rejection technique is typically expressed as a quality factor $Q$ defined as \begin{equation} Q = \frac{\epsilon_\gamma}{\sqrt{1-\epsilon_p}} \label{equation2} \end{equation} where $\epsilon_\gamma$ and $\epsilon_p$ are the efficiencies for {\em identifying} $\gamma$-induced and proton-induced showers, respectively. Traditional extensive air-shower experiments have addressed $\gamma$/hadron-separation (i.\,e. background rejection) by identifying the penetrating particle component of air showers (see e.\,g. \cite{hegra_gh,casa_gh}), particularly muons. Although this approach is valid at energies exceeding 50 TeV, the number of muons detectable by a telescope of realistic effective area is small at TeV energies (see Figure\,\ref{n_muon}\,(a)). In addition, the $N_{\mu}$-distribution deviates from a Poisson distribution, implying that the fraction of proton showers {\em without} any muon is larger than $e^{-\overline{N}_{\mu}}$ (Figure\,\ref{n_muon}\,(b)). Relying on muon detection for effective $\gamma$/hadron-separation therefore requires efficient muon detection over a large area. A fine-grained absorption calorimeter to detect muons and perform air shower calorimetry can, in principle, lead to an effective rejection factor; however, the costs associated with such a detector are prohibitive. In contrast to air-shower experiments, imaging air-Cherenkov telescopes have achieved quality factors $Q>$\,7 by performing a {\em shape analysis} on the observed image \cite{whipple_q}. Non-uniformity of hadronic images arises from the development of localized regions of high particle density generated by small sub-showers. Although some of the background rejection capability of air-Cherenkov telescopes is a result of their angular resolution, rejection of hadronic events by identifying the differences between $\gamma$- and proton-induced images considerably increases source sensitivity. Although air-Cherenkov telescopes image the shower as it (primarily) appears at shower maximum, these differences should also be evident in the particle distributions reaching the ground. Figure\,\ref{wavelet}\,(top) shows the particle distributions reaching ground level for typical TeV $\gamma$-ray and proton-induced showers. This figure illustrates the key differences: the spatial distribution of particles in $\gamma$-ray showers tends to be compact and smooth, while in proton showers the distributions are clustered and uneven. Mapping the spatial distribution of shower particles (imaging), and identifying/quantifying shower features such as these, should yield improved telescope sensitivity. \subsection{$\sigma_\theta$} Shower particles reach the ground as a thin disk of diameter approximately 100\,m. To first order, the disk can be approximated as a plane defined by the arrival times of the leading shower front particles. The initiating primary's direction is assumed to be perpendicular to this plane.
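The following minimal sketch (Python/\texttt{numpy}; a single, non-iterative pass with user-supplied weights, and therefore a simplification of the full iterative procedure described later) shows how such a plane fit converts pixel positions (in metres) and hit times (in nanoseconds) into a reconstructed arrival direction:
\begin{verbatim}
import numpy as np

C_M_PER_NS = 0.2998   # speed of light in m/ns

def fit_shower_plane(x, y, t, w=None):
    # Weighted least-squares fit of c*t = c*t0 + l*x + m*y to the pixel
    # hits; (l, m) are the direction cosines of the shower axis projected
    # on the ground, from which zenith and azimuth angles follow.
    if w is None:
        w = np.ones_like(t)
    A = np.column_stack([np.ones_like(x), x, y]) * np.sqrt(w)[:, None]
    b = C_M_PER_NS * t * np.sqrt(w)
    (ct0, l, m), *_ = np.linalg.lstsq(A, b, rcond=None)
    n = np.sqrt(max(0.0, 1.0 - l**2 - m**2))   # vertical direction cosine
    zenith = np.degrees(np.arccos(n))
    azimuth = np.degrees(np.arctan2(m, l))
    return zenith, azimuth
\end{verbatim}
The full reconstruction additionally applies the curvature correction discussed below, iterates the fit with residual cuts, and weights pixels according to the timing resolution expected for their photoelectron yield.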
Ultimately, the accuracy with which the primary particle's direction can be determined is related to the accuracy and total number of the relative arrival time measurements of the shower particles, \begin{equation} \sigma_{\theta} \propto \frac{\sigma_{t}}{\sqrt{\rho}}~, \end{equation} where $\sigma_t$ is the time resolution and $\rho$ is the density of independent detector elements sampling the shower front. The telescope must, therefore, be composed of elements that have fast timing ($\sigma_t$) and a minimum of cross-talk, since cross-talk can affect the shower front arrival time determinations. Once the detector area is larger than the typical lateral extent of air showers, thus providing an optimal lever arm, the angular resolution can be further improved by increasing the sampling density. To achieve ``shower limited'' resolution, individual detector elements should have a time response no larger than the fluctuations inherent in shower particle arrival times ($\leq10$ ns, see Figure\,\ref{converter}\,(c)); on the other hand, there is no gain if $\sigma_t$ is significantly smaller than the shower front fluctuations. In practice, fitting the shower plane is complicated by the fact that the shower particles undergo multiple scattering as they propagate to the ground, leading to a curvature of the shower front. This scattering delays the particle arrival times by $\cal{O}$(ns)/100\,m; however, the actual magnitude of the curvature is a function of the particle's distance from the core. Determination of the core position, and the subsequent application of a {\em curvature correction}, considerably improves the angular resolution by returning the lateral particle distribution to a plane which can then be reconstructed. Core location accuracy can be improved by increasing the sampling density of detector elements and the overall size of the detector itself. \section{Conceptual Design} To summarize the previous sections, an all-sky VHE telescope should satisfy the following design considerations: \begin{itemize} \item{$\sim\,100\,\%$ duty cycle ($T$)} \item{large effective area ($A_{eff}$)} \item{high altitude ($>$\,4000\,m)} \item{high sampling density} \item{fast timing} \item{imaging capability} \end{itemize} In the sections that follow, we study how a pixellated {\em scintillator-based} large-area detector with 100$\%$ active sampling performs as an all-sky monitor and survey instrument. Scintillator is used since it can provide excellent time resolution and has high sensitivity to charged particles, ultimately leading to improvements in angular resolution, energy threshold, and background rejection. To reduce detector cross-talk, improve timing, and enhance the imaging capabilities, the detector should be segmented into optically isolated pixels. This type of detector is easier to construct, operate, and maintain than other large-area instruments such as water- or gas-based telescopes; this is advantageous since the high altitude constraint is likely to limit potential telescope sites to remote locations. Many of the design goals are most effectively achieved by maximizing the number of detected air-shower particles. As discussed in Section\,\ref{sec_alt}, detector altitude is of primary importance; however, at the energies of interest here only about $10\,\%$ (Figure\,\ref{converter}\,(a,b)) of the particles reaching the detector level are charged. Thus, the number of detected particles can be increased dramatically by improving the sensitivity to the $\gamma$-ray component of showers.
A converter material (e.\,g. lead) on top of the scintillator converts photons into charged particles via Compton scattering and pair production, and, in addition, reduces the time spread of the shower front by filtering low energy particles which tend to trail the prompt shower front (Figure\,\ref{converter}\,(c)) and thus deteriorate the angular resolution. Figure\,\ref{converter}\,(d) shows the charged particle gain expected as a function of the converter thickness for lead, tin, and iron converters. The maximum gain is for a lead converter at $\sim$\,2 radiation lengths ($1\,{\mathrm r.\,l.}=0.56\,{\mathrm cm}$), but the gain function is rather steep below 1\,r.\,l. and flattens above. Because of the spectrum of secondary $\gamma$-rays reaching the detector, pair production is the dominant process contributing to the charged particle gain (Figure\,\ref{converter}\,(d)). \begin{figure} \epsfig{file=rmiller_fig5.eps,width=14.0cm} \caption{\label{converter} Mean number vs. primary energy (a) and energy distribution (b) of secondary $\gamma$'s and $e^{\pm}$ reaching 4500\,m observation altitude. (c) Integral shower particle arrival time distribution for particles within 40\,m distance to the core and various cuts on the particle energy (100\,keV, 1\,MeV, 10\,MeV). (d) Charged particle gain as a function of the converter thickness for lead, tin, and iron converters.} \end{figure} Techniques for reading out the light produced in scintillator-based detector elements have progressed in recent years with the development of large-area sampling calorimeters. Of particular interest is the work by the CDF collaboration on scintillating tiles \cite{bodek}; this technique utilizes fibers, doped with a wavelength shifter and embedded directly in the scintillator, to absorb the scintillation light and subsequently re-emit it at a longer wavelength. This re-emitted light is then coupled to photomultiplier tubes either directly or using a separate clear fiber-optic light guide. This highly efficient configuration is ideal for detecting minimum ionizing particles (MIPs), and produces 4 photoelectrons/MIP on average in a 5\,mm-thick scintillator tile. Using an array of tile/fiber detector elements, one can now consider a large-area detector that counts particles and is $\sim\,100\,\%$ active, and it is this paradigm that we discuss in more detail in the following sections. It should be noted that a scintillator-based air-shower detector is not a new idea; however, the use of the efficient tile/fiber configuration in a detector whose physical area is fully active pushes the traditional concept of an air-shower array to the extreme. \subsection{Air Shower and Detector Simulation} The backbone of a conceptual design study is the simulation code. Here the complete simulation of the detector response to air showers is done in two steps: 1) initial interaction of the primary particle (both $\gamma$-ray and proton primaries) with the atmosphere and the subsequent development of the air shower, and 2) detector response to air-shower particles reaching the detector level. The CORSIKA \cite{corsika} air shower simulation code, developed by the KASCADE \cite{kascade} group, provides a sophisticated simulation of the shower development in the Earth's atmosphere. In CORSIKA, electromagnetic interactions are simulated using the EGS\,4 \cite{egs} code. For hadronic interactions, several options are available.
A detailed study of the hadronic part and comparisons to existing data have been carried out by the CORSIKA group and are documented in \cite{corsika}. For the simulations discussed here, we use the VENUS \cite{venus} code for high energy hadronic interactions and GHEISHA \cite{gheisha} to treat low energy ($\le\,80\,{\mathrm GeV}$) hadronic interactions. The simulation of the detector itself is based on the GEANT \cite{geant321} package. The light yield of 0.5\,cm tile/fiber assemblies has been studied in detail in \cite{bodek} and \cite{barbaro}, and we adopt an average light yield of 4 photoelectrons per minimum ionizing particle. This includes attenuation losses in the optical fibers and the efficiency of the photomultiplier. Simulation parameters are summarized in Table\,\ref{tab_sim}. It should be noted that wavelength dependencies of the fiber attenuation length and of the photomultiplier quantum efficiency have not been included. \begin{table} \caption{\label{tab_sim} Basic parameters of the shower and detector simulation.} \vskip 0.5cm \small \centerline{ \begin{tabular}{|l|l|} \hline zenith angle range & $0^{\mathrm o}\le\theta\le45^{\mathrm o}$ \\ lower kinetic energy cuts & 0.1\,MeV ($e^{\pm},\,\gamma$) \\ & 0.1\,GeV ($\mu^{\pm}$) \\ & 0.3\,GeV (hadrons) \\ scintillator thickness & 0.5\,cm \\ lead converter thickness & 0.5\,cm \\ PMT transit time spread & 1\,ns (FWHM) \\ average light yield/MIP & 4 photoelectrons \\ \hline \end{tabular}} \end{table} \normalsize As shown in Section\,\ref{sec_alt}, only a detector at an altitude above 4000\,m can be expected to give the desired performance; however, we study the effect of three detector altitudes, 2500\,m ($764.3\,{\mathrm g}\,{\mathrm cm}^{-2}$), 3500\,m ($673.3\,{\mathrm g}\,{\mathrm cm}^{-2}$), and 4500\,m ($591.0\,{\mathrm g}\,{\mathrm cm}^{-2}$), for completeness. These altitudes span the range of both existing and planned all-sky VHE telescopes such as the Milagro detector \cite{milagro} near Los Alamos, New Mexico (2630\,m a.s.l.), and the ARGO-YBJ \cite{argo} detector proposed for the Yangbajing Cosmic Ray Laboratory in Tibet (4300\,m a.s.l.). \section{Detector Performance} In the remainder of this paper we study the expected performance of a detector based on the conceptual design discussed above. Although the canonical design is a detector with a geometric area of $150\times 150\,\mathrm{m}^{2}$, a detector design incorporating a $200\times 200\,\mathrm{m}^{2}$ area has also been analyzed in order to understand how telescope performance scales with area. Pixellation is achieved by covering the physical area of the detector with a mosaic of 5\,mm thick scintillator tiles, each covering an area of $1\times 1\,\mathrm{m}^{2}$. \subsection{Energy Threshold} The energy threshold of air-shower detectors is not well-defined. The trigger probability for a shower induced by a primary of fixed energy is not a step-function but instead rises rather slowly due to fluctuations in the first interaction height, shower development, core positions, and incident angles. Figure\,\ref{energy}\,(a) shows the trigger probability as a function of the primary $\gamma$-ray energy for three trigger conditions. Typically, the primary energy where the trigger probability reaches either $10\,\%$ or $50\,\%$ is defined as the energy threshold (see Figure\,\ref{energy}\,(a)). A large fraction of air showers that fulfill the trigger condition will have lower energies since VHE source spectra appear to be power-laws, $E^{-\alpha}$.
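Because the detected-energy distribution is the product of the trigger probability and the steeply falling source spectrum, a single number that summarizes it well is the median of that distribution; the following minimal sketch (Python/\texttt{numpy}, with a purely illustrative toy trigger curve rather than the simulated one) shows the calculation, which the next paragraph formalizes as the median energy $E_{med}$.
\begin{verbatim}
import numpy as np

def median_detected_energy(alpha, p_trig, e_min=0.05, e_max=30.0, n=4000):
    # Median energy (TeV) of triggered showers for a power-law source
    # dN/dE ~ E^-alpha folded with a trigger-probability curve p_trig(E).
    e = np.geomspace(e_min, e_max, n)
    rate = e ** (-alpha) * p_trig(e)        # differential detected rate
    cdf = np.cumsum(rate * np.gradient(e))
    cdf /= cdf[-1]
    return np.interp(0.5, cdf, e)

# Toy trigger curve rising from ~0 below 100 GeV to ~1 above a few TeV.
toy_trigger = lambda e: 1.0 / (1.0 + (0.5 / e) ** 2)
print(median_detected_energy(alpha=2.49, p_trig=toy_trigger))
\end{verbatim}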
A more meaningful indication of the energy threshold, then, is the {\it median} energy $E_{med}$; however, this measure depends on the spectral index of the source. For a source with spectral index $\alpha=2.49$ (Crab), Figure\,\ref{energy}\,(b, top) shows $E_{med}$ as a function of detector altitude (for fixed detector size). The median energy increases as altitude decreases since the number of particles reaching the detector level is reduced at lower altitudes. For a detector at 4500\,m a.s.l., $E_{med}$ is about 500\,GeV after imposing a 40 pixel trigger criterion. $E_{med}$ is not a strong function of the detector size (Figure\,\ref{energy}\,(b, bottom)). It is also noteworthy that a larger pixel size of $2\times 2\,\mathrm{m}^2$ instead of $1\times 1\,\mathrm{m}^2$ only slightly increases $E_{med}$; due to the lateral extent of air showers and the large average distances between particles, nearly $95\,\%$ of all showers with more than 50 $1\,\mathrm{m}^2$-pixels also have more than 50 $4\,\mathrm{m}^2$-pixels. \begin{figure} \epsfig{file=rmiller_fig6.eps,width=14.0cm} \caption{\label{energy} (a) Trigger efficiency as a function of the primary particles' energy for three trigger conditions (10, 40, 400 pixels). (b) Median energy of detected $\gamma$-showers as a function of the trigger condition (number of pixels) for three detector altitudes (top) and as a function of the detector size for a fixed altitude (4500\,m) (bottom).} \end{figure} \subsection{Background Rejection and Core Location} \label{sec_bg} Due to its pixellation and $100\%$ active area, the telescope described here can provide true images of the spatial distribution of secondary particles reaching the detector. Image analysis can take many forms; the method of wavelet transforms\,\cite{kaiser} is well suited for identifying and extracting localized image features. To identify localized high-density regions of particles, an image analysis technique that utilizes digital filters is used; the procedure is briefly summarized below while details are given in \cite{miller}. Proton- and $\gamma$-induced showers can be distinguished by counting the number of ``hot spots'', or peaks, in a shower image in an automated, unbiased way. These peaks are due to small sub-showers created by secondary particles and are more prevalent in hadronic showers than in $\gamma$-induced showers. To begin, the shower image (i.\,e. the spatial distribution of detected secondary particles) is convolved with a function that smooths the image over a predefined region or length scale (see Figure\,\ref{wavelet}\,(middle)). The smoothing function used in this analysis is the so-called ``Urban Sombrero''\footnote{Also known as the ``Mexican Hat'' function.} function: \begin{equation} g\left(\frac{r}{a}\right) = \left(2 - \frac{r^2}{a^2}\right) e^{-\frac{r^2}{2\,a^2}} \end{equation} where $r$ is the radial distance between the origin of the region being smoothed and an image pixel, and $a$ is the length scale over which the image is to be smoothed. This function is well suited for this analysis since it is a localized function having zero mean; therefore, image features analyzed at multiple scales $a$ will maintain their location in image space. A peak's maximum amplitude is found at the length scale $a$ corresponding to the actual spatial extent of the ``hot spot.'' Many peaks exist in these images; the key is to tag statistically significant peaks. To do this, the probability distribution of peak amplitudes must be derived from a random distribution of pixels; a minimal sketch of the smoothing step itself is given below.
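The sketch (Python with \texttt{numpy}/\texttt{scipy}; the significance calibration against random images, described next, is not repeated here) builds the kernel on the pixel grid and convolves the hit map with it:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def urban_sombrero_kernel(a, pixel=1.0, half_width=None):
    # 2-D "Urban Sombrero" kernel g(r/a) = (2 - r^2/a^2) exp(-r^2/2a^2),
    # sampled on the 1 m x 1 m pixel grid out to ~4 smoothing lengths.
    if half_width is None:
        half_width = int(4 * a / pixel)
    ax = np.arange(-half_width, half_width + 1) * pixel
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    return (2.0 - r2 / a**2) * np.exp(-r2 / (2.0 * a**2))

def smooth_image(hit_map, a=8.0):
    # Convolve the pixel hit map with the kernel at scale a (metres);
    # significant peaks are then counted in the thresholded result.
    return fftconvolve(hit_map, urban_sombrero_kernel(a), mode="same")
\end{verbatim}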
Using $2\times10^4$ events, each with a random spatial particle distribution, the probability of observing a given amplitude is computed. This is done for events with different pixel multiplicities and for different scale sizes. Results using only a single scale size of 8\,m are presented here. This scale size represents the optimum for the ensemble of showers; the scale size dependence as a function of pixel multiplicity is studied in \cite{miller}. In order to identify statistically significant peaks, a threshold is applied to the smoothed image. Peaks are eliminated if their amplitude is more probable than $6.3\times10^{-5}$, corresponding to a significance of less than $4\,\sigma$; the value of the threshold is chosen to maximize the background rejection. Figure\,\ref{wavelet}\,(bottom) shows the result of thresholding. After thresholding, the number of significant peaks is counted; if the number of peaks exceeds the mean number of peaks expected from $\gamma$-induced showers, the event is tagged as a ``proton-like'' shower and rejected. The number of expected peaks is energy dependent, starting at 1 peak (i.\,e. the shower core), on average, for $\gamma$-showers with fewer than 100 pixels and increasing with pixel multiplicity; proton-induced showers show a similar behavior except that the number of peaks rises faster with the number of pixels (see Figure\,\ref{peaks}). \begin{figure} \epsfig{file=rmiller_fig7.eps,width=14.0cm} \caption{\label{peaks} (a) Number of significant peaks for $\gamma-$ (top) and proton showers (bottom) as a function of pixel multiplicity. A random probability of $<\,6.3\times10^{-5}$ is used to define a significant peak. (b) Mean number of significant peaks for $\gamma-$ and proton showers as a function of pixel multiplicity.} \end{figure} Additional background rejection may be possible by using information such as the spatial distribution of peaks and the actual shape of individual peak regions. This is currently being investigated. The ability to map and analyze the spatial distributions of air shower particles implies that, in conjunction with analysis techniques such as the one described here, the large cosmic-ray induced backgrounds can be suppressed, thereby improving the sensitivity of a ground-based air-shower array; quantitative results on the use of this image analysis technique are described below and summarized in Figure\,\ref{quality}. Image analysis can also be used to identify and locate the shower core. Here the core is identified as the peak with the largest amplitude; this is reasonable assuming that the core typically represents a relatively large region of high particle density. Figure\,\ref{angle_res}\,(a) shows the accuracy of the core fit; other methods are less accurate and more susceptible to detector edge effects and local particle density fluctuations. The core location is used to correct for the curvature of the shower front and to veto events with cores outside the active detector area. Rejecting ``external'' events is beneficial as both angular resolution and background rejection capability are worse for events with cores off the detector. We define the outer 10\,m of the detector as a veto ring and restrict the analysis to events with fitted cores inside the remaining fiducial area ($130\times 130\,\mathrm{m}^{2}$, or $180\times 180\,\mathrm{m}^{2}$ for the $200\times 200\,\mathrm{m}^{2}$ design).
This cut identifies and keeps $94\,\%$ of the $\gamma$-showers with {\em true} cores within the fiducial area while vetoing $64\,\%$ of events with cores outside. It is important to note that the non-vetoed events are generally of higher quality (better angular resolution, improved $\gamma$/hadron-separation). In addition, $R_{\gamma}$ is smaller for external events than for internal ones due to the larger lateral spread of particles in proton showers; thus the veto cut actually improves the overall sensitivity even though the total effective area is decreased. \begin{figure} \epsfig{file=rmiller_fig8.eps,width=14.0cm} \caption{\label{angle_res} (a) Mean distance between reconstructed and true shower core location. (b) Mean angle between reconstructed and true shower direction for a detector without lead converter and with 0.5\,cm and 1.0\,cm lead.} \end{figure} \subsection{Angular Resolution} The shower direction is reconstructed using an iterative procedure that fits a plane to the arrival times of the pixels and minimizes $\chi^{2}$. Before fitting, the reconstructed core position (see Section\,\ref{sec_bg}) is used to apply a curvature correction to the shower front. In the fit, the pixels are weighted with $w(p)=1/\sigma^{2}(p)$, where $\sigma(p)$ is the RMS of the time residuals $t_{pixel}-t_{fit}$ for pixels with $p$ photoelectrons. $t_{fit}$ is the expected time according to the previous iteration. In order to minimize the effect of large time fluctuations in the shower particle arrival times, we reject pixels with $\left|t_{pixel}-t_{fit}\right|\ge\,10\,\mathrm{ns}$. In addition, only pixels within 80\,m of the shower core are included in the fit. Figure\,\ref{angle_res}\,(b) shows the mean difference between the fitted and the true shower direction as a function of the number of pixels for a detector without a lead converter and with 0.5\,cm and 1.0\,cm of lead, again indicating the benefits of the converter. The angular resolution does not improve considerably when the converter thickness is increased from 0.5\,cm to 1.0\,cm; thus 0.5\,cm is a reasonable compromise considering the tradeoff between cost and performance. \subsection{Sensitivity} \begin{figure} \epsfig{file=rmiller_fig9.eps,width=14.0cm} \caption{\label{sourcebin} (a) Significance for a one day observation of a Crab-like source for three trigger conditions (40, 120, 400 pixels) as a function of the source bin size. (b) Energy distribution of the detected $\gamma$-showers for the three trigger conditions. The source location is $\left|\delta-\lambda\right|\simeq\,5^{\mathrm{o}}$, where $\delta$ is the source declination and $\lambda$ is the latitude of the detector site.} \end{figure} \begin{figure} \epsfig{file=rmiller_fig10.eps,width=14.0cm} \caption{\label{quality} (a) Significance/day for a Crab-like source as a function of the trigger condition with and without $\gamma$/hadron-separation for the $150\times 150\,\mathrm{m}^{2}$ prototype. (b) Quality factor as a function of the trigger condition. (c) Sensitivity as a function of the source position $\left|\delta-\lambda\right|$.} \end{figure} The ultimate characteristic of a detector is its sensitivity to a known standard candle. In this section, the methods described so far are combined to estimate the overall point source sensitivity of a pixellated scintillation detector.
As indicated in Equation\,\ref{equation1}, the sensitivity of an air shower array depends on its angular resolution $\sigma_{\theta}$, its effective area $A_{eff}$, the trigger probabilities for source and background showers, and the quality factor of the $\gamma$/hadron-separation. However, as most of these parameters are functions of the primary energy, the sensitivity also depends on the spectrum of the cosmic ray background and the spectrum of the source itself. The significance $S$ therefore has to be calculated using \begin{equation} S=\frac{\int A^{eff}_{\gamma}(E)~\epsilon_{\gamma}(E) ~J_{\gamma}(E)~{\mathrm d}E~~f_{\gamma}~T} {\sqrt{\int A^{eff}_{p}(E)~(1\,-\,\epsilon_{p}(E)) ~J_{p}(E)~{\mathrm d}E~~\Delta\Omega~T}} \label{equation4} \end{equation} where $J_{\gamma}$ and $J_{p}$ are the photon and proton energy spectra, and $f_{\gamma}$ is the fraction of $\gamma$-showers fitted within the solid angle bin $\Delta\Omega=2\,\pi\,(1-\cos\theta)$. Other parameters have their standard meaning. The Crab Nebula is commonly treated as a standard candle in $\gamma$-ray astronomy; this allows the sensitivity of different telescopes to be compared. The differential spectrum of the Crab at TeV energies has been measured by the Whipple collaboration \cite{whipple_crab}: \begin{equation} J_{\gamma}(E) = (3.20\,\pm0.17\,\pm0.6)\times10^{-7}\, E_{\mathrm{TeV}}^{-2.49\,\pm0.06\,\pm0.04}\, \mathrm{m}^{-2}\,\mathrm{s}^{-1}\,\mathrm{TeV}^{-1}. \label{crab_rate} \end{equation} The sensitivity of the detector to a Crab-like source can be estimated using Equation\,\ref{equation4} and the differential proton background flux measured by the JACEE balloon experiment \cite{jacee}: \begin{equation} \frac{dJ_{p}(E)}{d\Omega}=(1.11^{+0.08}_{-0.06})\times10^{-1}\, E_{\mathrm{TeV}}^{-2.80\,\pm0.04}\, \mathrm{m}^{-2}\,\mathrm{sr}^{-1}\,\mathrm{s}^{-1}\, \mathrm{TeV}^{-1}. \label{bg_rate} \end{equation} Calculating $S$ using Equation\,\ref{equation4} is not straightforward since $\epsilon_{\gamma}$ is not a constant, but rather a function of the event size and core position and, therefore, of the angular reconstruction accuracy. This equation can be solved, however, with a Monte Carlo approach: using the Crab and cosmic-ray proton spectral indices, a pool of simulated $\gamma$- and proton showers ($\cal{O}$($10^{6}$) events of each particle type at each altitude) is generated with energies from 50\,GeV to 30\,TeV and with zenith angles $0^{\mathrm o}\le\theta\le45^{\mathrm o}$. A full Julian day source transit can be simulated for a given source bin size and declination by randomly choosing $\gamma$-ray and proton-induced showers from the simulated shower pools at rates given by Equations\,\ref{crab_rate} and~\ref{bg_rate}. The only constraint imposed on the events is that they have the same zenith angle as the source bin at the given time. The showers are then fully reconstructed; trigger and core location veto cuts, as well as $\gamma$/hadron-separation cuts, are applied, and $\gamma$-showers are required to fall into the source bin. This procedure produces distributions of pixel multiplicity, core position, etc. reflecting instrumental resolutions and responses. Because the angular resolution varies with the number of pixels, the optimal source bin size also varies.
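A minimal numerical sketch of Equation\,\ref{equation4} may make this procedure more concrete. The spectra below are the central values of Equations\,\ref{crab_rate} and \ref{bg_rate}; the effective-area and efficiency curves, the observation time, and the function names are placeholders for quantities that, in the analysis described above, are extracted from the simulated shower pools.
\begin{verbatim}
import numpy as np

def crab_flux(E):
    # Central value of the Whipple Crab spectrum: m^-2 s^-1 TeV^-1, E in TeV
    return 3.20e-7 * E ** -2.49

def proton_flux(E):
    # Central value of the JACEE proton flux: m^-2 sr^-1 s^-1 TeV^-1, E in TeV
    return 1.11e-1 * E ** -2.80

def significance(E, A_gamma, eps_gamma, A_p, eps_p,
                 f_gamma, theta_bin_deg, T):
    """Evaluate the significance formula on a common energy grid E [TeV].

    A_gamma, A_p  : effective areas for gamma- and proton-induced showers [m^2]
    eps_gamma     : fraction of gamma showers kept by the separation cuts
    eps_p         : fraction of proton showers rejected by the cuts
    f_gamma       : fraction of gamma showers reconstructed inside the source bin
    theta_bin_deg : source bin radius [deg];  T : observation time [s]
    """
    d_omega = 2.0 * np.pi * (1.0 - np.cos(np.radians(theta_bin_deg)))  # sr
    signal = np.trapz(A_gamma * eps_gamma * crab_flux(E), E) * f_gamma * T
    background = np.trapz(A_p * (1.0 - eps_p) * proton_flux(E), E) * d_omega * T
    return signal / np.sqrt(background)
\end{verbatim}
Scanning the source bin size at a fixed trigger condition corresponds to the optimization shown in Figure\,\ref{sourcebin}\,(a).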
Figure\,\ref{sourcebin}\,(a) shows the significance for a full-day observation of a Crab-like source with $\left|\delta-\lambda\right|\simeq\,5^{\mathrm{o}}$, where $\delta$ is the source declination and $\lambda$ is the latitude of the detector site, as a function of the source bin size for three trigger conditions. As the number of pixels increases, the optimal source bin size decreases from $0.6^{\mathrm{o}}$ (40 pixels) to $0.2^{\mathrm{o}}$ (400 pixels). Figure\,\ref{sourcebin}\,(b) shows how the energy distribution of detected $\gamma$-showers changes with trigger condition. The median energy for a 40 pixel trigger is 600\,GeV, with a substantial fraction of showers having energies below 200\,GeV. For a 120 pixel trigger, $E_{med}$ is 1\,TeV. For a 1 day Crab-like source transit, Figure\,\ref{quality}\,(a) shows how the significance varies as a function of the trigger condition with and without $\gamma$/hadron-separation. If no $\gamma$/hadron-separation is applied the sensitivity increases and then falls above 500 pixels because of the finite size of the detector. However, as shown in Section\,\ref{sec_bg}, above 500 pixels the quality factor of the $\gamma$/hadron-separation counterbalances the loss of area. Figure\,\ref{quality}\,(b) shows the quality factor derived solely from the ratio of source significance with and without separation. As expected, $Q$ increases dramatically with pixel number, leading to significances well above $3\,\sigma$ per day for energies above several TeV. The sensitivity also depends on the declination of the source. Results quoted so far refer to sources $\left|\delta-\lambda\right|\simeq\,5^{\mathrm{o}}$. Figure\,\ref{quality}\,(c) shows how the expected significance per source day transit changes with the source declination $\delta$. \begin{table}[ht] \caption{\label{table2} Significances $S$ for a 1 day observation of a Crab-like source with $\left|\delta-\lambda\right|\simeq\,5^{\mathrm{o}}$ for different altitudes and trigger conditions. $E_{med}$ is the median energy of the detected source particles, values in parentheses denote significances after $\gamma$/hadron separation.} \vskip 0.5cm \small \centerline{ \begin{tabular}{|c|c|| c c | c || c c | c |} \hline & & \multicolumn{3}{|c||}{$150\times 150\,\mathrm{m}^{2}$} & \multicolumn{3}{|c|}{$200\times 200\,\mathrm{m}^{2}$} \\ \hline altitude & trigger & \multicolumn{2}{|c|}{$S \left[\sigma\right]$} & log($E_{med}^{\mathrm{GeV}})$ & \multicolumn{2}{|c|}{$S \left[\sigma\right]$} & log($E_{med}^{\mathrm{GeV}})$ \\ \hline \hline 4500\,m & 40 & 1.3 & (1.3) & 2.8 & 1.8 & (1.8) & 2.8 \\ & 1000 & 1.6 & (2.5) & 3.8 & 1.9 & (2.7) & 3.8 \\ \hline 3500\,m & 40 & 0.9 & (0.9) & 3.1 & 1.2 & (1.2) & 3.0 \\ & 1000 & 0.8 & (1.2) & 4.1 & 1.3 & (1.7) & 4.0 \\ \hline 2500\,m & 40 & 0.4 & (0.4) & 3.3 & 0.6 & (0.6) & 3.2 \\ & 1000 & 0.5 & (0.6) & 4.3 & 0.8 & (1.1) & 4.2 \\ \hline \end{tabular}} \end{table} \normalsize \begin{table}[ht] \caption{\label{table3} Expected rates [kHz] for a $150\times 150\,\mathrm{m}^2$ detector at different altitudes. 
Cores are randomly distributed over $300\times 300\,\mathrm{m}^{2}$ and no core veto cut is applied.} \vskip 0.5cm \small \centerline{ \begin{tabular}{|l||c|c|c|} \hline trigger & 4500\,m & 3500\,m & 2500\,m \\ \hline \hline 10 & 34.5 & 18.9 & 10.5 \\ 40 & 6.7 & 3.6 & 2.1 \\ 100 & 1.8 & 1.0 & 0.6 \\ 400 & 0.2 & 0.2 & 0.1 \\ 1000 & 0.05 & 0.04 & 0.02 \\ \hline \end{tabular}} \end{table} \normalsize Table\,\ref{table2} summarizes the dependence of the detector performance on the size and the altitude of the detector. Significance scales with $\sqrt{A_{eff}}$ as expected from Equation\,\ref{equation1}. Detector altitude, however, is more critical. Although at 2500\,m $10\,\sigma$ detections of a steady Crab-like source per year are possible, it is only at 4000\,m altitudes where the sensitivity is sufficient to detect statistically significant daily variations of source emissions. It is noteworthy that for the canonical design at 4500\,m, a trigger condition as low as 10 pixels still produces $1.2\,\sigma$ per day at median energies of about 280\,GeV corresponding to an event rate of 34.5\,kHz. Predicted event rates for different trigger conditions at various altitudes are summarized in Table\,\ref{table3}. Sustained event rates below $\sim$\,10\,kHz are achievable with off-the-shelf data acquisition electronics; higher rates may also be possible. The event rates estimated here are relatively low compared to the rate of single cosmic-ray muons; because of the optical isolation and low-cross talk of individual detector elements single muons are unlikely to trigger the detector even with a low pixel multiplicity trigger condition. \section{Conclusion} To fully develop the field of VHE $\gamma$-ray astronomy a new instrument is required - one capable of continuously monitoring the sky for VHE transient emission and performing a sensitive systematic survey for steady sources. To achieve these goals such an instrument must have a wide field of view, $\sim\,100\,\%$ duty cycle, a low energy threshold, and background rejection capabilities. Combining these features we have shown that a detector composed of individual scintillator-based pixels and 100$\%$ active area provides high sensitivity at energies from 100\,GeV to beyond 10\,TeV. Detailed simulations indicate that a source with the intensity of the Crab Nebula would be observed with an energy dependent significance exceeding $\sim\,3\,\sigma$/day. AGN flares, or other transient phenomena, could be detected on timescales $\ll$1 day depending on their intensity - providing a true VHE transient all-sky monitor. A conservative estimate of the sensitivity of a detector like the one described here (the PIXIE telescope) is shown in Figure\,\ref{sensi_comp} compared to current and future ground- and space-based experiments. The plot shows the sensitivities for both 50 hour and 1 year source exposures, relevant for transient and quiescent sources, respectively. A detector based on the PIXIE design improves upon first-generation detector concepts, such as Milagro, in two principal ways: fast timing and spatial mapping of air shower particles. The sensitivity of a sky map produced by this detector in 1 year reaches the flux sensitivity of current air-Cherenkov telescopes (for a 50 hour exposure), making the detection of AGN in their quiescent states possible. \begin{figure} \epsfig{file=rmiller_fig11.eps,width=14.0cm} \caption{\label{sensi_comp} Predicted sensitivity of some proposed and operational ground-based telescopes. 
The dashed and dotted lines show the predicted sensitivity of the telescope described here (PIXIE) at an altitude $>4000$\,m. The numbers are based on a $5\,\sigma$ detection for the given exposure on a single source. EGRET and GLAST sensitivities are for 1 month of all-sky survey. The ARGO sensitivity is taken from \cite{argo}, all others from \cite{glast_proposal}. Information required to extrapolate the ARGO sensitivity to higher energies is not given in \cite{argo}.} \end{figure} The cost for a detector based on the conceptual design outlined is estimated conservatively at between \$\,500 and \$\,1000 per pixel; the cost of scintillator is the dominant factor. A proposal to perform a detailed detector design study (evaluation of detector materials, data acquisition prototyping, and investigation of construction techniques) leading to a final design is currently pending. Although unlikely to surpass the sensitivity of air-Cherenkov telescopes for detailed single source observations, a sensitive all-sky VHE telescope could continuously monitor the observable sky at VHE energies. In summary, non-optical ground-based VHE astronomy {\em is} viable, and the development of an all-sky VHE telescope with sensitivity approaching that of the existing narrow field of view air-Cherenkov telescopes, will contribute to the continuing evolution of VHE astronomy. \begin{ack} We thank the authors of CORSIKA for providing us with the simulation code; we also acknowledge D.G. Coyne, C.M. Hoffman, J.M. Ryan, and D.A. Williams for their useful comments. This research is supported in part by the U.S. Department of Energy Office of High Energy Physics, the U.S. Department of Energy Office of Nuclear Physics, the University of California (RSM), and the National Science Foundation (SW). \end{ack}
\section{Introduction} The eleven-dimensional interpretation of the massive Type IIA supergravity of Romans \cite{Romans} has attracted a lot of attention recently. It was shown in \cite{BDHS} that it is not possible to construct a covariant supergravity theory in eleven dimensions with a cosmological constant. One might assume that the mass arises in the Type IIA theory from a dimensional reduction procedure \`a la Scherk-Schwarz; however, the theory obtained through this kind of construction in \cite{HLW} is not Romans' supergravity but a different massive supergravity for which there is no action. A massive eleven-dimensional supergravity was proposed in \cite{BLO} with the peculiarity that it can only be formulated when the eleventh direction is compact, with Lorentz invariance holding in the remaining ten dimensions. This feature circumvents the no-go theorem of \cite{BDHS}. The explicit eleven-dimensional supergravity action depends on the Killing vector associated to translations along the eleventh coordinate, and gives the massive Type IIA supergravity action after a direct dimensional reduction along this direction. More recently, a proposal for massive M-theory has been given in \cite{Hull3}, based on the connection between M-theory and Type IIB and on the relation between the Scherk-Schwarz reduction of Type IIB and the reduction of massive Type IIA. It remains an interesting open problem to make contact with the description of massive eleven-dimensional supergravity with a Killing vector. M-branes propagating in a massive background were constructed in \cite{LO,BLO} and referred to generically as massive M-branes. Their effective actions are described by gauged sigma-models in which the Killing isometry is gauged. They also contain new couplings proportional to the mass which can be interpreted in terms of ``massive'' solitons. Furthermore, two massive branes in the Type IIA theory can be obtained from a single massive M-brane \cite{BLO}. They are obtained as either a direct or a double dimensional reduction of the massive M-brane effective action along the space-time coordinate where the isometry is realized. In this paper we will construct the effective action of the massive Type IIA KK-monopole. We will follow two approaches. First of all, we will obtain the action of the IIA KK-monopole by performing a double dimensional reduction of the effective action of the massive eleven-dimensional KK-monopole. As explained in \cite{BEL}, the construction of a massive KK-monopole in eleven dimensions is more subtle than that of an ordinary brane, since the monopole is already described by a gauged sigma-model in the massless case. The massive M-KK-monopole giving rise to the massive D6-brane after a direct dimensional reduction was constructed in \cite{BEL}. There it was shown that, in order to assure invariance under massive gauge transformations, the gauge field associated to the Taub-NUT isometry had to transform proportionally to the mass. Also, new couplings to the Born-Infeld field had to be introduced. In this eleven-dimensional massive monopole the ``mass isometry direction'' coincides with the Taub-NUT direction, and the dimensionally reduced action gives the massive D6-brane. However, in order to obtain a ten-dimensional KK-monopole we are not interested in eliminating the isometry in the Taub-NUT direction once a double dimensional reduction is performed.
Therefore we first have to find the worldvolume action of a more general M-KK-monopole in which the invariance under massive transformations is achieved by gauging an isometry other than the Taub-NUT isometry. Double dimensional reduction along this new isometry direction will give rise to the action of the massive IIA KK-monopole (see Figure 1). We present these actions in sections \ref{massive-MKK}, \ref{MKK-->IIAKK} and \ref{mIIAKK}. \vskip 12pt \begin{figure}[!ht] \begin{center} \leavevmode \epsfxsize=13.5cm \epsfysize=4cm \epsffile{cuadro3.eps} \caption{\small {\bf Dimensional reductions of the (massive) M-KK-monopole.} In this figure we display the reductions of the massless M-KK-monopole and its two possible massive extensions. The massless M-KK-monopole is described by a gauged sigma-model with Killing vector ${\hat k}$. Reducing along a worldvolume coordinate gives rise to the Type IIA KK-monopole, the reduction along ${\hat k}$ gives the D6-brane and the reduction along a transversal coordinate different than ${\hat k}$ gives a 6-brane (KK-6A) \cite{U1,U2,U3,MO}, described by a gauged sigma-model. The KK-6A brane is not associated to a central charge in the Type IIA supersymmetry algebra \cite{Hull}. There are two possible massive extensions depending on whether the {\it massive} isometry, ${\hat h}$, is chosen to be ${\hat h}={\hat k}$ or ${\hat h} \neq {\hat k}$, such that reducing along ${\hat h}$ gives a massive Type IIA brane. When ${\hat h}={\hat k}$ we obtain a massive D6-brane. When ${\hat h}\neq {\hat k}$ we obtain a massive IIA KK-monopole (KK-5A in the Figure), if ${\hat h}$ lies in a worldvolume direction of the M-KK-monopole; or a massive KK-6A brane, if ${\hat h}$ lies in a transversal direction.} \end{center} \end{figure} An alternative way to derive the action of the massive IIA KK-monopole is to start with the action of the IIB NS-5-brane constructed in \cite{EJL} and perform a (massive) T-duality transformation. The massive T-duality rules for the target space (dual) potentials that couple to the 5-brane are given in Appendix A. We present the details of the calculation of the massive IIA KK-monopole action by this procedure in section \ref{IIBNS5-->IIAKK}. The result coincides with that given in section \ref{mIIAKK} following the double dimensional reduction approach (see Figure 2). \vskip 12pt \begin{figure}[!ht] \begin{center} \leavevmode \epsfxsize= 9cm \epsfysize= 4cm \epsffile{cuadro1.eps} \caption{\small {\bf Derivation of the massive IIA KK-monopole.} In this Figure we show the two procedures that we have followed to obtain the massive IIA KK-monopole. In the first one we perform a double dimensional reduction along the ${\hat h}$ direction of the massive M-KK-monopole with two gauged isometries, the Taub-NUT ${\hat k}$ and ${\hat h}$. In the second case, we obtain the same massive IIA KK-monopole via a massive T-duality transformation on the IIB NS-5-brane effective action.} \label{fig:cuadro1} \end{center} \end{figure} \section{The Massive M-KK-monopole} \label{massive-MKK} Let us start by recalling the action of the massless M-KK-monopole constructed in \cite{BJO} \footnote{For some recent work on KK-monopoles see \cite{varios}.}. The KK-monopole in eleven dimensions behaves like a 6-brane, and its field content is that of the 7-dimensional vector multiplet, involving 3 scalars and 1 vector. 
Since the embedding coordinates describe $11-7=4$ degrees of freedom one scalar has to be eliminated by gauging an isometry of the background\footnote{For constructions of gauged sigma models with WZ term in arbitrary dimensions see \cite{HS}.}. The Taub-NUT space of the monopole is isometric in its Taub-NUT direction. Let us denote ${\hat k}$ the Killing vector associated to this isometry: \begin{equation} \delta {\hat X}^{\hat \mu}=-{\hat \sigma}^{(0)}{\hat k}^{\hat \mu}\, , \end{equation} \noindent such that the Lie derivatives of all target space fields and gauge parameters with respect to ${\hat k}$ vanish. The effective action of the monopole is constructed by replacing ordinary derivatives by covariant derivatives: \begin{equation} \partial_{\hat \imath}{\hat X}^{\hat \mu}\rightarrow D_{\hat \imath} {\hat X}^{\hat \mu}=\partial_{\hat \imath}{\hat X}^{\hat \mu}+ {\hat A}_{\hat \imath}{\hat k}^{\hat \mu}\, , \end{equation} \noindent with ${\hat A}_{\hat \imath}$ a dependent gauge field given by: \begin{equation} {\hat A}_{\hat \imath}=|{\hat k}|^{-2}\partial_{\hat \imath} {\hat X}^{\hat \mu}{\hat k}_{\hat \mu}\, , \end{equation} \noindent where $|{\hat k}|^2=-{\hat k}^{\hat \mu} {\hat k}^{\hat \nu}{\hat g}_{{\hat \mu}{\hat \nu}}$. Accordingly, the gauge transformation of ${\hat A}_{\hat \imath}$ is: \begin{equation} \delta {\hat A}_{\hat \imath}=\partial_{\hat \imath} {\hat \sigma}^{(0)}\, . \end{equation} \noindent In this way the effective metric becomes: \begin{equation} {\hat g}_{{\hat \mu}{\hat \nu}}D_{\hat \imath} {\hat X}^{\hat \mu}D_{\hat \jmath}{\hat X}^{\hat \nu} ={\hat \Pi}_{{\hat \mu}{\hat \nu}}\partial_{\hat \imath} {\hat X}^{\hat \mu}\partial_{\hat \jmath} {\hat X}^{\hat \nu}\, , \end{equation} \noindent with \begin{equation} {\hat \Pi}_{{\hat \mu}{\hat \nu}}= {\hat g}_{{\hat \mu}{\hat \nu}}+|{\hat k}|^{-2} {\hat k}_{\hat \mu}{\hat k}_{\hat \nu}\, . \end{equation} \noindent Since ${\hat \Pi}_{{\hat \mu}{\hat \nu}} {\hat k}^{\hat \nu}=0$ the isometry direction is effectively eliminated from the action. The construction of the ${\hat \sigma}^{(0)}$--invariant WZ term has been given in \cite{BEL}. We summarize the target space and worldvolume field content in Tables \ref{table1} and \ref{table2}. \begin{table}[!ht] \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|c|c|} \hline Target space & Gauge \\ Field & Parameter \\ \hline\hline ${\hat g}_{{\hat \mu}{\hat \nu}}$ & \\ \hline ${\hat C}_{{\hat \mu}{\hat \nu}{\hat \rho}}$ & ${\hat \chi}_{{\hat \mu}{\hat \nu}}$ \\ \hline $(i_{\hat k} {\hat C})_{{\hat \mu}{\hat \nu}}$& $(i_{\hat k} {\hat \chi})_{{\hat \mu}}$\\ \hline ${\hat {\tilde C}}_{{\hat \mu}_1 \dots {\hat \mu}_6}$ & ${\hat {\tilde \chi}}_{{\hat \mu}_1 \dots {\hat \mu}_5}$ \\ \hline $({i}_{\hat k} {\hat {\tilde C}})_{{\hat \mu}_1 \dots {\hat \mu}_5}$ & $({i}_{\hat k} {\hat {\tilde \chi}})_{{\hat \mu}_1 \dots {\hat \mu}_4}$ \\ \hline ${\hat N}_{{\hat \mu}_1 \dots {\hat \mu}_8}$ & ${\hat \Omega}_{{\hat \mu}_1 \dots {\hat \mu}_7}$ \\ \hline $({i}_{\hat k} {\hat {\tilde N}})_{{\hat \mu}_1 \dots {\hat \mu}_7}$ & $(i_{\hat k} {\hat \Omega})_{{\hat \mu}_1 \dots {\hat \mu}_6}$\\ \hline \end{tabular} \end{center} \caption{\label{table1} \small {\bf Target space fields in the M-KK-monopole.} This table shows the 11-dimensional target space fields that couple to the M-KK-monopole, together with their gauge parameters. We also include the contractions with the Killing vector ${\hat k}^{\hat \mu}$. 
The field ${\hat {\tilde C}}$ is the Poincar\'e dual of ${\hat C}$, the 3-form of eleven-dimensional supergravity, and ${\hat N}$ is the Poincar\'e dual of the Killing vector, considered as a 1-form ${\hat k}_{\hat \mu}$. } \renewcommand{\arraystretch}{1} \end{table} \begin{table}[h] \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Worldvolume & $\sharp$ of \\ Field & d.o.f \\ \hline\hline ${\hat X}^{{\hat \mu}}$ & $11 - 7 - (1) = 3$ \\ \hline $ {\hat \omega}^{(1)}_{{\hat \imath}}$ & $7 - 2 = 5$\\ \hline ${\hat \omega}^{(6)}_{{\hat \imath}_1 \dots {\hat \imath}_6}$ & $-$ \\ \hline \end{tabular} \end{center} \caption{\small \label{table2} {\bf Worldvolume fields.} In this table we summarize the worldvolume fields, together with their degrees of freedom, that occur in the worldvolume action of the M--theory KK--monopole. The worldvolume scalars ${\hat X}^{\hat \mu}$ are the embedding coordinates, ${\hat \omega}^{(1)}$ is a 1-form and ${\hat \omega}^{(6)}$ is a non propagating 6-form that describes the tension of the monopole. Due to the gauging the embedding scalars describe 3 and not 4 degrees of freedom as indicated in the table. } \renewcommand{\arraystretch}{1} \end{table} The resulting effective action gives the D6-brane of the Type IIA theory after a direct dimensional reduction along the Taub-NUT direction \cite{BJO}. In \cite{BEL} the effective action of the M-KK-monopole giving rise to the D6-brane of the massive Type IIA theory was also constructed. As discussed in \cite{BLO} eleven-dimensional massive branes are described by gauged sigma models with gauge coupling constant proportional to $m$. The massive D6-brane is obtained by reducing the massive KK-monopole in which the two Killing vectors associated to the mass and the Taub-NUT isometries coincide. The gauge field ${\hat A}$ must be attributed mass transformation rules and extra terms need to be added to the WZ part. As we mentioned in the Introduction this massive M-KK-monopole cannot give rise to the massive Type IIA KK-monopole after a double dimensional reduction, since the gauged isometry disappears in the reduction. However it is possible to construct a massive M-KK-monopole in which the two isometry directions associated to the mass and the Taub-NUT space are different. Double dimensionally reducing along the direction associated to the mass will give rise to the effective action of the massive IIA KK-monopole. It was shown in \cite{BLO} that a massive brane in eleven dimensions is obtained by gauging an isometry generated by a Killing vector ${\hat h}$: \begin{equation} \delta_{{\hat \rho}^{(0)}}{\hat X}^{\hat \mu}=\frac{m}{2} l_p^2 \, {\hat \rho}^{(0)}{\hat h}^{\hat \mu}\, , \end{equation} \noindent through the introduction of a gauge field ${\hat b}_{\hat \imath}$ transforming as\footnote{Here $l_p$ is the eleven-dimensional Planck length.}: \begin{equation} \delta {\hat b}_{\hat \imath}=\partial_{\hat \imath} {\hat \rho}^{(0)}- l_p^{-2} \, (i_{\hat h}{\hat \chi})_{\hat \imath}\, , \end{equation} \noindent where $(i_{\hat h}{\hat \chi})$ is (the pull-back of) the interior product of the gauge parameter of the eleven-dimensional 3-form ${\hat C}$ with the Killing vector ${\hat h}$. 
Substituting ordinary derivatives by covariant derivatives \begin{equation} {\cal D}_{\hat \imath}{\hat X}^{\hat \mu}=\partial_{\hat \imath} {\hat X}^{\hat \mu}-\frac{m}{2} l_p^2 \, {\hat b}_{\hat \imath}{\hat h}^{\hat \mu} \end{equation} \noindent one assures invariance under massive transformations: \begin{equation} \label{massivetr} \delta_{\hat \chi}{\hat L}_{{\hat \mu}_1\dots {\hat \mu}_r}= r\frac{m}{2} (-1)^r (i_{\hat h}{\hat \chi})_{[{\hat \mu}_1} (i_{\hat h}{\hat L})_{{\hat \mu}_2\dots {\hat \mu}_r]}\, , \end{equation} \noindent for a rank $r$ 11 dim form ${\hat L}$, and \begin{equation} \delta_{\hat \chi}{\hat g}_{{\hat \mu}{\hat \nu}}=-m (i_{\hat h}{\hat \chi})_{({\hat \mu}} (i_{\hat h} {\hat g})_{{\hat \nu})}\, , \end{equation} \noindent for the 11 dim metric. These transformations give rise to the known massive transformations of the Type IIA background fields after dimensional reduction. Together with the gauging it is necessary to include additional ${\hat b}$--dependent terms (proportional to the mass) to achieve invariance under massive transformations. This suggests that we should construct the massive M-KK-monopole by substituting: \begin{equation} {\hat \Pi}_{{\hat \mu}{\hat \nu}}\partial_{\hat \imath} {\hat X}^{\hat \mu}\partial_{\hat \jmath}{\hat X}^{\hat \nu} \rightarrow {\hat \Pi}_{{\hat \mu}{\hat \nu}} {\cal D}_{\hat \imath}{\hat X}^{\hat \mu}{\cal D}_{\hat \jmath} {\hat X}^{\hat \nu}\, , \end{equation} \noindent with ${\cal D}{\hat X}$ as defined above. This can also be written as: \begin{equation} {\hat \Pi}_{{\hat \mu}{\hat \nu}}{\cal D}_{\hat \imath} {\hat X}^{\hat \mu}{\cal D}_{\hat \jmath}{\hat X}^{\hat \nu} ={\hat g}_{{\hat \mu}{\hat \nu}}D_{\hat \imath} {\hat X}^{\hat \mu}D_{\hat \jmath}{\hat X}^{\hat \nu}\, , \end{equation} \noindent with \begin{equation} \label{covdev} D_{\hat \imath}{\hat X}^{\hat \mu}\equiv \partial_{\hat \imath}{\hat X}^{\hat \mu} +{\hat A}_{\hat \imath}{\hat k}^{\hat \mu} -\frac{m}{2}l_p^2 \, {\hat b}_{\hat \imath} {\hat h}^{\hat \mu}\, , \end{equation} \noindent and: \begin{equation} \label{Atrans} {\hat A}_{\hat \imath}=|{\hat k}|^{-2}{\hat k}_{\hat \mu} \left( \partial_{\hat \imath}{\hat X}^{\hat \mu}-\frac{m}{2} \right. \left. l_p^2 \, {\hat b}_{\hat \imath}{\hat h}^{\hat \mu}\right)\, . \end{equation} \noindent ${\hat \Pi}_{{\hat \mu}{\hat \nu}}$ transforms as a metric under massive transformations iff \begin{equation} \label{vectortr} \delta {\hat k}_{\hat \mu}=-\frac{m}{2} (i_{\hat h}{\hat \chi})_{\hat \mu}(i_{\hat h}{\hat k})\, , \end{equation} \noindent which implies that the Killing vector associated to the Taub-NUT isometry must be attributed a massive transformation: \begin{equation} \label{ktrans} \delta {\hat k}^{\hat \mu}=\frac{m}{2}(i_{\hat k}i_{\hat h} {\hat \chi}){\hat h}^{\hat \mu}\, . \end{equation} \noindent The transformation rule (\ref{vectortr}) is that of a vector under massive transformations (see (\ref{massivetr})). We showed in \cite{BEL} that ${\hat k}_{\hat \mu}$ had to be considered as a target space 1-form in order to construct the action of the 11 dim KK-monopole. In particular, the monopole is charged with respect to its dual 8-form\footnote{To be precise, with respect to its interior product with the Killing vector ${\hat k}$.}. We see here that this is also the case regarding massive transformations. With this transformation rule the interior product of ${\hat k}$ with any eleven-dimensional $r$-form transforms according to (\ref{massivetr}) (see Appendix B). 
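As a consistency check (our own, using the conventions above, with $(i_{\hat h}{\hat g})_{\hat \mu}={\hat h}_{\hat \mu}$ and the symmetrization normalized as $A_{({\hat \mu}}B_{{\hat \nu})}=\frac12(A_{\hat \mu}B_{\hat \nu}+A_{\hat \nu}B_{\hat \mu})$), the rules (\ref{vectortr}) and (\ref{ktrans}) are indeed related by lowering the index with the massive variation of the metric:
\begin{eqnarray}
\delta {\hat k}_{\hat \mu}&=&\delta {\hat g}_{{\hat \mu}{\hat \nu}}\,{\hat k}^{\hat \nu}
+{\hat g}_{{\hat \mu}{\hat \nu}}\,\delta {\hat k}^{\hat \nu}
=-\frac{m}{2}\left[(i_{\hat h}{\hat \chi})_{\hat \mu}\,(i_{\hat h}{\hat k})
+(i_{\hat k}i_{\hat h}{\hat \chi})\,{\hat h}_{\hat \mu}\right]
+\frac{m}{2}(i_{\hat k}i_{\hat h}{\hat \chi})\,{\hat h}_{\hat \mu}
\nonumber\\
&=&-\frac{m}{2}(i_{\hat h}{\hat \chi})_{\hat \mu}\,(i_{\hat h}{\hat k})\,.
\nonumber
\end{eqnarray}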
The fact that ${\hat k}^{\hat \mu}$ transforms under massive transformations implies that the Killing condition and the massive transformations do not commute\footnote{They commute if $(i_{\hat k}i_{\hat h}{\hat \chi})=0$, which is not the most general case. Note, however, that if in ten dimensions we set the component of the RR 1-form along the Taub-NUT direction to zero, which can always be done since $C^{(1)}$ is a non-physical Stueckelberg field \cite{Romans,BRGPT}, then this condition is satisfied.}. In the system of adapted coordinates to the isometry generated by ${\hat h}$: ${\hat h}^{\hat \mu}=\delta^{\hat \mu}{}_y$, we can define: \begin{equation} {\hat k}^y=\frac{m}{2} l_p^2 \, {\hat \omega}^{(0)}\, , \end{equation} \noindent with \begin{equation} \delta {\hat \omega}^{(0)}=\frac{1}{2\pi\alpha^\prime} (i_{\hat k}i_{\hat h}{\hat \chi})\, . \end{equation} \noindent In the reduction to ten dimensions ${\hat k}^{\mu}$ (with ${\mu}$ a ten dimensional index) will be the Killing vector generating translations along the Taub-NUT direction, and ${\hat \omega}^{(0)}$ an extra worldvolume scalar needed to compensate for certain massive transformations. It is easy to check that the invariance under ${\hat \sigma}^{(0)}$ is preserved\footnote{In the kinetic term. We will comment later on the WZ term.} if the ${\hat b}$ field transforms as: \begin{equation} \label{btrans} \delta {\hat b}_{\hat \imath}= \partial_{\hat \imath} {\hat \rho}^{(0)} -{\hat \sigma}^{(0)}\partial_{\hat \imath} {\hat \omega}^{(0)} - l_p^{-2} \, (i_{\hat h}{\hat \chi})_{\hat \imath}\, . \end{equation} The gauge transformation rule of ${\hat A}_{\hat \imath}$ (given by (\ref{Atrans})) is still $\delta {\hat A}_{\hat \imath}= \partial_{\hat \imath}{\hat \sigma}^{(0)}$, as in the massless case. The action that we propose for the massive M-theory KK-monopole is the following: \begin{eqnarray} \label{MKKaction} {\hat S}&=&-T_{{\rm mMKK}}\int d^7 {\hat \xi} {\hat k}^2\sqrt{|{\rm det} (D_{\hat \imath}{\hat X}^{\hat \mu}D_{\hat \jmath} {\hat X}^{\hat \nu}{\hat g}_{{\hat \mu}{\hat \nu}}+ l_p^2 |{\hat k}|^{-1}{\hat {\cal K}}^{(2)}_{{\hat \imath} {\hat \jmath}})|} \nonumber \\ &&+\,\, \frac{1}{7!} l_p^2 \, T_{{\rm mMKK}}\int d^7 {\hat \xi} \,\, \varepsilon^{{\hat \imath}_1\dots{\hat \imath}_7} {\hat {\cal K}}^{(7)}_{{\hat \imath}_1\dots {\hat \imath}_7}\, . \end{eqnarray} \noindent The covariant derivatives are defined by (\ref{covdev}) and ${\hat {\cal K}}^{(2)}$ is the massive field strength of the worldvolume field ${\hat \omega}^{(1)}$: \begin{equation} {\hat {\cal K}}^{(2)}=2\partial {\hat \omega}^{(1)}+ l_p^{-2} \, D{\hat X}^{\hat \mu}D {\hat X}^{\hat \nu}(i_{\hat k}{\hat C})_{{\hat \mu}{\hat \nu}} -m l_p^2 \, \partial{\hat \omega}^{(0)}{\hat b} \, . \end{equation} \noindent Finally, ${\hat {\cal K}}^{(7)}$ is given by: \begin{equation} \begin{array}{rcl} {\hat {\cal K}}^{(7)} &=& 7 \left\{ \partial {\hat \omega}^{(6)} +m {\hat \omega}^{(7)} -\frac32 l_p^2 \, m{\hat d}^{(5)} \left(2\partial{\hat\omega}^{(1)} -ml_p^2 \,\partial {\hat \omega}^{(0)}{\hat b} \right) \right. \\ & & \\ & & \left. - {1 \over 7} l_p^{-2} \, D {\hat X}^{{\hat \mu}_1} \dots D {\hat X}^{{\hat \mu}_7} (i_{\hat k} {\hat N})_{{\hat \mu}_1 \dots {\hat \mu}_7} +3 D {\hat X}^{{\hat \mu}_1} \dots D {\hat X}^{{\hat \mu}_5} (i_{\hat k}{\hat {\tilde C}})_{{\hat \mu}_1 \dots {\hat \mu}_5} {\hat {\cal K}}^{(2)}\right. \\ & & \\ & & \left. 
- {5} l_p^{-2} \, D {\hat X}^{{\hat \mu}_1} \dots D {\hat X}^{{\hat \mu}_7} {\hat C}_{{\hat \mu}_1 \dots {\hat \mu}_3} ({i}_{\hat k} {\hat C})_{{\hat \mu}_4 {\hat \mu}_5} ({i}_{\hat k} {\hat C})_{{\hat \mu}_6 {\hat \mu}_7} \right.\\ & &\\ & & \left. -15 D {\hat X}^{{\hat \mu}_1} \dots D {\hat X}^{{\hat \mu}_5} {\hat C}_{{\hat \mu}_1 \dots {\hat \mu}_3} ({i}_{\hat k} {\hat C})_{{\hat \mu}_4 {\hat \mu}_5} \left( 2\partial {\hat \omega}^{(1)} -m l_p^2 \, \partial{\hat \omega}^{(0)}{\hat b}\right)\right. \\ & & \\ & & \left. -60 l_p^2 \, D {\hat X}^{{\hat \mu}_1} \dots D {\hat X}^{{\hat \mu}_3} {\hat C}_{{\hat \mu}_1 \dots {\hat \mu}_3}\partial {\hat \omega}^{(1)} \left( \partial {\hat \omega}^{(1)}- m l_p^2 \, \partial {\hat \omega}^{(0)}{\hat b} \right)\right.\\ & &\\ & & \left. -60 l_p^4 \, {\hat A} \left( 2\partial {\hat \omega}^{(1)} -3m l_p^2 \, \partial{\hat \omega}^{(0)}{\hat b} \right) \partial {\hat \omega}^{(1)}\partial {\hat \omega}^{(1)} \right\} \, . \\ \end{array} \end{equation} \noindent This action is invariant under the gauge transformations given in Appendices B and \ref{massiveMKK}. Invariance under ${\hat \rho}^{(0)}$ transformations is assured by the presence of covariant derivatives. It can also be seen that with the transformation rule (\ref{btrans}) for the ${\hat b}$ field the action is invariant under ${\hat \sigma}^{(0)}$ transformations. Notice that we have introduced a new auxiliary field, ${\hat d}^{(5)}$, associated to the dual massive transformations with parameter $(i_{\hat h}{\hat \Sigma})$ (see \cite{BLO}). This field transforms proportionally to $(i_{\hat k}i_{\hat h} {\hat \Sigma})$ (see Appendix C.1), since only contractions with the Taub-NUT Killing vector need to be cancelled in the KK-monopole action. Moreover, ${\hat b}$ is a non-propagating field in this action, playing the role of gauge field for the ${\hat h}$ isometry\footnote{Additional $m{\hat b}$ couplings are necessary as well in the WZ term to cancel certain gauge variations.}. We will see in the next section that it disappears in the double dimensionally reduced action. This means that there are no fundamental strings ending on the monopole. When ${\hat k}$ and ${\hat h}$ are parallel, as in \cite{BEL}, ${\hat \omega}^{(1)}$ transforms like ${\hat b}$ (in (\ref{btrans})) \footnote{In this case ${\hat \omega}^{(0)}=0$.} and it is not necessary to introduce this new field. Furthermore, an extra term $-15m l_p^6 \, {\hat b}\partial {\hat b}\partial {\hat b} \partial {\hat b}$ in the WZ part of the action accounts for the variation of the ${\hat \omega}^{(7)}$ field\footnote{With the modification of the transformation law of ${\hat \omega}^{(6)}$ found in \cite{BEL}.}, which is also not needed. This is also the case for ${\hat \omega}^{(0)}$ and ${\hat d}^{(5)}$. In Table \ref{table-mMKK} we summarize the worldvolume fields present in (\ref{MKKaction}). All these fields can be given an interpretation in terms of solitons on the KK-monopole. 
\begin{table}[h] \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|c|c|} \hline Worldvolume & Field \\ Field & Strength \\ \hline\hline ${\hat \omega}^{(1)}_{{\hat \imath}}$ & ${\hat {\cal K}}^{(2)}_{{\hat \imath}{\hat \jmath}}$ \\ \hline ${\hat d}^{(5)}_{{\hat \imath}_1{\hat \imath}_2}$ & $ $ \\ \hline ${\hat \omega}^{(6)}_{{\hat \imath}_1 \dots {\hat \imath}_6}$ & ${\hat {\cal K}}^{(7)}_{{\hat \imath}_1 \dots {\hat \imath}_7}$ \\ \hline ${\hat \omega}^{(7)}_{{\hat \imath}_1 \dots {\hat \imath}_7}$ & \\ \hline \end{tabular} \end{center} \caption{\label{table-mMKK} \small {\bf Worldvolume field content of the massive M-theory KK-monopole.} In this table we give the worldvolume fields, together with their field strengths, present in the worldvolume action of the massive M--theory KK--monopole.} \renewcommand{\arraystretch}{1} \end{table} We find, as in the massless case, a 1-form ${\hat \omega}^{(1)}$, describing a 0-brane soliton in the worldvolume of the KK-monopole. Its dual 4-form describes a 3-brane soliton. They correspond to the intersections: $(0|{\rm M}2,{\rm MKK})$ and $(3|{\rm M}5,{\rm MKK})$, respectively. The monopole contains as well a 4-brane soliton which couples to the 5-form dual to one of the embedding scalars: $(4|{\rm MKK},{\rm MKK})_{1,2}$. All these intersections have been discussed in \cite{BREJS,BGT}. In the massive case the field ${\hat d}^{(5)}$ couples to the 5-brane soliton represented by the configuration\footnote{We use here a notation where $\times (-)$ indicates a worldvolume (transverse) direction. The first $\times$ in a row indicates the time direction.} \cite{deRoo,BGT}: \begin{displaymath} (5|{\rm M}9,{\rm MKK})= \mbox{ $\left\{ \begin{array}{c|cccccccccc} \times & \times & \times & \times & \times & z & - & \times & \times & \times & \times \\ \times & \times & \times & \times & \times & \times & \times & z & - & - & - \end{array} \right.$} \end{displaymath} \noindent Here the $z$-direction in the monopole corresponds to the isometry direction of the Taub-NUT space. A single M9-brane contains as well a Killing isometry in its worldvolume, as has been discussed in \cite{BvdS,BEHHLS,proci}. This isometric direction has been depicted as well as a $z$-direction. The 5-brane soliton predicted by the M-KK and M9-brane worldvolume supersymmetry algebras \cite{BGT} is realized as a 4-brane soliton given that it cannot develop a worldvolume direction along the isometry of the M9-brane. This is in agreement with the worldvolume field content that we have found for the massive M-KK-monopole, since the only worldvolume field to which this soliton can couple is the 5-form ${\hat d}^{(5)}$. The 6-form ${\hat \omega}^{(6)}$ is interpreted as the tension of the monopole and couples to the 5-brane soliton realized as the embedding of an M5-brane on the KK-monopole \cite{Tsey,BREJS}. In the massive case it also plays the role of Stueckelberg field for the auxiliary field ${\hat \omega}^{(7)}$ (see their ${\hat \rho}^{(6)}$ transformation rules in Appendix C.1). Finally, ${\hat \omega}^{(7)}$ couples to the 6-brane soliton describing the embedding of the monopole in an M9-brane: $(6|{\rm M}9,{\rm MKK})$. The M9-brane contains a 1-form vector field in its worldvolume \cite{proci}. The dual of this massive 1-form in the nine dimensional worldvolume is a 7-form field which from the point of view of the M9-brane is the worldvolume field that couples to the 6-brane soliton. 
\section{Double Dimensional Reduction:\\ Massive MKK $\rightarrow$ Massive KK-5A} \label{MKK-->IIAKK} We can now proceed and perform the double dimensional reduction of the action constructed in the previous section. We will see that the Killing isometry associated to translations of the Taub-NUT coordinate is restored in this process. In adapted coordinates ${\hat h}^{\hat \mu}=\delta^{\hat \mu}{}_y$, we take the ansatz for the worldvolume reduction: \begin{equation} {\hat X}^y=Y={\hat \xi}^6 \end{equation} \noindent with all other worldvolume fields and gauge parameters independent of ${\hat \xi}^6$. The ${\hat k}$ transformation rule (\ref{ktrans}) implies that this vector gets a $y$-component under massive gauge transformations. Therefore we reduce it as: \begin{equation} {\hat k}^\mu=k^\mu \, ,\qquad {\hat k}^y = m 2\pi\alpha^\prime \omega^{(0)} \, . \end{equation} \noindent $k^\mu$ will be the Killing vector associated to the isometry of the Taub-NUT space of the IIA KK-monopole. In order to keep track of all the gauge transformations we have to introduce a compensating gauge transformation: \begin{equation} \delta {\hat \xi}^{{\hat \imath}}=\delta^{{\hat \imath}6} [-\Lambda^{(0)}+\frac{m}{2}(2\pi\alpha^\prime) {\rho}^{(0)}-\frac{m}{2}(2\pi\alpha^\prime)\sigma^{(0)} \omega^{(0)}]\, , \end{equation} \noindent where $\Lambda^{(0)}$ is a g.c.t. in the direction $Y$. The reduction rules for the background fields and gauge parameters can be found for instance in \cite{BEL}. The worldvolume fields reduce as: \begin{equation} \begin{array}{rclrcl} l_p^2 \, {\hat b}_i&=& 2 \pi \alpha^\prime \, b_i \, ,& l_p^2 \, {\hat b}_6 &=& 2 \pi \alpha^\prime \, v^{(0)} \, ,\\ & & \\ l_p^2 \, {\hat \omega}^{(1)}_i &=& 2 \pi \alpha^\prime \, {\omega}^{(1)}_i\, ,& \,\,\,\,\,\, l_p^2 \, {\hat \omega}^{(1)}_6 &=& 2 \pi \alpha^\prime \, \omega^{(0)} \, ,\\ & & \\ l_p^2 \, {\hat d}^{(5)}_{i_1\dots i_5}&=& 2 \pi \alpha^\prime \, d^{(5)}_{i_1\dots i_5} \, , & \,\,\,\,\,\, l_p^2 \, {\hat \omega}^{(6)}_{i_1\dots i_5 6}&=& 2 \pi \alpha^\prime \, \omega^{(5)}_{i_1\dots i_5} \, ,\\ \end{array} \end{equation} \begin{displaymath} l_p^2 \, {\hat \omega}^{(7)}_{i_1\dots i_6 6}= -\frac37 (2\pi\alpha^\prime)^2 \, v^{(0)} \partial_{[i_1}\omega^{(5)}_{i_2\dots i_6]} +\frac67 (2 \pi \alpha^\prime) \left(1-\frac{m}{2}(2\pi\alpha^\prime) v^{(0)}\right) \, \omega^{(6)}_{i_1\dots i_6} \, . \end{displaymath} The (modified) gauge transformations of the new, reduced, worldvolume fields can be found in Appendix C.2. The reduction of ${\hat \omega}^{(6)}$ holds up to a total derivative (see Appendix C). The reduction of ${\hat d}^{(5)}_{i_1\dots i_4 6}$ gives a worldvolume 4-form which is gauge invariant and contributes to the reduced action with a decoupled term, therefore we have fixed it to zero. The scalar field $v^{(0)}$ also has vanishing transformation law. However this field contributes to the double dimensionally reduced action as: \begin{equation} \int d^6\xi (1-\frac{m}{2}(2\pi\alpha^\prime)v^{(0)}) {\cal L}_{{\rm mAKK}}\, , \end{equation} \noindent and is constrained to a constant by the equation of motion of the worldvolume field $\omega^{(5)}$ playing the role of tension of the IIA monopole. In general it modifies the tension of the new massive p-brane as \cite{BLO}: \begin{equation} \label{modifi} T_{\rm mIIA} = \left( 1 - {m \over 2} (2 \pi \alpha^\prime)v^{(0)} \right) T_{\rm mM} \, . \end{equation} This has the implication that for the particular value $v^{(0)}=2/m(2\pi\alpha^\prime)$ the brane tension vanishes. 
The physical mechanism behind this phenomenon is unclear and it would be interesting to investigate (see the Conclusions for a further discussion). The dependent gauge field ${\hat A}$ reduces as: \begin{eqnarray} {\hat A}_i&=&\left( 1+e^{2\phi}|k|^{-2}(i_k C^{(1)}+ m\pi\alpha^\prime\omega^{(0)})^2 \right)^{-1}\times\nonumber\\ &&\times\left(A_i-e^{2\phi} |k|^{-2}(C^{(1)}_i-m\pi\alpha^\prime b_i)(i_kC^{(1)}+ m\pi\alpha^\prime\omega^{(0)}) \right)\, , \end{eqnarray} \noindent where $A_i\equiv |k|^{-2}k_{\mu}\partial_i X^\mu$, and \begin{eqnarray} {\hat A}_6&=&- \left( 1+e^{2\phi}|k|^{-2}(i_k C^{(1)}+m\pi\alpha^\prime \omega^{(0)})^2 \right)^{-1}\times\nonumber\\ &&\times \,\, e^{2\phi} |k|^{-2}(1-m\pi\alpha^\prime v^{(0)})(i_k C^{(1)}+ m\pi\alpha^\prime\omega^{(0)}) \, . \end{eqnarray} \section{The Action of the Massive IIA KK-monopole} \label{mIIAKK} The double dimensional reduction of the action of the massive M-KK-monopole gives: \begin{eqnarray} \label{accionmasiva} S&=& -T_{\rm mAKK} \int d^6\xi \,\, k^2 e^{-2\phi} \sqrt{1+e^{2\phi}k^{-2}(i_k C^{(1)}+m\pi\alpha^\prime\omega^{(0)})^2} \times \nonumber \\ &&\hspace{-1.5cm} \times \sqrt{\Biggl|{\rm det} \left( \Pi_{ij} -(2\pi\alpha^\prime)^2 k^{-2}{\cal K}^{(1)}_i {\cal K}^{(1)}_j +\frac{(2\pi\alpha^\prime)k^{-1}e^{\phi}}{\sqrt{1+e^{2\phi} k^{-2}(i_k C^{(1)}+m\pi\alpha^\prime\omega^{(0)})^2}} {\cal K}^{(2)}_{ij} \right) \Biggr|}\nonumber \\ &&+\,\, \frac{1}{6!}(2\pi\alpha^\prime)T_{\rm mAKK} \int d^6\xi \epsilon^{i_1\dots i_6} {\cal K}^{(6)}_{i_1\dots i_6}\, . \end{eqnarray} \noindent The covariant derivative is defined as: $D_i X^\mu=\partial_i X^\mu+A_i k^\mu$ and \begin{equation} \Pi_{ij} =D_i X^{\mu}D_j X^{\nu}g_{\mu\nu} \, . \end{equation} \noindent The tension $T_{\rm mAKK}$ is the modified tension given by (\ref{modifi}). 
The gauge invariant forms ${\cal K}^{(2)}$ and ${\cal K}^{(1)}$ are the field strengths of $\omega^{(1)}$ and $\omega^{(0)}$, respectively: \begin{equation} \label{curvas} \begin{array}{rcl} {\cal K}^{(2)} &=& 2\partial \omega^{(1)}+ \frac{1}{2\pi\alpha^\prime}(i_k C^{(3)}) - 2 {\cal K}^{(1)} (DX C^{(1)}) \\& &\\& & +{m \over 2} \omega^{(0)} (DXDX B) - m (2 \pi \alpha^\prime) \omega^{(0)} \partial \omega^{(0)} A \, ,\\ & &\\ {\cal K}^{(1)} &=& \partial {\omega}^{(0)}-\frac{1}{2\pi\alpha^\prime} (i_k B) \, ,\\ \end{array} \end{equation} \noindent and the WZ term is the field strength of the worldvolume field $\omega^{(5)}$, playing the role of tension of the IIA KK-monopole: \begin{equation} \begin{array}{rcl} \label{WZKK} {\cal K}^{(6)} &=& 6 (\partial \omega^{(5)}+m\omega^{(6)}) -3m(2\pi\alpha^\prime)d^{(5)}\partial\omega^{(0)}+ \frac{1}{2\pi\alpha^\prime}(i_k N) \\ & & \\ & & -15 (i_k C^{(5)})(2\partial\omega^{(1)} +\frac{1}{2\pi\alpha^\prime}(i_k C^{(3)})+\frac{m}{2}\omega^{(0)}B) \\ & & \\ & & -6((i_k {\tilde B}) -\frac{m}{2}(2\pi\alpha^\prime)\omega^{(0)}C^{(5)}) {\cal K}^{(1)} \\ & & \\ & & -60 (2 \pi \alpha^\prime) DX^{{\mu}} DX^{{\nu}} DX^{{\rho}} C^{(3)}_{{\mu} {\nu} {\rho}} \, {\cal K}^{(1)} {\cal K}^{(2)} \\ & & \\ & & +{30 \over 2\pi\alpha^{\prime}}B (i_k C^{(3)})^2 +30DX^{\mu} DX^{\nu} DX^{\rho} C^{(3)}_{\mu \nu \rho} ( i_k C^{(3)}) {\cal K}^{(1)} \\ & & \\ & & -180 (2 \pi \alpha^\prime) DX^{\mu} DX^{{\nu}} B_{{\mu} {\nu}} (\partial\omega^{(1)})^2 \\ & & \\ & & +\frac{20}{2\pi\alpha^\prime}C^{(3)}(i_k B)(i_k C^{(3)}+ m\pi\alpha^\prime\omega^{(0)} B)\\ & & \\ & & -\frac{15}{4}m^2(2\pi\alpha^\prime) (\omega^{(0)})^2 DX^{\mu_1}\dots DX^{\mu_6}B^3_{\mu_1\dots\mu_6} \\ & & \\ & & -45m(2\pi\alpha^\prime)\omega^{(0)}DX^{\mu_1}\dots DX^{\mu_4} B^2_{\mu_1\dots\mu_4}\partial\omega^{(1)}\\ & & \\ & & +\frac{15}{2}m\omega^{(0)}B^2(i_k C^{(3)}) -360 (2 \pi \alpha^\prime)^2 A (\partial \omega^{(1)}+ \frac{m}{4}\omega^{(0)}B)^2 \partial \omega^{(0)} \\ & & \\ & & + 15 (2 \pi \alpha^\prime)^2 { e^{2 \phi} |k|^{-2} \left(i_k C^{(1)}+ m\pi\alpha^\prime\omega^{(0)}\right) \over 1 + e^{2 \phi} |k|^{-2} \left(i_k C^{(1)}+m\pi\alpha^\prime\omega^{(0)}\right)^2} {\cal K}^{(2)}{\cal K}^{(2)}{\cal K}^{(2)} \, .\\ \end{array} \end{equation} In Table \ref{table-mIIAKK} we summarize the worldvolume field content. The gauge transformation rules of background and worldvolume fields can be found in Appendices B, C.2 and reference \cite{BLO}. We summarize our notation for the Type IIA background fields in Table \ref{IIAback}. 
\begin{table}[h] \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|c|c|} \hline Worldvolume & Field \\ Field & Strength \\ \hline\hline $ {\omega}^{(0)} $ & ${\cal K}^{(1)}_i$ \\ \hline $ {\omega}^{(1)}_i$ & ${\cal K}^{(2)}_{ij}$ \\ \hline $d^{(5)}$ & $ $ \\ \hline ${\omega}^{(5)}_{i_1\dots i_5}$ & ${\cal K}^{(6)}_{i_1\dots i_6}$ \\ \hline ${\omega}^{(6)}_{i_1\dots i_6}$ & \\ \hline \end{tabular} \end{center} \caption{\label{table-mIIAKK} \small {\bf Worldvolume fields of the massive IIA KK-monopole.} In this table we give the worldvolume fields, together with their field strengths, that occur in the effective action of the massive IIA KK-monopole.} \renewcommand{\arraystretch}{1} \end{table} \begin{table}[!ht] \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Target space & Gauge & Dual & Gauge \\ Field & Parameter & Field & Parameter \\ \hline\hline ${g}_{{\mu}{\nu}}$, $\phi$ & $-$ & $-$ & $-$ \\ \hline $B_{\mu \nu}$ & $\Lambda_\mu$ & ${\tilde B}_{\mu_1 \dots \mu_6}$ & ${\tilde \Lambda}_{\mu_1 \dots \mu_5}$ \\ \hline $C^{(1)}_\mu$ & $\Lambda^{(0)}$ & $C^{(7)}_{\mu_1 \dots \mu_7}$ & $\Lambda^{(6)}_{\mu_1 \dots \mu_6}$ \\ \hline ${C}^{(3)}_{{\mu}{\nu}{\rho}}$ & $\Lambda^{(2)}_{\mu \nu}$ & $C^{(5)}_{\mu_1 \dots \mu_5}$ & $\Lambda^{(4)}_{\mu_1 \dots \mu_4}$ \\ \hline $k_{\mu}$ & $-$ & ${N}_{\mu_1 \dots \mu_7}$ & ${\Omega}^{(6)}_{\mu_1\dots\mu_6}$ \\ \hline \end{tabular} \end{center} \caption{\label{IIAback} \small {\bf Target space fields of the type IIA superstring.} The type IIA background contains the NS-NS sector: $( g_{\mu \nu}, \phi, B_{\mu \nu})$, the RR sector: $( C^{(1)}, C^{(3)})$, and the Poincar{\'e} duals of the RR fields and the NS-NS 2-form $B$: $( C^{(5)}, C^{(7)}, {\tilde B})$. The Kaluza-Klein monopole couples to a new field $N$, dual to the Killing vector associated to the Taub-NUT isometry, considered as a 1-form $k_\mu$.} \renewcommand{\arraystretch}{1.5} \end{table} The action of the ten dimensional IIA KK-monopole is manifestly invariant under translations of the Taub-NUT coordinate, since all the reduced fields and gauge parameters have vanishing Lie derivative with respect to $k$. This symmetry has been restored in the reduction by a mechanism in which the $y$ component of the Killing vector in eleven dimensions gives rise to an auxiliary worldvolume field that is needed to compensate the massive variation of the Stueckelberg field $i_k C^{(1)}$. A further check of this action is that for $m=0$ reduces to the action of the massless IIA KK-monopole \cite{BEL}. It is also worth noting that the field $b$ has disappeared in the reduced action. This reflects the fact that there are no fundamental strings ending on the monopole. Nevertheless there is a string-like object, described by $\omega^{(1)}$, ending on the monopole. In fact, the worldvolume fields that couple to the soliton solutions of a KK-monopole are those necessary to construct invariant field strengths for the fields $i_k C^{(p+1)}$ (see \cite{EJL}). These field strengths have the form: \begin{equation} {\cal K}^{(p)} = p\partial\omega^{(p-1)}+\frac{1}{2\pi\alpha^\prime} (i_k C^{(p+1)})+\dots \, , \end{equation} \noindent so that $\omega^{(p-1)}$ couples to a $(p-2)$-brane soliton which describes the boundary of a $p$-brane ending on the monopole, with one of its worldvolume directions wrapped around the Taub-NUT direction of the monopole. 
Since the target space field associated to $\omega^{(1)}$ is $(i_k C^{(3)})$ it describes a wrapped D2-brane ending on the monopole. We also find the soliton configurations: $(2|{\rm D}4,{\rm KK})$, $(3|{\rm NS}5,KK)$ and $(3|{\rm KK},{\rm KK})_{1,2}$ (see \cite{Papa,EJL}). The only modifications due to the mass occur in the explicit expressions of the field strengths, where new terms proportional to the mass appear, which involve worldvolume fields that already propagated in the massless case. There is however an exception, and that is the presence of the $d^{(5)}$ worldvolume field associated to the dual massive transformations, which couples to a 4-brane soliton. This soliton is a domain wall in the six dimensional worldvolume and is described by the configuration: \begin{displaymath} (4|{\rm D}8,{\rm KK})= \mbox{ $\left\{ \begin{array}{c|ccccccccc} \times & \times & \times & \times & \times & - & \times & \times & \times & \times \\ \times & \times & \times & \times & \times & \times & z & - & - & - \end{array} \right.$} \end{displaymath} \noindent which is obtained by reducing the $(4|{\rm M}9,{\rm KK})$ soliton configuration of the M-theory KK-monopole along the isometric direction of the M9-brane. This intersection is related to the Hanany-Witten configuration \cite{HW}: \begin{equation} \label{conf1} \begin{array}{c|c} {\rm D5}:\ \ \ \times & \times \times - - - \times \times \times - \\ {\rm NS5}:\ \times & \times \times \times \times \times - - - - \\ {\rm D3}: \ \ \ \times & \times \times - - - - - - \times \end{array} \end{equation} \noindent by T-duality along the 3,4,9 directions. It is also T-dual to the intersection $(4|{\rm D}7,{\rm NS}5)$ in Type IIB, corresponding to a 4-brane soliton in the IIB NS-5-brane \cite{EGJP}. There is as well another 4-brane soliton in the KK-monopole worldvolume \cite{BREJS} which is already present in the massless case. This is the embedding of the D4-brane on the monopole: $(4|{\rm D}4,{\rm KK})$, and it couples to the worldvolume field $\omega^{(5)}$, describing the tension of the monopole. This is the reduction of the 5-brane soliton $(5|{\rm M}5,{\rm KK})$, which couples to the tension ${\hat \omega}^{(6)}$ of the M-KK-monopole. Finally, the reduction of the 6-brane soliton $(6|{\rm M}9,{\rm KK})$ gives a 5-brane soliton realized as the embedding of the KK-monopole on a KK-7A-brane\footnote{This brane is obtained by reducing the M9-brane along a worldvolume direction other than the $z$-direction, but it is not predicted by the Type IIA supersymmetry algebra. As discussed in \cite{proci}, this is also the case for the KK-6A brane, obtained by reducing the M-KK-monopole along a transverse direction different from the Taub-NUT direction. These branes are required by U-duality of M-theory on a d-torus, as it has been shown in \cite{U1,U2,U3}.}. \section{Massive $T$-duality:\\ IIB NS-$5$ $\rightarrow$ Massive IIAKK} \label{IIBNS5-->IIAKK} In this section we obtain the massive IIA KK-monopole through a ``massive'' T-duality transformation in the action of the IIB NS-5-brane. 
The action of the Type IIB NS-5-brane was constructed in \cite{EJL} and is given by: \begin{eqnarray} \label{5brane-action} \hspace{-.5cm} S &=& -T_{{\rm NS-5B}} \int d^6 \xi \,\, e^{-2\varphi} \sqrt{1 + e^{2 \varphi} (C^{(0)})^2} \, \times \nonumber \\ &&\hspace{+1cm} \times \, \sqrt{|{\rm det} ( {\j} - (2 \pi \alpha^\prime) {e^{\varphi} \over \sqrt{1 + e^{2 \varphi} (C^{(0)})^2}} {\tilde {\cal F}} )|} \, \nonumber \\ &&\hspace{-.5cm}+ \,\, {1 \over 6!}(2 \pi \alpha^\prime) T_{{\rm NS-5B}} \int d^6\xi \,\, \epsilon^{i_1 \dots i_6} {\tilde {\cal G}}^{(6)}_{i_1 \dots i_6} \, . \end{eqnarray} Here ${\tilde {\cal F}}=2\partial c^{(1)}+\frac{1}{2\pi\alpha^\prime} C^{(2)}$ and: \begin{equation} \begin{array}{rcl} {\tilde {\cal G}}^{(6)} &=& \left\{ 6\partial {\tilde c}^{(5)} - {1 \over 2 \pi \alpha^\prime}{\tilde {\cal B}} - {45 \over 2(2 \pi \alpha^\prime)} {\cal B}C^{(2)}C^{(2)} -15 C^{(4)} {\tilde {\cal F}} \right. \\ & &\\ & & - 180 (2 \pi \alpha^\prime) {\cal B} \partial c^{(1)}\partial c^{(1)} -90 {\cal B} C^{(2)} \partial c^{(1)} \\& &\\& & \left. +15 (2 \pi \alpha^\prime)^2 {C^{(0)} \over e^{-2 \varphi} + (C^{(0)})^2} {\tilde {\cal F}}{\tilde {\cal F}}{\tilde {\cal F}} \right\} \, .\\ \end{array} \end{equation} In Table \ref{IIBback} we have summarized our notation for the Type IIB background fields. \begin{table}[!ht] \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Target space & Gauge & Dual & Gauge \\ Field & Parameter & Field & Parameter \\ \hline\hline ${\j}_{{\mu}{\nu}}$, $\varphi$ & $-$ & $-$ & $-$ \\ \hline ${\cal B}_{\mu \nu}$ & $\Lambda_\mu$ & ${\tilde {\cal B}}_{\mu_1 \dots \mu_6}$ & ${\tilde \Lambda}_{\mu_1 \dots \mu_5}$ \\ \hline $C^{(0)}$ & $-$ & $-$ & $-$ \\ \hline ${C}^{(2)}_{{\mu}{\nu}}$ & $\Lambda^{(1)}_\mu$ & $C^{(6)}_{\mu_1 \dots \mu_6}$ & $\Lambda^{(5)}_{\mu_1 \dots \mu_5}$ \\ \hline \end{tabular} \end{center} \caption{\label{IIBback} \small {\bf Target space fields of the type IIB superstring.} The Type IIB background contains the common sector: $( {\j}_{\mu \nu}, \varphi, {\cal B}_{\mu \nu} )$, the RR sector: $(C^{(0)}, C^{(2)}, C^{(4)})$, and the Poincar{\'e} duals of the 2-forms $C^{(2)}$ and ${\cal B}$: $( C^{(6)}, {\tilde {\cal B}})$. } \renewcommand{\arraystretch}{1} \end{table} We apply now a T-duality transformation along a transverse direction. The worldvolume fields do not change rank after T-duality since the original and dual branes have the same number of worldvolume dimensions. Moreover, we have a new scalar field $Z'$, which is the T-dual of the coordinate along which we perform the duality transformation. The T-duality rules for the worldvolume fields are given by: \begin{equation} \begin{array}{rcl} Z' &=& (2 \pi \alpha^\prime) \omega^{(0)} \, ,\\ & &\\ c^{(1) \prime} &=& - \omega^{(1)} - {m \over 4} (2 \pi \alpha^\prime) (\omega^{(0)})^2 \partial Z \, ,\\ & &\\ \partial{\tilde c}^{(5) \prime} &=& \partial\omega^{(5)} + 60 (2 \pi \alpha^\prime)^2 \partial Z \partial \omega^{(1)} \partial \omega^{(1)} \partial \omega^{(0)} \\& &\\ & &+m\omega^{(6)}-\frac12 m(2\pi\alpha^\prime) d^{(5)}\partial\omega^{(0)}\, . \end{array} \end{equation} \noindent Here $Z$ is the Taub-NUT coordinate of the KK-monopole. Its occurrence on the right hand side of the two expressions above is required by gauge invariance, and assures the gauged sigma-model structure necessary to describe the KK-monopole. 
Using the massive T-duality rules given in (\ref{IIB-IIA-massive}) and (\ref{atres}) for the background fields, we find the following transformations for the worldvolume curvatures: \begin{equation} {\tilde {\cal F}}^{\prime} = - {\cal K}^{(2)} \, ,\qquad {\tilde {\cal G}}^{(6) \prime}= {\cal K}^{(6)} \, , \end{equation} \noindent where ${\cal K}^{(2)}$ is given by (\ref{curvas}) and ${\cal K}^{(6)}$ by (\ref{WZKK}). Substituting in the IIB NS-5-brane effective action, we recover the expression (\ref{accionmasiva}) for the effective action of the massive IIA KK-monopole that we obtained from eleven dimensions. This provides a check of that action as well as of the massive T-duality rule (\ref{atres}) given in the Appendix. One remark is in order at this point. In general the double dimensional reduction of a massive M-brane gives a tension \cite{BLO}: \begin{equation} T=\left( 1- {m \over 2} (2 \pi \alpha^\prime) v^{(0)} \right) {\hat T} \, \int d {\hat \xi} \end{equation} \noindent for the reduced brane, where ${\hat \xi}$ is the compact worldvolume direction. A particular example is the massive IIA KK-monopole obtained in the previous section. This suggests that in the massive case the tensions should transform under T-duality as: \begin{equation} T^\prime_{{\rm B5}}=T_{{\rm mAKK}}= \left( 1- {m \over 2} (2 \pi \alpha^\prime) v^{(0)} \right) T_{{\rm mMKK}}\, , \end{equation} \noindent which seems, however, an arbitrary choice from the T-duality point of view. We leave some speculations about this point for the Conclusions. \section{Conclusions} We have constructed the worldvolume effective action of the Type IIA KK-monopole propagating in a background with non-vanishing cosmological constant. The worldvolume field content of this brane consists of a scalar, a 1-form, two 5-forms and a 6-form. One of the two 5-forms is interpreted as the tension of the monopole, whereas the other one is associated to dual massive transformations in the worldvolume of the monopole. The 6-form and the tension of the monopole are interpreted as a Stueckelberg pair with respect to massive transformations. These worldvolume fields have been interpreted in terms of soliton solutions propagating in the worldvolume of the monopole. The new feature with respect to the massless case is that there is a 4-brane soliton realized as the domain wall intersection of the KK-monopole with a D8-brane, and a 6-brane soliton corresponding to the embedding of the monopole on a KK7-brane. We have already mentioned that the KK7-brane belongs to a particular class of Type IIA brane solutions which are not predicted by the spacetime supersymmetry algebra. It is unclear why this happens. These branes have already been encountered in the literature in a different context, namely they are required in order to fill up multiplets of BPS states in representations of the U-duality symmetry group of M-theory on a d-torus \cite{U1,U2,U3,MO}. As we discuss in \cite{proci}, where we calculate the kinetic part of their worldvolume effective actions, these branes do not have an obvious interpretation in weakly coupled string theory since they scale with the coupling constant more singularly than $1/{g_s^2}$. The derivation of the KK-monopole effective action from eleven dimensions gives a tension depending on the mass and on a Wilson line of the gauge field associated to the massive isometry (which is set to a constant value by the integration of the worldvolume 5-form playing the role of tension of the monopole) (see (\ref{modifi})).
In particular, the tension can vanish for a certain value of the Wilson line. In fact, this is a general feature of the Type IIA massive branes that are obtained from eleven dimensions by a double dimensional reduction \cite{BLO}. The presence of covariant derivatives with respect to massive transformations in the eleven-dimensional massive branes implies that in the double dimensional reduction the brane can be wrapped along the eleventh direction with an unusual winding number, which depends on the mass and on a Wilson line of the gauge field. Therefore, consistency implies that $(1-\frac{m}{2}(2\pi\alpha^\prime) v^{(0)})$ has to be an integer, i.e., the winding number, with the tension of the reduced brane proportional to it. The Wilson line may be interpreted as a distance $2 \pi \alpha^\prime v^{(0)}$ corresponding to a D8-brane, since a massive IIA brane is a brane in the presence of D8-branes. This is similar to the interpretation of the masses of the string states in terms of the D8-brane positions in a Type ${\rm I}^\prime$ construction. Since the D8-branes are T-dual to the D9-branes, the arbitrariness of $v^{(0)}$ from the T-duality point of view might originate from Wilson lines in Type IIB with D9-branes. Finally, it would be of interest to analyze the six-dimensional interacting gauge theory that arises from considering a system of parallel massive KK-monopoles in the limit in which gravity is decoupled \cite{sixdi}. \section*{Acknowledgements} We would like to thank E.~Bergshoeff for useful discussions. The work of E.E.~is part of the research program of the Dutch Foundation FOM.
\section{Introduction} Stories are central to human culture and communication. However, it seems that stories are easier said than generated. Despite incredible recent progress in natural language processing, the generation of longer texts is still a challenge \cite{van2019narrative, rashkin2020plotmachines}. \citet{ostermann2019mcscript2} present a machine comprehension corpus for the end-to-end evaluation of script knowledge, in which 50\% of the questions require script knowledge for the correct answer. The authors demonstrate that though the task is not challenging to humans, existing machine comprehension models fail to perform well on the data, even if they make use of a commonsense knowledge base. Partially, this challenge could be attributed to the lack of adequate memory models. Longer texts demand better memory mechanisms, and possible ways to construct such mechanisms have been discussed in the literature for the last 25 years. Long short-term memory networks \cite{hochreiter1997long}, Neural Turing Machines \cite{graves2014neural}, memory networks \cite{weston2014memory}, and many other architectures try to tackle this problem. Attempts to introduce some form of memory in transformers, such as \cite{guo2019star} or \cite{burtsev2020memory}, could be regarded as the next steps in this long line of work. There are some interesting recent attempts to generate long texts in a form that makes such longer texts feasible for a human reader. For example, \citet{agafonova2020paranoid} generate a diary of a neural network. Yet the generation of a narrative is still challenging. For a detailed review of earlier approaches to narrative generation, we refer the reader to \cite{kybartas2016survey}. Even modern models for narrative generation rely heavily on some form of expert knowledge or some type of hierarchical structure of the narrative. For example, \citet{fan2019strategies} first generate the predicate-argument structure of the text, then generate a surface realization of the predicate-argument structure, and finally replace the entity placeholders with context-sensitive names and references. \citet{fan2019strategies,ammanabrolu2020story} propose a hierarchical generation framework that first plans a storyline and then generates a story based on it, together with a technique for preprocessing textual story data into event sequences. \citet{xu2018skeleton} develop a model that first generates the most critical phrases, called the skeleton, and then expands the skeleton into a complete and fluent sentence. Similarly, \citet{martin2018dungeons} provide a mid-level abstraction between words and sentences to minimize event sparsity and present a technique for automated story generation whereby the problem is decomposed into the generation of successive events and the generation of natural language sentences from events. Finally, \citet{brahman2020cue} develop an approach where the user provides the model with such mid-level sentence abstractions in the form of cue phrases during the generation process. However, we should take into consideration that modern Natural Language Processing (NLP) is fundamentally an experimental discipline, so the lack of dedicated data could be another bottleneck for the development of narrative generation. This paper tries to address this problem. Unfortunately, the majority of available narrative datasets deal with some constrained form of a short plot that is usually called a {\em scenario}. These scenarios are centered around common activities, e.g.,
going grocery shopping or taking a shower. The narrative datasets available in the literature are also extremely small and cannot be used with the most advanced modern NLP models. \citet{regneri2010learning} collect 493 event sequence descriptions for 22 behavior scenarios. \citet{modi2016inscript} present the InScript dataset that consists of 1,000 stories centered around 10 different scenarios. \citet{wanzare2019detecting} provide 200 scenarios and attempt to identify all references to them in a collection of narrative texts. \citet{mostafazadeh2016corpus} present a corpus of 50k five-sentence commonsense stories. Finally, there is the MPST dataset that contains 14K movie plot synopses \cite{kar2018mpst}, and WikiPlots\footnote{https://github.com/markriedl/WikiPlots} that contains 112 936 story plots extracted from English-language Wikipedia. Recently, \citet{malyshevaDYP} provided a dataset of TV series along with an instrument for narrative arc analysis. These datasets are useful, yet, like the vast majority of narrative datasets, they are only available in English. This paper provides a large multi-language dataset of stories in natural language. The stories have a cross-language index, and stories and characters are cross-linked if they occur in different languages. Additionally, the texts have tags such as genre or topic. This is the first story dataset of such magnitude that we know of. We hope that a large dataset of long storylines could be used for various aspects of narrative research as well as to facilitate experiments with end-to-end narrative generation. \section{Data} StoryDB is motivated by several interesting experiments that used WikiPlots — one of the larger English datasets of narratives available for all-purpose narrative research, which we have mentioned earlier. Given the various applications that the WikiPlots dataset has found in the NLP community, we believe that StoryDB will be even more useful due to its multiple languages, advanced filtering that guarantees a higher quality of the obtained data, and genre tagging. To improve reproducibility and make StoryDB usable as Wikipedia is further updated, we publish the data as well as the code for the filtering pipeline\footnote{https://drive.google.com/drive/folders/ 1RCWk7pyvIpubtsf-f2pIsfqTkvtV80Yv}. The stories that form StoryDB are extracted from any Wikipedia article that contains a sub-header with the word "plot" (e.g., "Plot", "Plot Summary", etc.) in the corresponding language. \subsection{Dataset structure} The dataset consists of several index files and includes a directory \verb"plots". Every file in the directory has a similar structure. The first two letters of the filename stand for the ISO 639-1\footnote{https://en.wikipedia.org/wiki/ISO\_639-1} code of the language of the texts presented in the file. For example, \verb"hy_plots.tsv" contains 4 861 plots in the Armenian language. The file \verb"simple_plots.tsv" contains stories in Simple English. Every entry in the plots file has a similar structure and includes the following fields: \begin{itemize} \item ID — the unique number of a plot that is the same across every language in the dataset; \item Lang — the language of this particular entry; \item Link — a link to the Wikipedia page containing the plot; \item Title — the title of the story; \item Text — the text of the story; \item Categories — the categories that Wikipedia assigns to this story. \end{itemize} One can navigate across plot files using StoryDB's index file \verb"plot_matrix.tsv".
The rows of the file stand for languages. If a given plot is available in a given language, then the title of this plot stands in the corresponding cell of the \verb"plot_matrix.tsv". For example, if "Wee free men" is available in Simple English, it can be found by its title in the corresponding \verb"simple_plots.tsv". StoryDB also includes \verb"plot_rake.tsv", which contains keywords extracted with the RAKE algorithm \cite{rose2010automatic} for every story. Finally, the files \verb"ID_lang_tag.tsv" and \verb"ID_tag_average.tsv" include information about the tags that correspond to a given story. We discuss the tagging procedure in detail later. \subsection{Preprocessing} Our motivation is to provide a dataset of storylines for various languages, including low-resource ones. Roughly speaking, we want to be sure that every story that ends up in StoryDB is a legitimate storyline description in the corresponding natural language. Thus we are more interested in the precision of the dataset than in its recall. To guarantee a higher quality of the obtained stories, we implemented several heuristic filters that we briefly describe here. English Wikipedia is an order of magnitude bigger than any other Wikipedia both in terms of users and in terms of admins\footnote{https://meta.wikimedia.org/wiki/List\_of\_Wikipedias}. This makes the English list of storylines the most extensive one. We regard it as the least noisy one and use it as a reference source for the filtering procedure. We exclude every page that includes a plot yet has no plot section in English Wikipedia for the same entry. If Wikipedia in language X has a page with title A and this page is also available in language Y under title B, we list such a pair of stories as \verb"[language_X," \verb"title_A," \verb"language_Y," \verb"title_B]". Every entry in this list is an edge in a graph of stories. Every vertex in this graph has a corresponding name \verb"language, title". Unlike connected stories from different languages, which usually contain similar storylines, the stories listed under the same name in the same language might differ significantly. Say, two stories in language X \verb"[language_X," \verb"title_A]" and \verb"[language_X," \verb"title_B]" are both linked to one story in another language Y \verb"[language_Y," \verb"title_C]". To avoid such ambiguities, we exclude fully connected components that contain more than one entry in the same language. The obtained list of stories ends up in the resulting matrix of stories used to navigate the dataset. We experimented with various filtering procedures and found this combination to produce a sufficiently rich dataset with a minimal amount of duplicates. StoryDB is also equipped with a catalog of characters. If a given character that has an individual Wikipedia page is mentioned in a story, its description in the original language is saved into the corresponding tsv-file alongside the ID of the story and the language of the description. \subsection{Tagging} We annotate the resulting stories using meta-information on categories from the Wiki API\footnote{https://www.mediawiki.org/w/api.php?action=help\& modules=query\%2Bcategories}. For every plot, we list all translated categories marked in every language in which this plot is available. Then we search these category lists for substrings that include tags from the manually created list of tags\footnote{The list of tags is published with the dataset.}.
This allows us to provide language-specific tags for every language, which are listed in \verb"ID_lang_tag.tsv". For example, the Czech version of Black Night has the tags \verb"action;" \verb"crime;" \verb"drama;" \verb"superhero;" \verb"comics;" and \verb"thriller", while the same story in Persian has no tag \verb"comics", but has the additional tags \verb"neo-noir;" \verb"psychological;" \verb"epic;" and \verb"screenplays;". The file \verb"ID_tag_average.tsv" includes the scores of the tags available for every story. The scores are calculated as follows: we count the number of times that a given tag is associated with a given story. Then we divide this number by the total number of languages in which the story is represented. The obtained space of tags could be useful for narrative exploration. Every story becomes a vector with every coordinate in the interval $[0,1]$. Figure \ref{fig:tags} shows a t-SNE visualisation of this space \cite{van2008visualizing} alongside the centroids of the more distinctive tags. \begin{figure}[t!]\centering \includegraphics[scale=0.2]{tags.png} \caption{t-SNE visualisation for plots in StoryDB clustered according to their tags. The figure shows centroids of the tags with higher variance across the dataset.} \label{fig:tags} \end{figure} \subsection{StoryDB} Figure \ref{fig:size} shows the relative size of the datasets in every language presented in StoryDB. English heavily dominates, followed by Italian, French, Russian, and German. \begin{figure*}[h!]\centering \includegraphics[scale=0.2]{plots_500.png} \caption{Number of stories in every language that has more than five hundred entries in StoryDB.} \label{fig:size} \end{figure*} There are more than 20 languages that have three thousand or more stories available, including such languages as Finnish, Hungarian, and Persian. Table \ref{tab:DB} summarises some of the resulting parameters of the obtained dataset. \begin{table}[h!] \centering \begin{tabular}{lr} \multicolumn{2}{c}{Story DB} \\ \hline Number of languages & 42 \\ \begin{tabular}[c]{@{}l@{}}Median \# of stories in a language\end{tabular} & 2 772 \\ \begin{tabular}[c]{@{}l@{}}Maximal \# of stories in a language\end{tabular} & 63 756 \\ \begin{tabular}[c]{@{}l@{}}Minimum \# of stories in a language\end{tabular} & 568 \end{tabular} \caption{Some resulting parameters of the StoryDB.} \label{tab:DB} \end{table} \section{Evaluation} We have used three modern transformer-based architectures for the evaluation: \begin{itemize} \item mBERT\footnote{https://huggingface.co/bert-base-multilingual-cased} \cite{devlin2018bert} — a multi-language version of BERT; \item mDistilBERT\footnote{https://huggingface.co/distilbert-base-multilingual-cased} \cite{Sanh2019DistilBERTAD} — a distilled version of multi-language BERT; \item XLM-RoBERTa\footnote{https://huggingface.co/xlm-roberta-base} \cite{conneau2020unsupervised} — a model that is two times larger than BERT in terms of the number of parameters. \end{itemize} These models are the most widely used multi-language models to date. The results of the experiments are publicly available at Weights and Biases\footnote{https://wandb.ai/altsoph/storydb\_eval.task1 \\ https://wandb.ai/altsoph/storydb\_eval.task2 \\ https://wandb.ai/altsoph/storydb\_eval.task3}, see \cite{wandb}. The evaluation was performed on the ten largest languages in StoryDB, namely: English — 'en', French — 'fr', Italian — 'it', Russian — 'ru', German — 'de', Dutch — 'nl', Ukrainian — 'uk', Portuguese — 'pt', Polish — 'pl', and Spanish — 'es'.
We evaluated three tasks: \begin{itemize} \item Task A. Multilabel classification for tags on a multilanguage corpus of plots; \item Task B. Multilabel classification for tags in cross-lingual learning; \item Task C. Multilabel classification for tags in cross-lingual learning with a corpus of overlapping plots that occur in every language. \end{itemize} Let us now describe every task in detail. \subsection{Task A} We have sampled the ten most frequent tags out of StoryDB (the tag 'film' was the most frequent yet was excluded as somewhat redundant). These tags were: 'drama', 'comedy', 'television', 'fiction', 'series', 'action', 'thriller', 'black-and-white', 'science fiction', 'horror'. These ten tags form a vector, where every dimension corresponds to one particular tag. '1' encodes the presence of the tag and '0' stands for its absence. For every language out of the top ten in StoryDB, we have sampled 2000 plots such that every plot has at least one tag out of the list of the ten most popular tags. In Task A the plots were sampled randomly for every language, so there is some overlap between languages. On average, 2\% of the plots in one language reoccur in another one. It is important to note that the set of tags for a given plot might differ across languages and one plot could have several tags simultaneously. Thus, multilabel classification is a natural evaluation task under these circumstances. Since the dataset is not balanced with respect to tags, we used the binary cross-entropy loss\footnote{https://pytorch.org/docs/stable/generated/ torch.nn.BCELoss.html} over the vector of tags. Table \ref{tab:task_a} and Table \ref{tab:task_a_detail} sum up the results of the three models on a multilanguage dataset of plots. Further details across languages and tags are available online\footnote{https://wandb.ai/altsoph/storydb\_eval.task1}. \begin{table}[] \begin{tabular}{lll} & \begin{tabular}[c]{@{}l@{}}Hamming\\ Score\end{tabular} & \begin{tabular}[c]{@{}l@{}}Multilabel\\ Accuracy\end{tabular} \\ \hline mDistillBERT & 0.47 & 0.31 \\ mBERT & 0.50 & 0.33 \\ XLM-RoBERTa & 0.50 & 0.33 \end{tabular} \caption{Task A. Hamming score and multilabel accuracy for the vector of predicted tags on a validation set. Training data consists of sixteen thousand plots in ten languages, with one-tenth of the dataset in every language.} \label{tab:task_a} \end{table} \begin{table}[] \begin{tabular}{llll} & mD-BERT & mBERT & XLM-R \\ \hline Comedy & 0.69 & 0.67 & 0.69 \\ Action & 0.67 & 0.70 & 0.67 \\ Fiction & 0.78 & 0.80 & 0.81 \\ Thriller & 0.67 & 0.63 & 0.64 \\ Horror & 0.70 & 0.76 & 0.75 \\ Drama & 0.73 & 0.74 & 0.74 \\ Series & 0.77 & 0.78 & 0.78 \\ Television & 0.74 & 0.76 & 0.76 \\ \begin{tabular}[c]{@{}l@{}}Science\\ Fiction\end{tabular} & 0.78 & 0.80 & 0.81 \\ \begin{tabular}[c]{@{}l@{}}Black and \\ White\end{tabular} & 0.68 & 0.65 & 0.62 \end{tabular} \caption{Task A. AUC-ROC for binary tag classifiers on a validation set. Training data consists of sixteen thousand plots in ten languages, with one-tenth of the dataset in every language.} \label{tab:task_a_detail} \end{table} \subsection{Task B} Now let us consider a similar setup, yet train every model on one language in StoryDB and test its accuracy on another language. The parameters of the training datasets and labels are the same as in Task A above, but every model is trained on one dataset and is then tested on the other languages. Table \ref{tab:task_b} shows the performance of mBERT; mDistillBERT and XLM-RoBERTa demonstrate similar behavior.
The detailed results can be found online\footnote{https://wandb.ai/altsoph/storydb\_eval.task2}. \begin{table*}[] \centering \begin{tabular}{l|llllllllll} & en & de & nl & fr & it & es & pt & ru & uk & pl \\ \hline en & 0.36 & 0.16 & 0.14 & 0.16 & 0.10 & 0.17 & 0.15 & 0.13 & 0.12 & 0.15 \\ de & 0.15 & 0.40 & 0.16 & 0.18 & 0.12 & 0.16 & 0.20 & 0.19 & 0.18 & 0.21 \\ nl & 0.20 & 0.33 & 0.41 & 0.20 & 0.22 & 0.32 & 0.29 & 0.31 & 0.25 & 0.30 \\ fr & 0.16 & 0.20 & 0.16 & 0.51 & 0.13 & 0.18 & 0.18 & 0.16 & 0.14 & 0.19 \\ it & 0.19 & 0.30 & 0.24 & 0.21 & 0.21 & 0.26 & 0.28 & 0.27 & 0.24 & 0.30 \\ es & 0.23 & 0.24 & 0.20 & 0.22 & 0.18 & 0.45 & 0.27 & 0.22 & 0.22 & 0.23 \\ pt & 0.15 & 0.21 & 0.17 & 0.22 & 0.10 & 0.19 & 0.44 & 0.19 & 0.14 & 0.23 \\ ru & 0.12 & 0.21 & 0.16 & 0.13 & 0.12 & 0.20 & 0.22 & 0.45 & 0.16 & 0.20 \\ uk & 0.10 & 0.20 & 0.16 & 0.14 & 0.09 & 0.19 & 0.19 & 0.23 & 0.25 & 0.20 \\ pl & 0.19 & 0.27 & 0.19 & 0.21 & 0.11 & 0.20 & 0.24 & 0.24 & 0.20 & 0.48 \end{tabular} \caption{Task B. Multilabel accuracy for the vector of predicted tags by mBERT. Training data consists of one thousand six hundred plots in one language. Every row shows the validation accuracy of a model pretrained on the corresponding language and validated on the plots in a language from the corresponding column.} \label{tab:task_b} \end{table*} Table \ref{tab:task_b} demonstrates that if we train the model on one language and validate it on another, the quality of the multilabel tag classification drops. This drop varies across languages and tends to be smaller for languages that belong to the same language family. \subsection{Task C} The last validation is similar to Task B, yet now we sample plots that overlap in every language. This limits us to 1500 plots in six languages that we split into train and test. Now every plot occurs in every language. Table \ref{tab:task_c} shows that the model manages to recover certain tags in one language after pre-training on another. Table \ref{tab:task_c} shows the performance of XLM-RoBERTa; mDistillBERT and mBERT demonstrate similar behavior. The performance of the models tends to be better on overlapping plots if we compare it to Task B. The detailed results can be found online\footnote{https://wandb.ai/altsoph/storydb\_eval.task3}. \begin{table}[] \begin{tabular}{l|llllll} & en & de & nl & fr & it & es \\ \hline en & 0.29 & 0.16 & 0.12 & 0.12 & 0.10 & 0.08 \\ de & 0.27 & 0.31 & 0.18 & 0.19 & 0.15 & 0.11 \\ nl & 0.34 & 0.30 & 0.32 & 0.17 & 0.13 & 0.17 \\ fr & 0.22 & 0.20 & 0.16 & 0.27 & 0.15 & 0.10 \\ it & 0.34 & 0.29 & 0.23 & 0.25 & 0.25 & 0.16 \\ ru & 0.25 & 0.21 & 0.18 & 0.18 & 0.18 & 0.13 \end{tabular} \caption{Task C. Multilabel accuracy for the vector of predicted tags by XLM-RoBERTa across the dataset of plots without cross-language overlaps. Training data consists of one thousand two hundred plots in one language. Every row shows the validation accuracy of a model pretrained on the corresponding language and validated on the plots in a language from the corresponding column.} \label{tab:task_c} \end{table} The multilabel accuracy for tag prediction declines further, yet it can be attributed neither to specific lexical properties of a particular language nor to any form of overlap of plots across languages.
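For concreteness, the following is a minimal PyTorch sketch of the shared setup behind Tasks A, B, and C: a multilingual encoder with a ten-way head trained with binary cross-entropy over the tag vector. The checkpoint matches the mBERT model referenced above, but the hyperparameters and the toy example are illustrative assumptions rather than the exact evaluation code.

\begin{verbatim}
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

TAGS = ['drama', 'comedy', 'television', 'fiction', 'series',
        'action', 'thriller', 'black-and-white', 'science fiction', 'horror']

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
head = nn.Linear(encoder.config.hidden_size, len(TAGS))
criterion = nn.BCELoss()  # binary cross-entropy over the 10-dim tag vector

def step(plots, tag_vectors):
    """One training step on a batch of plot texts and their multi-hot tag vectors."""
    batch = tokenizer(plots, truncation=True, padding=True,
                      max_length=512, return_tensors="pt")
    cls = encoder(**batch).last_hidden_state[:, 0]  # [CLS] representation
    probs = torch.sigmoid(head(cls))
    return criterion(probs, tag_vectors)

# Toy usage; in Tasks B and C the training and validation plots simply
# come from different languages.
loss = step(["A detective hunts a masked vigilante across the city."],
            torch.tensor([[1., 0., 0., 0., 0., 1., 1., 0., 0., 0.]]))
loss.backward()
\end{verbatim}

The per-tag probabilities can then be thresholded (e.g., at 0.5) to compute the Hamming score and multilabel accuracy reported above.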
This series of evaluation tasks demonstrates two crucial properties of StoryDB: \begin{itemize} \item StoryDB can be used to work with narrative structures on the most abstract, cross-lingual level; \item StoryDB allows controlling for various cross-lingual similarities of plots during ablation experiments with models of narrative. \end{itemize} \section{Discussion} We believe that a broad multilanguage dataset of narratives can facilitate several areas of narrative research. \begin{itemize} \item Cross-cultural research of narrative structure. StoryDB provides possibilities to compare the structure of narrative in various languages. Since StoryDB includes every story in its original language and is equipped with a universal system of tags, it is a natural source for such cross-cultural research. \item Classification of narratives. StoryDB includes an extensive amount of narratives for various languages alongside their genre tags. This makes it possible to develop new methods for narrative classification as well as to extensively test the ones that already exist, see for example \cite{reiter2014nlp}. \item Quantitative research of the narrative structure. \cite{y2007employing} represents a story as a cluster of emotional links and tensions between characters that progress over story time. StoryDB includes the description of the plots alongside the key characters. Such information could be insightful for a deeper quantitative understanding of narrative as a by-product of character interaction. \item Summarization of narrative. Parallel corpora in different languages contain similar descriptions of the narrative that could vary in terms of details and length. That makes StoryDB a useful tool for potential narrative summarization research such as \cite{barros2019natsum}. \item End-to-end narrative generation. StoryDB is the first dataset of narratives that we know of that contains narrative descriptions in various natural languages. \end{itemize} \section{Conclusion} This paper presents StoryDB — a broad multi-language dataset of narratives. We describe the construction of the dataset, provide the code for the whole pipeline, list the parameters of the resulting dataset, and briefly discuss several areas of natural language processing research where StoryDB could be useful for the community. We hope that StoryDB can be broadened as more plot descriptions are added in various languages. These considerations make StoryDB a flexible resource that will remain relevant for the NLP community as the subfield of quantitative narrative research moves forward. \bibliographystyle{acl_natbib}
\section{Introduction} \label{sec:intro} As a classic vision task, single-image super-resolution (SISR) restores the original high-resolution (HR) image based on a down-sampled low-resolution (LR) one. It can be applied in various applications, such as low-resolution media data enhancement or video/image upscaling for high-resolution display panels. Various classic \cite{irani1991improving,freeman2002example,timofte2013anchored,timofte2014a+} and deep learning (DL)-based \cite{dong2014learning,dong2016accelerating,shi2016real,yu2018wide,liu2020fast} SR methods have been proposed in the past. Compared with classic interpolation algorithms to improve image/video resolution, DL-based methods take advantage of learning mappings from LR to HR images from external datasets. Thus most recent SR works emerge in the DL area. However, one major limitation of existing DL-based SR methods is their high computation and storage overhead to achieve superior image quality, leading to difficulties in implementing real-time SR inference even on powerful GPUs, not to mention resource-limited edge devices. Due to the ever-increasing popularity of mobile devices and interactive on-mobile applications (such as live streaming), it is essential to derive lightweight SR models with both high image quality and low on-mobile inference latency. There exist several works targeting efficient SR models, including using an upsampling operator at the end of a network \cite{dong2016accelerating,shi2016real}, adopting channel splitting \cite{hui2019lightweight}, using wider activation \cite{yu2018wide}, and combining lightweight residual blocks with variants of group convolution \cite{liu2020fast}. Neural architecture search (NAS) is applied to derive the optimal architecture in many vision tasks. The latest works \cite{chu2019fast,song2020efficient,lee2020journey,chu2020multi} try to derive fast, lightweight, and accurate SR networks via NAS. However, their models are still too large to be implemented on mobile devices. Furthermore, these methods usually take the parameter numbers and computation counts (such as multiply-accumulate (MAC) operations) into the optimization for model efficiency, without considering the actual on-mobile implementation performance such as the inference latency. The actual mobile deployment of SR models has rarely been investigated. The most relevant works are the winner of the PIRM challenge \cite{vu2018fast}, MobiSR \cite{lee2019mobisr}, and work \cite{Zhan2021AchievingOR}. But they either require nearly one second per frame for inference, far beyond real-time, or take a long search time. Targeting real-time inference of accurate SR models for 720p resolution on various resource-limited hardware such as mobile GPUs and DSPs, this paper proposes a compiler-aware NAS framework. An adaptive SR block is introduced to conduct the depth search and per-layer width search. Each convolution (CONV) layer is paired with a mask layer in the adaptive SR block for the width search, while the depth search is realized by choosing a path between the skip connection and the masked SR block. The mask can be trained along with the network parameters via gradient descent optimizers, significantly saving training overhead. Instead of using MACs as the optimization target, the latency performance is directly incorporated into the objective function with the use of a speed model.
Our implementation can support real-time SR inference with competitive SR performance on various resource-limited platforms, including mobile GPUs and DSPs. The contributions are summarized below: \begin{itemize}[leftmargin=*] \item We propose a framework to search for the appropriate depth and per-layer width with adaptive SR blocks. \item We introduce a general compiler-aware speed model to predict the inference speed on the target device with corresponding compiler optimizations. \item The proposed framework can directly optimize the inference latency, providing the foundations for achieving real-time SR inference on mobile. \item Our proposed framework can achieve real-time SR inference (with only tens of milliseconds per frame) for the implementation of 720p resolution with competitive SR performance (in terms of PSNR and SSIM) on mobile (Samsung Galaxy S21). Our achievements can facilitate various practical SR applications with real-time requirements such as live streaming or video communication. \end{itemize} \section{Related Work} \label{sec:related work} \noindent \textbf{SR Works.} In recent years, most SR works have shifted their approaches from classic methods to DL-based methods with significant SR performance improvements. From the pioneering SRCNN \cite{dong2014learning} to later works with shortcut operators, dense connections, and attention mechanisms \cite{kim2016accurate,lim2017enhanced,zhang2018residual,zhang2018image,dai2019second}, the up-scaling quality has improved dramatically at the cost of high storage and computation overhead. Most of the works mentioned above even take seconds to process only one image on a powerful GPU, let alone on mobile devices or in video applications. \noindent \textbf{Efficient SR.} Prior SR works are hard to implement on resource-limited platforms due to their high computation and storage costs. To obtain more compact SR models, FSRCNN \cite{dong2016accelerating} postpones the position of the upsampling operator. IDN \cite{hui2018fast} and IMDN \cite{hui2019lightweight} utilize the channel splitting strategy. CARN-M \cite{ahn2018fast} explores a lightweight SR model by combining efficient residual blocks with group convolutions. SMSR \cite{wang2021exploring} learns sparse masks to prune redundant computation for efficient inference. ASSLN \cite{zhang2021aligned} and SRPN \cite{zhang2021learning} leverage structure-regularized pruning and impose regularization on the pruned structure to guarantee the alignment of the locations of pruned filters across different layers. SR-LUT \cite{jo2021practical} uses look-up tables to retrieve the precomputed HR output values for LR input pixels, with a more significant SR performance degradation. However, these SR models do not consider the actual mobile deployment, and the sizes of the models are still too large. The actual SR deployment is rarely investigated. The winner of the PIRM challenge \cite{vu2018fast}, MobiSR \cite{lee2019mobisr}, and work \cite{Zhan2021AchievingOR} explore on-device SR, but the models either take seconds for a single image, far from real-time, or require a long search time. Work \cite{ignatov2021real} considers real-time SR deployed on the powerful mobile TPU, which is not as widely available as mobile CPUs/GPUs. \noindent \textbf{NAS for SR.} NAS has been shown to outperform heuristically designed networks in various applications. Recent SR works have started to leverage NAS to find efficient, lightweight, and accurate SR models.
Works \cite{chu2019fast,chu2020multi,Zhan2021AchievingOR} leverage reinforced evolution algorithms to solve SR as a multi-objective problem. Work~\cite{ahn2021neural} uses a hierarchical search strategy to find the connection with local and global features. LatticeNet \cite{luo2020latticenet} learns the combination of residual blocks with the attention mechanism. Works~\cite{wu2021trilevel,huang2021lightweight,ding2021hrnas} search lightweight architectures at different levels with differentiable architecture search (DARTS) \cite{liu2018darts}. DARTS-based methods introduce architecture hyper-parameters which are usually continuous rather than binary, incurring additional bias during selection and optimization. Furthermore, the above-mentioned methods typically take the number of parameters or MACs into the objective function, rather than on-mobile latency, as discussed in Sec.~\ref{sec: speed_incorporation}. Thus they can hardly satisfy the real-time requirement. \noindent \textbf{Hardware Acceleration.} A significant emphasis on optimizing DNN execution has emerged in recent years \cite{lane2016deepx,xu2018deepcache,huynh2017deepmon,yao2017deepsense,han2016mcdnn,niu2020patdnn,dong2020rtmobile,jian2021radio,gong2022automatic}. There are several representative DNN acceleration frameworks, including Tensorflow-Lite \cite{TensorFlow-Lite}, Alibaba MNN \cite{Ali-MNN}, Pytorch-Mobile \cite{Pytorch-Mobile}, and TVM \cite{chen2018tvm}. These frameworks include several graph optimization techniques such as layer fusion and constant folding. \section{Motivation and Challenges} \label{sec:challenges} With the rapid development of mobile devices and real-time applications such as live streaming, it is essential and desirable to implement real-time SR on resource-limited mobile devices. However, it is challenging. To maintain or upscale the spatial dimensions of feature maps based on a large input/output size, SR models typically consume tens or hundreds of GMACs (larger than the several GMACs in image classification \cite{liu2019metapruning,wan2020fbnetv2}), incurring difficulties for real-time inference. For example, prior works on mobile SR deployment \cite{lee2019mobisr} and \cite{vu2018fast} achieve 2792ms and 912ms on-mobile inference latency, respectively, far from real-time. We can adopt NAS or pruning methods to find a lightweight SR model with fast speed on mobile devices. But there are several challenges: (\texttt{C1}) tremendous searching overhead with NAS, (\texttt{C2}) misleading magnitude during pruning, (\texttt{C3}) speed incorporation issues, and (\texttt{C4}) heuristic depth determination. \noindent \textbf{Tremendous Searching Overhead with NAS.} In NAS, the exponentially growing search space leads to tremendous search overhead. Specifically, the RL-based \cite{zoph2016neural,zhong2018practical,zoph2018learning} or evolution-based NAS methods \cite{real2019regularized,tan2019efficientnet,yang2020cars} typically need to sample large numbers of candidate models from the search space and train each candidate architecture for multiple epochs, incurring a long search time and high computation cost. Besides, differentiable NAS methods \cite{brock2017smash,bender2018understanding,liu2018darts} build super-nets to train multiple architectures simultaneously, causing significant memory cost and a limited discrete search space upper-bounded by the available memory.
To mitigate these, there are certain compromise strategies, such as proxy tasks (to search on CIFAR while targeting ImageNet) \cite{real2019regularized,zhou2020econas,yang2020cars} and performance estimation (to predict/estimate the architecture performance with some metrics) \cite{abdelfattah2021zerocost,tanaka2020pruning,ming_zennas_iccv2021}. \noindent \textbf{Misleading Magnitude during Pruning.} Pruning can also be adopted to reduce the model size; it determines the per-layer pruning ratio and pruning positions. With the assumption that weights with smaller magnitudes are less important for final accuracy, magnitude-based pruning \cite{han2016deep_compression,mao2017exploring,he2017channel,zhang2021unified,wen2016learning,gong2020privacy,li2020ss,ma2020blk,yuan2021mest} is widely employed to prune weights smaller than a threshold. However, the assumption is not necessarily true, and weight magnitudes can be misleading. Magnitude-based pruning is not able to achieve importance shifting during pruning. As detailed in Appendix~\ref{app: small_magenitude}, in iterative magnitude pruning, small weights pruned first are not able to become large enough to contribute to the accuracy. Thus layers pruned more at the beginning will be pruned more and more, causing a non-recoverable pruning policy. It becomes pure exploitation without exploration. \noindent\textbf{Speed Incorporation Issues.} \label{sec: speed_incorporation} To achieve real-time inference on mobile, it is essential to obtain the on-mobile speed performance when searching architectures. However, it is non-trivial to achieve this since testing the speed requires an additional process to interact with the mobile device for a few minutes, which can hardly be incorporated into a typical model training. To mitigate this, certain methods \cite{liu2019metapruning,wan2020fbnetv2,9522982} adopt the number of weights or computation counts as an estimate of the speed performance. Other methods \cite{wu2019fbnet,dai2019chamnet,yang2018netadapt} first collect on-mobile speed data and then build lookup tables with the speed data to estimate the speed. \noindent\textbf{Heuristic Depth Determination.} Reducing the model depth can avoid all computations in the removed layers, thus significantly accelerating the inference. Since previous NAS works do not incorporate a practical speed constraint or measurement during optimization, their search on model depth is usually heuristic. Designers determine the model depth according to a simple rule that the model should satisfy an inference budget, without a specific optimization method \cite{ming_zennas_iccv2021,liu2018progressive,zhou2020econas,yang2020cars,bender2018understanding,liu2018darts}. More effort is devoted to searching other optimization dimensions such as kernel size or width rather than model depth. \section{Our Method} \label{sec:method} We first introduce the framework and then discuss its components in detail. We also specify how it can deal with the challenges in Sec. \ref{sec:challenges}. \subsection{Framework with Adaptive SR Block} In the framework, we perform a compiler-aware architecture depth and per-layer width search to achieve real-time SR on mobile devices. The search space contains the width for each CONV layer and the number of stacked SR blocks in the model, which is too large to be explored with a heuristic method.
Therefore, we propose an adaptive SR block to implement the depth and per-layer width search, and the model is composed of multiple adaptive SR blocks. Fig.~\ref{fig:search} shows the architecture of the adaptive SR block. It consists of a masked SR block, a speed model, and an aggregation layer. The adaptive SR block has two inputs (and outputs) corresponding to the SR features and the accumulated speed, respectively. It achieves per-layer width search with mask layers in the masked SR block and depth search with an aggregation layer that chooses a path between the skip connection and the masked SR block. Besides, to obtain the on-mobile speed performance, we adopt a speed model to predict the speed of the masked SR block. The speed model is trained on our own dataset of latencies for various block width configurations, measured with compiler optimizations enabled for significant inference acceleration, so that it achieves accurate speed prediction. \begin{figure}[t] \centering \includegraphics[width=0.78\columnwidth]{figs/search_fig.pdf} \caption{Architecture of the adaptive SR block search.} \label{fig:search} \end{figure} \subsection{Per-Layer Width Search with Mask Layer for \texttt{C1} and \texttt{C2}} Width search is performed for each CONV layer in a typical WDSR block \cite{yu2018wide}. WDSR is chosen as our basic building block since it has demonstrated high efficiency in SR tasks \cite{Yu_2019_ICCV,yu2019autoslim,Cheng_2019_CVPR_Workshops}. Note that our framework is not limited to the WDSR block and can be easily extended to various residual SR blocks \cite{ahn2018fast,lim2017enhanced,hui2018fast} in the literature. To satisfy the real-time requirement, we perform a per-layer width search to automatically select an appropriate number of channels for each CONV layer in the WDSR block. Specifically, we insert a differentiable mask layer (a depth-wise $1\times 1$ CONV layer) after each CONV layer to serve as the layer-wise trainable mask, as shown below, {\small \begin{equation} \bm a_l^n = \bm m_l^n \odot (\bm w_l^n \odot \bm a_{l-1}^n), \end{equation}}% where $\odot$ denotes the convolution operation. $\bm w_l^n \in R^{o\times i\times k\times k}$ is the weight parameters in the $l^{th}$ CONV layer of the $n^{th}$ block, with $o$ output channels, $i$ input channels, and kernels of size $k\times k$. $\bm a_l^n \in R^{B\times o\times s\times s'}$ represents the output features of the $l^{th}$ layer (with the trainable mask), with $o$ channels and $s\times s'$ feature size. $B$ denotes the batch size. $\bm m_l^n \in R^{o\times 1\times 1\times 1}$ is the corresponding weights of the depth-wise CONV layer (i.e., the mask layer). We use each element of $\bm m_l^n$ as the pruning indicator for the corresponding output channel of $\bm w_l^n \odot \bm a_{l-1}^n$. Larger elements of $\bm m_l^n$ mean that the corresponding channels should be preserved, while smaller elements indicate that the channels should be pruned. Formally, we use a threshold to convert $\bm m_l^n$ into a binary mask, {\small \begin{equation} \bm b_l^n = \begin{cases} 1, & \bm m_l^n > thres, \\ 0, & \bm m_l^n \leq thres, \end{cases} \text{(element-wise)}, \end{equation}}% where $\bm b_l^n \in \{0, 1\}^{o\times 1\times 1\times 1}$ is the binarized $\bm m_l^n$. We initialize $\bm m_l^n$ with random values between 0 and 1, and the adjustable $thres$ is set to 0.5 in our case. The WDSR block with the proposed mask layers is named the masked SR block. Thus we are able to obtain a binary mask for each CONV layer.
The next problem is how to make the mask trainable, as the binarization operation is non-differentiable, leading to difficulties for back-propagation. To solve this, we integrate the Straight-Through Estimator (STE) \cite{bengio2013setimating} as shown below, {\small \begin{equation} \frac{\partial \mathcal L}{\partial \bm m_l^n} = \frac{\partial \mathcal L}{\partial \bm b_l^n}, \end{equation}}% where we directly pass the gradients through the binarization. The STE method was originally proposed to avoid non-differentiability problems in quantization tasks \cite{liu2019learning,yin2019understanding}. Without STE, some methods adopt complicated strategies to deal with the non-differentiable binary masks, such as \cite{guo2020dmcp,guan2020dais}. With the binarization and the STE method, we are able to build a trainable mask to indicate whether the corresponding channel is pruned or not. Our mask generation and training are more straightforward and simpler. For example, proxyless-NAS \cite{cai2018proxylessnas} transforms the real-valued weights to binary gates with a probability distribution and adopts a complex mask updating procedure (such as task factorizing). SMSR \cite{wang2021exploring} adopts Gumbel softmax to perform complex sparse mask CONV. Unlike proxylessNAS or SMSR, we generate binary masks simply via a threshold and train the masks directly via STE. \subsection{Speed Prediction with Speed Model for \texttt{C3}} \label{sec: speed_model_framework} To achieve real-time SR inference on mobile devices, we take the inference speed into the optimization to satisfy a given real-time threshold. It is hard to measure the practical speed or latency of various structures on mobile devices during optimization. Traditionally, the inference speed may be estimated roughly with the number of computations \cite{liu2019metapruning,wan2020fbnetv2,9522982} or a latency lookup table \cite{wu2019fbnet,dai2019chamnet,yang2018netadapt}, which can hardly provide an accurate speed. To solve this problem, we adopt a DNN-based speed model to predict the inference speed of the block. The input of the speed model is the width of each CONV layer in the block, and it outputs the block speed. As shown in Fig. \ref{fig:search}, the width of each CONV layer can be obtained through the mask layer. Thus the speed model can work seamlessly with the width search, dealing with \texttt{C3} by providing the speed performance of various architectures. To train such a speed model, we first need to build a speed dataset with block latencies for various layer width configurations in the block. Next, we can train a speed model based on the dataset to predict the speed. We find that the trained speed model is accurate in predicting the speed of different layer widths in the block (with 5\% error at most). We show the details about the dataset, speed model, and the prediction accuracy in Sec.~\ref{sec: speed_model_compiler_optimization} and Appendix B. We highlight that our speed model not only takes the masks as inputs to predict the speed, but also back-propagates the gradients from the speed loss (Eq.~\eqref{eq:loss}) to update the masks, as detailed in Sec.~\ref{sec: training_loss}, rather than just predicting performance in the forward direction as in \cite{wen2020neural}. That is why we build the speed model based on DNNs instead of look-up tables. The trainable masks and the speed model are combined comprehensively to solve the problem more efficiently.
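For concreteness, the following is a minimal PyTorch sketch of the trainable channel mask with STE binarization described above; the module and variable names are illustrative only, not our released implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Hard threshold in the forward pass; identity gradient (STE) in the backward pass."""
    @staticmethod
    def forward(ctx, m, thres=0.5):
        return (m > thres).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient unchanged to the real-valued mask.
        return grad_output, None

class MaskedConv(nn.Module):
    """A CONV layer followed by a trainable per-output-channel mask (depth-wise 1x1 gate)."""
    def __init__(self, in_ch, out_ch, k=3, thres=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.m = nn.Parameter(torch.rand(out_ch))  # random init in (0, 1)
        self.thres = thres

    def forward(self, x):
        b = BinarizeSTE.apply(self.m, self.thres)  # binary channel mask
        y = self.conv(x) * b.view(1, -1, 1, 1)     # zero out pruned channels
        return y, b.sum()                          # b.sum() is the effective layer width

layer = MaskedConv(32, 64)
out, width = layer(torch.randn(1, 32, 48, 48))
\end{verbatim}

The effective widths returned by the mask layers are the quantities fed to the speed model, and the same STE trick is reused for the path-selection parameters of the aggregation layer described in the next subsection.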
\subsection{Depth Search with Aggregation Layer for \texttt{C4}} Although reducing the per-layer width can accelerate the inference, removing a whole block avoids all of its computations, thus providing a higher speedup. Hence, besides width search, we further incorporate depth search to automatically determine the number of adaptive SR blocks in the model. Note that although the per-layer width search may also converge to zero width, which eliminates the entire block, we find that in most cases there are usually a few channels left in each block to promote the SR performance, leading to difficulties in removing the whole block. Thus it is necessary to incorporate depth search. To perform depth search, we have two paths in each adaptive SR block. As shown in Fig.~\ref{fig:search}, one path is the skip connection, and the other path is the masked SR block. In the aggregation layer, there is a switch-like parameter that controls which path the SR input goes through. If the SR input chooses the skip path, the masked SR block is skipped, and the latency of this block is just 0, leading to significant inference acceleration. The aggregation layer plays a key role in the path selection. It contains two trainable parameters $\alpha_s$ and $\alpha_b$. In the forward pass, it selects the skip path or the masked WDSR block path based on the relative relationship of $\alpha_s$ and $\alpha_b$, as shown below, {\small \begin{align} \beta_s = 0 \ \text{and} \ \beta_b =1, \ \text{if} \ \alpha_s \le \alpha_b, \\ \beta_s = 1 \ \text{and} \ \beta_b =0, \ \text{if} \ \alpha_s > \alpha_b, \end{align}}% where the binarized variables $\beta_s$ and $\beta_b$ denote the path selection ($\beta_s {=}1$ means choosing the skip path and $\beta_b{=}1$ means choosing the masked SR block path). Since the comparison operation is non-differentiable, leading to difficulties for back-propagation, we similarly adopt STE \cite{bengio2013setimating} to make it differentiable as below, {\small \begin{align} \frac{\partial \mathcal L}{\partial \alpha_s} = \frac{\partial \mathcal L}{\partial \beta_s}, \ \ \frac{\partial \mathcal L}{\partial \alpha_b} = \frac{\partial \mathcal L}{\partial \beta_b}. \end{align}} In the aggregation layer, the forward computation can be represented as below, {\small \begin{align} \bm {a}^n & = \beta_s \cdot \bm a^{n-1} + \beta_b \cdot \bm a_L^n, \\ v_n & = v_{n-1} + \beta_b \cdot v_c, \end{align}} where $\bm {a}^n$ is the SR output features of the $n^{th}$ adaptive SR block. $\bm {a}^n_L$ is the SR output features of the masked SR block in the $n^{th}$ adaptive SR block, $L$ is the maximum number of CONV layers in each block, and we have $l{\le} L$. $v_n$ is the accumulated speed or latency up to the $n^{th}$ adaptive SR block, and $v_c$ is the speed of the current block, which is predicted by the speed model. By training $\alpha_s$ and $\alpha_b$, the model can learn to switch between the skip path and the SR path to determine the model depth, thus dealing with \texttt{C4}. \subsection{Training Loss} \label{sec: training_loss} Multiple adaptive SR blocks form the SR model, which provides two outputs: the typical SR outputs and the speed outputs.
The training loss is a combination of a typical SR loss and a speed loss as below, {\small \begin{align} \mathcal L_{SPD} &= \max\{0, v_N- v_T \}, \\ \mathcal L & = \mathcal L_{SR} + \gamma \mathcal L_{SPD}, \label{eq:loss} \end{align}}% where $v_T$ is the real-time threshold, $v_N$ is the accumulated speed of $N$ blocks, and $\gamma$ is a parameter to control their relative importance. The objective is to achieve high SR performance while the speed satisfies the real-time threshold. To summarize, with the trainable masks, the speed model, and the aggregation layer in the adaptive SR block, our search algorithm achieves the following advantages: \begin{itemize}[leftmargin=*] \item The mask can be trained along with the network parameters via gradient descent optimizers, thus dealing with \texttt{C1} by saving search overhead compared with previous one-shot pruning \cite{he2017channel,frankle2018lottery} or NAS methods \cite{zoph2016neural,zhong2018practical} that train each candidate architecture for multiple epochs with huge search effort. \item Compared with magnitude-based threshold pruning, we decouple the trainable masks from the original model parameters, thus enabling exploration and overcoming the drawbacks of magnitude-based pruning, dealing with \texttt{C2}. \item We use the speed model for predicting the speed to solve \texttt{C3}, which is differentiable with respect to the trainable masks. Thus the masks are trained to find a model with both high SR performance and fast inference speed. \item We also incorporate depth search through aggregation layers to deal with \texttt{C4}. \end{itemize} \section{Compiler Awareness with Speed Model} \label{sec: speed_model_compiler_optimization} To satisfy the speed requirement with a given latency threshold on a specific mobile device, it is required to obtain the actual inference latency on the device. It is non-trivial to achieve this as the model speed varies with different model widths and depths. It is unrealistic to measure the actual on-mobile speed during the search, as the search space is quite large, and testing the mobile speed of each candidate can take a few minutes, which is not compatible with DNN training. To solve this problem, we adopt a speed model to predict the inference latency of the masked SR block with various width configurations. With the speed model, we can obtain the speed prediction as output by providing the width of each CONV layer in the SR block as input. It is fully compatible with the trainable mask, enabling a differentiable model speed with respect to the layer width. To obtain the speed model, we first build a latency dataset with latency data measured on the hardware platforms with compiler optimizations incorporated. Then the DNN speed model is trained based on the latency dataset. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figs/Com_diagram.pdf} \caption{The overview of compiler optimizations.} \label{fig:compiler} \end{figure} \noindent\textbf{Compiler Optimization.} To build a latency dataset, we need to measure the speed of various block configurations on mobile devices. Compiler optimizations are adopted during speed testing, as they can significantly accelerate the inference. The overview of the compiler optimizations is shown in Fig. \ref{fig:compiler}. To fully exploit the parallelism for a higher speedup, the key features of SR have to be considered.
As the objective of SR is to obtain an HR image from its LR counterpart, each layer has to maintain or upscale the spatial dimensions of the feature, leading to larger feature map sizes and more channels compared with classification tasks. Therefore, the data movements between memory and cache are extremely intensive. To reduce the data movements for faster inference, we adopt two important optimization techniques: 1) operator fusion and 2) decreasing the amount of data to be copied between the CPU and GPU. Operator fusion is a key optimization technique adopted in many state-of-the-art DNN execution frameworks \cite{TensorFlow-Lite,Ali-MNN,Pytorch-Mobile}. However, these frameworks usually adopt fusion approaches based on certain patterns that are too restrictive to cover the diversity of operators and layer connections. To address this problem, we classify the existing operations in the SR model into several groups based on the mapping between the input and output, and develop rules for different combinations of the groups in a more aggressive fusion manner. For instance, the CONV operation and the depth-to-space operation can be fused together. With layer fusion, both the memory consumption of the intermediate results and the number of operators can be reduced. An auto-tuning process follows to determine the best-suited parameter configurations for different mobile CPUs/GPUs and Domain-Specific Language (DSL)-based code generation. After that, a high-level DSL is leveraged to specify the operators in the computational graph of a DNN model. We show more details about compiler optimization in Appendix C. \noindent\textbf{Latency Dataset.} To train the speed model, we first measure and collect the inference speed of the WDSR block under various CONV layer width configurations. After that, a dataset of the on-mobile speed of the WDSR block with different configurations can be built. We vary the number of filters in each CONV layer as the different width configurations. The inference time is measured on the target device (Samsung Galaxy S21) by stacking 20 WDSR blocks with the same configuration, and the average latency is used as the inference time to mitigate the overhead of loading data on the mobile GPU. As the maximum number of CONV layers in each masked WDSR block is $L$, each data point in the dataset can be represented as a tuple with $L{+}2$ elements: $\{\mathcal{F}_{CONV^1}, \cdots, \mathcal{F}_{CONV^{L+1}}, \mathcal{T}_{inference}\}$, where $\mathcal{F}_{CONV^i}$, for $i{\in}\{1,\cdots, L\}$, indicates the number of input channels for the $i^{th}$ CONV layer, $\mathcal{F}_{CONV^{L+1}}$ is the number of output channels for the last CONV layer, and $\mathcal{T}_{inference}$ is the inference speed for this configuration measured in milliseconds. The entire dataset is composed of 2048 data points. \noindent\textbf{Speed Model.} With the latency dataset, the speed model can be trained on the collected data points. The inference speed estimation is a regression problem; thus, a network with 6 fully-connected layers combined with ReLU activation is used as the speed model. During the speed model training, 90\% of the data is used for training and the rest is used for validation. After training, the speed model can predict the inference time of various block configurations with high accuracy. From our results, the speed model incurs at most 5\% deviation in the speed prediction.
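As an illustration of how the predicted latency enters training, the following is a minimal PyTorch sketch of a six-layer fully-connected latency regressor together with the speed loss of Eq.~\eqref{eq:loss}; the hidden width, the assumed number of CONV layers per block, and all placeholder values are illustrative assumptions, not the exact configuration used in our experiments.

\begin{verbatim}
import torch
import torch.nn as nn

L = 3  # assumed number of CONV layers per block; in practice it follows the WDSR block

class SpeedModel(nn.Module):
    """MLP regressor: per-layer widths of one block -> predicted block latency (ms)."""
    def __init__(self, in_dim=L + 1, hidden=64):  # hidden width is an assumption
        super().__init__()
        dims = [in_dim] + [hidden] * 5 + [1]
        layers = []
        for i in range(6):                        # six fully-connected layers
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < 5:
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, widths):
        return self.net(widths).squeeze(-1)

# Combined objective (SR loss + speed loss): the pretrained, frozen speed model turns
# the per-block width vectors into a differentiable latency estimate.
speed_model = SpeedModel()
widths_per_block = [torch.tensor([32., 146., 28., 32.])]  # placeholder width vector
v_N = sum(speed_model(w) for w in widths_per_block)       # accumulated predicted latency
v_T, gamma = 40.0, 0.01                                   # real-time threshold (ms), weight
sr_loss = torch.tensor(0.05)                              # placeholder for the MAE SR loss
loss = sr_loss + gamma * torch.clamp(v_N - v_T, min=0.0)
\end{verbatim}

In the actual search, the width inputs are the sums of the binarized masks, so the gradient of the speed loss flows back into the masks through the speed model.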
The speed model has two advantages: (1) It is compatible with the width search framework as the trainable mask can be directly fed into the speed model. (2) It makes the model speed differentiable with respect to the masks, and back-propagates gradients to update the masks, thus the model can update the model speed by adjusting the layer width though back-propagation. \section{Experiments} \label{sec:exp} \begin{table*}[t] \scriptsize\sffamily \centering \renewcommand{\arraystretch}{1.2} \begin{adjustbox}{max width=0.9\textwidth} \begin{tabular}{c|c|c|c|c|cccc|cccc} \toprule \multirow{2}{*}{Scale} & \multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{ \makecell[c]{ Params \\ (K)} }} & \multicolumn{1}{c|}{\multirow{2}{*}{\makecell[c]{ MACs \\(G)}} } & \multicolumn{1}{c|}{\multirow{2}{*}{ \makecell[c]{Latency \\ (ms)}}} & \multicolumn{4}{c|}{PSNR} & \multicolumn{4}{c}{SSIM} \\ \cline{6-13} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{Set5} & \multicolumn{1}{c}{Set14} & \multicolumn{1}{c}{B100} & \multicolumn{1}{c|}{Urban100} & \multicolumn{1}{c}{Set5} & \multicolumn{1}{c}{Set14} & \multicolumn{1}{c}{B100} & \multicolumn{1}{c}{Urban100} \\ \hline \hline \multirow{14}{*}{\footnotesize $\times$2} & \textsc{FSRCNN}~\cite{dong2016accelerating} & 12 & 6.0 & 128.47 & 37.00 & 32.63 & 31.53 & 29.88 & 0.9558 & 0.9088 & 0.8920 & 0.9020 \\ & \textsc{MOREMNAS-C}~\cite{chu2020multi} & 25 & 5.5 & --- & 37.06 & 32.75 & 31.50 & 29.92 & 0.9561 & 0.9094 & 0.8904 & 0.9023 \\ & \textsc{TPSR-NOGAN}~\cite{lee2020journey} & 60 &14.0 & --- &37.38 & 33.00 & 31.75 & 30.61 & 0.9583 & 0.9123 & 0.8942 & 0.9119\\ & \textsc{LapSRN}~\cite{lai2017deep} & 813& 29.9& --- & 37.52 & 33.08 &31.80 & 30.41 & 0.9590 &0.9130 &0.8950 &0.9100 \\ & \textsc{CARN-M}~\cite{ahn2018fast} & 412 &91.2& 1049.92 & 37.53 & 33.26 & 31.92 & 31.23 & 0.9583 &0.9141 &0.8960 &0.9193\\ & \textsc{FALSR-C}~\cite{chu2019fast} & 408 &93.7 & --- & 37.66 & 33.26 & 31.96 & 31.24 & 0.9586 &0.9140 &0.8965 &0.9187 \\ & {\textsc{ESRN-V}}~\cite{song2020efficient} & 324 & 73.4 & --- & 37.85 & 33.42 &32.10 & 31.79 & 0.9600 & 0.9161 & 0.8987 & 0.9248 \\ & \textsc{EDSR}~\cite{lim2017enhanced} & 1518 & 458.0 & 2031.65 & 37.99 & 33.57 & 32.16 & 31.98 & 0.9604 & 0.9175 & 0.8994 & 0.9272 \\ & \textsc{WDSR}~\cite{yu2018wide} & 1203 & 274.1 & 1973.31 & 38.10 & 33.72 &32.25 & 32.37 & 0.9608 & 0.9182 & 0.9004 & 0.9302 \\ & { \textsc{SMSR}~\cite{wang2021exploring}} & 985 & 131.6 & --- & 38.00 & 33.64 & 32.17 & 32.19 & 0.9601 & 0.9179 & 0.8990 & 0.9284 \\ & { \textsc{SRPN-L}~\cite{zhang2021learning}} & 609 & 139.9& --- & 38.10 & 33.70 & 32.25 & 32.26 & 0.9608 & 0.9189 & 0.9005 & 0.9294 \\ & \textbf{Ours ($v_T{=}100$}ms\textbf{)} & 47 & 11.0 & \textbf{98.90} & 37.64 & 33.16 & 31.91 & 31.08 & 0.9591 & 0.9136 & 0.8961 & 0.9170 \\ & \textbf{Ours ($v_T{=}70$}ms\textbf{)} & 28 & 6.6 & \textbf{66.09} & 37.49 & 33.05 & 31.81 & 30.76 & 0.9584 & 0.9123 & 0.8946 & 0.9135 \\ & \textbf{Ours ($v_T{=}40$}ms, \textbf{real-time)} & 11 & 2.5 & \textbf{34.92} & 37.19 & 32.80& 31.60 & 30.15 &0.9572 & 0.9099 & 0.8919 & 0.9054 \\ \hline \multirow{16}{*}{\footnotesize $\times$4} & \textsc{FSRCNN}~\cite{dong2016accelerating} & 12 & 4.6 & 98.13 & 30.71 & 27.59 & 26.98 & 24.62 & 0.8657& 0.7535 &0.7150 &0.7280 \\ & {\textsc{TPSR-NOGAN}}~\cite{lee2020journey} & 61 &3.6 & 55.82 &31.10 & 27.95 & 27.15 & 24.97 & 0.8779 &0.7663 &0.7214 &0.7456 \\ & {\textsc{FEQE-P}}~\cite{vu2018fast} & 96 & 5.6 & 82.81 & 
31.53 & 28.21 &27.32 &25.32 & 0.8824 & 0.7714 & 0.7273 & 0.7583 \\ & \textsc{CARN-M}~\cite{ahn2018fast} & 412 &32.5 & 374.15 & 31.92 & 28.42 &27.44 & 25.62 & 0.8903 &0.7762 &0.7304 &0.7694\\ & {\textsc{ESRN-V}}~\cite{song2020efficient} & 324 & 20.7 & --- & 31.99 & 28.49 & 27.50 & 25.87 & 0.8919 &0.7779 &0.7331 &0.7782 \\ & {\textsc{IDN}}~\cite{hui2018fast} & 600 & 32.3 & --- & 31.99 & 28.52 & 27.52 & 25.92 & 0.8928 &0.7794 &0.7339 &0.7801 \\ & \textsc{EDSR}~\cite{lim2017enhanced} & 1518 & 114.5 & 495.90 & 32.09 & 28.58 & 27.57 & 26.04 & 0.8938 & 0.7813 & 0.7357 & 0.7849 \\ &\textsc{DHP-20}~\cite{li2020dhp}& 790 & 34.1 & --- & 31.94 &28.42 & 27.47 & 25.69 & \quad--- & \quad--- & \quad--- &\quad--- \\ &\textsc{IMDN}~\cite{hui2019lightweight}& 715 & 40.9 & --- & 32.21 & 28.58 & 27.56 & 26.04 & 0.8948 &0.7811 &0.7353 & 0.7838\\ & {\textsc{WDSR}}~\cite{yu2018wide} & 1203 & 69.3 & 533.02 & 32.27 & 28.67 & 27.64 &26.26 & 0.8963 &0.7838 &0.7383 &0.7911 \\ & {\textsc{SR-LUT-S}~\cite{jo2021practical} } & 77 & --- & --- & 29.77 & 26.99 & 26.57 & 23.94 & 0.8429 & 0.7372 & 0.6990 & 0.6971 \\ & {\textsc{SMSR}~\cite{wang2021exploring} } & 1006 & 41.6 & --- & 32.12 & 28.55 & 27.55 & 26.11 & 0.8932 & 0.7808 & 0.7351 & 0.7868 \\ & { \textsc{SRPN-L}~\cite{zhang2021learning}} & 623 & 35.8& --- & 32.24 & 28.69 & 27.63 & 26.16 & 0.8958 & 0.7836 & 0.7373 & 0.7875 \\ & \textbf{Ours ($v_T{=}100$}ms\textbf{)} & 188 & 10.8 & \textbf{93.50} & 32.02 & 28.50 & 27.51 & 25.83 & 0.8922 & 0.7778 & 0.7328 & 0.7769 \\ & \textbf{Ours ($v_T{=}70$}ms\textbf{)} & 116 & 6.7 & \textbf{64.95} & 31.88 & 28.43 & 27.46 &25.69 & 0.8905 & 0.7760 & 0.7312 & 0.7715 \\ & \textbf{Ours ($v_T{=}40$}ms,\textbf{real-time)} & 66 & 3.7 & \textbf{36.46} & 31.73 & 28.28 & 27.34 &25.44 & 0.8878 & 0.7725 &0.7281 & 0.7620 \\ \bottomrule \multicolumn{13}{l}{$*$ Some latency results are not reported as the models are not open-source or contain operators that cannot run on mobile GPU. } \\ \multicolumn{13}{l}{$\dag$ The latency results are measured on the GPU of Samsung Galaxy S21. } \end{tabular} \end{adjustbox} \caption{Comparison with SOTA efficient SR models for implementing 720p.} \label{table:result_sr} \end{table*} \subsection{Experimental Settings} \noindent\textbf{SR Datasets.} All SR models are trained on the training set of DIV2K \cite{Agustsson_2017_CVPR_Workshops} with 800 training images. For evaluation, four benchmark datasets Set5 \cite{bevilacqua2012low}, Set14 \cite{yang2010image}, B100 \cite{martin2001database}, and Urban100 \cite{huang2015single} are used for test. The PSNR and SSIM are calculated on the luminance channel (a.k.a. Y channel) in the YCbCr color space. \noindent\textbf{Evaluation Platforms and Running Configurations.} The training codes are implemented with PyTorch. 8 GPUs are used to conduct the search, which usually finishes in 10 hours. The latency is measured on the GPU of an off-the-shelf Samsung Galaxy S21 smartphone, which has the Qualcomm Snapdragon 888 mobile platform with a Qualcomm Kryo 680 Octa-core CPU and a Qualcomm Adreno 660 GPU. Each test takes 50 runs on different inputs with 8 threads on CPU, and all pipelines on GPU. The average time is reported. \noindent\textbf{Training Details.} $48\times48$ RGB image patches are randomly sampled from LR images for each input minibatch. We use the architecture of WDSR with 16 blocks as the backbone of our NAS process. 
Considering the huge input size of SR (normally nHD, i.e., $640{\times}360$, inputs or higher resolution for the $\times2$ task), a compact version of the WDSR block is chosen to fit the mobile GPU, where the largest filter number for each CONV layer is 32, 146, and 28, respectively. The backbone is initialized with the parameters of the pretrained WDSR model. The traditional MAE loss is used as the SR loss to measure the differences between the SR image and the ground-truth. The parameter $\gamma$ in the training loss denoted as Eq.~\eqref{eq:loss} is set to 0.01. The first 20 epochs are used for the NAS process, and the following 30 epochs for fine-tuning the searched model. ADAM optimizers with $\beta_{1}{=}0.9$, $\beta_{2}{=}0.999$, and $\epsilon{=}\num{1e-8}$ are used for both the model optimization and the fine-tuning process. The learning rate is initialized as $\num{1e-4}$ and reduced by half at epochs 10 and 16 in the NAS process and at epochs 20 and 25 in the fine-tuning process, respectively. The details of the searched architecture are in Appendix D. \noindent\textbf{Baseline Methods. } We compare with traditional human-designed SR models such as FSRCNN and EDSR. We also include baselines that optimize speed or hardware efficiency with NAS approaches; for example, TPSR-NOGAN, FALSR-C, and ESRN-V optimize SR efficiency to facilitate deployment on end devices. Moreover, we compare with methods that explore the sparsity in SR models for efficient inference, such as DHP, SMSR, and SRPN-L. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{figs/eccv_patch.pdf} \caption{Visual Comparisons with other methods on Urban100/B100 for $\times$4 SR.} \label{fig:patch} \vspace{-0.2cm} \end{figure*} \subsection{Experimental Results} \noindent\textbf{Comparison with Baselines on SR Performance.} The comparisons of the models obtained by the proposed framework with state-of-the-art efficient SR works are shown in Table \ref{table:result_sr}. Two commonly used metrics (PSNR and SSIM) are adopted to evaluate image quality. The evaluations are conducted on $\times$2 and $\times$4 scales. For a fair comparison, we start from different low-resolution inputs but the high-resolution outputs are 720p ($1280{\times} 720$). To make a comprehensive study, the latency threshold $v_T$ is set to different values. Specifically, as real-time execution typically requires at least 25 frames$/$sec (FPS), the latency threshold $v_T$ is set as 40ms to obtain SR models for real-time inference. For the $\times$2 scale, the model obtained with latency threshold $v_T$=100ms outperforms TPSR-NOGAN, LapSRN, and CARN-M in terms of PSNR and SSIM with fewer parameters and MACs. Compared with FALSR-C, ESRN-V, EDSR, WDSR, SMSR, and SRPN-L, our model greatly reduces the model size and computations while maintaining competitive image quality. By setting $v_T$ as 70ms, our model has similar parameters and MACs as MOREMNAS-C, but achieves higher PSNR and SSIM performance. Similar results can be obtained on the $\times$4 scale. Furthermore, for both scales, by setting $v_T$ as 40ms, we obtain extremely lightweight models that still maintain satisfactory PSNR and SSIM performance on all four datasets. Although SR-LUT uses look-up tables for efficient SR inference, it suffers from more significant SR performance degradation. The visual comparisons with other SR methods for the $\times$4 up-scaling task are shown in Fig.~\ref{fig:patch}.
Our model can recover details comparably to or even better than other methods while using fewer parameters and computations. \noindent\textbf{Comparison with Baselines on Speed Performance.} In general, our method can achieve real-time SR inference (higher than 25 FPS) for 720p resolution up-scaling with competitive image quality in terms of PSNR and SSIM on mobile platforms (Samsung Galaxy S21). Compared with \cite{wang2021exploring}, which also explores the sparsity of SR models, our method achieves a more significant model size and computation reduction (our 11GMACs vs. 131.6GFLOPs \cite{wang2021exploring} for the $\times$2 scale), leading to faster speed (our 11.3ms vs. 52ms \cite{wang2021exploring} on an Nvidia A100 GPU). \begin{figure}[t] \begin{floatrow} \ffigbox{ \includegraphics[width=0.9\linewidth]{figs/Human_Design_vs_NAS.pdf} }{ \caption{Comparison of $\times2$ SR results between searched models and heuristic models on Set5 with latency measured on the GPU of Samsung Galaxy S21.} \label{fig:search_vs_human} } \capbtabbox{ \scriptsize\sffamily \centering \renewcommand{\arraystretch}{1.1} \begin{adjustbox}{max width=0.45\textwidth} \begin{tabular}{cc|c|cc|cc} \toprule \multicolumn{2}{c|}{Search Method} & \multirow{2}{*}{ \makecell[c]{Latency \\ (ms)}} & \multicolumn{2}{c|}{Set 5 } & \multicolumn{2}{c}{Urban100 } \\ \cline{1-2} \cline{4-7} \multicolumn{1}{c|}{ \makecell[c]{Width \\ Search} } & \makecell[c]{Depth\\ Search} & & PSNR & SSIM & PSNR & SSIM \\ \hline \hline \multicolumn{1}{c|}{\xmark } & \xmark & 150.92 & 37.62 & 0.9589 & 31.03 & 0.9164 \\ \hline \multicolumn{1}{c|}{\xmark } & \cmark & 111.58 & 37.65 & 0.9591 & 31.10 & 0.9172 \\ \hline \multicolumn{1}{c|}{\cmark } & \xmark & 108.38 & 37.65 & 0.9591 & 31.02 & 0.9161 \\ \hline \multicolumn{1}{c|}{\cmark } & \cmark & \textbf{98.90} & 37.64 & 0.9591 & 31.08 & 0.9170 \\ \bottomrule \end{tabular} \end{adjustbox} }{ \caption{Comparison of different search schemes for the $\times2$ scale. The performance is evaluated on the Set5 and Urban100 datasets.} \label{table:ablation} } \end{floatrow} \end{figure} \noindent\textbf{Comparison with Heuristic Models.} We compare our searched models with heuristic models, which are obtained by evenly reducing the depth and width of the WDSR model. Since we do not search per-layer width in heuristic models, the width is the same among all blocks in one heuristic model. For a fair comparison, the same compiler optimization framework is adopted for both searched models and heuristic models. As shown in Fig.~\ref{fig:search_vs_human}, the NAS approach achieves faster inference than the heuristic models under the same PSNR, demonstrating the effectiveness of the search approach. \noindent\textbf{Compiler Optimization Performance.} To demonstrate the effectiveness of our compiler optimizations, we implement CARN-M \cite{ahn2018fast}, FSRCNN \cite{dong2016accelerating}, and our searched model with the open-source MNN framework. By comparing their PSNR and FPS performance, we find that our model achieves higher FPS and PSNR than the baseline models, with detailed results in Appendix E. We also compare with the compilation of \cite{huynh2017deepmon}, detailed in Appendix F. \noindent\textbf{Performance on Various Devices.} Our main results are trained and tested on the mobile GPU. We highlight that our method can be easily applied to all kinds of devices with their corresponding speed models.
To demonstrate this, we perform compiler optimizations for the DSP on the mobile device and train the corresponding speed model. With the new speed model, we use our method to search an SR model for the DSP, which achieves 37.34 PSNR on Set5 with 32.51 ms inference time for the $\times2$ up-scaling task, as detailed in Appendix G. \subsection{Ablation Study} For the ablation study, we investigate the influence of depth search and per-layer width search separately for the $\times$2 scale task. Multiple runs are taken for each search method with different latency thresholds $v_T$ so that the searched models have similar PSNR and SSIM on Set5, providing a clear comparison. From the results in Table \ref{table:ablation}, we can see that both depth search alone and width search alone greatly reduce the latency with better image quality than the non-search case. Specifically, as a missing piece in many prior SR NAS works, depth search provides better PSNR and SSIM performance than width search on Urban100 with a slightly higher latency, which shows the importance of this search dimension. By combining depth search and width search, we reach faster inference with similar PSNR and SSIM compared with conducting either search alone. \section{Conclusion} \label{sec:conclusion} We propose a compiler-aware NAS framework to achieve real-time SR on mobile devices. An adaptive WDSR block is introduced to conduct depth search and per-layer width search. The latency is directly incorporated into the optimization objective by leveraging a speed model that accounts for compiler optimizations. With the framework, we achieve real-time SR inference for 720p up-scaling with competitive SR performance on mobile. \\ \noindent\textbf{Acknowledgments.} The research reported here was funded in whole or in part by the Army Research Office/Army Research Laboratory via grant W911-NF-20-1-0167 to Northeastern University. Any errors and opinions are not those of the Army Research Office or Department of Defense and are attributable solely to the author(s). This research is also partially supported by National Science Foundation CCF-1937500 and CNS-1909172.
\section{Introduction} We consider a model that generalizes many previously studied optimization problems in the framework of scheduling and (minimization) load balancing problems on parallel machines. We use this generalization in order to exhibit that there is a standard way to design efficient polynomial time approximation schemes for all these special cases and for new special cases as well. In earlier works, approximation schemes for many special cases of our model were developed using ad-hoc tricks; we show that such ad-hoc methods are not necessary. Before going into the details of the definition of our model, we define the types of approximation schemes. A $\rho$-approximation algorithm for a minimization problem is a polynomial time algorithm that always finds a feasible solution of cost at most $\rho$ times the cost of an optimal solution. The infimum value of $\rho$ for which an algorithm is a $\rho$-approximation is called the approximation ratio or the performance guarantee of the algorithm. A polynomial time approximation scheme (PTAS) for a given problem is a family of approximation algorithms such that the family has a $(1+\eps)$-approximation algorithm for any $\eps >0$. An efficient polynomial time approximation scheme (EPTAS) is a PTAS whose time complexity is upper bounded by a function of the form $f(\frac{1}{\eps}) \cdot poly(n)$ where $f$ is some computable (not necessarily polynomial) function and $poly(n)$ is a polynomial in the length of the (binary) encoding of the input. A fully polynomial time approximation scheme (FPTAS) is a stricter concept. It is defined like an EPTAS, with the added restriction that $f$ must be a polynomial in $\frac 1{\eps}$. Note that whereas a PTAS may have time complexity of the form $n^{g(\frac{1}{\eps})}$, where $g$ is for example linear or even exponential, this cannot be the case for an EPTAS. The notion of an EPTAS is modern and finds its roots in the FPT (fixed parameter tractable) literature (see e.g. \cite{CT97,DF99,FG06,Marx08}). Since these problems are strongly NP-hard \cite{Garey:1979:CIG:578533} (as, for example, our model is an extension of the minimum makespan problem on identical machines), it is unlikely (impossible assuming $\textmd{P}\neq \mathnormal{NP}$) that an optimal polynomial time algorithm or an FPTAS exists for them. In our research, we focus on finding an EPTAS for this general model and, as a by-product, obtain improved results for many of its special cases. As usual, in order to present an EPTAS it suffices to show that for a sufficiently small value of $\eps$ there exists an algorithm of time complexity of the form $f(\frac{1}{\eps}) \cdot poly(n)$ with an approximation ratio of $1+\kappa \eps$ for an arbitrary constant $\kappa$ (independent of $\eps$). \paragraph{{\bf Our model}.} Being a scheduling problem, the definition of the problem can be partitioned into the characteristics of the machines, the properties of the jobs, and the objective function. \paragraph{Machines characteristics.} We are given $m$ machines denoted as $\{ 1,2,\ldots ,m\}$, each of which can be activated to work in one of $\tau$ types denoted as $1,2,\ldots ,\tau$. The type of the machine will influence the processing time of a job assigned to that machine. The input defines for every machine $i$ a (positive rational) speed $s_i$ and an activation cost function $\alpha_i(t)$ whose value is a non-negative rational number denoting the cost of activating machine $i$ in type $t$.
We are also given a budget $\hat{A}$ on the total activation cost of all machines. The meaning of this budget is that a feasible solution needs to specify for every machine $i$ its type $t_i$ such that the total activation cost is at most the budget, that is, the following constraint holds $$ \sum_{i=1}^m \alpha_i(t_i) \leq \hat{A} \ . $$ In our work we assume that $\tau$ is a constant while $m$ is a part of the input. Furthermore, without loss of generality we assume that $1=s_1\geq s_2\geq \cdots \geq s_m >0$. \paragraph{Jobs characteristics.} There are $n$ jobs denoted as $J=\{ 1,2,\ldots ,n\}$. Job $j$ is associated with a size ($\tau$-dimensional) vector $p_j$ that specifies the size $p_j(t)$ of job $j$ if it is assigned to a machine of type $t$. That is, if job $j$ is assigned to machine $i$, and we activate machine $i$ in type $t$, then the processing time of job $j$ (on this machine) is $\frac{p_j(t)}{s_i}$. Furthermore, for every job $j$ we are given a rejection penalty $\pi_j$ that is a positive rational number denoting the cost of not assigning job $j$ to any machine. A feasible solution specifies for every job $j$ whether $j$ is rejected (and thus incurs a rejection penalty of $\pi_j$) or not, and if it is not rejected (i.e., $j$ is accepted), the machine $i$ that $j$ is assigned to. Formally, we need to specify a {\it job assignment} function $\sigma:J \rightarrow \{ 0,1,2,\ldots ,m\}$, where $\sigma(j)=0$ means that $j$ is rejected, and $\sigma(j)=i$ for $i\geq 1$ means that $j$ is assigned to machine $i$. \paragraph{Definition of the objective function.} As stated above, a feasible solution defines a type $t_i$ for every machine $i$, and a job assignment function $\sigma$. The load of machine $i$ in this solution is $$\Lambda_i=\frac{\sum_{j\in J : \sigma(j)=i} p_{j}(t_i)}{s_i} \ . $$ Our objective function is specified using a function $F$ defined over the vector of the loads of the machines $F(\Lambda_1,\Lambda_2, \ldots ,\Lambda_m)$ that is the assignment cost of the jobs to the machines. $F$ is defined by two scalar parameters $\phi > 1$ and $1 \geq \psi \geq 0$ as follows: $$F(\Lambda_1,\Lambda_2, \ldots ,\Lambda_m) = \psi \cdot \max_{i=1}^m \Lambda_i + (1-\psi) \cdot \sum_{i=1}^m (\Lambda_i)^{\phi} \ .$$ The value of $\psi$ has the following meaning. For $\psi=1$, the value of $F$ is the makespan of the schedule, i.e., the maximum load of any machine, while for $\psi=0$, the value of $F$ is the sum of the $\phi$ powers of the loads of the machines, an objective that is equivalent to the $\ell_{\phi}$ norm of the vector of loads. For $\psi$ strictly between $0$ and $1$, the value of $F$ is a convex combination of these classical objectives in the load balancing literature. The most common values of $\phi$, motivated by various applications considered in the literature, are $\phi=2$ and $\phi=3$. Our objective is to find a type $t_i$ for every machine $i$ such that $ \sum_{i=1}^m \alpha_i(t_i) \leq \hat{A}$, and a job assignment $\sigma$ so that the following objective function (denoted as $obj$) is minimized: $$obj = F(\Lambda_1,\Lambda_2, \ldots ,\Lambda_m) + \sum_{j\in J: \sigma(j)=0} \pi_j \ . $$ Our result is an EPTAS for this load balancing problem. For ease of notation we denote this problem by $P$ and let $\eps>0$ be such that $1/\eps \geq 100$ is an integer. We will use the fact that $\phi$ is a constant and the simple property stated in Lemma \ref{claim_prop_F} below throughout the analysis.
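For concreteness, the following short Python sketch evaluates a candidate solution to $P$ under this notation on a toy instance; all numbers in it are made up for illustration and are not part of the model or of the algorithm developed below.
\begin{verbatim}
# Illustrative evaluation of a candidate solution to problem P;
# the toy instance below is a made-up placeholder.
def evaluate(speeds, activation_cost, budget, sizes, penalties,
             machine_types, assignment, phi=2.0, psi=0.5):
    """Return obj = F(loads) + total rejection penalty, or None if the
    activation budget is violated; assignment[j] = 0 means job j is rejected."""
    m = len(speeds)
    # Types are indexed 0..tau-1 here (the text uses 1..tau).
    if sum(activation_cost[i][machine_types[i]] for i in range(m)) > budget:
        return None                                   # budget constraint violated
    loads = [0.0] * m
    rejection = 0.0
    for j, i in enumerate(assignment):
        if i == 0:
            rejection += penalties[j]                 # job j is rejected
        else:
            loads[i - 1] += sizes[j][machine_types[i - 1]] / speeds[i - 1]
    F = psi * max(loads) + (1 - psi) * sum(l ** phi for l in loads)
    return F + rejection

# Toy instance: 2 machines, 2 types, 3 jobs (all values are invented).
speeds = [1.0, 0.5]
activation_cost = [[0.0, 1.0], [0.0, 2.0]]            # alpha_i(t)
sizes = [[3.0, 1.0], [2.0, 2.0], [4.0, 1.5]]          # p_j(t)
penalties = [5.0, 1.0, 6.0]
print(evaluate(speeds, activation_cost, budget=2.0, sizes=sizes,
               penalties=penalties, machine_types=[1, 0],
               assignment=[1, 0, 2]))
\end{verbatim}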
\begin{lemma}\label{claim_prop_F} Let $\rho>1$, and let $(\Lambda_1,\ldots ,\Lambda_m)$ and $(\Lambda'_1,\ldots ,\Lambda'_m)$ be two vectors such that for every $i$ we have $\Lambda_i \leq \Lambda'_i \leq (1+\eps)^{\rho} \Lambda_i$. Then $$F(\Lambda_1,\Lambda_2, \ldots ,\Lambda_m) \leq F(\Lambda'_1,\Lambda'_2, \ldots ,\Lambda'_m)\leq (1+\eps)^{\rho \cdot \phi} F(\Lambda_1,\Lambda_2, \ldots ,\Lambda_m) \ .$$ \end{lemma} \begin{proof} Using the definition of $F$ we have \begin{eqnarray*} F(\Lambda_1,\Lambda_2, \ldots ,\Lambda_m) &=& \psi \cdot \max_{i=1}^m \Lambda_i + (1-\psi) \cdot \sum_{i=1}^m (\Lambda_i)^{\phi} \\ &\leq& \psi \cdot \max_{i=1}^m \Lambda'_i + (1-\psi) \cdot \sum_{i=1}^m (\Lambda'_i)^{\phi} \\ &=& F(\Lambda'_1,\Lambda'_2, \ldots ,\Lambda'_m) \\ &\leq & \psi \cdot \max_{i=1}^m (1+\eps)^{\rho} \Lambda_i + (1-\psi) \cdot \sum_{i=1}^m ((1+\eps)^{\rho}\Lambda_i)^{\phi}\\ &\leq & (1+\eps)^{\rho\cdot \phi} \cdot \left( \psi \cdot \max_{i=1}^m \Lambda_i + (1-\psi) \cdot \sum_{i=1}^m (\Lambda_i)^{\phi} \right) \\ &=& (1+\eps)^{\rho\cdot \phi} \cdot F(\Lambda_1,\Lambda_2, \ldots ,\Lambda_m) . \end{eqnarray*} \qed \end{proof} \paragraph{Special cases of our model and related literature on these cases.} The objective function we consider here generalizes the makespan minimization objective (the special case with all $\pi_j=\infty$ and $\psi=1$), the sum of the $\phi$ powers of the machine loads (the special case with all $\pi_j=\infty$ and $\psi=0$), as well as these two objectives with job rejections (i.e., finite $\pi_j$ for some $j\in J$). As for the machine model that we consider, next we state some of the earlier studied special cases of this model. We say that {\em machines have pre-specified type} if $\hat{A}=0$ and for every $i$ we have a value $t_i$ such that $\alpha_i(t_i)=0$ and $\alpha_i(t)=1$ if $t\neq t_i$. This special case of the machine environment is the case of unrelated machines with a constant number of types, whose special case where machines have a common speed was studied in \cite{JM17}, where an EPTAS for the makespan objective was presented (the extension of this scheme to machines of different speeds was explored in \cite{JM17v2}). This EPTAS of \cite{JM17} improves earlier PTAS's for these special cases \cite{BW12,WBB13,GJKS16}. The $\ell_p$-norm minimization objective for the case where machines have pre-specified type and all speeds are $1$ admits a PTAS \cite{BW12}. The case where machines have pre-specified type generalizes its special case of {\em uniformly related machines}, which is the case where $\tau=1$. For this machine model, Jansen \cite{Ja10} presented an EPTAS for the makespan objective improving the earlier PTAS established in the seminal work of Hochbaum and Shmoys \cite{HS88}, while Epstein and Levin \cite{EL13} presented an EPTAS for the minimization of the $\ell_p$-norm of the vector of machine loads improving an earlier PTAS by Epstein and Sgall \cite{ES04}. Later on, Epstein and Levin \cite{DBLP:journals/corr/EpsteinL14} presented an EPTAS for another scheduling objective, namely total weighted completion time, and their scheme for the case where all jobs are released at time $0$ implies a different EPTAS for the minimization of the sum of squares of the loads of the machines on uniformly related machines.
As far as we know, the two schemes of \cite{EL13,DBLP:journals/corr/EpsteinL14} are the only examples of EPTAS's for load balancing objectives on uniformly related machines where one cannot use the dual approximation method of \cite{HS87,HS88}. Our approach here is based on \cite{DBLP:journals/corr/EpsteinL14}. The case of identical machines is the special case of uniformly related machines where all speeds are equal. See \cite{HS87,HocBook,AAWY98} for earlier approximation schemes for this case. The next special objective we consider here is {\em scheduling with rejection}. This is the special case of our objective function where $\pi_j$ is finite (at least for some jobs). In \cite{BLMSS00,ES04} there is a PTAS for this variant (for $\psi\in \{ 0,1\}$) on identical machines and on uniformly related machines. The last special case we consider here is the {\em machines with activation costs} model that was considered by \cite{DBLP:conf/soda/KhullerLS10}. They considered the special case of our model with the makespan objective and $\tau=2$, with $\alpha_i(1)=0$ for all $i$, and $p_j(1)=\infty$ for all $j\in J$. In this case activating a machine as type $1$ means that the machine is not operating and cannot process any job. For this case \cite{DBLP:conf/soda/KhullerLS10} presents a PTAS. We summarize the previously studied special cases with a reference to the previously known approximation scheme of the best complexity class (i.e., we cite the first EPTAS if there is one, and the first PTAS if an EPTAS was not known prior to our work) in Table \ref{table1}. \begin{table}[h!] \begin{center} {\normalsize \begin{tabular}{| l | c | r |} \hline Definition of the special case using our notation&PTAS/EPTAS& Reference\\ \hline \hline $\tau=1$, $\psi=1$, and $\pi_j=\infty\ \forall j$ & EPTAS & \cite{Ja10}\\ \hline $\tau=1$, $\psi=0$, and $\pi_j=\infty\ \forall j$ & EPTAS & \cite{EL13}\\ \hline $\tau=1$, $\psi=1$, and $s_i=1\ \forall i$ & {\bf PTAS} & \cite{BLMSS00}\\ \hline $\tau=1$, $\psi=1$ or $\psi=0$ & {\bf PTAS} & \cite{ES04}\\ \hline Machines with pre-specified type, $\psi=1$ and $\pi_j=\infty\ \forall j$ & EPTAS & \cite{JM17v2}\\ \hline Machines with pre-specified type, $\psi=0$, $s_i=1 \ \forall i$, and $\pi_j=\infty\ \forall j$ & {\bf PTAS} & \cite{BW12} \\ \hline $\tau=2$, $\alpha_i(1)=0 \ \forall i$, $p_j(1)=\infty \ \forall j$, $\psi=1$, and $\pi_j=\infty\ \forall j$ & {\bf PTAS } & \cite{DBLP:conf/soda/KhullerLS10}\\ \hline \end{tabular} \vspace{0.2in} \caption{Summary of previous studies of special cases of problem $P$. For every row for which the second column is a PTAS, our EPTAS is the first efficient polynomial time approximation scheme for this special case.} \label{table1} } \end{center} \end{table} \paragraph{Outline of the scheme.} We apply geometric rounding of the parameters of the input (see Section \ref{sec:round}), followed by a guessing step in which we guess for each type the minimum index of the machine that is activated to this type together with its approximated load (see Section \ref{sec:guess}). This guessing is motivated by a standard characterization of near-optimal solutions that is described in Section \ref{sec:nice}.
Based on these rounding and guessing steps, we formulate a mixed integer linear program (MILP) that is solved to optimality in polynomial time using \cite{Len83,Kan83} and the property that the number of integer variables is a constant (see Section \ref{sec:MILP} for the derivation of this mathematical program), and we prove that the optimal cost of our scheduling problem $P$ is approximated by the cost of the solution obtained for the MILP. Last, we round the solution of the MILP into a feasible solution to problem $P$ whose cost is approximately the cost of the solution of the MILP (see Section \ref{sec:round-milp} for a description of this step and its analysis). \section{Rounding of the input\label{sec:round}} In what follows we would like to assume that the speed of each machine is an integer power of $1+\eps$, and that for every job $j$ and type $t$, the size $p_j(t)$ is an integer power of $1+\eps$. Given an instance $I$ of problem $P$ that does not satisfy these conditions, we round down the speed of each machine $i$ to an integer power of $1+\eps$, and for each job $j$ and type $t$, we round up the value of $p_j(t)$ to an integer power of $1+\eps$. That is, we create a new rounded instance $I'$ in which the speed of machine $i$ is $s'_i$, and for each job $j$ and type $t$, we let $p'_j(t)$ be its size if it is assigned to a machine of type $t$, where we define $$ s'_i = (1+\eps)^{\lfloor \log_{1+\eps} s_i \rfloor} \ \ \ \ \forall i \ \ \ , \ \ \ p'_j(t) = (1+\eps)^{\lceil \log_{1+\eps} p_j(t) \rceil} \ \ \ \ \ \forall j,t \ .$$ The other parameters of the input are left in $I'$ as they were in $I$. The analysis of this step is given in the following lemma, which follows using standard arguments. Recall that a feasible solution to $P$ means selecting a type for each machine satisfying the total activation cost constraint and specifying a job assignment function. \begin{lemma}\label{rounding_step_lem} Given a feasible solution to $I$ of cost $C_I$, the same solution is a feasible solution to $I'$ of cost (evaluated as a solution to $I'$) at most $(1+\eps)^{2\phi} \cdot C_I$. Given a feasible solution to $I'$ of cost $C_{I'}$, the same solution is a feasible solution to $I$ of cost (evaluated as a solution to $I$) at most $C_{I'}$. \end{lemma} \begin{proof} Consider a job assignment function $\sigma$, and a selection of type $t_i$ for every machine $i$. The feasibility conditions in $I$ and in $I'$ are the same because the total activation budget constraint is satisfied in $I$ if and only if it is satisfied in $I'$, as the activation cost functions as well as the value of $\hat{A}$ are the same in the two instances. It remains to consider the cost of this assignment as a solution to $I$ and as a solution to $I'$. The total rejection penalty of the jobs rejected by $\sigma$ is the same in the two instances. Consider machine $i$, and let $\Lambda_i$ be its load in $I$, and $\Lambda'_i$ be its load in $I'$ (with respect to the given solution). Assume that the solution we consider activates machine $i$ as type $t$. Then, $$\Lambda_i = \frac{ \sum_{j\in J: \sigma(j)=i} p_j(t)}{s_i} \mbox{ and } \Lambda'_i = \frac{ \sum_{j\in J: \sigma(j)=i} p'_j(t)}{s'_i} \ . $$ By our definition of the rounding, we have that $s'_i \leq s_i \leq (1+\eps)s'_i$ and for all $j$, $(1+\eps) p_j(t) \geq p'_j(t) \geq p_j(t)$.
Therefore, $$ \Lambda_i = \frac{ \sum_{j\in J: \sigma(j)=i} p_j(t)}{s_i} \leq \frac{ \sum_{j\in J: \sigma(j)=i} p'_j(t)}{s'_i} = \Lambda'_i \ \ ,$$ while $$ \Lambda'_i = \frac{ \sum_{j\in J: \sigma(j)=i} p'_j(t)}{s'_i} \leq \frac{ \sum_{j\in J: \sigma(j)=i} (1+\eps)^2 \cdot p_j(t)}{s_i}= (1+\eps)^2 \Lambda_i \ .$$ Thus, we conclude that for every machine $i$ we have $\Lambda_i \leq \Lambda'_i \leq (1+\eps)^2\Lambda_i$. Therefore, by Lemma \ref{claim_prop_F} and the definition of the objective function $obj$, the claim follows. \qed \end{proof} Using this lemma and noting that applying the rounding step takes linear time, we conclude that without loss of generality, with a slight abuse of notation, we may assume that the input instance satisfies the properties that $s_i$ and $p_j(t)$ are integer powers of $1+\eps$ (for all $i,j,t$). \section{Characterization of near-optimal solutions\label{sec:nice}} We say that a feasible solution to $P$ is {\em nice} (or a nice solution) if the following property holds: for every pair of machines $i<i'$ that are activated to a common type $t$ such that $i$ is the minimum index of a machine that is activated to type $t$, the load of $i$ is at least the load of $i'$ times $\eps^2$. The following lemma together with the guessing step described in the next section serves as an alternative to the dual approximation method of \cite{HS87,HS88} and suits cases in which the dual approximation method does not work (i.e., non-bottleneck load balancing problems). \begin{lemma}\label{nice_lem} Given an instance of $P$ and a feasible solution $\sol$ whose cost we also denote by $\sol$, there exists a nice feasible solution $\sol'$ whose cost, also denoted by $\sol'$, satisfies $\sol'\leq (1+\eps)^{\phi}\cdot \sol$. \end{lemma} \begin{proof} We apply the following process for modifying $\sol$ into $\sol'$. The process changes the assignment of some jobs that are not rejected in $\sol$. Thus, the value of the total rejection penalty (i.e., $\sum_{j\in J: \sigma(j)=0} \pi_j$) remains unchanged. The process is defined for every type $t$ (one type at a time) by changing the assignment of some jobs that are assigned to machines of type $t$, which are moved to the lowest index machine of type $t$. Consider a fixed type $t$ and let $i$ be the machine of lowest index that is assigned type $t$ in $\sol$. We will modify the set of jobs assigned to $i$, and we let $\lambda$ denote the current load of $i$. We perform the following iteration until the first time the cost of the solution increases (or until we decide to stop and move to the next type); that is, we stop after applying the first iteration that causes the cost of the solution to increase. Let $i''$ be a machine of maximum load among the machines that are assigned type $t$ in the current solution. If $\lambda$ is at least the load of $i''$ times $\eps^2$ (and in particular if $i''=i$), we do nothing and continue to the next type (the condition of nice solutions is satisfied for type $t$ by the definition of maximum). Otherwise, we move all jobs assigned to machine $i''$ to be assigned to machine $i$. We recalculate $\lambda$ and the cost of the resulting solution and check if we need to apply the iteration again (if we stop and move to the next type, then by the convexity of $F$, the new load of $i$ is larger than the load of $i''$ prior to this iteration, and by the definition of $i''$ the condition of nice solutions is satisfied for type $t$).
Consider a specific type $t$ with $i$ as defined above, and let $i'$ be the value of $i''$ in the last iteration. We let $\lambda'$ be the load of $i$ just before the last iteration of the last procedure. Using Lemma \ref{claim_prop_F} and the symmetry of $F$, it suffices to show that in the last iteration the maximum of the loads of $i$ and $i'$ is increased by a multiplicative factor of at most $1+\eps$. There are two cases. In the first case assume that $s_{i'} > \eps s_i$. Consider first moving all jobs assigned to $i$ (prior to the last iteration) to run on $i'$. The resulting load of $i'$ is increased by at most $\frac{\lambda'}{\eps}$; since the last iteration was applied, $\lambda'$ is smaller than $\eps^2$ times the load of $i'$ prior to the last iteration, so this increase is at most $\eps$ times that load, and the resulting load of $i'$ is at most $(1+\eps)$ times the load of $i'$ prior to the last iteration. Then by moving all the jobs that are assigned to $i'$ to run on $i$ the load incurred by these jobs can only decrease and the claim follows. Otherwise, we conclude that $s_{i'}\leq \eps s_i$. Thus, by moving the jobs previously assigned to $i'$ to be assigned to $i$, the total processing time of these jobs is at most $\eps$ times the load of $i'$ (in $\sol$). Since $i'$ is selected as the machine of maximum load (of type $t$ in the solution prior to the last iteration), the new load of $i$ is at most $1+\eps$ times the load of $i'$ in the solution obtained prior to the last iteration. \qed \end{proof} \section{Guessing step\label{sec:guess}} We apply a guessing step of (partial) information on an optimal solution (among nice solutions). See e.g. \cite{schuurman2001approximation} for an overview of this technique of guessing (or partitioning the solution space) in the design of approximation schemes. In what follows, we consider one nice solution of minimal cost (among all nice solutions) to the (rounded) instance and denote both this solution and its cost by $\opt$, together with its job assignment function $\sigma^o$ and the type $t^o_i$ assigned (by $\opt$) to machine $i$ (for all $i$). We guess the following information. We guess the approximated value of the makespan in $\opt$, and denote it by $O$. That is, if $\opt$ rejects all jobs then $O=0$, and otherwise the makespan of $\opt$ is in the interval $(O/(1+\eps),O]$. Furthermore, for every type $t$, we guess the minimum index $\mu(t)$ of a machine of type $t$ (namely, $\mu(t)=\min_{i:t^o_i=t} i$), and its approximated load $L_t$, that is, a value such that the load of machine $\mu(t)$ is in the interval $(L_t-\frac{\eps O}{\tau}, L_t]$. Without loss of generality, we assume that $O\geq \max_t L_t$. \begin{lemma}\label{guessing_lem} The number of different possibilities for the guessed information on $\opt$ is $$O(nm\log_{1+\eps} n \cdot (m\tau / \eps )^{\tau}) \ . $$ \end{lemma} \begin{proof} To bound the number of values for $O$, let $i$ be a machine where the makespan of \opt\ is achieved and let $j$ be the job of maximum size (with respect to type $t^o_i$) assigned to $i$. Then, the makespan of \opt\ is in the interval $[p_j(t^o_i)/s_i, n\cdot p_j(t^o_i)/s_i ]$, and thus the number of different values that we need to check for $O$ is $O(nm \log_{1+\eps} n)$. For every type $t$ we guess the index of machine $\mu(t)$ (and there are $m$ possible such indices) and there are at most $\frac{\tau}{\eps}+1$ integer multiples of $\eps \cdot O /\tau$ that we need to check for $L_t$ (using $L_t \leq O$).
\qed \end{proof} \begin{remark} If we consider the model of machines with pre-specified type, then we do not need to guess the value of $\mu(t)$ (for all $t$) and the number of different possibilities for the guessed information on $\opt$ is $O(nm\log_{1+\eps} n \cdot (\tau / \eps )^{\tau})$. \end{remark} \section{The mixed integer linear program \label{sec:MILP}} Let $\gamma \geq 10$ be a constant that is chosen later ($\gamma$ is a function of $\tau$ and $\eps$). For a type $t$ and a real number $W$, we say that job $j$ is {\em large for $(t,W)$} if $p_j(t) \geq \eps^{\gamma} \cdot W$, and otherwise it is {\em small for $(t,W)$}. \paragraph{Preliminaries.} Our MILP follows the configuration-MILP paradigm as one of its main ingredients. Thus, next we define our notion of configurations. A {\em configuration $C$} is a vector encoding partial information regarding the assignment of jobs to one machine, where $C$ consists of the following components. $t(C)$ is the type assigned to a machine with configuration $C$, and $s(C)$ is the speed of a machine with configuration $C$. $w(C)$ is an approximated upper bound on the total size of jobs assigned to a machine with this configuration, where we assume that $w(C)$ is an integer power of $1+\eps$ and the total size of jobs assigned to this machine is at most $(1+\eps)^3 \cdot w(C)$. $r(C)$ is an approximated upper bound on the total size of small jobs (small for $(t(C),w(C))$) assigned to a machine with this configuration, where we assume that $r(C)$ is an integer multiple of $\eps \cdot w(C)$ and the total size of small jobs assigned to this machine is at most $r(C)$. Last, for every integer value of $\nu$ such that $(1+\eps)^{\nu} \geq \eps^{\gamma} \cdot w(C)$ we have a component $\ell(C,\nu)$ counting the number of large jobs of size $(1+\eps)^{\nu}$ assigned to a machine of configuration $C$. Furthermore we assume that $r(C)+ \sum_{\nu} (1+\eps)^{\nu} \cdot \ell(C,\nu) \leq (1+\eps)^3 \cdot w(C)$. Let $\C$ be the set of all configurations. \begin{lemma}\label{number_conf_lem} For every pair $(s,w)$, we have $$|\{ C\in \C: s(C)=s, w(C)=w\}| \leq \tau\cdot \left( \frac{2}{\eps} \right)^{(2\gamma+1)^2 \log_{1+\eps} (1/\eps) } \ . $$ \end{lemma} The right hand side is (at least) an exponential function of $1/\eps$ that we denote by $\beta$. \begin{proof} The number of types is $\tau$, so $t(C)$ has $\tau$ possible values. The value of $r(C)$ is an integer multiple of $\eps \cdot w$ that is smaller than $(1+\eps)^3 w \leq 2w$, so there are at most $\frac 2{\eps}$ such values. The number of different values of $\nu$ for which a job of size $(1+\eps)^{\nu}\geq \eps^{\gamma} \cdot w$ is smaller than $2w$ is at most $\log_{1+\eps} \frac {2}{\eps^\gamma} \leq 2\gamma \cdot \log_{1+\eps} \frac {1}{\eps} $, and for each such value of $\nu$ the value of $\ell(C,\nu)$ is a non-negative integer smaller than $\frac{2}{\eps^{\gamma}}$. This proves the claim using $\gamma \geq 10$. \qed \end{proof} Our MILP formulation involves several blocks and different families of variables that are presented next (these blocks have limited interaction). We present the variables and the corresponding constraints before presenting the objective function. \paragraph{First block - machine assignment constraints.} For every machine $i$ and every type $t$, we have a variable $z_{i,t}$ that encodes whether machine $i$ is assigned type $t$, where $z_{i,t}=1$ means that machine $i$ is assigned type $t$.
Furthermore, for every type $t$ and every speed $s$, we have a variable $m(s,t)$ denoting the number of machines of (rounded) speed $s$ that are assigned type $t$. For every type $t$, we have $z_{i,t}=0$ for all $i< \mu(t)$ while $z_{\mu(t),t}=1$, enforcing our guessing. The (additional) machine assignment constraints are as follows. For every machine $i$, we require $$\sum_{t=1}^{\tau} z_{i,t} = 1 , $$ encoding the requirement that for every machine $i$, exactly one type is assigned to $i$. For every type $t$ and speed $s$, we have $$\sum_{i:s_i = s} z_{i,t} = m(s,t) \ . $$ Let $m_s$ be the number of machines of speed $s$ in the rounded instance; then, for every speed $s$, $$\sum_{t=1}^{\tau} m(s,t) = m_s \ . $$ Last, we have the machine activation budget constraint $$\sum_{t=1}^{\tau} \sum_{i=1}^m \alpha_i(t)z_{i,t} \leq \hat{A} \ .$$ The variables $z_{i,t}$ are fractional variables and their number is $O(m\tau)$. For every type $t$ and for every speed $s$ such that $s_{\mu(t)} \geq s\geq s_{\mu(t)} \cdot \eps^{\gamma}$, we require that $m(s,t)$ is an integer variable, while all other variables of this family of variables are fractional. Observe that the number of variables that belong to this family and are required to be integral (for the MILP formulation) is $O(\tau \gamma \log_{1+\eps} \frac{1}{\eps})$, which is bounded by a polynomial in $\frac {\tau \gamma}{\eps}$, and the number of fractional variables of the family $m(s,t)$ is $O(n\tau)$. \paragraph{Second block - job assignment to machine types and rejection constraints.} For every job $j$ and every $t\in \{ 0,1,\ldots ,\tau\}$, we have a variable $y_{j,t}$ that encodes whether job $j$ is assigned to a machine that is activated to type $t$ (for $t\geq 1$) or rejected (for $t=0$). That is, for $t\geq 1$, if $y_{j,t}=1$, then job $j$ is assigned to a machine of type $t$, and if $y_{j,0}=1$ it means that $j$ is rejected (and we will pay the rejection penalty $\pi_j$). Furthermore, for every type $t$ and every {\em possible} integer value $\zeta$ we have two variables $n(\zeta,t)$ and $n'(\zeta,t)$ denoting the number of jobs assigned to machines of type $t$ whose (rounded) size (if they are assigned to a machine of type $t$) is $(1+\eps)^{\zeta}$ that are assigned as large jobs and that are assigned as small jobs, respectively. Here, possible values of $\zeta$ for a given $t$ are all integers for which $(1+\eps)^{\zeta} \leq s_{\mu(t)} \cdot \min \{ L_t/(\eps^3), O\}$ such that the rounded input contains at least one job whose size (when assigned to a machine of type $t$) is $(1+\eps)^{\zeta}$ (where recall that $L_t$ is the guessed load of machine $\mu(t)$ and $O$ is the guessed value of the makespan). We denote by $\zeta(t)$ the set of possible values of $\zeta$ for the given $t$. We implicitly use the variables $n(\zeta,t)$ and $n'(\zeta,t)$ for $\zeta \notin \zeta(t)$ (i.e., impossible values of $\zeta$) by setting those variables to zero. The constraints that we introduce for this block are as follows. For every job $j$, we should either assign it to a machine (of one of the types) or reject it, and thus we require that $$\sum_{t=0}^{\tau} y_{j,t}=1 \ . $$ Furthermore, for every type $t$ and possible value of $\zeta$ (i.e., $\zeta \in \zeta(t)$) we require $$\sum_{j: p_j(t)=(1+\eps)^{\zeta}} y_{j,t} \leq n(\zeta,t) +n'(\zeta,t) \ . 
$$ For the MILP formulation, the variables $y_{j,t}$ are fractional, while the variables $n(\zeta,t)$ and $n'(\zeta,t)$ are integer variables only if $\zeta \in \zeta(t)$ and $(1+\eps)^{\zeta} \geq s_{\mu(t)} L_t \eps^{\gamma}$ (and otherwise they are fractional). Observe that we introduce for this block $O(n\tau)$ fractional variables (excluding variables that are set to $0$ corresponding to impossible values of $\zeta$) and $O(\tau \gamma \log_{1+\eps} \frac{1}{\eps})$ integer variables. \paragraph{Third block - configuration constraints.} For every $C\in \C$ we have a variable $x_C$ denoting the number of machines of speed $s(C)$ activated to type $t(C)$ whose job assignment is according to configuration $C$. Furthermore, for every configuration $C \in \C$ and every integer value of $\nu$ such that $(1+\eps)^{\nu} < \eps^{\gamma} w(C)$ we have a variable $\chi(C,\nu)$ denoting the number of jobs whose size (when assigned to a machine of type $t(C)$) is $(1+\eps)^{\nu}$ that are assigned to machines of configuration $C$. Such a variable $\chi(C,\nu)$ exists only if there exists at least one job $j$ whose size (when assigned to a machine of type $t(C)$) is $(1+\eps)^{\nu}$. For $C \in \C$, we let $\nu(C)$ denote the set of values of $\nu$ for which the variable $\chi(C,\nu)$ exists. For every $t$, we require that machine $\mu(t)$ has a configuration where $w(C)$ is approximately $s_{\mu(t)}\cdot L_t$. Thus, for every $t$, we will have the constraint $$\sum_{C\in \C: s(C)=s_{\mu(t)}, t(C)=t, s_{\mu(t)}\cdot L_t \leq w(C) \leq (1+\eps)^3 \cdot s_{\mu(t)}\cdot L_t } x_C \geq 1 \ .$$ For the MILP formulation, $x_C$ is required to be integer only if $C$ is a {\em heavy} configuration, where $C$ is {\em heavy} if $w(C) \geq \eps^{\gamma^3} L_{t(C)} \cdot s_{\mu(t(C))}$. The variables $\chi(C,\nu)$ are fractional for all $C\in \C$ and $\nu\in \nu(C)$. Observe that the number of integer variables depends linearly on $\beta$, where the coefficient is upper bounded by a polynomial function of $\frac{\gamma}{\eps}$. It remains to consider the constraints bounding these variables together with the $n(\zeta,t)$, $n'(\zeta,t)$ and $m(s,t)$ introduced for the earlier blocks. Here, the constraints have one sub-block for each type $t$. The {\em sub-block of type $t$} (for $1\leq t \leq \tau$) consists of the following constraints. For every type $t$ and every (rounded) speed $s$ we cannot have more than $m(s,t)$ machines with configurations satisfying $t(C)=t$ and $s(C)=s$, and therefore we have the constraint $$\sum_{C\in \C: t(C)=t, s(C) =s} x_C \leq m(s,t) \ . $$ For every $\zeta \in \zeta(t)$, we require that all the $n(\zeta,t)$ jobs of size $(1+\eps)^{\zeta}$ that we guarantee to schedule on machines of type $t$ are indeed assigned to such machines as large jobs. Thus, we have the constraints $$\sum_{C\in \C: t(C)=t} \ell(C,\zeta) \cdot x_C = n(\zeta,t) . $$ The last family of constraints ensures that for every $\zeta \in \zeta(t)$, the total size of all jobs of size at least $(1+\eps)^{\zeta}$ that are scheduled as small jobs fits the total area of small jobs in configurations for which $(1+\eps)^{\zeta}$ is small with respect to $(t,w(C))$. Here, we need to allow some additional slack, and thus for configuration $C$ we allow using $r(C)+2\eps w(C)$ space for small jobs. Thus, for every integer value of $\zeta$ we have the constraint $$\sum_{\zeta' \geq \zeta} n'(\zeta',t) \cdot (1+\eps)^{\zeta'} \leq \sum_{C\in \C: t(C)=t, \eps^{\gamma} \cdot w(C) > (1+\eps)^{\zeta}} (r(C)+2\eps w(C)) x_C \ . 
$$ Observe that while we define the last family of constraints to have an infinite number of constraints, if, when we increase $\zeta$, the summation on the left hand side remains the same, then the constraint for the larger value of $\zeta$ dominates the constraint for the smaller value of $\zeta$. Thus, it suffices to have the constraints only for $\zeta \in \cup_{t=1}^{\tau} \zeta(t)$. In addition to the last constraints we have the non-negativity constraints (of all variables). \paragraph{The objective function.} Using these variables, and subject to these constraints, we define the minimization (linear) objective function of the MILP as $$ \psi \cdot O + (1-\psi) \cdot \sum_{C\in \C} \left(\frac{w(C)}{s(C)}\right)^{\phi} \cdot x_C + \sum_{j=1}^n \pi_j \cdot y_{j,0}\ .$$ Our algorithm solves the MILP optimally and, as described in the next section, uses the solution of the MILP to obtain a feasible solution to problem $P$ without increasing the cost too much. Thus, the analysis of the scheme is crucially based on the following proposition. \begin{proposition}\label{prop-milp} The optimal objective function value of the MILP is at most $(1+\eps)^{\phi}$ times the cost of $\opt$ as a solution to $P$. \end{proposition} \begin{proof} Based on the nice solution $\opt$ to the rounded instance, specified by the type $t^o_i$ assigned to machine $i$ (for all $i$) and the job assignment function $\sigma^o$, we specify a feasible solution to the MILP as follows. Later we will bound the cost of this feasible solution. First, consider the variables introduced for the machine assignment block and its constraints. The values of $z_{i,t}$ are as follows: $z_{i,t^o_i}=1$ and for $t\neq t^o_i$, we let $z_{i,t}=0$. Furthermore, for every speed $s$ and type $t$, we let $m(s,t)$ be the number of machines of speed $s$ that are assigned type $t$. Then, for every type $t$, we will have $z_{i,t}=0$ for $i<\mu(t)$ and $z_{\mu(t),t}=1$ by our guessing. For every machine $i$, we have $\sum_{t=1}^{\tau} z_{i,t} = 1$ since every machine is assigned exactly one type. For every type $t$ and speed $s$, we have $\sum_{i:s_i = s} z_{i,t} = m(s,t)$, as the left hand side counts the number of machines of speed $s$ and type $t$. The constraint $\sum_{t=1}^{\tau} m(s,t) = m_s$ is satisfied as every machine is activated with exactly one type and thus contributes to exactly one of the counters of the summation on the left hand side. Last, the machine activation budget constraint $\sum_{t=1}^{\tau}\sum_{i=1}^m \alpha_i(t)z_{i,t} \leq \hat{A}$ is satisfied by our assignment of values to the variables, as the left hand side is exactly the total activation cost of $\opt$, and $\opt$ is a feasible solution to $P$. Next, consider the other variables. For every job $j$, we let $y_{j,0}=1$ if $j$ is rejected by $\opt$, and otherwise we let $y_{j,t}=1$ if and only if the following holds for some value of $i$ $$\sigma^o(j)=i \ \ \mbox{and} \ \ t^o_i=t \ .$$ Observe that the total rejection penalty of the jobs that are rejected by $\opt$ is exactly $\sum_j \pi_j y_{j,0}$. Next, we assign a configuration to every machine based on $\opt$. Consider a specific machine $i$; we let $C(i)$ be the configuration defined next. $t(C(i))=t^o_i$ is the type assigned to machine $i$ by $\opt$ and $s(C(i))=s_i$ is its speed.
The value of $w(C(i))$ is computed by rounding up the total size of jobs assigned to $i$ in $\opt$ to the next integer power of $1+\eps$; $r(C(i))$ is computed by first computing the total size of small jobs assigned to $i$ and then rounding up to the next integer multiple of $\eps \cdot w(C(i))$; last, for every integer value of $\nu$ such that $(1+\eps)^{\nu} \geq \eps^{\gamma} \cdot w(C(i))$, the component $\ell(C(i),\nu)$ counts the number of jobs assigned to $i$ whose size is $(1+\eps)^{\nu}$. Thus, the requirement $r(C(i))+ \sum_{\nu} (1+\eps)^{\nu} \cdot \ell(C(i),\nu) \leq (1+\eps)^3 \cdot w(C(i))$ is satisfied, as the left hand side exceeds the total size of jobs assigned to $i$ in $\opt$ by at most $\eps w(C(i))$ and the right hand side exceeds the total size of jobs assigned to $i$ in $\opt$ by at least $3\eps w(C(i))$. For every configuration $C\in \C$ we let $x_C$ be the number of machines whose assigned configuration is $C$. Furthermore, for every $C\in \C$ and every $\nu \in \nu(C)$, we calculate the number of jobs of size $(1+\eps)^{\nu}$ (when assigned to machines of type $t(C)$) that are assigned by $\opt$ to machines whose assigned configuration is $C$, and we let $\chi(C,\nu)$ be this number. By our guessing, we conclude that $$\sum_{C\in \C: s(C)=s_{\mu(t)}, t(C)=t, s_{\mu(t)}\cdot L_t \leq w(C) \leq (1+\eps)^3 \cdot s_{\mu(t)}\cdot L_t } x_C \geq 1$$ is satisfied for every type $t$. For every type $t$ and every $\zeta \in \zeta(t)$, we let $n(\zeta,t)=\sum_{i: t(C(i))=t} \ell(C(i),\zeta)$, and we define $n'(\zeta,t) = \max \{ 0, \sum_{j: p_j(t)=(1+\eps)^{\zeta}} y_{j,t} - n(\zeta,t)\}$. This completes the assignment of values to the variables that we consider. Then, for every job $j$, either $\sigma^o(j) \geq 1$ and machine $\sigma^o(j)$ is assigned exactly one type, or $y_{j,0}=1$; hence, by the definition of the values of the $y$-variables, we have $\sum_{t=0}^{\tau} y_{j,t}=1$. Furthermore, for every type $t$ and $\zeta \in \zeta(t)$, by the definition of $n'(\zeta,t)$, we have $$\sum_{j: p_j(t)=(1+\eps)^{\zeta}} y_{j,t} \leq n(\zeta,t) +n'(\zeta,t) \ . $$ For every type $t$ and every (rounded) speed $s$, $\opt$ does not have more than $m(s,t)$ machines with configurations satisfying $t(C)=t$ and $s(C)=s$, and therefore we have the constraint $\sum_{C\in \C: t(C)=t, s(C) =s} x_C \leq m(s,t)$. For every $t$ and every $\zeta \in \zeta(t)$, we have that $$\sum_{C\in \C: t(C)=t} \ell(C,\zeta) \cdot x_C = \sum_{i:t(C(i))=t} \ell(C(i),\zeta) = n(\zeta,t) \ , $$ where the first equality holds by changing the order of summation (using the definition of $x_C$) and the second holds by the definition of the value of $n(\zeta,t)$. Last, for every machine $i$ and every $\zeta \in \zeta(t(C(i)))$, the total size of all jobs of size at least $(1+\eps)^{\zeta}$ that are scheduled as small jobs on machine $i$ is at most $r(C(i))$. Thus, for every integer value of $\zeta$, we have $$\sum_{\zeta' \geq \zeta} n'(\zeta',t) \cdot (1+\eps)^{\zeta'} \leq \sum_{C\in \C: t(C)=t, \eps^{\gamma} \cdot w(C) > (1+\eps)^{\zeta}} (r(C)+2\eps w(C)) x_C \ . $$ Summarizing, the solution we constructed is a feasible solution to the MILP (where all variables are integer, so the integrality constraints of the MILP hold as well). Last, consider the objective function value of the solution we constructed. It is $ \psi \cdot O + (1-\psi) \cdot \sum_{C\in \C} \left(\frac{w(C)}{s(C)}\right)^{\phi} \cdot x_C + \sum_{j=1}^n \pi_j \cdot y_{j,0} $.
By our guessing, we conclude that $O$ is at most $1+\eps$ times the makespan of $\opt$. By definition, for a machine $i$ whose assigned configuration is $C(i)$ and whose load in $\opt$ is $\Lambda^o_i$, we have $\frac{w(C(i))}{s_i} \leq (1+\eps)\Lambda^o_i$. Therefore, we have that \begin{eqnarray*} && \psi \cdot O + (1-\psi) \cdot \sum_{C\in \C} \left(\frac{w(C)}{s(C)}\right)^{\phi} \cdot x_C + \sum_{j=1}^n \pi_j \cdot y_{j,0}\\ &\leq& (1+\eps)^{\phi} \cdot F(\Lambda^o_1, \ldots ,\Lambda^o_m) + \sum_{j=1}^n \pi_j \cdot y_{j,0} \ , \end{eqnarray*} and using the fact that the total rejection penalty in $\opt$ equals $\sum_{j=1}^n \pi_j \cdot y_{j,0}$, the claim follows. \qed \end{proof} \section{Transforming the solution to the MILP into a schedule \label{sec:round-milp}} Consider the optimal solution $(z^*,m^*, y^*,n^*,n'^*,x^*,\chi^*)$ for the MILP. Our first step is to round up each component of $n^*$ and $n'^*$. That is, we let $\hat{n}(\zeta,t)=\lceil n^*(\zeta,t) \rceil$ and $\hat{n}'(\zeta,t) =\lceil n'^*(\zeta,t) \rceil$ for every $\zeta$ and every $t$. Furthermore, we solve the following linear program (denoted as $(LP-y)$), which has a totally unimodular constraint matrix and an integer right hand side, and let $\hat{y}$ be an optimal integer solution for this linear program: \begin{eqnarray*} \min &\sum_{j=1}^n \pi_j \cdot y_{j,0} & \\ \mbox{subject to}& \sum_{t=0}^{\tau} y_{j,t}=1 & \forall j\in J,\\ &\sum_{j\in J: p_j(t)=(1+\eps)^{\zeta}} y_{j,t} \leq \hat{n}(\zeta,t) +\hat{n}'(\zeta,t)& \forall t \in\{ 1,2,\ldots ,\tau\} \ , \ \forall \zeta\in \zeta(t) \ , \\ &y_{j,t}\geq 0 & \forall j\in J \ , \ \forall t\in \{ 0,1,\ldots ,\tau\} \ . \end{eqnarray*} We will assign jobs to types (and reject some of the jobs) based on the values of $\hat{y}$; that is, if $\hat{y}_{j,t}=1$, we assign $j$ to a machine of type $t$ (if $t\geq 1$) or reject it (if $t=0$). Since $y^*$ is a feasible solution to $(LP-y)$ whose cost equals the total rejection penalty of the solution to the MILP, we conclude that the total rejection penalty of this (integral) assignment of jobs to types is at most the total rejection penalty of the solution to the MILP. In what follows we will assign $\hat{n}(\zeta,t)+\hat{n}'(\zeta,t)$ jobs of size $(1+\eps)^{\zeta}$ to machines of type $t$ (for all $t$). The next step is to round up each component of $x^*$, that is, let $\hat{x}_C=\lceil x^*_C \rceil$, and allocate $\hat{x}_C$ machines of speed $s(C)$ that are activated as type $t(C)$ and whose schedule {\em follows} configuration $C$. These $\hat{x}_C$ machines are partitioned into $x'_C=\lfloor x^*_C \rfloor$ {\em actual machines} and $\hat{x}_C-x'_C$ {\em virtual machines}. Neither actual nor virtual machines are machines of the instance; both are {\em temporary machines} that we will use for the next step. \begin{lemma}\label{job-assignment-lemma} It is possible to construct (in polynomial time) an allocation of $\hat{n}(\zeta,t)$ jobs of size $(1+\eps)^{\zeta}$ for all $t,\zeta$ to (actual or virtual) machines that follow configurations in $\{ C\in \C : t(C)=t, (1+\eps)^{\zeta} \geq \eps^{\gamma} \cdot w(C)\ \}$, and of $\hat{n}'(\zeta,t)$ jobs of size $(1+\eps)^{\zeta}$ for all $t,\zeta$ to (actual or virtual) machines that follow configurations in $\{ C\in \C : t(C)=t,(1+\eps)^{\zeta} < \eps^{\gamma} \cdot w(C)\ \}$, such that for every machine that follows configuration $C\in \C$, the total size of jobs assigned to that machine is at most $(1+\eps)^7 w(C)$. \end{lemma} \begin{proof} First, we allocate the large jobs.
That is, for every value of $t$ and $\zeta$ we allocate $\hat{n}(\zeta,t)$ jobs whose size (when assigned to machine of type $t$) is $(1+\eps)^{\zeta}$ to (actual or virtual) machines that follow configurations in $\{ C\in \C : t(C)=t, (1+\eps)^{\zeta} \geq \eps^{\gamma} \cdot w(C)\ \}$. We allocate for every machine that follow configuration $C$ exactly $\ell(C,\zeta)$ such jobs (or less if there are no additional jobs of this size to allocate). Since $\sum_{C\in \C: t(C)=t} \ell(C,\zeta) \cdot x^*_C = n^*(\zeta,t)$, we allocate in this way at least $n^*(\zeta,t)$ such jobs as large jobs. By the constraint $\sum_{C\in \C: t(C)=t} \ell(C,\zeta) \cdot x_C = n(\zeta,t)$, we have the following \begin{eqnarray*} \sum_{C\in \C: t(C)=t} \ell(C,\zeta) \cdot \hat{x}_C &\geq& \sum_{C\in \C: t(C)=t} \ell(C,\zeta) \cdot x^*_C\\ &=& n^*(\zeta,t) \\ &>& \hat{n}(\zeta,t) -1 \end{eqnarray*} Since both sides of the last sequence of inequalities are integer number, we conclude that $\sum_{C\in \C: t(C)=t} \ell(C,\zeta) \cdot \hat{x}_C \geq \hat{n}(\zeta,t)$, and thus all these $\hat{n}(\zeta,t)$ jobs are assigned to machines that follow configurations in $\{ C\in \C : t(C)=t, (1+\eps)^{\zeta} \geq \eps^{\gamma} \cdot w(C)\ \}$ without exceeding the bound of $\ell(C,\zeta)$ on the number of jobs of size $ (1+\eps)^{\zeta}$ assigned to each such machine. Next, consider the allocation of small jobs. We apply the following process for each type separately. We first allocate one small job of each size to machine $\mu(t)$ where here we mean that if machine $\mu(t)$ follows configuration $C(\mu(t))$, then we will allocate one job of each size of the form $(1+\eps)^{\zeta}$ for all $\zeta$ for which $(1+\eps)^{\zeta-3} < \eps^{\gamma} w(C(\mu(t)))$. Observe that these values of $\zeta$ are the only values for which we may have $\hat{n}'(\zeta,t)\neq n'^*(\zeta,t)$. Let $\tilde{n}'(\zeta,t)$ denote the number of jobs of size $(1+\eps)^{\zeta}$ that we still need to allocate. Since $\gamma \geq 10$, this step increases the total size of jobs assigned to machine $\mu(t)$ by $\eps^{\gamma} \cdot w(C(\mu(t))) \cdot \sum_{h=0}^{\infty} \frac{1}{(1+\eps)^{h-3}} \leq \eps w(C(\mu(t)))$. Given a type $t$, we sort the machines that follow configurations with type $t$ according to the values of $w(C)$ of the configuration $C$ that they follow. We sort the machines in a monotonically non-increasing order of $w(C)$. Similarly, we sort the collection of $\tilde{n}'(\zeta,t)$ jobs of size $(1+\eps)^{\zeta}$ (for all values of $\zeta$) in a non-decreasing order of $\zeta$. We allocate the jobs to (actual or virtual) machines using the next fit heuristic. That is, we start with the first machine (in the order we described) as the current machine. We pack one job at a time to the current machine, whenever the total size of the jobs that are assigned to the current machine (that follows configuration $C$) exceeds $r(C)+2\eps w(C)$ (it is at most $r(C)+3\eps w(C)$ as we argue below), we move to the next machine and define it as the current machine. We need to show that all jobs are indeed assigned in this way and that whenever we pack a job into the current machine that follows configuration $C$, the size of the job is smaller than $\eps^{\gamma} w(C)$. 
If we append one (non-existing) extra machine of type $t$ that follows a configuration with $w(C) < \min_{j} p_{j}(t) / \eps^{\gamma}$, then it suffices to show that whenever we move to a new current machine that follows configuration $C$, all the (small) jobs of size $(1+\eps)^{\zeta}$ for $\zeta\in \{ \zeta': (1+\eps)^{\zeta'} \geq \eps^{\gamma} w(C)\}$ are assigned. This last required property holds, as whenever the last set of values of $\zeta$ is changed, we know that the total size of the (small) jobs we already assigned is (unless all these jobs are assigned) at least \begin{eqnarray*} && \sum_{C\in \C: t(C)=t, \eps^{\gamma} \cdot w(C) > (1+\eps)^{\zeta}} (r(C)+2\eps w(C)) \hat{x}_C \\ &\geq& \sum_{C\in \C: t(C)=t, \eps^{\gamma} \cdot w(C) > (1+\eps)^{\zeta}} (r(C)+2\eps w(C)) x^*_C \\ &\geq& \sum_{\zeta' \geq \zeta} n'^*(\zeta',t) \cdot (1+\eps)^{\zeta'} \\ &\geq& \sum_{\zeta' \geq \zeta} \tilde{n}'(\zeta',t) \cdot (1+\eps)^{\zeta'} \ . \end{eqnarray*} By allocating a total size of at most $r(C)+3\eps w(C)$ of small jobs to a machine that follows configuration $C$, the resulting total size of jobs assigned to that machine is at most $$(1+\eps)^3 w(C)+ 3\eps w(C) \leq (1+\eps)^6 w(C)$$ and the claim follows. \qed \end{proof} The assignment of jobs for which $y_{j,0} \neq 0$ to machines is specified by assigning every job that was assigned to a virtual machine that follows configuration $C$ to machine $\mu(t(C))$ instead, and assigning the jobs allocated to actual machines by allocating every actual machine to an index in $\{ 1,2,\ldots ,m\}$ following the procedure described in the next step. Before describing the assignment of actual machines to indices in $\{ 1,2,\ldots ,m\}$ of machines in the instance (of problem $P$), we analyze the increase of the load of machine $\mu(t)$ due to the assignment of jobs that were assigned to virtual machines that follow configuration with type $t$. \begin{lemma}\label{virtual_machine_alloc_lem} There is a value of $\gamma$ for which the resulting total size of jobs assigned to machine $\mu(t)$ is at most $(1+\eps)^8 \cdot w(C(\mu(t))$ where machine $\mu(t)$ follows the configuration $C(\mu(t))$. \end{lemma} \begin{proof} Since $x^*_C$ is forced to be integral for all heavy configurations, we conclude that if there exists a virtual machine that follows configuration $C$ with type $t(C)=t$, then $w(C)\leq \eps^{\gamma^3} \cdot L_{t(C)}\cdot s_{\mu(t(C))} \leq 2\eps^{\gamma^3} w(C(\mu(t)))$, where the last inequality holds using the constraint $\sum_{C\in \C: s(C)=s_{\mu(t)}, t(C)=t, s_{\mu(t)}\cdot L_t \leq w(C) \leq (1+\eps)^3 \cdot s_{\mu(t)}\cdot L_t } x_C \geq 1$ and allocating configuration $C(\mu(t))$ that causes $x^*$ to satisfy this inequality, to machine $\mu(t)$. For each configuration $C$, there is at most one virtual machine that follows $C$, and since there are at most $\beta$ configurations with a common component of $w(C)$ and the given type $t$ and speed $s$, we conclude using the fact that $\opt$ is nice that the total size of jobs that we move from their virtual machines to machine $\mu(t)$ is at most $$2\beta \cdot \eps^{\gamma^3} w(C(\mu(t))) \cdot \sum_{h=0}^{\infty} \frac{1}{(1+\eps)^h} \cdot \sum_{h'=0}^{\infty} \frac{1}{\eps^2}\frac{1}{(1+\eps)^{h'}} \leq \beta \cdot \eps^{\gamma^3-5} w(C(\mu(t))) \ .$$ The claim will follow if we can select a value of $\gamma$ such that $\beta< \eps^{6-\gamma^3}$. 
Recall that $\beta = \tau\cdot \left( \frac{2}{\eps} \right)^{(2\gamma+1)^2 \log_{1+\eps} (1/\eps) }\leq \tau\cdot \left( \frac{2}{\eps} \right)^{5\gamma^2 /\eps^2} $. Thus, in order to ensure that $\beta< \eps^{6-\gamma^3}$, it suffices to select $\gamma$ such that $\tau < \left( \frac{\eps}{2} \right)^{(5\gamma^2 /\eps^2) + 6-\gamma^3}$. Observe that for every $\eps>0$ the function $H(\gamma)= (5\gamma^2 /\eps^2) + 6- \gamma^3$ decreases without bound as $\gamma$ increases to $\infty$. Thus, we can select a value of $\gamma$ (as a function of $\tau$ and $\eps$) for which the last inequality holds, e.g. selecting $\gamma = \tau \cdot \frac{20}{\eps^2}$ is sufficient. \qed \end{proof} Next, we describe the assignment of actual machines to indices in $\{1,2,\ldots ,m\}$. More precisely, the last step is to assign a type $\hat{t}_i$ for every machine $i$ satisfying the total activation cost bound, and to allocate for every $C\in \C$ and for every actual machine that follows configuration $C$, an index $i$ such that $\hat{t}_i=t(C)$. This assignment of types will enforce our guessing of $\mu(t)$ for all $t$. This step is possible, as we show next, using the integrality of the assignment polytope. \begin{lemma}\label{machine_assignment_types_feas_lem} There is a polynomial time algorithm that finds a type $t_i$ for every machine $i$, such that the total activation cost of all machines is at most $\hat{A}$, for all $s,t$ the number of machines of speed $s$ that are activated to type $t$ is at least the number of actual machines that follow configurations with type $t$ and speed $s$, and for all $t$, $\mu(t)$ is the minimum index of a machine that is assigned type $t$. \end{lemma} \begin{proof} We consider the following linear program (denoted as $(LP-z)$): \begin{eqnarray*} \min & \sum_{t=1}^{\tau} \sum_{i=1}^m \alpha_i(t)z_{i,t} & \\ \mbox{subject to} & \sum_{t=1}^{\tau} z_{i,t}=1 & \forall i, \\ & \sum_{i: s_i=s} z_{i,t} \geq \sum_{C\in \C : t(C)=t, s(C)=s} x'_C& \forall s,\ \forall t\\ & z_{i,t} =0 & \forall t, \ \forall i<\mu(t) \\ & z_{\mu(t),t}=1 & \forall t \\ & z_{i,t} \geq 0 & \forall i, \ \forall t \ \ . \end{eqnarray*} The constraint matrix of $(LP-z)$ is totally unimodular, and thus by solving the linear program and finding an optimal basic solution, we find an optimal integer solution in polynomial time. Since the fractional solution $z^*$ is a feasible solution with objective function value that does not exceed $\hat{A}$, we conclude that the optimal integer solution that we find does not violate the upper bound on the total activation cost. This integer solution defines a type $t_i$ for every machine $i$ by letting $t_i$ be the value for which $z_{i,t_i}=1$. Then, using the constraints of the linear program, for every speed $s$ and every type $t$, the number of machines of speed $s$ for which we define type $t$ is at least the number of actual machines that follow configurations with type $t$ and speed $s$, and furthermore, for every type $t$, $\mu(t)$ is the minimum index of a machine that is assigned type $t$, as required. \qed \end{proof} Thus, we conclude: \begin{theorem}\label{main-thm} Problem $P$ admits an EPTAS. \end{theorem} \begin{proof} The time complexity of the scheme is $O(f(\frac{1}{\eps},\gamma,\tau) \cdot m^{O(\tau)} \cdot poly(n))$, and as proved in Lemma \ref{virtual_machine_alloc_lem}, $\gamma$ is a function of $\eps$ and $\tau$. Thus, in order to show that the algorithm is an EPTAS, it suffices to prove its approximation ratio and that the resulting solution is feasible.
Based on the sequence of lemmas, the approximation ratio is proved, using the fact that the load of an empty set of jobs is zero regardless of the type of the machine; thus, for every $i$, if machine $i$ is assigned an empty set of jobs, the objective function value does not depend on the type assigned to $i$, and so the cost of the solution does not increase in the last step. It remains to show that the resulting solution is feasible. Every job assignment is feasible, and the assignment of types to machines is feasible by Lemma \ref{machine_assignment_types_feas_lem}. Thus, our solution is a feasible solution to problem $P$. \qed \end{proof} \bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro} M-dwarfs ($M_{\star}\lesssim0.6{\rm M}_{\odot}$) constitute more than seventy per cent of the Galactic stellar population \citep{Hen97} and consequently, they influence a wide-range of astrophysical phenomena, from the total baryonic content of the universe, to the shape of the stellar initial mass function. Furthermore, they are fast becoming a key player in the hunt for Earth-like planets (e.g. \citealt{Nut08, Kopp09,Law11}). The lower masses and smaller radii of M-dwarfs mean that an Earth-like companion causes a deeper transit and induces a greater reflex motion in its host than it would do to a solar analogue, making it comparatively easier to detect Earths in the traditional habitable zones of cool stars. The inferred properties of exoplanet companions, such as their density, atmospheric structure and composition, currently depend on a precise knowledge of the fundamental properties of the host star, such as its mass, radius, luminosity and effective temperature at a given age. Yet, to date, no theoretical model of low-mass stellar evolution can accurately reproduce all of the observed properties of M-dwarfs \citep{Hil04,Lop05}, which leaves their planetary companions open to significant mischaracterisation. Indeed, the characterisation of the atmosphere of the super-Earth around the M-dwarf GJ 1214 seems to depend on the spot coverage of the host star \citep{Moo12}. Detached, double-lined, M-dwarf eclipsing binaries (MEBs) provide the most accurate and precise, model-independent means of measuring the fundamental properties of low-mass stars \citep{And91}, and the coevality of the component stars, coupled with the assumption that they have the same metallicity due to their shared natal environment, places stringent observational constraints on stellar evolution models. In the best cases, the uncertainties on the masses and radii measured using MEBs can be just $0.5\%$ \citep{Mor09,Kra11a}. However, since M-dwarfs are intrinsically faint, only a small number of MEBs have been characterised so far with suitable accuracy to calibrate low-mass stellar evolution models, and there are even fewer measurements below $\sim0.35{\rm M}_{\odot},$ where stellar atmospheres are thought to transport energy purely by convection \citep{Chab97}. More worryingly, existing observations show significant discrepancies with stellar models. The measured radii of M-dwarfs are inflated by $5-10\%$ compared to model estimates and their effective temperatures appear too cool by $3-5\%$ (see e.g. \citealt{Lop05, Rib06,Mor10, Tor10, Kra11a}). This anomaly has been known for some time but remains enigmatic. Bizarrely, the two discrepancies compensate each other in the mass-luminosity plane such that current stellar models can accurately reproduce the observed mass-luminosity relationship for M-dwarfs. Two different physical mechanisms have been suggested as the cause of this apparent radius inflation: i) metallicity \citep{Ber06,Lop07} and ii) magnetic activity \citep{Mul01,Rib06,Tor06,Chab07}. \citet{Ber06} and \citet{Lop07} used interferometrically-measured radii of single, low-mass stars to look for correlation between inflation and metallicity. Both studies found evidence that inactive, single stars with inflated radii corresponded to stars with higher metallicity, but this did not hold true for active, fast-rotating single stars and further studies could not confirm the result \citep{Dem09}. 
While metallicity may play a role in the scatter of effective temperatures for a given mass (the effective temperature depends on the bolometric luminosity, which is a function of metallicity), it seems unlikely that it is the main culprit of radius inflation. The magnetic activity hypothesis is steered by the fact that the large majority of well-characterised MEBs are in short ($<2$ day) orbits. Such short period systems found in the field (i.e. old systems) are expected to be tidally-synchronised with circularised orbits \citep{Zah77}. The effect of tidal-locking is to increase magnetic activity, a notion that is supported by observations of synchronous, rapid rotation rates in MEBs, a majority of circular orbits for MEBs, plus X-ray emission and H$\alpha$ emission from at least one of the components. It is hypothesised that increased magnetic activity affects the radius of the star in two ways. Firstly, it can inhibit the convective flow, so the star must inflate and cool to maintain hydrostatic equilibrium. \citet{Chab07} modelled this as a change in the convective mixing length, finding that a reduced mixing length could account for the inflated radii of stars in the partially-radiative mass regime, but that it had a negligible effect on the predicted radii of stars in the fully-convective regime. However, \citet{Jac09} showed that the radii of young, single, active, fully-convective stars in the open cluster NGC 2516 could be inflated by up to $50\%$, based on radii derived using photometrically-measured rotation rates and spectroscopically-measured projected rotational velocities. This therefore suggests that inhibition of convective flow is not the only factor responsible for the radius anomaly. The second consequence of increased magnetic activity is a higher production of photospheric spots, which has a two-fold effect: i) a loss of radiative efficiency at the surface, causing the star to inflate, and ii) a systematic error in light curve solutions due to a loss of circular symmetry caused by a polar distribution of spots. \citet{Mor10} showed that these two effects could account for $\sim3\%$ and $0-6\%$ of the radius inflation, respectively, with any remaining excess ($0-4\%$) produced by inhibition of convective efficiency. This, however, holds only under certain generalisations, such as a $30\%$ spot coverage fraction and a concentration of the spot distribution at the pole. One would perhaps expect the systematic error induced by star spots to be wavelength dependent, such that radius measurements obtained at longer wavelengths would be closer to model predictions. \citet{Kra11a} searched for a correlation between the radius anomaly and the orbital periods of MEBs, to see if the data and the models converged at longer periods ($\sim3$ days), where the stellar activity is less aggravated by fast rotation speeds. They found tentative evidence to suggest that this is the case, but it is currently confined to the realm of small statistics. Not long after their study, the MEarth project uncovered a 41-day, non-synchronised, non-circularised, inactive MEB with radius measurements still inflated on average by $\sim4\%$, despite a detailed attempt to account for spot-induced systematics \citep{Irw11}. They suggest that either a much larger spot coverage than the $30\%$ they assumed is required to explain the inflation, or perhaps that the equation of state for low-mass stars, despite substantial progress (see review by \citealt{Chab05}), is still inadequate.
Clearly, a large sample of MEBs with a wide range of orbital periods is key to defining the magnetic activity effect and understanding any further underlying physical issues for modelling the evolution of low-mass single stars. This in turn will remove many uncertainties in the properties of exoplanets with M-dwarf host stars. With that in mind, this paper presents the discovery of many new MEBs to emerge from the WFCAM Transit Survey, including a full characterisation to reasonable accuracy for three of the systems using 4-m class telescopes, despite their relatively faint magnitudes ($i=16.7-17.6$). In Section~\ref{sec:discovery}, we describe the WFCAM Transit Survey (WTS) and its observing strategy, and Section~\ref{sec:obs} provides additional details of the photometric and spectroscopic data we used to fully characterise three of the MEBs. In Section~\ref{sec:identify}, we outline how we identified the MEBs amongst the large catalogue of light curves in the WTS. Sections~\ref{sec:indices}-\ref{sec:RVs} present our analysis of all the available follow-up data used to characterise three of the MEBs, including their system effective temperatures, metallicities, H$\alpha$ emission and surface gravities via analysis of low-resolution spectroscopy, their size-ratios and orbital elements using multi-colour light curves, and their mass ratios using radial velocities obtained with intermediate-resolution spectra. These results are combined in Section~\ref{sec:absdim} to determine individual masses, radii and effective temperatures. We also calculate their space velocities and assess their membership of the Galactic thick and thin disks. Lastly, in Section~\ref{sec:discuss}, we discuss our results in the context of low-mass stellar evolution models and a mass-radius-period relationship, as suggested by \citet{Kra11a}. \section{The WFCAM Transit Survey}\label{sec:discovery} We identified our new MEBs using observations from the WFCAM Transit Survey (WTS) \citep{Bir11}. The WTS is an on-going photometric monitoring campaign that operates on the 3.8m United Kingdom Infrared Telescope (UKIRT) at Mauna Kea, Hawaii. Its primary and complementary science goals are: i) to provide a stringent observational constraint on planet formation theories through a statistically meaningful measure of the occurrence rate of hot Jupiters around low-mass stars \citep{Kov12}, and ii) to detect a large sample of eclipsing binary stars with low-mass primaries and characterise them to high enough accuracy to strongly constrain the stellar evolution models describing the planet-hosting M-dwarfs found in the survey. The WTS contains $\sim6,000$ early to mid M-dwarfs with $J\leq16$ mag, covering four regions of the sky which span a total of 6 square degrees. We combine the large aperture of UKIRT with the Wide-Field Camera (WFCAM) infrared imaging array to observe in the $J$-band ($1.25\mu$m), near the peak of the spectral energy distribution (SED) of a cool star. Our observing strategy takes advantage of a unique opportunity offered by UKIRT, thanks to the highly efficient queue-scheduled operational mode of the telescope. Rather than requesting continuous monitoring, we noted there was room for a flexible proposal in the queue, one which did not require the very best observing conditions, unlike most of the on-going UKIRT programmes that require photometric skies with seeing $<1.3^{\prime\prime}$ \citep{Lawr07}.
The WTS is therefore designed in such a way that there is always at least one target field visible and it can observe in mediocre seeing and thin cloud cover. We chose four target fields to give us year-round visibility, with each field passing within 15 degrees of zenith. To select the fields, we combined 2MASS photometry and the dust extinction maps of \citet{Sch98} to find regions of sky that maximised the number of dwarf stars and maximised the ratio of dwarfs to giants \citep{Cru03}, while maintaining $E(B-V)<0.1$. We stayed relatively close to the galactic plane to increase the number of early M-dwarfs, but restricted ourselves to $b>5$ degrees to avoid the worst effects of overcrowding. The survey began on August 05, 2007, and the eclipsing systems presented in this paper are all found in just one of the four WTS fields. The field is centred on $\rm RA=19h$, $\rm Dec=+36d$, (hereafter, the 19h field), for which the WTS has its most extensive coverage, with $1145$ data points as of June 16, 2011. Note that this field is very close to, but does not overlap with, the Kepler field \citep{Bat06}, but it is promising that recent work showed the giant contamination in the Kepler field for magnitudes in a comparable range to our survey was low ($7\pm3\%$ M-giant fraction for $K_{P}>14$), \citealt{Mann12}. \section{Observations and Data Reduction}\label{sec:obs} \subsection{UKIRT/WFCAM $J$-band photometry}\label{sec:photo-wts} UKIRT and the WFCAM detector provide the survey with a large database of infrared light curves in which to search for transiting and eclipsing systems. The WFCAM detector consists of four $2048\times2048$ $18\mu$m pixel HgCdTe Rockwell Hawaii-II, non-buttable, infrared arrays that each cover $13.65^{\prime}\times13.65^{\prime}$ and are separated by $94\%$ of a chip width \citep{Cas07}. Each WTS field covers $1.5$ square degrees of sky, comprising of eight pointings of the WFCAM paw print, exposing for a 9-point jitter pattern with 10 second exposures at each position, and tiled to give uniform coverage across the field. It takes 15 minutes to observe an entire WTS field ($9\times10{\rm s}\times8+$overheads), resulting in a cadence of 4 data points per hour (corresponding to one UKIRT Minimum Schedulable Block). Unless there are persistently bad sky conditions at Mauna Kea, due to our relaxed observing constraints the WTS usually observes only at the beginning of the night, just after twilight in $>1^{\prime\prime}$ seeing when the atmosphere is still cooling and settling. The 2-D image processing of the WFCAM observations and the generation of light curves closely follows that of \citet{Irw07} and is explained in detail in \citet{Kov12}. We refer the avid reader to these publications for an in-depth discussion of the reduction techniques but briefly describe it here. For image processing, we use the automatically reduced images from the Cambridge Astronomical Survey Unit pipeline\footnote{http://casu.ast.cam.ac.uk/surveys$-$projects/wfcam/technical}, which is based on the INT wide-field survey pipeline \citep{Irw01}. This provides the 2-D instrumental signature removal for infrared arrays including the removal of the dark and reset anomaly, the flat-field correction using twilight flats, decurtaining and sky subtraction. 
We then perform astrometric calibration using 2MASS stars in the field-of-view, resulting in an astrometric accuracy of $\sim20-50$ mas after correcting for field and differential distortion\footnote{\scriptsize{http://casu.ast.cam.ac.uk/surveys$-$projects/wfcam/technical/astrometry}}. For photometric calibration, the detector magnitude zero-point is derived for each frame using measurements of stars in the 2MASS Point Source Catalogue that fall within the same frame \citep{Hod09}. In order to generate a master catalogue of source positions for each field in the $J$-band filter, we stack 20 frames taken in the best conditions (i.e. seeing, sky brightness and transparency) and run our source detection software on the stacked image \citep{Irw85,Irw01}. The resulting source positions are used to perform co-located, variable, `soft-edged' (i.e. pro-rata flux division for boundary pixels) aperture photometry on all of the time-series images (see \citealt{Irw07}). For each of the four WFCAM detector chips, we model the flux residuals in each frame as a function of position using a 2-D quadratic polynomial, where the residuals are measured for each object as the difference between its magnitude on the frame in question and its median magnitude calculated across all frames. By subtracting the model fit, this frame-to-frame correction can account for effects such as flat-fielding errors, or varying differential atmospheric extinction across each frame, which can be significant in wide-field imaging (see e.g. \citealt{Irw07}). Our source detection software flags any objects with overlapping isophotes. We used this information in conjunction with a morphological image classification flag also generated by the pipeline to identify non-stellar or blended objects. The plate scale of WFCAM ($0.4^{\prime\prime}$/pix) is significantly smaller than those of most small aperture, ground-based transit survey instruments, such as SuperWASP \citep{Pol06}, HATNet \citep{Bak04} and TrES \citep{Dun04}, and can have the advantage of reducing the numbers of blended targets, and therefore the numbers of transit mimics, despite observing fainter stars. The last step in the light curve generation is to perform a correction for residual seeing-correlated effects caused by image blending that are not removed by the frame-to-frame correction. For each light curve, we model the deviations from its median flux as a function of the stellar image FWHM on the corresponding frame, using a quadratic polynomial that we then subtract. We note that this method addresses the symptoms, but not the cause, of the effects of blending. Figure~\ref{fig:rms} shows the per data point photometric precision of the final light curves for the stellar sources in the 19hr field. The RMS is calculated as a robust estimator using as $1.48\times${\sc mad}, i.e. the equivalent of the Gaussian RMS, where the {\sc mad} is the median of the absolute deviations \citep{Hoa83}. The upturn between $J\sim12-13$ mag marks the saturation limit, so for our brightest objects, we achieve a per data point precision of $\sim3-5$ mmag. The blue solid line shows the median RMS in bins of $0.2$ mag. The median RMS at $J=16$ mag is $\sim1\%$ ($\sim10$ mmag), with a scatter of $\sim0.8-1.5\%$, and only $5\%$ of sources have an RMS greater than $15$ mmag at this magnitude. 
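As an aside, the robust per-data-point scatter quoted here (the Gaussian-equivalent RMS, $1.48\times${\sc mad}) is simple to compute; the following Python sketch is purely illustrative and is not the survey pipeline code:
\begin{verbatim}
import numpy as np

def robust_rms(mags):
    """Gaussian-equivalent RMS of a light curve: 1.48 times the median of
    the absolute deviations from the median.  Being median-based, it is
    largely insensitive to in-eclipse points and other outliers."""
    mags = np.asarray(mags, dtype=float)
    return 1.48 * np.median(np.abs(mags - np.median(mags)))
\end{verbatim}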
Hence, for the majority of sources with $J\leq16$ mag, the precision is in theory suitable for detecting not only M-dwarf eclipsing binaries but also transits of mid-M stars by planets with radii $\sim1{\rm R}_{\oplus}$ (see \citealt{Kov12} for the WTS sensitivity to Jupiter- and Neptune-sized planets). The $16$ new MEBs are shown on the plot by the red star symbols. Note that shorter period MEBs sit higher on the RMS diagram, but that genuine longer period MEBs still have RMS values close to the median, due to our robust estimator and the long observing baseline of the survey. For the MEB light curves characterised in this paper, we perform an additional processing step, in which we use visual examination to clip several clear outlying data points at non-consecutive epochs. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{wts_rms_hess} \caption{The RMS scatter per data point of the WTS light curves as a function of WFCAM $J$ magnitude, for sources in the 19hr field with stellar morphological classification. The RMS is a robust estimator calculated as $1.48$ $\times$ the median of the absolute deviations. We achieve a per data point photometric precision of $3-5$ mmag for the brightest objects, with a median RMS of $\sim1\%$ for $J=16$ mag. Saturation occurs between $\sim12-13$ mag as it varies across the field and with seeing. The dashed red horizontal line at $3$ mmag marks the limit of our photometric precision. The blue solid curve shows the median RMS in bins of $0.2$ mag. The red stars show the positions of the $16$ WTS 19hr field MEBs. The shorter period MEBs sit higher in the plot. RMS values are given in Table~\ref{tab:others}} \label{fig:rms} \end{figure} The WTS $J$-band light curve data for the MEBs reported in this paper are given in Table~\ref{tab:wts_lcs}. We have adopted a naming system that uniquely identifies each source handled by our data reduction process, and thus we refer to MEBs characterised in this paper as: 19b-2-01387, 19c-3-01405, and 19e-3-08413. The first number in the naming strategy gives the Right Ascension hour the target field. The following letter corresponds to one of the eight pointings that make up the whole WTS target field. The number between the hyphens denotes which of the four WFCAM chips the source is detected on and the final 5 digits constitute the source's unique sequence number in our master catalogue of WTS sources. \begin{table*} \centering \begin{tabular}{@{\extracolsep{\fill}}lccccccc} \hline \hline Name&HJD&$J_{\rm WTS}$&$\sigma_{J_{\rm WTS}}$&$\Delta m_{0}^{a}$&FWHM$^{b}$&$x^{c}$&$y^{c}$\\ &&(mag)&(mag)&(mag)&(pix)&(pix)&(pix)\\ \hline 19b-2-01387&2454317.808241&14.6210&0.0047&0.0001&2.17&321.98&211.07\\ 19b-2-01387&2454317.820311&14.6168&0.0047&0.0002&2.37&321.74&210.88\\ ...&...&...&...&...&...&...&...\\ \hline \end{tabular} \caption{ The WTS $J$-band light curves of 19b-2-01387, 19c-3-01405 and 19e-3-08413. Magnitudes are given in the WFCAM system. \citet{Hod09} provide conversions for other systems. The errors, $\sigma_{J}$, are estimated using a standard noise model, including contributions from Poisson noise in the stellar counts, sky noise, readout noise and errors in the sky background estimation. $^{a}$ Correction to the frame magnitude zero point applied in the differential photometry procedure. More negative numbers indicate greater losses. See \citet{Irw07}. $^{b}$ Median FWHM of the stellar images on the frame. 
$^{c}$ $x$ and $y$ pixel coordinates of the MEB systems on the image, derived using a standard intensity-weighted moments analysis. (This table is published in full in the online journal and is shown partially here for guidance regarding its form and content.)} \label{tab:wts_lcs} \end{table*} Some sources in the WTS fields are observed multiple times during a full field pointing sequence due to the slight overlap in the exposed areas in the tile pattern. 19c-3-01405 is one such target, receiving two measurements for every full field sequence. The median magnitudes for 19c-3-01405 on the two chips differ by 32 mmag. \citet{Hod09} claim a photometric calibration error of $1.5\%$ for WFCAM, thus the median magnitudes have a $\sim2\sigma$ calibration error. The photometric calibration uses 2MASS stars that fall on the chip in question, so different calibration stars are used for different chips and pointings. We combined the light curves from both exposures to create a single light curve with $893+898=1791$ data points, after first subtracting the median flux from each light curve. The combined light curve has the same out-of-eclipse RMS, $8$ mmag, as the two single light curves. The other two MEBs, 19b-2-01387 and 19e-3-08413, have $900$ and $899$ data points and an out-of-eclipse RMS of $5$ mmag and $7$ mmag, respectively. We also obtained single, deep exposures of each WTS field in the WFCAM $Z$, $Y$, $J$, $H$ and $K$ filters (exposure times $180, 90, 90, 4\times90$ and $4\times90$ seconds, respectively). These are used in conjunction with $g, r, i$ and $z$ photometry from SDSS DR7 to create SEDs and derive first estimates of the effective temperatures for all sources in the field, as described in Section~\ref{sec:SED}. \subsection{INT/WFC $i$-band follow-up photometry} Photometric follow-up observations to help test and refine our light curve models were obtained in the Sloan $i$-band using the Wide Field Camera (WFC) on the 2.5m Isaac Newton Telescope (INT) at Roque de Los Muchachos, La Palma. We opted to use the INT's Sloan $i$ filter rather than the RGO I-band filter as i) it has significantly less fringing, and ii) unlike the RGO filter, it has a sharp cut-off at $\sim 8500$ \AA~ and therefore avoids strong, time-variable telluric water vapour absorption lines, which could induce systematics in our time-series photometry \citep{BaiJ03}. The observing run, between July 18 and August 01, 2010, was part of a wider WTS follow-up campaign to confirm planetary transit candidates, and thus only a few windows were available to observe eclipses. Using the WFC in fast mode (readout time 28 sec for $1\times1$ binning), we observed a full secondary eclipse of 19b-2-01387 and both a full primary and a full secondary eclipse of 19e-3-08413. The observations were centred around the expected times of primary and secondary eclipse, allowing at least $30$ minutes of observation either side of ingress and egress to account for any uncertainty in our predicted eclipse times based on the modelling of the WTS light curves. In total, we observed $120$ epochs for the secondary eclipse of 19b-2-01387 using $60$s exposures, and $89$ and $82$ data points for the primary and secondary eclipse of 19e-3-08413, respectively, using $90$s exposures. We reduced the data using custom-built {\sc idl} routines to perform the standard 2-D image processing (i.e. bias subtraction and flat-field division). Low-level fringing was removed by subtracting a scaled super sky-frame.
To create the light curves, we performed variable aperture photometry using circular apertures with the {\sc idl} routine {\sc aper}. The sky background was estimated using a 3$\sigma$-clipped median on a 30$\times$30 pixel box, rejecting bad pixels. For each MEB, we selected sets of 15-20 bright, nearby, non-saturated, non-blended reference stars to create a master reference light curve. For each reference star, we selected the aperture with the smallest out-of-eclipse RMS. We removed the airmass dependence by fitting a second order polynomial to the out-of-eclipse data. The INT $i$-band light curve data is presented in Table~\ref{tab:int_lc}. The RMS of the out-of-eclipse data for the primary eclipse of 19b-2-01387 is $4.4$ mmag while the out-of-eclipse RMS values for the primary and secondary eclipses of 19e-3-08413 are $5.7$ mmag and $7.1$ mmag, respectively. \begin{table} \begin{center} \begin{tabular}{@{\extracolsep{\fill}}cccc} \hline \hline Name&HJD&$\Delta m_{i_{\rm INT}}$&$\sigma_{m_{i_{\rm INT}}}$\\ &&(mag)&(mag)\\ \hline 19b-2-01387&2455400.486275&-0.0044&-0.0034\\ 19b-2-01387&2455400.487652&-0.0049&-0.0024\\ ...&...&...&...\\ \hline \end{tabular} \caption{ INT $i$-band follow-up light curves of 19b-2-01387 and 19e-3-08413. $\Delta m_{i_{\rm INT}}$ are the differential magnitudes with respect to the median of the out-of-eclipse measurements such that the out-of-eclipse magnitude is $m_{i_{\rm INT}}=0$. The errors, $\sigma_{i}$, are the scaled Gaussian equivalents of the median absolute deviation of the target from the reference at each epoch i.e. $\sigma_{i}\sim1.48\times \rm MAD$. (This table is published in full in the online journal and is shown partially here for guidance regarding its form and content.)} \label{tab:int_lc} \end{center} \end{table} \subsection{IAC80/CAMELOT $g$-band follow-up photometry} We obtained a single primary eclipse of 19e-3-08413 in the Sloan $g$-band filter using the CAMELOT CCD imager on the 80cm IAC80 telescope at the Observatorio del Teide in Tenerife. The observations were obtained on the night of 08 August 2009, during a longer run to primarily follow-up WTS planet candidates. Exposure times were $60$ seconds and were read out with $1\times1$ binning of the full detector, resulting in a cadence of $71$ seconds, making a total of 191 observations for the night. The time-series photometry was generated using the VAPHOT package\footnote{http://www.iac.es/galeria/hdeeg/} \citep{Dee01}. The bias and flat field images were processed using standard {\sc iraf} routines in order to calibrate the raw science images. The light curve was then generated using VAPHOT, which is a series of modified {\sc iraf} routines that performs aperture photometry; these routines find the optimum size aperture that maximize the signal-to-noise ratio for each star. The user can specify whether to use a variable aperture to account for a time-variable point-spread-function (e.g. due to changes in the seeing) or to fix it for all images. For this data set, we fixed the aperture and used an ensemble of $6$ stars with a similar magnitude to the target to create a master reference light curve. Finally, a second order polynomial was fitted to the out-of-eclipse data the target light curve to remove a long-term systematic trend. The $g$-band light curve is shown in the bottom left panel of Figure~\ref{fig:19elc}, and the data are given in Table~\ref{tab:iac80_lc}. 
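For illustration, the detrending applied to the follow-up light curves (a second-order polynomial fitted to the out-of-eclipse data and subtracted) amounts to the following; this is a schematic Python sketch with array names of our own choosing, not the {\sc idl} or VAPHOT code actually used:
\begin{verbatim}
import numpy as np

def detrend_out_of_eclipse(hjd, dmag, in_eclipse):
    """Fit a second-order polynomial to the out-of-eclipse points of a
    differential light curve and subtract it from all points, removing
    long-term systematics such as residual airmass effects.
    hjd, dmag  : arrays of times and differential magnitudes
    in_eclipse : boolean mask flagging points inside the eclipse window"""
    coeffs = np.polyfit(hjd[~in_eclipse], dmag[~in_eclipse], deg=2)
    return dmag - np.polyval(coeffs, hjd)
\end{verbatim}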
The out-of-eclipse RMS for the target is $26.9$ mmag, which is higher than the follow-up with the INT, due to the smaller telescope diameter. \begin{table} \begin{center} \begin{tabular}{@{\extracolsep{\fill}}ccc} \hline \hline HJD&$\Delta m_{g_{\rm IAC80}}$&$\sigma_{m_{g_{\rm IAC80}}}$\\ &(mag)&(mag)\\ \hline 2455052.51020&-0.0417&0.0290\\ 2455052.51113&-0.0091&0.0301\\ ...&...&...\\ \hline \end{tabular} \caption{IAC80 $g$-band follow-up light curve of 19e-3-08413. $\Delta m_{g_{\rm IAC80}}$ are the differential magnitudes with respect to the median of the out-of-eclipse measurements such that the out-of-eclipse magnitude is $m_{g_{\rm IAC80}}=0$. The errors, $\sigma_{g}$, are those computed by the {\sc iraf.phot} package. (This table is published in full in the online journal and is shown partially here for guidance regarding its form and content.)} \label{tab:iac80_lc} \end{center} \end{table} \subsection{WHT low-resolution spectroscopy} We carried out low-resolution spectroscopy during a wider follow-up campaign of the WTS MEB and planet candidates on several nights between July 16 and August 17, 2010, using the William Herschel Telescope (WHT) at Roque de Los Muchachos, La Palma. These spectra allow the identification of any giant contaminants via gravity sensitive spectral features, and provide estimates of the effective system temperatures, plus approximate metallicities and chromospheric activity indicators (see section~\ref{sec:indices}). We used the Intermediate dispersion Spectrograph and Imaging System (ISIS) and the Auxiliary-port Camera (ACAM) on the WHT to obtain our low-resolution spectra. In all instances we used a $1.0^{\prime\prime}$ slit. We did not use the dichroic during the ISIS observations because it can induce systematics and up to $10\%$ efficiency losses in the red arm, which we wanted to avoid given the relative faintness of our targets. Wavelength and flux calibrations were performed using periodic observations of standard lamps and spectrophotometric standard stars throughout the nights. Table~\ref{tab:spectroscopic} summarises our low-resolution spectroscopic observations. \begin{table} \centering \begin{tabular}{@{\extracolsep{\fill}}l@{\hspace{3pt}}r@{\hspace{7pt}}r@{\hspace{7pt}}r@{\hspace{7pt}}r@{\hspace{7pt}}r@{\hspace{7pt}}r@{\hspace{7pt}}} \hline \hline Name&Epoch$^{a}$&$t_{\rm int}$& Instr.&$\lambda_{\rm range}$&R&SNR\\ &&(s)&&(\AA)&&\\ \hline 19b-2-01387&394.71&300&ISIS&6000-9200&$1000$&27\\ 19c-3-01405&426.53&900&ACAM&3300-9100&$450$&30\\ 19e-3-08413&426.54&900&ACAM&3300-9100&$450$&30\\ \hline \end{tabular} \caption{Summary of low resolution spectroscopic observations at the William Herschel Telescope, La Palma. $^{a}$ JD-2455000.0. } \label{tab:spectroscopic} \end{table} The reduction of the low-resolution spectra was performed with a combination of {\sc iraf} routines and custom {\sc idl} procedures. In {\sc idl}, the spectra were trimmed to encompass the length of the slit, bias-subtracted and median-filtered to remove cosmic rays. The ACAM spectra were also flat-fielded. We corrected the flat fields for dispersion effects using a pixel-integrated sensitivity function. The {\sc iraf.apall} routine was used to identify the spectra, subtract the background and optimally sum the flux in apertures along the trace. For the ISIS spectrum, wavelength and flux calibration was performed with the CuNe+CuAr standard lamps and ING flux standard SP2032+248. 
For ACAM, arc frames were used to determine the wavelength solution along the slit using a fifth order spline function fit with an RMS $\rm \sim0.2$\AA. For flux-calibration, we obtained reference spectra of the ING flux standard SP2157+261. \subsection{WHT/ISIS intermediate-resolution spectroscopy}\label{sec:hires} Modelling the individual radial velocities (RVs) of components in a binary system provides their mass ratio and a lower limit on their physical separation. Combining this information with an inclination angle determined by the light curve of an eclipsing system ultimately yields the true masses and radii of the stars in the binary. We measured the RVs of the components in our MEBs using spectra obtained with the intermediate-resolution, single-slit spectrograph ISIS mounted on the WHT. We used the red arm with the R1200R grating centred on $8500$\AA, giving a wavelength coverage of $\sim8100-8900$\AA. The slit width was chosen to match the approximate seeing at the time of observation giving an average spectral resolution $R\sim9300$. The spectra were processed entirely with {\sc iraf}, using the {\sc ccdproc} packages for instrumental signature removal. We optimally extracted the spectra for each object on each night and performed wavelength and flux calibration using the semi-automatic {\sc kpno.doslit} package. Wavelength calibration was achieved using CuNe arc lamp spectra taken after each set of exposures and flux calibration was achieved using observations of spectrophotometric standards. \subsubsection{Radial velocities via cross-correlation} The region $8700-8850$\AA ~contains a number of relatively strong metallic lines present in M-dwarfs and is free of telluric absorption lines making it amenable for M-dwarf RV measurements \citep{Irw09b}. We used the {\sc iraf} implementation of the standard 1-D cross-correlation technique, {\sc fxcor}, to extract the RV measurements for each MEB component using synthetic spectra from the MARCS\footnote{http://marcs.astro.uu.se/} spectral database \citep{Gus08} as templates. The templates had plane-parallel model geometry, a temperature range from 2800-5500K incremented in 200K steps, solar metallicity, surface gravity $\log(g)=5.0$ and a $2$ km/s micro-turbulence velocity, which are all consistent with low-mass dwarf stars. The best-matching template i.e. the one that maximised the cross-correlation strength of the primary component for each object, was used to obtain the final RVs of the system, although note that the temperature of the best-matching cross-correlation template is not a reliable estimate of the true effective temperature. The saturated near-infrared Ca II triplet lines at $8498, 8542$ and $8662$\AA ~were masked out during the cross-correlation. A summary of our observations and the extracted radial velocities are given in table~\ref{tab:hires}. 
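The cross-correlation itself was performed with {\sc fxcor}; purely to illustrate the principle, a minimal velocity-shift correlation can be sketched in Python as below. This simplified version is not the {\sc iraf} implementation and returns only a single best velocity, whereas a double-lined binary produces two correlation peaks:
\begin{verbatim}
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def ccf_velocity(wave, flux, t_wave, t_flux, v_grid):
    """Shift a template spectrum (t_wave, t_flux) over a grid of trial
    velocities, resample it onto the observed wavelength grid and return
    the velocity that maximises the correlation with the observed flux.
    Spectra are assumed continuum-normalised, with the saturated Ca II
    triplet pixels already masked."""
    best_v, best_c = np.nan, -np.inf
    for v in v_grid:
        shifted = np.interp(wave, t_wave * (1.0 + v / C_KMS), t_flux)
        c = np.corrcoef(flux, shifted)[0, 1]
        if c > best_c:
            best_v, best_c = v, c
    return best_v

# e.g. v_grid = np.arange(-300.0, 300.0, 1.0)  # km/s
\end{verbatim}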
\begin{table*} \centering \begin{tabular}{@{\extracolsep{\fill}}lcrlrrrr} \hline \hline Name&HJD&Slit&$t_{int}$&SNR&Phase&$\rm RV_{1}$&$\rm RV_{2}$\\ &&($^{\prime\prime}$)&(n$\times$sec)&&&(km/s)&(km/s)\\ \hline 19b-2-01387& 2455395.55200&1.2&$2\times1200$&22.7&0.1422&-143.2&8.0\\ 19b-2-01387& 2455396.46471&0.7&$3\times600$&6.22&0.7513&23.7&-158.0\\ 19b-2-01387& 2455407.52383&1.0&$3\times900$&14.0&0.1314&-137.9&-4.2\\ 19b-2-01387& 2455407.62644&1.0&$3\times1200$&8.0&0.1998&-155.3&25.1\\ 19b-2-01387& 2455408.38324&1.0&$3\times900$&9.1&0.7049&14.5&-157.6\\ 19b-2-01387& 2455408.51689&1.0&$3\times1200$&12.8&0.7941&15.1&-153.7\\ 19b-2-01387& 2455408.63070&1.0&$3\times1200$&13.4&0.8700&-9.8&-139.2\\%** 19b-2-01387& 2455409.38673&1.0&$3\times1200$&14.3&0.3745&-128.4&-4.8\\ \hline 19c-3-01405& 2455407.43073&1.0&$1200+630$&6.4&0.2244&-62.5&57.0\\ 19c-3-01405& 2455407.47937&1.0&$3\times1200$&5.3&0.2343&-57.0&52.7\\ 19c-3-01405& 2455407.58012&1.0&$3\times1200$&5.3&0.2547&-63.8&54.6\\ 19c-3-01405& 2455408.46929&1.0&$3\times1200$&6.0&0.4347&-21.7&22.0\\ 19c-3-01405& 2455409.56881&1.0&$3\times1200$&6.0&0.6573&47.3&-52.6\\ 19c-3-01405& 2455409.68190&0.8&$3\times1200$&5.1&0.6802&42.5&-64.4\\ 19c-3-01405& 2455409.47707&0.8&$3\times1200$&7.5&0.6387&46.4&-43.6\\ \hline 19e-3-08413& 2455408.42993&1.0&$3\times1200$&7.1&0.6640&108.0&-46.5\\ 19e-3-08413& 2455408.56307&1.0&$3\times1200$&8.7&0.7435&113.1&-58.4\\ 19e-3-08413& 2455409.43629&1.0&$3\times1200$&8.9&0.2654&-24.8&140.9\\ 19e-3-08413& 2455409.52287&0.8&$3\times1200$&7.5&0.3171&-27.6&125.6\\ 19e-3-08413& 2455409.61343&0.8&$3\times1200$&7.5&0.3712&-9.4&109.1\\ \hline \end{tabular} \caption{Summary of intermediate-resolution spectroscopic observations. All observations were centred on $8500$\AA.} \label{tab:hires} \end{table*} \section{Identification of M-dwarf Eclipsing Binaries}\label{sec:identify} \subsection{The M-dwarf sample}\label{sec:SED} It is possible to select M-dwarfs in WTS fields using simple colour-colour plots such as those shown in Figure~\ref{fig:cc_wts}, which were compiled using our deep WFCAM photometry plus magnitudes from SDSS DR7, which has a fortuitous overlap with the 19hr field. \citet{Jon94} showed that the $(i-K)$ colour is a reasonable estimator for the effective temperature, however the eclipsing nature of the systems we are interested in can cause irregularities in the colour indices, especially since the WFCAM photometry was taken at different epochs to each other and the SDSS photometry. For example, a system of two equal mass stars in total eclipse result is 0.75 mag fainter compared to its out-of-eclipse magnitude. We made a more robust sample of M-dwarfs by estimating the effective temperature of each source in the 19h field via SED fitting of all the available passbands i.e. SDSS $g, r, i$ and $z$-band plus WFCAM $Z, Y, J, H$ and $K$-band. By rejecting the most outlying magnitudes from the best SED fit, one becomes less susceptible to errors from in-eclipse observations. Note that the SDSS $u$-band magnitudes of our redder sources are affected by the known red leak in the $u$ filter and are hence excluded from the SED fitting process. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{cc_wts_2_meb_paper} \caption{Colour-colour plots of the sources in one of the WFCAM pointings for the 19hr field (black $+$), overlaid with the full 19hr field sample of detached MEB candidates (blue filled circles and red filled squares). The filled red squares mark the three MEB systems characterised in this paper. 
The orange crosses mark the M-dwarf candidate sources in the pointing (see Section~\ref{sec:SED}). The triangles mark the masses for the given colour index, derived from the 1 Gyr solar metallicity isochrone of the \citet{Bar98} low-mass stellar evolution models. The arrows mark the maximum reddening vector, assuming a distance of $1$ kpc.} \label{fig:cc_wts} \end{figure} To perform the SED fitting, we first put all the observed photometry to the Vega system (see \citealt{Hew06} and \citealt{Hod09} for conversions). Although the WFCAM photometry is calibrated to $1.5-2\%$ with respect to 2MASS \citep{Hod09}, the 2MASS photometry also carries its own systematic error, so we assumed an extra $3\%$ systematic error added in quadrature to the photometric errors for each source to account for calibration errors between different surveys. We used a simple $\chi^{2}$ fitting routine to compare the data to a set of solar metallicity model magnitudes at an age of 1 Gyr from the stellar evolution models of \citet{Bar98}. We linearly interpolated the model magnitudes onto a regular grid of $5$ K intervals from $1739-6554$ K, to enable a more precise location of the $\chi^{2}$ minimum. If the worst fitting data point in the best $\chi^{2}$ fit was more than a $5\sigma$ outlier, we excluded that data point and re-ran the fitting procedure. This makes the process more robust to exposures taken in eclipse. The errors on the effective temperatures include the formal $1\sigma$ statistical errors from the $\chi^{2}$ fit (which are likely to be under-estimated) plus an assumed $\pm100$ K systematic uncertainty. This error also takes into account the known missing opacity issue in the optical bandpasses in the \citet{Bar98} models. Our M-dwarf sample is conservative. It contains any source with an SED effective temperature $\leq4209$ K, magnitude $J\leq16$ mag and a stellar class morphology flag (as determined by the data reduction pipeline). The maximum effective temperature corresponds to a radius of $0.66{\rm R}_{\odot}$ at the typical field star age of 1 Gyr, according to the stellar evolution models of \citet{Bar98}. We opted to restrict our MEB search to $J\leq16$ mag because the prospects for ground-based radial velocity follow-up are bleak beyond $J=16$ mag ($I\sim18$ mag, \citealt{Aig07}) if we wish to achieve accurate masses and radii that provide useful constraints on stellar evolution models. We found a total of $2,705$ M-dwarf sources in the 19hr field. Table~\ref{tab:photometric} gives the single epoch, deep photometry from SDSS and WFCAM, plus the proper motions from the SDSS DR7 database \citep{Mun04,Mun08} for the systems characterised in this paper. Their SED-derived system effective temperatures, $\rm T_{\rm eff,SED}$ are given in Table~\ref{tab:spec_ind}. 
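A schematic version of this iterative $\chi^{2}$ grid fit is given below; it is a Python sketch with hypothetical inputs, and the single free magnitude offset per grid point (absorbing the unknown distance) is our own simplification rather than a description of the actual routine:
\begin{verbatim}
import numpy as np

def sed_teff(obs_mag, obs_err, grid_teff, grid_mag, clip_sigma=5.0):
    """obs_mag, obs_err : observed magnitudes and errors in each band
    grid_teff : effective temperatures of the model grid (K)
    grid_mag  : model magnitudes, shape (n_teff, n_bands)
    If the worst-fitting band deviates by more than clip_sigma (e.g. an
    in-eclipse exposure), it is dropped and the fit is repeated."""
    use = np.ones(obs_mag.size, dtype=bool)
    while True:
        w = 1.0 / obs_err[use] ** 2
        diff = obs_mag[use] - grid_mag[:, use]
        offset = np.sum(w * diff, axis=1) / np.sum(w)   # free zero-point
        chi2 = np.sum(w * (diff - offset[:, None]) ** 2, axis=1)
        best = int(np.argmin(chi2))
        resid = np.abs(obs_mag - grid_mag[best] - offset[best]) / obs_err
        worst = int(np.argmax(np.where(use, resid, -np.inf)))
        if resid[worst] > clip_sigma and use.sum() > 3:
            use[worst] = False
        else:
            return grid_teff[best]
\end{verbatim}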
\begin{table} \centering \begin{tabular}{@{\extracolsep{\fill}}l@{\hspace{3pt}}rrr} \hline \hline Parameter&19b-2-01387&19c-3-01405&19e-3-08413\\ \hline $\alpha_{J2000}$&19:34:15.5&19:36:40.7&19:32:43.2\\ $\delta_{J2000}$&36:28:27.3&36:42:46.0&36:36:53.5\\ $\mu_{\alpha}cos\delta$ ($^{\prime\prime}$/yr)&$0.023\pm0.003$&$-0.002\pm0.004$ &$0.008\pm0.004$\\ $\mu_{\delta}$ ($^{\prime\prime}$/yr)&$0.032\pm0.003$&$-0.001\pm0.004$ &$-0.007\pm0.004$\\ $g$&$19.088\pm0.010$&$20.342\pm0.024$&$20.198\pm0.020$\\ $r$&$17.697\pm0.006$&$18.901\pm0.012$&$18.640\pm0.009$\\ $i$&$16.651\pm0.004$&$17.634\pm0.008$&$17.488\pm0.005$\\ $z$&$16.026\pm0.007$&$16.896\pm0.012$&$16.847\pm0.010$\\ $Z$&$15.593\pm0.005$&$16.589\pm0.007$&$16.156\pm0.006$\\ $Y$&$15.188\pm0.006$&$16.432\pm0.011$&$15.832\pm0.008$\\ $J$&$14.721\pm0.004$&$15.706\pm0.006$&$15.268\pm0.005$\\ $H$&$14.086\pm0.003$&$15.105\pm0.006$&$14.697\pm0.005$\\ $K$&$14.414\pm0.006$&$14.836\pm0.007$&$14.452\pm0.006$\\ \hline \end{tabular} \caption{A summary of photometric properties for the three MEBs, including our photometrically derived effective temperatures and spectral types. The proper motions $\mu_{\alpha}cos\delta$ and $\mu_{\delta}$ are taken from the SDSS DR7 database. SDSS magnitudes $g, r, i$ and $z$ are in AB magnitudes, while the WFCAM $Z, Y, J, H$ and $K$ magnitudes are given in the Vega system. The errors on the photometry are the photon-counting errors and do not include the extra $3\%$ systematic error we add in quadrature when performing the SED-fitting. Conversions of the WFCAM magnitudes to other systems can be found in \citet{Hod09}. Note that the WFCAM $K$-band magnitude for 19b-2-01387 was obtained during an eclipse event and does not represent the total system magnitude.} \label{tab:photometric} \end{table} \subsubsection{Interstellar reddening}\label{sec:redden} The photometry for the 19hr field is not dereddened before performing the SED fitting. The faint magnitudes of our M-dwarf sources implies they are at non-negligible distances and that extinction along the line-of-sight may be significant. This means that our M-dwarf sample may contain hotter sources than we expect. At $J\le16$ mag, assuming no reddening, the WTS is distance-limited to $\sim1$ kpc for the earliest M-dwarfs ($M_{J}=6$ mag at 1 Gyr for M0V, $M_{\star}=0.6{\rm M}_{\odot}$, using the models of \citealt{Bar98}). We investigated the reddening effect in the direction of the 19hr field using a model for interstellar extinction presented by \citet{Dri03}. In this model, extinction does not have a simple linear dependency on distance but is instead a three-dimensional description of the Galaxy, consisting of a dust disk, spiral arms as mapped by HII regions, plus a local Orion-Cygnus arm segment, where dust parameters are constrained by COBE/DIRBE far infrared observations. Using this model, we calculate that $A_{V}=0.319$ mag ($E(B-V)=0.103$ mag) at $1$ kpc in the direction of the 19hr field. We used the conversion factors in Table 6 of \citet{Sch98} to calculate the absorption in the UKIRT and SDSS bandpasses, finding $A_{g}=0.370$ mag, $A_{K}=0.036$ mag, $E(r-i)=0.065$, $E(i-z)=0.059$ and $E(J-H)=0.032$. The reddening affect along the line-of-sight to the field thus appears to be small. We show this maximum reddening vector as an arrow in Figure~\ref{fig:cc_wts}. For the most interesting targets in the WTS (EBs or planet candidates), we obtain low-resolution spectra to further characterise the systems and check their dwarf-like nature (see Section~\ref{sec:indices}). 
Effective temperatures based on spectral analysis suffer less from the effects of reddening because the analysis depends not only on the slope of the continuum but also on the shape of specific molecular features, unlike the SED fitting. Therefore, the SED effective temperatures are only a first estimate and we will later adopt values derived by fitting model atmospheres to low-resolution spectra of our MEBs (see Section~\ref{sec:spt-teff}). \subsection{Eclipse detection}\label{sec:detect} We made the initial detection of our MEBs during an automated search for transiting planets in the WTS light curves, for which we used the Box-Least-Squares (BLS) algorithm, {\sc occfit}, as described in \citet{Aig04} and employed by \citet{Mil08}. The box represents a periodic decrease in the mean flux of the star over a short time scale (an upside-down top hat). The in-occultation data points in the light curves fall into a single bin, $I$, while the out-of-occultation data points form the ensemble $O$. This single bin approach may seem simplistic, but in the absence of significant intrinsic stellar variability, such as star spot modulation, it becomes a valid approximation to an eclipse and is sufficient for the purpose of \emph{detection}. Given the relatively weak signal induced by star spot activity in the $J$-band, we did not filter the light curves for stellar variability before executing the detection algorithm. We ran {\sc occfit} on the M-dwarf sample light curves in the 19h field. Our data invariably suffer from correlated `red' noise; thus, we adjust the {\sc occfit} detection statistic, $S$, which assesses the significance of our detections, with the procedure described by \citet{Pon06b} to derive a new statistic, $S_{\rm red}$. This process is explained in detail for {\sc occfit} detections in \citet{Mil08}. \subsection{Candidate selection} To automatically extract the MEB candidates from the results of running {\sc occfit} on the M-dwarf sample light curves, we required that $S_{\rm red} \ge 5$ and that the detected orbital period must not lie near the common window-function alias at one day, i.e. not in the range $0.99<P<1.005$ days. This gave $561$ light curves to eyeball, during which we removed objects with spurious eclipse-like features associated with light curves near the saturation limit. In total, we found $26$ sources showing significant eclipse features in the 19h field, of which $16$ appear to be detached and have full-phase coverage, with well-sampled primary and secondary eclipses. The detached MEB candidates are marked on the colour-colour plot in Figure~\ref{fig:cc_wts} by the blue filled circles and red filled squares. The orbital periods of the MEBs corresponding to the blue filled circles are given in Table~\ref{tab:others} and their folded light curves are shown in Figures~\ref{fig:others} and~\ref{fig:others2}. The MEBs corresponding to the red filled squares are the subjects of the remaining detailed analysis in this paper. \section{Low-resolution spectroscopic analysis}\label{sec:indices} Low-resolution spectra of our three characterised MEBs, as shown in Figure~\ref{fig:lowres}, permit a further analysis of their composite system properties and provide consistency checks on the main-sequence dwarf nature of the systems. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{lowres_meb} \caption{Low-resolution spectra of our three new MEBs plus a known M-giant star (top spectrum) for comparison.
The TiO absorption band at $7100$\AA ~signifies the onset of the M-dwarf spectral types. The dotted vertical lines, from left to right, mark the Na I, $\rm H\alpha$ and the Na I doublet rest wavelengths in air. The Na I doublet is strong in dwarfs while the Calcium infrared triplet at $8498, 8542$ and $8662$\AA ~is strongest in giants. The deep features at $7594$ and $7685$\AA~ are telluric $O_{2}$ absorption. $\rm H\alpha$ emission is clearly present in all three MEBs.}
\label{fig:lowres} \end{figure}
\subsection{Surface Gravity}
\citet{Sle06} and \citet{Lod11} have shown that the depths of alkali absorption lines between $6300-8825$\AA~ can highlight low surface gravity features in low-mass stars. We used the spectral indices $\rm Na_{8189}$ and $\rm TiO_{7140}$ to search for any giant star contaminants in the MEBs and found that all three MEBs have indices consistent with dwarf star gravity. We note that our low-resolution spectra were not corrected for telluric absorption, which is prevalent in the $\rm Na_{8189}$ region, and thus our measured indices may not be completely reliable. However, a visual inspection of the spectra also reveals deep, clear absorption by the Na I doublet at $8183$\AA~ and $8195$\AA, as highlighted in Figure~\ref{fig:lowres}, which is not seen in giant stars. For comparison, we also observed an M4III giant standard star, [R78b] 115, shown at the top of Figure~\ref{fig:lowres}, with the same set up on the same night. It lacks the deep Na I doublet absorption lines found in dwarfs and its measured spectral indices are $\rm TiO_{7140}=2.0\pm0.2$ and $\rm Na_{8189}=0.97\pm0.04$, which places it in the low-surface gravity region for M4 spectral types in Figure 11 of \citet{Sle06}. The gravity-sensitive spectral index values for our MEBs are given in Table~\ref{tab:spec_ind}.
\subsection{Metallicity}\label{sec:metals}
The profusion of broad molecular lines in M-dwarf spectra, caused by absorbing compounds such as Titanium Oxide and Vanadium Oxide redwards of $6000$\AA ~\citep{Kir91}, makes it difficult to accurately define the continuum level, which complicates M-dwarf metallicity measurements. However, recent work shows that the relative strengths of metal hydride and metal oxide molecular bands in low-resolution optical wavelengths can be used to separate metal-poor subdwarfs from solar-metallicity systems. For example, \citet{Woo09} provided a set of equal metallicity contours in the plane of the CaH2+CaH3 and TiO5 spectral indices defined by \citet{Rei95}, and they mapped the metallicity index $\zeta_{\rm TiO/CaH}$ described by \citet{Lep07} onto an absolute metallicity scale, calibrated by metallicity measurements from well-defined FGK stars with M-dwarf companions, albeit with a significant scatter of $\sim0.3$ dex. \citet{Dhi11} have refined the coefficients for $\zeta_{\rm TiO/CaH}$ after finding a slight bias for higher metallicity in early M-dwarfs. We measured the CaH2+CaH3 and TiO5 indices in our MEB spectra and compared them with these works. We found that all three of our systems are consistent with solar metallicity. The measured values of the metallicity-sensitive indices for our MEBs are given in Table~\ref{tab:spec_ind}. One should note that further progress has been made in M-dwarf metallicity measurements by moving to the infrared and using both low-resolution $K$-band spectra \citep{Roj10,Mui11} and high-resolution $J$-band spectra \citep{One11,delBur11}.
These regions are relatively free of molecular lines, allowing one to isolate atomic lines (such as Na I and Ca I) and thus achieve a precise continuum placement. However, in the spectra of M-dwarf short period binary systems, one must be aware that the presence of double lines and rotationally-broadened features further increases the uncertainty in their metallicity estimates.
\subsection{H$\alpha$ Emission}
All three of our MEBs show clear $\rm H\alpha$ emission in their low-resolution spectra, although it is not possible to discern if both components are in emission. The equivalent widths of these lines, which are a measure of the chromospheric activity, are reported in Table~\ref{tab:spec_ind}, where a negative symbol denotes emission. H$\alpha$ emission can be a sign of youth, but we do not see any accompanying low-surface gravity features. The strength of the H$\alpha$ emission seen in our MEBs is comparable with other close binary systems (e.g. \citealt{Kra11a}) and thus is most likely caused by high magnetic activity in the systems. None of the systems have equivalent widths $< -8$\AA, which places them in the non-active accretion region of the empirically derived accretion criterion of \citet{Barr03}.
\subsection{Spectral type and effective temperature}\label{sec:spt-teff}
Our low-resolution spectra permit an independent estimate of the spectral types and effective temperatures of the MEBs to compare with the SED fitting values. Initially, we assessed the spectral types using the {\sc Hammer}\footnote{http://www.astro.cornell.edu/$\sim$kcovey/thehammer.html} spectral-typing tool, which estimates MK spectral types by measuring a set of atomic and molecular features \citep{Cov07}. One can inspect the automatic fit by eye and adjust it interactively. For the latest-type stars (K and M), the automated characterisation is expected to have an uncertainty of $\sim2$ subclasses. We found that 19b-2-01387 has a visual best-match with an M2V system, while the other two MEBs were visually closest to M3V systems. M-dwarf studies \citep{Rei95,Giz97} have found that the TiO5 spectral index can also be used to estimate spectral types to an accuracy of $\pm0.5$ subclasses for stars in the range K7V-M6.5V. The value of this index and the associated spectral type (SpT) are given for each of our three MEBs in Table~\ref{tab:spec_ind}. We find a reasonable agreement between the spectral index results, the visual estimates and the SED derived spectral types. \citet{Woo06} derived a relationship between the CaH2 index and the effective temperatures of M-dwarfs in the range $3500$ K $<\rm T_{\rm eff}<4000$ K. Table~\ref{tab:spec_ind} gives the value of this index and the associated effective temperatures, labelled $\rm T_{\rm eff}$ (CaH2), for our three MEBs. \citet{Woo06} do not quote an uncertainty on the relationship, so we assumed errors of $\pm150$ K. Within the assumed errors, the effective temperatures derived from the spectral indices and the SED fitting agree, but the relationship between the CaH2 index and $\rm T_{\rm eff}$ has not been robustly tested for the CaH2 values we have measured. Instead, we have determined the system effective temperatures for our MEBs by directly comparing the observed spectra to cool star model atmospheres using a $\chi^{2}$-minimisation algorithm. This incorporated the observational errors, which were taken from the error spectrum produced during the optimal extraction of the spectra.
We used a grid of NextGen atmospheric models \citep{All97} interpolated to the same resolution as our low-resolution spectra. The models had increments of $\Delta{\rm T}_{\rm eff}=100$ K, solar metallicity and a surface gravity $\log(g)=5.0$ (a typical value for early-type field M-dwarfs), and spanned $5000-8500$\AA. During the fitting, we masked out the strong telluric $\rm O_{2}$ features at $7594, 7685$\AA~ and the H$\alpha$ emission line at $6563$\AA ~as these are not present in the models, although we found that their inclusion had a negligible effect on the results. All the spectra were normalised to their continuum before fitting. We fitted the $\chi^2$-distribution for each MEB with a sixth-order polynomial to locate its minimum. The corresponding best-fitting ${\rm T}_{\rm eff}$ (atmos., adopted) is given in Table~\ref{tab:spec_ind}. Assuming systematic correlation between adjacent pixels in the observed spectrum, we multiplied the formal $1\sigma$ errors from the $\chi^{2}$-fit by $\sqrt{3}$ to obtain the final errors on the system effective temperatures. From here on, our analysis is performed with system effective temperatures derived from model atmosphere fitting. Although our different methods agree within their errors, the model atmosphere fitting is more robust against reddening effects, even if this effect is expected to be small, as discussed earlier.
\begin{table*} \centering \begin{tabular}{@{\extracolsep{\fill}} lrrrrrrrrrr} \hline \hline
Name&${\rm T}_{\rm eff}$&${\rm T}_{\rm eff}$&${\rm T}_{\rm eff}$&SpT&$\rm TiO5$&$\rm CaH2$&$\rm CaH3$&$\rm TiO_{7140}$&$\rm Na_{8189}$&EW(H$\alpha$)\\
&(SED)& (atmos., adopted)& (CaH2)& (TiO5)&&&&&&(\AA)\\
\hline
19b-2-01387&$3494\pm116$&$3590\pm100$&$3586\pm150$&M$2.7\pm0.5$& $0.52$&$0.52$&$0.73$&$1.46$&$0.89$&$-3.2$\\
19c-3-01405&$3389\pm110$&$3307\pm130$&$3514\pm150$&M$2.8\pm0.5$& $0.50$&$0.48$&$0.75$&$1.60$&$0.87$&$-4.3$\\
19e-3-08413&$3349\pm111$&$3456\pm140$&$3569\pm150$&M$2.3\pm0.5$& $0.54$&$0.51$&$0.73$&$1.46$&$0.90$&$-4.1$\\
\hline \end{tabular}
\caption{A summary of the spectral indices, derived effective temperatures and spectral types (SpT) for the three characterised MEBs. The photometric estimates are labelled with (SED). They have the smallest errors, which include the formal uncertainties plus a $100$ K systematic uncertainty, but they potentially suffer from reddening effects and under-estimation of the errors. Our adopted effective temperatures are marked (atmos., adopted). They are derived from comparison with the NextGen model atmosphere spectra \citep{All97} and are more robust against reddening effects. The (TiO5) and (CaH2) labels mark values derived from the spectral index relations of \citet{Rei95} and \citet{Woo06}, respectively. We use $\rm T_{\rm eff}$ (atmos., adopted) for all subsequent analysis in this paper.}
\label{tab:spec_ind} \end{table*}
\section{Light curve analysis}\label{sec:lcanalysis}
Light curves of an eclipsing binary provide a wealth of information about the system, including its orbital geometry, ephemeris, and the relative sizes and relative radiative properties of the stars. We used the eclipsing binary software, {\sc jktEBOP}\footnote{http://www.astro.keele.ac.uk/$\sim$jkt/} \citep{Sou04b,Sou04c}, to model the light curves of our MEBs. {\sc jktEBOP} is a modified version of {\sc EBOP} (Eclipsing Binary Orbit Program; \citealt{Nel72,Pop81,Etz80}).
The algorithm is only valid for well-detached eclipsing binaries with small tidal distortions, i.e.\ near-spherical stars with oblateness $<0.04$ \citep{Pop81}. A first pass fit with {\sc jktEBOP} showed that this criterion is satisfied by all three of our MEBs. The light curve model of a detached, circularised eclipsing binary is largely independent of its radial velocity model, which allowed us to perform light curve modelling and derive precise orbital periods on which to base our follow-up multi-wavelength photometry and radial velocity measurements. The RV-dependent part of the light curve model is the mass ratio, $q$, which controls the deformation of the stars. In our initial analysis to determine precise orbital periods, we assumed spherical stars, which is reasonable for detached systems, but the observed mass ratios (see Section~\ref{sec:RVs}) were adopted in the final light curve analysis. {\sc jktEBOP} depends on a number of physical parameters. We allowed the following parameters to vary for all three systems during the final fitting process: i) the sum of the radii as a fraction of their orbital separation, $(R_{1}+R_{2})/a$, where $R_{j}$ is the stellar radius and $a$ is the semi-major axis, ii) the ratio of the radii, $k=R_{2}/R_{1}$, iii) the orbital inclination, $i$, iv) the central surface brightness ratio, $J$, which is essentially equal to the ratio of the primary and secondary eclipse depths, v) a light curve normalisation factor, corresponding to the magnitude at quadrature phase, vi) $e\cos\omega$, where $e$ is the eccentricity and $\omega$ is the longitude of periastron, vii) $e\sin\omega$, viii) the orbital period, $P$, and ix) the orbital phase zero-point, $T_{0}$, corresponding to the time of mid-primary eclipse. The starting values of $P$ and $T_{0}$ are taken from the original {\sc occfit} detection (see Section~\ref{sec:detect}). In the final fit, the observed $q$ value is held fixed. The reflection coefficients were not fitted; instead, they were calculated from the geometry of the system. The small effect of gravity darkening was accounted for by fixing the gravity darkening coefficients to suitable values for stars with convective envelopes ($\beta=0.32$) \citep{Luc67}. {\sc jktEBOP} allows for a source of third light in the model, whether it be from a genuine bound object or from some foreground or background contamination, so we initially allowed the third light parameter to vary but found it to be negligible in all cases and thus fixed it to zero in the final analysis. Our light curves, like many others, are not of sufficient quality to fit for limb darkening, so we fixed the limb darkening coefficients for each component star. {\sc jktld} is a subroutine of {\sc jktEBOP} that gives appropriate limb darkening law coefficients for a given bandpass based on a database of coefficients calculated from available stellar model atmospheres. We used the PHOENIX model atmospheres \citep{Cla00,Cla04} and the square-root limb darkening law in all cases. Studies such as \citet{VH93} have shown that the square-root law is the most accurate at infrared wavelengths. For each star, we assumed surface gravities of $\log(g)=5$, a solar metallicity and micro-turbulence of $2$ km/s, and used estimated effective temperatures for the component stars: $[T_{\rm eff,1},T_{\rm eff,2}]=$[3500K, 3450K] for 19b-2-01387, $[T_{\rm eff,1},T_{\rm eff,2}]=$[3300K, 3300K] for 19e-3-08413, and $[T_{\rm eff,1},T_{\rm eff,2}]=$[3525K, 3350K] for 19c-3-01405.
Note that we did not iterate the limb darkening coefficients with the final derived values of $T_{1}$ and $T_{2}$ (see Section~\ref{sec:absdim}) as they only differed by $\sim30$ K ($<1\sigma$) from the assumed values. This would be computationally intensive to do and would result in a negligible effect on the final result.
\begin{figure*} \centering \includegraphics[width=0.49\textwidth,clip=true, trim=0cm -0.5cm 0cm 0cm]{19b_2_01387_lc_wts_int} \includegraphics[width=0.49\textwidth]{19b_2_01387_mega}
\caption[19b-2-01387]{{\bf 19b-2-01387} Left top panel: full phase WFCAM $J$-band light curve. Left bottom panel: the INT/WFC $i$-band light curve at secondary eclipse. The solid red and purple lines show the best-fit from {\sc jktEBOP}. The blue data points in the smaller panels show the residuals after subtracting the model. Right: Parameter correlations from Monte Carlo simulations and histograms of individual parameter distributions. The red dashed vertical lines mark the $68.3\%$ confidence interval. There is a strong correlation between the light ratio, the radius ratio, and the inclination (which is skewed), indicating the difficulty in constraining the model even with our high-quality light curves.}
\label{fig:19blc} \end{figure*}
The phase-folded $J$-band light curves for the MEBs and their final model fits are shown in Figures~\ref{fig:19blc},~\ref{fig:19clc} and~\ref{fig:19elc}, while the model values are given in Table~\ref{tab:lcanal}.
\begin{figure*} \centering \includegraphics[width=0.49\textwidth,clip=true, trim=0cm -8cm 0cm 0cm]{19c_3_01405_lc_small} \includegraphics[width=0.49\textwidth]{19c_3_01405_mega}
\caption{{\bf 19c-3-01405} Left: WFCAM $J$-band light curve. Lines and panels as in Figure~\ref{fig:19blc}. The magnitude scale is differential as we have combined light curves from two different WFCAM chips. Right: Monte Carlo results with lines as in Figure~\ref{fig:19blc}. Our inability to constrain the model with follow-up data results in a strong correlation between the radius ratio and light ratio, and parameter distributions that are significantly skewed. There are also degeneracies in the inclination, which is expected given the near-identical eclipse depths.}
\label{fig:19clc} \end{figure*}
\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{19e_3_08413_lc_wts_int_iac80} \includegraphics[width=0.49\textwidth,clip=true, trim=0cm -9cm 0cm 0cm]{19e_3_08413_mega}\\
\caption{{\bf 19e-3-08413} Left top panel: full phase WFCAM $J$-band light curve. Left middle panel: INT/WFC $i$-band light curves of a primary and a secondary eclipse. Left bottom panel: IAC80 $g$-band light curve of a primary eclipse. The solid red, purple, and cyan lines show the best-fit from {\sc jktEBOP}. Right: Parameter correlations from residual permutations, which gave larger errors on the model parameters than the Monte Carlo simulations, indicating time-correlated systematics. There are strong correlations between the light ratio, radius ratio and inclination.
} \label{fig:19elc} \end{figure*}
\begin{table*} \centering \begin{tabular}{lrrr} \hline \hline
Parameter&19b-2-01387&19c-3-01405&19e-3-08413\\ \hline
WTS $J$-band&&&\\
P (days)&$1.49851768\pm0.00000041$&$4.9390945\pm0.0000015$&$1.67343720\pm0.00000048$\\
$T_{0}$ (HJD)&$2454332.889802\pm0.000077$&$2454393.80791\pm0.00022$&$2454374.80821\pm0.00016$\\
($R_{1}+R_{2}$)/a&$0.17818\pm0.00040$&$0.07023\pm0.00035$&$0.1544\pm0.0016$\\
$k$&$0.967\pm0.044$&$0.987\pm0.081$&$0.782\pm0.070$\\
$J$&$0.9307\pm0.0043$&$0.993\pm0.013$&$0.8162\pm0.0084$\\
$i~(^{\circ})$&$88.761\pm0.051$&$89.741\pm0.053$&$87.59\pm0.26$\\
$e\cos\omega$&$-0.00020\pm0.00017$&$0.000060\pm0.000068$&$-0.00014\pm0.00017$\\
$e\sin\omega$&$-0.0007\pm0.0026$&$-0.0041\pm0.0059$&$0.0112\pm0.0062$\\
Normalisation (mag)&$14.64726\pm0.00017$&$0.00003\pm0.00020$&$15.22776\pm0.00020$\\
$R_{1}/a$&$0.0906\pm0.0020$&$0.0354\pm0.0014$&$0.0867\pm0.0027$\\
$R_{2}/a$&$0.0875\pm0.0021$&$0.0348\pm0.0015$&$0.0676\pm0.0040$\\
$L_{2}/L_{1}$&$0.871\pm0.076$&$0.97\pm0.15$&$0.503\pm0.090$\\
$e$&$0.0066\pm0.0026$&$0.0058\pm0.0043$&$0.0114\pm0.0062$\\
$\omega~(^{\circ})$&$268.0\pm1.7$&$180.5\pm90.9$&$91.1\pm1.2$\\
$\sigma_{J}$ (mmag)&5.2&8.4&8.7\\
\hline
INT $i$-band&&&\\
$J$&0.8100&---&0.63\\
$\sigma_{i}$ (mmag)&5.7&---&12.1\\
\hline
IAC80 $g$-band&&&\\
$J$&---&---&0.6455\\
$\sigma_{g}$ (mmag)&---&---&29.9\\
\hline \end{tabular}
\caption{Results from the $J$-, $i$- and $g$-band light curve analysis. Only perturbed parameters are listed. The light curve parameter errors are the $68.3\%$ confidence intervals while the model values are the means of the $68.3\%$ confidence level boundaries, such that the errors are symmetric. $T_{0}$ corresponds to the epoch of mid-primary eclipse for the first primary eclipse in the $J$-band light curve. Errors on 19e-3-08413 are from the residual permutation analysis as they were the largest, indicating time-correlated systematics. $\sigma_{J,i,g}$ give the RMS of the residuals to the final solutions, where all parameters in the fit are fixed to the quoted values and the reflection coefficients calculated from the system geometry.}
\label{tab:lcanal} \end{table*}
\subsection{Error analysis}
{\sc jktEBOP} uses a Levenberg--Marquardt minimisation algorithm \citep{Pre92} for least-squares optimisation of the model parameters; however, the formal uncertainties from least-squares solutions are notorious for underestimating the errors when one or more model parameters are held fixed, due to the artificial elimination of correlations between parameters. {\sc jktEBOP} provides a method for assessing the $1\sigma$ uncertainties on the measured light curve parameters through Monte Carlo (MC) simulations. In these simulations, a synthetic light curve is generated using the best-fitting light curve model at the phases of the actual observations. Random Gaussian noise is added to the model light curve, which is then fitted in the same way as the data. This process is repeated many times and the distribution of the best fits to the synthetic light curves provides the $1\sigma$ uncertainties on each parameter. \citet{Sou05} showed this technique is robust and gives similar results to the Markov Chain Monte Carlo techniques used by others, under the (reasonable) assumption that the best fit to the observations is a good fit. {\sc jktEBOP} can also perform a residual permutation (prayer bead) error analysis, which is useful for assigning realistic errors in the presence of correlated noise \citep{Sou08}.
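For reference, the Monte Carlo procedure just described reduces to a few lines of logic. The sketch below (in Python) assumes generic \texttt{model} and \texttt{fit} callables standing in for the {\sc jktEBOP} light curve model and its least-squares solver; it is an illustration of the scheme, not part of {\sc jktEBOP}.
\begin{verbatim}
import numpy as np

def monte_carlo_errors(phase, sigma, model, fit, best_params, n_sim=10000):
    """Sketch of the Monte Carlo error analysis described in the text.

    model(phase, params) evaluates a light curve; fit(phase, flux, sigma,
    start) performs one least-squares solution.  Both are assumed
    stand-ins for the jktEBOP machinery."""
    template = model(phase, best_params)           # best-fit synthetic curve
    samples = np.empty((n_sim, len(best_params)))
    for k in range(n_sim):
        noisy = template + np.random.normal(0.0, sigma, size=template.shape)
        samples[k] = fit(phase, noisy, sigma, best_params)  # refit noisy curve
    # 68.3% confidence interval about the median for each parameter
    lo, med, hi = np.percentile(samples, [15.85, 50.0, 84.15], axis=0)
    return med, lo, hi
\end{verbatim}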
For each MEB, we have performed both MC simulations (using $10,000$ steps) and a prayer bead analysis. The reported errors are those from the method that gave the largest uncertainties. The correlations between the parameter distributions from the MC and prayer bead analysis are shown in Figures~\ref{fig:19blc},~\ref{fig:19clc} and~\ref{fig:19elc} along with histograms of the distributions of individual parameters. The distributions are not perfectly Gaussian and result in asymmetric errors for the $68.3\%$ confidence interval about the median. As we wish to propagate these errors into the calculation of absolute dimensions, we have symmetrized the errors by adopting the mean of the $68.3\%$ boundaries (the $15.85\%$ and $84.15\%$ confidence limits) as the parameter value and quoting the $68.3\%$ confidence interval as the $\pm1\sigma$ errors. These errors are given in Table~\ref{tab:lcanal} for each MEB. Our follow-up $g$- and $i$-band light curves (where available) were used to check the $J$-band solution by modelling them with the derived $J$-band parameters, but allowing the surface brightness ratio and the light curve normalisation factor to vary. The limb darkening coefficients were changed to those appropriate for the respective $g$- and $i$-bands and the reflection coefficients were again determined by the system geometry. The RMS values of these fits are given in Table~\ref{tab:lcanal} along with the derived $g$- and $i$-band surface brightness ratios for completeness. The $g$- and $i$-band phase-folded data are shown overlaid with the models in Figures~\ref{fig:19blc} and~\ref{fig:19elc}. We find that the $J$-band solutions are in good agreement with the $g$- and $i$-band data.
\subsection{Light ratios}
All three of our MEBs exhibit near equal-depth eclipses, implying that the systems have components with similar mass. This is promising because it suggests relatively large reflex motions that will appear as well-separated peaks in a cross-correlation function from which we derive RVs. However, it is well-known for systems with equal size components that the ratio of the radii, which depends on the depth of the eclipses, is very poorly determined by the light curve \citep{Pop84}, even with the high photometric precision and large number of epochs in the WTS (see \citealt{And80,Sou07d} for other excellent examples of this phenomenon). Conversely, $(R_{1}+R_{2})/a$ is often very well-constrained because it depends mainly on the duration of the eclipses and the orbital inclination of the system. The reason that the ratio of the radii is so poorly constrained stems from the fact that quite different values of $R_{2}/R_{1}$ result in very similar eclipse shapes. Unfortunately, we found that all three of our MEBs presented problems associated with a poorly constrained $R_{2}/R_{1}$, revealed in the initial modelling as either a large skew in the errors on the best-fit parameters or best-fit solutions that were physically implausible. For example, for 19b-2-01387, the initial best-fit gave $L_{2}/L_{1}>1$ and $R_{2}/R_{1}>1$ while $T_{2}/T_{1}<1$. We know from our low-resolution spectroscopy that these stars are very likely to be ordinary main-sequence M-dwarfs and, while their exact radii may be under-estimated by models, they generally obey the trend that less massive stars are less luminous, smaller and cooler. We note that \citet{Sta07} found a temperature reversal in a system of two young brown dwarfs where the less massive component was hotter but smaller and fainter than its companion.
In their case the more massive component, although cooler, had an RV curve and eclipse depth that were consistent. In our cases, the more massive component (smallest $K_{\star}$) comes towards us (blue-shift) after the deepest (primary) eclipse, so it must be the more luminous component. The uncertainty in our modelling is most likely due to insufficient coverage of the mid-eclipse points. However, we can try to use external data as an additional constraint in the fit. Some authors employ a spectroscopically derived light ratio as an independent constraint on $k$ in the light curve modelling \citep{Sou04a,Sou07,Nor94}. {\sc jktEBOP} allows the user to incorporate an input light ratio in the model and propagates the errors in a robust way. The input light ratio adds a point in the flux array at a specific phase \citep{Sou07}. If this is supplied with a very small error, the point is essentially fixed. We have tried several methods to estimate the light ratio for each of our three systems, although we stress here that none of these estimates should be considered definitive. One requires high resolution spectra to extract precise light ratios, via the analysis of the equivalent width ratios of metallic lines, which will be well-separated if observed at quadrature \citep{Sou05}. With a high resolution spectrum, one can disentangle the components of the eclipsing binary and perform spectral index analysis on the separate components (e.g. \citealt{Irw07b}). 19b-2-01387 is our brightest system and consequently has the highest signal-to-noise in our intermediate-resolution spectra. The best spectrum is from the first night of observations. For this system, we estimated the light ratio in three ways: i) by measuring the ratio of the equivalent widths of the lines in the Na I doublet (shown in Figure~\ref{fig:Na_doublet}), ii) by using the two-dimensional cross-correlation algorithm, {\sc todcor} \citep{Zuc94}, which weights the best-matching templates by the light ratio, and iii) by investigating the variation in the goodness-of-fit for a range of input light ratios in the model.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{19b_2_01387_1_shifts}
\caption{{\bf 19b-2-01387}: A high signal-to-noise intermediate resolution spectrum taken near quadrature phase of 19b-2-01387 in the Na I doublet wavelength region, which we used to measure equivalent widths to estimate the light ratio. The purple vertical lines show the rest frame wavelengths of the doublet at $\lambda8183.27$\AA, $\lambda8194.81$\AA. The red lines mark the doublet for the primary object and the green lines mark the secondary doublet lines, based on the RVs derived in Section~\ref{sec:RVs}.}
\label{fig:Na_doublet} \end{figure}
For the first method, the {\sc iraf.splot} task was used to measure the equivalent width of the Na I doublet feature with rest wavelength $8183.27$\AA ~for each star. Note that this assumes the components have the same effective temperature. The ratio was $EW(2)/EW(1)=0.3582/0.4962 = 0.7219$. In the second method, we found that only the spectrum from the first night contained sufficient SNR to enable {\sc todcor} to correctly identify the primary and secondary components. It is known that {\sc todcor} does not perform as well for systems with similar spectral features \citep{Sou07a}, so we do not use it to derive RVs for our nearly equal mass systems. The {\sc todcor} estimated light ratio was $L_{2}/L_{1}=0.846$.
In the final method, we iterated {\sc jktEBOP} across a grid of initial light ratios between $0.6$ and $1.1$, in steps of 0.01, with very small errors while allowing all our usual parameters to vary. The resulting $\chi^{2}$-distribution is not well-behaved. There is a local and a global minimum at $L_{2}/L_{1}=0.72$ and $L_{2}/L_{1}=0.97$, respectively, but the global minimum is bracketed on one side by a significant jump to a much larger $\chi^{2}$, suggesting numerical issues. We opted to use the light ratio derived with {\sc todcor} as the input to the model. This value lies half-way between the two minima of the $\chi^{2}$ distribution, so we supplied it with a $\sim15\%$ error to allow the parameter space to be explored, given the uncertainty in our measurement. Our follow-up $i$-band data of a single secondary eclipse also prefer a light ratio less than unity, but the lack of phase coverage does not give a well-constrained model. The resulting parameter distributions, shown in Figure~\ref{fig:19blc}, show a strong correlation between the light ratio and $R_{2}/R_{1}$, as expected. The resulting $1\sigma$ error boundary for the light ratio, which is computed from $k$ and $J$, is in broad agreement with the methods used to estimate it. For 19e-3-08413, we obtained additional $i$-band photometry of a primary and secondary eclipse, plus a further primary eclipse in the $g$-band. Here, we have estimated the light ratio by fitting our two datasets in these wavebands separately, using appropriate limb darkening coefficients for the $i$- and $g$-bands in each case, and allowing all our usual parameters to vary. We find best-fit values from the $i$- and $g$-bands of $L_{2}/L_{1}=0.29$ and $L_{2}/L_{1}=0.36$, respectively. This confirms a light ratio less than unity, but as the light ratio depends on the surface brightness ratio, which in turn is wavelength dependent, we adopted an input light ratio of $L_{2}/L_{1}=0.29$ with an error of $\pm0.5$ in the final fit to the $J$-band data. Note we chose to use the $i$-band value as it is closer in wavelength to the $J$-band and the light curve was of higher quality. In the case of 19c-3-01405, we could not derive a light ratio from the low SNR spectra, nor do we have follow-up $i$-band photometry (due to time scheduling constraints). The eclipses are virtually identical so we supplied an input light ratio of $L_{2}/L_{1}=1.0$ with an error of $50\%$. Unfortunately, the final error distributions for the parameters are still quite skewed, as shown in Figure~\ref{fig:19clc}.
\subsection{Star spots}
For 19e-3-08413, we found that the residual permutation analysis gave larger errors, indicating time-correlated systematics. We have not allowed for spot modulation in our light curve model, thus the residual systematics may have a stellar origin. As mentioned previously, we expect star spot modulation in the $J$-band to be relatively weak as the SEDs of the spot and the star at these wavelengths are more similar than at shorter wavelengths. It is difficult to test for the presence of spots in the $g$- and $i$-band data as we do not have suitable coverage out-of-eclipse. We only have full-phase out-of-eclipse observations in a single bandpass ($J$), therefore any physical spot model will be too degenerate between temperature and size to be useful. Furthermore, our $J$-band data span nearly four years, yet spot size and location are expected to change on much shorter timescales, which leads to a change in the amplitude and phase of their sinusoidal signatures.
Stable star spot signatures over the full duration of our observations would be unusual. The WTS observing pattern therefore makes it difficult to robustly fit simple sinusoids, as one would need to break the light curve into many intervals in order to have time spans where the spots did not change significantly (e.g. three week intervals), and these would consequently consist of few data points. Nevertheless, we have attempted to test for spot modulation in a very simplistic manner by fitting the residuals of our light curve solutions as a function of time ($t$) with the following sinusoid:
\begin{equation} f(t)=a_{0}+a_{1}\sin(2\pi (t/a_{2}) + a_{3}), \label{eqn:sine} \end{equation}
where the systemic level ($a_{0}$), amplitude ($a_{1}$), and phase ($a_{3}$) were allowed to vary in the search for the best-fit, while the period ($a_{2}$) was held fixed at the orbital period as we expect these systems to be synchronised (see Table~\ref{tab:dimensions} for the theoretical synchronisation timescales). Once the best-fit was found, the values were used as starting parameters for the {\sc idl} routine {\sc mpfitfun}, to refine the fit and calculate the errors on each parameter. Table~\ref{tab:sine} summarises our findings.
\begin{table*} \centering \begin{tabular}{lrrrrrrrr} \hline \hline
Name&Amplitude&Phase&$\gamma$&$\chi^{2}_{\nu,\rm before}$&$\chi^{2}_{\nu,\rm after}$&RMS$_{\rm before}$&RMS$_{\rm after}$\\
&(mmag)&&(mmag)&&&(mmag)&(mmag)\\
\hline
19b-2-01387&$1.83\pm0.23$&$2.53\pm0.012$&$0.19\pm0.15$&1.11&1.04&5.2&4.9\\
19c-3-01405&$0.22\pm0.27$&$-1.5\pm1.3$&$0.23\pm0.20$&0.87&0.87&8.4&8.4\\
19e-3-08413&$3.47\pm0.32$&$-0.143\pm0.050$&$0.39\pm0.22$&1.32&1.19&7.8&7.5\\
\hline \end{tabular}
\caption{Results of modelling the light curve model residuals with the simple sinusoid defined by Equation~\ref{eqn:sine}, to test for the presence of spot modulation. The terms `before' and `after' refer to the reduced $\chi^{2}$ and RMS values before subtracting the best-fit sine curve and then after the subtraction. Note: mmag = $10^{-3}$ mag. The RMS$_{\rm before}$ value for 19e-3-08413 differs from Table~\ref{tab:lcanal} because one data point was clipped as a significant outlier.}
\label{tab:sine} \end{table*}
There is evidence to suggest a low-level synchronous sinusoidal modulation in 19b-2-01387 and 19e-3-08413 with amplitudes of $\sim1.8-3.5$ mmag, but we do not find significant modulation for our longest period MEB (19c-3-01405). The modulation represents a source of systematic error that, if modelled and accounted for, could reduce the errors on our radius measurements. However, with only one passband containing out-of-eclipse variation, we cannot provide a useful non-degenerate model. Good-quality out-of-eclipse monitoring is required and, given that spot modulation evolves, contemporaneous observations are needed, preferably taken at multiple wavelengths to constrain the spot temperatures \citep{Irw11}. It is surprising that the apparent spot modulation in our MEBs persists over the long baseline of the WTS observations and perhaps an alternative explanation lies in residual ellipsoidal variations from tidal effects between the two stars. We note here that our limiting errors in comparing these MEBs to the mass-radius relationship in Section~\ref{sec:mrrel} are on the masses, not the radii.
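As a concrete illustration, the residual fit of Equation~\ref{eqn:sine} can be reproduced in a few lines; the sketch below (in Python) uses scipy.optimize.curve\_fit in place of the grid search plus {\sc mpfitfun} refinement described above, and simply holds the period fixed at the orbital value.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_spot_sinusoid(t, resid, resid_err, period):
    """Fit f(t) = a0 + a1*sin(2*pi*(t/a2) + a3) to light curve residuals,
    with the period a2 held fixed at the orbital period (synchronised
    rotation).  A sketch standing in for the IDL mpfitfun call."""
    def sine(t, a0, a1, a3):
        return a0 + a1 * np.sin(2.0 * np.pi * (t / period) + a3)

    p0 = [0.0, np.std(resid), 0.0]         # systemic level, amplitude, phase
    popt, pcov = curve_fit(sine, t, resid, p0=p0, sigma=resid_err,
                           absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))    # best-fit values and 1-sigma errors
\end{verbatim}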
\section{Radial velocity analysis}\label{sec:RVs}
To extract the semi-amplitudes ($K_{1}, K_{2}$) and the centre-of-mass (systemic) velocity, $\gamma$, of each MEB system, we modelled the RV data using the {\sc idl} routine {\sc mpfitfun} \citep{Mark09}, which uses the Levenberg--Marquardt technique to solve the least-squares problem. The epochs and periods were fixed to the photometric solution values as these are extremely well-determined from the light curve. Circular orbits were assumed ($e=0$) for all three systems as the eccentricity was negligible in all light curve solutions. We fitted the primary RV data first using the following model:
\begin{equation} RV_{1} = \gamma - K_{1}\sin(2\pi\phi), \end{equation}
where $\phi$ is the phase, calculated from the light curve solution, and $K_{1}$ is the semi-amplitude. To obtain $K_{2}$, we then fitted the secondary RV data points using
\begin{equation} RV_{2} = \gamma + K_{2}\sin(2\pi\phi), \end{equation}
this time fixing $\gamma$ to the value determined from the primary RV data. Each RV measurement was weighted by the RV error given by {\sc iraf.fxcor}, and these errors were then scaled until the reduced $\chi^{2}$ of the model fit was unity. The RMS of the residuals is quoted alongside the derived parameters in Table~\ref{tab:rv_anal}, and is treated as the typical error on each RV data point. The RMS ranges from $\sim2-5$ km/s between the systems and, for the magnitudes of our systems, matches the predictions of \citet{Aig07}, who calculated the limiting RV accuracy for ISIS on the WHT when using 1-hour exposures and an intermediate resolution grating centred on $8500$\AA. The RV curves for the primary and secondary components of the three MEBs are shown in Figure~\ref{fig:RVs} along with the residuals of each fit. The error bars are the scaled errors from {\sc iraf.fxcor} and serve as an indicator of the signal-to-noise in the individual spectra and the degree of mismatch with the best template.
\begin{figure} \centering \includegraphics[width=0.49\textwidth]{19b_2_01387_rv_meb}\\ \includegraphics[width=0.49\textwidth]{19c_3_01405_rv_meb}\\ \includegraphics[width=0.49\textwidth]{19e_3_08413_rv_meb}
\caption{Primary and secondary RV curves for the MEBs. Top: {\bf 19b-2-01387}; Middle: {\bf 19c-3-01405}; Bottom: {\bf 19e-3-08413}. The solid black circles are RV measurements for the primary star, while open circles denote the secondary star RV measurements. The solid red lines are the model fits to the primary RVs and the dashed green lines are the fits to the secondary RVs, fixed to the systemic velocity of their respective primaries. The horizontal dotted lines mark the systemic velocities. The error bars are from {\sc iraf.fxcor} but are scaled so that the reduced $\chi^{2}$ of the model fit is unity. They are merely an indication of the signal-to-noise of the individual spectra and the mismatch between the template and data. Under each RV plot is a panel showing the residuals of the best-fits to the primary and secondary RVs. Note the change in scale for the y-axis.
The typical RV error for each component is given in Table~\ref{tab:rv_anal} by the RMS of their respective residuals.}
\label{fig:RVs} \end{figure}
\begin{table} \centering \begin{tabular}{lrrr} \hline \hline
Parameter&19b-2-01387&19c-3-01405&19e-3-08413\\ \hline
$K_{1}$ (km/s)&$90.7\pm1.6$&$55.2\pm2.2$&$72.1\pm2.0$\\
$K_{2}$ (km/s)&$94.0\pm2.3$&$60.2\pm1.4$&$95.2\pm3.0$\\
$\gamma$ (km/s)&$-70.7\pm1.3$&$-4.8\pm2.0$&$43.8\pm1.8$\\
$\rm RMS_{1}$ (km/s)&1.8&3.7&2.7\\
$\rm RMS_{2}$ (km/s)&5.4&2.5&5.0\\
$q$ &$0.965\pm0.029$&$0.917\pm0.042$&$0.757\pm0.032$\\
$a\sin i$ $({\rm R}_{\odot})$ &$5.472\pm0.083$&$11.27\pm0.25$ &$5.53\pm0.12$\\
$M_{1}\sin^{3}i$ (${\rm M}_{\odot}$)&$0.498\pm0.019$&$0.410\pm0.021$&$0.462\pm0.025$\\
$M_{2}\sin^{3}i$ (${\rm M}_{\odot}$)&$0.480\pm0.017$&$0.376\pm0.023$&$0.350\pm0.018$\\
\hline \end{tabular}
\caption[RV Analysis]{Results from radial velocity analysis.}
\label{tab:rv_anal} \end{table}
\section{Absolute dimensions and space velocities}\label{sec:absdim}
Combining the results of the light curve and RV curve modelling allows us to derive the absolute masses and radii of our MEB components. Table~\ref{tab:dimensions} gives these dimensions along with the separations, individual effective temperatures, surface gravities, and bolometric luminosities for each binary system. The masses and radii lie within the ranges $0.35-0.50{\rm M}_{\odot}$ and $0.37-0.5{\rm R}_{\odot}$ respectively, and span orbital periods from $1-5$ days. The derived errors on the masses and radii are $\sim3.5-6.4\%$ and $\sim2.7-5.5\%$, respectively.
\begin{table} \centering \begin{tabular}{@{\extracolsep{\fill}}l@{\hspace{2pt}}r@{\hspace{7pt}}r@{\hspace{7pt}}r@{\hspace{1pt}}} \hline \hline
Parameter&19b-2-01387&19c-3-01405&19e-3-08413\\ \hline
$M_{1}$ (${\rm M}_{\odot}$) &$0.498\pm0.019$&$0.410\pm0.023$&$0.463\pm0.025$\\
$M_{2}$ (${\rm M}_{\odot}$) &$0.481\pm0.017$&$0.376\pm0.024$&$0.351\pm0.019$\\
$R_{1}$ (${\rm R}_{\odot}$) &$0.496\pm0.013$&$0.398\pm0.019$&$0.480\pm0.022$\\
$R_{2}$ (${\rm R}_{\odot}$) &$0.479\pm0.013$&$0.393\pm0.019$&$0.375\pm0.020$\\
a (${\rm R}_{\odot}$) &$5.474\pm0.083$&$11.27\pm0.27$&$5.54\pm0.12$\\
$\log(g_{1})$&$4.745\pm0.039$ &$4.851\pm0.055$&$4.742\pm0.053$\\
$\log(g_{2})$&$4.760\pm0.035$ &$4.825\pm0.064$&$4.834\pm0.051$\\
$\rm T_{\rm eff,1}$ (K) &$3498\pm100$ &$3309\pm130$ &$3506\pm140$\\
$\rm T_{\rm eff,2}$ (K) &$3436\pm100$ &$3305\pm130$ &$3338\pm140$\\
$L_{\rm bol,1} (\rm L_{\sun})$&$0.0332\pm0.0042$&$0.0172\pm0.0031$&$0.0314\pm0.0058$\\
$L_{\rm bol,2} (\rm L_{\sun})$&$0.0289\pm0.0037$&$0.0166\pm0.0031$&$0.0167\pm0.0033$\\
$M_{\rm 1,bol}$ &$8.45\pm0.14$&$9.16\pm0.20$&$8.51\pm0.19$\\
$M_{\rm 2,bol}$ &$8.60\pm0.14$&$9.20\pm0.20$&$9.26\pm0.23$\\
$V_{\rm 1rot,sync}$ (km/s) &$16.73\pm0.45$&$4.08\pm0.19$&$14.51\pm0.55$\\
$V_{\rm 2rot,sync}$ (km/s) &$16.73\pm0.45$&$4.01\pm0.20$&$11.31\pm0.70$\\
$t_{\rm sync}$ (Myrs) &$0.05$&$6.3$&$0.1$\\
$t_{\rm circ}$ (Myrs) &$2.6$&$1480$&$4.0$\\
$d_{\rm adopted}$ (pc) &$545\pm29$&$645\pm53$&$610\pm52$\\
U (km/s) &$-63.6\pm7.0$&$-2.4\pm9.0$&$30.9\pm8.6$\\
V (km/s) &$1.0\pm7.8$&$1.3\pm12.2$&$-10.2\pm11.8$\\
W (km/s) &$-37\pm6.4$&$-4.2\pm8.5$&$30.1\pm8.1$\\
\hline \end{tabular}
\caption{Derived properties for the three MEBs. $V_{\rm rot,sync}$ are the rotational velocities assuming the rotation period is synchronised with the orbital period.
$t_{\rm sync}$ and $t_{\rm circ}$ are the theoretical tidal synchronisation and circularisation timescales from \citet{Zah75,Zah77}.}
\label{tab:dimensions} \end{table}
Eclipsing binaries are one of the first rungs on the Cosmic Distance Ladder and have provided independent distance measurements within the Local Group, e.g.\ to the Large Magellanic Cloud and the Andromeda Galaxy \citep{Gui98,Rib05,Bon07}. The traditional method for measuring distances to eclipsing binaries is to compute the bolometric magnitude using the luminosity, radius and effective temperature found from the light curve and RV curve analysis. This is combined with a bolometric correction and the system apparent magnitude to compute the distance. While this can yield quite accurate results, the definitions for effective temperature and the zero points for the absolute bolometric magnitude and the bolometric correction must be consistent \citep{Bes98,Gir02}. However, we have opted to use a different method to bypass the uncertainties attached to bolometric corrections. We used {\sc jktabsdim} \citep{Sou05b}, a routine that calculates distances using empirical relations between surface brightness and effective temperature. These relations are robustly tested for dwarfs with ${\rm T}_{\rm eff}>3600$ K and there is evidence that they are valid in the infrared to $\sim3000$ K \citep{Ker04}. The scatter around the calibration of the relations in the infrared is at the $1\%$ level. The effective temperature scale used for the EB analysis and that used to calibrate its relation with surface brightness should be the same to avoid systematic errors, but this is a more relaxed constraint than that required by bolometric correction methods \citep{Sou05b}. The infrared $J, H$ and $K$-bands are relatively unaffected by interstellar reddening but we have shown in Section~\ref{sec:redden} that we expect a small amount. In the distance determination, we have calculated the distances at zero reddening and at the maximum reddening ($E(B-V)=0.103$ at 1 kpc for early M-dwarfs with $J\leq16$ mag). Our adopted distance, $d_{\rm adopted}$, reported in Table~\ref{tab:dimensions}, is the mid-point of the minimum and maximum distance values at the boundaries of their individual errors, which includes the propagation of the effective temperature uncertainties. The MEBs lie at distances of $\sim550-650$ pc. With a full arsenal of kinematic information (distance, systemic velocities, proper motions and positions) we can now derive the true space motions, $UVW$, for the MEBs and determine whether they belong to the Galactic disk or halo stellar populations. We used the method of \citet{Joh87} to determine $UVW$ values with respect to the Sun (heliocentric) but we adopt a left-handed coordinate system to be consistent with the literature, that is, $U$ is positive away from the Galactic centre, $V$ is positive in the direction of Galactic rotation and $W$ is positive in the direction of the north Galactic pole. We use the prescription of \citet{Joh87} to propagate the errors from the observed quantities and the results are summarised in Table~\ref{tab:dimensions}. Figure~\ref{fig:uvw} shows the MEBs in relation to the error ellipse for the Galactic young disk as defined by \citet{Leg92} ($-20<U<50$, $-30<V<0$, $-25<W<10$ w.r.t.\ the Sun). 19c-3-01405 is consistent, within its errors, with the young disk. 19b-2-01387 is an outlier to the young disk criterion.
Instead, \citet{Leg92} define objects around the edges of the young disk ellipse as members of the young-old disk population, which has a sub-solar metallicity ($-0.5<[m/H]<0.0$). 19e-3-08413 exceeds the allowed $W$ range for the young disk, despite overlap in the $UV$ plane. \citet{Leg92} also assign such objects to the young-old disk group. This suggests that two of our MEBs could be metal-poor, but our spectral index measurements in Section~\ref{sec:indices} are not accurate enough to confirm this. We would require, for example, higher resolution $J$-band spectra to assess the metallicities in detail \citep{One11}. Comparisons with the space motions of solar neighbourhood moving groups do not reveal any obvious associations \citep{Sod93b}.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{UVW_meb}
\caption{The UVW space motions with respect to the Sun for our MEBs. The errors have been propagated according to \citet{Joh87}. The solid ellipses are the error ellipses for the young disk defined by \citet{Leg92}. The dashed vertical lines in the lower plot mark the $W$ boundary within which the young-old disk population is contained \citep{Leg92}.}
\label{fig:uvw} \end{figure}
\section{Discussion}\label{sec:discuss}
\subsection{The mass-radius diagram}\label{sec:mrrel}
Figure~\ref{fig:mr-rel} shows the positions of our MEBs in the mass-radius plane and compares them to literature mass-radius measurements derived from EBs with two M-dwarfs, EBs with an M-dwarf secondary but hotter primary, eclipsing M-dwarf - white dwarf systems, and inactive single stars measured by interferometry. We only show values with reported mass and radius errors comparable to or better than our own errors. The solid line marks the $5$ Gyr, solar metallicity isochrone from the \citet{Bar98} models, with a convective mixing length equal to the scale height ($\rm L_{\rm mix}=H_{\rm P}$), while the dash-dot line shows the corresponding $1$ Gyr isochrone. It is clear that some MEBs, both in the WTS and in the literature, have an excess in radius above the model predictions, and although there is no evidence to say that all MEBs disagree with the models, the scatter in radius at a given mass is clear, indicating a residual dependency on other parameters. \citet{Kni11} measured the average fractional radius excess below $0.7{\rm M}_{\odot}$, but based on the findings of \citet{Chab07} and \citet{Mor10}, split the sample at the fully-convective boundary to investigate the effect of inhibited convection. The dashed line in Figure~\ref{fig:mr-rel} marks the average radius inflation they found with respect to the 5 Gyr isochrone for the fully-convective mass region below $0.35{\rm M}_{\odot}$ and in the partially-convective region above ($7.9\%$ for $>0.35{\rm M}_{\odot}$, but only $4.5\%$ for $<0.35{\rm M}_{\odot}$). The WTS MEBs sit systematically above the 5 Gyr isochrone but appear to have good agreement with the average radius inflation for their mass range. It is interesting to note that we find similar radius excesses to the literature despite using infrared light curves. At these wavelengths, we crudely expect lower contamination of the light curves by sinusoidal star spot signals and less loss of circular symmetry, on account of the smaller difference between the spectral energy distributions of the star and the spots in the $J$-band.
If one could eliminate the $\sim3\%$ systematic errors in MEB radii caused by polar star spots \citep{Mor10} by using infrared data, yet still see a similar excess, this would be evidence for a larger effect from magnetic fields (or another hidden parameter) than currently thought. Unfortunately, the errors on our radii do not allow for a robust claim of this nature, but it is an interesting avenue for the field.
\begin{figure*} \centering \includegraphics[width=\textwidth]{mr_relationship_meb_paper}
\caption{The mass-radius diagram for low-mass stars. The filled circles show literature MEB values with reported mass errors $<6\%$ and radius errors $<6.5\%$. Also shown are literature values for i) the low-mass secondaries of eclipsing binaries with primary masses $>0.6{\rm M}_{\odot}$, ii) M-dwarfs found in M-dwarf - white dwarf eclipsing binaries (MD-WD), and iii) radius measurements of single M-dwarfs from interferometric data. The red squares mark the new WTS MEBs. The diagonal lines show model isochrones from the \citet{Bar98} models ($[m/H]=0$, $Y=0.275$ and $L_{\rm mix}=H_{\rm P}$), while the vertical dotted line marks the onset of fully-convective envelopes \citep{Chab97}. The dashed line shows the 5 Gyr isochrone plus the average radius excess found by \citet{Kni11}, assuming a discontinuity at the fully-convective transition. Above $0.35{\rm M}_{\odot}$, the model is inflated by $7.9\%$, but below it is only inflated by $4.5\%$. The bottom panel shows the radius anomaly, $R_{\rm obs}/R_{\rm model}$, computed using the $5$ Gyr isochrone, and again the dashed line shows the corresponding average radius excess found by \citet{Kni11}. The literature data used in these plots are given in Table~\ref{tab:mrptdata}.}
\label{fig:mr-rel} \end{figure*}
The components of our new MEBs do not seem to converge towards the standard 5 Gyr isochrone as they approach the fully-convective region. In fact, our lowest mass star, which has a mass error bar that straddles the fully-convective boundary, is the most inflated of the six components we have measured. The lower panel of Figure~\ref{fig:mr-rel} illustrates this inflation more clearly by showing the radius anomaly $R_{\rm obs}/R_{\rm model}$ as a function of mass, as computed with the standard $5$ Gyr isochrone. The errors on the radius anomaly include the observed error on the radius and the observed error on the mass (which propagates into the value of $R_{\rm model}$), added in quadrature. The spread in radii at a given mass is clearer here, and we discuss why stars of the same mass could be inflated by different amounts in Section~\ref{sec:mrprel} by considering their rotational velocities. A comparison of the measured radii of all known MEBs to the model isochrones shown in Figure~\ref{fig:mr-rel} might lead one to invoke young ages for most of the systems, because stars with $M_{\star}\lesssim0.7{\rm M}_{\odot}$ are still contracting on the pre-main sequence at ages $\lesssim 200$ Myr and therefore have larger radii. While young stars exist in the solar neighbourhood (as shown by e.g. \citet{Jef93}, who found an upper limit of 10-15 young stars within 25 pc), it is highly unlikely that all of the known MEBs are young. Indeed, the derived surface gravities for our MEBs are consistent with older main-sequence stars. We see H$\alpha$ emission in all three systems, which can be an indicator of youth, but close binary systems are known to exhibit significantly more activity than wide binaries or single stars of the same spectral type (see e.g.
\citealt{Shk10}). We therefore do not have independent evidence to strongly associate the inflated radii of our MEBs with young ages.
\subsection{The mass-$\rm T_{\rm eff}$ diagram}\label{sec:mtrel}
As discussed in Section~\ref{sec:intro}, there is some evidence for a radius-metallicity correlation \citep{Ber06,Lop07} amongst M-dwarfs. Model values for effective temperatures depend on model bolometric luminosities, which are a function of metallicity. Metal-poor stars are less opaque, so model luminosities and effective temperatures increase while the model radii shrink by a small amount \citep{Bar98}. Figure~\ref{fig:mt-rel} shows our MEBs in the mass-$\rm T_{\rm eff}$ plane plus the same literature systems from Figure~\ref{fig:mr-rel} where effective temperatures are available. The two lines show the standard 5 Gyr isochrone of the \citet{Bar98} models for solar metallicity stars (solid line) and for metal-poor stars (dot-dash line).
\begin{figure*} \centering \includegraphics[width=\textwidth]{mt_relationship_meb_paper}
\caption{The mass-$\rm T_{\rm eff}$ diagram for low mass stars. Two different metallicity isochrones from the \citet{Bar98} 1 Gyr models are over-plotted to show the effect of decreasing metallicity. The vertical dotted line marks the fully-convective boundary \citep{Chab97}. The data used in this plot are given in Table~\ref{tab:mrptdata}.}
\label{fig:mt-rel} \end{figure*}
The large errors in the mass-${\rm T}_{\rm eff}$ plane for M-dwarfs mean that it is not well-constrained. Section~\ref{sec:indices} has already highlighted some of the difficulties in constraining effective temperatures and metallicities for M-dwarfs, but one should also note that effective temperatures reported in the literature are determined using a variety of different methods, e.g. broad-band colour indices, spectral indices, or model atmosphere fitting using several competing radiative transfer codes. This also involves a number of different spectral type--${\rm T}_{\rm eff}$ relations, and as \citet{Rey11} have demonstrated, these can differ by up to $500$ K for a given M-dwarf subclass. While the intrinsic scatter in the effective temperatures at a given mass may be caused by metallicity effects, the overall trend is that models predict temperatures that are too hot compared to observed values, especially below $0.45{\rm M}_{\odot}$. Our new MEBs, which we determined to have metallicities consistent with the Sun, also conform to this trend. Furthermore, several studies of the inflated CM Dra system have found it to be metal-poor \citep{Vit97,Vit02}, whereas models would suggest it was metal-rich for its mass, based on its cooler temperature and larger radius (see Table~\ref{tab:mrptdata} for data). In this case, the very precisely measured inflated radius of CM Dra cannot be explained by a high metallicity effect. In fact, the tentative association of two of our new MEBs with the slightly metal-poor young-old disk population defined by \citet{Leg92} would also make it difficult to explain their inflated radii using the metallicity argument. The scatter in the mass-${\rm T}_{\rm eff}$ plane can also arise from spot coverage because very spotty stars have cooler effective temperatures at a given mass, and consequently larger radii for a fixed luminosity. Large spot coverage fractions are associated with high magnetic activity, which is induced by fast rotational velocities.
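The synchronous rotational velocities referred to below follow directly from the component radii and orbital periods under the assumption of tidal locking, $V_{\rm rot,sync}=2\pi R/P_{\rm orb}$. A minimal sketch of this conversion (in Python, with illustrative inputs) is:
\begin{verbatim}
import numpy as np

R_SUN_KM = 6.957e5      # solar radius in km
DAY_S = 86400.0         # seconds per day

def v_rot_sync(radius_rsun, period_days):
    """Equatorial rotation speed (km/s) for a star whose rotation is
    synchronised with the orbit: v = 2*pi*R / P_orb."""
    return 2.0 * np.pi * radius_rsun * R_SUN_KM / (period_days * DAY_S)

# Example: the primary of 19b-2-01387 (R ~ 0.496 R_sun, P ~ 1.4985 d)
print(v_rot_sync(0.496, 1.49851768))   # ~16.7 km/s, consistent with the table
\end{verbatim}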
Table~\ref{tab:dimensions} gives the synchronous rotational velocities of the stars in our MEBs along with their theoretical timescales for tidal circularisation and synchronisation. Among our new systems, 19c-3-01405 contains the slowest rotating stars ($\sim 4$ km/s) on account of its longer orbital period, and its components have stellar radii that are the most consistent with the standard 5 Gyr model. The other faster rotating stars in our MEBs have radii that deviate from the model by more than $1\sigma$. We discuss this tentative trend between radius inflation and rotational velocity (i.e. orbital period, assuming the systems are tidally-locked) in the next section.
\subsection{A mass-radius-period relationship?}\label{sec:mrprel}
In a recent paper, \citet{Kra11a} presented six new MEBs with masses between $0.38-0.59{\rm M}_{\odot}$ and short orbital periods spanning $0.6-1.7$ days. Their measurements, combined with existing literature, revealed that the mean radii of stars in systems with orbital periods less than $1$ day were different at the $2.6\sigma$ level from those at longer periods. Those with orbital periods $<1$ day were systematically larger than the predicted radii by $4.8\pm1\%$, whereas for periods $>1.5$ days the deviation from the \citet{Bar98} models is much smaller ($1.7\pm0.7\%$). The implication is that a very short orbital period, i.e. a very high level of magnetic activity, leads to greater radius inflation, and one then expects the level of radius inflation to decrease at longer periods. Figure~\ref{fig:pr-rel} shows the radius anomaly ($R_{\rm obs}/R_{\rm model}$) as a function of period for our new MEBs plus literature values whose reported errors are compatible with our own measurements ($\sigma_{M_{\rm obs}}<6\%$ and $\sigma_{R_{\rm obs}}<6.5\%$). We used the 5 Gyr, solar metallicity isochrone from the \citet{Bar98} models, with $L_{\rm mix}=H_{\rm P}$, to derive the radius anomalies. The models were linearly interpolated onto a finer grid with intervals of $0.0001{\rm M}_{\odot}$, and the model photospheric radii were calculated using $R_{\rm model}=\sqrt{L_{\rm model}/(4\pi\sigma T_{\rm eff,model}^{4})}$.
\begin{figure*} \centering \includegraphics[width=\textwidth]{pr_relationship_meb}
\caption[The Radius Anomaly as a Function of Orbital Period]{The radius anomaly as a function of orbital period using the 5 Gyr solar-metallicity isochrone from the \citet{Bar98} models. Our new MEBs are shown by the red open squares. Literature radius anomalies with radius errors $<6.5\%$ are also plotted. The errors are a quadrature sum of the measured radius error plus a propagated error from the observed mass which determines the model radius. The dashed and dotted lines show the best-fit from a straight-line and an exponentially decaying model to the data, respectively. The coefficients and goodness of fit for these fits are given in Table~\ref{tab:linefit}. The data used in this plot are given in Table~\ref{tab:mrptdata}.}
\label{fig:pr-rel} \end{figure*}
Despite the small sample, we have performed an error-weighted statistical analysis of the period distribution, including our new measurements, to compare to the unweighted analysis presented in \citet{Kra11a}. Table~\ref{tab:stats} reports the weighted mean ($\bar{\mu}$) and weighted sample standard deviation ($\sigma$) of the radius anomaly for three different period ranges: i) all periods, ii) periods $\leq1$ day and iii) periods $>1$ day.
The boundary between the `short' and `long' period samples was chosen initially to match the analysis by \citet{Kra11a}. A T-test using the weighted mean and variances of the short and long period samples shows that their mean radii are distinct populations at a $4.0\sigma$ significance, in support of \citeauthor{Kra11a}'s findings. However, the significance level is strongly dependent on the chosen period boundary, and is skewed by the cluster of very precisely measured values near $1.5$ days. For example, a peak significance of $4.8\sigma$ is found when dividing the sample at $1.5$ days, but sharply drops to $\sim1\sigma$ for periods of $1.7$ days or longer. At short periods, it rises gradually towards the peak from $1\sigma$ at $0.3$ days. \begin{table} \centering \begin{tabular}{cccc} \hline \hline Period (days)&$\bar{\mu}$&$\pm \frac{\sigma}{\sqrt{N}}$&$\sigma$\\ \hline All&103.6\%&0.5\%&3.2\%\\ \hline $P\leq1.0$&106.1\%&0.9\%&3.5\%\\ \hline $P>1.0$&102.6\%&0.4\%&2.4\%\\ \hline \end{tabular} \caption{A statistical analysis of the mean radius inflation for different period ranges. $\sigma$ is the weighted sample standard deviation.} \label{tab:stats} \end{table} Instead, we have attempted to find a very basic mathematical description for any correlation between radius inflation and orbital period, but we appreciate our efforts are hampered by small number statistics. We fitted the distribution of the radius anomaly as a function of period, using first a linear model and then an exponentially decaying function. We used the {\sc idl} routine {\sc mpfitfun} to determine an error-weighted best fit and the $1\sigma$ errors of the model parameters. The results are reported in Table~\ref{tab:linefit} and the best-fit models are over-plotted in Figure~\ref{fig:pr-rel}, but neither model is a good fit (although the exponential fares moderately better). While there is marginal evidence for greater inflation in the shortest period systems, we find that the expected convergence towards theoretical radius values for longer-period, less active systems is not significantly supported by the available observational data. \begin{table*} \centering \begin{tabular}{lcccccc} \hline \hline Model&$a_{0}$&$a_{1}$&$a_{2}$&$\chi^{2}$&DOF&$\chi_{\nu}^{2}$\\ \hline $R_{\rm obs}/R_{\rm mod}=a_{0}+a_{1}P$&$1.0401\pm0.0017$&$-0.000386\pm0.000086$&--&490.5&46&10.7\\ $R_{\rm obs}/R_{\rm mod}=a_{0}+a_{1}e^{a_{2}P}$&$1.0221\pm0.0027$&$0.089\pm0.015$&$-1.57\pm0.33$&401.6&45&8.9\\ \hline \end{tabular} \caption{Results from an error-weighted modelling of the radius anomaly as a function of period. $a_{i}$ are the coefficients of the models and $P$ is the orbital period in days. Neither of these simple models provides a statistically good fit, indicating a more complex relationship between the radius anomaly and orbital period.} \label{tab:linefit} \end{table*} There are two pertinent systems worth addressing, namely the low-mass eclipsing binaries LSPM J1112+7626 and Kepler-16 \citep{Irw11,Doy11,Ben12}, which were announced after the \citet{Kra11a} study. These systems significantly extended the observed orbital period range, with almost identical 41-day orbital periods, and both containing one fully-convective component ($M_{\star}\sim0.35{\rm M}_{\odot}$, \citealt{Chab97}) and one partially convective component (see Table~\ref{tab:mrptdata}). The radius inflation differs significantly between these two systems, as shown on the right-hand side of Figure~\ref{fig:pr-rel}.
While the more massive, partially-convective component of Kepler-16 is well-described by the 1 Gyr model isochrone of \citet{Bar98} (see Figure~\ref{fig:mr-rel}), the other three stars suffer significant radius inflation, with no obvious correlation between the amount of inflation and the masses, even though one of them is a partially-convective star. This residual inflation, particularly for the fully-convective stars at long periods, may pose a challenge to the magnetic activity hypothesis as the sole reason for discrepancies between models and observations, especially given the extremely high-quality measurements of Kepler-16. However, one should note that other studies have suggested that the presence of a strong magnetic field can alter the interior structure of a low-mass star, such that it pushes the fully-convective mass limit for very active stars to lower values \citep{Mul01,Chab07}, so these stars may still suffer from a significant inhibition of convective flow. The radius anomaly raises concern over the usefulness of the known MEBs in calibrating models for the evolution of single M-dwarf stars that are the favoured targets of planet-hunting surveys searching for habitable worlds. \citet{Kra11a} argue that the high-activity levels in very close MEBs make them poor representatives of typical single low-mass stars and that the observed radius discrepancies should not be taken as an indictment of stellar evolution models. However, we have seen that radius inflation remains in MEB systems with low magnetic activity and, furthermore, the inflated components of LSPM J1112+7626 do not exhibit the H$\alpha$ emission that is typically associated with the high activity levels in MEBs with inflated radii. \citet{Wes11} used H$\alpha$ emission as an activity indicator to determine that the fraction of single, active, early M-dwarfs is small ($<5\%$), but increases to $40-80\%$ for M4-M9 dwarfs. Yet, it may be that the amount of activity needed to inflate radii to the measured values in MEBs is small and therefore below the level where observable signatures appear in H$\alpha$ emission. This would then question the reliability of H$\alpha$ emission as an activity indicator, meaning the fraction of `active', single M-dwarfs may be even higher than found in the \citet{Wes11} study. Given that these very small stars are a ripe hunting-ground for Earth-size planets, we must be able to constrain stellar evolution models in the presence of magnetic activity if we are to correctly characterise planetary companions. We note that even the very precisely-calibrated higher-mass stellar evolution models \citep{And91,Tor10} do not reproduce the radii of active stars accurately (see \citealt{Mor09}, who found $4-8\%$ inflation in a G7+K7 binary with a 1.3 day orbit). In order to establish a stringent constraint on the relationship between mass, period and radius, we need further measurements of systems that i) include `active' and `non-active' stars spanning the fully-convective and partially-convective mass regimes, and ii) better sample the range of orbital periods beyond 5 days to explore systems that are not synchronised. We may ultimately find that activity does not account for the full extent of the radius anomaly, and as suggested by \citet{Irw11}, perhaps the equation of state for low-mass stars can still be improved.
On the other hand, perhaps the importance of tidal effects between M-dwarfs in binaries with wider separations has been underestimated, as it has been shown that the orbital evolution of M-dwarf binary systems is not well-described by current models \citep{Nef12}. \section{Conclusions} In this paper, we have presented a catalogue of $16$ new low-mass, detached eclipsing binaries that were discovered in the WFCAM Transit Survey. This is the first time that M-dwarf EBs have been detected and their dynamical properties measured primarily with infrared data. The survey light curves are of high quality, with a per-epoch photometric precision of $3-5$ mmag for the brightest targets ($J\sim13$ mag), and a median RMS of $\lesssim1\%$ for $J\lesssim16$ mag. We have reported the characterisation of three of these new systems using follow-up spectroscopy from ground-based $2-4$ m class telescopes. The three systems ($i=16.7-17.6$ mag) have orbital periods in the range $1.5-4.9$ days, and span masses $0.35-0.50{\rm M}_{\odot}$ and radii $0.38-0.50{\rm R}_{\odot}$, with uncertainties of $\sim 3.5-6.4\%$ in mass and $\sim 2.7-5.5\%$ in radius. Two of the systems may be associated with the young-old disk population as defined by \citet{Leg92}, but our metallicity estimates from low-resolution spectra do not confirm a non-solar metallicity. The radii of some of the stars in these new systems are significantly inflated above model predictions ($\sim3-12\%$). We analysed their radius anomalies along with literature data as a function of the orbital period (a proxy for activity). Our error-weighted statistical analysis revealed marginal evidence for greater radius inflation at very short orbital periods ($<1$ day), but neither a linear nor an exponentially decaying model produced a significant fit to the data. As a result, we found no statistically significant evidence for a correlation between the radius anomaly and orbital period, but we are limited by the small sample of precise mass and radius measurements for low-mass stars. However, it is clear that radius inflation exists even at longer orbital periods in systems with low (or undetectable) levels of magnetic activity. A robust calibration of the effect of magnetic fields on the radii of M-dwarfs is therefore a key component in our understanding of these stars. Furthermore, it is a limiting factor in characterising the planetary companions of M-dwarfs, which are arguably our best targets in the search for habitable worlds and the study of other Earth-like atmospheres. More measurements of the masses, radii and orbital periods of M-dwarf eclipsing binaries, spanning both the fully convective and partially convective mass regimes, for active and non-active stars, across a range of periods extending beyond 5 days, are necessary to provide stringent observational constraints on the role of activity in the evolution of single low-mass stars. However, the influence of spots on the accuracy with which we can determine the radii from light curves will continue to impede these efforts, even in the most careful of cases (see e.g. \citealt{Mor10,Irw11}). This work has studied only one third of the M-dwarfs in the WFCAM Transit Survey. Observations are ongoing and we expect our catalogue of M-dwarf eclipsing binaries to increase. This forms part of the legacy of the WTS and will provide the low-mass star community with high-quality MEB light curves. Furthermore, the longer the WTS runs, the more sensitive we become to valuable long-period, low-mass eclipsing binaries.
These contributions plus other M-dwarf surveys, such as MEarth and PTF/M-dwarfs, will ultimately provide the observational calibration needed to anchor the theory of low-mass stellar evolution. \section*{Acknowledgements} We would like to thank I. Baraffe for providing the model magnitudes for our SED fitting in the appropriate filters, and S. Aigrain for the use of the {\sc occfit} transit detection algorithm. This work was greatly improved by several discussions with J. Irwin, as well as J. Southworth, K. Stassun and R. Jeffries. We thank the referee for their insightful comments which have improved this work and we also thank C. del Burgo for his comments on the original manuscript. JLB acknowledges the support of an STFC PhD studentship during part of this research. We thank the members of the WTS consortium and acknowledge the support of the RoPACS network. GK, BS, PC, NG, and HS are supported by RoPACS, while JLB, BN, SH, IS, DP, DB, RS, EM and YP have received support from RoPACS during this research, a Marie Curie Initial Training Network funded by the European Commission’s Seventh Framework Programme. NL is supported by the national program AYA2010-19136 funded by the Spanish ministry of science and innovation. Finally, we extend our thanks to the fantastic team of TOs and support staff at UKIRT, and all those observers who clicked on U/CMP/2. The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the U.K. This article is based on observations made with the INT, WHT operated on the island of La Palma by the ING in the Spanish Observatorio del Roque de los Muchachos, and with the IAC80 on the island of Tenerife operated by the IAC in the Spanish Observatorio del Teide. This research uses products from SDSS DR7. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work also makes use of NASA’s Astrophysics Data System (ADS) bibliographic services, and the SIMBAD database, operated at CDS, Strasbourg, France. {\sc iraf} is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. \bibliographystyle{mn2e}
\section{Introduction} The behavior of the Riemann zeta function on the critical line has been intensively studied, in particular in relation to the Riemann hypothesis. A natural question concerns the order of magnitude of the moments of $\zeta$: $$\mu_k(T) := \frac{1}{T} \int_0^T |\zeta (1/2 + it)|^{2k} dt = \mathbb{E} [ |\zeta (1/2 + iUT)|^{2k}],$$ where $k$ is a positive real number and $U$ is a uniform random variable in $[0,1]$. It is believed that the order of magnitude of $\mu_k(T)$ is $(\log T)^{k^2}$ for fixed $k$ and $T$ tending to infinity. More precisely, it is conjectured that there exists $C_k > 0$ such that $$\mu_k(T) \sim_{T \rightarrow \infty} C_k (\log T)^{k^2}.$$ An explicit expression of $C_k$ has been predicted by Keating and Snaith \cite{bib:KSn}, using an expected analogy between $\zeta$ and the characteristic polynomials of random unitary matrices. The conjecture has only been proven for $k =1$ by Hardy and Littlewood and for $k = 2$ by Ingham (see Chapter VII of \cite{Tit}). The weaker conjecture $\mu_k(T) = T^{o(1)}$ for fixed $k > 0$ and $T \rightarrow \infty$ is equivalent to the Lindel\"of hypothesis, which states that $|\zeta(1/2 + it) | \leq t^{o(1)}$ when $t$ goes to infinity. The Lindel\"of hypothesis is still an open conjecture which can be deduced from the Riemann hypothesis. Under the Riemann hypothesis, it is known that $(\log T)^{k^2}$ is the right order of magnitude for $\mu_k(T)$. In \cite{Sound}, Soundararajan proves that $$\mu_k(T) = (\log T)^{k^2 + o(1)}$$ for fixed $k > 0$ and $T$ tending to infinity. In \cite{Harper}, Harper improves this result by showing that $$\mu_k(T) \ll_k (\log T)^{k^2},$$ for all $k > 0$ and $T$ large enough, the notation $A \ll_x B$ meaning that there exists $C > 0$ depending only on $x$ such that $|A| \leq C B$. On the other hand, the lower bound $$\mu_k(T) \gg_k (\log T)^{k^2}$$ has been proven by Ramachandra \cite{Ra78, Ra95} and Heath-Brown \cite{HB81}, assuming the Riemann hypothesis, and, for $k \geq 1$, by Radziwi\l\l{} and Soundararajan \cite{RS13}, unconditionally. The moment $\mu_k(T)$ can be written as follows: $$\mu_k(T) = \mathbb{E} [ \exp ( 2k \Re \log \zeta(1/2 + iUT))].$$ Here $\log \zeta$ denotes the unique determination of the logarithm which is well-defined and continuous everywhere except to the left of the zeros and the pole of $\zeta$, and which is real on the interval $(1, \infty)$. It is now natural to also look at similar moments written in terms of the imaginary part of $\log \zeta$: $$\nu_k (T) = \mathbb{E} [ \exp ( 2k \Im \log \zeta(1/2 + iUT))].$$ Note that $\Im \log \zeta$ is directly related to the fluctuations of the distribution of the zeros of $\zeta$ with respect to their ``expected distribution'': we have $$N(t) = \frac{t}{2 \pi} \log \frac{t}{2 \pi e} + \frac{1}{\pi} \Im \log \zeta(1/2 + it) + \mathcal{O}(1),$$ where $N(t)$ is the number of zeros of $\zeta$ with imaginary part between $0$ and $t$. In the present article, we prove, conditionally on the Riemann hypothesis, an upper bound on $\nu_k(T)$ with the same accuracy as the upper bound on $\mu_k(T)$ obtained by Soundararajan in \cite{Sound}. The general strategy is similar, by integrating estimates on the tail of the distribution of $\Im \log \zeta$, obtained by using bounds on moments of sums on primes coming from the logarithm of the Euler product of $\zeta$.
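As a purely numerical illustration of these definitions (it plays no role in the proofs), $\mu_k(T)$ can be estimated by Monte Carlo sampling of $\zeta$ on the critical line. The sketch below uses the mpmath library; the height $T$ and the sample size are arbitrary choices.
\begin{verbatim}
import random
from mpmath import mp, zeta, log

mp.dps = 15  # working precision

def moment_estimate(k, T, n_samples=200, seed=0):
    """Monte Carlo estimate of mu_k(T) = E|zeta(1/2 + iUT)|^{2k},
    with U uniform on [0,1]."""
    random.seed(seed)
    acc = mp.mpf(0)
    for _ in range(n_samples):
        t = random.uniform(0.0, 1.0) * T
        acc += abs(zeta(mp.mpc(0.5, t))) ** (2 * k)
    return acc / n_samples

if __name__ == "__main__":
    T = 1.0e4
    for k in (1, 2):
        # compare the estimate with the conjectured order (log T)^{k^2}
        print(k, moment_estimate(k, T), log(T) ** (k * k))
\end{verbatim}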
The main difference with the paper by Soundararajan \cite{Sound} is that we do not have an upper bound of $\Im \log \zeta$ which is similar to the upper bound of $\log |\zeta|$ given in his Proposition. On the other hand, from the link between $\Im \log \zeta$ and the distribution of the zeros of $\zeta$, we can deduce that $\Im \log \zeta(1/2 + it)$ cannot decrease too fast when $t$ increases. We intensively use this fact in order to estimate $\Im \log \zeta$ in terms of sums on primes. The precise statement of our main result is the following: \begin{theorem} Under the Riemann hypothesis, for all $k \in \mathbb{R}$, $\varepsilon> 0$, $$\mathbb{E} [\exp(2 k \Im \log \zeta(1/2 + iTU))] \ll_{k, \varepsilon} (\log T)^{k^2 + \varepsilon},$$ where $U$ is a uniform variable on $[0,1]$. \end{theorem} The proof of this result is divided into two main parts. In the first part, we bound the tail of the distribution of $\Im \log \zeta(1/2 + iTU)$ in terms of the tail of an averaged version of this random variable. In the second part, we show that this averaged version is close to a sum on primes, whose tail is estimated from bounds on its moments. Combining this estimate with the results of the first part gives a proof of the main theorem. \section{Comparison of $\Im \log \zeta$ with an averaged version} The imaginary part of $\log \zeta$ varies in a smooth and well-controlled way on the critical line when there are no zeros, and has positive jumps of $\pi$ when there is a zero. We deduce that it cannot decrease too fast. More precisely, the following holds: \begin{proposition} For $2 \leq t_1 \leq t_2$, we have $$\Im \log \zeta(1/2 + it_2) \geq \Im \log \zeta(1/2 + i t_1) - (t_2 - t_1) \log t_2 + \mathcal{O}(1). $$ \end{proposition} \begin{proof} It is an easy consequence of Theorem 9.3 of Titchmarsh \cite{Tit}: for example, see Proposition 4.1 of \cite{bib:Naj} for details. \end{proof} We will now define some averaging of $\Im \log \zeta$ around points of the critical line. From the previous proposition, if $\Im \log \zeta$ is large at some point $1/2 + it_0$ of the critical line, then it remains large on some segment $[1/2 + it_0,1/2 + i(t_0 + \delta)]$, which tends to also give a large value of an average of $\Im \log \zeta(1/2 + it)$ for $t$ around $t_0$. Our precise way of averaging is the following. We fix a function $\varphi$ satisfying the following properties: $\varphi$ is real, nonnegative, even, dominated by any negative power at infinity, and its Fourier transform is compactly supported, takes values in $[0,1]$, is even and equal to $1$ at zero. The Fourier transform is normalized as follows: $$\widehat{\varphi}(\lambda) = \int_{-\infty}^{\infty} \varphi(x) e^{-i \lambda x} dx.$$ For $H> 0$ we define an averaged version of $\Im \log \zeta$ as follows: $$I(\tau,H) := \int_{-\infty}^{\infty} \Im \log \zeta (1/2 + i (\tau + t H^{-1}))\varphi (t) dt.$$ The following result holds: \begin{proposition} Let $\varepsilon \in (0,1/2)$. Then, there exist $a, K > 1$, depending only on $\varphi$ and $\varepsilon$, satisfying the following property.
For $T > 100$, $\tau \in [\sqrt{T},T]$, $ K < V < \log T$, $H := K V^{-1} \log T $, the inequalities $$\Im \log \zeta(1/2 + i \tau) \geq V,$$ $$\Im \log \zeta(1/2 + i (\tau - e^r H^{-1})) \geq - 2 V$$ for all integers $r$ between $0$ and $\log \log T$, together imply $$I (\tau + a H^{-1}, H) \geq (1-\varepsilon) V.$$ Similarly, the inequalities $$\Im \log \zeta(1/2 + i \tau) \leq -V,$$ $$\Im \log \zeta(1/2 + i (\tau + e^r H^{-1})) \leq 2 V$$ for all integers $r$ between $0$ and $\log \log T$, together imply $$I (\tau - a H^{-1}, H) \leq -(1-\varepsilon) V.$$ \end{proposition} \begin{proof} First, we observe that $H > 1$ since $K > 1$ and $V < \log T$. We deduce that for all the values of $s$ such that $\Im \log \zeta (1/2 + is)$ is explicitly written in the proposition, $ \sqrt{T} - \log T \leq s \leq T + \log T$: in particular $s > 2$ since $T > 100$, and we can apply the previous proposition to compare these values of $\Im \log \zeta$. If $\Im \log \zeta (1/2 + i \tau) \geq V$, then for all $t \geq 0$, \begin{align*} & \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \geq V - t H^{-1} \log (\tau + tH^{-1}) + \mathcal{O}(1) \\ & \geq V - t K^{-1} V (\log T)^{-1} \log (\tau + t H^{-1}) + \mathcal{O}(1). \end{align*} Since $H > 1$ and $\tau \leq T$, $$\Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \geq V ( 1 - t K^{-1} (\log T)^{-1} \log(T + t)) + \mathcal{O}(1).$$ We have $$\log (T + t) = \log T + \log ( 1 + t/T) \leq \log T + \log (1+t),$$ and then, integrating against $\varphi(t-a)$ from $0$ to $\infty$, \begin{align*} & \int_0^{\infty} \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \\ & \geq V \left[ \int_{0}^{\infty} \varphi(t-a) dt - K^{-1} \int_{0}^{\infty} t \varphi(t-a) dt \right] + \mathcal{O}_{a,\varphi}(1), \end{align*} since $$V K^{-1} (\log T)^{-1} \int_0^{\infty} t\varphi(t-a) \log (1 + t) dt = \mathcal{O}_{a, \varphi}(1),$$ because $V K^{-1} (\log T)^{-1} < 1$ and $\varphi$ is integrable against $t \log (1+t)$ (it is rapidly decaying at infinity). We deduce, since $ V K^{-1} > 1$ and then $\mathcal{O}_{a, \varphi}(1) = \mathcal{O}_{a, \varphi}( V K^{-1})$, and since the integral of $\varphi$ on $\mathbb{R}$ is $\widehat{\varphi}(0) = 1$, $$\int_0^{\infty} \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \geq V ( 1 - o_{\varphi, a \rightarrow \infty}(1) - \mathcal{O}_{a, \varphi}( K^{-1})).$$ If we take $a$ large enough depending on $\varphi$ and $\varepsilon$, and then $K$ large enough depending on $\varphi$, $a$ and $\varepsilon$, we deduce $$\int_0^{\infty} \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \geq V (1 - \varepsilon/2).$$ Now, let us consider the same integral between $-\infty$ and $0$. 
For $0 \leq r \leq \log \log T$ integer and $u \in [0, e^{r} - e^{r-1}]$ for $r \geq 1$, $u \in [0,1]$ for $r = 0$, $$\Im \log \zeta (1/2 + i (\tau - (e^r - u) H^{-1})) - \Im \log \zeta (1/2 + i (\tau - e^r H^{-1})) \geq - u H^{-1} \log T,$$ and then $$\Im \log \zeta (1/2 + i (\tau - (e^r - u) H^{-1})) \geq - 2 V - u K^{-1} V$$ Since $u \leq e^r$, $K > 1$, and $ 1 + e^r - u \geq e^{r-1}$, $$\Im \log \zeta (1/2 + i (\tau - (e^r - u) H^{-1})) \geq - 2 V - e (1 + e^r - u) V,$$ and then, for all $t \in [- e^{\lfloor \log \log T \rfloor}, 0]$, $$\Im \log \zeta (1/2 + i (\tau + t H^{-1})) \geq - \mathcal{O}( V (1+|t|)).$$ If $K$ is large enough depending on $\varphi$, this estimate remains true for $t < - e^{\lfloor \log \log T \rfloor}$, since by Titchmarsh \cite{Tit}, Theorem 9.4, and by the fact that $|t| \geq e^{\log \log T -1 } \gg \log T$, \begin{align*} |\Im \log \zeta (1/2 + i (\tau + t H^{-1})) | & \ll \log ( 2 + \tau + |t| H^{-1}) \ll \log (T + |t|) \\ & = \log T + \log ( 1 + |t|/T) \leq \log T + |t|/T \ll |t|, \end{align*} whereas $V > K > 1$. Integrating against $\varphi(t-a)$, we get $$\int_{-\infty}^0 \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \geq - \mathcal{O} ( I V ),$$ where $$I = \int_{-\infty}^0 (1 + |t|) \varphi(t-a) dt \leq \int_{-\infty}^{-a} (1 + |s|) \varphi(s) ds \underset{a \rightarrow \infty}{\longrightarrow} 0.$$ Hence, for $a$ large enough depending on $\varepsilon$ and $\varphi$, $$\int_{-\infty}^0 \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \geq - \varepsilon V/2.$$ Adding this integral to the same integral on $[0, \infty)$, we deduce the first part of the proposition. The second part is proven in the same way, up to minor modifications which are left to the reader. \end{proof} In the previous proposition, if we take $\tau$ random and uniformly distributed in $[0,T]$, we deduce the following result: \begin{proposition} For $\varepsilon \in (0,1/2)$, $K$ as in the previous proposition (depending on $\varepsilon$ and $\varphi$), $T > 100$, $K < V < \log T$, $H = KV^{-1} \log T$, $U$ uniformly distributed on $[0,1]$, \begin{align*} \mathbb{P} [ | & \Im \log \zeta (1/2 + i UT)| \geq V] \leq \mathbb{P} [ |I(UT, H)| \geq (1-\varepsilon) V ] \\ & + ( 1 + \log \log T) \mathbb{P} [ |\Im \log \zeta (1/2 + i UT)| \geq 2 V] + \mathcal{O}_{\varepsilon,\varphi} (T^{-1/2}). \end{align*} \end{proposition} \begin{proof} We have immediately, by taking $\tau = UT$, \begin{align*} \mathbb{P} [ |\Im \log \zeta (1/2 + i UT)| & \geq V] \leq \mathbb{P} [ I(UT + a H^{-1}, H) \geq (1-\varepsilon) V ] \\ & + \mathbb{P} [ I(UT - a H^{-1}, H) \leq -(1-\varepsilon) V ] \\ & + \sum_{r = 0}^{ \lfloor \log \log T \rfloor} \mathbb{P} [ \Im \log \zeta (1/2 + i (UT + e^r H^{-1}) ) \geq 2 V] \\ & + \sum_{r = 0}^{ \lfloor \log \log T \rfloor} \mathbb{P} [ \Im \log \zeta (1/2 + i (UT - e^r H^{-1}) ) \leq - 2 V] \\ & + \mathcal{O}(T^{-1/2}), \end{align*} the last term being used to discard the event $UT \leq \sqrt{T}$. Now, for $u \in \mathbb{R} $, the symmetric difference between the uniform laws on $[0,T]$ and $[u H^{-1}, T + u H^{-1}]$ is dominated by a measure of total mass $\mathcal{O} ( |u| H^{-1} T^{-1})$. Hence, in the previous expression, we can replace $UT + u H^{-1}$ by $UT$ in each event, with the cost of an error term $\mathcal{O} ( |u| H^{-1} T^{-1}) = \mathcal{O}(|u| T^{-1})$. The values of $|u|$ which are involved are less than $\max(a, \log T)$, and there are $\mathcal{O}(\log \log T)$ of them. 
Hence, we get an error term $\mathcal{O} (T^{-1}(a + \log T) \log \log T) = \mathcal{O}_{\varepsilon, \varphi} (T^{-1/2})$ since $a$ depends only on $\varepsilon$ and $\varphi$. \end{proof} We can now iterate the proposition, applying it for $V, 2V, 4V, \ldots$. After a few manipulations, this gives the following: \begin{proposition} \label{sumV} For $\varepsilon \in (0,1/2)$, $K$ as in the previous proposition (depending on $\varepsilon$ and $\varphi$), $T > 100$, $K< V < \log T$, $H = KV^{-1} \log T$, $U$ uniformly distributed on $[0,1]$, \begin{align*} \mathbb{P} [ | & \Im \log \zeta (1/2 + i UT)| \geq V] \\ & \leq \sum_{r = 0}^{p-1} (1 +\log \log T)^r \mathbb{P} [ |I(UT, 2^{-r} H)| \geq (1-\varepsilon) 2^r V ] + \mathcal{O}_{\varepsilon,\varphi} (T^{-1/3}), \end{align*} where $p$ is the first integer such that $2^p V \geq \log T$. \end{proposition} \begin{proof} We iterate the formula until the value of $V$ reaches $\log T$. The number of steps is dominated by $\log \log T - \log K \leq \log \log T$. Each step gives an error term of at most $\mathcal{O}_{\varepsilon, \varphi} ( ( 1+ \log \log T)^{\mathcal{O} (\log \log T)} T^{-1/2})$. Hence, the total error is $ \mathcal{O}_{\varepsilon,\varphi} (T^{-1/3})$. We deduce \begin{align*} \mathbb{P} [ |\Im & \log \zeta (1/2 + i UT)| \geq V] \leq \sum_{r = 0}^{p-1} (1 +\log \log T)^r \mathbb{P} [ |I(UT, 2^{-r} H)| \geq (1-\varepsilon) 2^r V ] \\ & + (1 +\log \log T)^p \, \mathbb{P} [ | \Im \log \zeta (1/2 + i UT)| \geq 2^p V] + \mathcal{O}_{\varepsilon, \varphi} (T^{-1/3}). \end{align*} Under the Riemann hypothesis, Theorem 14.13 of Titchmarsh \cite{Tit} shows that $| \Im \log \zeta (1/2 + i UT)| \ll (\log \log T)^{-1} \log T$. Hence the probability that $|\Im \log \zeta|$ is larger than $2^p V \geq \log T$ is equal to zero if $T$ is large enough, which can be assumed (for small $T$, we can absorb everything in the error term). \end{proof} \section{Tail distribution of the averaged version of $\Im \log \zeta$ and proof of the main theorem} The averaged version $I(\tau, H)$ of $\Im \log \zeta$ can be written in terms of sums indexed by primes: \begin{proposition} \label{Naj32} Let us assume the Riemann hypothesis. There exists $\alpha > 0$, depending only on the function $\varphi$, such that for all $\tau \in \mathbb{R}$, $0 < H < \alpha \log (2+|\tau|)$, $$ I (\tau, H) = \Im \sum_{p \in \mathcal{P}} p^{-1/2 - i \tau} \widehat{\varphi} (H^{-1} \log p) + \frac{1}{2} \Im \sum_{p \in \mathcal{P}} p^{-1- 2 i \tau} \widehat{\varphi} (2 H^{-1} \log p) + \mathcal{O}_{\varphi} (1), $$ $\mathcal{P}$ being the set of primes. \end{proposition} \begin{proof} This result is an immediate consequence of Proposition 3.2 of \cite{bib:Naj}, which is itself deduced from Lemma 5 of Tsang \cite{Tsang86}. \end{proof} We will now estimate the tail distribution of $I(UT, H)$, where $U$ is uniformly distributed on $[0,1]$, by using upper bounds of the moments of the sums on primes involved in the previous proposition.
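The mechanism behind these estimates can be previewed in its simplest form: if a sum $S$ has $2k$-th moment at most $k! \, \sigma^{2k}$, then Markov's inequality gives $\mathbb{P}(|S| \geq W) \leq W^{-2k} k! \, \sigma^{2k}$, and choosing $k$ close to $W^2/\sigma^2$ produces a bound of order $e^{-W^2/\sigma^2}$. The following small numerical sketch of this optimization uses hypothetical values of $\sigma^2$ and $W$ and is not part of the argument.
\begin{verbatim}
import math

def tail_bound_from_moments(W, sigma2, k_max):
    """Minimize over integers 1 <= k <= k_max the Markov bound
       P(|S| >= W) <= W^(-2k) * k! * sigma2^k,
    computed in logarithmic form to avoid overflow."""
    best_log = float("inf")
    for k in range(1, k_max + 1):
        log_bound = (math.lgamma(k + 1) + k * math.log(sigma2)
                     - 2 * k * math.log(W))
        best_log = min(best_log, log_bound)
    return math.exp(best_log)

if __name__ == "__main__":
    sigma2 = 3.0                   # plays the role of log log T
    for W in (6.0, 10.0, 15.0):    # plays the role of V
        b = tail_bound_from_moments(W, sigma2, k_max=300)
        print(W, b, math.exp(-W * W / sigma2))  # bound vs exp(-W^2/sigma^2)
\end{verbatim}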
We use Lemma 3 of Soundararajan \cite{Sound}, which is presented as a standard mean value estimate by the author (a similar result can be found in Lemma 3.3 of Tsang's thesis \cite{Tsang84}), and which can be stated as follows: \begin{proposition} \label{Lemma3Sound} For $T$ large enough and $2 \leq x \leq T$, for $k$ a natural number such that $x^k \leq T/\log T$, and for any complex numbers $a(p)$ indexed by the primes, we have $$\int_{T}^{2T} \left| \sum_{p \leq x, p \in \mathcal{P}} \frac{a(p)}{p^{1/2 + it}} \right|^{2k} dt \ll T k! \left( \sum_{p \leq x, p \in \mathcal{P}} \frac{|a(p)|^2}{p} \right)^k.$$ \end{proposition} From Propositions \ref{Naj32} and \ref{Lemma3Sound}, we can deduce the following tail estimate: \begin{proposition} For $T$ large enough, $\varepsilon \in (0,1/10)$, $K > 1$ depending only on $\varepsilon$ and $\varphi$, $0 < V < \log T$, $H = K V^{-1} \log T$, we have $$\mathbb{P} [ |I(UT, H)| \geq (1-\varepsilon) V] \ll_{\varepsilon, \varphi} e^{- (1- 3 \varepsilon) V^2 / \log \log T} + e^{- b_{\varepsilon, \varphi} V \log V} + T^{-1/2},$$ where $b_{\varepsilon, \varphi} > 0$ depends only on $\varepsilon$ and $\varphi$. \end{proposition} \begin{proof} We can assume $V \geq 10 \sqrt{ \log \log T}$ and $V$, $T$ larger than any given quantity depending only on $\varepsilon$ and $\varphi$: otherwise the upper bound is trivial. In particular, if we choose $\alpha$ as in Proposition \ref{Naj32}, it depends only on $\varphi$, and then, for $K > 1$ depending only on $\varepsilon$ and $\varphi$, we can assume $V > 2K/\alpha$. From this inequality, we deduce $H < ( \alpha/2) \log T$, which gives $H < \alpha \log (2 + UT)$ with probability $ 1 - \mathcal{O} (T^{-1/2})$. Under this condition, Proposition \ref{Naj32} applies to $\tau = UT$, and we deduce: $$I(UT, H) = \Im S_1 + \Im S_2 + \Im S_3 + \mathcal{O}_{\varphi}(1),$$ where $$S_1 := \sum_{p \in \mathcal{P}, p \leq T^{1/( V \log \log T)} } p^{-1/2 - i UT} \widehat{\varphi} (H^{-1} \log p),$$ $$S_2 := \sum_{p \in \mathcal{P}, p > T^{1/( V \log \log T)} } p^{-1/2 - i UT} \widehat{\varphi} (H^{-1} \log p),$$ $$S_3 := \frac{1}{2} \sum_{p \in \mathcal{P} } p^{-1 - 2 i UT} \widehat{\varphi} (2 H^{-1} \log p).$$ Since we can assume $V$ large depending on $\varepsilon$ and $\varphi$, we can suppose that the term $ \mathcal{O}_{\varphi}(1)$ is smaller than $\varepsilon V/20$: $$|I(UT, H)| \leq |S_1 |+ |S_2| + |S_3 |+ \varepsilon V/20,$$ with probability $ 1 - \mathcal{O} (T^{-1/2})$. We deduce $$ \mathbb{P} [ |I(UT, H)| \geq (1-\varepsilon) V] \leq \mathbb{P} [ |S_1| \geq (1- 1.1 \varepsilon) V] $$ $$+ \mathbb{P} [| S_2 |\geq \varepsilon V/100] + \mathbb{P} [| S_3 | \geq \varepsilon V/100] + \mathcal{O}(T^{-1/2}).$$ We estimate the tail of these sums by applying Markov's inequality to their moments of order $2k$, $k$ being a suitably chosen integer. Since $\widehat{\varphi}$ is compactly supported, all the sums have finitely many non-zero terms, and we can apply, for $T$ large, the lemma to all values of $k$ up to $\gg_{\varphi} \log (T /\log T) / H$, i.e. $\gg_\varphi V/K$, and then $\gg_{\varepsilon, \varphi} V$. For the sum $S_1$, we can even go up to $(V \log \log T)/2$. For $S_2$, we can take $k = \lfloor c_{\varepsilon, \varphi} V \rfloor$, for a suitable $c_{\varepsilon, \varphi} > 0$ depending only on $\varepsilon$ and $\varphi$. The moment of order $2k$ is $$\ll k! \left(\sum_{T^{1/(V \log \log T)} < p \leq e^{\mathcal{O}_\varphi (H)}} p^{-1} \right)^k \leq k^k ( \log \log \log T + \mathcal{O}_{\varepsilon, \varphi}(1) )^k.$$ Hence, $$\mathbb{P} [ | S_2 |\geq \varepsilon V/100] \leq (\varepsilon V/100)^{- 2 \lfloor c_{\varepsilon, \varphi} V \rfloor} ( c_{\varepsilon, \varphi} V ( \log \log \log T + \mathcal{O}_{\varepsilon, \varphi}(1) ) )^{ \lfloor c_{\varepsilon, \varphi} V \rfloor }. $$ Since we have assumed $V \geq 10 \sqrt{\log \log T}$, we have $$(\varepsilon V/100)^{-2} c_{\varepsilon, \varphi} V ( \log \log \log T + \mathcal{O}_{\varepsilon, \varphi}(1) ) \leq V^{-0.99}$$ and $$ \lfloor c_{\varepsilon, \varphi} V \rfloor \geq 0.99 \, c_{\varepsilon, \varphi} V,$$ for $T$ large enough depending on $\varepsilon$ and $\varphi$. Hence $$ \mathbb{P} [ | S_2| \geq \varepsilon V/100] \leq V^{-0.98 c_{\varepsilon, \varphi} V},$$ which is acceptable. An exactly similar proof is available for $S_3$, since we even get a $2k$-th moment bounded by $k! (\mathcal{O}(1))^k$. For $S_1$, the $2k$-th moment is $$\ll k! (\log \log T + \mathcal{O}_{\varepsilon, \varphi}(1))^k$$ for $k \leq (V \log \log T)/2$. Hence, the probability that $|S_1| \geq W := (1 - 1.1 \varepsilon)V$ is $$\ll W^{-2k} k! (\log \log T + \mathcal{O}_{\varepsilon, \varphi}(1))^k. $$ We approximately optimize this expression in $k$. If $V \leq (\log \log T)^2/2$, we can take $k = \lfloor W^2/ \log \log T \rfloor$ since this expression is smaller than $V \log \log T/2$. Notice that since $V \geq 10 \sqrt{\log \log T}$ and $\varepsilon < 1/10$, we have $W \geq 8 \sqrt{\log \log T}$ and $k$ is strictly positive. The probability that $|S_1| \geq W$ is then $$ \ll [W^{-2} (k/e) (\log \log T + \mathcal{O}_{\varepsilon, \varphi}(1))]^k \sqrt{k}. $$ The quantity inside the bracket is smaller than $e^{-(1- (\varepsilon/100))}$ for $T$ large enough depending on $\varepsilon$ and $\varphi$. Hence, in this case, the probability is $$ \leq e^{- (1- (\varepsilon/100)) k} \sqrt{k} \ll_{\varepsilon} e^{- (1- (\varepsilon/50)) k} \ll e^{ - (1- (\varepsilon/50)) ( 1- 1.1 \varepsilon)^2 V^2 / \log \log T}.$$ This is acceptable. If $V > (\log \log T)^2/2$, we take $k = \lfloor V \log \log T/2 \rfloor$. We again get a probability $$ \ll [W^{-2} (k/e) (\log \log T + \mathcal{O}_{\varepsilon, \varphi}(1))]^k \sqrt{k}.$$ Inside the bracket, the quantity is bounded, for $T$ large enough depending on $\varepsilon$ and $\varphi$, by \begin{align*} & W^{-2} (V \log \log T/2e) (1.001 \log \log T) \leq W^{-2} V (\log \log T)^2/5.4 \\ & = V^{-1} (\log \log T)^2 (1-1.1\varepsilon)^{-2}/5.4 \leq 2 (1-1.1\varepsilon)^{-2}/5.4 \leq 1/2. \end{align*} Hence, we get a probability $$\ll 2^{-k} \sqrt{k} \ll e^{-k/2} \ll e^{- V \log \log T/4} \ll e^{-V \log V/4},$$ the last inequality coming from the fact that $V < \log T$ by assumption. This is again acceptable. \end{proof} We then get the following bounds for the tail of $\Im \log \zeta$, which easily imply the main theorem by integrating against $e^{2 k V}$: \begin{proposition} For all $\varepsilon \in (0,1/10)$, $V > 0$, $$\mathbb{P} [ | \Im \log \zeta(1/2 + iUT)| \geq V] \ll_{\varepsilon} e^{(\log \log \log T)^3} e^{- (1-\varepsilon) V^2/ \log \log T} + e^{- c_{\varepsilon} V \log V},$$ where $c_{\varepsilon} > 0$ depends only on $\varepsilon$. \end{proposition} \begin{proof} We fix a function $\varphi$ satisfying the assumptions given at the beginning: this function will be considered as universal, and then we will drop all the dependences on $\varphi$ in this proof.
From Theorem 14.13 of Titchmarsh \cite{Tit}, we can assume $V \ll (\log \log T)^{-1} \log T$ and then $V < \log T$ for $T$ large (otherwise the probability is zero). We can then also assume $T$ larger than any given quantity depending only on $\varepsilon$ (if $T$ is small, $V$ is small), and $V \geq 10 \sqrt{\log \log T}$. Under these assumptions, we can suppose $V > K$ if $K > 0$ depends only on $\varepsilon$, which allows us to apply Proposition \ref{sumV}. The error term $\mathcal{O}_{\varepsilon}(T^{-1/3})$ can be absorbed in $\mathcal{O}_{\varepsilon}( e^{-c_{\varepsilon} V \log V})$ since $V \ll (\log \log T)^{-1} \log T$. The sum in $r$ is, by the previous proposition, dominated by \begin{align*} \sum_{r = 0}^{p-1} (1 + \log \log T)^{r} e^{- (1-3 \varepsilon) (2^r V)^2/ \log \log T} & + \sum_{r = 0}^{p-1} (1 + \log \log T)^{r} e^{- b_{\varepsilon} (2^r V) \log (2^r V)} \\ & + T^{-1/2} \sum_{r = 0}^{p-1} (1 + \log \log T)^{r}, \end{align*} where $b_{\varepsilon} > 0$ depends only on $\varepsilon$. We can assume $V$ large, and then the exponent in the last exponential decreases by at least $b_{\varepsilon} V$ when $r$ increases by $1$, and then by more than $\log ( 2( 1+ \log \log T))$ when $T$ is large enough depending on $\varepsilon$, since $V \geq 10 \sqrt{ \log \log T}$. Hence, each term of the sum is less than half the previous one and the sum is dominated by its first term. This is absorbed in $\mathcal{O}_{\varepsilon}( e^{-c_{\varepsilon} V \log V})$. For the last sum, we observe that $2^{p-1} V < \log T$ by definition of $p$, and then (since we can assume $V > 1$), $p \ll \log \log T$, which gives a term $$\ll T^{-1/2} ( \log \log T) (1 + \log \log T)^{ \mathcal{O} (\log \log T)} \ll T^{-1/3},$$ which can again be absorbed in $\mathcal{O}_{\varepsilon}( e^{-c_{\varepsilon} V \log V})$ since we can assume $V \ll (\log \log T)^{-1} \log T$. For the first sum, we separate the terms for $r \leq 10 \log \log \log T$, and for $r > 10 \log \log \log T$. For $T$ large, the sum of the terms for $r$ small is at most \begin{align*} & \sum_{r = 0}^{\lfloor 10 \log \log \log T \rfloor} ( 1 + \log \log T)^{r} e^{-(1-3 \varepsilon) V^2/ \log \log T} \\ & \ll (1 + 10 \log \log \log T) e^{ 10 \log \log \log T \log ( 1+ \log \log T)} e^{-(1-3 \varepsilon) V^2/ \log \log T} \\ & \ll e^{(\log \log \log T)^3} e^{- (1- 3 \varepsilon) V^2/ \log \log T}. \end{align*} This term is acceptable after changing the value of $\varepsilon$. When $r > 10 \log \log \log T$, we have (for $T$ large, $V \geq 10 \sqrt{\log \log T}$ and $\varepsilon < 1/10$) $$(1-3\varepsilon) (2^r V)^2 / \log \log T \geq 2^{2r} \geq e^{ 20 (\log 2) \log \log \log T} \geq (\log \log T)^{13}.$$ The exponent is multiplied by $4$ when $r$ increases by $1$, and then decreased by more than $3 (\log \log T)^{13}$, whereas the prefactor is multiplied by $1 + \log \log T$. Hence, the term $r = \lfloor 10 \log \log \log T \rfloor + 1$ dominates the sum of all the terms $r > 10 \log \log \log T$, and its order of magnitude is acceptable. \end{proof} \bibliographystyle{plain}
\section{Introduction} When two black holes spiral together and coalesce, they emit gravitational radiation which in general possesses net linear momentum. This accelerates (i.e., ``kicks'') the coalescence remnant relative to the initial binary center of mass. Analytical calculations have determined the accumulated kick speed from large separations until when the holes plunge towards each other \citep{Per62,Bek73,Fit83,FD84,RR89,W92,FHH04,BQW05,DG06}, but because the majority of the kick is produced between plunge and coalescence, fully general relativistic numerical simulations are necessary to determine the full recoil speed. Fortunately, the last two years have seen rapid developments in numerical relativity. Kick speeds have been reported for non-spinning black holes with different mass ratios \citep{HSL06,Baker06,Gonzalez06} and for binaries with spin axes parallel or antiparallel to the orbital axes \citep{Herrmann07,Koppitz07, Baker07}, as well as initial explorations of more general spin orientations \citep{Gonzalez07,Campanelli07a,Campanelli07b}. For mergers with low spin or spins both aligned with the orbital angular momentum, these results indicate maximum kick speeds $<200$~km~s$^{-1}$. Remarkably, however, it has recently been shown that when the spin axes are oppositely directed and in the orbital plane, and the spin magnitudes are high (dimensionless angular momentum ${\hat a}\equiv cJ/GM^2\sim 1$), the net kick speed can perhaps be as large as $\sim 4000$~km~s$^{-1}$ \citep{Gonzalez07,sb07,Campanelli07b}. The difficulty this poses is that the escape speed from most galaxies is $<1000$~km~s$^{-1}$ (see Figure~2 of \citealt{Merritt04}), and the escape speed from the central bulge is even smaller. Therefore, if large recoil speeds are typical, one might expect that many galaxies that have undergone major mergers would be without a black hole. This is in clear contradiction to the observation that galaxies with bulges all appear to have central supermassive black holes (see \citealt{FF05}). It therefore seems that there is astrophysical avoidance of the types of supermassive black hole coalescences that would lead to kicks beyond galactic escape speeds. From the numerical relativity results, this could happen if (1)~the spins are all small, (2)~the mass ratios of coalescing black holes are all much less than unity, or (3)~the spins tend to align with each other and the orbital angular momentum. The low-spin solution is not favored observationally. X-ray observations of several active galactic nuclei reveal relativistically broadened Fe K$\alpha$ fluorescence lines indicative of spins ${\hat a}>0.9$ \citep{Iwasawa96,Fab02,RN03,BR06}. A similarly broad line is seen in the stacked spectra of active galactic nuclei in a long exposure of the Lockman Hole \citep{Streb05}. More generally, the inferred average radiation efficiency of supermassive black holes suggests that they tend to rotate rapidly (\citealt{Sol82,YT02}; see \citealt{marconi04} for a discussion of uncertainties). This is also consistent with predictions from hierarchical merger models (e.g., \citealt{volonteri05}). Mass ratios much less than unity may occur in some mergers, and if the masses are different enough then the kick speed can be small. For example, \citet{Baker07}, followed by \cite{Campanelli07b}, suggest that the spin kick component scales with mass ratio $q\equiv m_1/m_2\leq 1$ as $q^2({\hat a}_2-q{\hat a}_1)/(1+q)^5$, hence for ${\hat a}_1=-{\hat a_2}=1$ the maximum kick speed is $\propto q^2/(1+q)^4$. 
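A quick numerical check of this scaling (a hypothetical sketch that simply evaluates the quoted $q^2/(1+q)^4$ factor, normalized to the equal-mass case) makes the statements that follow concrete.
\begin{verbatim}
def max_kick_scaling(q):
    """Mass-ratio factor q^2/(1+q)^4 of the maximum spin-driven kick
    for anti-aligned, maximal in-plane spins (a1 = -a2 = 1)."""
    return q ** 2 / (1.0 + q) ** 4

if __name__ == "__main__":
    ref = max_kick_scaling(1.0)  # equal-mass reference
    for q in (0.05, 0.1, 0.2, 0.5, 1.0):
        s = max_kick_scaling(q)
        print("q = %.2f  scaling = %.4f  suppression vs q = 1: %.1fx"
              % (q, s, ref / s))
\end{verbatim}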
For $q<0.1$ this scales roughly as $q^2$ and hence kicks are small. However, for $q>0.2$ the maximum kick is within a factor $\sim 3$ of the kick possible for $q=1$. An unlikely conspiracy would thus seem to be required for the masses always to be different by the required factor of several. Some tens of percent of galaxies appear to have undergone at least one merger with mass ratio $>0.25$ within redshift $z<1$ (for recent observational results with different methods, see \citealt{Bell06b,Lotz06}, and for a recent simulation see \citealt{Maller06}). The well-established tight correlations between central black hole mass and galactic properties such as bulge velocity dispersion (see \citealt{FF05} for a review) then suggest strongly that coalescence of comparable-mass black holes should be common. The most likely solution therefore seems to be that astrophysical processes tend to align the spins of supermassive black holes with the orbital axis. This astrophysical alignment is the subject of this {\it Letter}. Here we show that gas-rich mergers tend to lead to strong alignment of the spin axes with the orbital angular momentum and thus to kick speeds much less than the escape speeds of sizeable galaxies. In contrast, gas-poor mergers show no net tendency for alignment, assuming an initially uniform distribution of spin and orbital angular momentum vectors. We demonstrate this aspect of gas-poor mergers in \S~2. In \S~3 we discuss gas-rich mergers, and show that observations and simulations of nuclear gas in galactic mergers suggest that the black holes will be aligned efficiently. We discuss consequences and predictions of this alignment in \S~4. \section{Gas-poor Mergers} Several recent models and observations have been proposed as evidence that some galactic mergers occur without a significant influence of gas. Possible signatures include the metal richness of giant ellipticals (e.g., \citealt{NO07}) and slow rotation and the presence of boxy orbits in the centers of some elliptical galaxies (e.g., \citealt{Bell06a,Naab06}). Consider such a gas-free merger, and assume that we can therefore treat the gradual inspiral of two spinning black holes as an isolated system. As laid out clearly by \cite{Schnittman04}, throughout almost the entire inspiral there is a strong hierarchy of time scales, such that $t_{\rm inspiral}\gg t_{\rm precess}\gg t_{\rm orbit}$. \cite{Schnittman04} therefore derived orbit-averaged equations for the spin evolution in the presence of adiabatic dissipation. Such effects can lead to relaxation onto favored orientations. The question is then whether, with the uniform distribution of orbital and spin directions that seems expected in galactic mergers, there is a tendency to align in such a way that the net kicks are small. Using equations A8 and A10 from \cite{Schnittman04}, we have evolved the angles between the two spin vectors, and between the spins and the orbital angular momentum. We find that for isotropically distributed initial spins and orbits, the spins and orbits at close separation are also close to isotropically distributed (see Figure~\ref{fig:finalangles}). Thus, although (as we confirm) \cite{Schnittman04} showed that for special orientations the spins might align (e.g., for an initial $\cos\theta_1\approx 1$, or as we also discovered, for an initial $\cos\theta_2\approx -1$), the initial conditions resulting in such alignment are special and subtend only a small solid angle. 
The conclusion is that gas-poor mergers alone cannot align spins sufficiently to avoid large kicks due to gravitational radiation recoil. Indeed, \cite{sb07} find that for mass ratios $q>0.25$, spin magnitudes ${\hat a}_1={\hat a_2}=0.9$, and isotropic spin directions, $\sim 8$\% of coalescences result in kick speeds $>1000$~km~s$^{-1}$ and $\sim 30$\% yield speeds $>500$~km~s$^{-1}$. The high maximum speeds inferred by \cite{Campanelli07b} are likely to increase these numbers. We now discuss gas-rich mergers, which can naturally reduce the kick speeds by aligning black hole spins with their orbital axis. \section{Gas-rich Mergers \label{S_wet_mergers}} Consider now a gas-rich environment, which is common in many galactic mergers. The key new element is that gas accretion can exert torques that change the direction but not the magnitude of the spin of a black hole, and that the lever arm for these torques can be tens of thousands of gravitational radii \citep{bp75,np98,na99}. In particular, \citet{np98} and \cite{na99} demonstrate that the black hole can align with the larger scale accretion disk on a timescale that is as short as 1\% of the accretion time. An important ingredient of this scenario is the realization by \citet{pp83} that the warps are transmitted through the disk on a timescale that is shorter by a factor of $1/(2\alpha^2)$ compared with the transport of the orbital angular momentum in flat disks, where $\alpha\sim 0.01-0.1$ is the standard viscosity parameter \citep{ss73}. The question that distinguishes gas-rich from gas-poor mergers is therefore whether the accreted mass is $\sim 0.01-0.1M_{\rm bh}$ during the sinking of the black holes towards the center of the merged galaxy, where $M_{\rm bh}$ is a black hole mass. Numerical simulations show that galactic mergers trigger large gas inflows into the central kiloparsec, which in gas-rich galaxies can result in a $\sim 10^9 \>{\rm M_{\odot}}$ central gas remnant with a diameter of only a few$\times$100~pc \citep{bh91,bh96,mh94,msh05,kazantzidis05}. Such mergers are thought to be the progenitors of ultraluminous infrared galaxies. \citet{kazantzidis05} find that the strong gas inflows observed in cooling and star formation simulations always produce a rotationally supported nuclear disk of size $\sim 1-2$~kpc with peak rotational velocities in the range of 250$-$300 ${\rm km\, s^{-1}}$. The results of numerical simulations are in good agreement with observations, which also show that the total mass of the gas accumulated in the central region of merger galaxies can reach $10^9-10^{10}\>{\rm M_{\odot}}$ and in some cases account for about half of the enclosed dynamical mass \citep{tacconi99}. Observations imply that the cold, molecular gas settles into a geometrically thick, rotating structure with velocity gradients similar to those obtained in simulations and with densities in the range $10^2-10^5\,{\rm cm^{-3}}$ \citep{ds98}. Both observations and simulations of multiphase interstellar matter with stellar feedback show a broad range of gas temperatures, where the largest fraction of gas by mass has a temperature of about $100\,$K \citep{wn01,wn02}. We therefore consider an idealized model based on these observations and simulations. In our model, the two black holes are displaced from the center and embedded within the galactic-scale gas disk.
We are mainly concerned with the phase in which the holes are separated by hundreds of parsecs, hence the enclosed gas and stellar mass greatly exceeds the black hole masses and we can assume that the black holes interact independently with the disk. Based on the results of \citet{escala04,escala05}, \citet{mayer06}, and \citet*{dotti06}, the time for the black holes to sink from these separations to the center of the disk due to dynamical friction against the gaseous and stellar background is $\leq5\times10^7$~yr, which is comparable to the starburst timescale, $\sim10^8$~yr \citep{larson87}. The accretion onto the holes is mediated by their nuclear accretion disks fed from the galactic-scale gas disk at the Bondi rate, ${\dot M}_{\rm Bondi}$, as long as it does not exceed the Eddington rate, ${\dot M_{\rm Edd}}$ \citep{GT04}. Locally, one can estimate the Bondi radius $R_{\rm Bondi}=GM_{\rm bh}/v_g^2\approx 40~{\rm pc}\,(M_{\rm bh}/10^8\,M_\odot)(v_g/100~{\rm km~s}^{-1})^{-2}$ that would be appropriate for a total gas speed at infinity, relative to a black hole, of $v_g$ (we use a relatively large scaling of 100~km~s$^{-1}$ for this quantity to be conservative and to include random motions of gas clouds as well as the small thermal speed within each cloud). The accretion rate onto the holes will then be $\min({\dot M_{\rm Bondi}},{\dot M_{\rm Edd}})$, where ${\dot M}_{\rm Bondi}\approx 1~M_\odot\,{\rm yr}^{-1}\; (v_g/100\,{\rm km\,s}^{-1})^{-3}(n/100\,{\rm cm}^{-3})(M_{\rm bh}/10^8\>{\rm M_{\odot}})^2$. We also note that clearing of a gap requires accretion of enough gas to align the holes with the large-scale gas flow. At this rate, the holes will acquire 1-10\% of their mass in a time short compared to the time needed for the holes to spiral in towards the center or the time for a starburst to deplete the supply of gas. The gas has significant angular momentum relative to the black holes: analogous simulations in a planetary formation environment suggest that the circularization radius is some hundredths of the capture radius (e.g., \citealt{HB91,HB92}). This corresponds to more than $10^5$ gravitational radii, hence alignment of the black hole spin axes is efficient (and not antialignment, since the cumulative angular momentum of the accretion disk is much greater than the angular momentum of the black holes; see \citealt{king05,lp06}). If the black hole spins have not been aligned by the time their Bondi radii overlap and a hole is produced in the disk, further alignment seems unlikely \citep[for a different interpretation see][]{liu04}. The reason is twofold: the shrinking of the binary due to circumbinary torques is likely to occur within $<{\rm few}\times 10^7$~yr \citep{escala04,escala05}, and accretion across the gap only occurs at $\sim 10$\% of the rate it would have for a single black hole \citep{lsa99,ld06,mm06}, with possibly even smaller rates onto the holes themselves. This therefore leads to the gas-poor merger scenario, suggesting that massive ellipticals or ellipticals with slow rotation or boxy orbits have a several percent chance of having ejected their merged black holes but that other galaxy types will retain their holes securely. \section{Predictions, Discussion and Conclusions} We propose that when two black holes accrete at least $\sim 1-10$\% of their masses during a gas-rich galactic merger, their spins will align with the orbital axis and hence the ultimate gravitational radiation recoil will be $<200$~km~s$^{-1}$.
In this section we discuss several other observational predictions that follow from this scenario. The best diagnostic of black hole spin orientation is obtained by examining AGN jets. All viable jet formation mechanisms result in a jet that is initially launched along the spin axis of the black hole. This is the case even if the jet is energized by the accretion disk rather than the black hole spin, since the orientation of the inner accretion disk will be slaved to the black hole spin axis by the Bardeen-Petterson effect \citep{bp75}. At first glance, alignment of black hole spin with the large-scale angular momentum of the gas would seem to run contrary to the observation that Seyfert galaxies have jets that are randomly oriented relative to their host galaxy disks \citep{kinney00}. However, Seyfert morphology is not consistent with recent major mergers \citep{veilleux03}, hence randomly-oriented minor mergers or internal processes (e.g., scattering of a giant molecular cloud into the black hole loss cone) are likely the cause of the current jet directions in Seyfert galaxies. Within our scenario, one will never witness dramatic spin orientation changes during the final phase of black hole coalescence following a gas-rich merger. There is a class of radio-loud AGN known as ``X-shaped radio galaxies'', however, that possess morphologies interpreted precisely as a rapid ($<10^5$\,yr) re-alignment of black hole spin during a binary black hole coalescence \citep{ekers78,dt02,wang03,komossa03b,lr07,cheung07}. These sources have relatively normal ``active'' radio lobes (often displaying jets and hot-spots) but, in addition, have distinct ``wings'' at a different position angle. The spin realignment hypothesis argues that the wings are old radio lobes associated with jets from one of the pre-coalescence black holes, whose spin axis possessed an entirely different orientation to that of the post-coalescence remnant black hole \citep{me02}. If this hypothesis is confirmed by, for example, catching one of these systems in the small window of time in which both sets of radio lobes have active hot spots, it would contradict our scenario unless it can be demonstrated that all such X-shaped radio galaxies originate from gas-poor mergers. However, the existence of a viable alternative mechanism currently prevents a compelling case from being made that X-shaped radio sources are a unique signature of mis-aligned black hole coalescences. The collision and subsequent lateral expansion of the radio galaxy backflows can equally well produce the observed wings \citep{capetti02}. Indeed, there is circumstantial evidence supporting the backflow hypothesis. \citet{kraft05} present a Chandra observation of the X-shaped radio galaxy 3C~403 and find that the hot ISM of the host galaxy is strongly elliptical, with a (projected) eccentricity of $e\sim 0.6$. Furthermore, the wings of the ``X'' are closely aligned with the minor axis of the gas distribution, supporting a model in which the wings correspond to a colliding backflow that has ``blown'' out of the ISM along the direction of least resistance. Although 3C~403 is the only X-shaped radio galaxy for which high-resolution X-ray maps of the hot ISM are available, \citet{capetti02} have noted that a number of X-shaped sources have wings that are oriented along the minor axis of the {\it optical} host galaxy. This suggests that the conclusion of \citet{kraft05} for 3C~403 may be more generally true.
There is one particular system, 0402$+$379, that might provide a direct view of spin alignment in a binary black hole system. Very Long Baseline Array (VLBA) imaging of this radio galaxy by \citet{maness04} discovered two compact flat spectrum radio cores, and follow-up VLBA observations presented by \citet{rodriguez06} showed the cores to be stationary. A binary supermassive black hole is the most satisfactory explanation for this source, with the projected distance between the two black holes being only 7.3\,pc. Within the context of our gas-rich merger scenario, these two black holes already have aligned spins. Existing VLBA data only show a jet associated with one of the radio cores. We predict that, if a jet is eventually found associated with the other radio core, it will have the same position angle as the existing jet. Evidence already exists for alignment of a coalescence remnant with its galactic scale gas disk. \citet{perlman01} imaged the host galaxies of three compact symmetric objects and discovered nuclear gas disks approximately normal to the jet axis. The presence of such a nuclear gas disk as well as disturbances in the outer isophotes of all three host galaxies suggests that these galaxies had indeed suffered major gas-rich mergers within the past $10^8$\,yr. In conclusion, we propose that in the majority of galactic mergers, torques from gas accretion align the spins of supermassive black holes and their orbital axis with large-scale gas disks. This scenario helps explain the ubiquity of black holes in galaxies despite the potentially large kicks from gravitational radiation recoil. Further observations, particularly of galaxy mergers that do not involve significant amounts of gas, will test our predictions and may point to a class of large galaxies without central black holes. \acknowledgments We thank Doug Hamilton and Eve Ostriker for insightful discussions. TB thanks the UMCP-Astronomy Center for Theory and Computation Prize Fellowship program for support. CSR and MCM gratefully acknowledge support from the National Science Foundation under grants AST0205990 (CSR) and AST0607428 (CSR and MCM).
\section*{Introduction} Let $C$ be a nonempty convex set in a (real) normed linear space $X$. A function $f\colon C\to\mathbb{R}$ is called {\em d.c.}\ (or ``delta-convex'') if it can be represented as a difference of two continuous convex functions on $C$. An extension of this notion, the notion of a {\em d.c.\ mapping} $F\colon C\to Y$ (see Definition~\ref{D:dc}) where $Y$ is a normed linear space, was introduced in \cite{VeZa1} and studied in \cite{VeZa1}, \cite{DuVeZa}, \cite{VeZa2} and some other papers by the authors. The present paper concerns the following natural questions. \begin{enumerate} \item[(Q1)] When is it possible to extend a d.c.\ function (or a d.c.\ mapping) on $C$ to a d.c.\ function (or a d.c.\ mapping) on the whole $X$? \item[(Q2)] When is it possible to extend a continuous convex function on a closed subspace $Y$ of $X$ to a continuous convex function on $X$? \end{enumerate} In Section 2, we show how results of \cite{VeZa2} on compositions of d.c.\ functions and mappings imply positive results concerning (Q1). For instance, Corollary~\ref{elpecka}(a) reads as follows. {\em Let $X$ be a (subspace of some) $L_p(\mu)$ space with $1<p\le2$. Let $C\subset X$ be a convex set with a nonempty interior. Then each continuous convex function $f$ on $C$, which is Lipschitz on every bounded subset of $\mathrm{int}\,C$, admits a d.c.\ extension to the whole $X$.} \noindent (Note that only the case of $C$ unbounded is interesting; cf.~Lemma~\ref{F:1}(c).) The needed results from \cite{VeZa2}, together with some definitions and auxiliary facts, are presented in Section~1 (Preliminaries). Section 3 contains two counterexamples. The first one (Example~\ref{pasovec}) shows that, in the above mentioned Corollary~\ref{elpecka}(a), we cannot conclude that $f$ admits a continuous {\em convex} extension (even for $X= \mathbb{R}^2$). The second counterexample (Example~\ref{koulovec}) shows that, in the above mentioned Corollary~\ref{elpecka}(a), it is not possible to relax the assumption that $f$ is Lipschitz on bounded sets by assuming that $f$ is only locally Lipschitz on $C$. In the last Section~4, we consider the question (Q2) of extendability of continuous convex functions from a closed subspace $Y$ to the whole $X$. The authors of~\cite{BMV} obtained a necessary and sufficient condition on $Y$ in terms of nets in $Y^*$ and, using Rosenthal's extension theorem, they proved the following interesting corollary (\cite[Corollary~4.10]{BMV}). {\em If $X$ is a Banach space and $X/Y$ is separable, then each continuous convex function on $Y$ admits a continuous convex extension to $X$.} \noindent Using methods from \cite{H} and \cite{VeZa2}, we give a necessary and sufficient condition on $Y$ of a different type in Theorem~\ref{univ}. As an application, we present an elementary alternative proof of the above mentioned \cite[Corollary~4.10]{BMV}, which works also for noncomplete $X$. \section{Preliminaries}\label{S:prelim} We consider only normed linear spaces over the reals $\mathbb{R}$. For a normed linear space $X$ we use the following fairly standard notations: $B_X$ denotes the closed unit ball; $U(c,r)$ is the open ball centered in $c$ with radius $r$; $[x,y]$ is the closed segment $\mathrm{conv}\{x,y\}$ (the meaning of the symbols $(x,y)$ and $(x,y]=[y,x)$ is clear). By definition, the distance of a set from the empty set $\emptyset$ is $\infty$, and the restriction of a mapping to $\emptyset$ has all properties like continuity, Lipschitz property, boundedness and so on. 
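For orientation, we recall a standard example of a d.c.\ function (it is not needed in what follows; cf.\ also the $\mathcal{C}^{1,1}$ statement in Corollary~\ref{elpecka}(b)): if $f\colon\mathbb{R}^n\to\mathbb{R}$ is of class $\mathcal{C}^{1,1}$, i.e.\ $\nabla f$ is $K$-Lipschitz with respect to the Euclidean norm, then
\[
f=\Bigl(f+\tfrac{K}{2}\|\cdot\|^2\Bigr)-\tfrac{K}{2}\|\cdot\|^2
\]
exhibits $f$ as a difference of two continuous convex functions, since the gradient of $f+\tfrac{K}{2}\|\cdot\|^2$ is monotone.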
We will frequently use also the following less standard notation. \begin{notation} Let $A,B,A_n,B_n$ ($n\in\mathbb{N}$) be subsets of a normed linear space $X$. We shall write: \begin{itemize} \item $A\subset\subset B$ whenever there exists $\epsilon>0$ such that $A+\epsilon B_X\subset B$; \item $A_n\nearrow A$ whenever $A_n\subset A_{n+1}$ for each $n\in\mathbb{N}$, and $\bigcup_{n\in\mathbb{N}}A_n=A$; \item $A_n\nearrow\!\!\!\nearrow A$ whenever $A_n\subset\subset A_{n+1}$ for each $n\in\mathbb{N}$, and $\bigcup_{n\in\mathbb{N}}A_n=A$. \end{itemize} \end{notation} We shall use the following simple facts about convex sets and functions. \begin{lemma}[{\cite[Lemma~2.3]{VeZa2}}]\label{L:dn} Let $C\subset X$ be nonempty, open and convex. Let $\{C_n\}$ be a sequence of convex sets with nonempty interiors, such that $C_n\nearrow C$. Then there exists a sequence $\{D_n\}$ of nonempty, bounded, open, convex sets such that $D_n\nearrow\!\!\!\nearrow C$, and $D_n\subset\subset C_n$ for each $n$. \end{lemma} \begin{lemma}[{\cite[Fact~1.6]{VeZa2}}]\label{F:1} Let $C\subset X$ be a nonempty convex set, $f\colon C\to\mathbb{R}$ be a convex function. \begin{enumerate} \item[(a)] If $C$ is open and bounded and $f$ is continuous, then $f$ is bounded below on $C$. \item[(b)] If $f$ is bounded on $C$ then $f$ is Lipschitz on each $D\subset\subset C$. \item[(c)] If $f$ is Lipschitz then it admits a Lipschitz convex extension to $X$. \end{enumerate} \end{lemma} \begin{lemma}\label{K} Let $f$ be a continuous convex function on an open convex subset $C$ of a normed linear space $X$. Then there exists a sequence $\{D_n\}$ of nonempty bounded open convex sets such that $D_n\nearrow C$ and $f$ is Lipschitz (and hence bounded) on each $D_n$. \end{lemma} \begin{proof} Fix $x_0\in C$ and consider the nonempty open convex sets $C_n:=\{x\in C: f(x)<f(x_0)+n\}$. By Lemma~\ref{L:dn}, there exist nonempty bounded open convex sets $D_n$ such that $D_n\subset\subset C_n$ and $D_n\nearrow\!\!\!\nearrow C$. Using Lemma~\ref{F:1}(a), it is easy to see that $f$ is bounded on each $D_{n+1}$. Hence, by Lemma~\ref{F:1}(b), $f$ is Lipschitz on each $D_n$. \end{proof} Let us recall the following easy known fact (see, e.g., \cite[Theorem~1.25]{valentine}): if $A,B$ are convex sets in a vector space then \begin{equation}\label{Val} \mathrm{conv}(A\cup B)=\bigcup_{0\le t\le1}[(1-t)A+tB]=\bigcup_{a\in A,\, b\in B}[a,b]\,. \end{equation} \begin{lemma}\label{conv} Let $Y$ be a closed subspace of a normed linear space $X$, $C\subset Y$ and $A\subset X$ convex sets. \begin{enumerate} \item[(a)] $\mathrm{conv}(A\cup C)\cap Y=\mathrm{conv}[(Y\cap A)\cup C]$. \item[(b)] If $\mathrm{int}\,A\ne\emptyset$ and $A$ is dense in $X$, then $A=X$. \item[(c)] If $C$ is open in $Y$, $A$ is open in $X$ and $A\cap C\ne\emptyset$, then $\mathrm{conv}(A\cup C)$ is open. \end{enumerate} \end{lemma} \begin{proof} (a) The inclusion ``$\supset$'' is obvious. To prove the other inclusion, consider an arbitrary $y\in Y\cap\mathrm{conv}(A\cup C)$. Then $y\in[a,c]$ for some $a\in A$, $c\in C$. If $y\ne c$ then necessarily $a\in Y$ (since $y,c\in Y$) and hence $y\in \mathrm{conv}[(Y\cap A)\cup C]$; and the last formula is trivial for $y=c$.\\ (b) follows, e.g., from the well-known fact that $\mathrm{int}(\overline{A})=\mathrm{int}\,A$ whenever $\mathrm{int}\,A$ is nonempty.\\ (c) Fix an arbitrary $a_0\in A\cap C$. For each $x\in C$, there obviously exists $y\in C\setminus\{a_0\}$ such that $x\in(y,a_0]$; consequently, there exists $t\in(0,1]$ with $x\in(1-t)C+tA$. 
Now we are done, since $$\mathrm{conv}(A\cup C)= C\cup\bigcup_{0<t\le1}[(1-t)C+tA]= \bigcup_{0<t\le1}[(1-t)C+tA]$$ and the members of the last union are open. \end{proof} In the rest of this section, we collect some facts about d.c.\ functions and mappings, which we will need in the next sections. Let $C$ be a convex set in a normed linear space $X$. Recall that a function $f\colon C\to\mathbb{R}$ is {\em d.c.}\ (or ``delta-convex'') if it can be represented as a difference of two continuous convex functions on $C$. The following generalization to the case of vector-valued mappings on $C$ was studied in \cite{VeZa1} for open $C$, and in \cite{VeZa2} for a general (convex) $C$. \begin{definition}\label{D:dc} Let $X,Y$ be normed linear spaces, $C\subset X$ be a convex set, and $F\colon C\to Y$ be a continuous mapping. We say that $F$ is {\em d.c.}\ (or ``delta-convex'') if there exists a continuous (necessarily convex) function $f\colon C\to\mathbb{R}$ such that $y^*\circ F+f$ is convex on $C$ whenever $y^*\in Y^*$, $\|y^*\|\le1$. In this case we say that $f$ controls $F$, or that $f$ is a {\em control function} for $F$. \end{definition} \begin{remark}\label{R:dc} It is easy to see (cf.~\cite{VeZa1}) that: \begin{enumerate} \item[(a)] a mapping $F=(F_1,\ldots,F_m)\colon C\to\mathbb{R}^m$ is d.c.\ if and only if each of its components $F_k$ is a d.c.\ function; \item[(b)] the notion of delta-convexity does not depend on the choice of equivalent norms on $X$ and $Y$. \end{enumerate} \end{remark} \begin{lemma}[{\cite[Lemma~5.1]{VeZa2}}]\label{ndc} Let $X,Y$ be normed linear spaces, let $A\subset X$ be an open convex set with $0\in A$, and let $F\colon A\to Y$ be a mapping. Suppose there exist $\lambda\in(0,1)$ and a sequence of balls $B(x_n,\delta_n)\subset A$ such that $\{x_n\}\subset\lambda A$, $\delta_n\to0$ and $F$ is unbounded on each $B(x_n,\delta_n)$. Then $F$ is not d.c.\ on $A$. \end{lemma} The following result was proved in \cite[Theorem~18(i)]{DuVeZa} for $X,Y$ Banach spaces, but the proof therein works for normed linear spaces as well. \begin{proposition}\label{P:lip} Let $X,Y$ be normed linear spaces, $C\subset X$ be a bounded open convex set, and $F\colon C\to Y$ be a d.c.\ mapping with a Lipschitz control function. Then $F$ is Lipschitz. \end{proposition} \begin{lemma}[{\cite[Lemma~2.1]{VeZa2}}]\label{L:hartman} Let $X,Y$ be normed linear spaces, $C\subset X$ a nonempty convex set, and $F\colon C\to Y$ a mapping. Let $\emptyset \neq D_n \subset C$ ($n\in\mathbb{N}$) be convex sets such that $D_n\nearrow C$ and, for each $n$, $\mathrm{dist}(D_n,C\setminus D_{n+1})>0$, $D_n$ is relatively open in $C$, and $F|_{D_n}$ is d.c.\ with a control function $\gamma_n\colon D_n\to \mathbb{R}$ which is either bounded or Lipschitz. Then $F$ is d.c.\ on $C$. \end{lemma} An important ingredient of the present paper is application of the following two results on compositions of d.c.\ mappings. \begin{proposition}[\cite{VeZa1},\cite{VeZa2}]\label{P:veza} Let $X,Y,Z$ be normed linear spaces, $A\subset X$ and $B\subset Y$ convex sets, and $F\colon A\to B$ and $G\colon B\to Z$ d.c.\ mappings. If $G$ is Lipschitz and has a Lipschitz control function, then $G\circ F$ is d.c.\ on $A$. \end{proposition} \begin{lemma}[{\cite[Lemma~3.2(ii)]{VeZa2}}]\label{P:main} Let $U,V,W$ be normed linear spaces, let $A\subset U$ be an open convex set and $B\subset V$ a convex set, and let $\Phi\colon A\to B$ and $\Psi\colon B\to W$ be mappings. 
Suppose that $\Phi$ is d.c.\ and there exist sequences of convex sets $A_n \subset A$, $B_n \subset B$ such that $\mathrm{int}\,A_n \neq \emptyset$, $A_n\nearrow A$, $\Phi(A_n) \subset B_n$, and $\Psi|_{B_n}$ is Lipschitz and d.c.\ with a Lipschitz control function. Then $\Psi\circ \Phi$ is d.c.\ on $A$. \end{lemma} Let us recall that a normed linear space $X$ is said to have {\em modulus of convexity of power type 2} if there exists $a>0$ such that $\delta_X(\epsilon)\ge a\epsilon^2$ for each $\epsilon\in(0,2]$ (where $\delta_X$ denotes the classical modulus of convexity of $X$; see e.g.\ \cite[p.409]{BeLin} for the definition). \begin{proposition}[{\cite[Corollary~3.9(a)]{VeZa2}}]\label{bilinear} Let $Y,V,X,Z$ be normed linear spa\-ces and let both $Y$ and $V$ admit renormings with modulus of convexity of power type 2. Let $B\colon Y\times V\to Z$ be a continuous bilinear mapping, $C\subset X$ an open convex set, and let $F\colon C\to Y$ and $G\colon C\to V$ be d.c.\ mappings. Then the mapping $B\circ(F,G)\colon x\mapsto B(F(x),G(x))$ is d.c.\ on $C$. \end{proposition} \section{Extensions of d.c.\ mappings} Let $X,Y$ be normed linear spaces, $C\subset X$ be a convex set, and $F\colon C\to Y$ be a d.c.\ mapping. In the present section, we are interested in existence of a d.c.\ extension of $F$ to the whole $X$ or at least to the closure of $C$. Let us start with a simple observation. \begin{observation}\label{O:exten} Let $X,Y,C,F$ be as in the beginning of this section, and $f\colon C\to\mathbb{R}$ a control function of $F$. \begin{enumerate} \item[(a)] If $Y$ is finite-dimensional, and both $F,f$ are Lipschitz on $C$, then $F$ admits a d.c.\ extension to $X$. \item[(b)] If both $F,f$ admit continuous extensions $\tilde{F},\tilde{f}$ to a convex set $D$ such that $C\subset D\subset\overline{C}$, then $\tilde F$ is d.c.\ with the control function $\tilde f$. \end{enumerate} \end{observation} \begin{proof} By Remark~\ref{R:dc}, it suffices to prove {\it(a)} for $Y=\mathbb{R}$. In this case, $F=(F+f)-f$ is a difference of two Lipschitz convex functions on $C$. By Lemma~\ref{F:1}(c), $F$ can be extended to a difference of two Lipschitz convex functions on $X$. The assertion {\it(b)} follows by a simple limit argument. \end{proof} \begin{proposition}\label{L:uzaver} Let $X$ be a normed linear space, $Y$ a Banach space, $C\subset X$ a convex set with a nonempty interior and $F\colon C\to Y$ a d.c.\ mapping. Suppose there exists a nondecreasing sequence $\{A_n\}$ of open convex sets in $X$ such that $\overline{C}\subset \bigcup A_n$ and, for each $n$, $F|_{(\mathrm{int}\,C)\cap A_n}$ has a Lipschitz control function. Then $F$ admits a d.c.\ extension to $\overline{C}$. \end{proposition} \begin{proof} Since $A_n\nearrow A:=\bigcup_k A_k$, Lemma~\ref{L:dn} allows us to suppose that the sets $A_n$ are also bounded and satisfy $A_n\nearrow\!\!\!\nearrow A$. By Proposition~\ref{P:lip}, $F|_{(\mathrm{int}\,C)\cap A_n}$ is Lipschitz for each $n$. Consequently, since $\overline{C}\cap A_n\subset\overline{(\mathrm{int}\,C)\cap A_n}\;$, $F|_{(\mathrm{int}\,C)\cap A_n}$ has a unique Lipschitz extension $F_n^*\colon \overline{C}\cap A_n\to Y$. Since, for $n_2>n_1$, $F^*_{n_2}$ obviously extends $F^*_{n_1}$, there exists a unique continuous $F^*\colon \overline{C}\to Y$ which extends each $F^*_n$. 
Since, for each $n$, any Lipschitz control function for $F|_{(\mathrm{int}\,C)\cap A_n}$ has a Lipschitz extension to $D_n:=\overline{C}\cap A_n$, Observation~\ref{O:exten}(b) gives that $F^*|_{D_n}$ has a Lipschitz control function. Moreover, $\bigcup D_n=\overline{C}$ and $\mathrm{dist}(D_n,\overline{C}\setminus D_{n+1})= \mathrm{dist}(D_n,\overline{C}\setminus A_{n+1})\ge \mathrm{dist}(A_n,X\setminus A_{n+1})>0$ for each $n$. Applying Lemma~\ref{L:hartman} with $D:=\overline{C}$, we obtain that $F^*$ is d.c.\ on $\overline{C}$. \end{proof} Now we will prove the main result of the present section. For the definition of modulus of convexity of power type 2 see the text before Proposition \ref{bilinear}. \begin{theorem}\label{T:rozsir} Let $X,Y$ be normed linear spaces, $C\subset X$ a convex set with a nonempty interior and $F\colon C\to Y$ a d.c.\ mapping. Let $A\supset C$ be an open convex set in $X$. Suppose that $X$ admits a renorming with modulus of convexity of power type 2, and either $C$ is closed or $Y$ is a Banach space. Then the following assertions are equivalent. \begin{enumerate} \item[(i)] $F$ admits a d.c.\ extension $\widehat{F}\colon A\to Y$. \item[(ii)] Some control function $f$ of $F$ admits a continuous convex extension $\widehat{f}\colon A\to \mathbb{R}$. \item[(iii)] There exists a nondecreasing sequence $\{D_n\}$ of open convex sets such that $A=\bigcup D_n$ and, for each $n$, $(\mathrm{int}\,C)\cap D_n\ne\emptyset$ and the restriction of $F$ to $(\mathrm{int}\,C)\cap D_n$ has a Lipschitz control function. \end{enumerate} \end{theorem} \begin{proof} The implication $(i)\Rightarrow(ii)$ is trivial, while $(ii)\Rightarrow(iii)$ follows immediately from Lemma~\ref{K} applied to $\hat f$. Let us prove $(iii)\Rightarrow(i)$. By translation we can suppose that $0\in (\mathrm{int}\,C)\cap D_1$. The sets $A_n:=D_n\cap U(0,n)$ ($n\in\mathbb{N}$) form a sequence of bounded open convex sets such that $A_n\nearrow A$, $0\in (\mathrm{int}\,C)\cap A_1$ and, for each $n$, \begin{equation}\label{contr} \text{ $F|_{(\mathrm{int}\,C)\cap A_n}$ has a Lipschitz control function $f_n$. } \end{equation} First we will extend $F$ to a mapping $F^*\colon \overline{C}\cap A\to Y$. If $C$ is closed, then $\overline{C}\cap A=C$; so we put $F^*=F$. If $C$ is not closed, $Y$ is a Banach space by the assumptions. Proposition~\ref{P:lip} and \eqref{contr} imply that $F$ is Lipschitz on $(\mathrm{int}\,C)\cap A_n$. Note that $\overline{C}\cap A_n\subset\overline{(\mathrm{int}\,C)\cap A_n}\,$; thus $F|_{(\mathrm{int}\,C)\cap A_n}$ has a unique Lipschitz extension $F_n^*\colon \overline{C}\cap A_n\to Y$. Since, for $n_2>n_1$, $F^*_{n_2}$ obviously extends $F^*_{n_1}$, there exists a unique continuous $F^*\colon \overline{C}\cap A\to Y$ which extends each $F^*_n$. In both cases ($C$ closed or not), $f_n$ has a Lipschitz extension to $B_n:=\overline{C}\cap A_n$. By Observation~\ref{O:exten}(b), \begin{equation}\label{contr2} \text{ $F^*|_{B_n}$ is Lipschitz and d.c.\ with a Lipschitz control function. } \end{equation} Denote by $\mu$ the Minkowski functional of $C$, i.e. \[ \mu(x)=\inf\{t>0: x\in tC\}. \] It is well known that $\mu$ is a Lipschitz convex function on $X$ (recall that $0\in\mathrm{int}\,C$), and $\mu(x)\le1$ if{f} $x\in \overline{C}$.
Consider the ``radial projection'' $P$ onto $\overline{C}$, given by \[ P(x)=\begin{cases} x &\text{if $x\in \overline{C}$;}\\ \frac{x}{\mu(x)} &\text{if $x\in X\setminus \overline{C}$.} \end{cases} \] The function $x\mapsto\max\{1,\mu(x)\}$ is convex and Lipschitz, and its values belong to $[1,\infty)$. The function $t\mapsto\frac{1}{t}$ is convex and Lipschitz on $[1,\infty)$; consequently, by Proposition~\ref{P:veza}, the composed function $x\mapsto\frac{1}{\max\{1,\mu(x)\}}$ is d.c.\ on $X$. Moreover, the mapping $B\colon\mathbb{R}\times X\to X$, given by \[ B(t,x)=tx\,, \] is a continuous bilinear mapping. Since $P(x)=B\left(\frac{1}{\max\{1,\mu(x)\}},x\right)$, $x\in X$, Proposition~\ref{bilinear} implies that $P$ is d.c.\ on $X$. Let us show that $\hat{F}:=F^*\circ(P|_A)$ is a d.c.\ extension of $F$ to $A$. The fact that $\hat{F}$ extends $F$ is obvious. To prove that $\hat{F}$ is d.c., it is sufficient to apply Lemma~\ref{P:main} with $B:=\overline{C}\cap A$, $\Phi:=P|_A$, $\Psi:=F^*$ (and $A,A_n,B_n$ as above). Indeed, the assumptions of that lemma are satisfied since $\Phi(A_n)=P(A_n)\subset \overline{C}\cap A_n=B_n$ (note that $P(A_n)\subset A_n$ because $0\in A_1$) and \eqref{contr2} holds. \end{proof} \begin{remark} \begin{enumerate} \item[(a)] We do not know whether the renorming assumption on $X$ in Theorem~\ref{T:rozsir} can be omitted or essentially weakened. \item[(b)] The condition (ii) in Theorem~\ref{T:rozsir} can be substituted by the following formally weaker condition: \begin{enumerate} \item[(ii')] {\em some control function of $F$ can be extended to a d.c.\ function on $A$.} \end{enumerate} Indeed, if $f_1$ and $f_2$ are continuous convex functions on $A$ such that $f_1-f_2$ controls $F$ on $C$, then also the sum $f_1 +f_2$ controls $F$ on $C$. \end{enumerate} \end{remark} \begin{corollary}\label{C:rozsir} Let $X,Y$ be normed linear spaces, $C\subset X$ be a convex set with a nonempty interior, and $F\colon C\to Y$ be a d.c.\ mapping. Suppose that the restriction of $F$ to each bounded open convex subset of $C$ has a Lipschitz control function. \begin{enumerate} \item[(a)] If $Y$ is a Banach space, then $F$ admits a d.c.\ extension to $\overline{C}$. \item[(b)] If $X$ admits a renorming with modulus of convexity of power type 2, and either $C$ is closed or $Y$ is a Banach space, then $F$ admits a d.c.\ extension to the whole $X$. \end{enumerate} \end{corollary} \begin{proof} Consider the sets $A_n:=U(0,n)$ ($n\in\mathbb{N}$) and apply Proposition~\ref{L:uzaver} to get (a), and Theorem~\ref{T:rozsir} to get (b). \end{proof} \begin{corollary}\label{elpecka} Let $X$ be a (subspace of some) $L_p(\mu)$ space with $1<p\le2$. Let $C\subset X$ be a convex set with a nonempty interior. \begin{enumerate} \item[(a)] Each continuous convex function on $C$, which is Lipschitz on every bounded subset of $\mathrm{int}\,C$, admits a d.c.\ extension to the whole $X$. \item[(b)] Each Banach space-valued ${\mathcal C}^{1,1}$ mapping on $C$ admits a d.c.\ extension to the whole $X$. \end{enumerate} \end{corollary} \begin{proof} It is known (see e.g.\ \cite[p.410]{BeLin}) that $X$, in the $L_p$-norm, has modulus of convexity of power type 2. Therefore, \cite[Proposition~1.11]{VeZa2} easily implies that each Banach space-valued ${\mathcal C}^{1,1}$ mapping on any open convex subset of $X$ is d.c.\ with a control function that is Lipschitz on bounded sets. Now, both (a) and (b) follow from Corollary~\ref{C:rozsir}(b).
\end{proof} For extensions from closed finite-dimensional convex subsets, we have the following simple corollary. Recall that a {\em finite-dimensional set} (in a vector space) is a set whose linear span is finite-dimensional. \begin{corollary} Let $X,Y$ be normed linear spaces, and $F\colon C\to Y$ be a d.c.\ mapping, where $C\subset X$ is a finite-dimensional closed convex set. Then the following assertions are equivalent: \begin{enumerate} \item[(i)] $F$ admits a d.c.\ extension $\widehat{F}\colon X\to Y$; \item[(ii)] $F$ has a locally Lipschitz control function $f\colon C\to\mathbb{R}$. \item[(iii)]For each $x \in C$, there exists $r_x >0$ such that the restriction of $F$ to $C \cap U(x,r_x)$ has a Lipschitz control function. \end{enumerate} \end{corollary} \begin{proof} The implication $(i)\Rightarrow(ii)$ is obvious since each continuous convex function on $X$ is locally Lipschitz. The implication $(ii)\Rightarrow(iii)$ is trivial. Let $(iii)$ hold. Suppose that $0\in C$ and denote $X_0:=\mathrm{span}\,C$ ($=\mathrm{aff}\,C$). Then $C$, being finite-dimensional, has a nonempty interior in $X_0$. Let $B \subset C$ be a bounded convex set which is open in $X_0$. For each $x \in \overline{B} \cap C$ choose $r_x$ by $(iii)$ and a Lipschitz convex function $\varphi_x$ on $C \cap U(x,r_x)$ which controls $F$ on $C \cap U(x,r_x)$. Since $\overline{B} \cap C$ is compact, we can choose $x_1, \dots,x_n$ in $\overline{B} \cap C$ such that $\overline{B} \cap C \subset \bigcup_{i=1}^n \, U(x_i, r_{x_i})$. Extend $\varphi_{x_i}$ to a Lipschitz convex function $\psi_i$ on $X_0$ (cf. Lemma~\ref{F:1}(c)) and put $\psi := \sum_{i=1}^n \psi_i$. Then clearly $\psi|_{B}$ is a Lipschitz control function of $F|_B$. By Corollary~\ref{C:rozsir}(b), there exists a d.c.\ extension $F_0\colon X_0\to Y$ of $F$. Let $\pi\colon X\to X_0$ be a continuous linear projection onto $X_0$. Then the mapping $\widehat{F}:=F_0\circ\pi$ is a d.c.\ extension of $F$ (cf. \cite[Lemma 1.5(b)]{VeZa1}). Thus (i) holds and the proof is complete. \end{proof} \section{Counterexamples} \begin{example}\label{pasovec} There exists a continuous convex function $f$ on the strip $P:= \mathbb{R} \times [-1,0]$ such that \begin{enumerate} \item $f$ has a d.c.\ extension to $\mathbb{R}^2$, and \item $f$ has no convex extension to $\mathbb{R}^2$. \end{enumerate} \end{example} \begin{proof} For $(x,y) \in P$, we set $$ f(x,y) := \sup \{a_t(x,y):\ t \in \mathbb{R}\},\ \text{where}\ \ a_t(x,y) := t^2 + 2t(x-t) + t^2 y.$$ Observe that \begin{equation}\label{supp} a_t(\cdot,0) \ \text{is the support affine function to the function}\ \ p(x):=x^2\ \text{at}\ t, \end{equation} \begin{equation}\label{nez} a_t(t,y) \geq 0 \ \ \text{for}\ \ y \in [-1,0],\ \text{and} \end{equation} \begin{equation}\label{partder} \frac{\partial a_t}{\partial x} (z) = 2t,\ \ \ \frac{\partial a_t}{\partial y} (z) = t^2\ \ (z\in \mathbb{R}^2). \end{equation} Now fix $\tau \in \mathbb{R}$ and consider a $t \in \mathbb{R}$. Then \eqref{supp} implies $a_t(\tau,0) \leq p(\tau) = \tau^2$, so \eqref{partder} gives $a_t(\tau,y) \leq \tau^2$ for $y \in [-1,0]$. Consequently, $f(\tau,0) = a_{\tau}(\tau,0)= \tau^2$ and $f(\tau,y) \leq \tau^2 < \infty$ for $y \in [-1,0]$. Thus $f$ is a finite convex function on $P$. Moreover $f\ge0$ on $P$. Note that $a_t(\tau,0) = -t^2 + 2t \tau \leq 0$ whenever $|t| \geq 2 |\tau|$. If $z=(z_1,z_2)\in(\tau-1,\tau +1) \times [-1,0]$ and $|t|\ge2(|\tau|+1)$, then $a_t(z_1,0)\le0$ since $|t|\ge2|z_1|$. 
For such $z$ we have $a_t(z)\le0\le f(z)$ because $a_t(z_1,\cdot)$ is nondecreasing by \eqref{partder}. It follows that \begin{equation}\label{supmen} f(z) = \sup \{a_t(z):\ |t| \leq 2(|\tau|+1) \} \ \ \text{for}\ \ z \in (\tau-1,\tau +1) \times [-1,0]. \end{equation} Using \eqref{supmen} and \eqref{partder}, we easily obtain that $f$ is locally Lipschitz on $P$; so it is Lipschitz on each bounded subset of $P$. Consequently, (i) follows from Corollary~\ref{C:rozsir}. Now, suppose that (ii) is false, that is, there exists a convex extension $\tilde{f}\colon\mathbb{R}^2\to\mathbb{R}$ of $f$. Since $\tilde f(\tau,0) = f(\tau,0)= \tau^2$, we have $\frac{\partial \tilde f}{\partial x} (\tau,0) = 2 \tau$ for each $\tau \in \mathbb{R}$. Now we will prove that, for each $\tau >0$, \begin{equation}\label{smder} d^+_{(0,-1)} \tilde f (\tau,0) = d^+_{(0,-1)} a_{\tau} (\tau,0) = -\tau^2 \end{equation} where $d^+_v g(z)$ denotes the one-sided derivative of $g$ at $z$ in the direction $v$. To this end, choose an arbitrary $\varepsilon >0$ and find $0<\delta< \sqrt{\varepsilon}$ such that $|t^2 - \tau^2| < \varepsilon$ whenever $|t-\tau|< \delta$. If $|t-\tau| \geq \delta$ and $y \in (-\delta^2/\tau^2,0]$, then \begin{equation*} \begin{split} a_{\tau}(\tau,y) - a_t(\tau,y) &= \tau^2 + \tau^2y + t^2 - 2t\tau -t^2y \\ & = (t-\tau)^2 + (\tau^2-t^2)y \geq \delta^2 + \tau^2 y > 0. \end{split} \end{equation*} If $|t-\tau|< \delta$ and $y\le0$, then $$ a_{\tau}(\tau,y) - a_t(\tau,y) = (t-\tau)^2 + (\tau^2-t^2)y \geq \varepsilon y.$$ Therefore, $\tilde f(\tau,y) \geq a_{\tau}(\tau,y) \geq \tilde f(\tau,y) + \varepsilon y$ whenever $y \in (-\delta^2/\tau^2,0]$. Since $\varepsilon>0$ was arbitrary, we easily obtain \eqref{smder}. Since $\tilde f$ is convex, the function $v \mapsto d^+_v \tilde f (\tau,0)$ is positively homogenous and subadditive. Therefore, for $\tau>0$, we have $$ d^+_{(\tau,-3)} \tilde f (\tau,0) \leq d^+_{(0,-3)} \tilde f (\tau,0) + d^+_{(\tau,0)} \tilde f (\tau,0) = -3\tau^2 + 2 \tau^2 = - \tau^2,$$ and consequently $d^+_{(-\tau,3)} \tilde f (\tau,0) \geq \tau^2$. By convexity of $\tilde f$, $$ \tilde f (0,3) \geq \tilde f(\tau,0) + d^+_{(-\tau,3)} \tilde f (\tau,0) \geq \tau^2 + \tau ^2 = 2\tau^2.$$ Since $\tau >0$ was arbitrary, $\tilde f (0,3) = \infty$, a contradiction. \end{proof} \begin{example}\label{koulovec} In $X=\ell_2$, there exist a closed convex set $C\subset U(0,1)$ with a nonempty interior and a continuous convex function $f\colon C\to\mathbb{R}$ such that: \begin{enumerate} \item[(a)] $f$ has a continuous convex extension to $U(0,1)$, in particular, $f$ is locally Lipschitz on $C$ (even there exists a nondecreasing sequence of open convex sets $A_n\nearrow U(0,1)$ such that $f$ is Lipschitz on each $A_n\cap C$); \item[(b)] $f$ has no d.c.\ extension to $U(0,r)$ whenever $r>1$. \end{enumerate} \end{example} \begin{proof} Let $e_n$ be the $n$-th vector of the standard basis of $X=\ell_2$. For $n,k\in\mathbb{N}$ with $n<k$, put $$\textstyle z_{n,k}=(1-\frac1n)e_n + h_n(1-\frac1k)e_k $$ where $h_n>0$ is such that $(1-\frac1n)^2+h_n^2=1$. Note that $\|z_{n,k}\|^2=(1-h_n^2)+h_n^2(1-h_k^2)=1-h_n^2 h_k^2$. Put $$\textstyle C:=\overline{\mathrm{conv}}\left[ \frac12 B_{X}\cup\{z_{n,k}:\; n,k\in\mathbb{N},\;n<k\} \right]. $$ Obviously, $C$ is a closed convex set with a nonempty interior and $C \subset B_X$. We claim that $C\subset U(0,1)$. If this is not the case, there exists $x\in C$ with $\|x\|=1$. 
Thus $\sup\langle x,C\rangle=\langle x,x\rangle=1.$ On the other hand, there exists $n_0\in\mathbb{N}$ such that $|\langle x,e_n\rangle|<\frac13$ and $h_n<\frac13$ whenever $n>n_0$. Thus $|\langle x,z_{n,k}\rangle|\le\frac23$ for $k>n>n_0$. There exists $k_0>n_0$ such that $|\langle x,e_k\rangle|<\frac{1}{2n_0}$ whenever $k>k_0$. Hence, for $n\le n_0$ and $k>k_0$, we have $|\langle x,z_{n,k}\rangle|\le(1-\frac1n)+\frac{1}{2n_0}\le 1-\frac{1}{n_0}+\frac{1}{2n_0}=1-\frac{1}{2n_0}$. Since obviously $\sup\langle x,\frac12 B_{X}\rangle=\frac12$, we obtain \begin{align*} \sup\langle x,C\rangle&=\textstyle \max\bigl\{\frac12 , \sup\left\{\langle x,z_{n,k}\rangle:\; n,k\in\mathbb{N},\;n<k\right\}\bigr\}\\ &\le\textstyle \max\bigl[\{\frac23, 1-\frac{1}{2n_0}\} \cup \{\|z_{n,k}\|:\; n<k\le k_0,\;n\le n_0\}\bigr]<1. \end{align*} This contradiction proves our claim. The function $x\mapsto 1-\|x\|$ is positive, continuous and concave on $U(0,1)$. Since the function $t\mapsto\frac1t$ is convex and decreasing on $(0,\infty)$, the composed function $g(x)=\frac{1}{1-\|x\|}$ is convex continuous, and hence locally Lipschitz, on $U(0,1)$. Thus $f:=g|_C$ satisfies (a) by Lemma~\ref{K}. Let us show (b). By Lemma~\ref{ndc}, it suffices to prove that $g$ is unbounded on subsets of $C$ of arbitrarily small diameter. Fix $n\in\mathbb{N}$. For any two distinct indices $k,l>n$, we have $$\textstyle \|z_{n,k}-z_{n,l}\|^2= h_n^2\left[(1-\frac1k)^2+(1-\frac1l)^2\right]\le 2h_n^2 $$ which implies $\mathrm{diam}\,\{z_{n,k}: k>n\}\le\sqrt{2}\,h_n$. This completes the proof since $g(z_{n,k})\to\infty$ as $k\to\infty$. \end{proof} \section{Extensions of convex functions from subspaces} Let $Y$ be a closed subspace of a normed linear space $X$, and $f\colon Y\to \mathbb{R}$ a continuous convex function. The present section concerns the problem of existence of a continuous convex extension $\hat{f}\colon X\to\mathbb{R}$ of $f$. An example of nonexistence of $\hat f$ was given in \cite[Example~4.2]{BMV}. On the other hand, it is easy and well known that such $\hat f$ exists if either $Y$ is complemented in $X$ or $f$ is Lipschitz (see, e.g., \cite{BMV}). Borwein and Vanderwerff proved in \cite[Fact, p.1801]{BV} that $\hat f$ exists whenever $f$ is bounded on each bounded subset of $Y$; however, this sufficient condition is not necessary (see Remark~\ref{nenutna}). The following theorem contains a necessary and sufficient condition $(iv)$ of the same type, but the proof is more difficult and uses different methods. Our main new observation is that a modification of Hartman's construction from \cite{H} gives the implication $(iv)\Rightarrow(ii)$; and we use also the implication $(ii)\Rightarrow(i)$ already proved in \cite{BMV}. \begin{theorem}\label{indiv} Let $X$ be a normed linear space, $Y \subset X$ its closed subspace, and $f\colon Y \to \mathbb{R}$ a continuous convex function. Then the following statements are equivalent. \begin{enumerate} \item The function $f$ admits a continuous convex extension to $X$. \item There exists a continuous convex function $g\colon X \to \mathbb{R}$ such that $f \leq g|_{Y}$. \item $f$ admits a d.c.\ extension to $X$. \item There exists a sequence $\{C_n\}$ of nonempty open convex subsets of $X$ such that $C_n \nearrow X$ and $f$ is bounded on each set $C_n \cap Y\ (n \in \mathbb{N})$. \item There exists a sequence $\{B_n\}$ of nonempty open convex subsets of $X$ such that $B_n \nearrow X$ and $f$ is Lipschitz on each set $B_n \cap Y\ (n \in \mathbb{N})$. 
\end{enumerate} \end{theorem} \begin{proof} $(i)\Rightarrow(ii)$ is obvious, while $(ii)\Rightarrow(i)$ was proved in \cite[Lemma~4.7]{BMV} for $X$ a Banach space. However, the proof therein works also in the normed linear case (note that the convex extension $\tilde{f}$ from \cite[Lemma~4.7]{BMV} is continuous since it is locally upper bounded). $(i)\Rightarrow(iii)$ is trivial. $(iii)\Rightarrow(ii)$. If (iii) holds, there exist continuous convex functions $u,v$ on $X$ such that $u(y)-v(y)=f(y)$ for $y\in Y$. Choose a continuous affine function $a$ on $X$ such that $a\le v$. Then $$ u(y)-a(y)=f(y)+(v(y)-a(y))\ge f(y),\quad y\in Y; $$ so we can put $g:=u-a$. $(i)\Rightarrow(v)$ follows immediately applying Lemma~\ref{K} to a continuous convex extension $\hat{f}$ of $f$. $(v)\Rightarrow(iv)$. Clearly, it suffices to put $C_n=B_n\cap U(0,n)$ ($n\in\mathbb{N}$). It remains to prove $(iv) \Rightarrow (ii)$. Using Lemma~\ref{L:dn} and an obvious shift of indices, it is easy to find a sequence $\{D_n\}$ of bounded open convex subset of $X$ such that (for each $n$) $D_n \cap Y \neq\emptyset$, $f$ is bounded on $D_n\cap Y$, $d_n: = \mathrm{dist}(D_n, X \setminus D_{n+1})>0$, and $D_n \nearrow X$. Now we will construct inductively a sequence $(g_n)_{n\in \mathbb{N}}$ of functions on $X$ such that, for each $n \in \mathbb{N}$, \begin{enumerate} \item[(a)] $g_n$ is convex and Lipschitz; \item[(b)] $g_n = g_{n-1}$ \ on\ $D_{n-1}$ whenever $n>1$; \item[(c)] $g_n \geq f$ \ on \ $D_{n+1} \cap Y$. \end{enumerate} Set $M_n := \sup\{f(y):\ y \in D_n \cap Y\}$; by the assumptions $M_n < \infty$. Define $g_1(x) := M_2,\ x \in X$. Then the conditions (a), (b), (c) clearly hold for $n=1$. Now suppose that $k>1$ and we already have $g_1,\dots, g_{k-1}$ such that (a), (b), (c) hold for each $1 \leq n <k$. We can clearly choose $a \in \mathbb{R}$ such that $g_{k-1}(x) \geq a$ for each $x \in D_{k-1}$, and then $b > 0$ such that $a + b\, d_{k-1} \geq M_{k+1}$. Define $$ g_k(x) := \max \{g_{k-1}(x), a + b\, \mathrm{dist} (x, D_{k-1})\},\ \ \ x \in X.$$ We will show that the conditions (a), (b), (c) hold for $n=k$. The validity of (a) is obvious. If $x \in D_{k-1}$, then $g_k(x) = \max\{g_{k-1}(x),a\} = g_{k-1}(x)$; so (b) holds. Now consider an arbitrary $y \in D_{k+1} \cap Y$. If $y \in D_{k-1}$, using (b) for $n=k$ and (c) for $n=k-1$, we obtain $g_k(y) = g_{k-1}(y) \geq f(y)$. If $y \in D_k \setminus D_{k-1}$, using the definition of $g_k$ and (c) for $n=k-1$, we also obtain $g_k(y) \ge g_{k-1}(y) \geq f(y)$. If $y \in D_{k+1} \setminus D_k$, then $$ g_k(y) \geq a + b \, \mathrm{dist}(y, D_{k-1}) \geq a + b\, d_{k-1} \geq M_{k+1} \geq f(y).$$ Now, for each $x \in X$, the sequence $\{g_n(x)\}$ is constant for large $n$'s, hence $g(x):= \lim_{n\to \infty} g_n(x)$ is defined on $X$. Since $g = g_n$ on $D_n$ by (b), the conditions (a) and (c) easily imply that $g$ is a continuous convex function on $X$ such that $f \leq g|_{Y}$. \end{proof} \begin{remark}\label{nenutna} As already mentioned, (i) holds whenever \begin{enumerate} \item[$(*)$] $f$ is bounded on each bounded subset of $Y$ \end{enumerate} (indeed, (iv) holds with $C_n:=U(0,n)$). To see that $(*)$ is not necessary with $Y\ne X$, consider an arbitrary infinite dimensional Banach space $X$, a closed subspace $Y$ of finite codimension in $X$, and a continuous convex function $f$ on $Y$ which is unbounded on some bounded set (for its existence, see \cite{BFV}). 
\end{remark} \begin{theorem}\label{univ} Let $X$ be a normed linear space and $Y \subset X$ its closed subspace. Then the following statements are equivalent. \begin{enumerate} \item Each continuous convex function $f\colon Y\to\mathbb{R}$ admits a continuous convex extension to $X$. \item If $\{C_n\}$ is a sequence of open convex subsets of $Y$ such that $C_n\nearrow Y$, then there exists a sequence $\{D_n\}$ of open convex subsets of $X$ such that $D_n\nearrow X$ and $D_n\cap Y\subset C_n$. \item If $\{C_n\}$ is a sequence of open convex subsets of $Y$ such that $C_n\nearrow Y$, then there exists a sequence $\{\tilde{C}_n\}$ of open convex subsets of $X$ such that $\tilde{C}_n\nearrow X$ and $\tilde{C}_n\cap Y = C_n$. \end{enumerate} \end{theorem} \begin{proof} $(i)\Rightarrow(ii)$. Let $\{C_n\}$ be as in (ii). Using Lemma \ref{L:dn}, we can (and do) suppose that $C_n\ne\emptyset$ and $C_n\subset\subset C_{n+1}$ in $Y$ ($n\in\mathbb{N}$). Fix $a\in C_1$ and put $C_0:=\{a\}$. Choose $\epsilon_n>0$ such that $C_n+\varepsilon_n B_Y\subset C_{n+1}$ ($n\ge0$), and consider the function \[ f(y):=\sum_{n=0}^\infty \frac{1}{\varepsilon_n}\,\mathrm{dist}(y,C_n)\,,\quad y\in Y. \] It is easy to see that $f$ is a continuous convex function on $Y$; therefore it admits a continuous convex extension $\hat{f}$ to $X$ by (i). Let us show that the sets $D_n:=\{x\in X: \hat{f}(x)<n\}$ ($n\in\mathbb{N}$) have the desired properties. Obviously, they are convex and open, and $D_n\nearrow X$. Consider $n\in\mathbb{N}$ and $y\in Y\setminus C_n$. Since $\mathrm{dist}(y,C_k)\ge \epsilon_k$ for $0\le k<n$, we have $$ f(y)\ge\sum_{k=0}^{n-1}\frac{1}{\epsilon_k}\,\mathrm{dist}(y,C_k)\,\ge n. $$ This shows that $D_n\cap Y\subset C_n$. $(ii)\Rightarrow(i)$. Let $f$ be as in (i). Then the sets $C_n:=\{y\in Y: f(y)<n,\;\|y\|<n\}$ ($n\in\mathbb{N}$) are open convex and satisfy $C_n\nearrow Y$. Observe that $f$ is bounded on each $C_n$ by Lemma~\ref{F:1}(a). Find $D_n$ ($n\in\mathbb{N}$) by (ii). Since $D_n\cap Y\subset C_n$, the sequence $\{D_n\}$ satisfies the condition (iv) of Theorem~\ref{indiv}, and so (i) follows. $(ii)\Rightarrow(iii)$. Let $\{C_n\}$ be as in (iii). Find $D_n$ ($n\in\mathbb{N}$) by (ii). Choose $n_0\in\mathbb{N}$ such that $D_{n_0}\cap Y\ne\emptyset$. For $n\ge n_0$, put $\tilde{C}_n:=\mathrm{conv}(D_n\cup C_n)$. By Lemma~\ref{conv}(a),(c), we have that $\tilde{C}_n\cap Y=C_n$ and the convex set $\tilde{C}_n$ is open for any $n\ge n_0$. Let $n_1$ be the smallest index such that $C_{n_1}\ne\emptyset$. Fix $c\in C_{n_1}$ and choose $r>0$ such that $U(c,r)\subset\tilde{C}_{n_0}$ and $U(c,r)\cap Y\subset C_{n_1}$. Put $\tilde{C}_n=\emptyset$ for $1\le n<n_1$, and $\tilde{C}_n:=\mathrm{conv}(U(c,r)\cup C_n)$ for $n_1\le n<n_0$. Using Lemma~\ref{conv}(a),(c) as above, we easily obtain that the sequence $\{\tilde{C}_n\}$ has the desired properties. The reverse implication $(iii)\Rightarrow(ii)$ is obvious. \end{proof} As an application of Theorem~\ref{univ}, we give an alternative proof (see Theorem~\ref{subspace}) of the fact that separability of the quotient space $X/Y$ is sufficient for extendability of all continuous convex functions on $Y$ to $X$. This was proved in \cite{BMV} for Banach spaces using a condition about nets in $Y^*$, equivalent to (i) of Theorem \ref{univ}, together with Rosenthal's extension theorem. Our proof (for general normed linear spaces) is based on Theorem~\ref{univ} and on the following elementary lemma. 
\begin{lemma}\label{kuzeliky} Let $Y$ be a closed subspace of a normed linear space $X$. Let $B=r B_X$ for some $r>0$. Then, for any $x\in X$, there exists $y_x\in Y$ such that \begin{equation}\label{E:kuzeliky} \mathrm{conv}[(x+B)\cup B] \cap Y \subset \mathrm{conv}[\{y_x\}\cup 8B]. \end{equation} \end{lemma} \begin{proof} If $x=0$, $y_x=0$ works. For $x\ne0$, denote $P:=\mathrm{conv}[(x+B)\cup B]\cap Y$ ($\ne\emptyset$) and $s:=\sup\{\|y\|:y\in P\}$, and choose $u_0\in P$ such that $\|u_0\|>s-r$. Observe that $u_0\ne0$ since $s\ge\sup\{\|y\|:y\in B\cap Y\}=r$. We claim that $y_x=8u_0$ works. Fix $a_0\in x+B$, $b_0\in B$ and $\lambda\in[0,1]$ such that $u_0=(1-\lambda)a_0+\lambda b_0$. It suffices to prove that $P\setminus B\subset \mathrm{conv}[\{y_x\}\cup 8B].$ Given $u\in P\setminus B$, choose $a\in x+B$ and $b\in B$ such that $u\in[a,b]$. Note that $a\ne b$ since $u\notin B$. Put $v=(1-\lambda)a+\lambda b$ and observe that $\|v-u_0\|\le 2r$. Consider the half-line $H:=\{v+t(a-b): t\ge0\}$. Let $v_1\in H$ be such that $\|v_1-v\|=5r$. Then $v_1\in u_0+7B$ since $\|v_1-u_0\|\le\|v_1-v\|+\|v-u_0\|\le 7r$. We claim that no point $y\in H$ with $\|y-v\|>5r$ can belong to $P$ since it satisfies $\|y\|>s$. Indeed, since $v\in[y,b]$, \begin{align*} \|y\| &\ge\|y-b\|-\|b\|=\|y-v\|+\|v-b\|-\|b\| \ge \|y-v\| +\|v\|-2\|b\| \\ & \ge \|y-v\| +\|u_0\|-\|u_0-v\|-2\|b\| > 5r+(s-r)-2r-2r=s. \end{align*} Consequently, if $u\in H$ then $u\in[v_1,v]$, and if $u\in[a,b]\setminus H$ then $u\in[v,b]$. In both cases, $u\in[v_1,b]\subset\mathrm{conv}[(u_0+7B)\cup B]$. To finish, observe that $u_0+7B=\frac18(8u_0)+\frac78(8B)$ implies $$ u\in \mathrm{conv}[(u_0+7B)\cup B]\subset \mathrm{conv}[\{8u_0\}\cup 8B]. $$ \end{proof} \begin{theorem}[{\cite[Corollary~4.10]{BMV}} for $X$ Banach]\label{subspace} Let $Y$ be a closed subspace of a normed linear space $X$ such that $X/Y$ is separable. Then each continuous convex function $f\colon Y\to\mathbb{R}$ admits a continuous convex extension to $X$. \end{theorem} \begin{proof} It suffices to verify the condition (ii) of Theorem~\ref{univ}. Let $C_1\subset C_2\subset\ldots$ be open convex subsets of $Y$ such that $\bigcup_n C_n=Y$. We can (and do) suppose that $0\in\mathrm{int}_Y\,C_1$. Fix $r>0$ such that \begin{equation}\label{E:8r} 8 r B_Y\subset C_1. \end{equation} Fix a dense sequence $\{\xi_n\}_{n\in\mathbb{N}}\subset X/Y$ and, for each $n$, choose an arbitrary $z_n\in\xi_n$. The sets $Z_n:=\mathrm{conv}\{z_1,\ldots,z_n\}$ ($n\in\mathbb{N}$) form a nondecreasing sequence of compact convex sets such that the union $\bigcup_n(Z_n+Y)$ is dense in $X$. Define $Z_0=\emptyset$. {\it Claim.} There exists an increasing sequence of integers $\{k_n\}_{n\ge0}$ such that $k_0=1$ and, for each $n$, \begin{equation}\label{E:claim} \mathrm{conv}(Z_n\cup B)\cap Y\subset C_{k_n}\qquad \text{where $B=rB_X$.} \end{equation} To prove this, we shall proceed by induction with respect to $n$. Observe that \eqref{E:claim} is satisfied for $n=0$ and $k_0=1$. Suppose we already have $k_0,\ldots,k_{n-1}$. Since $Z_n$ is compact, there exists a finite set $F\subset Z_n$ such that $Z_n\subset F+B$. For any $x\in F$, fix $y_x\in Y$ satisfying \eqref{E:kuzeliky}. Choose an integer $k_n>k_{n-1}$ such that $y_x\in C_{k_n}$ for each $x\in F$. Then, using \eqref{Val}, we obtain \[ \mathrm{conv}(Z_n\cup B)= \bigcup_{z\in Z_n}\mathrm{conv}(\{z\}\cup B)\subset \bigcup_{x\in F}\mathrm{conv}((x+B)\cup B).
\] Consequently, using \eqref{E:kuzeliky} and Lemma \ref{conv}(a), we obtain \begin{align*} \mathrm{conv}(Z_n\cup B)\cap Y &\subset \bigcup_{x\in F}[\mathrm{conv}((x+B)\cup B)\cap Y] \\ &\subset \bigcup_{x\in F}[\mathrm{conv}(\{y_x\}\cup 8B)\cap Y] \\ &= \bigcup_{x\in F} \mathrm{conv}[(8B\cap Y)\cup \{y_x\}] \subset C_{k_n} \end{align*} since, by \eqref{E:8r}, $(8B\cap Y)\cup \{y_x\} \subset C_{k_n}$ for each $x\in F$. This proves our Claim. For each $j\in\mathbb{N}$, let $n(j)$ be the unique nonnegative integer with $k_{n(j)}\le j <k_{n(j)+1}$. Let us define a nondecreasing sequence $\{D_j\}_{j\in\mathbb{N}}$ of open convex sets by $$ D_j:=\mathrm{int}[\mathrm{conv}(Z_{n(j)}\cup B\cup C_j)]. $$ By Lemma~\ref{conv}(a) and \eqref{E:claim}, we have \begin{align*} Y\cap D_j&\subset Y\cap \mathrm{conv}(Z_{n(j)}\cup B\cup C_j)\\ &= Y\cap\mathrm{conv}[\mathrm{conv}(Z_{n(j)}\cup B)\cup C_j] \\&= \mathrm{conv}\bigl\{[Y\cap\mathrm{conv}(Z_{n(j)}\cup B)]\cup C_j\bigr\} \\&\subset \mathrm{conv}\{C_{k_{n(j)}}\cup C_j\}=C_j. \end{align*} It remains to prove that $\bigcup_j D_j=X$. By Lemma~\ref{conv}(b), this is equivalent to say that $\bigcup_j D_j$ is dense in $X$. Since, for each $j$, $D_j$ is dense in $\tilde{D}_j:=\mathrm{conv}(Z_{n(j)}\cup B\cup C_j)$, it suffices to show that $\bigcup_j \tilde{D}_j$ is dense. Note that $H:=\frac12\,\bigcup_n(Z_n+Y)$ is dense in $X$ since $\bigcup_n(Z_n+Y)$ is dense. If $h\in H$ then $h=\frac12(z+y)$ with $z\in Z_n$ for some $n$, and $y\in Y$. Then, for sufficiently large $j$, we have $z\in Z_{n(j)}$ and $y\in C_j$, and hence $h\in \tilde{D}_j$. Consequently, $\bigcup_j \tilde{D}_j$ is dense since it contains $H$. \end{proof} \bigskip \bigskip \subsection*{Acknowledgement} The research of the first author was partially supported by the Ministero dell'Istruzione, dell'Universit\`a e della Ricerca of Italy. The research of the second author was partially supported by the grant GA\v CR 201/06/0198 from the Grant Agency of Czech Republic and partially supported by the grant MSM 0021620839 from the Czech Ministry of Education.
\section{Introduction} $\delta$ Scuti stars are a class of pulsating stars located on the Hertzsprung-Russell Diagram on or around the Main Sequence, where it intersects the instability strip. They are 1.5 - 2.5 $M_{\odot}$ stars, often pulsating in one main dominant oscillation mode or in many lower-amplitude pulsation modes. They have been interesting targets seismologically, because the oscillation amplitudes often reach tenths of magnitudes, and we understand their stellar structure relatively well, so seismology can allow us to probe details of the microphysics such as energy transport mechanisms and convective core overshoot, as well as other less well-developed theories such as rapid rotation. The main setbacks that $\delta$ Scuti seismology faces (in common with that of other pulsating stars) are 1) the fundamental stellar parameters are not well-enough constrained to allow the few pulsation modes to probe the structure, and 2) rapid rotation causes each of the degrees $l$ to split into $2l+1$ $m$-modes, making mode-identification a difficult task \cite{gou05}. Indeed, these problems are neither exclusive nor exhaustive. This has prompted authors to look towards objects where at least one of these problems can be eliminated \cite{lb02, ah04, mac06, cos07}. Observing stars where the number of free parameters is constrained, such as in open clusters or multiple systems, is a possibility for overcoming these obstacles. In order to use seismology to probe the interior of a star, the parameters of the star need to be known quite well; for example, the mass should be known to 1-2\% \cite{cre07}. The observables from a binary system provide strict constraints on the parameters of the component stars. If the binary is an eclipsing and spectroscopic system, the absolute values of the masses and radii can be extracted to 1-2\% (e.g. \cite{rib99,las00}). The objective of this study is to find out if the uncertainties in the stellar parameters can be reduced, so that seismology can be applied to those stars that exhibit one or few pulsation modes. We look at the particular case of a pulsating star in an eclipsing binary system and compare the parameter uncertainties with those of an isolated star. This study quantifies how well the stellar parameters can be extracted in various hypothetical systems. \section{Methods} The mathematical basis of this study lies in the application of Singular Value Decomposition (SVD) techniques to physical models. This technique has been used in previous studies, such as \cite{bro94, mm05, cre07}, and it is being applied to many other areas of astrophysics and science because of its powerful diagnostic properties. Intricate details of this mathematical technique are elaborated upon in the above-mentioned publications as well as in \cite{pre92}. Here we give the basic equations to enable an understanding of this work. \subsection{Singular Value Decomposition} SVD is the decomposition of any $M\times N$ matrix {\bf D} into three components ${\bf U}$, ${\bf W}$ and ${\bf V^T}$ given by ${\bf D} = {\bf U W V^T}$. ${\bf V^T}$ is the transpose of ${\bf V}$, which is an $N \times N$ orthogonal matrix that contains the {\it input} basis vectors for ${\bf D}$, or the vectors associated with the parameter space. ${\bf U}$ is an $M \times N$ orthogonal matrix that contains the {\it output} basis vectors for ${\bf D}$, or the vectors associated with the observable space. ${\bf W}$ is a diagonal matrix that contains the {\it singular values} of ${\bf D}$.
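As a minimal numerical sketch of this decomposition (the matrix below is randomly generated and its dimensions are arbitrary; it merely stands in for the design matrix defined next), the reduced SVD returned by standard linear-algebra routines follows the convention above when $M \ge N$:
\begin{verbatim}
import numpy as np

# Hypothetical M x N matrix standing in for D (M observables, N parameters).
M, N = 12, 8
rng = np.random.default_rng(0)
D = rng.normal(size=(M, N))

# Reduced SVD: D = U @ diag(w) @ Vt, with U of shape (M, N), w the N singular
# values (in decreasing order), and Vt = V^T of shape (N, N).
U, w, Vt = np.linalg.svd(D, full_matrices=False)

# Check the reconstruction D = U W V^T.
assert np.allclose(U @ np.diag(w) @ Vt, D)
print(U.shape, w.shape, Vt.shape)   # (12, 8) (8,) (8, 8)
\end{verbatim}
The columns of ${\bf V}$ (the rows of the returned ${\bf V^T}$) are the parameter-space directions ${\bf V}_k$ that enter the error analysis below.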
The key element to our work is the description of the matrix {\bf D}. Here we define {\bf D} to be a matrix whose elements consist of the partial derivatives of each of the observables with respect to each of the parameters of the system, scaled by the inverse of the expected measurement error on each of the observables: \begin{equation} D_{ij} = \frac{\partial B_i}{\partial P_j} \epsilon_i^{-1}. \label{eqn:designmatrix} \end{equation} Here $B_i$ are each of the $i = 1, 2, \ldots, M$ observables of the system, with measurement or expected errors $\epsilon_i$, and $P_j$ are each of the $j = 1, 2, \ldots, N$ free parameters of the system (see Section \ref{sec:obspar} for discussion on the observables and the parameters). By writing the design matrix in terms of the measurement errors, we provide a quantitative description of the information content of each of the observables for determining the stellar parameters and their uncertainties. Suppose that we are looking for the true solution ${\bf P}_{\rm R}$ of the system. By starting from an initial close guess of the solution ${\bf P}_0$, SVD can be used as an inversion technique by calculating a set of parameter corrections ${\bf \delta P}$ that minimizes some goodness-of-fit function: ${\bf \delta P = V\bar{W}^{-1}U^T \delta B}$, where ${\bf \delta B}$ are the differences between the set of actual observations {\bf O} and the calculated observables ${\bf B}_{\rm 0}$ given the initial parameters ${\bf P}_{\rm 0}$. ${\bf \bar{W}}$ is a modification of the matrix {\bf W} such that the inverses of the singular values below a certain threshold are set to 0. The formal errors are composed of the sum of all of the ${\bf V}_k/w_k$ terms, where each ${\bf V}_k/w_k$ describes the direction and magnitude to move each parameter, so that the true solution ${\bf P}_{\rm R}$ and formal uncertainties can be given by \begin{equation} {\bf P}_{\rm R} = {\bf P}_{\rm 0} + {\bf V\bar{W}^{-1}U^T \delta B} \left ( \pm \frac{{\bf V}_1}{w_1} \pm \frac{{\bf V}_2}{w_2} \pm \cdots \pm \frac{{\bf V}_N}{w_N} \right). \label{eqn:dscuti_er} \end{equation} The {\it covariance matrix} {\bf C} consequently comes in a very neat and compact form: \begin{equation} C_{jl} = \sum_{k=1}^N \frac{V_{jk}V_{lk}}{w_{k}^2}, \label{eqn:covariance} \end{equation} and the square roots of the diagonal elements of the covariance matrix are the theoretical parameter uncertainties: \begin{equation} \sigma_j^2 = \sum_{k=1}^N \left ( \frac{V_{jk}}{w_k} \right )^{2}. \label{eqn:uncertainties} \end{equation} \subsection{Observables, Parameters \& Models\label{sec:obspar}} We describe a single $\delta$ Scuti star by a set of parameters or ingredients. The ingredients for the stellar model are mass $M$, age $\tau$, rotational velocity $v$, initial hydrogen and heavy-element mass fractions $X$ and $Z$, where $X+Y+Z = 1$ and $Y$ is the initial helium mass fraction, and the mixing-length parameter $\alpha$ where applicable\footnote{For masses larger than about 1.5 $M_{\odot}$ the outer convective layer is relatively thin, so the observables of the star are not very sensitive to the value of the mixing-length parameter.}. The distance to the object $d$ is also included as a parameter. For a binary system, the additional parameters are the system properties: separation of components $a$, eccentricity of orbit $e$, longitude of periastron $\omega$ and inclination of orbit $i$. Fortunately, both stellar components in a binary system share the parameters $\tau$, $X$ and $Z$, so the individual stars differ mainly in $M$ and $v$.
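Returning briefly to the SVD formalism above, the inversion and error propagation it describes can be sketched in a few lines. The design matrix \texttt{D}, the residual vector \texttt{delta\_B} (assumed here to carry the same $\epsilon_i^{-1}$ scaling as {\bf D}) and the truncation threshold are placeholders rather than quantities taken from this study:
\begin{verbatim}
import numpy as np

def svd_fit(D, delta_B, rel_threshold=1e-10):
    """Sketch of the SVD inversion and error analysis described above.

    D             : (M, N) design matrix dB_i/dP_j, scaled by 1/eps_i
    delta_B       : (M,) observed-minus-computed observables, assumed to be
                    scaled by 1/eps_i for consistency with D
    rel_threshold : singular values below this fraction of the largest one
                    are treated as zero (arbitrary placeholder value)

    Returns the parameter corrections delta_P, the covariance matrix C and
    the formal parameter uncertainties sigma.
    """
    U, w, Vt = np.linalg.svd(D, full_matrices=False)
    V = Vt.T

    # bar{W}^{-1}: invert the singular values, zeroing those below threshold.
    cutoff = rel_threshold * w.max()
    w_inv = np.divide(1.0, w, out=np.zeros_like(w), where=w > cutoff)

    # delta_P = V bar{W}^{-1} U^T delta_B
    delta_P = V @ (w_inv * (U.T @ delta_B))

    # C_jl = sum_k V_jk V_lk / w_k^2   and   sigma_j = sqrt(C_jj)
    C = (V * w_inv**2) @ V.T
    sigma = np.sqrt(np.diag(C))
    return delta_P, C, sigma
\end{verbatim}
In the application below, {\bf D} has one row per observable and one column per free parameter of the model.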
The parameters of the binary system of this study are given in Table \ref{tab:parameters}. The observables are the measurable quantities of the system. These include things such as radius, effective temperature, gravity, metallicity and parallax for a single star system. For a binary system, the observables include effective temperature ratio, relative radii, radial velocities and orbital period. The Aarhus STellar Evolution Code \cite{chr82} is used to calculate the stellar evolution models. This code uses the stellar parameters as the input ingredients, and returns a set of global stellar properties such as radius and effective temperature, as well as the interior profiles of the star such as mass, density and pressure. Oscillation frequencies for this rapidly rotating star are calculated using MagRot \cite{gt90, bt06}. Using the global stellar properties and the distance to the star, SDSS \cite{yor00} magnitudes and colours in various filters are evaluated using the Basel model atmospheres \cite{lej97}. Most of the binary observables are calculated analytically using the combined stellar and binary parameters. In this work we refer to the non-seismic data as the {\it classical} observables. Most of the classical observables of the binary system are given in Table~\ref{tab:parameters}. A clear distinction should be made between the input parameters of the system and output measurable quantities, the observables. So to discriminate between the errors in both parameters and observables, we shall denote the (derived) parameter uncertainties by $\sigma$, while $\epsilon$ is reserved for the observable errors. Both luminosity and effective temperature can be observables, but in Section \ref{sec:lt} the derived uncertainties of these properties are discussed. This refers specifically to the calculated error boxes in the luminosity-temperature (L-T) diagram, and here their uncertainties will also be denoted by $\sigma$. \begin{table} \centering \caption{\label{tab:parameters} System Parameters \& Observables} \begin{tabular}{@{}l*{15}{l}} \br Parameter & Value ($P_j$) & & & Observable & Value ($O_i$) & $\epsilon_i$ & & \\ \mr $M_A$ & 1.8 $M_{\odot}$ & & &$R_A$ & 1.95 $R_{\odot}$ & 0.02 \\ $M_B$ & 1.7 $M_{\odot}$ & & &$R_B$ & 1.81 $R_{\odot}$ & 0.02 \\ $\tau$ & 0.7 Gyr & & & $T_B/T_A$ & 0.97 & 0.05\\ $X$ & 0.700 & & & $T_{\rm eff}$ & 6965 (K) & 100 \\ $Z$ & 0.035 & & & [M/H] & 0.31 (dex) & 0.05\\ $v_A$ & 100.0 km s$^{-1}$ & & & $v_A \sin i$ & 99.7 km s$^{-1}$ & 2.5 \\ $v_B$ & 80.0 km s$^{-1}$ & & & $v_B \sin i$ & 59.8 km s$^{-1}$& 2.5 \\ $d$ & 200 pc & & & $\pi$ & 5.0 (mas) & 0.5\\ $a$ & 0.15 AU & & &$\Pi$ & 0.031 (yrs) & 0.00001\\ $i$ & 85.6 $^{\circ}$ & & & $i$ & 85.6 $^{\circ}$ & 0.05\\ $e$ & 0.0 & & & $M_A \sin^3 i $ & 1.78 $M_{\odot}$& 0.06\\ $\omega$ & 0.0 & & & $M_B \sin^3 i$ & 1.69 $M_{\odot}$& 0.05\\ \br \end{tabular} \end{table} \section{Results} The theoretical uncertainties in each of the parameters of $M_A$, $\tau$, $X$ and $Z$ are calculated as a function of error in radius $\epsilon_{R_{\star}}$ and as a function of error in colour $\epsilon_{(i-z)}$, using Equation \ref{eqn:uncertainties} coupled with the observable errors given in Table \ref{tab:parameters}. Consequently the theoretical uncertainties in luminosity and effective temperature are calculated using Equation \ref{eqn:dscuti_er}, for three different values of $\epsilon_{R_{\star}}$ and three different values of effective temperature error $\epsilon_{T_{\rm eff}}$. 
The results are shown in Figures \ref{fig1} and \ref{fig2} below. \subsection{Parameter Uncertainties} Figure \ref{fig1} shows the theoretical uncertainties ($\sigma$) in $M_A$, $\tau$, $X$ and $Z$ of a pulsating star as a function of error in the radius ({\it left panels}) and as a function of error in the photometric colours ({\it right panels}). Note that the left panels do {\it not} include photometric information, i.e.\ no colours or magnitudes. The dashed lines show the results for a single star (the observables are radius, effective temperature, gravity, metallicity, ...) while the solid lines show the results for a component of a binary system (observables are those of the single star and the binary observables). The lines with the diamonds include one identified mode as well as the classical observables. The results for the other stellar parameters are not shown, because the four aforementioned parameters and $v_A$ are responsible for determining the model structure of the pulsating star. $\sigma (v_A)$ is usually independent of $\epsilon_{R_{\star}}$ and $\epsilon_{(i-z)}$; it is determined mainly by the observables $v \sin i$ and $i$ from a combination of spectroscopy and the photometric light curve. \begin{figure} \begin{center} \includegraphics[width = 0.45\textwidth, height = 0.75\textheight] {fnrad_sgdl.eps} \includegraphics[width = 0.45\textwidth, height = 0.75\textheight] {fncol_sgdl.eps} \end{center} \caption{\label{fig1} Theoretical uncertainties ($\sigma$) in mass $M$, age $\tau$, initial hydrogen $X$ and metal $Z$ content as a function of observable error. The left panel shows the uncertainties as a function of observable radius error; here no photometric information has been included. The right panel shows the uncertainties as a function of observable error in photometric colour. The dashed and continuous lines show the results for the single star and the binary system respectively; those with diamonds show the results when an identified mode is included in the set of observables.} \end{figure} For the single star without an identified mode ({\it dashed lines, no diamonds}), the parameter uncertainties remain at a large constant value as a function of radius ({\it left panels}) but do decrease slightly with improved photometric data ({\it right panels}). Only when seismic data are included for the single star system ({\it dashed lines, diamonds}) {\it and} the observable errors are small are the parameters constrained to a useful level. By comparing the solid lines with and without diamonds in Figure \ref{fig1}, it can be seen that the addition of one identified mode makes almost no difference to the parameter uncertainties for the binary system. This implies that there is enough information provided by the binary constraints to sufficiently determine the stellar parameters. In this sense, the identified mode is {\it redundant} information, and can thus perhaps be used to test the interior of the star. Including photometric information ({\it right panels}) provides an interesting result: the information provided by the single star system can supersede that of the binary system for $\tau$ and $Z$. This is because the colours are uncontaminated by a companion star. This only happens at very small measurement errors, and only when an identified mode is included for the single star. \subsection{Luminosity-Temperature Error Box \label{sec:lt}} The correlation matrices come in a compact form when using SVD.
This then allows a calculation of the theoretical uncertainties in both effective temperature and luminosity (L-T) (Equation \ref{eqn:dscuti_er}). Figure \ref{fig2} shows the theoretical error box in effective temperature and luminosity for a single star system ({\it dashed lines}) and a binary system ({\it solid lines}). The observables do not include photometric information, and for the single star an identified mode is included\footnote{Figure \ref{fig1} shows that the parameters are not constrained for the single star if the identified mode is not included.}, while for the binary system no seismic data is included. \begin{figure} \begin{center} \includegraphics[width = 1.\textwidth, height = 0.3\textheight] {rad_sgdl_lt.eps} \end{center} \caption{\label{fig2} The theoretical error boxes for luminosity and effective temperature. The dashed lines represent the results for the single star while the continous lines represent the results for the binary system. The left panel shows the results while reducing the error in radius, and the right panel shows the results while reducing the error in effective temperature.} \end{figure} \subsubsection{Single Star} Observe how the error box reduces significantly while reducing the error in the radius observable ({\it left panel}). The uncertainty in $T_{\rm eff}$ also reduces slightly. The right panel also shows that by reducing the errors in the observable $T_{\rm eff}$, an expected corresponding reduction in the uncertainties in $T_{\rm eff}$ is noted. The $\epsilon_{T_{\rm eff}}$ of 200, 100, and 50 K, produces a $\sigma (T_{\rm eff})$ of 250, 110, and 50 K. The fact that these uncertainties are reproduced also gives confidence in this method. $\sigma(L_{\star})$ changes slightly as a function of $\epsilon_{T_{\rm eff}}$, its value is determined mostly by the error in the radius observable (2\%). Looking back to the left panel, we see that interpolating between 1\% and 3\% $\epsilon_{R_{\star}}$ produces a $\sigma(L_{\star})$ = 0.5 L$_{\odot}$ for $\epsilon_{R_{\star}}$ = 2\%. This is the value that is shown in the right panel. \subsubsection{Binary System} For the binary system ({\it solid lines}), {\it no} identified mode is included. The error box for the binary system does not reduce while reducing the errors in the radius, because of the small uncertainties in these parameters. However, the error box does reduce when the error in effective temperature is reduced, reproducing accurately the input $\epsilon_{T_{\rm eff}}$ of $\sigma(T_{\rm eff})$ = 200, 100, and 50 K. $\sigma (L_{\star})$ does not decrease in either panel, because the mass is well-determined for the binary system and provides this narrow constraint on $L_{\star}$. In all cases, note that the constraints provided by the binary system without an identified mode are more effective than those from the single star when an identified mode is included. \section{Conclusions} This study investigated whether the uncertainties in the stellar parameters of a pulsating component in an eclipsing binary system were sufficient so that an observed pulsation mode could be used to test the physics of the stellar interiors. Additionally we studied the information content of a pulsating star in a single star system to quantify how much is gained in terms of precision in parameters and size of the L-T error box by observing the star in a detached eclipsing binary system. 
The conclusions are summarized as follows: \begin{itemize} \item A single star system without an identified mode remains poorly understood when observables such as the radius or colours are poorly measured. The parameter uncertainties are too large to correctly place the star in the L-T diagram. \item A binary system {\it without} seismic information provides better constraints than the single star system when an oscillation mode has been identified. \item Reducing the size of some observable errors has little or no impact on the parameter determinations for the binary system, because these parameters are already well constrained. \item The tight constraints provided by the binary system for the stellar parameters reduces the size of the error box in the L-T diagram significantly. \item By carefully constraining the parameters of the star, just as an eclipsing binary system allows us to do, an accurate estimate of the stellar model under study can be obtained. This allows the redundant observables (like an oscillation mode) to be used exclusively to test the physics of the interior of a star. \end{itemize} \section*{References}
\section{Introduction} Uncertainty relations place fundamental limits on the precision achievable in measuring non-commuting observables. The original idea of uncertainty relation was first introduced by Heisenberg~\cite{heis} for position and momentum observables $Q$, $P$. Subsequently, a mathematically precise version (with $\hbar=1$) \be \label{pqunc} (\Delta Q)^2 \, (\Delta P)^2 \geq \frac{1}{4} \ee was formulated by Kennard~\cite{kennard}. Here $(\Delta \Gamma)^2 = \langle \Gamma^2 \rangle - \langle \Gamma \rangle^2$ denotes the variance of the observable $\Gamma=Q \ {\rm or}\ P$ and the bracket $\langle \cdots \rangle=\mbox{Tr}\,[\rho\cdots ]$ corresponds to the expectation value in a quantum state $\rho$. Weyl~\cite{weyl} and Robertson~\cite{robt} extended the uncertainty relation (\ref{pqunc}) for any arbitrary pair of physical observables $A_1$, $A_2$: \be \label{unc} (\Delta A_1)^2 (\Delta A_2)^2 \geq \frac{1}{4}\left\vert \langle\,[A_1,\,A_2 ]\,\rangle \right\vert^2 \ee which is commonly referred to as the Heisenberg-Robertson uncertainty relation in the literature. Here $[A_1,\,A_2 ]=A_1A_2-A_2A_1$ denotes the commutator of the observables $A_1$ and $A_2$. The uncertainty relation (\ref{unc}) imposes restrictions on the product of variances $(\Delta A_1)^2$ and $(\Delta A_2)^2$ -- essentially limiting the capability towards precise prediction of the measurement results of non-commuting observables. In general the Heisenberg-Robertson approach of placing limits on the uncertainties of incompatible observables in the given quantum state $\rho$ sets the conventional framework for deriving preparatory uncertainty relations~\cite{busch_werner,note1}. Apart from their fundamental interest, uncertainty relations play a significant role in the field of quantum information processing, with several applications such as entanglement detection~\cite{hf,hfbound, ogunhe}, quantum cryptography~\cite{koshi, berta, hangi, tom, branciard, aru_hsk, coles}, quantum metrology~\cite{reid2011} and foundational tests of quantum theory~\cite{scully,reid}. Motivated by their applicability, there has been an ongoing interest in reformulating uncertainty relations expressing trade-off of more than two incompatible observables, formalized in terms of variances~\cite{hf,ogunhe,akp1,rivas,varbase,akp2,chen,shabbir,xiao,bagchi,ma,song,bsanders,maconne19, busch2019,zheng,zukow}, or via information entropies~\cite{ hirschman, beckner, bial1, deutch, partovi, bial2, kraus, mu, sw, bial3}. Recently several experimental tests have been carried out to verify different forms of uncertainty relations~\cite{expt_wang, expt2017, chen_expt, expt2019}. In this paper we investigate variance based LSUR for local angular momentum operators of a bipartite quantum system, proposed by Hofmann and Takeuchi~\cite{hf}, violation of which witnesses entanglement. One of the main advantages of employing LSUR is the fact that it is possible to detect entanglement without a complete knowledge of the quantum state and it suffices to determine experimental friendly variances of local angular momentum observables for this purpose. We show that the angular momentum LSUR, which places lower bound on the set of all bipartite separable states, gets violated if and only if the covariance matrix of the two-qubit reduced system of the $N$-qubit permutation symmetric state is negative. 
Since it has been shown~\cite{ijmp06,pla07,prl07} that the covariance matrix negativity serves as a necessary and sufficient condition for entanglement in a two-qubit symmetric system, our result establishes a one-to-one connection between entanglement and violation of the LSUR. Furthermore, we show that a negative covariance matrix of the two-qubit reduced system of an $N$-qubit symmetric state leads to a variance based test of pairwise entanglement. \section{Sum Uncertainty Relation} It may be noted that the term $\langle\,[A_1,\,A_2 ]\,\rangle={\rm Tr}\left(\rho\,[A_1,\,A_2] \right) $ appearing in the right hand side of the Heisenberg-Robertson uncertainty relation (\ref{unc}) vanishes in some specific quantum states $\rho$. In such cases, one ends up with a trivial relation $(\Delta A_1)^2 (\Delta A_2)^2 \geq 0$ for the product of variances of non-commuting observables $A_1, \, A_2$. Moreover, the variance vanishes in an eigenstate of one of the observables. In such cases the Heisenberg-Robertson uncertainty relation (\ref{unc}) fails to capture the intrinsic indeterminacy of non-commuting observables. To overcome such issues it is preferable to employ uncertainty relations placing non-trivial bounds on the sum of variances $(\Delta A_1)^2 + (\Delta A_2)^2$. In fact, a lower bound for the sum of variances may be found by using the inequality $\sum_{\alpha=1}^{m} a_\alpha/m \geq \left(\prod_{\alpha}a_\alpha\right)^{1/m}$ between the arithmetic mean and the geometric mean of real non-negative numbers $a_\alpha, \alpha=1,2,\ldots, m$. Choosing $a_1=(\Delta A_1)^2,\ a_2=(\Delta A_2)^2$ and using (\ref{unc}) one obtains a variance based sum uncertainty relation \be \label{sur0} (\Delta A_1)^2 + (\Delta A_2)^2\geq \left\vert \langle\,[A_1,\,A_2 ]\,\rangle \right\vert. \ee However, the sum uncertainty relation (\ref{sur0}) is a byproduct of the Heisenberg-Robertson inequality (\ref{unc}) and thus it is non-informative for some quantum states $\rho$ in which one of the variances and/or $\langle\,[A_1,\,A_2 ]\,\rangle$ vanish. \subsection{Sum uncertainty relations for angular momentum operators} Hofmann and Takeuchi~\cite{hf} proposed that a non-trivial lower bound ${\cal U}>0$ must exist for the sum of variances of a set $\{A_\alpha\}$, $\alpha=1,2,\ldots $ of non-commuting observables, as they do not share any simultaneous eigenstate: \be \label{sur1} \sum_{\alpha}\,(\Delta A_\alpha)^2\geq {\cal U}. \ee The lower bound ${\cal U}$ in the sum uncertainty relation (\ref{sur1}) corresponds to the absolute minimum value $\left[\sum_{\alpha}\, (\Delta A_\alpha)^2\right]_{\rm min}$ for any quantum state $\rho$. While it is, in general, difficult to determine ${\cal U}$ for an arbitrary set $\{A_{\alpha}\}$ of non-commuting observables, there are some important physical examples in finite dimensional Hilbert spaces where the limiting value ${\cal U}$ can be readily identified. For example, in the case of a spin-$j$ quantum system the components of the angular momentum operator $\{J_1, J_2, J_3\}$ satisfy the following conditions: \begin{eqnarray*} \left\langle \left(J^2_1+J^2_2+J^2_3\right)\right\rangle&=&j(j+1) \\ \langle J_1\rangle^2+\langle J_2\rangle^2+\langle J_3\rangle^2 &\leq& j^2. \end{eqnarray*} Thus, one obtains a variance based sum uncertainty relation for the components $(J_1, J_2, J_3)$ of the angular momentum operator~\cite{hf}: \be \label{sur} (\Delta J_1)^2+(\Delta J_2)^2+(\Delta J_3)^2 \geq j, \ee imposing a limit on the measurement precision of more than one of the angular momentum components.
In the specific example of $j=1/2$ (i.e., for a qubit), we have $J_\alpha=\sigma_\alpha/2, \alpha=1,2,3$, where $\sigma_\alpha$ denote the Pauli matrices. One then obtains the sum uncertainty relation for qubits~\cite{hf} \be \label{paulisur} (\Delta \sigma_1)^2+(\Delta \sigma_2)^2+(\Delta \sigma_3)^2 \geq 2. \ee Given an arbitrary single qubit density matrix $\rho$, expressed in the standard basis $\{\vert 0\rangle, \vert 1\rangle\}$, \begin{eqnarray} \rho&=&\frac{1}{2}\left(I+\sum_{\alpha=1}^3\, s_\alpha \sigma_\alpha\right), \\ s_\alpha&=&{\rm Tr}[\rho\, \sigma_\alpha]; \ \ \ \sum_{\alpha} s_\alpha^2\leq 1, \nonumber \end{eqnarray} one obtains $(\Delta \sigma_1)^2+(\Delta \sigma_2)^2+(\Delta \sigma_3)^2=3-\sum_{\alpha} s_\alpha^2\geq 2$, in accordance with the sum uncertainty relation (\ref{paulisur}). Equality sign holds when $\sum_\alpha\, s_\alpha^2=1$ i.e., for pure states of qubit. \subsection{Local sum uncertainty relations for bipartite systems} Let us consider angular momentum operators $J_{A\alpha}, \ J_{B\alpha}$ acting on the Hilbert spaces ${\cal H}_A$, ${\cal H}_B$ of dimensions $d_A=(2j_A+1)$, $d_B=(2j_B+1)$ respectively. They satisfy the sum uncertainty relations \be \label{main} \sum_{\alpha=1}^3\, (\Delta J_{A\alpha})^2 \geq j_A, \ \ \sum_{\alpha=1}^3\, (\Delta J_{B\alpha})^2 \geq j_B. \ee Then, the LSUR~\cite{hf} \begin{eqnarray} \label{lsur} \sum_{\alpha=1}^3\, \left[\Delta \left(J_{A\alpha}+J_{B\alpha}\right)\right]^2\geq j_A+j_B \end{eqnarray} is necessarily satisfied by the set of all bipartite separable states $\rho^{(\rm sep)}_{AB}=\sum_{k}\, p_k\,\left(\rho_{Ak}\otimes \rho_{Bk}\right)$ in the Hilbert space ${\cal H}_A\otimes {\cal H}_B$. Violation of the LSUR (\ref{lsur}) serves as a clear signature of entanglement. This is readily seen by considering a bipartite system with $j_A=j_B=j$ prepared in a spin singlet state $$\vert\Psi^{\rm singlet}_{AB}\rangle=\frac{1}{\sqrt{2j+1}}\sum_{m=-j}^j\, (-1)^{j-m}\, \vert j, m\rangle_A \otimes \vert j,- m\rangle_B$$ in which one obtains $$\left(J_{A\alpha}+J_{B\alpha}\right)\vert\Psi^{\rm singlet}_{AB}\rangle\equiv 0,\ \ \alpha=1,\,2,\,3,$$ leading to $$\sum_{\alpha=1}^3\, \left[\Delta \left(J_{A\alpha}+J_{B\alpha}\right)\right]^2=0.$$ In other words, the LSUR (\ref{lsur}) gets violated in a spin singlet state $\vert\Psi^{\rm singlet}_{AB}\rangle$, thus highlighting the entanglement property that local observables of the subsystem $A$ can be determined by performing measurements on the other subsystem $B$. More specifically, violation of LSUR signifies in general that correlations between subsystems in an entangled state can be determined with enhanced precision than those in a separable state. It is of interest to explore if violation of local sum uncertainty relations is both necessary and sufficient to detect entanglement in some special classes of bipartite quantum systems. To this end, Hoffmann and Takeuchi~\cite{hf} considered the following two-qubit LSUR \be \label{lsur_qubit} \sum_{\alpha=1}^3\,\left[\,\Delta(\sigma_\alpha \otimes I+I \otimes \sigma_\alpha)\right]^2\geq 4, \ee obtained by substituting $j_A=j_B=1/2$ and $J_{A\alpha}=(\sigma_\alpha\otimes I)/2$, $J_{B\alpha}=(I\otimes \sigma_\alpha)/2,\, \alpha=1,\, 2,\, 3$ in (\ref{lsur}). 
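As a quick numerical illustration of (\ref{lsur_qubit}), the following minimal sketch (assuming only numpy) evaluates its left hand side for a separable product state, which respects the lower bound of 4, and for the two-qubit singlet state, which violates it maximally:

\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def lsur_lhs(rho):
    """Sum over alpha of the variance of sigma_alpha x I + I x sigma_alpha."""
    total = 0.0
    for s in paulis:
        A = np.kron(s, I2) + np.kron(I2, s)
        mean = np.trace(rho @ A).real
        total += np.trace(rho @ A @ A).real - mean ** 2
    return total

# Separable product state |00>: the bound 4 of the two-qubit LSUR holds.
ket00 = np.kron([1, 0], [1, 0]).astype(complex)
rho_prod = np.outer(ket00, ket00.conj())

# Two-qubit singlet: maximally entangled, the LSUR is violated (value 0).
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_singlet = np.outer(singlet, singlet.conj())

print(lsur_lhs(rho_prod), lsur_lhs(rho_singlet))   # -> 4.0  0.0
\end{verbatim}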
Consider the one-parameter family of two-qubit Werner states \be \label{werner} \rho^{\rm Werner}_{AB}=\frac{(1-x)}{4}(I_2\otimes I_2)+x\, \vert\Psi\rangle_{AB}\langle \Psi\vert, \ee where $\vert\Psi\rangle_{AB}=\frac{1}{\sqrt{2}}\, (\vert 0_A\,1_B\rangle-\vert 1_A\,0_B\rangle)$ and $0\leq x\leq 1$. It is readily seen that $$[\Delta(\sigma_\alpha \otimes I+I \otimes \sigma_\alpha)]^2=2\, (1-x), \ \ \alpha=1,2,3.$$ Thus, the left hand side of the LSUR (\ref{lsur_qubit}) is given by \be \label{WerLSUR} \sum_{\alpha=1}^3\,\left[\,\Delta(\sigma_\alpha \otimes I+I \otimes \sigma_\alpha)\right]^2=6\, (1-x) \ee in the Werner class of two-qubit states, which are known to be entangled for $x> 1/3$. From (\ref{WerLSUR}) it is evident that the LSUR (\ref{lsur_qubit}) is violated in the parameter range $1/3<x\leq 1$. Thus, violation of the two-qubit LSUR (\ref{lsur_qubit}) is both necessary and sufficient for detecting entanglement in the one-parameter family of two-qubit Werner states. In the next section we discuss violation of the LSUR in permutation symmetric $N$-qubit states. \section{Violation of LSUR by permutation symmetric $N$-qubit states} \label{sec2I} Permutation symmetric $N$-qubit states draw attention due to their experimental feasibility and the mathematical simplicity they offer~\cite{ijmp06,pla07,prl07,sym01,sym05,aups,sym06,braun,markham,arus,lamata,symnew}. This class of states belongs to the $d=2j+1=N+1$ dimensional subspace of the $2^N$ dimensional Hilbert space, which corresponds to the maximum value $j=N/2$ of the angular momentum of the $N$-qubit system. Invariance under exchange of the particles labeled by $\alpha$, $\beta$ in a multiparty state $\rho^{\rm sym}$ is exhibited by the property \be \Pi_{\alpha\beta}\rho^{\rm sym}=\rho^{\rm sym}\Pi_{\alpha\beta}=\rho^{\rm sym} \ee where $ \Pi_{\alpha\beta}$ denotes the permutation operator corresponding to the interchange of labels $\alpha,\, \beta$. We focus our attention on permutation symmetric $N$-qubit states ${\rho_{AB}^{\rm sym}}$ with even $N$, i.e., $N=2n$, $n$ an integer (which corresponds to the value $j=N/2=n$ associated with the set $\{J^{\rm total}_{\alpha}=J_\alpha\otimes I_2^{\otimes n}+I_2^{\otimes n}\otimes J_\alpha,\ \alpha=1,2,3\}$ of collective angular momentum operators of the $N$-qubit system) and explore the LSUR (\ref{lsur}) for bipartite divisions $A$, $B$ characterized respectively by $j_A=j_B=n/2$. The angular momentum operators of the bipartitions $A$ and $B$ are explicitly given by \begin{eqnarray} \label{jajb} J_{A\alpha}=J_\alpha\otimes I_2^{\otimes n}& =&\frac{1}{2}\, \sum_{k=1}^{n}\,\sigma_{k\alpha}\otimes I_2^{\otimes n} \nonumber \\ J_{B\alpha}= I_2^{\otimes n} \otimes J_\alpha &=& I_2^{\otimes n}\otimes \frac{1}{2}\,\sum_{k=1}^{n}\, \sigma_{k\alpha}, \end{eqnarray} where $I_2^{\otimes n}=I_2\otimes I_2\otimes\ldots \otimes I_2$ denotes the $n$-fold tensor product of the $2\times 2$ identity matrix $I_2$ and \begin{equation} \label{sigman} \sigma_{k\alpha}=I_2\otimes I_2\otimes\ldots\otimes \sigma_\alpha\otimes I_2\otimes \ldots \otimes I_2 \end{equation} with $\sigma_\alpha,\ \alpha=1,2,3$ appearing at position $k\leq n$ in the $n$-fold tensor product. Using angular momentum algebra it readily follows that \begin{eqnarray} \label{nnp1A} \sum_{\alpha=1}^3\, \left\langle\, J^2_{A\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}=j_A(j_A+1)=\frac{n(n+2)}{4} \\ \label{nnp1B} \sum_{\alpha=1}^3\, \left\langle\, J^2_{B\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}=j_B(j_B+1)=\frac{n(n+2)}{4}.
\end{eqnarray} Furthermore, the expectation values $\left\langle J_{A\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}$, $\left\langle J_{B\alpha}\,\right\rangle_{\rho_{AB}^{\rm sym}}$, $\left\langle J_{A\alpha}\,J_{B\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}$ evaluated in the $N$-qubit symmetric state ${\rho_{AB}^{\rm sym}}$ can be expressed in terms of two-qubit averages as follows: \begin{eqnarray} \label{s1} \left\langle J_{A\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}&=& \left\langle J_{\alpha}\otimes\,I_2^{\otimes n}\right\rangle_{\rho_{AB}^{\rm sym}} \nonumber \\ & =&\frac{1}{2}\, \sum_{k=1}^n\, \left\langle\,\sigma_{k\,\alpha} \otimes I_2^{\otimes{n}}\,\right\rangle_{\rho_{AB}^{\rm sym}}\nonumber \\ &=&\frac{n}{2}\, \langle\,\sigma_{\alpha}\otimes I_2 \rangle_{\varrho^{\rm sym}} \\ \left\langle J_{B\alpha}\,\right\rangle_{\rho_{AB}^{\rm sym}}&=& \left\langle I_2^{\otimes n}\otimes J_{\alpha}\,\right\rangle_{\rho_{AB}^{\rm sym}}\nonumber \\ &=&\frac{1}{2}\, \sum_{l=1}^n\, \left\langle\, I_2^{\otimes{n}}\, \otimes \sigma_{l\,\alpha} \right\rangle_{\rho_{AB}^{\rm sym}}\nonumber \\ \label{s2} &=&\frac{n}{2}\, \langle\,I_2\otimes\sigma_{\alpha}\rangle_{\varrho^{\rm sym}},\\ \label{scor} \left\langle J_{A\alpha}\,J_{B\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}&=& \left\langle J_{\alpha}\otimes J_{\alpha}\right\rangle_{\rho_{AB}^{\rm sym}} \nonumber \\ & =& \frac{1}{4}\, \sum_{k,l=1}^n\, \left\langle\,\sigma_{k\,\alpha} \otimes\, \sigma_{l\,\alpha}\right\rangle_{\rho_{AB}^{\rm sym}} \nonumber \\ &=& \frac{n^2}{4}\, \left\langle\,\sigma_{\alpha} \otimes\, \sigma_{\alpha}\right\rangle_{\varrho^{\rm sym}}. \end{eqnarray} Here it may be noticed that \begin{eqnarray*} \left\langle\,\sigma_{k\,\alpha} \otimes I_2^{\otimes{n}}\,\right\rangle_{\rho_{AB}^{\rm sym}}&=& \langle\,\sigma_{\alpha}\otimes I_2 \rangle_{\varrho^{\rm sym}} \\ \left\langle\, I_2^{\otimes{n}}\, \otimes \sigma_{k\,\alpha} \right\rangle_{\rho_{AB}^{\rm sym}}&=& \langle\,I_2\otimes\sigma_{\alpha}\rangle_{\varrho^{\rm sym}}, \\ \left\langle\,\sigma_{k\,\alpha} \otimes\, \sigma_{l\,\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}&=&\left\langle\,\sigma_{\alpha} \otimes\, \sigma_{\alpha}\right\rangle_{\varrho^{\rm sym}} \end{eqnarray*} irrespective of the qubit labels $k,l=1,2,\ldots, n$ (because the system is symmetric under interchange of the constituent qubits)~\cite{ijmp06,pla07, prl07}, with $\varrho^{\rm sym}$ denoting the two-qubit reduced state of {\em any random pair} $(k,l)$ of qubits, drawn from the $n$-qubit partitions $A$ and $B$ of the $N=2n$-qubit state ${\rho_{AB}^{\rm sym}}$. Expressing the two-qubit symmetric density matrix \begin{eqnarray} \label{varrho} \varrho^{\rm sym}&=&\frac{1}{4}\left(I_2\otimes I_2+\sum_{\alpha=1}^3 \, (\sigma_\alpha\otimes I_2+I_2\otimes \sigma_\alpha)\, s_\alpha\right.
\nonumber \\ && \ \ \ \ \left.+\sum_{\alpha,\beta=1}^3 \, (\sigma_\alpha\otimes \sigma_\beta)\, t_{\alpha\,\beta}\right) \end{eqnarray} in terms of its 8 real state parameters~\cite{ijmp06,pla07,prl07} \begin{eqnarray} \label{salpha0} s_\alpha&=&\langle\,\sigma_{\alpha}\otimes I_2 \rangle_{\varrho^{\rm sym}}=\langle\,I_2\otimes\sigma_{\alpha}\rangle_{\varrho^{\rm sym}} \\ \label{tab0} t_{\alpha\,\beta}&=&\left\langle\,\sigma_{\alpha} \otimes\, \sigma_{\beta}\right\rangle_{\varrho^{\rm sym}}=\left\langle\,\sigma_{\beta} \otimes\, \sigma_{\alpha}\right\rangle_{\varrho^{\rm sym}}\\ &=&t_{\beta\,\alpha}, \ \ \ \ \ \ t_{11}+t_{22}+t_{33}=1, \nonumber \end{eqnarray} leads to the following identifications for the expectation values $\left\langle J_{A\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}$, $\left\langle J_{B\alpha}\,\right\rangle_{\rho_{AB}^{\rm sym}}$, $\left\langle J_{A\alpha}\,J_{B\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}$ (see (\ref{s1}), (\ref{s2}), (\ref{scor})): \begin{eqnarray} \label{salpha1} \left\langle J_{A\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}&=&\frac{n}{2}\, s_\alpha = \left\langle J_{B\alpha}\,\right\rangle_{\rho_{AB}^{\rm sym}} \\ \label{t} \left\langle J_{A\alpha}\,J_{B\alpha}\right\rangle_{\rho_{AB}^{\rm sym}}&=& \frac{n^2}{4}\, t_{\alpha\alpha}. \end{eqnarray} It may be noted that under an {\em identical} local unitary transformation $\varrho^{\rm sym}\rightarrow U^{\otimes 2}\varrho^{\rm sym}\left(U^{\dag}\right)^{\otimes 2}$,\ $U\in SU(2)$, the qubit orientation vector $s=(s_1,s_2,s_3)^T$ (see (\ref{salpha1})) and the $3\times 3$ real symmetric two-qubit correlation matrix $T=(t_{\alpha\,\beta}),\ \alpha,\beta=1,2,3$ (see (\ref{t})) transform as~\cite{pla07} \begin{eqnarray} s\rightarrow s'&=&R\, s \nonumber \\ T\rightarrow T'&=& R\,T\,R^T \end{eqnarray} where $R\in SO(3)$ denotes the $3\times 3$ rotation matrix. Let us consider the angular momentum operators of the partitions $A$ and $B$ of the even $N$-qubit density matrix $\rho_{AB}^{\rm sym}$ (see (\ref{jajb})) \[ J_{A\alpha}=J_\alpha\otimes I_2^{\otimes n} =\frac{1}{2}\, \sum_{k=1}^{n}\,\sigma_{k\alpha}\otimes I_2^{\otimes n}, \] \begin{eqnarray} \label{jb} J'_{B\alpha}&=&I_2^{\otimes n} \otimes \left\{ \left[U^\dagger(\hat{a},\theta)\,\right]^{\otimes n}\,J_\alpha \left[U(\hat{a},\theta)\right]^{\otimes n}\right\} \nonumber \\ &=& I_2^{\otimes n}\otimes \frac{1}{2}\,\sum_{k=1}^{n}\, \left\{\left[U^\dagger(\hat{a},\theta)\,\right]^{\otimes n} \sigma_{k\alpha}\,\left[U(\hat{a},\theta)\right]^{\otimes n}\right\}\nonumber \\ &=&I_2^{\otimes n}\otimes \frac{1}{2}\,\sum_{k=1}^{n}\,\left\{\sum_{\beta=1}^{3}\, R_{\alpha\beta}(\hat{a},\theta)\, \sigma_{k\beta}\right\} \nonumber \\ &=&\sum_{\beta=1}^3 R_{\alpha\beta}(\hat{a},\theta)\, J_{B\beta} \end{eqnarray} where $U(\hat{a},\,\theta)=\exp \left(\frac{-i(\sigma\cdot{\hat a})\theta}{2}\right)\in SU(2)$ denotes a local unitary operation and $R_{\alpha\beta}(\hat{a},\theta)$, $\alpha,\, \beta=1,\,2,\,3$ denote the elements of the corresponding $3\times 3$ proper orthogonal rotation matrix $R(\hat a,\, \theta) \in SO(3)$, with $\hat{a}$ and $\theta$ being the axis and angle parameters. In a symmetric even $N$-qubit state, the angular momentum operators $J_{A\alpha}$, $J'_{B\alpha}$, $\alpha=1,\,2,\,3$ obey the following sum uncertainty relations (see (\ref{main})): \begin{eqnarray} & &\sum_{\alpha=1}^3 (\Delta J_{A\alpha})^2 \geq \frac{n}{2}, \ \ \ \ \ \sum_{\alpha=1}^3 (\Delta J'_{B\alpha})^2\geq \frac{n}{2}, \end{eqnarray} where $n=N/2$.
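These relations are straightforward to verify directly. The minimal sketch below (assuming numpy; the choice $N=4$ and the separable test state $\vert 0\rangle^{\otimes N}$ are purely illustrative) constructs the operators in (\ref{jajb}) explicitly and evaluates the two partition sums, together with the combined sum that enters the LSUR discussed next:

\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

n = 2                      # qubits per partition, so N = 2n = 4
N = 2 * n

def J(alpha, qubits):
    """Collective angular momentum component acting on the listed qubits."""
    out = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for k in qubits:
        ops = [sig[alpha] if q == k else I2 for q in range(N)]
        out += 0.5 * reduce(np.kron, ops)
    return out

JA = [J(a, range(n)) for a in range(3)]        # partition A: first n qubits
JB = [J(a, range(n, N)) for a in range(3)]     # partition B: last n qubits

# Symmetric, separable test state |0000>.
psi = np.zeros(2 ** N, dtype=complex)
psi[0] = 1.0

def var(op):
    mean = psi.conj() @ op @ psi
    return (psi.conj() @ op @ op @ psi - mean ** 2).real

print(sum(var(JA[a]) for a in range(3)),           # >= n/2 = 1
      sum(var(JA[a] + JB[a]) for a in range(3)))   # >= n   = 2
\end{verbatim}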
Then the ensuing LSUR (see (\ref{lsur})) \be \label{lsursym} \sum_{\alpha=1}^3 \left[\Delta \left(J_{A\alpha}+ J'_{B\alpha}\right)\right]^2\geq n \ee is satisfied by the set of all separable symmetric states. Violation of the LSUR (\ref{lsursym}), i.e., $\sum_{\alpha=1}^3 \left[\Delta \left(J_{A\alpha}+ J'_{B\alpha}\right)\right]^2~<~n$, reveals that the bipartite state $\rho^{\rm sym}_{AB}$ is entangled. We define \begin{eqnarray} \label{chin} &&\chi(\hat{a},\theta)=\frac{1}{n^2}\left( \sum_{\alpha=1}^3\left\{ \Delta \left[ J_{A\alpha}+ J'_{B\alpha}\right]\right\}^2-n\right) \nonumber \\ &&\ \ =\frac{1}{n^2}\left( \sum_{\alpha=1}^3\left\{\Delta \left[J_{A\alpha}+\sum_{\beta=1}^3\, R_{\alpha\beta}(\hat{a},\theta)\,J_{B\beta}\right]\right\}^2-n\right) \nonumber \\ \end{eqnarray} and observe that $\chi(\hat{a},\theta)<0$ implies that the LSUR (\ref{lsursym}) is violated. This in turn implies that $\rho^{\rm sym}_{AB}$ is entangled. We now proceed to prove the following Lemma. \begin{lem} \label{chiexp} In an even $N$-qubit permutation symmetric state $\rho_{AB}^{\rm sym}$ it is seen that \be \label{lemF} \chi(\hat{a},\theta)=\frac{1}{2}\left(\,1- s_0^2+{\rm Tr}\, [R(\hat{a},\theta)\,C]\right) \ee where $C=T-ss^T$ is the $3\times 3$ real symmetric {\em covariance matrix}~\cite{pla07,prl07} defined in terms of the two-qubit correlation matrix $T=(t_{\alpha\beta})$, $\alpha,\beta=1,2,3$, with ${\rm Tr}\,[T]=1$ (see (\ref{tab0})), and the qubit orientation vector $s=(s_1, s_2,s_3)^T$ (see (\ref{salpha0})). The squared magnitude of the qubit orientation vector is denoted by $s_0^2=s^T s=s_1^2+s_2^2+s_3^2$. \end{lem} \begin{pf} Let us express \begin{eqnarray*} \left[\Delta (J_{A\alpha}+J'_{B\alpha})\right]^2&=&\left[\Delta J_{A\alpha}\right]^2+\left[\Delta J'_{B\alpha}\right]^2 \\ &+& 2\langle J_{A\alpha} J'_{B\alpha} \rangle - 2\langle J_{A\alpha}\rangle\, \langle J'_{B\alpha} \rangle \end{eqnarray*} and simplify the left hand side of the LSUR (\ref{lsursym}) by substituting (\ref{s1})--(\ref{scor}), (\ref{salpha0}), (\ref{tab0}) to obtain \begin{eqnarray} \label{final} \sum_{\alpha=1}^3\, \left[\Delta (J_{A\alpha}+J'_{B\alpha})\right]^2&=&\frac{n(n+2)}{2}-\frac{n^2}{2}\, s_0^2 \nonumber \\ & & +\frac{n^2}{2}\, {\rm Tr}\, \left[ R(\hat{a},\theta) C\right]. \end{eqnarray} It readily follows that $\chi(\hat{a},\theta)$ defined in (\ref{chin}) reduces to the simple form given by (\ref{lemF}). \hskip 1.3in $\square$ \end{pf} Our main result is presented in the form of the following theorem: \begin{thm} A permutation symmetric even $N$-qubit state violates the angular momentum LSUR (\ref{lsursym}) if the associated two-qubit covariance matrix $C=T-ss^T$ is negative.
\end{thm} \begin{pf} Violation of the LSUR (\ref{lsursym}) is ensured whenever $\chi(\hat{a},\theta)<0$ for some choice of the axis-angle parameters $a~=~(a_1,a_2,a_3)^T$, $a^Ta=a_1^2+a_2^2+a_3^2=1$, $0\leq \theta\leq 2\pi$. Substituting the explicit form~\cite{kns} for the elements of the rotation matrix $R(\hat{a},\theta)$, i.e., \begin{eqnarray} \label{rot} R_{\alpha\beta}&=&\cos \theta\, \delta_{\alpha\beta}+(1-\cos \theta)a_\alpha a_\beta-\sin \theta\,\epsilon_{\alpha\beta\gamma}\,a_\gamma, \nonumber \\ & &\hskip 1in \alpha,\,\beta,\,\gamma=1,\,2,\,3 \end{eqnarray} in terms of the axis-angle parameters and using the property $c_{\alpha\beta}=c_{\beta\alpha}$ of the covariance matrix $C=T-ss^T$, we obtain \begin{eqnarray} \label{rotC} {\rm Tr}\, [R(\hat{a},\theta)\,C]&=& \cos \theta\, {\rm Tr}\, [C] + (1-\cos \theta)\, a^T\,C\,a \nonumber\\ &=& \cos\theta\, (1-s_0^2) + (1-\cos \theta)\, a^TCa, \end{eqnarray} where we have substituted \be {\rm Tr}\, [C]=(1-s_0^2). \ee Furthermore, choosing the rotation axis $a=(a_1,a_2,a_3)^T$ to be the eigenvector of the two-qubit covariance matrix $C$ corresponding to its least eigenvalue $c_L$, and setting the rotation angle to be $\theta=\pi$ in (\ref{rotC}), we get \begin{eqnarray} \label{rotCf} {\rm Tr}\, [R(\hat{a},\theta=\pi)\,C]&=& -(1-s_0^2) + 2\, c_L. \end{eqnarray} Substituting (\ref{rotCf}) in (\ref{lemF}) leads to \begin{equation} \chi(\hat{a},\theta=\pi)= c_L. \end{equation} It is thus evident that the LSUR (\ref{lsursym}) is violated whenever $c_L<0$, which proves the Theorem. \hskip 0.7in $\square$ \end{pf} In Ref.~\cite{pla07} it has been established by some of us that a two-qubit symmetric state $\varrho^{\rm sym}$ is entangled (negative under partial transpose) if and only if its associated covariance matrix $C=T-ss^T$ is negative. Thus the above Theorem draws attention to the fact that entanglement in the two-qubit reduced state $\varrho^{\rm sym}$ of the whole symmetric even $N$-qubit system $\rho_{AB}^{\rm sym}$ reflects itself in the violation of the LSUR (\ref{lsursym}). Substituting (\ref{rotCf}) in (\ref{final}), the left hand side of the LSUR, one obtains \begin{eqnarray} \label{lsurlhs} \sum_{\alpha=1}^3\, \left[\Delta (J_{A\alpha}+J'_{B\alpha})\right]^2=n(1+n\, c_L). \end{eqnarray} Thus \be \label{lsurc} \sum_{\alpha=1}^3\, \left[\Delta (J_{A\alpha}+J'_{B\alpha})\right]^2\geq n \Rightarrow (1+n\, c_L)\geq 1, \ee where the LSUR (\ref{lsursym}) is expressed in terms of the least eigenvalue of the two-qubit covariance matrix $C$. As long as $c_L\geq 0$ (which happens to be the case for separable symmetric states) the LSUR (\ref{lsurc}) is obeyed. In the next section we discuss two specific physical examples of $N$-qubit symmetric states which violate (\ref{lsurc}). \section{Examples of $N$-qubit symmetric states violating the angular momentum LSUR} \subsection{Symmetric multiqubit state generated by the one-axis twisting Hamiltonian} Kitagawa and Ueda~\cite{ku} proposed a non-linear Hamiltonian $$\hat{H}=\chi J^2_1,$$ referred to as the {\em one-axis twisting Hamiltonian}, for generating multiqubit spin squeezed states, where $J_1$ denotes one of the components of the collective angular momentum operator of the $N$-qubit system. Dynamical evolution of an initially {\em spin-down} state $\vert j,\,-j\rangle$ of the \break $N=2j$ qubit system, governed by the one-axis twisting Hamiltonian, results in~\cite{ku}: \begin{eqnarray} \label{oath} \vert\Psi_{\rm {KU}}\rangle&=&\exp(-iHt)\vert j,\,-j\rangle, \ \ j=\frac{N}{2}.
\end{eqnarray} In Refs.~\cite{ijmp06,aups}, authored by some of us, the mean spin vector $s=(s_1,s_2,s_3)^T$ and the correlation matrix $T$ of a random pair of qubits drawn from the state $\vert\Psi_{\rm {KU}}\rangle$ were given explicitly: the mean spin vector is \be {s}^T=\left(0,\,0,\,-\cos^{(N-1)}(\chi\,t)\right) \ee and the non-zero elements of the $3\times 3$ real symmetric matrix $T$ take the form \begin{eqnarray} & & t_{11}=t_{13}=t_{23}=0, \ \ \ t_{12}=\cos^{(N-2)}(\chi\,t)\,\sin(\chi\,t) \ \nonumber\\ & & t_{22}=\frac{1}{2}\left[1- \cos^{(N-2)}(2\chi\,t) \right],\ \ t_{33}=1-t_{22}. \end{eqnarray} We then construct the covariance matrix $C=T-{s}\, {{s}}^T$ and evaluate its eigenvalues as a function of the number $N$ of qubits and the dimensionless dynamical parameter $\chi\,t$. Fig.~1 illustrates the behaviour of the left hand side of the LSUR (\ref{lsurc}) with respect to $\chi\,t$ for different choices of $n=N/2$. \begin{figure}[h] \label{1} \begin{center} \includegraphics*[width=3in,keepaspectratio]{KU.eps} \caption{(Color online) The left hand side of the LSUR (\ref{lsurlhs}) i.e., $\sum_{\alpha=1}^3\, \left[\Delta (J_{A\alpha}+J'_{B\alpha})\right]^2=n(1+n\, c_L)$ in the $N$-qubit symmetric state $\vert\Psi_{\rm {KU}}\rangle$ given in (\ref{oath}), as a function of the dimensionless dynamical parameter $\chi t$ for different choices of $n=N/2$. Violation of LSUR (\ref{lsurc}) is clearly seen as $1+ n c_L< 1$. This in turn highlights the pairwise entanglement in the symmetric $N$-qubit state (\ref{oath}) where bipartite divisions are characterized by the collective angular momenta $j_A=j_B=n/2$.} \end{center} \end{figure} \subsection{One-parameter family of W-class $N$-qubit states} We consider the one-parameter $N$-qubit symmetric state of the W-class~\cite{adsum}: \begin{eqnarray} \label{wclass} \vert \Psi\rangle_{\rm W}&=&\, a\,\vert 0_1,0_2,\ldots, 0_N\rangle+\sqrt{1-a^2}\,\vert W\rangle, \nonumber\\ && \hskip 1in 0<a<1, \end{eqnarray} where \begin{eqnarray} \vert W\rangle &=&\frac{1}{\sqrt{N}}\, \left(\vert 1_1,0_2,\ldots\,0_N\rangle + \vert 0_1,1_2,0_3,\ldots\,0_N\rangle +\right. \nonumber \\ && \hskip 0.4in \left.\ldots\ldots +\vert 0_1,0_2,\ldots\,1_N\rangle \right) \end{eqnarray} denotes the symmetric $N$-qubit W state. The reduced two-qubit density matrix $\varrho^{\rm sym}={\rm Tr}_{N-2}\, [\,\vert \Psi\rangle_{\rm W}\langle \Psi\vert\, ]$, obtained by tracing out any $N-2$ of the qubits, is found to be~\cite{adsum} \be \label{2wclass} \varrho^{\rm sym}=\frac{1}{A+2D}\left(\begin{array}{cccc} A & B &B & 0\\ B & D &D & 0\\ B & D &D & 0\\ 0 & 0 & 0 & 0 \end{array}\right) \ee where \begin{eqnarray} A&=&\frac{N^2a^2+(N-2)(1-a^2)}{N^2\,a^2+N(1-a^2)},\ B=\frac{a\sqrt{1-a^2}}{1+a^2(N-1)},\ \nonumber \\ D&=& \frac{1-a^2}{N^2\,a^2+N(1-a^2)}.\ \end{eqnarray} The covariance matrix elements $c_{\alpha\beta}=t_{\alpha\beta}-s_{\alpha}s_{\beta}$ are readily evaluated in the two-qubit symmetric state $\varrho^{\rm sym}={\rm Tr}_{N-2}\, [\,\vert \Psi\rangle_{\rm W}\langle \Psi\vert\, ]$ and the associated $3\times 3$ matrix $C=T-ss^T$ takes the form \be \label{cwclass} C=\frac{1}{A+2D}\left(\begin{array}{ccc} 2D-4B^2 & 0 & 2B(1-D) \\ 0 & 2D & 0 \\ 2B(1-D) & 0 & A-2D-A^2 \end{array}\right). \ee We have plotted the left hand side of the LSUR (\ref{lsurc}) as a function of the parameter $a$, for different choices of the angular momenta $j_A=j_B=n/2$, in Fig.~2.
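The quantity plotted in Fig.~2 can be reproduced directly from (\ref{cwclass}). A minimal numerical sketch (assuming numpy; the sample values of $N$ and $a$ are chosen only for illustration) evaluates the least eigenvalue $c_L$ of $C$ and forms $1+n\,c_L$:

\begin{verbatim}
import numpy as np

def c_least(a, N):
    """Least eigenvalue of the two-qubit covariance matrix of the W-class
    state, with the matrix elements transcribed from Eq. (cwclass)."""
    A = (N**2 * a**2 + (N - 2) * (1 - a**2)) / (N**2 * a**2 + N * (1 - a**2))
    B = a * np.sqrt(1 - a**2) / (1 + a**2 * (N - 1))
    D = (1 - a**2) / (N**2 * a**2 + N * (1 - a**2))
    C = np.array([[2*D - 4*B**2, 0.0,  2*B*(1 - D)],
                  [0.0,          2*D,  0.0        ],
                  [2*B*(1 - D),  0.0,  A - 2*D - A**2]]) / (A + 2*D)
    return np.linalg.eigvalsh(C).min()

N = 8
n = N // 2
for a in (0.2, 0.5, 0.8):
    print(a, 1 + n * c_least(a, N))   # values below 1 signal LSUR violation
\end{verbatim}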
Violation of the LSUR, i.e., $(1+n\, c_L) < 1$, is manifestly seen in Fig.~2, thus highlighting the pairwise entanglement in the bipartite division (characterized by collective angular momenta $j_A=j_B=n/2$) of the W-class $N$-qubit symmetric state (\ref{wclass}). \begin{figure}[h] \label{1} \begin{center} \includegraphics*[width=3in,keepaspectratio]{Wclass.eps} \caption{(Color online) Plot of $(1+n\, c_L)$, i.e., the left hand side of the LSUR (\ref{lsurc}), in the W-class $N$-qubit symmetric state (\ref{wclass}), as a function of the parameter $a$ for different values of $n=N/2$. Here $c_L$ denotes the minimum eigenvalue of the covariance matrix $C$ (see (\ref{cwclass})) associated with the W-class state.} \end{center} \end{figure} \section{Conclusion} In this paper, it is shown that violation of local sum uncertainty relations (LSUR) for angular momentum operators is a necessary and sufficient condition for entanglement in two-qubit permutation symmetric states. The angular momentum LSUR for permutation symmetric even $N$-qubit states is shown to be violated when the least eigenvalue of the covariance matrix of the two-qubit reduced state is negative. The entanglement in the two-qubit reduced system is thus shown to imply entanglement between the bipartite divisions of the even $N$-qubit symmetric state. Our illustration of the result in two important classes of multiqubit states helps in discerning the bipartite entanglement through the parameters of the two-qubit marginal. \section*{Acknowledgements} HSK acknowledges the support of NCN through grant SHENG (2018/30/Q/ST2/00625). Sudha and ARU are supported by the Department of Science and Technology, India (Project No. DST/ICPS/QUST/Theme-2/2019).
\newcommand{\sect}[1]{\section{#1}\setcounter{equation}{0}} \begin{document} \addtolength{\baselineskip}{1.5mm} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \begin{flushright} DAMTP--1998--160\\ gr-qc/9812032\\ \end{flushright} \smallskip \begin{center} {\huge{Bounds on negative energy densities in static \\[2mm] space-times}}\\[12mm] {Christopher J. Fewster\footnote{Electronic address: {\tt cjf3@york.ac.uk}}}\\[6mm] {\sl Department of Mathematics, University of York, Heslington, York YO10 5DD, England}\\[8mm] {Edward Teo\footnote{Electronic address: {\tt E.Teo@damtp.cam.ac.uk}}}\\[6mm] {\sl Department of Applied Mathematics and Theoretical Physics, University of Cambridge,\\ Silver Street, Cambridge CB3 9EW, England\\[3mm] Department of Physics, National University of Singapore, Singapore 119260} \end{center} \vspace{0.8cm} \centerline{\bf Abstract}\bigskip \noindent Certain exotic phenomena in general relativity, such as backward time travel, appear to require the presence of matter with negative energy. While quantum fields are a possible source of negative energy densities, there are lower bounds---known as quantum inequalities---that constrain their duration and magnitude. In this paper, we derive new quantum inequalities for scalar fields in static space-times, as measured by static observers with a choice of sampling function. Unlike those previously derived by Pfenning and Ford, our results do not assume any specific sampling function. We then calculate these bounds in static three- and four-dimensional Robertson--Walker universes, the de Sitter universe, and the Schwarzschild black hole. In each case, the new inequality is stronger than that of Pfenning and Ford for their particular choice of sampling function. \vfill\eject \setcounter{footnote}{0} \renewcommand{\thefootnote}{\arabic{footnote}} \sect{Introduction} In recent years, there has been much interest in various exotic solutions of general relativity---such as traversable wormholes \cite{MT,MTY}, the Alcubierre ``warp drive'' \cite{alcubierre}, and the Krasnikov ``tube'' \cite{kransnikov}---that permit hyperfast or backward time travel. However, these space-times without exception require the presence of matter which possesses {\it negative\/} energy densities \cite{FR-worm,PF-warp,ER,olum}, and hence violate the standard energy conditions. Now, it is well-known that quantum field theory, unlike classical physics, allows the energy density to be unboundedly negative at a point in space-time \cite{epstein}. Should the theory place no restrictions on this negative energy, quantum fields could be used to produce gross macroscopic effects such as those mentioned above, or even a violation of cosmic censorship or the second law of thermodynamics. It is therefore important to have a quantitative handle on the permitted amount of negative energy in a neighbourhood of a space-time point. Ford and Roman \cite{FRa,FR} have found inequalities which constrain the duration and magnitude of negative energy densities for quantised free, real scalar fields in Minkowski space.
They show that a static observer, who samples the energy density by time-averaging it against the Lorentzian function \begin{equation} \label{Lorentzian} f(t)={t_0\over\pi}{1\over t^2+t_0^2}\,, \end{equation} obtains a result which is bounded from below by a negative quantity depending inversely on the characteristic timescale $t_0$. For example, in the case of a massless scalar field in four dimensions, the renormalised energy density in any quantum state satisfies \begin{equation} \rho\geq-{3\over32\pi^2t_0^4}\,. \end{equation} This means the more negative the energy density that is present in an interval, the shorter the duration of this interval must be. Thus, this ``quantum inequality''---in a way reminiscent of the uncertainty principle of quantum mechanics\footnote{However, the derivation of the quantum inequalities does not depend on any putative time-energy uncertainty principle.}---serves to limit any large-scale, long-time occurrence of negative energy. In the infinite sampling time limit $t_0\rightarrow\infty$, it reduces to the usual averaged weak energy condition (for quantum fields \cite{PFb,pfenning}). Eveson and one of the present authors~\cite{fewster} have recently presented a different derivation of the quantum inequalities for a massive scalar field in $n$-dimensional Minkowski space (with $n\geq2$). The method used is straightforward---involving only the canonical commutation relations and the convolution theorem of Fourier analysis---and has the virtue of being valid for any smooth, non-negative and even sampling function decaying sufficiently quickly at infinity. Furthermore, the resulting bounds turn out to be stronger than those obtained by Ford and Roman~\cite{FRa,FR} when the Lorentzian sampling function is applied. In the present paper, we extend this method to derive quantum inequalities for scalar fields in generally curved but static space-times using arbitrary smooth, non-negative (although not necessarily even, as assumed in \cite{fewster}) sampling functions of sufficiently rapid decay. We obtain a lower bound on the averaged normal-ordered energy density in the Fock space built on the static vacuum in terms of the appropriate mode functions. Since the normal-ordered energy density in a given state is the difference between the renormalised energy density in this state and the (generally nonzero and potentially negative) renormalised energy density of the static vacuum, our bound also constrains the renormalised energy density (cf.~\cite{PFb}). We apply our bound to several examples where the bound can be explicitly evaluated, namely the three- and four-dimensional Robertson--Walker universes, the de Sitter universe, and the Schwarzschild black hole. In all these cases, we obtain bounds which are up to an order of magnitude stronger than those previously derived by Pfenning and Ford \cite{PFb,pfenning,PFa} for the specific sampling function they used. \sect{Derivation of the quantum inequality} \label{QI} We shall consider $n+1$-dimensional space-times that are globally static, with time-like Killing vector $\partial_t$. The metric of such a space-time takes the general form \begin{equation} {\rm d}s^2=-|g_{tt}({\bf x})|{\rm d}t^2+ g_{ij}({\bf x}){\rm d}x^i{\rm d}x^j, \end{equation} where ${\bf x}=(x^1,x^2\dots,x^n)$ and $i,j=1,2,\dots,n$. The equation of a free, real scalar field $\phi$ of mass $\mu\geq0$ in this space-time is \begin{equation} -{1\over|g_{tt}|}\partial_t^2\phi+\nabla^i\nabla_i\phi -\mu^2\phi=0\,. 
\end{equation} Suppose it admits a complete, orthonormal set of positive frequency solutions. We write these mode functions as \begin{equation} f_\lambda(t,{\bf x})=U_\lambda({\bf x}){\rm e}^{-i\omega_\lambda t}, \end{equation} where $\lambda$ denotes the set of quantum numbers needed to specify the mode (which may be continuous or discrete). A general quantum scalar field can then be expanded as \begin{equation} \phi=\sum_\lambda(a_\lambda f_\lambda+a_\lambda^\dagger f_\lambda^\ast)\,, \end{equation} in terms of creation and annihilation operators $a_\lambda^\dagger$, $a_\lambda$ obeying the canonical commutation relations \begin{equation} \label{ccr} [a_\lambda,a_{\lambda^\prime}^\dagger]=\delta_{\lambda \lambda^\prime}\leavevmode\hbox{\rm{\small1\kern-3.8pt\normalsize1}}\,,\qquad [a_\lambda,a_{\lambda^\prime}]=[a_\lambda^\dagger, a_{\lambda^\prime}^\dagger]=0\,, \end{equation} and which generate the Fock space built on the static vacuum state $|0\rangle$. We shall be interested in the energy density of $\phi$ along the world-line $x^\mu(t)=(t,{\bf x_0})$ of a {\em static\/} observer, with ${\bf x_0}$ kept fixed. If the field is in a normalised quantum state $|\psi\rangle$, the normal-ordered energy density as measured by such an observer at time $t$ is \cite{PFb,pfenning} \begin{eqnarray} \langle\,:T_{\mu\nu}u^\mu u^\nu:\,\rangle&=&{\rm Re}\, \sum_{\lambda,\lambda^\prime}\bigg\{{\omega_\lambda\omega_{\lambda^\prime} \over|g_{tt}|}\Big[U_\lambda^\ast U_{\lambda^\prime}\langle a_\lambda^\dagger a_{\lambda^\prime}\rangle{\rm e}^{i(\omega_\lambda-\omega_{\lambda^\prime})t} -U_\lambda U_{\lambda^\prime}\langle a_\lambda a_{\lambda^\prime}\rangle {\rm e}^{-i(\omega_\lambda+\omega_{\lambda^\prime})t}\Big]\cr &&\hskip.5in+\Big[\nabla^iU_\lambda^\ast \nabla_i U_{\lambda^\prime}\langle a_\lambda^\dagger a_{\lambda^\prime}\rangle {\rm e}^{i(\omega_\lambda-\omega_{\lambda^\prime})t} +\nabla^iU_\lambda\nabla_iU_{\lambda^\prime}\langle a_\lambda a_{\lambda^\prime}\rangle {\rm e}^{-i(\omega_\lambda+\omega_{\lambda^\prime})t}\Big]\cr &&\hskip.5in+m^2\Big[U_\lambda^\ast U_{\lambda^\prime}\langle a_\lambda^\dagger a_{\lambda^\prime}\rangle{\rm e}^{i(\omega_\lambda-\omega_{\lambda^\prime})t}+U_\lambda U_{\lambda^\prime} \langle a_\lambda a_{\lambda^\prime}\rangle {\rm e}^{-i(\omega_\lambda+\omega_{\lambda^\prime})t}\Big]\bigg\}\,, \label{Tmunuexp} \end{eqnarray} where $u^\mu=\big(|g_{tt}|^{-1/2},{\bf 0}\big)$ is the observer's four-velocity, and $U_\lambda$ and its derivatives are evaluated at ${\bf x_0}$. We have also written $\langle\,\cdot\,\rangle\equiv\langle\psi|\cdot|\psi\rangle$ for brevity. Recall that the normal-ordered energy density is the difference between the renormalised energy density in the two states $|\psi\rangle$ and $|0\rangle$. We now define a weighted energy density \begin{equation} \rho = \int_{-\infty}^\infty{\rm d}t\,\langle\,:T_{\mu\nu}u^\mu u^\nu :\,\rangle\, f(t)\,, \end{equation} where $f$ is any smooth, non-negative function decaying at least as fast as ${\rm O}(t^{-2})$ at infinity, and normalised to have unit integral. Ford and coworkers~\cite{FRa,FR,PFb,pfenning,PFa} employ the Lorentzian function~(\ref{Lorentzian}), whose specific properties play a key r\^ole in their arguments [in particular, the Fourier transform of~(\ref{Lorentzian}) is simply the function $\exp(-|\omega|t_0)$]; we emphasise that our arguments apply to general $f$. 
Substituting from Eq.~(\ref{Tmunuexp}), the weighted energy density measured by the observer is \begin{eqnarray} \rho &=&{\rm Re}\, \sum_{\lambda,\lambda^\prime}\bigg\{{\omega_\lambda\omega_{\lambda^\prime} \over|g_{tt}|}\Big[U_\lambda^\ast U_{\lambda^\prime}\langle a_\lambda^\dagger a_{\lambda^\prime}\rangle\widehat{f}(\omega_{\lambda'}-\omega_\lambda) -U_\lambda U_{\lambda^\prime}\langle a_\lambda a_{\lambda^\prime}\rangle \widehat{f}(\omega_\lambda+\omega_{\lambda'})\Big]\cr &&\hskip.5in+\Big[\nabla^iU_\lambda^\ast \nabla_i U_{\lambda^\prime}\langle a_\lambda^\dagger a_{\lambda^\prime}\rangle \widehat{f}(\omega_{\lambda'}-\omega_\lambda) +\nabla^iU_\lambda\nabla_iU_{\lambda^\prime}\langle a_\lambda a_{\lambda^\prime}\rangle \widehat{f}(\omega_\lambda+\omega_{\lambda'}) \Big]\cr &&\hskip.5in+m^2\Big[U_\lambda^\ast U_{\lambda^\prime}\langle a_\lambda^\dagger a_{\lambda^\prime}\rangle \widehat{f}(\omega_{\lambda'}-\omega_\lambda) +U_\lambda U_{\lambda^\prime} \langle a_\lambda a_{\lambda^\prime}\rangle \widehat{f}(\omega_\lambda+\omega_{\lambda'})\Big]\bigg\}\,, \end{eqnarray} where we define the Fourier transform of $f$ by \begin{equation} \widehat{f}(\omega)=\int_{-\infty}^\infty{\rm d}t\,f(t){\rm e}^{-i\omega t}\,. \end{equation} By applying the inequality (\ref{inequality}), proved in the Appendix, to each of the cases $q_\lambda={\omega_\lambda\over|g_{tt}|^{1/2}}U_\lambda$, $\nabla_iU_\lambda$ and $mU_\lambda$, we obtain the following manifestly negative lower bound for $\rho$: \begin{equation} \rho\geq-{1\over2\pi}\int_0^\infty{\rm d}\omega\,\sum_\lambda \bigg({\omega_\lambda^2\over|g_{tt}|}U_\lambda^\ast U_\lambda +\nabla^iU_\lambda^\ast\nabla_iU_\lambda+m^2U_\lambda^\ast U_\lambda\bigg) \left|\widehat{f^{1/2}}(\omega+\omega_\lambda)\right|^2. \end{equation} Using the field equation satisfied by the spatial mode function \cite{PFb,pfenning}: \begin{equation} \nabla^i\nabla_iU_\lambda+\bigg({\omega_\lambda^2\over|g_{tt}|} -m^2\bigg)U_\lambda=0\,, \end{equation} this inequality can be rewritten as \begin{equation} \label{QIb} \rho\geq-{1\over\pi}\int_0^\infty{\rm d}\omega\,\sum_\lambda \bigg({\omega_\lambda^2\over|g_{tt}|}+{1\over4}\nabla^i\nabla_i\bigg) |U_\lambda|^2\left|\widehat{f^{1/2}}(\omega+\omega_\lambda)\right|^2. \end{equation} This is the desired quantum inequality, which is valid for general sampling functions $f(t)$, subject to the above-stated conditions. Another useful form of it can be obtained by introducing the new variable $u=\omega+\omega_\lambda$: \begin{equation} \label{QIc} \rho\geq-{1\over\pi}\int_{\omega_{\rm min}}^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2\sum_{\lambda~{\rm s.t.}~\omega_\lambda\leq u} \bigg({\omega_\lambda^2\over|g_{tt}|}+{1\over4}\nabla^i\nabla_i\bigg) \,|U_\lambda|^2, \end{equation} with $\omega_{\rm min}\equiv\min_\lambda\omega_\lambda$. To simplify it any further would require a specific choice of $f(t)$. For example, with the even sampling function \begin{equation} \label{mysamp} f(t)={2\over\pi}{t_0^3\over(t^2+t_0^2)^2}\,, \end{equation} that is peaked at $t=0$, we have \begin{equation} \left|\widehat{f^{1/2}}(\omega)\right|^2=2\pi t_0{\rm e}^{-2|\omega|t_0}. 
\end{equation} In this case, the quantum inequality can be expressed in terms of the Euclidean Green's function \begin{equation} G_{\rm E}(t,{\bf x};t^\prime,{\bf x}^\prime)=\sum_\lambda U_\lambda^\ast({\bf x})U_\lambda({\bf x}^\prime){\rm e}^{\omega_\lambda (t-t^\prime)}, \end{equation} quite compactly as \begin{equation} \rho\geq-\hbox{$1\over4$}{\hbox{$\sqcup$}\llap{\hbox{$\sqcap$}}}_{\rm E}G_{\rm E}(-t_0,{\bf x}; t_0,{\bf x})\,, \end{equation} where ${\hbox{$\sqcup$}\llap{\hbox{$\sqcap$}}}_{\rm E}\equiv{1\over|g_{tt}|}\partial_{t_0}^2+\nabla^i\nabla_i$ is the Euclidean wave operator. This bound is, in fact, identical to one that was derived in \cite{PFb,pfenning} assuming the Lorentzian sampling function (\ref{Lorentzian}). But because (\ref{mysamp}) is a more sharply peaked function [half the area under the Lorentzian function lies within $|t|<t_0$, while this figure is ${1\over2}+{1\over\pi}\simeq0.82$ for (\ref{mysamp})], this is a first indication that the inequality derived here is a stronger result. Finally, we record the fact that for the Lorentzian function, \begin{equation} \label{FT} \left|\widehat{f^{1/2}}(\omega)\right|^2=\frac{4t_0}{\pi} K_0(t_0|\omega|)^2, \end{equation} where $K_0(x)$ is the modified Bessel function of zeroth order. In the rest of this paper, we shall consider the quantum inequality in specific examples of globally static space-times where the left-hand side of (\ref{QIb}) or (\ref{QIc}) can be explicitly evaluated. As these examples have been considered previously by Pfenning and Ford \cite{PFb,pfenning,PFa}, we shall at times be brief and refer to their papers for more details. For the most part we will closely follow their notation and conventions. \sect{Minkowski space} We begin with a review of the quantum inequality in $n+1$-dimensional Minkowski space, the case that was treated in \cite{fewster}. The mode functions for a free scalar field of mass $\mu$ are \begin{equation} U_{\bf k}({\bf x})={1\over[(2\pi)^n 2\omega_{\bf k}]^{1/2}}{\rm e}^{i{\bf k}\cdot{\bf x}},\qquad \omega_{\bf k}=\sqrt{|{\bf k}|^2+\mu^2}\,, \end{equation} with each component of the $n$-dimensional (spatial) momentum covector ${\bf k}$ satisfying $-\infty<k_i<\infty$. The quantum inequality (\ref{QIb}) becomes \begin{eqnarray} \label{Mbound} \rho&\geq&-{1\over2\pi}\int_0^\infty{\rm d}\omega \int{{\rm d}^n{\bf k}\over(2\pi)^n}\,\omega_{\bf k} \left|\widehat{f^{1/2}}(\omega+\omega_{\bf k})\right|^2\cr &=&-\frac{C_n}{2\pi}\int_0^\infty{\rm d}\omega\int_\mu^\infty{\rm d}\omega'\, {\omega'}^2({\omega'}^2-\mu^2)^{n/2-1} \left|\widehat{f^{1/2}}(\omega+\omega')\right|^2, \end{eqnarray} where $C_n$ is equal to the area of the unit $n-1$-sphere divided by $(2\pi)^n$, i.e., \begin{equation} C_n\equiv\frac{1}{2^{n-1}\pi^{n/2}\Gamma(\frac{1}{2}n)}\,. \end{equation} If we make the change of variables $u=\omega+\omega'$ and $v=\omega'$, the quantum inequality (\ref{Mbound}) can be rewritten as \begin{equation} \label{Mbound2} \rho\ge-\frac{C_n}{2\pi(n+1)}\int_\mu^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2 u^{n+1}Q_n\bigg(\frac{u}{\mu}\bigg), \end{equation} where the functions $Q_n(x)$ are defined by \begin{equation} Q_n(x) = (n+1)x^{-(n+1)}\int_1^x{\rm d}y\, y^2(y^2-1)^{n/2-1}. \end{equation} There are several special cases in which this bound can be evaluated analytically \cite{fewster}, notably massless fields in two and four dimensions with the sampling function (\ref{Lorentzian}). 
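It is also straightforward to check the massless four-dimensional Minkowski case numerically with the Lorentzian sampling function. The sketch below (assuming scipy is available) inserts (\ref{FT}) into the $\mu\to0$ limit of (\ref{Mbound2}), in which $Q_3\to1$, and recovers the ratio to the Ford--Roman bound quoted below:

\begin{verbatim}
import numpy as np
from math import gamma, pi
from scipy.integrate import quad
from scipy.special import k0

t0, n = 1.0, 3
C_n = 1.0 / (2 ** (n - 1) * pi ** (n / 2) * gamma(n / 2))

# |(f^{1/2})-hat(u)|^2 for the Lorentzian sampling function, Eq. (FT).
fhat_sq = lambda u: (4.0 * t0 / pi) * k0(t0 * u) ** 2

# Massless limit of the four-dimensional bound: Q_3(u/mu) -> 1 as mu -> 0.
# The integrand vanishes like u^4 (ln u)^2 at the origin, so a tiny lower
# cutoff is harmless.
integral, _ = quad(lambda u: fhat_sq(u) * u ** 4, 1e-10, np.inf)
bound = -C_n / (2.0 * pi * (n + 1)) * integral

ford_roman = -3.0 / (32.0 * pi ** 2 * t0 ** 4)
print(bound / ford_roman)   # ~ 9/64 = 0.140625
\end{verbatim}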
In the former, the bound is four times stronger than that derived by Ford and Roman \cite{FR}, but $1{1\over2}$ times weaker than the optimal one of Flanagan \cite{flanagan}. In the latter case, the present bound is $9\over64$ of Ford and Roman's result. \sect{Three-dimensional closed universe} \label{IIIDCU} The line element for the static, three-dimensional closed universe is \begin{equation} {\rm d}s^2=-{\rm d}t^2+a^2({\rm d}\theta^2+\sin^2\theta\,{\rm d}\varphi^2)\,, \end{equation} where $a$ is the radius of the two-sphere at each constant time-slice, and the angular variables take values $0\leq\theta\leq\pi$, $0\leq\varphi<2\pi$ (and will do so for all the space-times considered in this paper). We consider the massive scalar field equation on this background with a coupling of strength $\xi$ to the scalar curvature $R=2/a^2$: \begin{equation} {\hbox{$\sqcup$}\llap{\hbox{$\sqcap$}}}\phi-(\mu^2+\xi R)\phi=0\,, \end{equation} whose mode-function solutions are given in terms of the usual spherical harmonics $Y_{lm}(\theta,\varphi)$ by~\cite{pfenning,PFa} \begin{equation} U_{lm}({\bf x})={1\over(2\omega_la^2)^{1/2}}Y_{lm}(\theta,\varphi)\,, \end{equation} for $l=0,1,2,\dots$ and $m=-l,-l+1,\dots,l$, with \begin{equation} \omega_l=a^{-1}\sqrt{l(l+1)+2\xi+(a\mu)^2}\, . \end{equation} The $Y_{lm}(\theta,\varphi)$ obey the sum rule \begin{equation} \label{sumrule} \sum_{m=-l}^{l}\big|Y_{lm}(\theta,\varphi)\big|^2={2l+1\over4\pi}\,, \end{equation} which can be used in (\ref{QIb}) and (\ref{QIc}) to show that \begin{eqnarray} \label{3Dbound} \rho&\geq&-{1\over8\pi^2a^2}\int_0^\infty{\rm d}\omega\sum_{l=0}^\infty\, (2l+1)\omega_l\left|\widehat{f^{1/2}}(\omega+\omega_l)\right|^2\cr &=&-{1\over8\pi^2a^2}\int_{\omega_0}^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2 \sum_{l=0}^{N(u)}\,(2l+1)\omega_l\,. \end{eqnarray} Here, $N(u)\equiv\max\{n\in{\bf Z}:\,\omega_n\leq u\}$, i.e., \begin{equation} N(u)=\left\lfloor{\sqrt{1-4[2\xi+(a\mu)^2-(au)^2]}-1\over2} \right\rfloor, \end{equation} where $\lfloor x \rfloor$ denotes the integer part of $x$. While the bound in (\ref{3Dbound}) can be readily evaluated using numerical techniques, it may be worthwhile to first simplify it analytically as much as possible. This may be useful if one should want to draw conclusions about its general properties. In particular, we shall present a general strategy for approximating finite summations like that in (\ref{3Dbound}). \begin{figure}[t] \begin{center} \epsfxsize=6.in \epsffile{QI-fig1.eps} \caption{Graph of the QI bound for the 3D closed universe [dashed line], and that obtained by Pfenning and Ford [solid line], against $\mu$.} \label{fig2} \end{center} \end{figure} The summation in (\ref{3Dbound}) can be evaluated using the trapezoidal rule of numerical integration (e.g., see Eq.~(3.6.1) of \cite{Hildebrand}): \begin{equation} \label{trapez} \sum_{n=0}^Ng(n)=\int_0^N{\rm d}x\,g(x)+{1\over2}\left[g(0)+g(N)\right] +{N\over12}g''(\zeta)\,, \end{equation} for some $\zeta\in(0,N)$. In the present case, $g(x)=(2x+1)\sqrt{x(x+1)+ 2\xi+(a\mu)^2}$, and the integral in (\ref{trapez}) can be evaluated analytically. Furthermore, $g''(\zeta)$ is non-decreasing in the interval in question, so its occurrence in (\ref{3Dbound}) can be replaced by $g''(N)$, at the expense of weakening the bound slightly. 
We obtain the final inequality \begin{equation} \label{3Dbound1} \rho\geq-{1\over8\pi^2a^3}\int_{\omega_0}^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2 \left\{\int_0^{N(u)}{\rm d}x\,g(x)+{1\over2}\left[g(0)+g(N(u))\right] +{N(u)\over12}g''(N(u))\right\}, \end{equation} with \begin{eqnarray} \int{\rm d}x\,g(x)&=&{2\over3}\left[x(x+1)+2\xi+(a\mu)^2\right]^{3/2},\cr g''(x)&=&{3(2x+1)\over\sqrt{x(x+1)+2\xi+(a\mu)^2}}-{1\over4} {(2x+1)^3\over\big[x(x+1)+2\xi+(a\mu)^2\big]^{3/2}}\,. \end{eqnarray} The graph of the bound in (\ref{3Dbound}) is plotted against mass in Fig.~\ref{fig2}, for $a=1$ and $\xi=0$. As usual, the sampling function $f(t)$ is taken to be the Lorentzian function (\ref{Lorentzian}), with $t_0=1$. When plotted on the same scale, that of (\ref{3Dbound1}) is almost indistinguishable from the former graph. For comparison, the corresponding bound obtained by Pfenning and Ford \cite{pfenning,PFa} is also plotted in Fig.~\ref{fig2}. It is clear that our bound is stronger for all values of mass. \sect{Four-dimensional Robertson--Walker universe} \label{IVDRWU} We shall first consider the case of the open universe, before proceeding to the closed universe. The line element is \begin{equation} {\rm d}s^2=-{\rm d}t^2+a^2\left[{\rm d}\chi^2+\sinh^2\chi\, \big({\rm d}\theta^2+\sin^2\theta\,{\rm d}\varphi^2\big)\right], \end{equation} where $a$ characterises the scale of the spatial section, and $0\leq\chi<\infty$. The mode functions for a scalar field of mass $\mu$ are \cite{parker} \begin{eqnarray} U_{qlm}({\bf x})&=&{1\over(2\omega_qa^3)^{1/2}} \Pi^{(-)}_{ql}(\chi)Y_{lm}(\theta,\varphi)\,,\cr \omega_q&=&\sqrt{{q^2+1\over a^2}+\mu^2}\,, \end{eqnarray} with $0<q<\infty$ and $l,m$ as usual. The functions $\Pi^-_{ql}(\chi)$ satisfy \begin{equation} \Pi^{(-)}_{ql}(\chi)\propto\sinh^l\chi\, \bigg({{\rm d}\over{\rm d}\cosh\chi}\bigg)^{l+1}\cos q\chi\,, \end{equation} and obey the sum rule \begin{equation} \sum_{l,m}\,\left|\Pi^{(-)}_{ql}(\chi)Y_{lm} (\theta,\varphi)\right|^2 ={q^2\over 2\pi^2}\,. \end{equation} The right-hand side does not depend on the angular variables, as is expected of a system with isotropic symmetry. Hence, the quantum inequality (\ref{QIb}) becomes \begin{equation} \label{Cbound} \rho\geq-{1\over4\pi^3a^3}\int_0^\infty{\rm d}\omega \int_0^\infty{\rm d}q\,\omega_qq^2 \left|\widehat{f^{1/2}}(\omega+\omega_q)\right|^2. \end{equation} Note that this bound is identical in form to that in (four-dimensional) Minkowski space. Both (\ref{Mbound}) and (\ref{Cbound}) can, in fact, be written as \begin{eqnarray} \rho&\geq&-{1\over4\pi^3}\int_0^\infty{\rm d}\omega \int_C^\infty{\rm d}\omega'\,\omega'^2\sqrt{\omega'^2-C^2} \left|\widehat{f^{1/2}}(\omega+\omega')\right|^2\cr &=&-\frac{1}{16\pi^3}\int_C^\infty{\rm d}u\,\left|\widehat{f^{1/2}}(u)\right|^2 u^4 Q_3\left(\frac{u}{C}\right), \end{eqnarray} where \begin{equation} C\equiv\sqrt{{\epsilon\over a^2}+\mu^2},\qquad\epsilon= \cases{0&Minkowski space;\cr1&open universe,} \end{equation} and an explicit expression (and graph) for $Q_3(x)$ can be found in \cite{fewster}. The Minkowski space result is obviously recovered in the limit of infinite $a$. Furthermore, since $Q_3(x)$ is an increasing function on $[1,\infty)$, the bound for general $a$ is tighter than that in Minkowski space for all sampling functions $f(t)$. Pfenning and Ford \cite{pfenning,PFa} also noted this for their particular choice of $f(t)$. 
We now turn to the closed or Einstein universe, with line element \begin{equation} {\rm d}s^2=-{\rm d}t^2+a^2\left[{\rm d}\chi^2+\sin^2\chi\, \big({\rm d}\theta^2+\sin^2\theta\,{\rm d}\varphi^2\big)\right], \end{equation} where $0\leq\chi\leq\pi$. The mode functions are \cite{parker} \begin{eqnarray} U_{nlm}({\bf x})&=&{1\over(2\omega_na^3)^{1/2}} \Pi^{(+)}_{nl}(\chi)\, Y_{lm}(\theta,\varphi)\,,\cr \omega_n&=&\sqrt{{n(n+2)\over a^2}+\mu^2}\,, \end{eqnarray} with $n=0,1,2,\dots$, $l=0,1,\dots,n$, $m=-l,-l+1,\ldots,l$, and \begin{equation} \Pi^{(+)}_{nl}(\chi)\propto\sin^l\chi\, \bigg({{\rm d}\over{\rm d}\cos\chi}\bigg)^{l+1}\cos (n+1)\chi\,. \end{equation} Using the sum rule \begin{equation} \sum_{l,m}\,\left|\Pi^{(+)}_{nl}(\chi)Y_{lm} (\theta,\varphi)\right|^2={(n+1)^2\over 2\pi^2}\,, \end{equation} we obtain the quantum inequality \begin{eqnarray} \label{RWbound} \rho&\geq&-{1\over4\pi^3a^3}\int_0^\infty{\rm d}\omega \sum_{n=0}^\infty\,\omega_n(n+1)^2 \left|\widehat{f^{1/2}}(\omega+\omega_n)\right|^2\cr &=&-{1\over4\pi^3a^3}\int_\mu^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2 \sum_{n=0}^{N(u)}\,\omega_n(n+1)^2, \end{eqnarray} with \begin{equation} N(u)\equiv\Big\lfloor\sqrt{(au)^2-(a\mu)^2+1}-1\Big\rfloor\,. \end{equation} \begin{figure}[t] \begin{center} \epsfxsize=6.in \epsffile{QI-fig2.eps} \caption{Graphs of the QI bounds for the 4D closed universe with $a\mu=1$: (\ref{Boundmua1}) using a dashed line, (\ref{Boundmua2}) using a dotted line, and that obtained by Pfenning and Ford [solid line].} \label{fig3} \end{center} \end{figure} An obvious special case to investigate is $a\mu=1$, in which $\omega_n=\mu(n+1)$. The sum in (\ref{RWbound}) may then be evaluated exactly, to give \begin{equation} \label{Boundmua1} \rho \ge -\frac{1}{16\pi^3a^4}\int_\mu^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2(N(u)+1)^2(N(u)+2)^2. \end{equation} This bound may be weakened slightly, by replacing $N(u)=\lfloor au-1\rfloor$ with the larger quantity $au-1$, to give \begin{equation} \label{Boundmua2} \rho \ge -\frac{1}{16\pi^3}\int_{\mu}^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2 \bigg( u^4+ \frac{2u^3}{a} + \frac{u^2}{a^2}\bigg)\,. \end{equation} It clearly differs from the massless Minkowski bound by ${\rm O}(1/a)$ terms. The bounds in (\ref{Boundmua1}) and (\ref{Boundmua2}) are plotted in Fig.~\ref{fig3} against mass. The difference between these two graphs can be further minimised using the approximations below, but at the expense of having a more complicated expression for the bound. The corresponding bound derived in \cite{pfenning,PFa} is also plotted on the same graph. \begin{figure}[t] \begin{center} \epsfxsize=6.in \epsffile{QI-fig3.eps} \caption{Graphs of the QI bounds for the 4D closed universe: (\ref{Boundgen1}) using a solid line, and its approximation (\ref{Boundgen2}) using a dashed line.} \label{fig4} \end{center} \end{figure} \begin{figure}[t] \begin{center} \epsfxsize=6.in \epsffile{QI-fig4.eps} \caption{Graphs of the QI bounds for the 4D closed universe: (\ref{Boundgen1}) using a dashed line, and that of Pfenning and Ford [solid line].} \label{fig5} \end{center} \end{figure} Returning to the general case, we note that (\ref{RWbound}) can be written as \begin{equation} \label{Boundgen1} \rho\geq-{1\over4\pi^3a^4}\int_\mu^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2 \sum_{n=1}^{N'}\,n^2\sqrt{n^2+(a\mu)^2-1}\,, \end{equation} where $N'\equiv\lfloor a\sqrt{u^2-\mu^2}\rfloor$.
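Returning briefly to the special case $a\mu=1$, the exact evaluation of the mode sum behind (\ref{Boundmua1}) and the weakening that yields (\ref{Boundmua2}) can both be confirmed numerically. A minimal Python sketch (the values of $a$ and $u$ are arbitrary):
\begin{verbatim}
import math

a = 2.0                     # arbitrary radius; the mass is then fixed by a*mu = 1
mu = 1.0/a

def omega(n):               # equals mu*(n+1) when a*mu = 1
    return math.sqrt(n*(n + 2)/a**2 + mu**2)

# exact sum versus the closed form mu*[(N+1)(N+2)/2]^2
for N in (3, 7, 25):
    exact  = sum(omega(n)*(n + 1)**2 for n in range(N + 1))
    closed = mu*((N + 1)*(N + 2)/2.0)**2
    assert abs(exact - closed) < 1e-9*closed

# replacing N(u) = floor(a*u - 1) by a*u - 1 turns (Boundmua1) into (Boundmua2)
for u in (0.9, 2.3, 5.1):
    N_weak = a*u - 1.0
    lhs = (N_weak + 1)**2*(N_weak + 2)**2/a**4
    rhs = u**4 + 2*u**3/a + u**2/a**2
    assert abs(lhs - rhs) < 1e-9*rhs
\end{verbatim}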
The finite sum can again be approximated analytically using the trapezoidal rule, now in the form: \begin{equation} \sum_{n=1}^Ng(n)=\int_1^N{\rm d}x\,g(x)+{1\over2}\left[g(1)+g(N)\right] +{N-1\over12}g''(\zeta)\,, \end{equation} for some $\zeta\in(1,N)$. From the fact that the second derivative of $g(x)=x^2\sqrt{x^2+(a\mu)^2-1}$ is non-decreasing in this interval, we obtain the inequality \begin{equation} \label{Boundgen2} \rho\geq-{1\over4\pi^3a^4}\int_\mu^\infty{\rm d}u\, \left|\widehat{f^{1/2}}(u)\right|^2 \left\{\int_1^{N'}{\rm d}x\,g(x)+{1\over2}\left[g(1)+g(N')\right] +{N'-1\over12}g''(N') \right\}\,, \end{equation} with \begin{eqnarray} \int{\rm d}x\,g(x)&=&{1\over4}x\big[x^2+(a\mu)^2-1\big]^{3/2}-{1\over8} \big[(a\mu)^2-1\big]x\sqrt{x^2+(a\mu)^2-1}\cr &&\qquad-{1\over8}\big[(a\mu)^2-1\big]^2\ln\Big(x+\sqrt{x^2+(a\mu)^2-1}\Big)\,, \cr g''(x)&=&2\sqrt{x^2+(a\mu)^2-1}+{5x^2\over\sqrt{x^2+(a\mu)^2-1}} -{x^4\over\big[x^2+(a\mu)^2-1\big]^{3/2}}\,. \end{eqnarray} The bound in (\ref{Boundgen1}) and its approximation in (\ref{Boundgen2}) are plotted in Fig.~\ref{fig4} against $\mu$, for $a=1$. As can be seen, the approximation is only very slightly weaker than the exact bound. Also plotted in Fig.~\ref{fig5} is the bound obtained in \cite{pfenning,PFa}, for comparison with (\ref{Boundgen1}). \sect{de Sitter space-time} A convenient static parametrisation of the de Sitter universe is \begin{equation} {\rm d}s^2 = -\bigg(1-{r^2\over\alpha^2}\bigg)\,{\rm d}t^2 +\bigg(1-{r^2\over\alpha^2}\bigg)^{-1}{\rm d}r^2+r^2({\rm d}\theta^2 +\sin^2\theta\,{\rm d}\varphi^2)\,, \end{equation} with $0\le r\le \alpha$. The surface $r=\alpha$ is the particle horizon for an observer located at the origin. In this representation, the mode functions for a scalar field with mass $\mu$ and energy $\omega$ are \begin{equation} U_{klm}({\bf x})={1\over(4\pi\alpha^2k)^{1/2}} f_{kl}(z)Y_{lm}(\theta,\varphi)\,, \end{equation} where we denote $z\equiv r/\alpha$ and $k\equiv\alpha\omega$. The latter continuously parametrises the mode function from zero to infinity, while $l$ and $m$ are as in Sec.~\ref{IIIDCU}. The radial function can then be solved in terms of the hypergeometric function $F(a,b;c;z)$ as \cite{higuchi} \begin{equation} f_{kl}(z)={\Gamma(b^+_l)\Gamma(b^-_l)\over\Gamma(l+{3\over 2}) \Gamma(ik)}z^l(1-{z^2})^{ik/2}F\bigg(b^+_l,b^-_l;l+{3\over 2};z^2\bigg)\,, \end{equation} with \begin{equation} b^\pm_l\equiv{1\over 2}\bigg(l+{3\over2}+ik \pm\sqrt{{9\over4}-\alpha^2\mu^2}\bigg)\,. \end{equation} Using the sum rule (\ref{sumrule}) in the quantum inequality (\ref{QIb}), we have for an observer at the origin, \begin{eqnarray} \rho&\geq&-{1\over64\pi^3\alpha^4}\int_0^\infty{\rm d}\omega \int_0^\infty{\rm d}k\sum_{l=0}^\infty\,{2l+1\over k}\bigg|{\Gamma(b^+_l) \Gamma(b^-_l)\over\Gamma(l+{3\over 2})\Gamma(ik)}\bigg|^2\cr&&\quad \lim_{z\rightarrow0} \bigg\{{4k^2\over1-z^2}+{1\over z^2}\partial_z\big[z^2(1-z^2)\partial_z\big] \bigg\}z^{2l}\bigg|F\bigg(b^+_l,b^-_l;l+{3\over 2};z^2\bigg)\bigg|^2 \left|\widehat{f^{1/2}}(\omega+k/\alpha)\right|^2.~~~~~~~ \end{eqnarray} In fact, only the $l=0$ and $l=1$ terms contribute (cf. Eqs.~(4.126) and~(4.127) of~\cite{pfenning}), and the expression may be simplified to give \begin{eqnarray} \label{deSitter} \rho&\geq&-{1\over8\pi^5\alpha^4}\int_0^\infty{\rm d}\omega \int_0^\infty{\rm d}k\,\sinh(\pi k)\Big\{(k^2+\alpha^2\mu^2)\big| \Gamma(b^+_0)\Gamma(b^-_0)\big|^2+4\big|\Gamma(b^+_1)\Gamma(b^-_1)\big|^2 \Big\}\cr&&\hskip1.65in \times\left|\widehat{f^{1/2}}(\omega+k/\alpha)\right|^2. 
\end{eqnarray} \begin{figure}[t] \begin{center} \epsfxsize=6.in \epsffile{QI-fig5.eps} \caption{Graphs of the QI bounds for de Sitter space-time: (\ref{deSitter}) using a dashed line, and that of Pfenning and Ford [solid line].} \label{fig6} \end{center} \end{figure} As was noted in \cite{PFb,pfenning}, there are two cases for which the gamma functions in (\ref{deSitter}) can be evaluated analytically, namely when $\mu=0$ and $\sqrt{2}/\alpha$. Assuming the Lorentzian sampling function (\ref{Lorentzian}) and using (\ref{FT}), we obtain, for the massless case, \begin{equation} \rho\geq-{t_0\over2\pi^4\alpha^2}\int_0^\infty{\rm d}\omega \int_0^\infty{\rm d}\omega^\prime\,(5\omega^\prime+2\alpha^2\omega^\prime{}^3) K_0\big(t_0(\omega+\omega^\prime)\big)^2. \end{equation} Defining the new variables $u=\omega+\omega^\prime$ and $v=\omega^\prime$, the bound becomes \begin{equation} -{t_0\over2\pi^4\alpha^2}\int_0^\infty{\rm d}u\,K_0(t_0u)^2 \int_0^u{\rm d}v\,(5v+2\alpha^2v^3)\,. \end{equation} This can be explicitly evaluated using the integral \begin{equation} \label{integral} \int_0^\infty{\rm d}u\,u^{\alpha-1}K_0(t_0u)^2={2^{\alpha-3} \over t_0^\alpha\Gamma(\alpha)}\Gamma\Big({\alpha\over2}\Big)^4, \end{equation} to obtain \begin{equation} \label{mydS} \rho\geq-{3\over32\pi^2t_0^4}{9\over64}\bigg[1+{16\over9}{5\over3} \Big({t_0\over\alpha}\Big)^2\bigg]\,. \end{equation} This bound is at least four times stronger than that obtained in \cite{PFb,pfenning}: \begin{equation} \label{forddS} \rho\geq-{3\over32\pi^2t_0^4}\bigg[1+{5\over3} \Big({t_0\over\alpha}\Big)^2\bigg]\,. \end{equation} In the limit $\alpha\rightarrow\infty$ or $t_0\rightarrow0$, we expect to recover the results for Minkowski space. Indeed, the bound in (\ref{mydS}) is then $9\over64$ that in (\ref{forddS}), as was observed in \cite{fewster}. When $\mu=\sqrt{2}/\alpha$, we similarly obtain the quantum inequality \begin{equation} \label{mydS2} \rho\geq-{3\over32\pi^2t_0^4}{9\over64}\bigg[1+{16\over9} \Big({t_0\over\alpha}\Big)^2\bigg]\,, \end{equation} in contrast to that derived in \cite{PFb,pfenning}: \begin{equation} \rho\geq-{3\over32\pi^2t_0^4}\bigg[1+ \Big({t_0\over\alpha}\Big)^2\bigg]\,. \end{equation} The bound in (\ref{deSitter}) and that derived in \cite{PFb,pfenning} are plotted for general $\mu$, and $\alpha=1$, in Fig.~\ref{fig6}. We have, in fact, proved that for general $\mu$, the de Sitter bound (\ref{deSitter}) differs from the Minkowski space bound~(\ref{Mbound2}) by terms no greater than order $\alpha^{-1/2}$ as $\alpha\to\infty$, and so our results for these cases agree in this limit. This estimate involves bounds on the integrand in Eq.~(\ref{deSitter}) which are uniform in $k$ and $\omega$. The proof, which we omit, is accordingly somewhat technical. It is unclear whether the argument can be strengthened to show that the deviation is in fact ${\rm O}(\alpha^{-2})$ in general, as it is for the specific cases considered in~(\ref{mydS}) and~(\ref{mydS2}). \sect{Schwarzschild space-time} As the final example, we shall examine the quantum inequalities in a black hole space-time. The line element for the Schwarzschild black hole of mass $M$ is \begin{equation} {\rm d}s^2=-\left(1-{2M\over r}\right){\rm d}t^2 +\left(1-{2M\over r}\right)^{-1}{\rm d}r^2 +r^2({\rm d}\theta^2 + \sin^2\theta\,{\rm d}\varphi^2)\,. \end{equation} For simplicity, we shall only consider a massless scalar field in this space-time. 
The mode functions, in the region exterior to the horizon $r>2M$, take the form \cite{deWitt} \begin{eqnarray} \stackrel{\rightarrow}{U}_{\omega lm}({\bf x})&=& {1\over(4\pi\omega)^{1/2}}\stackrel{\rightarrow}{R}_l(\omega|r) Y_{lm}(\theta,\varphi)\,,\cr \stackrel{\leftarrow}{U}_{\omega l m}({\bf x})&=&{1\over(4\pi\omega)^{1/2}} \stackrel{\leftarrow}{R}_l(\omega|r)Y_{lm}(\theta,\varphi)\,, \end{eqnarray} where, as usual, $\omega$ is the energy of the field and $Y_{lm}(\theta,\varphi)$ are the spherical harmonics. $\stackrel{\rightarrow}{R}_l(\omega|r)$ and $\stackrel{\leftarrow}{R}_l(\omega|r)$ are the outgoing and ingoing solutions to the radial part of the wave equation, respectively. Although this equation cannot be solved analytically, the asymptotic forms of the solutions are known near the horizon and at infinity. Again, using the sum rule (\ref{sumrule}), we see that the quantum inequality (\ref{QIb}) becomes \begin{eqnarray} \label{Schwarz} \rho&\geq&-{1\over16\pi^3}\int_0^\infty{\rm d}\omega \int_0^\infty{{\rm d}\omega^\prime\over\omega^\prime}\sum_{l=0}^\infty\, (2l+1)\, \bigg\{{\omega^\prime{}^2\over1-{2M\over r}}+{1\over4r^2}\partial_r\Big[ r^2(1-2M/r)\partial_r\Big]\bigg\}\cr &&\hskip1.65in\times\,\bigg[\Big|\stackrel{\rightarrow}{R}_l(\omega^\prime|r)\Big|^2+ \Big|\stackrel{\leftarrow}{R}_l(\omega^\prime|r)\Big|^2\bigg] \left|\widehat{f^{1/2}}(\omega +\omega^\prime)\right|^2. \end{eqnarray} In writing this, we are assuming that the mode functions are defined to have positive frequency with respect to the time-like Killing vector $\partial_t$. This is the Boulware vacuum. Now, in the two regions where $\stackrel{\rightarrow}{R}_l(\omega|r)$ and $\stackrel{\leftarrow}{R}_l(\omega|r)$ are known explicitly, we have \cite{candelas} \begin{equation} \sum_{l=0}^\infty\,(2l+1)\Big|\stackrel{\rightarrow}{R}_l(\omega|r)\Big|^2 \simeq\cases{4\omega^2(1-2M/r)^{-1},&$r\rightarrow2M$,\cr \displaystyle{{1\over r^2}\sum_{l=0}^\infty}(2l+1)\big|B_l(\omega)\big|^2, &$r\rightarrow\infty$,} \end{equation} and \begin{equation} \sum_{l=0}^\infty\,(2l+1)\Big|\stackrel{\leftarrow}{R}_l(\omega|r)\Big|^2 \simeq\cases{ \displaystyle{{1\over4M^2} \sum_{l=0}^\infty}(2l+1)\big|B_l(\omega)\big|^2,&$r\rightarrow2M$,\cr 4\omega^2,&$r\rightarrow\infty$.} \end{equation} If we further assume the low-energy condition $2M\omega\ll1$, then \cite{jensen} \begin{equation} \label{B_l} B_l(\omega)\simeq{(l!)^3\over(2l+1)!(2l)!}(-4iM\omega)^{l+1}. \end{equation} These results can be substituted into (\ref{Schwarz}), and the bound explicitly evaluated using the integral (\ref{integral}). However, the maximum value of $l$ for which the expansion in (\ref{Schwarz}) is valid depends on the order of the leading terms which have been dropped in $B_l(\omega)$. If (\ref{B_l}) is exact to ${\rm O}\left[(M\omega)^{l+2} \right]$, then only the $l=0$ terms should be retained, as the $B_1$ contribution would be smaller than the corrections to $B_0$ \cite{PFb,pfenning}. Near the horizon, the quantum inequality can be expressed in terms of the observer's proper time: \begin{equation} \tau_0=\bigg(1-{2M\over r}\bigg)^{1\over2}t_0\,, \end{equation} as \begin{equation} \label{Sbounda} \rho\geq-{3\over32\pi^2\tau_0^4}\bigg\{ {1\over24}\bigg({2M\tau_0\over r^2}\bigg)^2\bigg(1-{2M\over r}\bigg)^{-1} +{9\over64}\bigg[1+\bigg(1-{2M\over r}\bigg)\bigg]+\cdots\bigg\}\,, \end{equation} where the ellipsis denotes higher-order terms that have been dropped. 
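The integral identity (\ref{integral}), which enters both the de Sitter evaluation above and the Schwarzschild bounds here, is easily verified numerically. A minimal sketch, assuming SciPy is available (the test values of $\alpha$ and $t_0$ are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

def lhs(alpha, t0):
    f = lambda u: u**(alpha - 1)*kv(0, t0*u)**2
    a, _ = quad(f, 0, 1, limit=200)       # mild log^2 singularity at u = 0
    b, _ = quad(f, 1, np.inf, limit=200)
    return a + b

def rhs(alpha, t0):
    return 2**(alpha - 3)/(t0**alpha*gamma(alpha))*gamma(alpha/2)**4

for alpha in (2.0, 3.0, 5.0):
    for t0 in (0.5, 1.0, 2.0):
        assert abs(lhs(alpha, t0) - rhs(alpha, t0)) < 1e-6*rhs(alpha, t0)
\end{verbatim}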
This is to be compared with the result derived in \cite{PFb,pfenning}: \begin{equation} \label{Sboundb} \rho\geq-{3\over32\pi^2\tau_0^4}\bigg\{ {1\over6}\bigg({2M\tau_0\over r^2}\bigg)^2\bigg(1-{2M\over r}\bigg)^{-1} +1+\bigg(1-{2M\over r}\bigg)+\cdots\bigg\}\,. \end{equation} The bound in (\ref{Sbounda}) is between $9\over64$ and $1\over4$ that in (\ref{Sboundb}), at least in the present approximation. Note that in either case, the bound becomes arbitrarily negative near the horizon of the black hole. On the other hand, the quantum inequality for an observer near infinity becomes \begin{equation} \rho\geq-{3\over32\pi^2\tau_0^4}{9\over64}\bigg\{1-{2M\over r} +\bigg({2M\over r}\bigg)^2\bigg[1+{16\over9}{1\over3}\bigg( {\tau_0\over r}\bigg)^2\bigg]-\bigg({2M\over r}\bigg)^3 \bigg[1+{16\over9}\bigg({\tau_0\over r}\bigg)^2\bigg]+\cdots\bigg\}\,, \end{equation} while the corresponding inequality obtained in \cite{PFb,pfenning} is \begin{equation} \rho\geq-{3\over32\pi^2\tau_0^4}\bigg\{1-{2M\over r}+\bigg({2M\over r}\bigg)^2 \bigg[1+{1\over3}\bigg({\tau_0\over r}\bigg)^2\bigg]-\bigg({2M\over r}\bigg)^3 \bigg[1+\bigg({\tau_0\over r}\bigg)^2\bigg]+\cdots\bigg\}\,. \end{equation} Again, the former bound is between $9\over64$ and $1\over4$ the latter. It gives the correct Minkowski space result in the limit $r\rightarrow\infty$ or $M\rightarrow0$. \sect{Concluding remarks} In summary, we have derived new quantum inequalities (\ref{QIb}) or (\ref{QIc}) on the normal-ordered averaged energy density in static space-times, that are valid for quite general sampling functions. They were then applied to several standard examples using the Lorentzian sampling function. (Of course, other space-times could readily be considered, such as Rindler space, flat space with perfectly reflecting mirrors, and other black holes \cite{PFb,pfenning}.) The resulting bounds are stronger than previous results, and would lead to even tighter constraints on the various exotic space-times mentioned at the beginning of the paper. Before we conclude, a few comments are in order. An important question is whether our quantum inequalities are optimal. This could, for example, be proved by finding a quantum state that saturates the bound, which would necessarily belong to the kernel of all the operators ${\cal O}^\pm(\omega)$ in (\ref{O}). However, it is known that our bound, when applied to a massless scalar field in two-dimensional Minkowski space, is $1{1\over2}$ times weaker than the optimal value obtained by Flanagan \cite{flanagan}. Unfortunately, his derivation relies on some special features of two-dimensional massless field theory, and does not appear to generalise to other more realistic cases. An interesting application of our quantum inequality would be to the static Morris--Thorne-type wormholes \cite{MT}. Ford and Roman have applied the flat-space version of their quantum inequalities to this case, and have found that they constrain the size of such wormholes \cite{FR-worm}. They justified this procedure by making the sampling timescale much shorter than the minimum characteristic curvature scale, so that space-time appears locally flat. However, it would be desirable to verify this calculation using the full curved space results; this should not be too difficult once the form of the scalar field mode functions in the wormhole space-time have been determined. \section*{Acknowledgment} CJF thanks Simon Eveson for useful discussions concerning the use of the trapezoidal rule in Secs.~\ref{IIIDCU} and~\ref{IVDRWU}.
\section{INTRODUCTION} Our sun has an eleven-year magnetic activity cycle, which is thought to be sustained by the dynamo motion of internal ionized plasma, i.e., a transformation of kinetic energy to magnetic energy \citep{1955ApJ...122..293P}. Our understanding of the solar dynamo has significantly improved during the past fifty years, and some kinematic studies can now reproduce solar magnetic features such as equatorward migration of sunspots and poleward migration of the magnetic field \citep{1995A&A...303L..29C,1999ApJ...518..508D,2001A&A...374..301K,2005LRSP....2....2C,2010ApJ...709.1009H,2010ApJ...714L.308H}. The most important mechanism of the solar dynamo is the $\Omega$ effect, the bending of pre-existing poloidal magnetic fields by differential rotation and the generation of toroidal magnetic fields. Thus, the distribution of the differential rotation in the convection zone is a significant factor for the solar dynamo. Using helioseismology, it has recently been shown that the solar internal differential rotation is in a non-Taylor-Proudman state \citep[see review by][]{2003ARA&A..41..599T}, meaning the iso-rotation surfaces are {\it not} parallel to the axis. \par Based on solar observations, it is known that Ca H-K fluxes can be a signature of stellar chromospheric activity, and such chromospheric signatures are in correlation with magnetic activity. \cite{1968ApJ...153..221W,1978ApJ...226..379W} and \cite{1995ApJ...438..269B} discuss a class of stars that shows a periodic variation in Ca H-K fluxes, which suggests that they have a magnetic cycle similar to our sun. It is natural to conjecture that such magnetic activity is maintained by dynamo action. Various studies have been conducted to investigate the relationship between stellar angular velocity $\Omega_0$ and its latitudinal difference $\Delta\Omega$ i.e., $\Delta\Omega\propto\Omega_0^n$, where the suggested range of $n$ is $0 <n < 1$ \citep{1996ApJ...466..384D,2003A&A...398..647R,2005MNRAS.357L...1B}. This means that the angular velocity difference $\Delta\Omega$ increases and the relative difference $\Delta\Omega/\Omega_0$ decreases with increases in the stellar rotation rate $\Omega_0$.\par In this paper, we investigate differential rotation in rapidly rotating stars using a mean field framework. Our study is based on the work of \cite{2005ApJ...622.1320R}, in which he suggests the importance of the role of the subadiabatic layer below the convection zone in order to maintain a non-Taylor-Proudman state in the Sun. The aim of this paper is to use a mean field model to analyze firstly the dependence of the morphology of differential rotation on stellar angular velocity, and secondly the physical process which determines the observable angular velocity difference $\Delta \Omega$. According to our knowledge, this is the first work which systematically discusses the application of Rempel's (2005b) solar model to stars. \par Other research adopts another approach to the use of mean field models for the analysis of differential rotation in rapidly-rotating stars \citep{1995A&A...299..446K,2001A&A...366..668K}. In these studies, the non-Taylor-Proudman state is sustained by anisotropy of turbulent thermal conduction. This anisotropy is generated by the effects of stellar rotation on convective turbulence.\par Three-dimensional numerical studies on stellar differential rotation also exist \citep{2008ApJ...689.1354B,2009AnRFM..41..317M}. 
In these studies, they resolve stellar thermal driven convection and can calculate a self-consistent turbulent angular momentum transport and anisotropy of turbulent thermal conductivity. The subadiabatic layer below the convection zone, however, is not included. The effects of anisotropy of turbulent thermal conductivity and the subadiabatic layer are discussed in this paper. \section{MODEL} Using numerical settings similar to those of Rempel's (2005b), we solve the axisymmetric hydrodynamic equations in spherical geometry $(r,\theta)$, where $r$ is the radius, and $\theta$ is the colatitude. The basic assumptions are as follows. \begin{enumerate} \item A mean field approximation is adopted. All processes on the convective scale are parameterized. Thus, the coefficients for turbulent viscosity, turbulent heat conductivity, and turbulent angular momentum transport are explicitly given in the equations. \item The perturbations of the density and pressure associated with differential rotation are small, i.e., $\rho_1\ll \rho_0$ and $p_1\ll p_0$. Here $\rho_0$ and $p_0$ denote the reference state density and pressure respectively, whereas $\rho_1$ and $p_1$ are the perturbations. We neglect the second-order terms of these quantities. Note that the perturbation of angular velocity ($\Omega_1$) and meridional flow ($v_r$, $v_\theta$) are not small. \item Since the reference state is assumed to be in an energy flux balance, the entropy equation includes only perturbations. \end{enumerate} \subsection{Equations} We do not use the anelastic approximation here. The equations in an inertial frame can be expressed as \begin{eqnarray} && \frac{\partial \rho_1}{\partial t}= -\frac{1}{r^2}\frac{\partial }{\partial r}(r^2v_r\rho_0) -\frac{1}{r\sin \theta}\frac{\partial }{\partial \theta}(\sin\theta v_\theta \rho_0)\label{continuity},\\ && \frac{\partial v_r}{\partial t}= -v_r\frac{\partial v_r}{\partial r} -\frac{v_\theta}{r}\frac{\partial v_r}{\partial \theta} +\frac{v_\theta^2}{r} -\frac{1}{\rho_0} \left[ \rho_1 g+\frac{\partial p_1}{\partial r} \right] +(2\Omega_0\Omega_1+\Omega_1^2)r\sin^2\theta+\frac{F_r}{\rho_0}\label{vx},\\ && \frac{\partial v_\theta}{\partial t}= -v_r\frac{\partial v_\theta}{\partial r} -\frac{v_\theta}{r}\frac{\partial v_\theta}{\partial \theta} -\frac{v_rv_\theta}{r} -\frac{1}{\rho_0}\frac{1}{r}\frac{\partial p_1}{\partial \theta} +(2\Omega_0\Omega_1+\Omega_1^2)r\sin\theta\cos\theta+\frac{F_\theta}{\rho_0}\label{vy},\\ && \frac{\partial \Omega_1}{\partial t}= -\frac{v_r}{r^2}\frac{\partial}{\partial r}[r^2(\Omega_0+\Omega_1)] -\frac{v_\theta}{r\sin^2\theta}\frac{\partial }{\partial \theta} [\sin^2\theta(\Omega_0+\Omega_1)] +\frac{F_\phi}{\rho_0 r\sin\theta}\label{om1},\\ && \frac{\partial s_1}{\partial t}= -v_r\frac{\partial s_1}{\partial r} -\frac{v_\theta}{r}\frac{\partial s_1}{\partial \theta} +v_r\frac{\gamma\delta}{H_p} +\frac{\gamma-1}{p_0}Q +\frac{1}{\rho_0 T_0}\mathrm{div}(\kappa_\mathrm{t}\rho_0T_0\mathrm{grad} s_1)\label{se1}, \end{eqnarray} where $\Omega_0$ is a constant value that represents the angular velocity of the rigidly rotating radiative zone. We set it as a parameter in Table \ref{param}. $\gamma$ is the ratio of specific heats, with the value for an ideal gas being $\gamma=5/3$. $\kappa_\mathrm{t}$ is the coefficient of turbulent thermal conductivity. $\delta=\nabla-\nabla_\mathrm{ad}$ represents superadiabaticity, where $\nabla=d(\ln T)/d(\ln p)$ (see \S \ref{back}). $g$ denotes gravitational acceleration. 
Following from this, the perturbation of pressure $p_1$ and pressure scale height $H_p$ are expressed as \begin{eqnarray} && p_1=p_0 \left( \gamma\frac{\rho_1}{\rho_0}+s_1 \right), \\ && H_p=\frac{p_0}{\rho_0 g}. \end{eqnarray} $s_1$ is dimensionless entropy normalized by the specific heat capacity at constant volume $c_\mathrm{v}$. Turbulent viscous force ${\bf F}$ follows from \begin{eqnarray} F_r=\frac{1}{r^2}\frac{\partial }{\partial r}(r^2R_{rr}) +\frac{1}{r\sin\theta}\frac{\partial }{\partial \theta}(\sin\theta R_{\theta r}) -\frac{R_{\theta\theta}+R_{\phi\phi}}{r}, \end{eqnarray} \begin{eqnarray} F_\theta=\frac{1}{r^2}\frac{\partial }{\partial r}(r^2R_{r\theta}) +\frac{1}{r\sin\theta}\frac{\partial }{\partial \theta}(\sin\theta R_{\theta \theta}) +\frac{R_{r\theta}-R_{\phi\phi}\cot\theta}{r}, \end{eqnarray} \begin{eqnarray} F_\phi=\frac{1}{r^2}\frac{\partial }{\partial r}(r^2R_{r\phi}) +\frac{1}{r\sin\theta}\frac{\partial }{\partial \theta}(\sin\theta R_{\theta \phi}) +\frac{R_{r\phi}+R_{\theta\phi}\cot\theta}{r}, \end{eqnarray} with the Reynolds stress tensor \begin{eqnarray} R_{ik}=\rho_0 \left[ \nu_\mathrm{tv} \left( E_{ik}-\frac{2}{3}\delta_{ik}\mathrm{div}{\bf v} \right) +\nu_\mathrm{tl}\Lambda_{ik} \right].\label{reynolds} \end{eqnarray} Here $\nu_\mathrm{tv}$ is the coefficient of turbulent viscosity and $\nu_\mathrm{tl}$ is the coefficient of the $\Lambda$ effect \citep{1995A&A...299..446K}, a non-diffusive angular momentum transport caused by turbulence. $\nu_\mathrm{tv}$ and $\nu_\mathrm{tl}$ are expected to have the same value, since both effects are caused by turbulence, i.e., thermal driven convection. We discuss this in more detail in \S \ref{diffusivity}. $E_{ik}$ denotes the deformation tensor, which is given in spherical coordinates by \begin{eqnarray} &&E_{rr}=2\frac{\partial v_r}{\partial r},\\ &&E_{\theta\theta}=2\frac{1}{r}\frac{\partial v_\theta}{\partial \theta}+2\frac{v_r}{r},\\ &&E_{\phi\phi}=\frac{2}{r}(v_r+v_\theta\cot\theta), \\ &&E_{r\theta}=E_{\theta r}=r\frac{\partial }{\partial r} \left(\frac{v_\theta}{r}\right)+\frac{1}{r}\frac{\partial v_r}{\partial \theta},\\ &&E_{r\phi}=E_{\phi r}=r\sin\theta\frac{\partial \Omega_1}{\partial r},\\ &&E_{\theta\phi}=E_{\phi\theta}=\sin\theta\frac{\partial \Omega_1}{\partial \theta}. \end{eqnarray} An expression for the $\Lambda$ effect ($\Lambda_{ik}$) is given later. The amount of energy that is converted by the Reynolds stress from kinematic energy to internal energy is given by \begin{eqnarray} Q=\sum_{i,k}\frac{1}{2}E_{ik}R_{ik}. \end{eqnarray} \subsection{Background Stratification}\label{back} We use an adiabatic hydrostatic stratification for the spherically symmetric reference state of $\rho_0$, $p_0$ and $T_0$. Gravitational acceleration is assumed to have $\sim r^{-2}$ dependence, since the radiative zone ($r<0.65R_\odot$) has most of the solar mass. 
This is expressed as, \begin{eqnarray} && \rho_0(r)=\rho_\mathrm{bc} \left[ 1+\frac{\gamma-1}{\gamma}\frac{r_\mathrm{bc}}{H_\mathrm{bc}} \left(\frac{r_\mathrm{bc}}{r}-1\right) \right]^{1/(\gamma-1)},\\ && p_0(r)=p_\mathrm{bc} \left[ 1+\frac{\gamma-1}{\gamma}\frac{r_\mathrm{bc}}{H_\mathrm{bc}} \left(\frac{r_\mathrm{bc}}{r}-1\right) \right]^{\gamma/(\gamma-1)},\\ && T_0(r)=T_\mathrm{bc} \left[ 1+\frac{\gamma-1}{\gamma}\frac{r_\mathrm{bc}}{H_\mathrm{bc}} \left(\frac{r_\mathrm{bc}}{r}-1\right) \right],\\ && g(r)=g_\mathrm{bc} \left( \frac{r}{r_\mathrm{bc}} \right)^{-2}, \end{eqnarray} where $\rho_\mathrm{bc}$, $p_\mathrm{bc}$, $T_\mathrm{bc}$, $H_\mathrm{bc}=p_\mathrm{bc}/(\rho_\mathrm{bc}g_\mathrm{bc})$ and $g_\mathrm{bc}$ denote the values at the base of the convection zone $r=r_\mathrm{bc}$ of density, pressure, temperature, pressure scale height and gravitational acceleration, respectively. In this study we use $r_\mathrm{bc}=0.71R_\odot$, with $R_\odot$ representing the solar radius ($R_\odot=7\times10^{10}\ \mathrm{cm}$). We adopt solar values $\rho_\mathrm{bc}=0.2\ \mathrm{g\ cm^{-3}}$, $p_\mathrm{bc}=6\times10^{13}\ \mathrm{dyn\ cm^{-2}}$, $T_\mathrm{bc}=mp_\mathrm{bc} /(k_\mathrm{B}\rho_\mathrm{bc})\sim1.82\times10^6\ \mathrm{K}$ and $g_\mathrm{bc}=5.2\times10^4\ \mathrm{cm\ s^{-2}}$, where $k_\mathrm{B}$ is the Boltzmann constant, and $m$ is the mean particle mass. Fig. \ref{background} shows the profiles of background density, pressure and temperature, and gravitational acceleration.\par Although the real sun's stratification is not adiabatic in the convection zone, our reference state is valid, since the absolute value of superadiabaticity is small. In order to include the deviation from adiabatic stratification, we assume superadiabaticity $\delta$ has the following profile: \begin{eqnarray} \delta=\delta_\mathrm{conv}+\frac{1}{2}(\delta_\mathrm{os}-\delta_\mathrm{conv}) \left[ 1-\tanh \left( \frac{r-r_\mathrm{tran}}{d_\mathrm{tran}} \right) \right]. \end{eqnarray} Here $\delta_\mathrm{os}$ and $\delta_\mathrm{conv}$ denote the values of superadiabaticity in the overshoot region and in the convection zone, respectively. $r_\mathrm{tran}$ and $d_\mathrm{tran}$ denote the position and the steepness of the transition toward the subadiabatically stratified overshoot region, respectively. Superadiabaticity in the convection zone is defined as \begin{eqnarray} \delta_\mathrm{conv}=\delta_\mathrm{c}\frac{r-r_\mathrm{sub}}{r_\mathrm{max}-r_\mathrm{sub}}, \end{eqnarray} where $r_\mathrm{max}$ denotes the location of the upper boundary. We specify $\delta_\mathrm{os}=-1.5\times10^{-5}$, $r_\mathrm{tran}=0.725R_\odot$, $r_\mathrm{sub}=0.8R_\odot$ and $d_\mathrm{tran}=d_\mathrm{sub}=0.0125R_\odot$ in our simulations. $\delta_\mathrm{c}$ is taken as a free parameter. The entropy gradient can be expressed as \begin{eqnarray} \frac{ds_0}{dr}=-\frac{\gamma\delta}{H_p}. \end{eqnarray} The third term on the right-hand side of eq. (\ref{se1}), $v_r\gamma\delta/H_p$, includes the effect of deviations from adiabatic stratification. The term indicates that an upflow (downflow) can make negative (positive) entropy perturbations in the subadiabatically stratified layers $(\delta<0)$. \subsection{Diffusivity Profile}\label{diffusivity} We assume the coefficients of turbulent viscosity and thermal conductivity to be constant within the convection zone, and these smoothly connect with the values of the overshoot region.
We assume that the diffusivities only depend on the radial coordinate: \begin{eqnarray} && \nu_\mathrm{tv}=\nu_\mathrm{os}+\frac{\nu_\mathrm{0v}}{2} \left[ 1+\tanh \left( \frac{r-r_\mathrm{tran}+\Delta}{d_{\kappa\nu}} \right) \right]f_c(r),\\ && \nu_\mathrm{tl}=\frac{\nu_\mathrm{0l}}{2} \left[ 1+\tanh \left( \frac{r-r_\mathrm{tran}+\Delta}{d_{\kappa\nu}} \right) \right]f_c(r),\\ && \kappa_\mathrm{t}=\kappa_\mathrm{os}+\frac{\kappa_0}{2} \left[ 1+\tanh \left( \frac{r-r_\mathrm{tran}+\Delta}{d_{\kappa\nu}} \right) \right]f_c(r), \end{eqnarray} with \begin{eqnarray} && f_c(r)=\frac{1}{2} \left[ 1+\tanh \left( \frac{r-r_\mathrm{bc}}{d_\mathrm{bc}} \right) \right],\\ && \Delta=d_{\kappa\nu}\tanh^{-1}(2\alpha_{\kappa\nu}-1), \end{eqnarray} where $\nu_\mathrm{0v}$, $\nu_\mathrm{0l}$ and $\kappa_0$ are the values of the turbulent diffusivities within the convection zone, and $\nu_\mathrm{os}$ and $\kappa_\mathrm{os}$ are the values in the overshoot region. We specify $\nu_\mathrm{0l}=\kappa_\mathrm{0l}=3\times10^{12}\ \mathrm{cm^2\ s^{-1}}$, $\nu_\mathrm{os}=6\times10^{10}\ \mathrm{cm^2\ s^{-1}}$ and $\kappa_\mathrm{os}=6\times10^{9}\ \mathrm{cm^2\ s^{-1}}$, and we treat $\nu_\mathrm{0v}$ as a parameter. $\alpha_{\kappa\nu}$ specifies the values of the turbulent diffusivities at $r=r_\mathrm{tran}$, i.e., $\nu_\mathrm{tv}=\nu_\mathrm{os}+\alpha_{\kappa\nu}\nu_\mathrm{0v}$, $\nu_\mathrm{tl}=\alpha_{\kappa\nu}\nu_\mathrm{0l}$ and $\kappa_\mathrm{t}=\kappa_\mathrm{os}+\alpha_{\kappa\nu}\kappa_\mathrm{0}$ at $r=r_\mathrm{tran}$. $d_\mathrm{bc}$ and $d_{\kappa\nu}$ are the widths of transition. We specify $\alpha_{\kappa\nu}=0.1$, $d_\mathrm{bc}=0.0125R_\odot$ and $d_{\kappa\nu}=0.025R_\odot$. As already mentioned, the coefficients for turbulent viscosity and the $\Lambda$ effect are different in our model from those of Rempel's (2005b). There are two reasons for this. One is that we intend to investigate the influence of both effects on stellar differential rotation separately (see \S \ref{delta_omega}). The other reason is that the formation of a tachocline in a reasonable amount of time requires a finite value (though small) for the coefficient of turbulent viscosity even in the radiative zone, in which there is likely to be weak turbulence \citep{2005ApJ...622.1320R}. Fig. \ref{diffusive} shows the profiles of $\nu_\mathrm{tv}$, $\nu_\mathrm{tl}$ and $\kappa_\mathrm{t}$. \subsection{The $\Lambda$ Effect}\label{s:lambda} In this study we adopt the non-diffusive part of the Reynolds stress, called the $\Lambda$ effect. The $\Lambda$ effect transports angular momentum and generates differential rotation. The $\Lambda$ effect tensors are expressed as \begin{eqnarray} && \Lambda_{r\phi}=\Lambda_{\phi r}=+L(r,\theta)\cos(\theta+\lambda), \\ && \Lambda_{\theta\phi}=\Lambda_{\phi \theta}=-L(r,\theta)\sin(\theta+\lambda), \end{eqnarray} where $L(r,\theta)$ is the amplitude of the $\Lambda$ effect and $\lambda$ is the inclination of the flux vector with respect to the rotational axis. We use for the amplitude of the $\Lambda$ effect the expressions \begin{eqnarray} && f(r,\theta)=\sin^l\theta\cos\theta\tanh \left(\frac{r_\mathrm{max}-r}{d} \right), \\ && L(r,\theta)=\Lambda_0\Omega_0\frac{f(r,\theta)}{\mathrm{max}|f(r,\theta)|} \label{lambda},\label{amp_lambda} \end{eqnarray} where $d=0.025R_\odot$. $\lambda$ and $\Lambda_0$ are free-parameters. The value of $l$ needs to be equal to or larger than 2 to ensure regularity near the pole, so we set $l=2$. 
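For concreteness, the $\Lambda$-effect profiles are straightforward to tabulate on a grid. The following minimal Python sketch evaluates $L(r,\theta)$ and the components $\Lambda_{r\phi}$ and $\Lambda_{\theta\phi}$; the numerical values chosen for $\Omega_0$, $\Lambda_0$ and $\lambda$ are hypothetical and serve only as an illustration.
\begin{verbatim}
import numpy as np

Rsun   = 7.0e10                 # cm
r_max  = 0.93*Rsun
d      = 0.025*Rsun
Omega0 = 2.6e-6                 # s^-1, roughly the solar angular velocity
Lam0   = 0.8                    # hypothetical amplitude of the Lambda effect
lam    = np.deg2rad(15.0)       # hypothetical inclination angle

r     = np.linspace(0.65*Rsun, 0.93*Rsun, 200)
theta = np.linspace(1.0e-3, np.pi/2, 400)
R, TH = np.meshgrid(r, theta, indexing='ij')

f = np.sin(TH)**2*np.cos(TH)*np.tanh((r_max - R)/d)   # l = 2
L = Lam0*Omega0*f/np.max(np.abs(f))                   # amplitude L(r, theta)

Lambda_rphi     =  L*np.cos(TH + lam)
Lambda_thetaphi = -L*np.sin(TH + lam)
\end{verbatim}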
The $\Lambda$ effect does not depend on $v_r$, $v_\theta$ or $\Omega_1$, meaning it is a stationary effect. We emphasize that the $\Lambda$ effect depends on stellar angular velocity $\Omega_0$, since the $\Lambda$ effect is generated by turbulence and Coriolis force. The more rapidly the star rotates, the more angular momentum the $\Lambda$ effect can transport. The dependence of $\Lambda_0$ and $\lambda$ on stellar angular velocity is discussed in \S \ref{variation}. \subsection{Numerical Settings}\label{numerical} Using the modified Lax-Wendroff scheme with TVD artificial viscosity \citep{davis1984tvd}, we solve Equations (\ref{continuity})-(\ref{se1}) numerically for the northern hemisphere of the meridional plane in $0.65R_\odot < r <0.93R_\odot$ and $0 < \theta < \pi/2$. We use a uniform resolution of $200$ points in the radial direction and $400$ points in the latitudinal direction in all of our simulations. Each simulation run is conducted until it reaches a stationary state. All the variables $\rho_1$, $v_r$, $v_\theta$, $\Omega_1$ and $s_1$ are equal to zero in the initial condition. At the top boundary ($r=0.93R_\odot$) we adopt stress-free boundary conditions for $v_r$, $v_\theta$ and $\Omega_1$ and set the derivative of $s_1$ to zero: \begin{eqnarray} && \frac{\partial v_r}{\partial r}=0,\\ && \frac{\partial}{\partial r} \left( \frac{v_\theta}{r} \right)=0,\\ &&\frac{\partial \Omega_1}{\partial r}=0,\\ &&\frac{\partial s_1}{\partial r}=0. \end{eqnarray} The boundary conditions for $v_r$, $v_\theta$ and $s_1$ at the lower boundary ($r=0.65R_\odot$) are the same as those at the top boundary. Differential rotation connects with the rigidly rotating core at the lower boundary, so we adopt $\Omega_1=0$ there. At both radial boundaries, we set $\rho_1$ to make the right side of eq. (\ref{vx}) equal zero. At the pole and the equator ($\theta=0$ and $\pi/2$) we use the symmetric boundary condition: \begin{eqnarray} && \frac{\partial \rho_1}{\partial \theta}=0, \\ && \frac{\partial \Omega_1}{\partial \theta}=0, \\ && \frac{\partial v_r}{\partial \theta}=0, \\ && v_\theta=0,\\ && \frac{\partial s_1}{\partial \theta}=0. \end{eqnarray}\par Due to the low Mach number of the expected flows, a direct compressible simulation is problematic, so adopting the same technique as \cite{2005ApJ...622.1320R}, we reduce the speed of sound by multiplying the right side of eq. (\ref{continuity}) by $1/\zeta^2$. The equation of continuity is therefore replaced with \begin{eqnarray} \frac{\partial \rho_1}{\partial t}+ \frac{1}{\zeta^2}\mathrm{div}(\rho_0{\bf v})=0. \end{eqnarray} The speed of sound then becomes $\zeta$ times smaller than the original speed. We use $\zeta=200$ in all our calculations. This technique can be used safely in our present study since we only discuss stationary states, so the factor $\zeta$ becomes unimportant. The validity of this technique is carefully discussed by \cite{2005ApJ...622.1320R}. We test our code by reproducing the results presented by \cite{2005ApJ...622.1320R} and check the numerical convergence by runs with different grid spacings. After checking and cleaning up at every time step, conservation of total mass, total angular momentum and total energy are maintained through the simulation runs. \section{Stellar Differential Rotation and the Taylor-Proudman Theorem}\label{differential} In this section, based on the work of \cite{2005ApJ...622.1320R}, we explain how the subadiabatically stratified region can generate solar-like differential rotation. 
The $\phi$ component of the vorticity equation can be expressed as \begin{eqnarray} \frac{\partial \omega_\phi}{\partial t}=[...] +r\sin\theta\frac{\partial \Omega^2}{\partial z} -\frac{g}{\gamma r}\frac{\partial s_1}{\partial \theta},\label{therm01} \end{eqnarray} where $\Omega=\Omega_0+\Omega_1$, and the $z$ axis represents the rotational axis. The inertial term and the diffusion term are neglected. If the last term of eq. (\ref{therm01}) is zero, meaning there is no variation in entropy in the latitudinal direction, then $\partial \Omega^2/\partial z =0$ in a stationary state, which is the Taylor-Proudman state. Solar-like differential rotation is generated in four stages. \begin{enumerate} \item In the northern hemisphere, the $\Lambda$ effect transports angular momentum in the negative $z$ direction and generates a negative $\partial \Omega^2/\partial z$. \item The negative $\partial \Omega^2/\partial z$ generates a negative $\omega_\phi$ due to Coriolis force. This counter-clockwise meridional flow corresponds to a negative $v_r$ (downflow) at high latitudes and a positive $v_r$ (upflow) at low latitudes. \item As we mentioned in Section \ref{back}, downflow (upflow) generates positive (negative) entropy perturbations in the subadiabatically stratified layer beneath the convection zone ($\delta<0$). Meridional flow can generate positive entropy perturbations at high latitudes and negative entropy perturbations at low latitudes. Therefore, $\partial s_1/\partial \theta$ becomes negative in the overshoot region. \item The negative $\partial s_1/\partial \theta$ also keeps $\partial \Omega^2/\partial z$ negative in a stationary state. \end{enumerate} The profile of angular velocity in the convection zone is determined by a balance of angular momentum transport from meridional flow and a reduction in meridional flow from buoyancy force at the subadiabatic layer. \section{RESULTS AND DISCUSSION} We run simulations for seventeen cases, with Table \ref{param} showing the parameters for each case. \subsection{Stellar Differential Rotation}\label{taylor} In this section, we discuss the cases with angular velocities up to 16 times the solar value (represented by $\Omega_\odot$), placing an emphasis on the morphology of stellar differential rotation. Fig. \ref{rapid} shows the results of our calculations which correspond to cases 1-5 in Table \ref{param}. It is found that the larger stellar angular velocity is, the more likely it is for differential rotation to be in the Taylor-Proudman state, in which the contour lines of the angular velocity are parallel to the rotational axis. To evaluate these results quantitatively, we define a parameter which denotes the morphology of differential rotation. We call it the Non-Taylor-Proudman parameter (hereafter the NTP parameter), which is expressed as \begin{eqnarray} P_\mathrm{ntp}=\frac{1}{R_\odot^2\Omega_0^2}\int\frac{\partial \Omega_1^2}{\partial z}dV =\frac{1}{R_\odot^2\Omega_0^2}\int \left( \cos\theta\frac{\partial }{\partial r} -\frac{\sin\theta}{r}\frac{\partial }{\partial \theta} \right) \Omega_1^2dV, \end{eqnarray} where $\Omega_0$ is the angular velocity of the radiative zone. When the NTP parameter is zero, differential rotation is in the Taylor-Proudman state. Conversely, differential rotation is far from the Taylor-Proudman state with a large absolute value of the NTP parameter. The value of the NTP parameter with various stellar angular velocities is shown in Fig. \ref{npp}. 
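Given a discretized $\Omega_1(r,\theta)$ from such a calculation, $P_\mathrm{ntp}$ can be evaluated directly from its definition. A minimal Python sketch follows; the axisymmetric volume element and the restriction to the simulated northern hemisphere are our assumptions, and the test profile is hypothetical (a purely cylindrical $\Omega_1$ should give $P_\mathrm{ntp}\approx0$).
\begin{verbatim}
import numpy as np

def ntp_parameter(Omega1, r, theta, Omega0, Rsun=7.0e10):
    # P_ntp = (1/(Rsun^2 Omega0^2)) * Int (cos(th) d/dr - sin(th)/r d/dth) Omega1^2 dV
    W = Omega1**2
    dW_dr, dW_dth = np.gradient(W, r, theta)
    R, TH = np.meshgrid(r, theta, indexing='ij')
    integrand = np.cos(TH)*dW_dr - np.sin(TH)/R*dW_dth
    dV = 2*np.pi*R**2*np.sin(TH)          # axisymmetric volume element
    dr, dth = r[1] - r[0], theta[1] - theta[0]
    return np.sum(integrand*dV)*dr*dth/(Rsun**2*Omega0**2)

Rsun   = 7.0e10
Omega0 = 2.6e-6
r      = np.linspace(0.65*Rsun, 0.93*Rsun, 200)
theta  = np.linspace(0.01, np.pi/2 - 0.01, 400)
R, TH  = np.meshgrid(r, theta, indexing='ij')
Omega1 = 0.1*Omega0*(R*np.sin(TH)/Rsun)**2    # depends only on r*sin(theta)
print(ntp_parameter(Omega1, r, theta, Omega0))  # close to zero
\end{verbatim}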
The NTP parameter monotonically decreases with increases in stellar angular velocity. These results indicate that with large stellar angular velocity values, differential rotation approaches the Taylor-Proudman state. These results are counter-intuitive, however: one would not expect differential rotation to approach the Taylor-Proudman state with increasing stellar angular velocity, because the $\Lambda$ effect, which drives the deviation from the Taylor-Proudman state, is proportional to the stellar angular velocity $\Omega_0$. These are the most significant findings of this paper, so hereafter in this section we discuss these unexpected results. \par We next discuss the temperature difference between the equator and the pole at the base of the convection zone ($r=0.71R_\odot$). Since temperature is given as a function of entropy by \begin{eqnarray} T_1=\frac{T_0}{\gamma} \left[s_1+(\gamma-1)\frac{p_1}{p_0} \right], \end{eqnarray} and it is easier to measure than entropy, we use it here for discussing the thermal structure of the simulation results in the convection zone. Further, although it is mentioned in \S \ref{differential} that entropy gradient is crucial for breaking the Taylor-Proudman constraint, the temperature difference can be used as its proxy. Fig. \ref{entropy} shows the relationship between stellar angular velocity $\Omega_0$ and temperature difference $\Delta T$ at $r=0.71R_\odot$, where $\Delta T =\max (T_1(r_\mathrm{bc},\theta))-\min (T_1(r_\mathrm{bc},\theta))$. Although the temperature difference monotonically increases with larger stellar angular velocity values, the increase is not sufficient to make the rotational profile deviate significantly from the Taylor-Proudman state. This can be explained by using the thermal wind equation, which is a steady state solution of eq. (\ref{therm01}): \begin{eqnarray} 0= r\sin\theta\frac{\partial \Omega^2}{\partial z} -\frac{g}{\gamma r}\frac{\partial s_1}{\partial \theta}.\label{therm02} \end{eqnarray} The inertial term and the diffusion term are neglected here. This equation indicates that, for a given value of the NTP parameter, we need an entropy gradient proportional to $\Omega_0^2$. However, our simulation results show that $\Delta T \propto \Omega_0^{0.58}$, which means that as $\Omega_0$ increases, the thermal driving force becomes insufficient to push differential rotation away from the Taylor-Proudman state. In other words, the latitudinal entropy gradient in rapidly rotating stars is so small that differential rotation stays close to the Taylor-Proudman state. In our model, meridional flow generates latitudinal entropy gradient at the base of the convection zone. It is conjectured that the insufficient thermal drive is due to a slow meridional flow.\par We next investigate the dependence of meridional flow on stellar angular velocity. Fig. \ref{vari_some} shows the radial profile of latitudinal velocity $v_\theta$ at $\theta=45^\circ$, using the results of cases 1, 2 and 9. In case 2, stellar angular velocity is twice that of case 1 (the solar value). In case 9, stellar angular velocity is equal to the solar value, and the amplitude of the $\Lambda$ effect is two times the value in case 1. Fig. \ref{vari_some} shows that meridional flow does not depend on stellar angular velocity, while it correlates with the $\Lambda$ effect. Considering eq. (\ref{lambda}), the $\Lambda$ effect increases with larger values of stellar angular velocity, since the amplitude of the $\Lambda$ effect is proportional to $\Omega_0$.
The reason why differential rotation in rapidly rotating stars is close to the Taylor-Proudman state is that meridional flow does not become fast with large stellar angular velocity values. \par We interpret the result that the speed of meridional flow does not depend on stellar angular velocity in our model as follows. With large values of stellar angular velocity, more angular momentum is transported by the $\Lambda$-effect (note that the $\Lambda$-effect is proportional to $\Omega_0$ in equation (\ref{amp_lambda})), so meridional flow obtains more energy from differential rotation. The energy gain does not result in an increase in speed because of the associated enhancement of the Coriolis force, which bends the meridional flow in the longitudinal direction. Another explanation is possible in terms of angular momentum transport. The angular momentum fluxes from both meridional flow and the Reynolds stress ($\Lambda$ effect) must be balanced in a steady state. The former is proportional to $v_\mathrm{m}\Omega_0$ and the latter is proportional to $\Omega_0$, where $v_\mathrm{m}$ is the amplitude of meridional flow. Therefore, meridional flow does not depend on stellar angular velocity \citep{2005LRSP....2....1M}. Our results (Fig. \ref{vari_some}) indicate that with a larger stellar angular velocity (case 2), the above mechanism does not generate fast meridional flow. However, this does not occur when only the $\Lambda$ effect is large (case 9). \subsection{Angular Velocity Difference on the Surface}\label{delta_omega} In this subsection we discuss angular velocity difference $\Delta \Omega$ at the surface and the relationship between our results and previous observations. We conduct numerical simulations to investigate the physical process which determines $\Delta \Omega$ (cases 1, 6-11). We define angular velocity difference as $\Delta \Omega = \max(\Omega_1(r_\mathrm{max},\theta))-\min(\Omega_1(r_\mathrm{max},\theta))$. \par $\Delta \Omega$ is determined by two opposing effects, a smoothing effect from turbulent viscosity and a steepening effect from the $\Lambda$ effect. In a stationary state these two effects cancel each other out. Latitudinal flux for turbulent viscosity and the $\Lambda$ effect can be written as $\rho_0\nu_\mathrm{0v}\Delta \Omega/\Delta \theta$ and $\rho_0\nu_\mathrm{0l}\Lambda_0\Omega_0$, respectively. Because these two have approximately the same value, $\Delta\Omega$ can be estimated as \begin{eqnarray} \Delta \Omega \sim\frac{\nu_\mathrm{0l}}{\nu_\mathrm{0v}}\Lambda_0\Omega_0\Delta\theta, \label{deltaomega} \end{eqnarray} where $\Delta \theta$ denotes the angular extent of the differentially rotating region.\par In order to confirm eq. (\ref{deltaomega}), we conduct two sets of simulations, firstly varying the value of turbulent viscosity ($\nu_\mathrm{0v}$), and secondly the amplitude of the $\Lambda$ effect ($\Lambda_0$). Note that the setting for turbulent viscosity does not reflect a real situation, since the coefficients of turbulent viscosity and the $\Lambda$ effect should have a common value. Nonetheless, this is necessary for the purpose of our investigation. The simulation results are shown in Figures \ref{vis_vari} and \ref{lam_vari}. We obtain $\Delta \Omega \propto \nu_\mathrm{0v}^{-0.88}$ and $\Delta \Omega \propto \Lambda_0^{1.1}$, which are consistent with eq. (\ref{deltaomega}). \par Fig. \ref{angular_difference} shows the results of the dependence of $\Delta \Omega$ on $\Omega_0$ (Cases 1-5).
Asterisks denote the difference at the surface between the equator and the pole, squares show the difference between the equator and the colatitude $\theta=45^\circ$, and triangles are the difference between the equator and the colatitude $\theta=60^\circ$. The difference at low latitudes (squares and triangles) monotonically increases with stellar angular velocity. However, this is not the case for angular velocity difference between the equator and the pole (asterisk). As we discussed in \S \ref{taylor}, when stellar rotation velocity is large, the Taylor-Proudman state is achieved, meaning the gradient of angular velocity at the surface concentrates in lower latitudes. Due to this concentration, $\Delta\theta$ becomes smaller in Eq. (\ref{deltaomega}) with larger values of $\Omega_0$. Thus, $\Delta \Omega$ between the equator and the pole does not show an explicit dependence on $\Omega_0$. At low latitudes, $\Delta\theta$ is fixed and the angular velocity difference increases with stellar angular velocity. We obtain $\Delta\Omega\propto\Omega_0^{0.43}$ (between the equator and the colatitude $\theta=45^\circ$: squares) and $\Delta\Omega\propto\Omega_0^{0.55}$ (between the equator and the colatitude $\theta=60^\circ$: triangles). This indicates that $\Delta\Omega/\Omega_0$ decreases with stellar angular velocity. These results are consistent with previous stellar observations \citep{1996ApJ...466..384D,2003A&A...398..647R,2005MNRAS.357L...1B}. \subsection{Variation of $\Lambda$-effect and superadiabaticity}\label{variation} In this subsection, we discuss the dependence of meridional flow and differential rotation on free parameters. The parameter set is shown in Table \ref{param} (cases 12-17). First, we investigate the influence of the variation of the $\Lambda$ effect. The $\Lambda$ effect has two free parameters, i.e., amplitude $\Lambda_0$ and inclination angle $\lambda$ (see \S \ref{s:lambda}). The amplitude is thought to become smaller with a larger stellar angular velocity, due to the saturation of the correlations such as $\langle v'_rv'_\phi \rangle$ and $\langle v'_\theta v'_\phi \rangle$, where $v_r'$, $v_\theta'$ and $v_\phi'$ are the radial, latitudinal and longitudinal component turbulent velocities, respectively. Fig. \ref{vari_some} shows that meridional flow becomes slower with a smaller $\Lambda_0$, keeping the $\Omega_0$ value constant (Case 10). Combined with the result of \S \ref{taylor}, it is clear that meridional flow becomes slower with a larger angular velocity when the variation of $\Lambda_0$ is included. \cite{2008ApJ...689.1354B} reported this effect with their three-dimensional hydrodynamic calculation. When meridional flow is slow, the entropy gradient generated by the subadiabatic layer is small, and differential rotation approaches the Taylor-Proudman state.\par The inclination angle is thought to be small with large stellar angular velocity values, since the motion across the rotational axis is restricted \citep{1993A&A...276...96K}. In case 12, differential rotation with a small inclination angle ($\lambda=2.5^\circ$) is calculated. Other parameters are the same as case 1. The radial distribution of meridional flow is shown in Fig. \ref{vari_some}. Meridional flow becomes faster with a smaller inclination angle. Because of the efficient angular momentum transport in the $z$ direction when the inclination angle is small, the second term on the right hand side of Eq. (\ref{therm01}) is large. This generates a large $\omega_\phi$, i.e., fast meridional flow.
\par In summary, we found that rapid stellar rotation causes two opposing effects on the speed of meridional flow. The speed is reduced by the suppression of $\Lambda_0$, while it is enhanced by the angular momentum transport along the axial direction with a smaller $\lambda$. Although the results of the three-dimensional calculation suggest that meridional flow becomes slower with a larger stellar angular velocity, our model cannot draw a conclusion about the speed of meridional flow in rapidly rotating stars. \par Next, we investigate the influence of superadiabaticity in the convection zone. In cases 13-17, the superadiabaticity in the convection zone is $\delta_\mathrm{c} = 1\times 10^{-6}$. The differences of the NTP parameters with adiabatic and superadiabatic convection zones $(P_{\mathrm{ntp}(\delta_\mathrm{c}=0)}-P_{\mathrm{ntp}(\delta_\mathrm{c}=10^{-6})})/P_{\mathrm{ntp}(\delta_\mathrm{c}=0)}$ are shown in Fig. \ref{super}. The NTP parameter values with a superadiabatic convection zone are smaller than those with an adiabatic convection zone, since meridional flow in the superadiabatic convection zone makes the entropy gradient small. This result is suggested by \cite{2005ApJ...631.1286R}. Note that the difference between the values of the NTP parameters with an adiabatic and those with a superadiabatic convection zone decreases as the stellar angular velocity increases, since the generation of entropy gradient by the subadiabatic layer becomes ineffective with a larger stellar angular velocity. \section{SUMMARY} We have investigated differential rotation in rapidly rotating stars using a mean field model. This work is significant because it can be used as a base for further research on stellar activity cycles, which are most likely caused by the dynamo action of differential rotation in the stellar convection zone. \par First, we investigated the morphology of differential rotation in rapidly rotating stars. Although more angular momentum is transported by convection with larger stellar angular velocity, the Coriolis force is stronger than in the solar case, so meridional flow does not become fast. In our model, meridional flow generates latitudinal entropy gradient in the subadiabatically stratified overshoot region. Since the meridional flow is not fast, the entropy gradient is insufficient to move differential rotation far from the Taylor-Proudman state in rapidly rotating stars. As a result, the differential rotation of stars with large stellar angular velocity is close to the Taylor-Proudman state.\par The temperature difference between latitudes is probably controlled by two important factors, i.e., the subadiabatic layer below the convection zone and anisotropic heat transport caused by turbulence and rotation. We suggest that the former is important in slow rotators like the sun, and the latter in rapid rotators. The subadiabatic-layer effect is included in our model, while anisotropic heat transport is not. We found that the effect of the subadiabatic layer can generate a temperature difference $\Delta T=10\ \mathrm{K}$ in the solar case, which moderately increases with higher rotation speeds, and $\Delta T=30\ \mathrm{K}$ in the case $\Omega_0=8\Omega_\odot$. The three-dimensional simulations by \cite{2008ApJ...689.1354B} include a self-consistent calculation of anisotropy of turbulent thermal transport but not the subadiabatic layer at the bottom boundary.
In their calculation $\Delta T$ is most likely smaller than $10\ \mathrm{K}$ in the solar case, since they cannot reproduce the solar differential rotation only with anisotropy of thermal transport. Also, $\Delta T=100\ \mathrm{K}$ in the case $\Omega_0=5\Omega_\odot$, which is larger than in the case with the subadiabatic layer. We speculate that anisotropic heat transport becomes more significant in rapidly rotating stars. There is also a possibility that our calculated entropy gradient at the base of the convection zone can be used as a boundary condition for a self-consistent three-dimensional simulation of stellar convection \citep{2006ApJ...641..618M}. Note that differential rotation in rapidly rotating stars in \cite{2001A&A...366..668K} is not in the Taylor-Proudman state when anisotropy of turbulent thermal conductivity is included. A future study of the simultaneous effects of the subadiabatic layer attached beneath the convection zone and the anisotropy of the turbulent thermal conductivity would provide a better understanding of stellar differential rotation. \par Next, we investigated angular velocity difference at the surface. The $\Lambda$ effect causes a spatial difference in the rotation profile, while turbulent viscosity reduces the difference. Angular velocity difference $\Delta\Omega$ is estimated by eq. (\ref{deltaomega}), which is then used to investigate differential rotation in rapidly rotating stars. Since stellar rotation is close to the Taylor-Proudman state, and the radiative core is rotating rigidly, differential rotation is concentrated at low latitudes with large stellar angular velocity. This concentration leads to a small $\Delta\theta$ in eq. (\ref{deltaomega}). Therefore, our model is consistent with stellar observations only at low latitudes. \par Our conclusions are as follows: (1) Differential rotation approaches the Taylor-Proudman state when stellar rotation is faster than solar rotation. (2) Entropy gradient generated by the subadiabatic layer attached beneath the convection zone becomes relatively small with a large stellar angular velocity. (3) Turbulent viscosity and turbulent angular momentum transport determine the spatial difference of angular velocity $\Delta\Omega$. (4) The results of our mean field model can explain observations of stellar differential rotation.\par Our future work will focus on the stellar MHD dynamo. Several investigations have been conducted on the stellar dynamo using a kinematic dynamo framework \citep{2001ASPC..248..235D,2001ASPC..248..189C,2009A&A...497..829M,2010A&A...509A..32J}. Since, under such a framework, only the linear magnetic induction equation is solved for a given velocity field, such an analysis does not give sufficient information on the strength of the dynamo-generated stellar magnetic field. To obtain the full amplitude of the stellar magnetic field, the feedback to the velocity field is required, i.e., an MHD framework. Adopting a similar approach to \cite{2006ApJ...647..662R}, we can use the results of this paper to investigate the strength of the stellar magnetic field. Recent observations of the strength of the magnetic field generated by stellar differential rotation have been conducted using spectroscopy \citep[e.g.][]{2008MNRAS.388...80P}. A comparison of these observations and numerical calculations of the stellar dynamo could give new insight into the stellar magnetic field.
Finally, our stellar MHD dynamo study would also contribute to the understanding of stellar magnetic activity cycle periods, which have been the subject of recent investigations \citep{1984ApJ...287..769N,1999ApJ...524..295S}. \acknowledgements We are most grateful to Dr. M. Rempel for helpful advice. Numerical computations were carried out at the General-Purpose PC farm in the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan. The page charge for this paper is supported by CfCA. We have greatly benefited from the proofreading/editing assistance of the GCOE program.
\section{Introduction} \subsection{Igusa's zeta function and the Monodromy Conjecture} For a prime $p$, we denote by \Qp\ the field of $p$-adic numbers and by \Zp\ its subring of $p$-adic integers. We denote by $|\cdot|$ the $p$-adic norm on \Qp. Let $n\in\Zplusnul$ and denote by $|dx|=|dx_1\wedge\cdots\wedge dx_n|$ the Haar measure on \Qpn, so normalized that \Zpn\ has measure one. \begin{definition}[Igusa's $p$-adic local zeta function] Let $p$ be a prime number, $f(x)=f(x_1,\ldots,x_n)$ a polynomial in $\Qp[x_1,\ldots,x_n]$, and $\Phi$ a Schwartz--Bruhat function on \Qpn, i.e., a locally constant function $\Phi:\Qpn\to\C$ with compact support. Igusa's $p$-adic local zeta function associated to $f$ and $\Phi$ is defined as \begin{equation*} Z_{f,\Phi}:\{s\in\C\mid\Re(s)>0\}\to\C:s\mapsto\int_{\Qpn}|f(x)|^s\Phi(x)|dx|. \end{equation*} We will mostly consider the case where $\Phi$ is the characteristic function of either \Zpn\ or $p\Zpn=(p\Zp)^n$. By Igusa's $p$-adic zeta function \Zf\ of $f$ (without mentioning $\Phi$), we mean $Z_{f,\Phi}$, where $\Phi=\chi(\Zpn)$ is the characteristic function of \Zpn. By the local Igusa zeta function \Zof\ of $f$, we mean $Z_{f,\Phi}$, where $\Phi=\chi(p\Zpn)$ is the characteristic function of $p\Zpn$. \end{definition} Using resolution of singularities, Igusa \cite{Igu74} proves in 1974 that $Z_{f,\Phi}$ is a rational function in the variable $t=p^{-s}$; more precisely, he shows that there exists a rational function $\widetilde{Z}_{f,\Phi}\in\Q(t)$, such that $Z_{f,\Phi}(s)=\widetilde{Z}_{f,\Phi}(p^{-s})$ for all $s\in\C$ with $\Re(s)>0$. Denoting the meromorphic continuation of $Z_{f,\Phi}$ to the whole complex plane again with $Z_{f,\Phi}$, he also obtains a set of candidate poles for $Z_{f,\Phi}$ in terms of numerical data associated to an embedded resolution of singularities of the locus $f^{-1}(0)\subset\Qpn$. In 1984 Denef \cite{Den84} proves the rationality of $Z_{f,\Phi}$ in an entirely different way, using $p$-adic cell decomposition. For a prime number $p$ and $f(x)=f(x_1,\ldots,x_n)\in\Zp[x_1,\ldots,x_n]$, Igusa's zeta function \Zf\ is closely related to the numbers $N_l$ of solutions in $(\Zp/p^l\Zp)^n$ of the polynomial congruences $f(x)\equiv0\bmod p^l$ for $l\geqslant1$. For instance, the poles of \Zf\ determine the behavior of the numbers $N_l$ for $l$ big enough. The poles of Igusa's zeta function are also the subject of the Monodromy Conjecture, formulated by Igusa in 1988. It predicts a remarkable connection between the poles of \Zf\ and the eigenvalues of the local monodromy of $f$. The conjecture is motivated by analogous results for Archimedean local zeta functions (over \R\ or \C\ instead of \Qp) and---of course---by all known examples supporting it. If the Monodromy Conjecture were true, it would explain why, generally, only a few of the candidate poles arising from an embedded resolution of singularities are actually poles. \begin{conjecture}[Monodromy Conjecture for Igusa's $p$-adic zeta function over \Qp]\label{mcigusa1} \textup{\cite{Igu88}}. Let $f(x_1,\ldots,x_n)$ be a polynomial in $\Z[x_1,\ldots,x_n]$. For almost all\,\footnote{By \lq almost all\rq\ we always mean \lq all, except finitely many\rq, unless expressly stated otherwise.} prime numbers $p$, we have the following. If $s_0$ is a pole of Igusa's local zeta function \Zf\ of $f$, then $e^{2\pi i\Re(s_0)}$ is an eigenvalue of the local monodromy operator acting on some cohomology group of the Milnor fiber of $f$ at some point of the hypersurface $f^{-1}(0)\subset\C^n$. 
\end{conjecture} There is a local version of this conjecture considering Igusa's zeta function on a small enough neighborhood of $0\in\Qpn$ and local monodromy only at points of $f^{-1}(0)\subset\C^n$ close to the origin. \begin{conjecture}[Local version of Conjecture~\ref{mcigusa1}]\label{mcigusa1lokaal} Let $f(x_1,\ldots,x_n)$ be a polynomial in $\Z[x_1,\ldots,x_n]$ with $f(0)=0$. For almost all prime numbers $p$ and for $k$ big enough, we have the following. If $s_0$ is a pole of $Z_{f,\chi(p^k\Zpn)}$, then $e^{2\pi i\Re(s_0)}$ is an eigenvalue of the local monodromy of $f$ at some point of the hypersurface $f^{-1}(0)\subset\C^n$ close to the origin. \end{conjecture} There exists a stronger, related conjecture, also due to Igusa and also inspired by the analogous theorem in the Archimedean case. \begin{conjecture}\label{Smcigusa1} \textup{\cite{Igu88}}. Let $f(x_1,\ldots,x_n)\in\Z[x_1,\ldots,x_n]$. For almost all prime numbers $p$, we have the following. If $s_0$ is a pole of \Zf, then $\Re(s_0)$ is a root of the Bernstein--Sato polynomial $b_f(s)$ of $f$. There is a local version of this conjecture considering $Z_{f,\chi(p^k\Zpn)}$ for $k$ big enough and the local Bernstein--Sato polynomial $b_f^0(s)$ of $f$. \end{conjecture} Malgrange \cite{malgrange} proved in 1983 that if $f(x_1,\ldots,x_n)\in\C[x_1,\ldots,x_n]$ and $s_0$ is a root of the Bernstein--Sato polynomial of $f$, then $e^{2\pi is_0}$ is a monodromy eigenvalue of $f$. Therefore Conjecture~\ref{Smcigusa1} implies Conjecture~\ref{mcigusa1}. The above conjectures were verified by Loeser for polynomials in two variables \cite{Loe88} and for non-degenerated polynomials in several variables subject to extra non-natural technical conditions (see \cite{Loe90} or Theorem~\ref{theoloesernondeg}). In higher dimension or in a more general setting, there are various partial results, e.g., \cite{ACLM02,ACLM05,BorTVBN,BMTmcha,HMY07,LVmcndss,LV09,Loe90,NV10bis,VVmcid2,Vey93,Vey06}. In \cite{LVmcndss} Lemahieu and Van Proeyen prove the Monodromy Conjecture for the local topological zeta function (a kind of limit of Igusa zeta functions) of a non-degenerated surface singularity. Hence they achieve the result of Loeser for the topological zeta function in dimension three without the extra conditions. \subsection{Statement of the main theorem} The (first) goal of this paper is to obtain the result of Lemahieu and Van Proeyen for the original local Igusa zeta function, i.e., to prove Conjecture~\ref{mcigusa1lokaal} for a polynomial in three variables that is non-degenerated over \C\ and \Fp\ with respect to its Newton polyhedron. Before formulating our theorem precisely, let us first define the Newton polyhedron of a polynomial and the notion of non-degeneracy. \begin{definition}[Newton polyhedron]\label{def_NPad} Let $R$ be a ring. For $\omega=(\omega_1,\ldots,\omega_n)\in\Zplusn$, we denote by $x^{\omega}$ the corresponding monomial $x_1^{\omega_1}\cdots x_n^{\omega_n}$ in $R[x_1,\ldots,x_n]$. Let $f(x)=f(x_1,\ldots,x_n)=\sum_{\omega\in\Zplusn}a_{\omega}x^{\omega}$ be a nonzero polynomial over $R$ satisfying $f(0)=0$. Denote the support of $f$ by $\supp(f)=\{\omega\in\Zplusn\mid a_{\omega}\neq0\}$. The Newton polyhedron \Gf\ of $f$ is then defined as the convex hull in \Rplusn\ of the set \begin{equation*} \bigcup_{\omega\in\supp(f)}\omega+\Rplusn. \end{equation*} The global Newton polyhedron $\Gglf$ of $f$ is defined as the convex hull of $\supp(f)$. Clearly we have $\Gf=\Gglf+\Rplusn$. 
\end{definition} \begin{notation}\label{notftauart3} Let $f$ be as in Definition \ref{def_NPad}. For every face\footnote{By a face of $\Gf$ we mean $\Gf$ itself or one of its proper faces, which are the intersections of $\Gf$ with a supporting hyperplane. See, e.g., \cite{Roc70}.} $\tau$ of the Newton polyhedron $\Gf$ of $f$, we put \begin{equation*} \ft(x)=\sum_{\omega\in\tau}a_{\omega}x^{\omega}. \end{equation*} \end{notation} \begin{definition}[Non-degenerated over \C] Let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero polynomial in $\C[x_1,\ldots,x_n]$ satisfying $f(0)=0$. We say that $f$ is non-degenerated over \C\ with respect to all the faces of its Newton polyhedron \Gf, if for every\footnote{Thus also for \Gf.} face $\tau$ of \Gf, the zero locus $\ft^{-1}(0)\subset\C^n$ of \ft\ has no singularities in \Ccrossn. We say that $f$ is non-degenerated over \C\ with respect to all the compact faces of its Newton polyhedron, if the same condition is satisfied, but only for the compact faces $\tau$ of \Gf. \end{definition} \addtocounter{footnote}{-1} \begin{definition}[Non-degenerated over \Qp]\label{def_non-degenerated2} Let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero polynomial in $\Qp[x_1,\ldots,x_n]$ satisfying $f(0)=0$. We say that $f$ is non-degenerated over \Qp\ with respect to all the faces of its Newton polyhedron \Gf, if for every\footnotemark\ face $\tau$ of \Gf, the zero locus $\ft^{-1}(0)\subset\Qpn$ of \ft\ has no singularities in \Qpxn. We say that $f$ is non-degenerated over \Qp\ with respect to all the compact faces of its Newton polyhedron, if we have the same condition, but only for the compact faces $\tau$ of \Gf. \end{definition} \begin{notation}\label{notftaubarart3} For $f\in\Zp[x_1,\ldots,x_n]$, we denote by $\overline{f}$ the polynomial over \Fp, obtained from $f$, by reducing each of its coefficients modulo $p\Zp$. \end{notation} \addtocounter{footnote}{-1} \begin{definition}[Non-degenerated over \Fp]\label{def_non-degenerated3} Let $f(x)=f(x_1,\ldots,x_n)$ be a non\-zero polynomial in $\Zp[x_1,\ldots,x_n]$ satisfying $f(0)=0$. We say that $f$ is non-degenerated over \Fp\ with respect to all the faces of its Newton polyhedron \Gf, if for every\footnotemark\ face $\tau$ of \Gf, the zero locus of the polynomial \fbart\ has no singularities in \Fpcrossn, or, equivalently, the system of polynomial congruences \begin{equation*} \left\{ \begin{aligned} \ft(x)&\equiv0\bmod p,\\ \frac{\partial \ft}{\partial x_i}(x)&\equiv0\bmod p;\quad i=1,\ldots,n; \end{aligned} \right. \end{equation*} has no solutions in \Zpxn. We say that $f$ is non-degenerated over \Fp\ with respect to all the compact faces of its Newton polyhedron, if the same condition is satisfied, but only for the compact faces $\tau$ of \Gf. \end{definition} \begin{remarks}\label{verndcndfp} \begin{enumerate} \item Let $f(x_1,\ldots,x_n)\in\Z[x_1,\ldots,x_n]$ be a nonzero polynomial satisfying $f(0)=0$. Suppose that $f$ is non-degenerated over \C\ with respect to all the (compact) faces of its Newton polyhedron \Gf. Then $f$ is non-degenerated over \Fp\ with respect to all the (compact) faces of \Gf, for almost all $p$. This is a consequence of the Weak Nullstellensatz. \item The condition of non-degeneracy is a generic condition in the following sense. Let $\Gamma\subset\Rplusn$ be a Newton polyhedron. Then almost all\footnote{By \lq almost all\rq\ we mean the following. Let $B$ be any bounded subset of \Rplusn\ that contains all vertices of $\Gamma$. 
Put $N=\#\Zn\cap\Gamma\cap B$, and associate to every $f(x)\in\C[x_1,\ldots,x_n]$ with $\Gf=\Gamma$ and $\supp(f)\subset B$ an $N$-tuple containing its coefficients. Then the set of $N$-tuples corresponding to a non-degenerated polynomial, is Zariski-dense in $\C^N$.} pol\-y\-no\-mi\-als $f(x)\in\C[x_1,\ldots,x_n]$ with $\Gf=\Gamma$ are non-degenerated over \C\ with respect to all the faces of $\Gamma$. (The same is true if we replace \C\ by \Qp.) \end{enumerate} \end{remarks} We can now state our main theorem. \begin{theorem}[Monodromy Conjecture for Igusa's $p$-adic local zeta function of a non-degenerated surface singularity]\label{mcigusandss} Let $f(x,y,z)\in\Z[x,y,z]$ be a nonzero polynomial in three variables satisfying $f(0,0,0)=0$, and let $U\subset\C^3$ be a neighborhood of the origin. Suppose that $f$ is non-degenerated over \C\ with respect to all the compact faces of its Newton polyhedron, and let $p$ be a prime number such that $f$ is also non-degenerated over \Fp\ with respect to the same faces.\footnote{By Remark~\ref{verndcndfp}(i) this is the case for almost all prime numbers $p$.} Suppose that $s_0$ is a pole of the local Igusa zeta function \Zof\ associated to $f$. Then $e^{2\pi i\Re(s_0)}$ is an eigenvalue of the local monodromy of $f$ at some point of $f^{-1}(0)\cap U$. \end{theorem} Next we want to state two results of Denef and Hoornaert on Igusa's zeta function for non-degenerated polynomials. To do so, we need some notions that are closely related to Newton polyhedra. We introduce them in the following subsection (see also \cite{BorIZFs,DH01,DL92,HMY07}). \subsection{Preliminaries on Newton polyhedra}\label{premartdrie} We gave the definition of a Newton polyhedron in Definition~\ref{def_NPad}. Now we introduce some related notions. \begin{definition}[$m(k)$]\label{def_mfad} Let $R$ be a ring, and let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero polynomial over $R$ satisfying $f(0)=0$. For $k\in\Rplusn$, we define \begin{equation*} m(k)=\inf_{x\in\Gf}k\cdot x, \end{equation*} where $k\cdot x$ denotes the scalar product of $k$ and $x$. \end{definition} The infimum in the definition above is actually a minimum, where the minimum can as well be taken over the global Newton polyhedron \Gglf\ of $f$, which is a compact set, or even over the finite set $\supp(f)$. \begin{definition}[First meet locus]\label{def_firstmeetlocusad} Let $f$ be as in Definition \ref{def_mfad} and $k\in\Rplusn$. We define the first meet locus of $k$ as the set \begin{equation*} F(k)=\{x\in\Gf\mid k\cdot x=m(k)\}, \end{equation*} which is always a face of \Gf. \end{definition} \begin{definition}[Primitive vector] A vector $k\in\Rn$ is called primitive if the components of $k$ are integers whose greatest common divisor is one. \end{definition} \begin{definition}[\Dtu]\label{def_Dfad} Let $f$ be as in Definition \ref{def_mfad}. For a face $\tau$ of \Gf, we call \begin{equation*} \Dtu=\{k\in\Rplusn\mid F(k)=\tau\} \end{equation*} the cone associated to $\tau$. The \Dtu\ are the equivalence classes of the equivalence relation $\sim$ on \Rplusn, defined by \begin{equation*} k\sim k'\qquad\textrm{if and only if}\qquad F(k)=F(k'). \end{equation*} The \lq cones\rq\ \Dtu\ thus form a partition of \Rplusn: \begin{equation*} \{\Dtu\mid \tau\ \mathrm{is\ a\ face\ of}\ \Gf\}=\Rplusn/\sim. 
\end{equation*} \end{definition} The \Dtu\ are in fact relatively open\footnote{A subset of \Rplusn\ is called relatively open if it is open in its affine closure.} convex cones\footnote{A subset $C$ of \Rn\ is called a convex cone if it is a convex set and $\lambda x\in C$ for all $x\in C$ and all $\lambda\in\Rplusnul$.} with a very specific structure, as stated in the following lemma. \begin{lemma}[Structure of the \Dtu]\label{lemma_struc_Dftad} \textup{\cite[Lemma 2.6]{DH01}}. Let $f$ be as in Definition~\ref{def_mfad}. Let $\tau$ be a proper face of \Gf\ and let $\tau_1,\ldots,\tau_r$ be the facets\footnote{A facet is a face of codimension one.} of \Gf\ that contain $\tau$. Let $v_1,\ldots,v_r$ be the unique primitive vectors in $\Zplusn\setminus\{0\}$ that are perpendicular to $\tau_1,\ldots,\tau_r$, respectively. Then the cone \Dtu\ associated to $\tau$ is the convex cone \begin{equation*} \Dtu=\{\lambda_1v_1+\lambda_2v_2+\cdots+\lambda_rv_r\mid \lambda_j\in\Rplusnul\}, \end{equation*} and its dimension\footnote{The dimension of a convex cone is the dimension of its affine hull.} equals $n-\dim\tau$. \end{lemma} \begin{definition}[Rational, simplicial, simple]\label{def_rationalconead} For $v_1,\ldots,v_r\in\Rn\setminus\{0\}$, we call \begin{equation*} \Delta=\cone(v_1,\ldots,v_r)=\{\lambda_1v_1+\lambda_2v_2+\cdots+\lambda_rv_r\mid \lambda_j\in\Rplusnul\} \end{equation*} the cone strictly positively spanned by the vectors $v_1,\ldots,v_r$. When the $v_1,\ldots,v_r$ can be chosen from \Zn, we call it a rational cone. If we can choose $v_1,\ldots,v_r$ linearly independent over \R, then $\Delta$ is called a simplicial cone. If $\Delta$ is rational and $v_1,\ldots,v_r$ can be chosen from a \Z-module basis of \Zn, we call $\Delta$ a simple cone. \end{definition} It follows from Lemma~\ref{lemma_struc_Dftad} that the topological closures $\overbar{\Dtu}$\footnote{$\overbar{\Dtu}=\{\lambda_1v_1+\lambda_2v_2+\cdots+\lambda_rv_r\mid \lambda_j\in\Rplus\}=\{k\in\Rplusn\mid F(k)\supset\tau\}$.} of the cones \Dtu\ form a fan\footnote{A fan $\mathcal{F}$ is a finite set of rational polyhedral cones such that every face of a cone in $\mathcal{F}$ is contained in $\mathcal{F}$ and the intersection of each two cones $C$ and $C'$ in $\mathcal{F}$ is a face of both $C$ and $C'$.} of rational polyhedral cones\footnote{A rational polyhedral cone is a closed convex cone, generated by a finite subset of \Zn.}. \begin{remark} The function $m$ from Definition~\ref{def_mfad} is linear on each $\overbar{\Dtu}$. \end{remark} We state without proofs the following two lemmas (see, e.g., \cite{DH01}). \begin{lemma}[Simplicial decomposition] Let $\Delta$ be the cone strictly positively spanned by the vectors $v_1,\ldots,v_r\allowbreak\in\Rplusn\setminus\{0\}$. Then there exists a finite partition of $\Delta$ into cones $\delta_i$, such that each $\delta_i$ is strictly positively spanned by a \R-linearly independent subset of $\{v_1,\ldots,v_r\}$. We call such a decomposition a simplicial decomposition of $\Delta$ without introducing new rays. \end{lemma} \begin{lemma}[Simple decomposition] Let $\Delta$ be a rational simplicial cone. Then there exists a finite partition of $\Delta$ into simple cones. (In general, such a decomposition requires the introduction of new rays.) \end{lemma} Finally, we need the following notion, which is related to the notion of a simple cone. \begin{definition}[Multiplicity]\label{def_multad} Let $v_1,\ldots,v_r$ be \Q-linearly independent vectors in \Zn. 
The multiplicity of $v_1,\ldots,v_r$, denoted by $\mult(v_1,\ldots,v_r)$, is defined as the index of the lattice $\Z v_1+\cdots+\Z v_r$ in the group of points with integral coordinates in the subspace spanned by $v_1,\ldots,v_r$ of the \Q-vector space~$\Q^n$. If $\Delta$ is the cone strictly positively spanned by $v_1,\ldots,v_r$, then we define the multiplicity of $\Delta$ as the multiplicity of $v_1,\ldots,v_r$, and we denote it by $\mult\Delta$. \end{definition} The following is well-known (see, e.g., \cite[\S 5.3, Thm.~3.1]{Adkins}). \begin{proposition} Let $v_1,\ldots,v_r$ be \Q-linearly independent vectors in \Zn. The multiplicity of $v_1,\ldots,v_r$ equals the cardinality of the set \begin{equation*} \Zn\cap\left\{\sum\nolimits_{j=1}^rh_jv_j\;\middle\vert\;h_j\in[0,1)\text{ for }j=1,\ldots,r\right\}. \end{equation*} Moreover, this number is the greatest common divisor of the absolute values of the determinants of all $(r\times r)$-submatrices of the $(r\times n)$-matrix whose rows contain the coordinates of $v_1,\ldots,v_r$. \end{proposition} \begin{remark}\label{premartdrieeinde} Let $\Delta$ be as in Definition~\ref{def_multad}. Note that $\Delta$ is simple if and only if $\mult\Delta=\mult(v_1,\ldots,v_r)=1$. \end{remark} \subsection{Theorems of Denef and Hoornaert} \begin{notation}\label{notsigmakartdrieintro} For $k=(k_1,\ldots,k_n)\in\Rn$, we denote $\sigma(k)=k_1+\cdots+k_n$. \end{notation} \begin{theorem}\label{theodenef1} \textup{\cite{Den18,Den95}}.\footnote{The theorem was announced in \cite{Den18} and a proof is written down in \cite{Den95}.} Let $f(x_1,\ldots,x_n)\in\Qp[x_1,\ldots,x_n]$ be a nonzero polynomial with $f(0,\ldots,0)=0$, and $\Phi$ a Schwartz--Bruhat function on \Qpn. Let $\tau_1,\ldots,\tau_r$ be all the facets of \Gf, and let $v_1,\ldots,v_r$ be the unique primitive vectors in $\Zplusn\setminus\{0\}$ that are perpendicular to $\tau_1,\ldots,\tau_r$, respectively. Suppose that $f$ is non-degenerated over \Qp\ with respect to all the compact faces of its Newton polyhedron, and suppose that the support of $\Phi$ is contained in a small enough neighborhood of the origin. If $s_0$ is a pole of $Z_{f,\Phi}$, then \begin{equation}\label{scpsvolgdenef} \begin{aligned} s_0&=-1+\frac{2k\pi i}{\log p}\qquad\text{for some $k\in\Z$, or}\\ s_0&=-\frac{\sigma(v_j)}{m(v_j)}+\frac{2k\pi i}{m(v_j)\log p} \end{aligned} \end{equation} for some $j\in\{1,\ldots,r\}$ with $m(v_j)\neq0$ and some $k\in\Z$. \end{theorem} Essential in the proof of Theorem~\ref{mcigusandss} is the following combinatorial formula for \Zof\ for non-degenerated polynomials due to Denef and Hoornaert. \begin{theorem}\label{formdenhoor} \textup{\cite[Thm.~4.2]{DH01}}. Let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero pol\-y\-no\-mi\-al in $\Zp[x_1,\ldots,x_n]$ satisfying $f(0)=0$. Suppose that $f$ is non-de\-gen\-er\-ated over \Fp\ with respect to all the compact faces of its Newton polyhedron \Gf. Then the local Igusa $p$-adic zeta function associated to $f$ is the meromorphic complex function \begin{equation}\label{cfDHinlartdrie} \Zof=\sum_{\substack{\tau\mathrm{\ compact}\\\mathrm{face\ of\ }\Gf}}L_{\tau}S(\Dtu), \end{equation} with \begin{gather*} L_{\tau}:s\mapsto L_{\tau}(s)=\left(\frac{p-1}{p}\right)^n-\frac{N_{\tau}}{p^{n-1}}\frac{p^s-1}{p^{s+1}-1},\\ N_{\tau}=\#\left\{x\in\Fpcrossn\;\middle\vert\;\fbart(x)=0\right\}, \end{gather*} and \begin{equation*} S(\Dtu):s\mapsto S(\Dtu)(s)=\sum_{k\in\Zn\cap\Dtu}p^{-\sigma(k)-m(k)s} \end{equation*} for every compact face $\tau$ of \Gf. 
The $S(\Dtu)$ can be calculated as follows. Choose a decomposition $\{\delta_i\}_{i\in I}$ of the cone \Dtu\ into simplicial cones $\delta_i$ without introducing new rays. Then clearly \begin{equation}\label{deelsomform2} S(\Dtu)=\sum_{i\in I}S(\delta_i), \end{equation} in which \begin{equation*} S(\delta_i):s\mapsto S(\delta_i)(s)=\sum_{k\in\Zn\cap\delta_i}p^{-\sigma(k)-m(k)s}. \end{equation*} Suppose that the cone $\delta_i$ is strictly positively spanned by the linearly independent primitive vectors $v_j$, $j\in J_i$, in $\Zplusn\setminus\{0\}$. Then we have \begin{equation*} S(\delta_i)(s)=\frac{\Sigma(\delta_i)(s)}{\prod_{j\in J_i}(p^{\sigma(v_j)+m(v_j)s}-1)}, \end{equation*} with $\Sigma(\delta_i)$ the function \begin{equation}\label{defSigmadiThDH} \Sigma(\delta_i):s\mapsto\Sigma(\delta_i)(s)=\sum_hp^{\sigma(h)+m(h)s}, \end{equation} where $h$ runs through the elements of the set \begin{equation*} H(v_j)_{j\in J_i}=\Z^n\cap\lozenge(v_j)_{j\in J_i}, \end{equation*} with \begin{equation*} \lozenge(v_j)_{j\in J_i}=\left\{\sum\nolimits_{j\in J_i}h_jv_j\;\middle\vert\;h_j\in[0,1)\text{ for all }j\in J_i\right\} \end{equation*} the fundamental parallelepiped spanned by the vectors $v_j$, $j\in J_i$. \end{theorem} \begin{remark} There exists a global version of this formula for \Zf; the condition is that $f$ is non-degenerated over \Fp\ with respect to \underline{all} the faces of its Newton polyhedron, and the sum \eqref{cfDHinlartdrie} should be taken over \underline{all} the faces of \Gf\ as well (including \Gf\ itself). In the few definitions that follow, we state everything for the local Igusa zeta function \Zof, since this zeta function is the subject of our theorem. Nevertheless, all notions and results have straightforward analogues for \Zf\ (see \cite{DH01}). \end{remark} The formula for \Zof\ in the theorem confirms (under slightly different conditions) the result of Denef that if $s_0$ is a pole of \Zof, it must be one of the numbers \eqref{scpsvolgdenef} from Theorem~\ref{theodenef1}. We call these numbers the candidate poles of \Zof. \subsection{Expected order and contributing faces} \begin{definition}[Expected order of a candidate pole] Let $f$ be as in Theorem~\ref{formdenhoor}, and suppose that $s_0$ is a candidate pole of \Zof. We define the expected order of the candidate pole $s_0$ (as a pole of \Zof\ with respect to the formula in Theorem~\ref{formdenhoor}) as \begin{equation}\label{defexpordpole} \max\{\text{order of $s_0$ as a pole of $L_{\tau}S(\Dtu)$}\mid\text{$\tau$ face of \Gf}\}. \end{equation} Hereby we agree that the order of $s_0$ as a pole of $L_{\tau}S(\Dtu)$ equals zero, if $s_0$ is not a pole of $L_{\tau}S(\Dtu)$. Note that if $\Re(s_0)\neq-1$, we may omit $L_{\tau}$ in \eqref{defexpordpole}. \end{definition} \begin{remark} Clearly the expected order of a candidate pole $s_0$ of \Zof\ is an upper bound for the actual order of $s_0$ as a pole of \Zof. \end{remark} \begin{definition}[Contributing vector/face/cone]\label{defcontrinlad} Let $f$ be as in Theorem~\ref{formdenhoor}, and suppose that $s_0$ is a candidate pole of \Zof. We say that a primitive vector $v\in\Zplusn\setminus\{0\}$ contributes to $s_0$ if $p^{\sigma(v)+m(v)s_0}=1$, or, equivalently, if \begin{equation*} s_0=-\frac{\sigma(v)}{m(v)}+\frac{2k\pi i}{m(v)\log p} \end{equation*} for some $k\in\Z$. We say that a facet $\tau$ of \Gf\ contributes to the candidate pole $s_0$, if the unique primitive vector $v\in\Zplusn\setminus\{0\}$ that is perpendicular to $\tau$, contributes to $s_0$. 
More generally, a face of \Gf\ is said to contribute to $s_0$, if it is contained in one or more contributing facets of \Gf. Finally, we say that a cone $\delta=\cone(v_1,\ldots,v_r)$, minimally\footnote{By \lq minimally\rq\ we mean that $\delta\neq\cone(v_1,\ldots,v_{j-1},v_{j+1},\ldots,v_r)$ for all $j\in\{1,\ldots,r\}$.} strictly positively spanned by the primitive vectors $v_1,\ldots,v_r\in\Zplusn\setminus\{0\}$, contributes to $s_0$, if one or more of the vectors $v_j$ contribute to $s_0$. Note that in this way a face $\tau$ of \Gf\ contributes to $s_0$ if and only if its associated cone \Dtu\ does so. \end{definition} Let $f$ be as above, and suppose that $s_0$ is a candidate pole of \Zof\ with $\Re(s_0)\neq-1$. From Theorem~\ref{formdenhoor} it should be clear that if we want to investigate whether $s_0$ is actually a pole or not, we only need to consider the sum $\sum L_{\tau}S(\Dtu)$ over the contributing compact faces $\tau$ of \Gf. Furthermore, if, for a contributing compact face $\tau$, in order to deal with $S(\Dtu)$, we consider a simplicial subdivision $\{\delta_i\}_i$ of the cone \Dtu, we only need to take into account the terms of \eqref{deelsomform2} corresponding to the contributing simplicial cones in $\{\delta_i\}_i$, in order to decide whether $s_0$ is a pole or not. \begin{remark} Let $f$ be as in Theorem~\ref{formdenhoor}. Let $\tau$ be a facet of \Gf\ and $s_0$ a candidate pole of \Zof. One easily checks that if $\tau$ contributes to $s_0$, then $\Re(s_0)=-1/t_0$, where $(t_0,\ldots,t_0)$ denotes the intersection point of the affine support $\aff\tau$ of $\tau$ with the diagonal $\{(t,\ldots,t)\mid t\in\R\}\subset\Rn$ of the first orthant (see also \cite[Prop.~5.1]{DH01}). \end{remark} \subsection{$B_1$-facets and the structure of the proof of the main theorem} The proof of Theorem~\ref{mcigusandss} consists of three results, namely Theorem~\ref{theoAenL}, Proposition~\ref{propAenL}, and Theorem~\ref{maintheoartdrie}, all stated below. The first two results have been proved by Lemahieu and Van Proeyen in \cite{LVmcndss}; the last one is the subject of the current paper; its proof covers Sections~\ref{fundpar}--\ref{secgeval7art3} (pp.~\pageref{fundpar}--\pageref{eindegrbew}). In order to state the theorems, we have one last important notion to introduce: that of a $B_1$-facet. \begin{definition}[$B_1$-facet]\label{defbeenfacetad} Let $R$ be a ring and $n\in\Z_{\geqslant2}$. Let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero polynomial over $R$ satisfying $f(0)=0$. We call a facet $\tau$ of \Gf\ a $B_1$-simplex for a variable $x_i\in\{x_1,\ldots,x_n\}$, if $\tau$ is a simplex with $n-1$ vertices in the coordinate hyperplane $\{x_i=0\}$ and one vertex in the hyperplane $\{x_i=1\}$. We call a facet of \Gf\ a $B_1$-simplex, if it is a $B_1$-simplex for some variable $x_i$. A facet $\tau$ of \Gf\ is called non-compact for a variable $x_j\in\{x_1,\ldots,x_n\}$, if for every point $(x_1,\ldots,x_n)\in\tau$, we have $(x_1,\ldots,x_{j-1},x_j+1,x_{j+1},\ldots,x_n)\in\tau$. For $j\in\{1,\ldots,n\}$, we shall denote by $\pi_j$ the projection \begin{equation*} \pi_j:\Rn\to\R^{n-1}:(x_1,\ldots,x_n)\mapsto(x_1,\ldots,x_{j-1},x_{j+1},\ldots,x_n). \end{equation*} Suppose that $n\geqslant3$. We call a facet $\tau$ of \Gf\ a non-compact $B_1$-facet for a variable $x_i$, if $\tau$ is non-compact for precisely one variable $x_j\neq x_i$ and $\pi_j(\tau)$ is a $B_1$-simplex in $\R^{n-1}$ for the variable $x_i$. 
A facet of \Gf\ is called a non-compact $B_1$-facet, if it is a non-compact $B_1$-facet for some variable $x_i$. Finally, we call a facet of \Gf\ a $B_1$-facet (or $B_1$ for short) for a variable $x_i$, if it is either a $B_1$-simplex for $x_i$ or a non-compact $B_1$-facet for $x_i$; we call it a $B_1$-facet when it is $B_1$ for some variable $x_i$. \end{definition} The first step in the proof of Theorem~\ref{mcigusandss} is the fact that \lq almost all\rq\ candidate poles of \Zof\ induce monodromy eigenvalues; \lq almost all\rq\ means all, except---possibly---those that are only contributed by $B_1$-facets. \begin{theorem}[On the candidate poles of \Zof\ contributed by non-$B_1$-facets]\label{theoAenL} Cfr.\ \textup{\cite[Theorem~10]{LVmcndss}}. Let $f$ and $p$ be as in Theorem~\ref{mcigusandss}. Let $s_0$ be a candidate pole of \Zof\ and suppose that $s_0$ is contributed by a facet of \Gf\ that is not a $B_1$-facet. Then $e^{2\pi i\Re(s_0)}$ is an eigenvalue of the local monodromy of $f$ at some point of the surface $f^{-1}(0)\subset\C^3$ close to the origin. \end{theorem} The proof of the theorem above relies on Varchenko's formula \cite{Varchenko} for the zeta function of monodromy of $f$ (at the origin) in terms of the Newton polyhedron of $f$, which in turn relies on A'Campo's formula \cite{AC75} for the same zeta function in terms of an embedded resolution of singularities of $f^{-1}(0)\subset\C^3$. In this context, we would also like to mention the results of Denef--Sperber \cite{denefsperber} and Cluckers \cite{CluckersDUKE,CluckersTAMS} on exponential sums associated to non-degenerated polynomials. Here one also obtains nice results when imposing certain conditions on the faces of the Newton polyhedron that are similar to the one in the theorem above. This is probably also a good place to state the result of Loeser on the Monodromy Conjecture for non-degenerated singularities. Loeser proves (in general dimension) a result similar to Theorem~\ref{theoAenL}, imposing several, rather technical conditions on the Newton polyhedron's facets. \begin{theorem}\label{theoloesernondeg} \textup{\cite{Loe90}}. Let $f(x_1,\ldots,x_n)\in\C[x_1,\ldots,x_n]$ be a nonzero pol\-y\-no\-mial with $f(0,\ldots,0)=0$. Suppose that $f$ is non-degenerated over \C\ with respect to all the compact faces of its Newton polyhedron. Let $\tau_0$ be a compact facet of \Gf, and let $\tau_1,\ldots,\tau_r$ be all the facets of \Gf\ that are different, but not disjoint from $\tau_0$. Denote by $v_0,v_1,\ldots,v_r$ the unique primitive vectors in $\Zplusn\setminus\{0\}$ that are perpendicular to $\tau_0,\tau_1,\ldots,\tau_r$, respectively. Suppose that \begin{enumerate} \item ${\ds\frac{\sigma(v_0)}{m(v_0)}<1}$\quad and that \item ${\ds\frac{1}{\mult(v_0,v_j)}\left(\sigma(v_j)-\frac{\sigma(v_0)}{m(v_0)}m(v_j)\right)\not\in\Z}$\quad for all $j\in\{1,\ldots,r\}$. \end{enumerate} Then $-\sigma(v_0)/m(v_0)$ is a root of the local Bernstein--Sato polynomial $b_f^0$ of $f$. Hereby $\mult(v_0,v_j)$ denotes the multiplicity of $v_0$ and $v_j$ (cfr.\ Def\-i\-ni\-tion~\ref{def_multad}). \end{theorem} By the result of Malgrange \cite{malgrange} we mentioned earlier, under the conditions of the theorem, we also have that $e^{-2\pi i\sigma(v_0)/m(v_0)}$ is an eigenvalue of the local monodromy of $f$ at some point of $f^{-1}(0)\subset\C^n$ close to the origin. Loeser proves that this remains true if we replace Condition~(i) by $\sigma(v_0)/m(v_0)\not\in\Z$. Let us go back to Theorem~\ref{theoAenL} and the $B_1$-facets. 
What can we say about the candidate poles of \Zof\ that are exclusively contributed by $B_1$-facets? In 1984 Denef announced the following theorem (in general dimension) on candidate poles of $Z_{f,\Phi}$ that are contributed by a single $B_1$-simplex. \begin{theorem}\label{deneftheonp1} Cfr.\ \textup{\cite{Den18,DenefSargos92}}.\footnote{The theorem was announced in \cite{Den18} and a proof is sketched in the real case in \cite{DenefSargos92}. This proof is adaptable to the $p$-adic case, but except for dimension three, a complete detailed proof has not been written down yet.} Let $f(x_1,\ldots,x_n)\in\Qp[x_1,\ldots,x_n]$ be a non\-zero polynomial with $f(0,\ldots,0)=0$, and $\Phi$ a Schwartz--Bruhat function on \Qpn. Suppose that $f$ is non-degenerated over \Qp\ with respect to all the compact faces of its Newton polyhedron, and suppose that the support of $\Phi$ is contained in a small enough neighborhood of the origin. Let $\tau_0,\tau_1,\ldots,\tau_r$ be all the facets of \Gf, and let $v_0,v_1,\ldots,v_r$ be the unique primitive vectors in $\Zplusn\setminus\{0\}$ that are perpendicular to $\tau_0,\tau_1,\ldots,\tau_r$, respectively. Suppose that $\tau_0$ is a $B_1$-simplex, that $\sigma(v_0)/m(v_0)\neq1$, and that $\sigma(v_0)/m(v_0)\neq\sigma(v_j)/m(v_j)$ for all $j\in\{1,\ldots,r\}$. Then there is no pole $s_0$ of $Z_{f,\Phi}$ with $\Re(s_0)=-\sigma(v_0)/m(v_0)$. \end{theorem} We can restate Denef's theorem as follows. \begin{theorem}\label{deneftheonp1bis} Cfr.\ \textup{\cite{Den18,DenefSargos92}}. Let $f(x_1,\ldots,x_n)\in\Qp[x_1,\ldots,x_n]$ be a non\-zero polynomial with $f(0,\ldots,0)=0$, and $\Phi$ a Schwartz--Bruhat function on \Qpn. Suppose that $f$ is non-degenerated over \Qp\ with respect to all the compact faces of its Newton polyhedron, and suppose that the support of $\Phi$ is contained in a small enough neighborhood of the origin. Let $s_0\neq-1$ be a real candidate pole of $Z_{f,\Phi}$. Suppose that exactly one facet of \Gf\ contributes to $s_0$ and that this facet is a $B_1$-simplex. Then there exists no pole of $Z_{f,\Phi}$ with real part $s_0$. \end{theorem} Denef noticed that one cannot expect this theorem to be generally true for candidate poles that are contributed by several $B_1$-simplices. He gave the following counterexample\footnote{Denef in fact showed that for $f=x^n+xy+y^m+z^2$, the candidate pole $-3/2$ (which is contributed by two $B_1$-simplices) is an actual pole of $Z_{f,\Phi}$ for $n,m$ big enough.} in dimension three. We will discuss the example in detail, as it also illustrates Denef and Hoornaert's formula. 
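Before going through the example, it may help to see in the simplest situation how the building blocks of Theorem~\ref{formdenhoor} arise; the following small computation is meant purely as an illustration and is not used in the sequel. Consider a one-dimensional cone $\delta=\cone(v)$ spanned by a single primitive vector $v\in\Zplusn\setminus\{0\}$. Since $v$ is primitive, $\Zn\cap\delta=\{lv\mid l\in\Zplusnul\}$, and the fundamental parallelepiped $\lozenge(v)$ contains no integral points other than the origin, so that $\Sigma(\delta)(s)=p^{\sigma(0)+m(0)s}=1$. As $\sigma$ and $m$ are linear on $\overline{\delta}$, summing a geometric series gives, for $\Re(s)>0$,
\begin{equation*}
S(\delta)(s)=\sum_{l\geqslant1}p^{-l(\sigma(v)+m(v)s)}=\frac{1}{p^{\sigma(v)+m(v)s}-1},
\end{equation*}
in accordance with the formula $S(\delta)=\Sigma(\delta)/\prod_j(p^{\sigma(v_j)+m(v_j)s}-1)$ of Theorem~\ref{formdenhoor}. The one-dimensional entries in Table~\ref{tabelconesart3} below, such as $S(\Delta_{\tau_0})(s)=1/(p^{6s+9}-1)$, are obtained in exactly this way.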
\begin{figure}
\centering
\subfigure[Newton polyhedron \Gf\ of $f=x^3+xy+y^2+z^2$ and its faces]{%
% Sketch of \Gf\ with vertices $A(3,0,0)$, $B(1,1,0)$, $C(0,2,0)$, $D(0,0,2)$, the compact facets $\tau_0=ABD$ and $\tau_1=BCD$, the non-compact facets $\tau_2,\tau_3,\tau_4$ in the coordinate hyperplanes, and the half-lines $l_x,l_y,l_z$.
}
\\[+2.25ex]
\subfigure[Cones \Dtu\ associated to the faces $\tau$ of \Gf]{%
% Sketch of the partition of \Rplusn\ into the cones \Dtu: the rays $\Delta_{\tau_0},\ldots,\Delta_{\tau_4}$ spanned by the primitive vectors $v_0,\ldots,v_4$, the two-dimensional cones $\Delta_{[AB]},\Delta_{[AD]},\Delta_{[BC]},\Delta_{[BD]},\Delta_{[CD]},\Delta_{l_x},\Delta_{l_y},\Delta_{l_z}$, and the three-dimensional cones $\Delta_A,\Delta_B,\Delta_C,\Delta_D$, with $\Delta_D$ subdivided into the simplicial cones $\delta_1,\delta_2,\delta_3$.
}
\caption{Combinatorial data associated to $f=x^3+xy+y^2+z^2$}
\label{figvoorbeeldart3}
\end{figure}
\begin{example}[Actual pole of \Zof\ only contributed by $B_1$-facets]\label{denefvbart3} {\normalfont [Denef, 1984]}. Let $p\geqslant3$ be a prime number and consider $f=x^3+xy+y^2+z^2\in\Zp[x,y,z]$. The Newton polyhedron \Gf\ of $f$ and the cones associated to its faces are drawn in Figure~\ref{figvoorbeeldart3}. One checks that $f$ is non-degenerated over \Fp\ with respect to all the compact faces of \Gf\ ($p\neq2$). Table~\ref{tabelfacetsart3} gives an overview of the facets $\tau_j$ of \Gf, their associated numerical data $(m(v_j),\sigma(v_j))$, and their associated candidate poles of \Zof. Facets $\tau_0$ and $\tau_1$ are $B_1$-simplices, while $\tau_2,\tau_3,\tau_4$ lie in coordinate hyperplanes and hence do not yield any candidate poles. Poles of \Zof\ are therefore among the numbers \begin{equation*} s_k=-\frac{3}{2}+\frac{k\pi i}{3\log p},\quad s'_l=-1+\frac{2l\pi i}{\log p};\qquad k,l\in\Z. \end{equation*} The candidate poles $s_k$ with $3\nmid k$ are only contributed by $\tau_0$ and have expected order one, while the $s_k$ with $3\mid k$ are contributed by $\tau_0$ and $\tau_1$; the latter have expected order two since the contributing facets $\tau_0$ and $\tau_1$ share the edge $[BD]$. 
We will now calculate \Zof\ using Theorem~\ref{formdenhoor} in order to find out which candidate poles are actually poles. Table~\ref{tabelconesart3} provides an overview of \Gf's compact faces and their associated cones and all the data needed to fill in the theorem's formula. The numbers $N_{\tau}$ that appear in the $L_{\tau}(s)$ are listed in the third column. Hereby $N_0$ and $N_1$ represent the numbers \begin{align*} N_0&=\#\left\{(x,z)\in(\Fpcross)^2\;\middle\vert\;x^3+z^2=0\right\}\qquad\text{and}\\ N_1&=\#\left\{(y,z)\in(\Fpcross)^2\;\middle\vert\;y^2+z^2=0\right\}. \end{align*} The $S(\Dtu)(s)$ can be calculated based on the data on the cones \Dtu\ in the right-hand side of Table~\ref{tabelconesart3}. We find that \Dtu\ is simplicial for every (compact) face $\tau$ of \Gf, except for $\tau=D$. Those cones \Dtu, $\tau\neq D$, are even simple, except for $\Delta_A,\Delta_B,\Delta_{[AB]}$, whose corresponding fundamental parallelepipeds contain besides the origin also the integral point $(1,2,2)$ (see Table~\ref{tabelfacetsart3}). In order to calculate $S(\Delta_D)(s)$, we choose to decompose $\Delta_D$ into the simplicial cones $\delta_1,\delta_2,\delta_3$ that happen to be simple as well (see Table~\ref{tabelconesart3}). We now obtain \Zof\ as \begin{equation*} \Zofs=\sum_{\substack{\tau\mathrm{\ compact}\\\mathrm{face\ of\ }\Gf}}L_{\tau}(s)S(\Dtu)(s)= \frac{(p-1)(p^{s+3}-1)}{p^3(p^{s+1}-1)(p^{2s+3}-1)}. \end{equation*} Note that \Zofs\ does not depend on $N_0$ or $N_1$. We conclude that the candidate poles that are only contributed by a single $B_1$-simplex are not poles. On the other hand we find that the numbers $s_{3k}$, despite being only contributed by $B_1$-simplices, are indeed poles, although their order is lower than expected. \end{example} \begin{table} \centering \caption{Numerical data associated to $(1,2,2)$ and the facets of \Gf}\label{tabelfacetsart3} { \setlength{\belowrulesep}{.65ex} \setlength{\aboverulesep}{.4ex} \setlength{\belowbottomsep}{0ex} \setlength{\defaultaddspace}{.5em} \begin{tabular}{*{6}{l}}\toprule facet $\tau$ & $\tau$ com- & primitive vector & \multirow{2}*{$m(v)$} & \multirow{2}*{$\sigma(v)$} & candidate poles of \Zof\\ of \Gf & pact? 
& $v\perp\tau$ & & & contributed by $\tau$\\\midrule $\tau_0$ & yes & $v_0(2,4,3)$ & $6$ & $9$ & ${\ds -\frac32+\frac{k\pi i}{3\log p};\ k\in\Z}$\\\addlinespace $\tau_1$ & yes & $v_1(1,1,1)$ & $2$ & $3$ & ${\ds -\frac32+\frac{k\pi i}{\log p};\ k\in\Z}$\\\addlinespace $\tau_2$ & no & $v_2(1,0,0)$ & $0$ & $1$ & none\\\addlinespace $\tau_3$ & no & $v_3(0,1,0)$ & $0$ & $1$ & none\\\addlinespace $\tau_4$ & no & $v_4(0,0,1)$ & $0$ & $1$ & none\\\midrule[\heavyrulewidth] & & integral point $h$ & $m(h)$ & $\sigma(h)$ & \\\cmidrule{3-5} & & $(1,2,2)$ & $3$ & $5$ & \\\bottomrule \end{tabular} } \end{table} \begin{sidewaystable} \caption{Data associated to the compact faces of \Gf\ and their associated cones}\label{tabelconesart3} { \setlength{\belowrulesep}{.8ex} \setlength{\aboverulesep}{.7ex} \setlength{\belowbottomsep}{0ex} \setlength{\defaultaddspace}{.57em} \begin{tabular}{*{9}{l}}\toprule face $\tau$ & \multirow{2}*{\fbart} & \multirow{2}*{$N_{\tau}$} & \multirow{2}*{$L_{\tau}(s)$} & cone & $\dim$ & primitive & $\mult$ & \multirow{2}*{$S(\Dtu)(s),S(\delta_i)(s)$}\\ of \Gf & & & & $\Dtu,\delta_i$ & $\Dtu,\delta_i$ & generators & $\Dtu,\delta_i$ & \\\midrule $A$ & $x^3$ & $0$ & $\bigl(\frac{p-1}{p}\bigr)^3$ & $\Delta_A$ & $3$ & $v_0,v_3,v_4$ & $2$ & $\frac{1+p^{3s+5}}{(p^{6s+9}-1)(p-1)^2}$\\\addlinespace $B$ & $xy$ & $0$ & $\bigl(\frac{p-1}{p}\bigr)^3$ & $\Delta_B$ & $3$ & $v_0,v_1,v_4$ & $2$ & $\frac{1+p^{3s+5}}{(p^{6s+9}-1)(p^{2s+3}-1)(p-1)}$\\\addlinespace $C$ & $y^2$ & $0$ & $\bigl(\frac{p-1}{p}\bigr)^3$ & $\Delta_C$ & $3$ & $v_1,v_2,v_4$ & $1$ & $\frac{1}{(p^{2s+3}-1)(p-1)^2}$\\\midrule[.03em] & & & & $\delta_1$ & $3$ & $v_0,v_1,v_3$ & $1$ & $\frac{1}{(p^{6s+9}-1)(p^{2s+3}-1)(p-1)}$\\\addlinespace & & & & $\delta_2$ & $2$ & $v_1,v_3$ & $1$ & $\frac{1}{(p^{2s+3}-1)(p-1)}$\\\addlinespace & & & & $\delta_3$ & $3$ & $v_1,v_2,v_3$ & $1$ & $\frac{1}{(p^{2s+3}-1)(p-1)^2}$\\\addlinespace $D$ & $z^2$ & $0$ & $\bigl(\frac{p-1}{p}\bigr)^3$ & $\Delta_D$ & $3$ & $v_0,v_1,v_2,v_3$ & -- & $\frac{p^{6s+10}-1}{(p^{6s+9}-1)(p^{2s+3}-1)(p-1)^2}$\\\midrule[.03em] $[AB]$ & $x^3+xy$ & $(p-1)^2$ & $\bigl(\frac{p-1}{p}\bigr)^3-\bigl(\frac{p-1}{p}\bigr)^2\frac{p^s-1}{p^{s+1}-1}$ & $\Delta_{[AB]}$ & $2$ & $v_0,v_4$ & $2$ & $\frac{1+p^{3s+5}}{(p^{6s+9}-1)(p-1)}$\\\addlinespace $[BC]$ & $xy+y^2$ & $(p-1)^2$ & $\bigl(\frac{p-1}{p}\bigr)^3-\bigl(\frac{p-1}{p}\bigr)^2\frac{p^s-1}{p^{s+1}-1}$ & $\Delta_{[BC]}$ & $2$ & $v_1,v_4$ & $1$ & $\frac{1}{(p^{2s+3}-1)(p-1)}$\\\addlinespace $[AD]$ & $x^3+z^2$ & $(p-1)N_0$ & $\bigl(\frac{p-1}{p}\bigr)^3-\frac{(p-1)N_0}{p^2}\frac{p^s-1}{p^{s+1}-1}$ & $\Delta_{[AD]}$ & $2$ & $v_0,v_3$ & $1$ & $\frac{1}{(p^{6s+9}-1)(p-1)}$\\\addlinespace $[BD]$ & $xy+z^2$ & $(p-1)^2$ & $\bigl(\frac{p-1}{p}\bigr)^3-\bigl(\frac{p-1}{p}\bigr)^2\frac{p^s-1}{p^{s+1}-1}$ & $\Delta_{[BD]}$ & $2$ & $v_0,v_1$ & $1$ & $\frac{1}{(p^{6s+9}-1)(p^{2s+3}-1)}$\\\addlinespace $[CD]$ & $y^2+z^2$ & $(p-1)N_1$ & $\bigl(\frac{p-1}{p}\bigr)^3-\frac{(p-1)N_1}{p^2}\frac{p^s-1}{p^{s+1}-1}$ & $\Delta_{[CD]}$ & $2$ & $v_1,v_2$ & $1$ & $\frac{1}{(p^{2s+3}-1)(p-1)}$\\\addlinespace $\tau_0$ & $x^3+xy+z^2$ & $(p-1)^2-N_0$ & $\bigl(\frac{p-1}{p}\bigr)^3-\frac{(p-1)^2-N_0}{p^2}\frac{p^s-1}{p^{s+1}-1}$ & $\Delta_{\tau_0}$ & $1$ & $v_0$ & $1$ & $\frac{1}{p^{6s+9}-1}$\\\addlinespace $\tau_1$ & $xy+y^2+z^2$ & $(p-1)^2-N_1$ & $\bigl(\frac{p-1}{p}\bigr)^3-\frac{(p-1)^2-N_1}{p^2}\frac{p^s-1}{p^{s+1}-1}$ & $\Delta_{\tau_1}$ & $1$ & $v_1$ & $1$ & $\frac{1}{p^{2s+3}-1}$\\\bottomrule \end{tabular} } \end{sidewaystable} In situations as in 
Example~\ref{denefvbart3} that are not covered by Theorem~\ref{theoAenL}, one needs to prove that the pole in question induces a monodromy eigenvalue. This is done by Lemahieu and Van Proeyen in the following proposition and forms the second step in the proof of Theorem~\ref{mcigusandss}. Note that the two $B_1$-simplices in the example are $B_1$ with respect to different variables. \begin{proposition}\label{propAenL} Cfr.\ \textup{\cite[Theorem~15]{LVmcndss}}. Let $f$ and $p$ be as in Theorem~\ref{mcigusandss}. Let $s_0$ be a candidate pole of \Zof\ and suppose that $s_0$ is contributed by two $B_1$-facets of \Gf\ that are \underline{not} $B_1$ for a same variable and that have an edge in common. Then $e^{2\pi i\Re(s_0)}$ is an eigenvalue of the local monodromy of $f$ at some point of the surface $f^{-1}(0)\subset\C^3$ close to the origin. \end{proposition} The proof of the proposition again uses Varchenko's formula and is part of the proof of Theorem~15 in \cite{LVmcndss}. In that paper one considers the local topological zeta function \Ztopof\ instead of \Zof; however, the candidate poles of \Ztopof\ are precisely the real parts of the candidate poles of \Zof, and whenever a facet of \Gf\ contributes to a candidate pole $s_0$ of \Zof, it contributes to the candidate pole $\Re(s_0)$ of \Ztopof\ as well; therefore Proposition~\ref{propAenL} follows in the same way. In order to conclude Theorem~\ref{mcigusandss}, we want to prove that the remaining candidate poles, i.e., candidate poles only contributed by $B_1$-facets, but not satisfying the conditions of Proposition~\ref{propAenL}, are actually not poles. The result is---under slightly different conditions and in dimension three---an optimization of Theorem~\ref{deneftheonp1bis}, partially allowing the candidate pole $s_0$ to be contributed by several $B_1$-facets, including non-compact ones. This is the final step of the proof. \begin{theorem}[On candidate poles of \Zof\ only contributed by $B_1$-facets]\label{maintheoartdrie} Let $f(x,y,z)\in\Zp[x,y,z]$ be a nonzero polynomial in three variables with $f(0,0,0)=0$. Suppose that $f$ is non-degenerated over \Fp\ with respect to all the compact faces of its Newton polyhedron. Let $s_0$ be a candidate pole of \Zof\ with $\Re(s_0)\neq-1$, and suppose that $s_0$ is only contributed by $B_1$-facets of \Gf. Assume also that for any pair of contributing $B_1$-facets, we have that \begin{itemize} \item[-] either they are $B_1$-facets for a same variable, \item[-] or they have at most one point in common. \end{itemize} Then $s_0$ is not a pole of \Zof. \end{theorem} This is the key theorem of the paper that we will prove in the next eight sections. The theorem has been proved for the local topological zeta function by Lemahieu and Van Proeyen \cite[Proposition~14]{LVmcndss}. In our proof we will consider the same seven cases as they did, distinguishing all possible configurations of contributing $B_1$-facets. The idea is to calculate in every case the residue(s) of \Zof\ at the candidate pole $s_0$ in question, based on Denef and Hoornaert's formula for \Zof\ for non-degenerated $f$ (Theorem~\ref{formdenhoor}). The main difficulty in comparison with the topological zeta function approach lies in the calculation of $\Sigma(\delta)(s_0)$ and $\Sigma(\delta)'(s_0)$ for different simplicial cones $\delta$ (see Equation~\eqref{defSigmadiThDH} in Theorem~\ref{formdenhoor}). 
Whereas for the topological zeta function it is sufficient to consider the multiplicity of a cone, for the $p$-adic zeta function one has to sum over the lattice points that yield this multiplicity. Lemahieu and Van Proeyen used a computer algebra package to manipulate the rational expressions they obtained for the topological zeta function and so achieved their result; in the $p$-adic case this approach is no longer possible. In order to deal with the aforementioned sums, we study in Section~\ref{fundpar} the integral points in a three-dimensional fundamental parallelepiped. The aim is to obtain an explicit description of these points, which we can then use in the rest of the proof to calculate the sums $\Sigma(\delta)(s_0)$ and $\Sigma(\delta)'(s_0)$ over them. These calculations often lead to polynomial expressions with floor or ceiling functions in the exponents; dealing with them forms the second main difficulty of the proof. A third complication is due to the existence of imaginary candidate poles; their residues are usually harder to calculate than those of their real counterparts. \begin{remark} Although everything in this paper is formulated for \Qp, the results can be generalized in a straightforward way to arbitrary $p$-adic fields. The reason is that Denef and Hoornaert's formula for Igusa's zeta function has a very similar form over finite field extensions of \Qp. \end{remark} \subsection{Overview of the paper} As mentioned before, Section~\ref{fundpar} contains an elaborate study of the integral points in three-dimensional fundamental parallelepipeds. Sections~\ref{secgeval1art3}--\ref{secgeval7art3} cover the proof of Theorem~\ref{maintheoartdrie}; every section treats one possible configuration of $B_1$-facets contributing to a same candidate pole. In Section~\ref{sectkarakter} we verify the analogue of Theorem~\ref{maintheoartdrie} for Igusa's zeta function of a polynomial $f(x_1,\ldots,x_n)\in\Zp[x_1,\ldots,x_n]$ and a non-trivial character of \Zpx. This leads to the Monodromy Conjecture in this case as well. In Section~\ref{sectmotivisch} we state and prove the motivic version of our main theorem; i.e., we obtain the motivic Monodromy Conjecture for a non-degenerated surface singularity. This section also contains a detailed proof of the motivic analogue of Denef and Hoornaert's formula. Our objective is to obtain a formula for the motivic zeta function as an element in the ring $\MC[[T]]$, as it is defined, rather than as an element in some localization of $\MC[[T]]$.\footnote{The ring \MC\ denotes the localization of the Grothendieck ring of complex algebraic varieties with respect to the class of the affine line, while $T$ is a formal indeterminate.} This explains the technicality of the formula and its proof. \section{On the integral points in a three-dimensional fundamental parallelepiped spanned by primitive vectors}\label{fundpar} \setcounter{subsection}{-1} \subsection{Introduction}\label{introductionalgfp} We recall some basic definitions and results. \begin{definition}[Primitive vector] A vector $w=(a_1,\ldots,a_n)\in\Zn$ is called primitive if $\gcd(a_1,\ldots,a_n)=1$. \end{definition} \begin{definition}[Fundamental parallelepiped] Let $w_1,\ldots,w_t$ be \R-linearly independent primitive vectors in $\Zplus^n$. We call the set \begin{equation*} \lozenge(w_1,\ldots,w_t)=\left\{\sum\nolimits_{j=1}^th_jw_j\;\middle\vert\;h_j\in[0,1);\quad j=1,\ldots,t\right\}\subset\Rplusn \end{equation*} the fundamental parallelepiped spanned by the vectors $w_1,\ldots,w_t$. 
\end{definition} \begin{definition}[Multiplicity] The number of integral points (i.e., points with integer coordinates) in a fundamental parallelepiped $\lozenge(w_1,\ldots,w_t)$ is called the multiplicity of the fundamental parallelepiped. We denote it by \begin{equation*} \mult\lozenge(w_1,\ldots,w_t)=\#\left(\Zn\cap\lozenge(w_1,\ldots,w_t)\right). \end{equation*} \end{definition} The following result is well-known. \begin{proposition}\label{multipliciteit} Let $w_1,\ldots,w_t$ be \R-linearly independent primitive vectors in $\Zplus^n$. The multiplicity of the fundamental parallelepiped $\lozenge(w_1,\ldots,w_t)$ equals the greatest common divisor of the absolute values of the determinants of all $(t\times t)$-submatrices of the $(t\times n)$-matrix whose rows contain the coordinates of $w_1,\ldots,w_t$. \end{proposition} \begin{notation}\label{notatiematrices} We will denote the determinant of a square, real matrix $M=(a_{ij})_{ij}$ by $\det M=\lvert a_{ij}\rvert_{ij}$ and its absolute value by $\abs{\det M}=\lVert a_{ij}\rVert_{ij}$. \end{notation} For the rest of this section, we fix three linearly independent primitive vectors $w_1,w_2,w_3\in\Zplus^3$ and denote their coordinates by \begin{equation*} w_1(a_1,b_1,c_1),\qquad w_2(a_2,b_2,c_2),\qquad\text{and}\qquad w_3(a_3,b_3,c_3). \end{equation*} We also fix notations for the following sets and their cardinalities (cfr.\ Figure~\ref{fundpar3D}): \begin{alignat*}{3} H&=\Z^3\cap\lozenge(w_1,w_2,w_3),&\qquad\qquad\ \ \mu&=\#H&&=\mult\lozenge(w_1,w_2,w_3),\\ H_1&=\Z^3\cap\lozenge(w_2,w_3)\subset H,&\mu_1&=\#H_1&&=\mult\lozenge(w_2,w_3),\\ H_2&=\Z^3\cap\lozenge(w_1,w_3)\subset H,&\mu_2&=\#H_2&&=\mult\lozenge(w_1,w_3),\\ H_3&=\Z^3\cap\lozenge(w_1,w_2)\subset H,&\mu_3&=\#H_3&&=\mult\lozenge(w_1,w_2). \end{alignat*} \begin{figure}
\centering
% Sketch of the fundamental parallelepiped $\lozenge(w_1,w_2,w_3)$ together with the parallelepipeds $\lozenge(w_2,w_3)$, $\lozenge(w_1,w_3)$, and $\lozenge(w_1,w_2)$ spanned by two of the three vectors, with the corresponding sets of integral points $H,H_1,H_2,H_3$ indicated.
\caption{A fundamental parallelepiped spanned by three primitive vectors in $\Zplus^3$. 
The sets $H,H_1,H_2,H_3$ denote the intersections of the respective fundamental parallelepipeds with $\Z^3$.} \label{fundpar3D} \end{figure} Throughout this section we consider the matrix \begin{equation*} M= \begin{pmatrix} a_1&b_1&c_1\\ a_2&b_2&c_2\\ a_3&b_3&c_3 \end{pmatrix} \in\Zplus^{3\times 3} \end{equation*} and its minors \begin{align*} M_{11}&= \begin{pmatrix} b_2&c_2\\ b_3&c_3 \end{pmatrix}, & M_{12}&= \begin{pmatrix} a_2&c_2\\ a_3&c_3 \end{pmatrix}, & M_{13}&= \begin{pmatrix} a_2&b_2\\ a_3&b_3 \end{pmatrix},\\ M_{21}&= \begin{pmatrix} b_1&c_1\\ b_3&c_3 \end{pmatrix}, & M_{22}&= \begin{pmatrix} a_1&c_1\\ a_3&c_3 \end{pmatrix}, & M_{23}&= \begin{pmatrix} a_1&b_1\\ a_3&b_3 \end{pmatrix},\\ M_{31}&= \begin{pmatrix} b_1&c_1\\ b_2&c_2 \end{pmatrix}, & M_{32}&= \begin{pmatrix} a_1&c_1\\ a_2&c_2 \end{pmatrix}, & M_{33}&= \begin{pmatrix} a_1&b_1\\ a_2&b_2 \end{pmatrix}. \end{align*} Let us denote $d=\det M$ and $d_{ij}=\det M_{ij}$; $i,j=1,2,3$. The matrix \begin{equation*} \adj M= \begin{pmatrix} d_{11}&-d_{21}&d_{31}\\ -d_{12}&d_{22}&-d_{32}\\ d_{13}&-d_{23}&d_{33} \end{pmatrix} =\left((-1)^{i+j}d_{ij}\right)_{ij}^{\mathrm{T}} \end{equation*} is called the adjugate matrix of $M$ and has the important property that \begin{equation*} (\adj M)M=M(\adj M)=dI, \end{equation*} with $I$ the $(3\times 3)$-identity matrix. According to Proposition~\ref{multipliciteit}, we have \begin{alignat*}{2} \mu&=\#H&&=\abs{d},\\ \mu_1&=\#H_1&&=\gcd(\abs{d_{11}},\abs{d_{12}},\abs{d_{13}}),\\ \mu_2&=\#H_2&&=\gcd(\abs{d_{21}},\abs{d_{22}},\abs{d_{23}}),\\ \mu_3&=\#H_3&&=\gcd(\abs{d_{31}},\abs{d_{32}},\abs{d_{33}}). \end{alignat*} Note that every $h\in H$ can be written in a unique way as \begin{equation*} h=h_1w_1+h_2w_2+h_3w_3 \end{equation*} with $h_j\in[0,1)$; $j=1,2,3$. We shall always denote the coordinates of a point $h\in H$ with respect to the basis $(w_1,w_2,w_3)$ of $\R^3$ over \R, by $(h_1,h_2,h_3)$. \begin{notation}\label{notatiemodulo} Let $a\in\R$. We denote by $\lfloor a\rfloor$ the largest integer not greater than $a$ (integer part of $a$) and by $\lceil a\rceil$ the smallest integer not less than $a$. The fractional part of $a$ will be denoted by $\{a\}=a-\lfloor a\rfloor\in[0,1)$. By generalization, we shall denote for any $b\in\Rplusnul$ by $\{a\}_b$ the unique element $\{a\}_b\in[0,b)$ such that $a-\{a\}_b\in b\Z$. Note that \begin{equation*} \{a\}_b=b\left\{\frac{a}{b}\right\}. \end{equation*} \end{notation} The aim of this section is to prove the following theorem. \begin{theorem}\label{algfp} \begin{enumerate} \item\label{punteen} The multiplicities $\mu_1,\mu_2,\mu_3$ all divide $\mu$; \item\label{punttwee} we have even more: for all distinct $i,j\in\{1,2,3\}$ it holds that $\mu_i\mu_j\mid\mu$. \item\label{puntdrie} For every $h\in H_1$ the coordinates $h_2,h_3$ of $h$ belong to the set \begin{equation*} \left\{0,\frac{1}{\mu_1},\frac{2}{\mu_1},\ldots,\frac{\mu_1-1}{\mu_1}\right\}, \end{equation*} and every element of the above set is the $w_2$-coordinate ($w_3$-coordinate) of exactly one point $h\in H_1$; i.e., \begin{equation*} \{h_2\mid h\in H_1\}=\{h_3\mid h\in H_1\}=\left\{0,\frac{1}{\mu_1},\frac{2}{\mu_1},\ldots,\frac{\mu_1-1}{\mu_1}\right\}. \end{equation*} Moreover, there exists a unique $\xi_1\in\vereen$ with $\xi_1+\mu_1\Z$ a generator of the additive group $\Z/\mu_1\Z$ (i.e., with $\gcd(\xi_1,\mu_1)=1$), such that all $\mu_1$ points of $H_1$ are given by \begin{equation*} \frac{i}{\mu_1}w_2+\left\{\frac{i\xi_1}{\mu_1}\right\}w_3;\qquad i=0,\ldots,\mu_1-1. 
\end{equation*} Of course, we have analogous results for $H_2$ and $H_3$. \item\label{puntvier} For every $h\in H$ the coordinate $h_1$ of $h$ belongs to the set \begin{equation*} \left\{0,\frac{\mu_1}{\mu},\frac{2\mu_1}{\mu},\ldots,\frac{\mu-\mu_1}{\mu}\right\}. \end{equation*} Moreover, every possible coordinate $l\mu_1/\mu$, $l\in\{0,\ldots,\mu/\mu_1-1\}$, occurs precisely $\mu_1$ times. (The set $H$ indeed contains $\mu_1(\mu/\mu_1)=\mu$ points.) We have of course analogous results for the coordinates $h_2$ and $h_3$ of the points $h\in H$. \item\label{puntvijf} By (ii) we can write $\mu=\mu_1\mu_2\phi_3$ with $\phi_3\in\Zplusnul$. It then holds that \begin{equation*} \gcd(\mu_1,\mu_2)\mid\mu_3\mid\gcd(\mu_1,\mu_2)\phi_3. \end{equation*} As a consequence we have that $\gcd(\mu_1,\mu_2,\mu_3)=\gcd(\mu_1,\mu_2)$. (The same result holds, of course, as well for the other two combinations of two out of three multiplicities $\mu_j$.) \item\label{puntzes} We give an explicit description of the $\mu$ points of $H$. \item\label{puntzeven} Finally, we explain how the numbers $\xi_1,\xi_2,\xi_3$ (mentioned above), and $\eta,\eta',l_0$ (defined later on) that appear in the several descriptions of points of $H$, can be calculated from the coordinates of $w_1,w_2,w_3$. \end{enumerate} \end{theorem} \subsection{A group structure on $H$} \begin{notation} For any $h\in\R^3$ we denote by $\{h\}$ its reduction modulo $\Z w_1+\Z w_2+\Z w_3$; i.e., $\{h\}$ denotes the unique element $\{h\}\in\lozenge(w_1,w_2,w_3)$ such that $h-\{h\}\in\Z w_1+\Z w_2+\Z w_3$. If we write $h$ as $h=h_1w_1+h_2w_2+h_3w_3$ with $h_1,h_2,h_3\in\R$, we have that \begin{equation*} \{h\}=\{h_1\}w_1+\{h_2\}w_2+\{h_3\}w_3. \end{equation*} \end{notation} We can make $H$ into a group by considering addition modulo $\Z w_1+\Z w_2+\Z w_3$ as a group law: \begin{equation*} \{\cdot +\cdot\}:H\times H\to H:(h,h')\mapsto \{h+h'\}. \end{equation*} The operation $\{\cdot +\cdot\}$ makes $H$ into a finite abelian group of order $\mu$. It is easy to verify that the subsets $H_1,H_2,H_3$ of $H$ are in fact subgroups. Consider the abelian group $\Z^3,+$ and its subgroups \begin{align*} \Lambda&=\Z w_1+\Z w_2+\Z w_3,&\Lambda_1&=\Z w_2+\Z w_3,\\ \Lambda_2&=\Z w_1+\Z w_3,&\Lambda_3&=\Z w_1+\Z w_2, \end{align*} generated by $\{w_1,w_2,w_3\},\{w_2,w_3\},\{w_1,w_3\},$ and $\{w_1,w_2\}$, respectively. It then holds that \begin{equation}\label{isomorfismen} \begin{alignedat}{2} H&\cong\frac{\Z^3}{\Lambda},&H_1&\cong\frac{\Z^3\cap\left(\R w_2+\R w_3\right)}{\Lambda_1},\\ H_2&\cong\frac{\Z^3\cap\left(\R w_1+\R w_3\right)}{\Lambda_2},&\qquad H_3&\cong\frac{\Z^3\cap\left(\R w_1+\R w_2\right)}{\Lambda_3}. \end{alignedat} \end{equation} \subsection{Divisibility among the multiplicities $\mu,\mu_1,\mu_2,\mu_3$} Since $H_1,H_2,H_3$ form subgroups of $H$, their orders divide the order of $H$: $\mu_1,\mu_2,\mu_3\mid\mu$ (Theorem~\ref{algfp}(i)). Consider the subgroups $H_1,H_2$ of $H$. The subgroup $H_1\cap H_2$ precisely contains the integral points in the fundamental parallelepiped \begin{equation*} \lozenge(w_3)=\{h_3w_3\mid h_3\in[0,1)\}. \end{equation*} Hence since $w_3$ is primitive, $H_1\cap H_2$ is the trivial group (this can also be seen from the isomorphisms \eqref{isomorfismen}). It follows that $H_1+H_2\cong H_1\oplus H_2$ and thus \begin{equation*} \abs{H_1+H_2}=\abs{H_1\oplus H_2}=\abs{H_1}\abs{H_2}=\mu_1\mu_2. \end{equation*} The fact that $H_1+H_2$ is a subgroup of $H$ now easily implies that $\mu_1\mu_2\mid\mu$. 
Analogously, we find that $\mu_1\mu_3,\mu_2\mu_3\mid\mu$. This proves Theorem~\ref{algfp}(ii). From now on, we shall write \begin{equation*} \mu=\mu_1\mu_2\phi_3=\mu_1\mu_3\phi_2=\mu_2\mu_3\phi_1 \end{equation*} with $\phi_1,\phi_2,\phi_3\in\Zplusnul$. \subsection{On the $\mu_1$ points of $H_1$} Let $h\in H_1=\Z^3\cap\lozenge(w_2,w_3)$ and write \begin{equation*} h=h_2w_2+h_3w_3 \end{equation*} with $h_2,h_3\in[0,1)$. Because $\abs{H_1}=\mu_1$, the $\mu_1$-th multiple of $h$ in $H$ must equal the identity element: \begin{equation*} \{\mu_1h\}=\{\mu_1h_2\}w_2+\{\mu_1h_3\}w_3=(0,0,0); \end{equation*} i.e., $\{\mu_1h_2\}=\{\mu_1h_3\}=0$, and thus $h_2,h_3\in(1/\mu_1)\Z$. Since $h_2,h_3\in(1/\mu_1)\Z$ and $0\leqslant h_2,h_3<1$, the only possible values for $h_2,h_3$ are \begin{equation*} 0,\frac{1}{\mu_1},\frac{2}{\mu_1},\ldots,\frac{\mu_1-1}{\mu_1}. \end{equation*} Moreover, since $w_2,w_3$ are primitive, every $i/\mu_1$, $i\in\vereen$, is the $w_2$-coordinate ($w_3$-coordinate) of at most, and therefore exactly, one point $h\in H_1$: \begin{equation*} \{h_2\mid h\in H_1\}=\{h_3\mid h\in H_1\}=\left\{0,\frac{1}{\mu_1},\frac{2}{\mu_1},\ldots,\frac{\mu_1-1}{\mu_1}\right\}. \end{equation*} So there exists a unique $\xi_1\in\vereen$ such that \begin{equation}\label{puntenvanHeen} h^{\ast}=\frac{1}{\mu_1}w_2+\frac{\xi_1}{\mu_1}w_3\in H_1. \end{equation} Consider the cyclic subgroup $\langle h^{\ast}\rangle\subset H_1$ generated by $h^{\ast}$. This subgroup contains the $\mu_1$ distinct elements \begin{equation*} \{ih^{\ast}\}=\frac{i}{\mu_1}w_2+\left\{\frac{i\xi_1}{\mu_1}\right\}w_3;\qquad i=0,\ldots,\mu_1-1; \end{equation*} of $H_1$, and therefore equals $H_1$. Figure~\ref{fundpar2D} illustrates the situation. This gives us a complete\footnote{In Paragraph~\ref{pardetxieencnul} we explain how to obtain $\xi_1$ from the coordinates of $w_2$ and $w_3$.} description of the points of $H_1$. Besides, since $h_3=\{i\xi_1/\mu_1\}$ runs through $\{0,1/\mu_1,\ldots,(\mu_1-1)/\mu_1\}$ when $i$ runs through $\{0,\ldots,\mu_1-1\}$, we have that $\xi_1+\mu_1\Z$ generates $\Z/\mu_1\Z,+$ and therefore $\gcd(\xi_1,\mu_1)=1$. Obviously, analogous results hold for $H_2$ and $H_3$, concluding Theorem~\ref{algfp}(iii). 
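For concreteness, the description of $H_1$ in Theorem~\ref{algfp}(iii) can be checked by brute force on small examples. The following Python sketch (using NumPy) is only an illustration: the vectors $w_2,w_3$ are made-up data, not the vectors underlying Figure~\ref{fundpar2D}, and the enumeration simply tests every integral point in a bounding box. It computes $\mu_1$ as the greatest common divisor of the $2\times2$ minors (Proposition~\ref{multipliciteit}), lists the points of $H_1=\Z^3\cap\lozenge(w_2,w_3)$ by their coordinates $(h_2,h_3)$, and reads off $\xi_1$ from the point with $w_2$-coordinate $1/\mu_1$.
\begin{verbatim}
import numpy as np
from math import gcd

# Hypothetical primitive vectors in Z_+^3 (illustrative data only).
w2 = np.array([1, 2, 0])
w3 = np.array([3, 1, 5])

# mu_1 = gcd of the absolute values of the 2x2 minors (Proposition).
pairs = [(1, 2), (0, 2), (0, 1)]
minors = [abs(int(w2[i]) * int(w3[j]) - int(w2[j]) * int(w3[i]))
          for i, j in pairs]
mu1 = gcd(gcd(minors[0], minors[1]), minors[2])

# Brute-force enumeration of H_1 = Z^3 intersected with <>(w2, w3).
A = np.column_stack([w2, w3]).astype(float)
H1 = []
for x in range(int(w2[0] + w3[0]) + 1):
    for y in range(int(w2[1] + w3[1]) + 1):
        for z in range(int(w2[2] + w3[2]) + 1):
            point = np.array([x, y, z], dtype=float)
            h = np.linalg.lstsq(A, point, rcond=None)[0]
            if (np.allclose(A @ h, point)
                    and np.all(h > -1e-9) and np.all(h < 1 - 1e-9)):
                H1.append((round(float(h[0]), 6), round(float(h[1]), 6)))

assert len(H1) == mu1   # Theorem (iii): the parallelepiped holds mu_1 points
# xi_1 is mu_1 times the w3-coordinate of the point with w2-coordinate 1/mu_1.
xi1 = 0 if mu1 == 1 else round(
    mu1 * next(h3 for h2, h3 in H1 if abs(h2 - 1 / mu1) < 1e-9))
print(mu1, xi1, sorted(H1))   # for this choice: 5, 3, and five (h2, h3) pairs
\end{verbatim}
For the vectors above one finds $\mu_1=5$ and $\xi_1=3$, and the listed coordinates $(h_2,h_3)$ are exactly the pairs $\bigl(i/5,\{3i/5\}\bigr)$, $i=0,\ldots,4$, as predicted.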
\begin{figure} \psset{unit=.110864745\textwidth \centering \begin{pspicture}(-2.52,-.92)(6.5,2.92 {\lightgray\pstThreeDCoor[linecolor=lightgray,linewidth=.7pt,Alpha=135,xMin=0,yMin=0,zMin=0,xMax=2.2,yMax=2.6,zMax=2.97,spotX=0]} { \psset{linecolor=black,linewidth=.3pt,linestyle=dashed} \psline(-2.5,-.91)(-2.5,2.91)\psline(-2.5,2.91)(6.5,2.91)\psline(6.5,2.91)(6.5,-.91)\psline(6.5,-.91)(-2.5,-.91) } { \psset{linecolor=darkgray,linewidth=.3pt,linestyle=dashed} \psline(.12,.36)(1.8,.6)\psline(.24,.72)(.8,.8)\psline(.36,1.08)(2.6,1.4)\psline(.48,1.44)(1.6,1.6) \psline(.56,.08)(.8,.8)\psline(1.12,.16)(1.6,1.6)\psline(1.68,.24)(1.8,.6)\psline(2.24,.32)(2.6,1.4) } \psdots[dotsize=4pt,dotstyle=o,linecolor=black](-1.8,-.6)(-.8,-.8)(-2,.4)(-1,.2)(0,0)(1,-.2)(2,-.4)(3,-.6)(4,-.8)(-2.2,1.4)(-1.2,1.2)(-.2,1)(.8,.8)(1.8,.6)(2.8,.4)(3.8,.2)(4.8,0)(5.8,-.2)(-2.4,2.4)(-1.4,2.2)(-.4,2)(.6,1.8)(1.6,1.6)(2.6,1.4)(3.6,1.2)(4.6,1)(5.6,.8)(.4,2.8)(1.4,2.6)(2.4,2.4)(3.4,2.2)(4.4,2)(5.4,1.8)(6.4,1.6)(5.2,2.8)(6.2,2.6) \psdots[dotsize=4pt,linecolor=black](0,0)(.8,.8)(1.8,.6)(1.6,1.6)(2.6,1.4) { \psset{linecolor=black,linewidth=1pt,linestyle=dashed} \psline(.6,1.8)(3.4,2.2)\psline(2.8,.4)(3.4,2.2) } { \psset{linecolor=black,linewidth=1pt,arrows=->,arrowscale=1} \psline(0,0)(.6,1.8)\psline(0,0)(2.8,.4) } \uput[-90](0,0){$0$}\uput[-90](.56,.08){$\frac15$}\uput[-90](1.12,.16){$\frac25$}\uput[-90](1.68,.24){$\frac35$}\uput[-90](2.24,.32){$\frac45$}\uput[180](0,0){$0$}\uput[180](.12,.36){\scriptsize$1/5$}\uput[180](.24,.72){\scriptsize$2/5$}\uput[180](.36,1.08){\scriptsize$3/5$}\uput[180](.48,1.44){\scriptsize$4/5$}\uput{6pt}[-45](2.8,.4){$w_2$}\uput{6pt}[135](.6,1.8){$w_3$}\uput{6pt}[45](3.4,2.2){$w_2+w_3$}\uput{6pt}[135](6.5,-.91){$\R w_2+\R w_3$}\rput{8.4}(2.68,1.864){$\lozenge(w_2,w_3)$} \end{pspicture} \caption{Example of a fundamental parallelepiped $\lozenge(w_2,w_3)$ spanned by two primitive vectors $w_2$ and $w_3$ in $\Zplus^3$. The dots represent the integral points in the plane $\R w_2+\R w_3$; the solid dots are the integral points inside the fundamental parallelepiped and make up $H_1$. Observe the coordinates $(h_2,h_3)$ of the points $h\in H_1$ with respect to the basis $(w_2,w_3)$. In this example the multiplicity $\mu_1$ of $\lozenge(w_2,w_3)$ equals $5$ and $\xi_1=2$.} \label{fundpar2D} \end{figure} \subsection{On the $w_1$-coordinates of the points of $H$} Let $h\in H$ and write \begin{equation}\label{hgelijkaan} h=h_1w_1+h_2w_2+h_3w_3 \end{equation} with $h_1,h_2,h_3\in[0,1)$. Because $\abs{H}=\mu$, it holds that \begin{equation*} \{\mu h\}=\{\mu h_1\}w_1+\{\mu h_2\}w_2+\{\mu h_3\}w_3=(0,0,0), \end{equation*} and therefore \begin{equation*} h_1,h_2,h_3\in\left\{0,\frac{1}{\mu},\frac{2}{\mu},\ldots,\frac{\mu-1}{\mu}\right\}. \end{equation*} Let us study the $w_1$-coordinates of the $\mu$ points of $H$ in more detail. Note that the $\mu/\mu_1$ cosets of the subgroup $H_1$ of $H$ form the equivalence classes of the equivalence relation $\sim$ on $H$ defined by \begin{equation*} h\sim h'\qquad\text{if and only if}\qquad h_1=h_1'; \end{equation*} i.e., $H/H_1=H/\sim$ as sets. This implies that there are $\mu/\mu_1$ possible values for the $w_1$-coordinate of a point of $H$, and since every coset of $H_1$ contains $\mu_1$ elements, every possible $w_1$-coordinate occurs precisely $\mu_1$ times. Moreover, the classes modulo \Z\ of the possible $w_1$-coordinates form a subgroup of $(1/\mu)\Z/\Z$, isomorphic to $H/H_1$. 
The possible values for the coordinates $h_1$ of the points $h\in H$ are therefore \begin{equation*} 0,\frac{\mu_1}{\mu},\frac{2\mu_1}{\mu},\ldots,\frac{\mu-\mu_1}{\mu}, \end{equation*} and every $l\mu_1/\mu$, $l\in\{0,\ldots,\mu/\mu_1-1\}$, is the $w_1$-coordinate of exactly $\mu_1$ points of $H$. Again, there are similar results for the other two coordinates $h_2$ and $h_3$. We conclude Theorem~\ref{algfp}(iv). \subsection{More divisibility relations} \begin{notation}\label{notatiegammalambda} For the remaining of this section, we will use the following notations: \begin{equation*} \gamma=\gcd(\mu_1,\mu_2)\qquad\text{and}\qquad\lambda=\lcm(\mu_1,\mu_2)=\frac{\mu_1\mu_2}{\gamma}. \end{equation*} We will denote as well $\mu_1'=\mu_1/\gamma$ and $\mu_2'=\mu_2/\gamma$. \end{notation} Recall that \begin{equation*} \mu=\mu_1\mu_2\phi_3=\mu_1\mu_3\phi_2=\mu_2\mu_3\phi_1. \end{equation*} It follows that $\mu_1\mu_3,\mu_2\mu_3\mid\mu_1\mu_2\phi_3$. Hence $\mu_3\mid\mu_1\phi_3,\mu_2\phi_3$ and thus $\mu_3\mid\gamma\phi_3$. We already know that the subgroup $H_1+H_2$ of $H$ is isomorphic to the direct sum $H_1\oplus H_2$ of $H_1$ and $H_2$ and therefore contains $\mu_1\mu_2$ elements. We can write down the $\mu_1\mu_2$ points of $H_1+H_2$ explicitly. The $\mu_1$ points of $H_1$ are given by \begin{equation*} \left\{\frac{i\xi_1'}{\mu_1}\right\}w_2+\frac{i}{\mu_1}w_3;\qquad i=0,\ldots,\mu_1-1; \end{equation*} for some $\xi_1'\in\vereen$ with $\gcd(\xi_1',\mu_1)=1$. We prefer this representation (with $\xi_1'$ instead of $\xi_1$) of the points of $H_1$ to the one on p.~\pageref{puntenvanHeen} (Eq.~\eqref{puntenvanHeen}), because this one is more convenient for what follows. In the same way, we can list the points of $H_2$ as \begin{equation*} \left\{\frac{j\xi_2'}{\mu_2}\right\}w_1+\frac{j}{\mu_2}w_3;\qquad j=0,\ldots,\mu_2-1; \end{equation*} for some uniquely determined $\xi_2'\in\vertwee$ relatively prime to $\mu_2$. Consequently, $H_1+H_2$ consists of the following $\mu_1\mu_2$ points: \begin{multline}\label{puntenvanHeenplusHtwee} \left\{\frac{j\xi_2'}{\mu_2}\right\}w_1+\left\{\frac{i\xi_1'}{\mu_1}\right\}w_2+\left\{\frac{j\mu_1+i\mu_2}{\mu_1\mu_2}\right\}w_3;\\i=0,\ldots,\mu_1-1;\quad j=0,\ldots,\mu_2-1. \end{multline} Let us take a look at the $w_3$-coordinates of the above points. We see that for each $h\in H_1+H_2$, the $w_3$-coordinate $h_3$ is a multiple of $\gamma/\mu_1\mu_2$. Indeed, for all $i,j$ it holds that $\gamma=\gcd(\mu_1,\mu_2)\mid j\mu_1+i\mu_2$. Moreover, $\gamma/\mu_1\mu_2$ and all of its multiples in $[0,1)$ are the $w_3$-coordinate of some point in $H_1+H_2$. Indeed, if we write $\gamma$ as $\gamma=\alpha_1\mu_1+\alpha_2\mu_2$ with $\alpha_1,\alpha_2\in\Z$, we have that \begin{equation*} \frac{\gamma}{\mu_1\mu_2}=\left\{\frac{j\mu_1+i\mu_2}{\mu_1\mu_2}\right\} \end{equation*} for $j=\{\alpha_1\}_{\mu_2}$ and $i=\{\alpha_2\}_{\mu_1}$. It follows that \begin{align*} &\{h_3\mid h\in H_1+H_2\}\\ &\qquad\qquad=\left\{\left\{\frac{j\mu_1+i\mu_2}{\mu_1\mu_2}\right\}\;\middle\vert\;i\in\vereen,\ j\in\vertwee\right\}\\ &\qquad\qquad=\left\{0,\frac{\gamma}{\mu_1\mu_2},\frac{2\gamma}{\mu_1\mu_2},\ldots,\frac{\mu_1\mu_2-\gamma}{\mu_1\mu_2}\right\}\\ &\qquad\qquad=\left\{0,\frac{1}{\lambda},\frac{2}{\lambda},\ldots,\frac{\lambda-1}{\lambda}\right\}. \end{align*} As we know, every multiple of $\mu_3/\mu$ in $[0,1)$ is the $w_3$-coordinate of some point in $H$. 
Choose $h^{\ast}\in H$ with $h^{\ast}_3=\mu_3/\mu$, and consider the coset $h^{\ast}+(H_1+H_2)$ of $H_1+H_2$ in the quotient group $H/(H_1+H_2)$. Since \begin{equation*} \left\lvert\frac{H}{H_1+H_2}\right\rvert=\frac{\abs{H}}{\abs{H_1+H_2}}=\frac{\mu}{\mu_1\mu_2}=\phi_3, \end{equation*} it holds that \begin{equation*} \bigl\{\phi_3\bigl(h^{\ast}+(H_1+H_2)\bigr)\bigr\}=\{\phi_3h^{\ast}\}+(H_1+H_2)=H_1+H_2, \end{equation*} and thus $\{\phi_3h^{\ast}\}\in H_1+H_2$. The $w_3$-coordinate \begin{equation*} \{\phi_3h^{\ast}_3\}=\left\{\frac{\phi_3\mu_3}{\mu}\right\}=\left\{\frac{\mu_3}{\mu_1\mu_2}\right\} \end{equation*} of $\{\phi_3h^{\ast}\}$ therefore must equal \begin{equation*} \left\{\frac{\mu_3}{\mu_1\mu_2}\right\}=\left\{\frac{j\mu_1+i\mu_2}{\mu_1\mu_2}\right\} \end{equation*} for some $i\in\vereen$ and some $j\in\vertwee$. It follows that $\mu_3$ is a \Z-linear combination of $\mu_1$ and $\mu_2$; hence $\gamma\mid\mu_3$. Next, we will count the number of points in $(H_1+H_2)\cap H_3$. Based on the explicit description \eqref{puntenvanHeenplusHtwee} of the $\mu_1\mu_2$ points of $H_1+H_2$, we have to examine for which $(i,j)\in\vereen\times\vertwee$ it holds that \begin{equation*} h_3=\left\{\frac{j\mu_1+i\mu_2}{\mu_1\mu_2}\right\}=0. \end{equation*} Since \begin{equation*} 0\leqslant\frac{j\mu_1+i\mu_2}{\mu_1\mu_2}<2, \end{equation*} we have that $h_3=0$ if and only if $i=j=0$ or \begin{equation}\label{eqvoorwaarde} j\mu_1+i\mu_2=\mu_1\mu_2. \end{equation} For this last equality to hold, it is necessary that $\mu_2\mid j\mu_1$ and $\mu_1\mid i\mu_2$, which is equivalent to\footnote{Cfr.\ Notation~\ref{notatiegammalambda}.} $\lambda\mid j\mu_1,i\mu_2$, and even to $\mu_2'\mid j$ and $\mu_1'\mid i$. In other words, Equality~\ref{eqvoorwaarde} implies that \begin{align*} &j=\frac{g\mu_2}{\gamma}\qquad\text{and}\qquad i=\frac{g'\mu_1}{\gamma}\\ \intertext{for certain $g,g'\in\{0,\ldots,\gamma-1\}$ and is therefore equivalent to} &j=\frac{g\mu_2}{\gamma}\qquad\text{and}\qquad i=\frac{(\gamma-g)\mu_1}{\gamma} \end{align*} for some $g\in\{1,\ldots,\gamma-1\}$. We can conclude that $(H_1+H_2)\cap H_3$ contains precisely $\gamma=\gcd(\mu_1,\mu_2)$ points, and they are given by \begin{equation*} h=\left\{\frac{g\xi_2'}{\gamma}\right\}w_1+\left\{\frac{(\gamma-g)\xi_1'}{\gamma}\right\}w_2;\qquad g=0,\ldots,\gamma-1. \end{equation*} Because $\xi_2'+\mu_2\Z$ and $\xi_1'+\mu_1\Z$ generate $\Z/\mu_2\Z$ and $\Z/\mu_1\Z$, respectively, and $\gamma\mid\mu_1,\mu_2$, it follows that $\xi_1'+\gamma\Z$ and $\xi_2'+\gamma\Z$ are both generators of $\Z/\gamma\Z$. (Moreover, the map $g\mapsto\{\gamma-g\}_{\gamma}$ is a permutation of $\{0,\ldots,\gamma-1\}$.) The coordinates \begin{equation*} h_1(g)=\left\{\frac{g\xi_2'}{\gamma}\right\}\qquad\text{and}\qquad h_2(g)=\left\{\frac{(\gamma-g)\xi_1'}{\gamma}\right\} \end{equation*} therefore both run through all the elements of \begin{equation*} \left\{0,\frac{1}{\gamma},\frac{2}{\gamma},\ldots,\frac{\gamma-1}{\gamma}\right\} \end{equation*} when $g$ runs through $\{0,\ldots,\gamma-1\}$. (Hence the maps $g\mapsto h_1(g)$ and $g\mapsto h_2(g)$ from $\{0,\ldots,\gamma-1\}$ to $\{0,1/\gamma,\ldots,(\gamma-1)/\gamma\}$ are both bijections.) This leads to the existence of a unique $\xi_{\gamma}\in\{0,\ldots,\gamma-1\}$, coprime to $\gamma$, such that the elements of $(H_1+H_2)\cap H_3$ can be represented as \begin{equation}\label{xigamma} \frac{g}{\gamma}w_1+\left\{\frac{g\xi_{\gamma}}{\gamma}\right\}w_2;\qquad g=0,\ldots,\gamma-1. 
\end{equation} Hence $(H_1+H_2)\cap H_3$ is the cyclic subgroup of $H$ generated by $(1/\gamma)w_1+(\xi_{\gamma}/\gamma)w_2$. \begin{remark} The number $\xi_{\gamma}$ appearing in \eqref{xigamma} is determined by the equality \begin{equation*} \xi_{\gamma}+\gamma\Z=-(\xi_2'+\gamma\Z)^{-1}(\xi_1'+\gamma\Z) \end{equation*} in the ring $\Z/\gamma\Z,+,\cdot$. Furthermore, we have that $\xi_{\gamma}=\{\xi_3\}_{\gamma}$, with $\xi_3$ the unique element of \verdrie\ such that \begin{equation*} \frac{1}{\mu_3}w_1+\frac{\xi_3}{\mu_3}w_2\in H_3. \end{equation*} \end{remark} \begin{remark} From $\gamma=\gcd(\mu_1,\mu_2)\mid\mu_3$, we see that in fact \begin{equation*} \gamma=\gcd(\mu_1,\mu_2,\mu_3)=\gcd(\abs{d_{ij}})_{i,j=1,2,3}, \end{equation*} and by symmetry, $\gamma=\gcd(\mu_1,\mu_2)=\gcd(\mu_1,\mu_3)=\gcd(\mu_2,\mu_3)$. \end{remark} The quotient group \begin{equation}\label{quotientgroep} \frac{H_1+H_2}{(H_1+H_2)\cap H_3} \end{equation} of order $\mu_1\mu_2/\gamma=\lambda$ partitions $H_1+H_2$ based on the $w_3$-coordinates of its points, and \begin{equation*} \{h_3+\Z\mid h\in H_1+H_2\},+ \end{equation*} is therefore the unique subgroup of $\R/\Z,+$ of order $\lambda$. The set of $w_3$-coordinates of points of $H_1+H_2$ is thus $\{0,1/\lambda,\ldots,(\lambda-1)/\lambda\}$ (we already knew that), and since every coset of $(H_1+H_2)\cap H_3$ contains $\gamma$ points, every $l/\lambda$, $l\in\{0,\ldots,\lambda-1\},$ is the $w_3$-coordinate of precisely $\gamma$ points in $H_1+H_2$. This ends the proof of Theorem~\ref{algfp}(v). \subsection{Explicit description of the points of $H$} We shall write the $\mu_1$ points of $H_1$ and the $\mu_2$ points of $H_2$ as \begin{alignat*}{3} h(i,0,0)&=\frac{i}{\mu_1}w_2&&+\left\{\frac{i\xi_1}{\mu_1}\right\}w_3;&\qquad&i=0,\ldots,\mu_1-1;\\\intertext{and} h(0,j,0)&=\frac{j}{\mu_2}w_1&&+\left\{\frac{j\xi_2}{\mu_2}\right\}w_3;&&j=0,\ldots,\mu_2-1; \end{alignat*} respectively. The $\mu_1\mu_2$ points of $H_1+H_2$ are then given by \begin{multline*} h(i,j,0)=\frac{j}{\mu_2}w_1+\frac{i}{\mu_1}w_2+\left\{\frac{i\xi_1\mu_2+j\xi_2\mu_1}{\mu_1\mu_2}\right\}w_3;\\i=0,\ldots,\mu_1-1;\quad j=0,\ldots,\mu_2-1. \end{multline*} The $w_3$-coordinate $h_3(i,j,0)$ of $h(i,j,0)$ can also be written as \begin{equation*} h_3(i,j,0)=\left\{\frac{i\xi_1\mu_2+j\xi_2\mu_1}{\mu_1\mu_2}\right\}=\frac{l(i,j,0)\gamma}{\mu_1\mu_2}=\frac{l(i,j,0)}{\lambda}, \end{equation*} with \begin{multline*} l(i,j,0)=\frac{\{i\xi_1\mu_2+j\xi_2\mu_1\}_{\mu_1\mu_2}}{\gamma}=\left\{\frac{i\xi_1\mu_2+j\xi_2\mu_1}{\gamma}\right\}_{\lambda}\\ =\{i\xi_1\mu_2'+j\xi_2\mu_1'\}_{\lambda}\in\{0,\ldots,\lambda-1\} \end{multline*} for all $i,j$. This results in \begin{equation*} h(i,j,0)=\frac{j}{\mu_2}w_1+\frac{i}{\mu_1}w_2+\frac{l(i,j,0)}{\lambda}w_3;\qquad i=0,\ldots,\mu_1-1;\quad j=0,\ldots,\mu_2-1; \end{equation*} where $l(i,j,0)$ runs exactly $\gamma$ times through all the elements of $\{0,\ldots,\lambda-1\}$ when $i$ and $j$ run through \vereen\ and \vertwee, respectively. Because $H$ is the disjoint union of the $\phi_3$ cosets of $H_1+H_2$ in $H$, we can describe all elements of $H$ by choosing representatives $h(0,0,k)$; $k=0,\ldots,\phi_3-1$; one for each coset, and then view $H$ as the set \begin{equation*} H=H_1+H_2+\{h(0,0,1),\ldots,h(0,0,\phi_3-1)\}.
\end{equation*} We know that every $h\in H$ has a $w_1$-coordinate $h_1$ of the form $h_1=t\mu_1/\mu$ for some $t\in\{0,\ldots,\mu/\mu_1-1\}$, i.e., of the form \begin{equation*} h_1=\frac{j\phi_3+k}{\mu_2\phi_3} \end{equation*} for some $j\in\vertwee$ and some $k\in\{0,\ldots,\phi_3-1\}$, and that every such number $(j\phi_3+k)/\mu_2\phi_3$ is the $w_1$-coordinate of precisely $\mu_1$ points of $H$. In this way, we can associate to each $h\in H$ a number $k=\{\mu_2\phi_3h_1\}_{\phi_3}\in\{0,\ldots,\phi_3-1\}$, and we see that $H/(H_1+H_2)$ is the partition of $H$ based on these values of $k$. The analogous result holds for the $w_2$-coordinates of the points of $H$. The $w_3$-coordinate $h_3$ of every point $h\in H$ has the form $h_3=t\mu_3/\mu$ for some $t\in\{0,\ldots,\mu/\mu_3-1\}$, and every $t\mu_3/\mu$ is the $w_3$-coordinate of precisely $\mu_3$ points of $H$. Since $\gamma\mid\mu_3\mid\gamma\phi_3$ and we actually have $\gamma=\gcd(\mu_1,\mu_2,\mu_3)$, we can put $\mu_3=\gamma\mu_3'$ and $\phi_3=\mu_3'\phi_3'$ with $\mu_3',\phi_3'\in\Zplusnul$. We can now write every $w_3$-coordinate $h_3$ as \begin{equation*} h_3=\frac{t\mu_3}{\mu}=\frac{t\gamma\mu_3'}{\mu_1\mu_2\phi_3}=\frac{t}{\lambda\phi_3'} \end{equation*} for some $t\in\{0,\ldots,\lambda\phi_3'-1\}$, and thus as \begin{equation*} h_3=\frac{t}{\lambda\phi_3'}=\frac{l\phi_3'+k'}{\lambda\phi_3'} \end{equation*} for some $l\in\{0,\ldots,\lambda-1\}$ and some $k'\in\{0,\ldots,\phi_3'-1\}$. The $k'=\{\lambda\phi_3'h_3\}_{\phi_3'}$, associated in this way to every $h\in H$, is constant on the cosets of $H_1+H_2$, but points in different cosets may have the same value for $k'$. In fact each of the $\phi_3'$ possible values for $k'$ is adopted in precisely $\mu_3'$ cosets of $H_1+H_2$. (This agrees with the fact that every $(l\phi_3'+k')/\lambda\phi_3'$ appears precisely $\mu_3=\gamma\mu_3'$ times as the $w_3$-coordinate of a point of $H$, considered that each $l/\lambda$ is the $w_3$-coordinate of precisely $\gamma$ points in $H_1+H_2$.) We can now choose representatives for the elements of $H/(H_1+H_2)$. First, choose a point $h^{\ast}\in H$ with $w_1$-coordinate $h^{\ast}_1=1/\mu_2\phi_3$. The $w_2$-coordinate of this point equals \begin{equation*} h^{\ast}_2=\frac{i_0\phi_3+\eta}{\mu_1\phi_3} \end{equation*} for some $i_0\in\vereen$ and some $\eta\in\{0,\ldots,\phi_3-1\}$. All $\mu_1$ points of $H$ with $w_1$-coordinate $1/\mu_2\phi_3$ are given by $\{h^{\ast}+h\}$, $h\in H_1$, and their $w_2$-coordinates by \begin{equation*} \left\{\frac{(i_0+i)\phi_3+\eta}{\mu_1\phi_3}\right\};\qquad i=0,\ldots,\mu_1-1; \end{equation*} this is, after reordering, by \begin{equation*} \frac{i\phi_3+\eta}{\mu_1\phi_3};\qquad i=0,\ldots,\mu_1-1. \end{equation*} It follows that there exists a unique point $h(0,0,1)\in H$ of the form \begin{equation*} h(0,0,1)=\frac{1}{\mu_2\phi_3}w_1+\frac{\eta}{\mu_1\phi_3}w_2+\frac{l_0\phi_3'+\eta'}{\lambda\phi_3'}w_3 \end{equation*} with $\eta\in\{0,\ldots,\phi_3-1\}$, $l_0\in\{0,\ldots,\lambda-1\}$, and $\eta'\in\{0,\ldots,\phi_3'-1\}$. We will choose this point $h(0,0,1)$ as the representative of its coset $h(0,0,1)+(H_1+H_2)$. The $\phi_3$ multiples \begin{multline*} \{kh(0,0,1)\}=\frac{k}{\mu_2\phi_3}w_1+\left\{\frac{k\eta}{\mu_1\phi_3}\right\}w_2+\left\{\frac{kl_0\phi_3'+k\eta'}{\lambda\phi_3'}\right\}w_3;\\ k=0,\ldots,\phi_3-1; \end{multline*} of $h(0,0,1)$ run through all cosets of $H_1+H_2$, and therefore, $h(0,0,1)+(H_1+H_2)$ is a generator of the cyclic group $H/(H_1+H_2)$. 
We can choose the elements $\{kh(0,0,1)\}$; $k=0,\ldots,\phi_3-1$; as representatives of their respective cosets, but we can also choose, for each $k$, as a representative for $\{kh(0,0,1)\}+(H_1+H_2)$, the unique element $h(0,0,k)\in\{kh(0,0,1)\}+H_1$ for which $h_2(0,0,k)<1/\mu_1$. We take the last option. This is, \begin{equation*} h(0,0,k)=\left\{kh(0,0,1)-\frac{i(k)}{\mu_1}w_2-\frac{i(k)\xi_1}{\mu_1}w_3\right\}, \end{equation*} with \begin{equation}\label{defik} i(k)=\left\{\frac{k\eta-\{k\eta\}_{\phi_3}}{\phi_3}\right\}_{\mu_1}=\left\{\left\lfloor\frac{k\eta}{\phi_3}\right\rfloor\right\}_{\mu_1}\in\vereen \end{equation} for $k=0,\ldots,\phi_3-1$, resulting in the following set of representatives for the elements of $H/(H_1+H_2)$: \begin{equation*} h(0,0,k)=\frac{k}{\mu_2\phi_3}w_1+\frac{\{k\eta\}_{\phi_3}}{\mu_1\phi_3}w_2+\frac{l(k)\phi_3'+\{k\eta'\}_{\phi_3'}}{\lambda\phi_3'}w_3;\qquad k=0,\ldots,\phi_3-1; \end{equation*} with for every $k$, \begin{equation}\label{deflk} l(k)=\left\{kl_0-i(k)\xi_1\mu_2'+\left\lfloor\frac{k\eta'}{\phi_3'}\right\rfloor\right\}_{\lambda}\in\{0,\ldots,\lambda-1\}, \end{equation} $\mu_2'=\mu_2/\gamma$, and $i(k)$ as in \eqref{defik}. When $k$ runs through $\{0,\ldots,\phi_3-1\}$, the coset $h(0,0,k)+(H_1+H_2)$ runs through all elements of $H/(H_1+H_2)$. This means that $\{k\eta\}_{\phi_3}$ runs through $\{0,\ldots,\phi_3-1\}$ once, while $\{k\eta'\}_{\phi_3'}$ runs through $\{0,\ldots,\phi_3'-1\}$ precisely $\mu_3'$ times. It follows that $\eta+\phi_3\Z$ and $\eta'+\phi_3'\Z$ are generators of $\Z/\phi_3\Z$ and $\Z/\phi_3'\Z$, respectively, and therefore $\gcd(\eta,\phi_3)=\gcd(\eta',\phi_3')=1$. We can now list all the points of $H$. We start with an overview. The $\mu_1$ points of $H_1$ are \begin{alignat*}{3} h(i,0,0)&=\frac{i}{\mu_1}w_2&&+\left\{\frac{i\xi_1}{\mu_1}\right\}w_3;&\qquad&i=0,\ldots,\mu_1-1;\\\intertext{while the $\mu_2$ points of $H_2$ are given by} h(0,j,0)&=\frac{j}{\mu_2}w_1&&+\left\{\frac{j\xi_2}{\mu_2}\right\}w_3;&&j=0,\ldots,\mu_2-1. \end{alignat*} This gives the following $\mu_1\mu_2$ points for $H_1+H_2$: \begin{multline*} h(i,j,0)=\{h(i,0,0)+h(0,j,0)\}=\frac{j}{\mu_2}w_1+\frac{i}{\mu_1}w_2+\frac{l(i,j)}{\lambda}w_3;\\i=0,\ldots,\mu_1-1;\quad j=0,\ldots,\mu_2-1; \end{multline*} with for all $i,j$, \begin{equation*} l(i,j)=\{i\xi_1\mu_2'+j\xi_2\mu_1'\}_{\lambda}\in\{0,\ldots,\lambda-1\}. \end{equation*} As representatives for the $\phi_3$ cosets of $H_1+H_2$, we chose \begin{equation*} h(0,0,k)=\frac{k}{\mu_2\phi_3}w_1+\frac{\{k\eta\}_{\phi_3}}{\mu_1\phi_3}w_2+\frac{l(k)\phi_3'+\{k\eta'\}_{\phi_3'}}{\lambda\phi_3'}w_3;\qquad k=0,\ldots,\phi_3-1; \end{equation*} with $l(k)$ as in \eqref{deflk}. 
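Before assembling the full list of points of $H$, note that the expressions \eqref{defik} and \eqref{deflk} are plain modular computations. The following Python sketch tabulates $i(k)$ and $l(k)$; all parameter values are hypothetical (they are not derived from actual vectors $w_1,w_2,w_3$) and merely respect the constraints $\lambda=\mu_1\mu_2/\gamma$, $\gcd(\eta,\phi_3)=1$, and $\gcd(\eta',\phi_3')=1$.
\begin{verbatim}
# Illustrative sketch of the bookkeeping in (defik) and (deflk);
# every value below is a made-up example, not data from the paper.
mu1, mu2, gamma = 4, 6, 2
lam = mu1 * mu2 // gamma      # lambda = lcm(mu1, mu2)
mu2p = mu2 // gamma           # mu_2'
phi3, phi3p = 6, 3            # phi_3 and phi_3'
eta, etap, l0, xi1 = 5, 2, 7, 3

def i_of_k(k):
    # i(k) = { floor(k*eta / phi_3) }_{mu_1}
    return (k * eta // phi3) % mu1

def l_of_k(k):
    # l(k) = { k*l_0 - i(k)*xi_1*mu_2' + floor(k*eta' / phi_3') }_{lambda}
    return (k * l0 - i_of_k(k) * xi1 * mu2p + k * etap // phi3p) % lam

for k in range(phi3):
    print(k, i_of_k(k), l_of_k(k))
\end{verbatim}
Python's modulo operator returns a representative in $[0,\lambda)$ even for negative arguments, which matches the convention $\{\cdot\}_{\lambda}$ of Notation~\ref{notatiemodulo}.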
Consequently, the $\mu=\mu_1\mu_2\phi_3$ points of $H$ are given by \begin{multline*} \begin{aligned} h(i,j,k)&=\{h(i,0,0)+h(0,j,0)+h(0,0,k)\}\\ &=\frac{j\phi_3+k}{\mu_2\phi_3}w_1+\frac{i\phi_3+\{k\eta\}_{\phi_3}}{\mu_1\phi_3}w_2+\frac{l(i,j,k)\phi_3'+\{k\eta'\}_{\phi_3'}}{\lambda\phi_3'}w_3; \end{aligned}\\ i=0,\ldots,\mu_1-1;\quad j=0,\ldots,\mu_2-1;\quad k=0,\ldots,\phi_3-1; \end{multline*} with for all $i,j,k$, \begin{align*} l(i,j,k)&=\{l(i,j)+l(k)\}_{\lambda}\\ &=\left\{(i-i(k))\xi_1\mu_2'+j\xi_2\mu_1'+kl_0+\left\lfloor\frac{k\eta'}{\phi_3'}\right\rfloor\right\}_{\lambda}\in\{0,\ldots,\lambda-1\}\qquad\text{and}\\ i(k)&=\left\{\left\lfloor\frac{k\eta}{\phi_3}\right\rfloor\right\}_{\mu_1}\in\vereen, \end{align*} and where \begin{align*} \xi_1&\in\vereen,&\eta&\in\{0,\ldots,\phi_3-1\},&l_0&\in\{0,\ldots,\lambda-1\}\\ \xi_2&\in\vertwee,&\eta'&\in\{0,\ldots,\phi_3'-1\},&& \end{align*} are uniquely determined by \begin{equation*} \frac{1}{\mu_1}w_2+\frac{\xi_1}{\mu_1}w_3,\quad\frac{1}{\mu_2}w_1+\frac{\xi_2}{\mu_2}w_3,\quad \frac{1}{\mu_2\phi_3}w_1+\frac{\eta}{\mu_1\phi_3}w_2+\frac{l_0\phi_3'+\eta'}{\lambda\phi_3'}w_3\quad\in H. \end{equation*} We repeat that when $k$ runs through $\{0,\ldots,\phi_3-1\}$, the numbers $\{k\eta\}_{\phi_3}$ and $\{k\eta'\}_{\phi_3'}$ run through $\{0,\ldots,\phi_3-1\}$ and $\{0,\ldots,\phi_3'-1\}$ once and $\mu_3'$ times, respectively, while for fixed $k$, we have that $l(i,j,k)$ runs $\gamma$ times through $\{0,\ldots,\lambda-1\}$ when $i$ and $j$ run through \vereen\ and \vertwee, respectively. This concludes Theorem~\ref{algfp}(vi). \subsection{Determination of the numbers $\xi_1,\xi_2,\xi_3,\eta,\eta',l_0$ from the coordinates of $w_1,w_2,w_3$} \subsubsection{The numbers $\xi_1,\xi_2,\xi_3$}\label{pardetxieencnul} We will give the explanation for $\xi_1$. Recall that we introduced $\xi_1$ as the unique element of \vereen\ for which the $\mu_1$ points of $H_1=\Z^3\cap\lozenge(w_2,w_3)$ are given by \begin{equation*} \frac{i}{\mu_1}w_2+\left\{\frac{i\xi_1}{\mu_1}\right\}w_3;\qquad i=0,\ldots,\mu_1-1. \end{equation*} How can we find $\xi_1$ from the coordinates of $w_2(a_2,b_2,c_2)$ and $w_3(a_3,b_3,c_3)$? Consider the vector \begin{align*} -a_3w_2+a_2w_3&=(-a_3a_2+a_2a_3,-a_3b_2+a_2b_3,-a_3c_2+a_2c_3)\\ &=(0,d_{13},d_{12})\in\Z^3. \end{align*} Since $\mu_1=\gcd(\abs{d_{11}},\abs{d_{12}},\abs{d_{13}})$ divides every coordinate\footnote{with respect to the standard basis of $\R^3$} of $-a_3w_2+a_2w_3$, it holds that \begin{equation*} \frac{1}{\mu_1}(-a_3w_2+a_2w_3)=\frac{-a_3}{\mu_1}w_2+\frac{a_2}{\mu_1}w_3\in\Z^3. \end{equation*} On the other hand, we have \begin{equation*} \frac{1}{\mu_1}w_2+\frac{\xi_1}{\mu_1}w_3\in\Z^3. \end{equation*} It follows that also \begin{equation*} \frac{-a_3}{\mu_1}w_2+\frac{-a_3\xi_1}{\mu_1}w_3\in\Z^3\qquad\text{and}\qquad\frac{a_2+a_3\xi_1}{\mu_1}w_3\in\Z^3. \end{equation*} Since $w_3$ is primitive, we obtain \begin{equation*} a_3\xi_1\equiv-a_2\mod\mu_1. \end{equation*} Analogously, we find that \begin{equation*} b_3\xi_1\equiv-b_2\mod\mu_1\qquad\text{and}\qquad c_3\xi_1\equiv-c_2\mod\mu_1. \end{equation*} Consequently, $\xi_1$ is a solution of the following system of linear congruences: \begin{equation}\label{stelseleenfpld} \left\{ \begin{alignedat}{2} a_3x&\equiv-a_2&&\mod\mu_1,\\ b_3x&\equiv-b_2&&\mod\mu_1,\\ c_3x&\equiv-c_2&&\mod\mu_1. \end{alignedat} \right. \end{equation} The first linear congruence has a solution if and only if $\gcd(a_3,\mu_1)\mid a_2$. We show that this is indeed the case. 
Put \begin{equation*} \gamma_a=\gcd(a_3,\mu_1)=\gcd(a_3,\abs{d_{11}},\abs{d_{12}},\abs{d_{13}}). \end{equation*} It then follows from $\gamma_a\mid a_3,d_{12},d_{13}$ that $\gamma_a\mid a_2a_3,a_2b_3,a_2c_3$. Hence \begin{equation*} \gamma_a\mid a_2\gcd(a_3,b_3,c_3)=a_2. \end{equation*} Analogously we have $\gcd(a_2,\mu_1)\mid a_3$, and thus we can write \begin{equation*} \gamma_a=\gcd(a_2,\mu_1)=\gcd(a_3,\mu_1). \end{equation*} In the same way the other two congruences have solutions, and we may put \begin{alignat*}{2} \gamma_b&=\gcd(b_2,\mu_1)&&=\gcd(b_3,\mu_1)\qquad\text{and}\\ \gamma_c&=\gcd(c_2,\mu_1)&&=\gcd(c_3,\mu_1). \end{alignat*} The system \eqref{stelseleenfpld} is then equivalent to \begin{equation}\label{stelseltweefpld} \left\{ \begin{alignedat}{3} x&\equiv-a_2'&&\{a_3'\}_{\mu_1^{(a)}}^{-1}&&\mod\mu_1^{(a)},\\ x&\equiv-b_2'&&\{b_3'\}_{\mu_1^{(b)}}^{-1}&&\mod\mu_1^{(b)},\\ x&\equiv-c_2'&&\{c_3'\}_{\mu_1^{(c)}}^{-1}&&\mod\mu_1^{(c)}, \end{alignedat} \right. \end{equation} with \begin{equation*} a_2'=a_2/\gamma_a,\qquad a_3'=a_3/\gamma_a,\qquad \mu_1^{(a)}=\mu_1/\gamma_a, \end{equation*} and where $\{a_3'\}_{\mu_1^{(a)}}^{-1}$ denotes the unique element of $\{0,\ldots,\mu_1^{(a)}-1\}$ such that \begin{equation*} a_3'\{a_3'\}_{\mu_1^{(a)}}^{-1}\equiv1\mod\mu_1^{(a)}. \end{equation*} (Analogously for the numbers appearing in the other two congruences.) Since the moduli $\mu_1^{(a)},\mu_1^{(b)},\mu_1^{(c)}$ are generally not pairwise coprime, according to the Generalized Chinese Remainder Theorem, the system \eqref{stelseltweefpld} has a solvability condition in the form of \begin{equation}\label{solvcondsteltwee} a_2'\{a_3'\}_{\mu_1^{(a)}}^{-1}\equiv b_2'\{b_3'\}_{\mu_1^{(b)}}^{-1}\mod\gcd(\mu_1^{(a)},\mu_1^{(b)}), \end{equation} together with the analogous conditions for the other two combinations of two out of three congruences. Of course we know that the system is solvable since $\xi_1$ is a solution, but for the sake of completeness, let us verify Condition~\eqref{solvcondsteltwee} in a direct way. Because $a_3'$ and $b_3'$ are units modulo $\mu_1^{(a)}$ and $\mu_1^{(b)}$, respectively, they are both units modulo $\gcd(\mu_1^{(a)},\mu_1^{(b)})$. Furthermore, we have that \begin{equation}\label{viersterrekesfp} a_3'\{a_3'\}_{\mu_1^{(a)}}^{-1}\equiv b_3'\{b_3'\}_{\mu_1^{(b)}}^{-1}\equiv1\mod\gcd\bigl(\mu_1^{(a)},\mu_1^{(b)}\bigr). \end{equation} If we multiply both sides of \eqref{solvcondsteltwee} with the unit $a_3'b_3'$ and apply \eqref{viersterrekesfp}, we find that Condition~\eqref{solvcondsteltwee} is equivalent to \begin{equation*} a_2'b_3'\equiv a_3'b_2'\mod\gcd(\mu_1^{(a)},\mu_1^{(b)}), \end{equation*} and---after multiplying both sides and the modulus with $\gamma_a\gamma_b$---even to \begin{multline*} \gcd(a_2,a_3,b_2,b_3)\mu_1=\gcd(\gamma_a,\gamma_b)\mu_1\\ =\gamma_a\gamma_b\gcd(\mu_1^{(a)},\mu_1^{(b)})\mid\gamma_a\gamma_b(a_2'b_3'-a_3'b_2')=d_{13}. \end{multline*} Of course we have that $\mu_1\mid d_{13}$. It is therefore sufficient to show that for every prime $p$ with $p\mid\gcd(a_2,a_3,b_2,b_3)$, it holds that \begin{equation*} \ord_pd_{13}\geqslant\ord_p\gcd(a_2,a_3,b_2,b_3)+\ord_p\mu_1. \end{equation*} Let $p$ be such a prime. Since $p\mid a_2,b_2$ and $w_2$ is primitive, it certainly holds that $p\nmid c_2$. 
It now follows from $a_2d_{11}-b_2d_{12}+c_2d_{13}=0$ that \begin{align*} \ord_pd_{13}&=\ord_pc_2d_{13}\\ &=\ord_p(-a_2d_{11}+b_2d_{12})\\ &\geqslant\min\{\ord_pa_2+\ord_pd_{11},\ord_pb_2+\ord_pd_{12}\}\\ &\geqslant\min\{\ord_pa_2,\ord_pa_3,\ord_pb_2,\ord_pb_3\}\\ &\qquad\qquad\qquad\qquad\qquad\quad+\min\{\ord_pd_{11},\ord_pd_{12},\ord_pd_{13}\}\\ &=\ord_p\gcd(a_2,a_3,b_2,b_3)+\ord_p\mu_1. \end{align*} The system is thus indeed solvable and the Generalized Chinese Remainder Theorem asserts that its solution is unique modulo \begin{equation*} \lcm(\mu_1^{(a)},\mu_1^{(b)},\mu_1^{(c)})=\mu_1. \end{equation*} We can thus find $\xi_1$ as the unique solution in \vereen\ of the system \eqref{stelseleenfpld}. \subsubsection{Determination of $\eta,\eta',$ and $l_0$} Recall that we introduced the numbers $\eta,\eta',l_0$ as the unique $\eta\in\{0,\ldots,\phi_3-1\}$, $l_0\in\{0,\ldots,\lambda-1\}$, and $\eta'\in\{0,\ldots,\phi_3'-1\}$ such that \begin{equation}\label{tweedriehoekjes} \frac{1}{\mu_2\phi_3}w_1+\frac{\eta}{\mu_1\phi_3}w_2+\frac{l_0\phi_3'+\eta'}{\lambda\phi_3'}w_3\in H. \end{equation} Recall as well that $(\adj M)M=dI$, with $d=\det M$; i.e., \begin{equation}\label{adjgelijkheidfpls} \begin{pmatrix} d_{11}&-d_{21}&d_{31}\\ -d_{12}&d_{22}&-d_{32}\\ d_{13}&-d_{23}&d_{33} \end{pmatrix} \begin{pmatrix} a_1&b_1&c_1\\ a_2&b_2&c_2\\ a_3&b_3&c_3 \end{pmatrix}= \begin{pmatrix} d&0&0\\ 0&d&0\\ 0&0&d \end{pmatrix}. \end{equation} Let $j\in\{1,2,3\}$. Since $\mu=\abs{d}$ divides $d$, it follows from \eqref{adjgelijkheidfpls} that \begin{equation*} h(j)=\frac{d_{1j}}{\mu}w_1-\frac{d_{2j}}{\mu}w_2+\frac{d_{3j}}{\mu}w_3\in\Z^3. \end{equation*} Recall also that $\mu=\mu_1\mu_2\phi_3$ and that \begin{equation*} \mu_i=\gcd(\abs{d_{i1}},\abs{d_{i2}},\abs{d_{i3}});\qquad i=1,2,3. \end{equation*} If we now put $d_{ij}'=d_{ij}/\mu_i$; $i=1,2,3$; we obtain \begin{align*} h(j)&=\frac{d_{1j}'}{\mu_2\phi_3}w_1-\frac{d_{2j}'}{\mu_1\phi_3}w_2+\frac{d_{3j}'\mu_3}{\mu_1\mu_2\phi_3}w_3\\ &=\frac{d_{1j}'}{\mu_2\phi_3}w_1-\frac{d_{2j}'}{\mu_1\phi_3}w_2+\frac{d_{3j}'}{\lambda\phi_3'}w_3\in\Z^3. \end{align*} On the other hand, we also know the point \begin{equation*} h'(j)=\frac{d_{1j}'}{\mu_2\phi_3}w_1+\frac{d_{1j}'\eta}{\mu_1\phi_3}w_2+\frac{d_{1j}'(l_0\phi_3'+\eta')}{\lambda\phi_3'}w_3\in\Z^3 \end{equation*} with the same $w_1$-coordinate as $h(j)$. After reduction of its coordinates\footnote{with respect to the basis $(w_1,w_2,w_3)$} modulo one, $h(j)-h'(j)$ thus belongs to $H_1$, and since the coordinates\footnotemark[\value{footnote}] of the elements of $H_1$ belong to $(1/\mu_1)\Z$, we have that \begin{alignat*}{2} \frac{d_{1j}'\eta}{\mu_1\phi_3}&\equiv-\frac{d_{2j}'}{\mu_1\phi_3}&&\mod\frac{1}{\mu_1}\qquad\text{and}\\ \frac{d_{1j}'(l_0\phi_3'+\eta')}{\lambda\phi_3'}&\equiv\frac{d_{3j}'}{\lambda\phi_3'}&&\mod\frac{1}{\mu_1}, \end{alignat*} or, equivalently, that \begin{alignat*}{2} d_{1j}'\eta&\equiv -d_{2j}'&&\mod\phi_3\qquad\text{and}\\ d_{1j}'\eta'&\equiv d_{3j}'&&\mod\mu_2'\phi_3'.\\\intertext{(Recall that $\lambda=\lcm(\mu_1,\mu_2)=\mu_1\mu_2/\gamma=\mu_1\mu_2'$.) A fortiori, it thus holds that} d_{1j}'\eta'&\equiv d_{3j}'&&\mod\phi_3'. \end{alignat*} We have just showed that $\eta$ and $\eta'$ are solutions of the respective systems of linear congruences \begin{equation}\label{tweestelseltjes} \left\{ \begin{alignedat}{2} d_{11}'x&\equiv -d_{21}'&&\mod\phi_3,\\ d_{12}'x&\equiv -d_{22}'&&\mod\phi_3,\\ d_{13}'x&\equiv -d_{23}'&&\mod\phi_3; \end{alignedat} \right. 
\qquad\text{and}\qquad \left\{ \begin{alignedat}{2} d_{11}'x&\equiv d_{31}'&&\mod\phi_3',\\ d_{12}'x&\equiv d_{32}'&&\mod\phi_3',\\ d_{13}'x&\equiv d_{33}'&&\mod\phi_3'. \end{alignedat} \right. \end{equation} Moreover, it turns out that $\eta$ and $\eta'$ are the unique solutions of these systems in $\{0,\ldots,\phi_3-1\}$ and $\{0,\ldots,\phi_3'-1\}$, respectively. This gives us a method to determine $\eta$ and $\eta'$ from the coordinates of $w_1,w_2,w_3$. We will study the first system of \eqref{tweestelseltjes} in more detail, for the second system analogous conclusions will be true. Let us verify the solvability conditions of the first system. The first linear congruence has solutions if and only if $\gcd(d_{11}',\phi_3)\mid d_{21}'$, i.e., if and only if \begin{equation*} \gcd(\mu_2d_{11},\mu)\mid\mu_1d_{21}. \end{equation*} Put $\Gamma_1=\gcd(\mu_2d_{11},\mu)$. Then we have $\Gamma_1\mid\mu$ and $\Gamma_1\mid d_{11}d_{2j}$ for every $j\in\{1,2,3\}$, and it is sufficient to prove that $\Gamma_1\mid d_{1j}d_{21}$ for every $j$. We already know that $\Gamma_1\mid d_{11}d_{21}$. Furthermore, from $\adj(\adj M)=dM$, it follows that \begin{alignat}{3} d_{11}d_{22}&-d_{12}d_{21}&&=\bigl(\adj(\adj M)\bigr)_{33}&&=dc_3\qquad\text{and}\notag\\ d_{11}d_{23}&-d_{13}d_{21}&&=\bigl(\adj(\adj M)\bigr)_{32}&&=db_3.\label{vgltweeergfpls} \end{alignat} We find thus that $\Gamma_1\mid\mu\mid dc_3=d_{11}d_{22}-d_{12}d_{21}$, and together with $\Gamma_1\mid d_{11}d_{22}$, this implies that $\Gamma_1\mid d_{12}d_{21}$. Analogously, it follows from \eqref{vgltweeergfpls} that $\Gamma_1\mid d_{13}d_{21}$. The first linear congruence therefore has solutions, and the same thing holds for the other two congruences. The first system of \eqref{tweestelseltjes} is now solvable if and only if for all $j_1,j_2\in\{1,2,3\}$, it holds that \begin{equation}\label{allemaaljekes} d_{1j_1}'d_{2j_2}'\equiv d_{1j_2}'d_{2j_1}'\mod\gcd\left(\frac{\phi_3}{\gamma_{j_1}},\frac{\phi_3}{\gamma_{j_2}}\right), \end{equation} with $\gamma_j=\gcd(d_{1j}',\phi_3)$ for all $j$. Let us verify this for $j_1=1$ and $j_2=2$. For these values of $j_1$ and $j_2$, Condition~\eqref{allemaaljekes} is equivalent to \begin{equation*} d_{11}d_{22}\equiv d_{12}d_{21}\mod\gcd\left(\frac{\mu}{\gamma_1},\frac{\mu}{\gamma_2}\right), \end{equation*} which follows from $\mu\mid dc_3=d_{11}d_{22}-d_{12}d_{21}$. The (Generalized) Chinese Remainder Theorem now states that the system has a unique solution modulo \begin{equation*} \lcm\left(\frac{\phi_3}{\gamma_1},\frac{\phi_3}{\gamma_2},\frac{\phi_3}{\gamma_3}\right)=\frac{\phi_3}{\gcd(d_{11}',d_{12}',d_{13}',\phi_3)}=\phi_3. \end{equation*} An alternative way to find $\eta$ and $\eta'$, and a way to find $l_0$ is as follows. We know that \begin{equation*} h(j)=\frac{d_{1j}'}{\mu_2\phi_3}w_1-\frac{d_{2j}'}{\mu_1\phi_3}w_2+\frac{d_{3j}'}{\lambda\phi_3'}w_3\in\Z^3;\qquad j=1,2,3; \end{equation*} and that $\gcd(d_{11}',d_{12}',d_{13}')=1$. Find $\lambda_j\in\Z$; $j=1,2,3$; such that $\sum_j\lambda_jd_{1j}'=1$, and consider the point \begin{equation*} \left\{\sum\nolimits_j\lambda_jh(j)\right\}=\frac{1}{\mu_2\phi_3}w_1+\left\{\frac{-\sum_j\lambda_jd_{2j}'}{\mu_1\phi_3}\right\}w_2+\left\{\frac{\sum_j\lambda_jd_{3j}'}{\lambda\phi_3'}\right\}w_3\in H. 
\end{equation*} Substract from $\bigl\{\sum_j\lambda_jh(j)\bigr\}$ the point \begin{equation*} \frac{i}{\mu_1}w_2+\left\{\frac{i\xi_1}{\mu_1}\right\}w_3\in H_1, \end{equation*} with \begin{equation*} i=\left\{\left\lfloor\frac{-\sum_j\lambda_jd_{2j}'}{\phi_3}\right\rfloor\right\}_{\mu_1}, \end{equation*} and find the point \begin{equation*} \frac{1}{\mu_2\phi_3}w_1+\frac{\bigl\{-\sum_j\lambda_jd_{2j}'\bigr\}_{\phi_3}}{\mu_1\phi_3}w_2+\left\{\frac{\sum_j\lambda_jd_{3j}'-i\xi_1\mu_2'\phi_3'}{\lambda\phi_3'}\right\}w_3\in H. \end{equation*} Because of the uniqueness in $H$ of a point of the form \eqref{tweedriehoekjes}, we find that \begin{align*} \eta&=\left\{-\sum\nolimits_j\lambda_jd_{2j}'\right\}_{\phi_3},\\ \eta'&=\left\{\sum\nolimits_j\lambda_jd_{3j}'\right\}_{\phi_3'},\qquad\text{and}\\ l_0&=\left\{\left\lfloor\frac{\sum_j\lambda_jd_{3j}'}{\phi_3'}\right\rfloor-i\xi_1\mu_2'\right\}_{\lambda}. \end{align*} \section{Case~I: exactly one facet contributes to $s_0$ and this facet is a $B_1$-simplex}\label{secgeval1art3} \subsection{Figure and notations} Without loss of generality, we may assume that the $B_1$-simplex $\tau_0$ contributing to $s_0$ is as drawn in Figure~\ref{figcase1}. \begin{figure} \psset{unit=.03462099125\textwidth \centering \subfigure[$B_1$-simplex $\tau_0$, its subfaces, and its neighbor facets $\tau_1,\tau_2,$ and $\tau_3$]{ \begin{pspicture}(-7.58,-4.95)(6.14,7 {\footnotesize \pstThreeDCoor[xMin=0,yMin=0,zMin=0,xMax=10,yMax=8,zMax=7.24,linecolor=black,linewidth=.7pt] { \psset{linecolor=black,linewidth=.3pt,linestyle=dashed,subticks=1} \pstThreeDPlaneGrid[planeGrid=xy](0,0)(2,3) \pstThreeDPlaneGrid[planeGrid=xy](0,0)(8,4) \pstThreeDPlaneGrid[planeGrid=xy](0,0)(3,6) \pstThreeDPlaneGrid[planeGrid=xz](0,0)(2,1) \pstThreeDPlaneGrid[planeGrid=yz](0,0)(3,1) \pstThreeDPlaneGrid[planeGrid=xy,planeGridOffset=1](0,0)(2,3) \pstThreeDPlaneGrid[planeGrid=xz,planeGridOffset=3](0,0)(2,1) \pstThreeDPlaneGrid[planeGrid=yz,planeGridOffset=2](0,0)(3,1) } \pstThreeDPut[pOrigin=c](4.33,4.33,0.33){\psframebox*[framesep=0pt,framearc=0.3]{\phantom{$\tau_0$}}} { \psset{dotstyle=none,dotscale=1,drawCoor=false} \psset{linecolor=black,linewidth=1pt,linejoin=1} \psset{fillcolor=lightgray,opacity=.6,fillstyle=solid} \pstThreeDTriangle(8,4,0)(3,6,0)(2,3,1) } \pstThreeDPut[pOrigin=t](8,4,0){$A(x_A,y_A,0)$} \pstThreeDPut[pOrigin=lt](3,6,0){$B(x_B,y_B,0)$} \pstThreeDPut[pOrigin=lb](2,3,1){\psframebox*[framesep=-.3pt,framearc=1]{$C(x_C,y_C,1)$}} \pstThreeDPut[pOrigin=c](4.33,4.33,0.33){$\tau_0$} \pstThreeDPut[pOrigin=lt](5.8,5.7,0){$\tau_3$} \pstThreeDPut[pOrigin=rb](5,3.2,0.7){$\tau_2$} \pstThreeDPut[pOrigin=lb](2.3,4.75,0.6){\psframebox*[framesep=0pt,framearc=0.3]{$\tau_1$}} } \end{pspicture} }\hfill\subfigure[Relevant cones associated to relevant faces of~\Gf]{ \psset{unit=.03125\textwidth \begin{pspicture}(-7.6,-3.8)(7.6,9.6 {\footnotesize \pstThreeDCoor[xMin=0,yMin=0,zMin=0,xMax=10,yMax=10,zMax=10,nameZ={},linecolor=gray,linewidth=.7pt] \psset{linecolor=gray,linewidth=.3pt,linejoin=1,linestyle=dashed,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(10,0,0)(0,10,0)\pstThreeDLine(0,10,0)(0,0,10)\pstThreeDLine(0,0,10)(10,0,0) } \psset{linecolor=black,linewidth=.7pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,0)(1.13,2.15,6.72 \pstThreeDLine(0,0,0)(4.67,.888,4.44 \pstThreeDLine(0,0,0)(.267,4.97,4.78 \pstThreeDLine(0,0,0)(0,0,10 } \psset{linecolor=darkgray,linewidth=.8pt,linejoin=1,arrows=->,arrowscale=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,0)(.678,1.29,4.03 
\pstThreeDLine(0,0,0)(3.74,.710,3.55 \pstThreeDLine(0,0,0)(.134,2.48,2.39 \pstThreeDLine(0,0,0)(0,0,2 } \psset{linecolor=white,linewidth=1.7pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(2.900,1.5190,5.580)(1.484,2.0238,6.492 \pstThreeDLine(2.9088,2.5208,4.576)(2.0282,3.3372,4.644 } \psset{linecolor=black,linewidth=.7pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,10)(1.13,2.15,6.72 \pstThreeDLine(.267,4.97,4.78)(1.13,2.15,6.72)(4.67,.888,4.44 } \psset{linecolor=black,linewidth=.7pt,linejoin=1,linestyle=dashed,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,10)(4.67,.888,4.44 \pstThreeDLine(0,0,10)(.267,4.97,4.78 \pstThreeDLine(4.67,.888,4.44)(.267,4.97,4.78 } \psset{labelsep=2pt} \uput[-12](.2,1.3){\darkgray\scriptsize$v_0$} \uput[-50](.8,.55){\darkgray\scriptsize$v_2$} } \psset{labelsep=1.5pt} \uput[230](-1.05,.75){\darkgray\scriptsize$v_1$} \uput[180](0,.8){\darkgray\scriptsize$v_3$} } \psset{labelsep=4pt} \uput[0](3.3,2.3){\psframebox*[framesep=0.3pt,framearc=0]{$\Delta_{\tau_2}$}} } \psset{labelsep=2.6pt} \uput[180](-2.63,1.9){\psframebox*[framesep=0.3pt,framearc=0]{$\Delta_{\tau_1}$}} \uput[90](0,8.65){$z,\Delta_{\tau_3}$} } \rput(1.35,4.95){$\delta_A$} \rput(-.34,4.95){\psframebox*[framesep=0.3pt,framearc=0]{$\delta_B$}} \rput(.575,3.13){\psframebox*[framesep=0.3pt,framearc=0]{$\delta_C$}} \pstThreeDNode(2.90,1.52,5.60){BC} \pstThreeDNode(1.13,2.15,6.72){dt0} \pstThreeDNode(.565,1.08,8.35){AB} \pstThreeDNode(.700,3.56,5.75){AC} \rput[Bl](-7.95,9.075){\rnode{BClabel}{$\Delta_{[BC]}$}} \rput[Bl](-4.11,9.075){\rnode{dt0label}{$\Delta_{\tau_0}$}} \rput[Bl](2.43,9.075){\rnode{ABlabel}{$\Delta_{[AB]}$}} \rput[Br](7.98,9.075){\rnode{AClabel}{$\Delta_{[AC]}$}} \ncline[linewidth=.15pt,nodesepB=2pt,nodesepA=1pt]{->}{BClabel}{BC} \ncline[linewidth=.15pt,nodesepB=2.5pt,nodesepA=2pt]{->}{dt0label}{dt0} \ncline[linewidth=.15pt,nodesepB=2pt,nodesepA=1pt]{->}{ABlabel}{AB} \ncline[linewidth=.15pt,nodesepB=2pt,nodesepA=2pt]{->}{AClabel}{AC} } \end{pspicture} } \caption{Case I: the only facet contributing to $s_0$ is the $B_1$-simplex $\tau_0$} \label{figcase1} \end{figure} Let us fix notations. We shall denote the vertices of $\tau_0$ and their co\-or\-di\-nates by \begin{equation*} A(x_A,y_A,0),\quad B(x_B,y_B,0),\quad\text{and}\quad C(x_C,y_C,1). \end{equation*} The neighbor facets of $\tau_0$ will be denoted $\tau_1,\tau_2,\tau_3$, as indicated in Figure~\ref{figcase1}, and the unique primitive vectors perpendicular to them will be denoted by \begin{equation*} v_0(a_0,b_0,c_0),\quad v_1(a_1,b_1,c_1),\quad v_2(a_2,b_2,c_2),\quad v_3(0,0,1), \end{equation*} respectively. Consequently, the affine supports of the considered facets should have equations of the form \begin{alignat*}{4} \aff(\tau_0)&\leftrightarrow a_0x&&+b_0y&&+c_0&&z=m_0,\\ \aff(\tau_1)&\leftrightarrow a_1x&&+b_1y&&+c_1&&z=m_1,\\ \aff(\tau_2)&\leftrightarrow a_2x&&+b_2y&&+c_2&&z=m_2,\\ \aff(\tau_3)&\leftrightarrow &&&&&&z=0, \end{alignat*} and we associate to them the following numerical data: \begin{align*} m_0=m(v_0)&=a_0x_C+b_0y_C+c_0,&\sigma_0=\sigma(v_0)&=a_0+b_0+c_0,\\ m_1=m(v_1)&=a_1x_C+b_1y_C+c_1,&\sigma_1=\sigma(v_1)&=a_1+b_1+c_1,\\ m_2=m(v_2)&=a_2x_C+b_2y_C+c_2,&\sigma_2=\sigma(v_2)&=a_2+b_2+c_2,\\ m_3=m(v_3)&=0,&\sigma_3=\sigma(v_3)&=1. \end{align*} We assume that $\tau_0$ (and only $\tau_0$) contributes to the candidate pole $s_0$. 
With the notations above this is, we assume that $p^{\sigma_0+m_0s_0}=1$, or equivalently, that \begin{equation*} \Re(s_0)=-\frac{\sigma_0}{m_0}=-\frac{a_0+b_0+c_0}{a_0x_C+b_0y_C+c_0}\qquad\text{and}\qquad\Im(s_0)=\frac{2n\pi}{m_0\log p} \end{equation*} for some $n\in\Z$. In this section we will consider the following simplicial cones: \begin{align*} \dA&=\cone(v_0,v_2,v_3),&\DAB&=\cone(v_0,v_3),&\Dtnul&=\cone(v_0).\\ \dB&=\cone(v_0,v_1,v_3),&\DAC&=\cone(v_0,v_2),&&\\ \dC&=\cone(v_0,v_1,v_2),&\DBC&=\cone(v_0,v_1),&& \end{align*} The $\Delta_{\tau}$ are the simplicial cones associated to the faces $\tau$ in the usual way. The cones $\Delta_A,\Delta_B,\Delta_C$, associated to the vertices of $\tau_0$, are generally not simplicial. Later in this section we will consider simplicial subdivisions (without creating new rays) of $\Delta_A,\Delta_B,\Delta_C$ that contain the respective simplicial cones $\dA,\dB,\dC$. Lastly, we fix notations for the vectors along the edges of $\tau_0$: \begin{alignat*}{8} &\overrightarrow{AC}&&(x_C&&-x_A&&,y_C&&-y_A&&,1&&)&&=(\aA,\bA,1),\\ &\overrightarrow{BC}&&(x_C&&-x_B&&,y_C&&-y_B&&,1&&)&&=(\aB,\bB,1),\\ &\overrightarrow{AB}&&(x_B&&-x_A&&,y_B&&-y_A&&,0&&)&&=(\aA-\aB,\bA-\bB,0). \end{alignat*} The first two vectors are primitive; the last one is generally not. We put \begin{equation*} \fAB=\gcd(x_B-x_A,y_B-y_A)=\gcd(\aA-\aB,\bA-\bB). \end{equation*} \subsection{Some relations between the variables}\label{srelbettvarcaseen} Expressing that $\overrightarrow{AC}\perp v_0,v_2$ and $\overrightarrow{BC}\perp v_0,v_1$, we obtain \begin{alignat*}{4} \begin{pmatrix} c_0\\c_2 \end{pmatrix} &=-\aA&& \begin{pmatrix} a_0\\a_2 \end{pmatrix} &&-\bA&& \begin{pmatrix} b_0\\b_2 \end{pmatrix}\qquad\text{and}\\ \begin{pmatrix} c_0\\c_1 \end{pmatrix} &=-\aB&& \begin{pmatrix} a_0\\a_1 \end{pmatrix} &&-\bB&& \begin{pmatrix} b_0\\b_1 \end{pmatrix}. \end{alignat*} These relations imply that \begin{equation*} \gcd(a_i,b_i,c_i)=\gcd(a_i,b_i)=1;\qquad i=0,\ldots,2. \end{equation*} Another consequence is that \begin{equation*} \begin{vmatrix} a_0&c_0\\a_2&c_2 \end{vmatrix}= \begin{vmatrix} a_0&-\aA a_0-\bA b_0\\a_2&-\aA a_2-\bA b_2 \end{vmatrix} =-\bA \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix}, \end{equation*} and analogously, \begin{equation*} \begin{vmatrix} b_0&c_0\\b_2&c_2 \end{vmatrix} =\aA \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix},\quad \begin{vmatrix} a_0&c_0\\a_1&c_1 \end{vmatrix} =-\bB \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix},\quad \begin{vmatrix} b_0&c_0\\b_1&c_1 \end{vmatrix} =\aB \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix}. \end{equation*} It will turn out to be convenient (and sometimes necessary) to know the signs of certain determinants. Considering the orientations of the corresponding coordinate systems, one can show that \begin{equation*} \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix}>0,\quad \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix}<0,\quad\Psi= \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix}>0,\quad\text{ and }\quad \begin{vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{vmatrix}>0. 
\end{equation*} \subsection{Igusa's local zeta function} As $f$ is non-degenerated over \Fp\ with respect to the compact faces of its Newton polyhedron \Gf, by Theorem~\ref{formdenhoor} the local Igusa zeta function \Zof\ of $f$ is the meromorphic complex function \begin{equation}\label{deflvilzfvmdcas1} \Zof=\sum_{\substack{\tau\mathrm{\ compact}\\\mathrm{face\ of\ }\Gf}}L_{\tau}S(\Dtu), \end{equation} with \begin{gather*} L_{\tau}:s\mapsto L_{\tau}(s)=\left(\frac{p-1}{p}\right)^3-\frac{N_{\tau}}{p^2}\frac{p^s-1}{p^{s+1}-1},\\ N_{\tau}=\#\left\{(x,y,z)\in(\Fpcross)^3\;\middle\vert\;\fbart(x,y,z)=0\right\}, \end{gather*} and \begin{align} S(\Dtu):s\mapsto S(\Dtu)(s)&=\sum_{k\in\Z^3\cap\Delta_{\tau}}p^{-\sigma(k)-m(k)s}\notag\\ &=\sum_{i\in I}\frac{\Sigma(\delta_i)(s)}{\prod_{j\in J_i}(p^{\sigma(w_j)+m(w_j)s}-1)}.\label{deflvilzfvmdbiscas1} \end{align} Here $\{\delta_i\}_{i\in I}$ denotes a simplicial decomposition, without introducing new rays, of the cone $\Delta_{\tau}$ associated to $\tau$. The simplicial cone $\delta_i$ is supposed to be strictly positively spanned by the linearly independent primitive vectors $w_j$, $j\in J_i$, in $\Zplusn\setminus\{0\}$, and $\Sigma(\delta_i)$ is the function \begin{equation*} \Sigma(\delta_i):s\mapsto \Sigma(\delta_i)(s)=\sum_hp^{\sigma(h)+m(h)s}, \end{equation*} where $h$ runs through the elements of the set \begin{equation*} H(w_j)_{j\in J_i}=\Z^3\cap\lozenge(w_j)_{j\in J_i}, \end{equation*} with \begin{equation*} \lozenge(w_j)_{j\in J_i}=\left\{\sum\nolimits_{j\in J_i}h_jw_j\;\middle\vert\;h_j\in[0,1)\text{ for all }j\in J_i\right\} \end{equation*} the fundamental parallelepiped spanned by the vectors $w_j$, $j\in J_i$. \subsection{The candidate pole $s_0$ and its residue} We want to prove that $s_0$ is not a pole of \Zof. Since $s_0$ is a candidate pole of expected order one (and therefore is either no pole or a pole of order one), it is enough to prove that the coefficient $a_{-1}$ in the Laurent series \begin{equation*} \Zof(s)=\sum_{k=-1}^{\infty}a_k(s-s_0)^k \end{equation*} of \Zof\ centered at $s_0$ equals zero. This coefficient, also called the residue of \Zof\ in $s_0$, is given by \begin{equation*} a_{-1}=\Res(\Zof,s_0)=\lim_{s\to s_0}(s-s_0)\Zof(s). \end{equation*} Equivalently, we will prove in the rest of this section that $R_1=0$, with \begin{equation*} R_1=\lim_{s\to s_0}\left(p^{\sigma_0+m_0s}-1\right)\Zof(s)=(\log p)m_0a_{-1}. \end{equation*} \subsection{Terms contributing to $R_1$} We will next calculate $R_1$ based on Formula~\eqref{deflvilzfvmdcas1} for \Zof. The only (compact) faces of \Gf\ that contribute to the candidate pole $s_0$ are the subfaces $A,B,C,[AB],[AC],[BC],\tau_0$ of the single facet having $s_0$ as an associated candidate pole. Only the terms of \eqref{deflvilzfvmdcas1} corresponding to these faces need to be taken into account in the calculation of $R_1$: \begin{equation*} R_1=\lim_{s\to s_0}\left(p^{\sigma_0+m_0s}-1\right)\sum_{\substack{\tau=\tau_0,A,B,C,\\ [AB],[AC],[BC]}}L_{\tau}(s)S(\Dtu)(s). \end{equation*} A second simplification is the following. First, note that vertex $A$ is contained in facets $\tau_0,\tau_2,\tau_3$, but can still be contained in other facets. Hence $\Delta_A$ is---in general---not simplicial and the same thing holds for the other vertices $B,C$ and their associated cones.
Consequently, to handle $S_A,S_B,$ and $S_C$, we need to consider simplicial decompositions of $\Delta_A,\Delta_B,$ and $\Delta_C$, and we will choose ones that contain the simplicial cones $\delta_A,\delta_B,$ and $\delta_C$, respectively. Terms of \eqref{deflvilzfvmdbiscas1} associated to cones, other than $\delta_A,\delta_B,\delta_C,$ in these decompositions, do not have a pole in $s_0$ and hence do not contribute to $R_1$. Let us write down the seven contributions to the \lq residue\rq\ $R_1$ explicitly. We obtain \begin{multline*} R_1=L_A(s_0)\frac{\Sigma(\delta_A)(s_0)}{\Ftwee(p-1)}+L_B(s_0)\frac{\Sigma(\delta_B)(s_0)}{\Feen(p-1)}\\ +L_C(s_0)\frac{\Sigma(\delta_C)(s_0)}{\Feen\Ftwee}+L_{[AB]}(s_0)\frac{\Sigma(\Delta_{[AB]})(s_0)}{p-1}\\ +L_{[AC]}(s_0)\frac{\Sigma(\Delta_{[AC]})(s_0)}{p^{\sigma_2+m_2s_0}-1}+L_{[BC]}(s_0)\frac{\Sigma(\Delta_{[BC]})(s_0)}{p^{\sigma_1+m_1s_0}-1}+L_{\tau_0}(s_0)\Sigma(\Delta_{\tau_0})(s_0). \end{multline*} \subsection{The numbers $N_{\tau}$} Let us fix notations for the coefficients of $f$. We put \begin{equation*} f(x,y,z)=\sum_{\omega=(\omega_1,\omega_2,\omega_3)\in\N^3}a_{\omega}x^{\omega_1}y^{\omega_2}z^{\omega_3}\in\Zp[x,y,z]. \end{equation*} For $a\in\Zp$, we denote by $\overline{a}=a+p\Zp\in\Fp$ its reduction modulo $p\Zp$. Recall that for every face $\tau$ of \Gf, we have \begin{equation*} \ft(x,y,z)=\sum_{\omega\in\Z^3\cap\tau}a_{\omega}x^{\omega_1}y^{\omega_2}z^{\omega_3}\quad\text{ and }\quad\fbart(x,y,z)=\sum_{\omega\in\Z^3\cap\tau}\overline{a_{\omega}}x^{\omega_1}y^{\omega_2}z^{\omega_3}. \end{equation*} Because the polynomial $f$ is non-degenerated over \Fp\ with respect to all the compact faces of its Newton polyhedron (and thus especially with respect to the vertices $A,B,C$), we have that none of the numbers $\overline{a_A},\overline{a_B},\overline{a_C}$ equals zero. Hence the numbers $N_{\tau}$ in the formula for \Zof\ are as follows. For the vertices of $\tau_0$ we find \begin{equation*} N_A=\#\left\{(x,y,z)\in(\Fpcross)^3\;\middle\vert\;\overline{a_A}x^{x_A}y^{y_A}=0\right\}=0, \end{equation*} and analogously, $N_B=N_C=0$. About the number $N_{[AB]}$ we don't know so much, except that \begin{align*} N_{[AB]}&=\#\bigl\{(x,y,z)\in(\Fpcross)^3\;\big\vert\;\overline{f_{[AB]}}(x,y)=\overline{a_A}x^{x_A}y^{y_A}+\cdots+\overline{a_B}x^{x_B}y^{y_B}=0\bigr\}\\ &=(p-1)N, \end{align*} with \begin{equation*} N=\#\left\{(x,y)\in(\Fpcross)^2\;\middle\vert\;\overline{f_{[AB]}}(x,y)=0\right\}. \end{equation*} For the other edges we find \begin{equation*} N_{[AC]}=\#\bigl\{(x,y,z)\in(\Fpcross)^3\;\big\vert\;\overline{a_A}x^{x_A}y^{y_A}+\overline{a_C}x^{x_C}y^{y_C}z=0\bigr\}=(p-1)^2, \end{equation*} and analogously, $N_{[BC]}=(p-1)^2$. Finally, for $\tau_0$ we obtain \begin{equation*} N_{\tau_0}=\#\bigl\{(x,y,z)\in(\Fpcross)^3\;\big\vert\;\overline{f_{[AB]}}(x,y)+\overline{a_C}x^{x_C}y^{y_C}z=0\bigr\}=(p-1)^2-N. \end{equation*} \subsection{The factors $L_{\tau}(s_0)$} The above formulas for the $N_{\tau}$ give rise to the following expressions for the $L_{\tau}(s_0)$: \begin{gather*} L_A(s_0)=L_B(s_0)=L_C(s_0)=\left(\frac{p-1}{p}\right)^3,\\ L_{[AB]}(s_0)=\left(\frac{p-1}{p}\right)^3-\frac{(p-1)N}{p^2}\frac{p^{s_0}-1}{p^{s_0+1}-1},\\ L_{[AC]}(s_0)=L_{[BC]}(s_0)=\left(\frac{p-1}{p}\right)^3-\left(\frac{p-1}{p}\right)^2\frac{p^{s_0}-1}{p^{s_0+1}-1},\\ \text{and}\qquad L_{\tau_0}(s_0)=\left(\frac{p-1}{p}\right)^3-\frac{(p-1)^2-N}{p^2}\frac{p^{s_0}-1}{p^{s_0+1}-1}. 
\end{gather*}
\subsection{Multiplicities of the relevant simplicial cones}
We use Proposition~\ref{multipliciteit} to calculate the multiplicities of the relevant simplicial cones (and their corresponding fundamental parallelepipeds), thereby exploiting the relations we obtained in Subsection~\ref{srelbettvarcaseen}. That way we find\footnote{Cfr.\ Notation~\ref{notatiematrices}.}
\begin{alignat*}{4} \mu_A&=\mult\delta_A&&=\#H(v_0,v_2,v_3)&&= \begin{Vmatrix} a_0&b_0&c_0\\a_2&b_2&c_2\\0&0&1 \end{Vmatrix}=&& \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix}>0,\\ \mu_B&=\mult\delta_B&&=\#H(v_0,v_1,v_3)&&= \begin{Vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\0&0&1 \end{Vmatrix}=-&& \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix}>0,\\ \mu_C&=\mult\delta_C&&=\#H(v_0,v_1,v_2)&&= \begin{Vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{Vmatrix}=&& \begin{vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{vmatrix}>0 \end{alignat*}
for the maximal dimensional simplicial cones, while for the two-dimensional cones we obtain
\begin{alignat*}{2} \mult\Delta_{[AB]}&=\#H(v_0,v_3)&&=\gcd\left( \begin{Vmatrix} a_0&b_0\\0&0 \end{Vmatrix}, \begin{Vmatrix} a_0&c_0\\0&1 \end{Vmatrix}, \begin{Vmatrix} b_0&c_0\\0&1 \end{Vmatrix} \right)\\ &&&=\gcd(0,a_0,b_0)=1,\\ \mult\Delta_{[AC]}&=\#H(v_0,v_2)&&=\gcd\left( \begin{Vmatrix} a_0&b_0\\a_2&b_2 \end{Vmatrix}, \begin{Vmatrix} a_0&c_0\\a_2&c_2 \end{Vmatrix}, \begin{Vmatrix} b_0&c_0\\b_2&c_2 \end{Vmatrix} \right)\\ &&&=\gcd\left( \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix},\abs{\bA} \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix},\abs{\aA} \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix} \right)\\&&&= \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix}=\mu_A,\qquad\text{and}\\ \mult\Delta_{[BC]}&=\#H(v_0,v_1)&&=\gcd\left( \begin{Vmatrix} a_0&b_0\\a_1&b_1 \end{Vmatrix}, \begin{Vmatrix} a_0&c_0\\a_1&c_1 \end{Vmatrix}, \begin{Vmatrix} b_0&c_0\\b_1&c_1 \end{Vmatrix} \right)\\ &&&=\gcd\left(- \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix},-\abs{\bB} \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix},-\abs{\aB} \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix} \right)\\&&&=- \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix}=\mu_B. \end{alignat*}
For the one-dimensional cone $\Delta_{\tau_0}$, finally, we have of course that
\begin{equation*} \mult\Delta_{\tau_0}=\#H(v_0)=\gcd(a_0,b_0,c_0)=1. \end{equation*}
\subsection{The sums $\Sigma(\cdot)(s_0)$}
We found above that the multiplicities of $\Delta_{[AB]}$ and $\Delta_{\tau_0}$ both equal one; i.e., their corresponding fundamental parallelepipeds contain only one integral point, which must be the origin: $H(v_0,v_3)=H(v_0)=\{(0,0,0)\}$. Hence
\begin{equation*} \Sigma(\Delta_{[AB]})(s_0)=\Sigma(\Delta_{\tau_0})(s_0)=\sum_{h\in\{(0,0,0)\}}p^{\sigma(h)+m(h)s_0}=1. \end{equation*}
Furthermore we saw that the multiplicities of $\delta_A$ and $\Delta_{[AC]}$ are equal:
\begin{equation*} \mu_A=\#H(v_0,v_2,v_3)=\#H(v_0,v_2). \end{equation*}
The inclusion $H(v_0,v_2,v_3)\supseteq H(v_0,v_2)$ thus implies equality:
\begin{equation*} H_A=H(v_0,v_2,v_3)=H(v_0,v_2), \end{equation*}
and therefore,
\begin{equation*} \Sigma_A=\Sigma(\delta_A)(s_0)=\Sigma(\Delta_{[AC]})(s_0)=\sum_{h\in H_A}p^{\sigma(h)+m(h)s_0}. \end{equation*}
Analogously we have
\begin{gather*} H_B=H(v_0,v_1,v_3)=H(v_0,v_1)\qquad\text{and}\\ \Sigma_B=\Sigma(\delta_B)(s_0)=\Sigma(\Delta_{[BC]})(s_0)=\sum_{h\in H_B}p^{\sigma(h)+m(h)s_0}.
\end{gather*} Consistently, we shall also denote \begin{gather*} H_C=H(v_0,v_1,v_2)\qquad\text{and}\\ \Sigma_C=\Sigma(\delta_C)(s_0)=\sum_{h\in H_C}p^{\sigma(h)+m(h)s_0}. \end{gather*} Note that, since $\overline{\Delta_{[AC]}},\overline{\Delta_{[BC]}},\overline{\delta_C}\subseteq\overline{\Delta_C}$, we have that\footnote{In this text, by the dot product $w_1\cdot w_2$ of two complex vectors $w_1(a_1,b_1,c_1),\allowbreak w_2(a_2,b_2,c_2)\in\C^3$, we mean $w_1\cdot w_2=a_1a_2+b_1b_2+c_1c_2$.} \begin{equation*} m(h)=C\cdot h\qquad\text{for all}\qquad h\in H_A\cup H_B\cup H_C\subseteq\overline{\Delta_C}. \end{equation*} Hence, if we denote by $w$ the vector \begin{equation*} w=(1,1,1)+s_0(x_C,y_C,1)\in\C^3, \end{equation*} it holds that \begin{equation*} \Sigma_V=\sum_{h\in H_V}p^{w\cdot h};\qquad V=A,B,C. \end{equation*} \subsection{A new formula for $R_1$} If we denote \begin{equation*} F_1=p^{w\cdot v_1}-1=p^{\sigma_1+m_1s_0}-1\qquad\text{and}\qquad F_2=p^{w\cdot v_2}-1=p^{\sigma_2+m_2s_0}-1, \end{equation*} the results above on the numbers $N_{\tau}$ and the multiplicities of the cones lead to \begin{equation*} \begin{multlined}[.98\textwidth] R_1=\left(\frac{p-1}{p}\right)^3\left[\frac{\Sigma_A}{F_2(p-1)}+\frac{\Sigma_B}{F_1(p-1)}+\frac{\Sigma_C}{F_1F_2}+\frac{1}{p-1}+\frac{\Sigma_A}{F_2}+\frac{\Sigma_B}{F_1}+1\right]\\ -\left(\frac{p-1}{p}\right)^2\frac{p^{s_0}-1}{p^{s_0+1}-1}\left[\frac{N}{(p-1)^2}+\frac{\Sigma_A}{F_2}+\frac{\Sigma_B}{F_1}+\frac{(p-1)^2-N}{(p-1)^2}\right]. \end{multlined} \end{equation*} If we put $R_1'=(p/(p-1))^3R_1$, this formula can be simplified to \begin{equation}\label{formreenaccentfincaseen} R_1'=\frac{1}{1-p^{-s_0-1}}\left(\frac{\Sigma_A}{F_2}+\frac{\Sigma_B}{F_1}+1\right)+\frac{\Sigma_C}{F_1F_2}. \end{equation} Note that the number $N$ disappears from the equation. In what follows we shall prove that $R_1'=0$. \subsection{Formulas for $\Sigma_A$ and $\Sigma_B$} As in Section~\ref{fundpar}, we will consider the set \begin{equation*} H_C=H(v_0,v_1,v_2)=\Z^3\cap\lozenge(v_0,v_1,v_2) \end{equation*} as an additive group, endowed with addition modulo the lattice \begin{equation*} \Lambda(v_0,v_1,v_2)=\Z v_0+\Z v_1+\Z v_2. \end{equation*} In this way, $H_A=\Z^3\cap\lozenge(v_0,v_2)$ and $H_B=\Z^3\cap\lozenge(v_0,v_1)$ become subgroups of $H_C$ that correspond to the subgroups $H_1$ and $H_2$ of $H$ in Section~\ref{fundpar}. From the description of the elements of these groups there, we know that there exist numbers $\xi_A\in\verA$ and $\xi_B\in\verB$ with $\gcd(\xi_A,\mu_A)=\gcd(\xi_B,\mu_B)=1$, such that the $\mu_A$ points of $H_A$ are precisely \begin{alignat}{2} \left\{\frac{i\xi_A}{\mu_A}\right\}v_0&+\frac{i}{\mu_A}v_2;&\qquad&i=0,\ldots,\mu_A-1;\label{stereenmuA}\\\intertext{while the $\mu_B$ points of $H_B$ are given by} \left\{\frac{j\xi_B}{\mu_B}\right\}v_0&+\frac{j}{\mu_B}v_1;&&j=0,\ldots,\mu_B-1.\label{stertweemuB} \end{alignat} Recall that $\xi_A$ and $\xi_B$ are, as elements of \verA\ and \verB, respectively, uniquely determined by \begin{equation}\label{defxiaxibcaseeen} \xi_Av_0+v_2\in\mu_A\Z^3\qquad\text{and}\qquad\xi_Bv_0+v_1\in\mu_B\Z^3. \end{equation} These descriptions allow us to find \lq closed\rq\ formulas for $\Sigma_A$ and $\Sigma_B$. We know that \begin{equation*} \Sigma_A=\Sigma(\delta_A)(s_0)=\Sigma(\Delta_{[AC]})(s_0)=\sum_{h\in H_A}p^{\sigma(h)+m(h)s_0}=\sum_{h\in H_A}p^{w\cdot h}, \end{equation*} with $w=(1,1,1)+s_0(x_C,y_C,1)$. 
Note that since $s_0$ is a candidate pole associated to $\tau_0$, we have that $p^{w\cdot v_0}=p^{\sigma_0+m_0s_0}=1$. Hence $p^{a(w\cdot v_0)}=p^{\{a\}(w\cdot v_0)}$ for every real number $a$. So if we write $h$ as $h=h_0v_0+h_2v_2$, we obtain \begin{equation}\label{formsigmaAfincaseen} \begin{multlined}[.8\textwidth] \Sigma_A=\sum_{h\in H_A}p^{h_0(w\cdot v_0)+h_2(w\cdot v_2)} =\sum_{i=0}^{\mu_A-1}\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}\Bigr)^i\\ =\frac{p^{w\cdot v_2}-1}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}-1} =\frac{F_2}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}-1}. \end{multlined} \end{equation} Completely analogously we find \begin{equation}\label{formsigmaBfincaseen} \Sigma_B=\frac{F_1}{p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1}. \end{equation} \subsection{A formula for $\mu_C=\mult\delta_C$}\label{formulaformuCcaseeen} We know from Section~\ref{fundpar} that $\mu_A\mu_B\mid\mu_C$. We will give a useful interpretation of the quotient $\mu_C/\mu_A\mu_B$. We have the following: \begin{align*} \mu_C&= \begin{vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{vmatrix}\\ &=-a_1 \begin{vmatrix} b_0&c_0\\b_2&c_2 \end{vmatrix} +b_1 \begin{vmatrix} a_0&c_0\\a_2&c_2 \end{vmatrix} -c_1 \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix}.\\\intertext{Using the relations from Subsection~\ref{srelbettvarcaseen}, we continue:} \mu_C&=-a_1\aA \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix} -b_1\bA \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix} -c_1 \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix}\\ &=-\mu_A(a_1\aA+b_1\bA+c_1)\\ &=-\mu_A\left(v_1\cdot\overrightarrow{AC}\right),\\\intertext{and since $v_1\perp\overrightarrow{BC}$, we obtain} \mu_C&=-\mu_A\left(v_1\cdot\overrightarrow{AC}-v_1\cdot\overrightarrow{BC}\right)\\ &=-\mu_A\left(v_1\cdot\overrightarrow{AB}\right). \end{align*} Because the vector $\overrightarrow{AB}$ lies in the $xy$-plane and is perpendicular to $v_0$ and its coordinates have greatest common divisor $\fAB$ and we assume that $x_A>x_B$, it must hold that \begin{equation*} \overrightarrow{AB}=\fAB(-b_0,a_0,0). \end{equation*} Hence \begin{equation*} \mu_C=-\mu_A\fAB(a_0b_1-a_1b_0)=\mu_A\mu_B\fAB. \end{equation*} Next, we will use this formula in describing the points of $H_C$. \subsection{Description of the points of $H_C$}\label{descpointsHCgevaleen} It follows from \eqref{stereenmuA} and \eqref{stertweemuB} that the $\mu_A\mu_B$ points of the subgroup $H_A+H_B\cong H_A\oplus H_B$ of $H_C$ are \begin{equation*} \left\{\frac{i\xi_A\mu_B+j\xi_B\mu_A}{\mu_A\mu_B}\right\}v_0+\frac{j}{\mu_B}v_1+\frac{i}{\mu_A}v_2;\quad\ \ \ i=0,\ldots,\mu_A-1;\quad j=0,\ldots,\mu_B-1. \end{equation*} We know that the $v_2$-coordinates $h_2$ of the points $h\in H_C$ belong to the set \begin{equation*} \left\{0,\frac{1}{\mu_A\fAB},\frac{2}{\mu_A\fAB},\ldots,\frac{\mu_A\fAB-1}{\mu_A\fAB}\right\}, \end{equation*} and that every $l/\mu_A\fAB$ in this set occurs $\mu_B$ times as the $v_2$-coordinate of a point in $H_C$, while every $h\in H_A+H_B$ has a $v_2$-coordinate of the form $i/\mu_A$ with $i\in\verA$, and every such $i/\mu_A$ is the $v_2$-coordinate of exactly $\mu_B$ points in $H_A+H_B$. (Analogously for the $v_1$-coordinates.) In order to describe all the points of $H_C$ in a way as we did in Section~\ref{fundpar} for the points of $H$, we need to find a set of representatives for the elements of $H_C/(H_A+H_B)$. 
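Before doing so, it may help to make the objects $H_C$, $H_A$, and $H_B$ concrete in a small, purely illustrative computation. The following sketch (in Python, for hypothetical primitive vectors that have nothing to do with any particular $f$) enumerates the integral points of a fundamental parallelepiped by brute force, confirms in this instance the determinant formula of Proposition~\ref{multipliciteit}, and exhibits the subgroups of points with vanishing $v_1$-, respectively $v_2$-coordinate; it is meant only to illustrate the group structure used below.
\begin{verbatim}
# Purely illustrative: brute-force enumeration of the integral points of a
# fundamental parallelepiped, for hypothetical primitive vectors.
from itertools import product
from fractions import Fraction

v0, v1, v2 = (1, 1, 2), (1, 2, 1), (1, 3, 4)    # hypothetical vectors

def det3(a, b, c):
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

mu_C = abs(det3(v0, v1, v2))   # multiplicity of the cone spanned by v0, v1, v2

def coords(h):
    # coordinates of h with respect to the basis (v0, v1, v2), via Cramer's rule
    d = det3(v0, v1, v2)
    return (Fraction(det3(h, v1, v2), d),
            Fraction(det3(v0, h, v2), d),
            Fraction(det3(v0, v1, h), d))

# scan a box that certainly contains the (closed) fundamental parallelepiped
bound = sum(abs(x) for v in (v0, v1, v2) for x in v)
H_C = [h for h in product(range(-bound, bound + 1), repeat=3)
       if all(0 <= c < 1 for c in coords(h))]
assert len(H_C) == mu_C        # number of integral points equals the multiplicity

# integral points of the parallelepipeds spanned by (v0, v2) and by (v0, v1)
H_A = [h for h in H_C if coords(h)[1] == 0]
H_B = [h for h in H_C if coords(h)[2] == 0]
print(mu_C, len(H_A), len(H_B), sorted(H_C))
\end{verbatim}
For these particular (hypothetical) vectors one finds $\mu_C=4$, $\mu_A=2$, and $\mu_B=1$, so that $H_A+H_B$ has index two in $H_C$; this is precisely the situation in which non-trivial coset representatives are needed.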
The $\fAB$ cosets of $H_A+H_B$ are characterised by constant $\{h_1\}_{1/\mu_B}$ and constant $\{h_2\}_{1/\mu_A}$, which can each take indeed \fAB\ possible values. From the discussion in Section~\ref{fundpar}, we know there exists a unique point $h^{\ast}\in H_C$ with $v_2$-coordinate $h_2^{\ast}=1/\mu_A\fAB$ and $v_1$-coordinate $h_1^{\ast}=\eta/\mu_B\fAB<1/\mu_B$, and that the $\fAB$ multiples $\{kh^{\ast}\}$; $k=0,\ldots,\fAB-1$; of $h^{\ast}$ in $H_C$ make good representatives for the cosets of $H_A+H_B$. We will now try to find $h^{\ast}$. If we denote by $M$ the matrix \begin{equation*} M= \begin{pmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{pmatrix} \end{equation*} with $\det M=\mu_C$, it follows from $(\adj M)M=(\det M)I=\mu_CI$ that \begin{equation*} \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix} v_0- \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix} v_1+ \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix} v_2=\Psi v_0-\mu_A v_1-\mu_B v_2=(0,0,\mu_C), \end{equation*} and hence \begin{equation*} h^{\ast}=\left\{\frac{-\Psi}{\mu_C}\right\}v_0+\frac{1}{\mu_B\fAB}v_1+\frac{1}{\mu_A\fAB}v_2\in H_C \end{equation*} is the point we are looking for. So, considering all possible sums (in the group $H_C$) of one of the $\mu_A\mu_B$ points \begin{equation*} \left\{\frac{i\xi_A\mu_B+j\xi_B\mu_A}{\mu_A\mu_B}\right\}v_0+\frac{j}{\mu_B}v_1+\frac{i}{\mu_A}v_2;\quad\ \ \ i=0,\ldots,\mu_A-1;\quad j=0,\ldots,\mu_B-1; \end{equation*} of $H_A+H_B$ and one of the \fAB\ chosen representatives \begin{equation*} \{kh^{\ast}\}=\left\{\frac{-k\Psi}{\mu_C}\right\}v_0+\frac{k}{\mu_B\fAB}v_1+\frac{k}{\mu_A\fAB}v_2;\qquad k=0,\ldots,\fAB-1; \end{equation*} for the cosets of $H_A+H_B$ in $H_C$, we find the $\mu_C=\mu_A\mu_B\fAB$ points of $H_C$ as \begin{multline*} \left\{\frac{i\xi_A\mu_B\fAB+j\xi_B\mu_A\fAB-k\Psi}{\mu_C}\right\}v_0+\frac{j\fAB+k}{\mu_B\fAB}v_1+\frac{i\fAB+k}{\mu_A\fAB}v_2;\\ i=0,\ldots,\mu_A-1;\quad j=0,\ldots,\mu_B-1;\quad k=0,\ldots,\fAB-1. \end{multline*} Using the above description of the points of $H_C$, we will next derive a formula for $\Sigma_C$. \subsection{A formula for $\Sigma_C$} Recall that \begin{equation*} \Sigma_C=\Sigma(\delta_C)(s_0)=\sum_{h\in H_C}p^{\sigma(h)+m(h)s_0}=\sum_{h\in H_C}p^{w\cdot h}, \end{equation*} with $w=(1,1,1)+s_0(x_C,y_C,1)$. If we write $h=h_0v_0+h_1v_1+h_2v_2$ and remember that $p^{w\cdot v_0}=1$ and $\mu_C=\mu_A\mu_B\fAB$, we find \begin{align*} \Sigma_C&=\sum_{h\in H_C}p^{h_0(w\cdot v_0)+h_1(w\cdot v_1)+h_2(w\cdot v_2)}\\ &=\sum_{i=0}^{\mu_A-1}\sum_{j=0}^{\mu_B-1}\sum_{k=0}^{\fAB-1}p^{\frac{i\xi_A\mu_B\fAB+j\xi_B\mu_A\fAB-k\Psi}{\mu_C}(w\cdot v_0)+\frac{j\fAB+k}{\mu_B\fAB}(w\cdot v_1)+\frac{i\fAB+k}{\mu_A\fAB}(w\cdot v_2)}\\ &=\sum_i\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}\Bigr)^i\sum_j\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^j\sum_k\Bigl(p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_C}}\Bigr)^k\\ &=\frac{F_2}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}-1}\;\frac{F_1}{p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1}\;\frac{p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_A\mu_B}}-1}{p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_C}}-1}. 
\end{align*} We already observed in Subsection~\ref{descpointsHCgevaleen} that if we put \begin{equation*} M= \begin{pmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{pmatrix}, \end{equation*} the identity $(\adj M)M=(\det M)I=\mu_CI$ implies that \begin{equation}\label{identiteitreccaseeen} \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix} v_0- \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix} v_1+ \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix} v_2=\Psi v_0-\mu_A v_1-\mu_B v_2=(0,0,\mu_C). \end{equation} Making the dot product with $w=(1,1,1)+s_0(x_C,y_C,1)$ on all sides of the equation yields \begin{equation*} -\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_2)=\mu_C(-s_0-1). \end{equation*} Hence we find \begin{equation}\label{formsigmaCfincaseen} \Sigma_C=\frac{F_1F_2}{p^{-s_0-1}-1}\frac{p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_A\mu_B}}-1}{\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}-1\Bigr)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1\Bigr)}. \end{equation} \subsection{Proof of $R_1'=0$} Bringing together Equations (\ref{formreenaccentfincaseen}, \ref{formsigmaAfincaseen}, \ref{formsigmaBfincaseen}, \ref{formsigmaCfincaseen}) for $R_1',\allowbreak\Sigma_A,\allowbreak\Sigma_B,$ and $\Sigma_C$, we obtain that \begin{align*} R_1'&=\frac{1}{1-p^{-s_0-1}}\left(\frac{\Sigma_A}{F_2}+\frac{\Sigma_B}{F_1}+1\right)+\frac{\Sigma_C}{F_1F_2}\\ &=\frac{1}{1-p^{-s_0-1}}\left(\frac{1}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}-1}+\frac{1}{p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1}+1\right)\\* &\qquad\qquad\qquad\qquad+\frac{1}{p^{-s_0-1}-1}\frac{p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_A\mu_B}}-1}{\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}-1\Bigr)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1\Bigr)}\\ &=\frac{1}{p^{-s_0-1}-1}\frac{p^{\frac{(\xi_A\mu_B+\xi_B\mu_A)(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_A\mu_B}}-1}{\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}-1\Bigr)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1\Bigr)}\\* &\qquad\qquad\qquad\qquad-\frac{1}{1-p^{-s_0-1}}\frac{p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_A\mu_B}}-1}{\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_2}{\mu_A}}-1\Bigr)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1\Bigr)}. \end{align*} Hence it is sufficient to prove that \begin{equation*} p^{\frac{(\xi_A\mu_B+\xi_B\mu_A)(w\cdot v_0)}{\mu_A\mu_B}}=p^{-\frac{\Psi(w\cdot v_0)}{\mu_A\mu_B}}, \end{equation*} or, as $p^{w\cdot v_0}=1$, equivalently, that \begin{equation}\label{eqprovecaseeen} \frac{\xi_A\mu_B+\xi_B\mu_A+\Psi}{\mu_A\mu_B}\in\Z. \end{equation} Well, it follows from \eqref{defxiaxibcaseeen} and \eqref{identiteitreccaseeen} that \begin{multline*} (\xi_A\mu_B+\xi_B\mu_A+\Psi)v_0\\=\mu_B(\xi_Av_0+v_2)+\mu_A(\xi_Bv_0+v_1)+(\Psi v_0-\mu_A v_1-\mu_B v_2)\in\mu_A\mu_B\Z^3. \end{multline*} The primitivity of $v_0$ now implies \eqref{eqprovecaseeen}, concluding Case~I. \section{Case~II: exactly one facet contributes to $s_0$ and this facet is a non-compact $B_1$-facet} \subsection{Figure and notations} We shall assume that the one facet $\tau_0$ contributing to $s_0$ is non-compact for the variable $x$, and $B_1$ with respect to the variable $z$. We denote by $A(x_A,y_A,0)$ the vertex of $\tau_0$ in the $xy$-plane and by $B(x_B,y_B,1)$ the vertex in the plane $\{z=1\}$. The situation is sketched in Figure~\ref{figcase2}. 
\begin{figure} \psset{unit=.03298611111\textwidth \centering \subfigure[Non-compact $B_1$-facet $\tau_0$, its subfaces, and its neighbor facets $\tau_1,\tau_2,$ and $\tau_3$]{ \begin{pspicture}(-7.58,-6.75)(6.82,5.8 {\footnotesize \pstThreeDCoor[xMin=0,yMin=0,zMin=0,xMax=10,yMax=9,zMax=5.8,linecolor=black,linewidth=.7pt] { \psset{linecolor=black,linewidth=.3pt,linestyle=dashed,subticks=1} \pstThreeDLine(2,0,0)(2,3,0)\pstThreeDLine(2,3,0)(0,3,0) \pstThreeDLine(10,0,0)(10,3,0)\pstThreeDLine(10,3,0)(2,3,0) \pstThreeDLine(4,0,0)(4,5,0)\pstThreeDLine(4,5,0)(0,5,0) \pstThreeDLine(10,3,0)(10,9,0)\pstThreeDLine(10,9,0)(0,9,0) \pstThreeDLine(2,0,1)(2,3,1)\pstThreeDLine(2,3,1)(0,3,1) \pstThreeDLine(10,0,1)(10,3,1)\pstThreeDLine(0,0,1)(2,0,1) \pstThreeDLine(2,0,1)(10,0,1)\pstThreeDLine(0,0,1)(0,3,1) \pstThreeDLine(2,0,0)(2,0,1)\pstThreeDLine(10,0,0)(10,0,1) \pstThreeDLine(0,3,0)(0,3,1)\pstThreeDLine(10,3,0)(10,3,1) } \pstThreeDPut[pOrigin=c](6.3,3.9,0.5){\psframebox*[framesep=0.8pt,framearc=0.3]{\phantom{$\tau_0$}}} { \psset{dotstyle=none,dotscale=1,drawCoor=false} \psset{linecolor=black,linewidth=1pt,linejoin=1} \psset{fillcolor=lightgray,opacity=.6,fillstyle=solid} \pstThreeDLine(10,5,0)(4,5,0)(2,3,1)(10,3,1) } \pstThreeDPut[pOrigin=tl](4,5,0){$A(x_A,y_A,0)$} \pstThreeDPut[pOrigin=bl](2,3,1){\psframebox*[framesep=-.3pt,framearc=1]{$B(x_B,y_B,1)$}} \pstThreeDPut[pOrigin=c](6.3,3.9,0.5){$\tau_0$} \pstThreeDPut[pOrigin=c](2.62,4.51,0.45){$\tau_1$} \pstThreeDPut[pOrigin=c](7.5,7,0){$\tau_3$} \pstThreeDPut[pOrigin=rb](6.2,2.8,1.1){$\tau_2$} \pstThreeDPut[pOrigin=b](10,3,1.27){$l$} } \end{pspicture} }\hfill\subfigure[Relevant cones associated to relevant faces of~\Gf]{ \psset{unit=.03125\textwidth \begin{pspicture}(-7.6,-3.8)(7.6,9.6 {\footnotesize \pstThreeDCoor[xMin=0,yMin=0,zMin=0,xMax=10,yMax=10,zMax=10,nameZ={},linecolor=gray,linewidth=.7pt] \psset{linecolor=gray,linewidth=.3pt,linejoin=1,linestyle=dashed,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(10,0,0)(0,10,0)\pstThreeDLine(0,10,0)(0,0,10)\pstThreeDLine(0,0,10)(10,0,0) } \psset{linecolor=black,linewidth=.7pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,0)(0,0,10 } \psset{labelsep=2pt} \uput[90](0,1.7){\psframebox*[framesep=0.3pt,framearc=1]{\darkgray\scriptsize$v_3$}} } \psset{linecolor=black,linewidth=.7pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,0)(0,3.33,6.67 \pstThreeDLine(0,0,0)(2.63,1.58,5.79 \pstThreeDLine(0,0,0)(0,6.67,3.33 } \psset{linecolor=darkgray,linewidth=.8pt,linejoin=1,arrows=->,arrowscale=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,0)(0,1.5,3 \pstThreeDLine(0,0,0)(2.10,1.27,4.62 \pstThreeDLine(0,0,0)(0,4,2 \pstThreeDLine(0,0,0)(0,0,2 } \psset{linecolor=white,linewidth=2pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(2.24,1.84,5.92)(1.32,2.45,6.24 \pstThreeDLine(2.37,2.09,5.54)(1.05,4.63,4.32 } \psset{linecolor=black,linewidth=.7pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,10)(0,3.33,6.67 \pstThreeDLine(0,6.67,3.33)(0,3.33,6.67)(2.63,1.58,5.79 } \psset{linecolor=black,linewidth=.7pt,linejoin=1,linestyle=dashed,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,10)(2.63,1.58,5.79 \pstThreeDLine(2.63,1.58,5.79)(0,6.67,3.33 } \psset{labelsep=2pt} \uput[-30](.7,1.4){\darkgray\scriptsize$v_0$} \uput[105](1.96,.224){\darkgray\scriptsize$v_2$} } \psset{labelsep=1.5pt} \uput[202](-.3,1.4){\darkgray\scriptsize$v_1$} } \psset{labelsep=3.8pt} \uput[30](2.35,4.58){$\Delta_{\tau_0}$} \uput[180](-.73,3.53){$\Delta_{\tau_1}$} 
\uput[30](3.54,2.55){$\Delta_l$} \uput[30](4.73,.52){$\Delta_{\tau_2}$} } \rput(.7,5.2){\psframebox*[framesep=0.3pt,framearc=1]{\footnotesize$\delta_A$}} \rput(1.9,3.3){\psframebox*[framesep=0.7pt,framearc=1]{\footnotesize$\delta_B$}} \pstThreeDNode(1.32,2.45,6.24){AB} \rput[B](0,9.075){$z,\Delta_{\tau_3}$} \rput[Br](6.91,9.075){\rnode{ABlabel}{$\Delta_{[AB]}$}} \ncline[linewidth=.3pt,nodesepB=3.5pt,nodesepA=1pt]{->}{ABlabel}{AB} } \end{pspicture} } \caption{Case II: the only facet contributing to $s_0$ is the non-compact $B_1$-facet~$\tau_0$} \label{figcase2} \end{figure} If we denote by $\overrightarrow{AB}(x_B-x_A,y_B-y_A,1)=(\alpha,\beta,1)$ the vector along the edge $[AB]$, then the unique primitive vector $v_0\in\Zplus^3$ perpendicular to $\tau_0$ equals $v_0(0,1,-\beta)$, and an equation for the affine hull of $\tau_0$ is given by \begin{equation*} \aff(\tau_0)\leftrightarrow y-\beta z=y_A. \end{equation*} Note that since $\tau_0$ is $B_1$, we must have $\beta<0$ and hence $y_B<y_A$. The numerical data associated to $\tau_0$ are therefore $(m(v_0),\sigma(v_0))=(y_A,1-\beta)$, and thus we assume \begin{equation*} s_0=\frac{\beta-1}{y_A}+\frac{2n\pi i}{y_A\log p}\qquad\text{for some $n\in\Z$.} \end{equation*} We denote by $\tau_1$ the facet of \Gf\ that has the edge $[AB]$ in common with $\tau_0$, by $\tau_2$ the non-compact facet of \Gf\ sharing with $\tau_0$ a half-line with endpoint $B$, and finally, by $\tau_3$ the facet lying in the $xy$-plane. Primitive vectors in $\Zplus^3$ perpendicular to $\tau_1,\tau_2,\tau_3$ will be denoted by \begin{equation*} v_1(a_1,b_1,c_1),\quad v_2(0,b_2,c_2),\quad v_3(0,0,1), \end{equation*} respectively, and equations for the affine supports of these facets are denoted \begin{alignat*}{3} \aff(\tau_1)&\leftrightarrow a_1&x+b_1y&+c_1&z&=m_1,\\ \aff(\tau_2)&\leftrightarrow & b_2y&+c_2&z&=m_2,\\ \aff(\tau_3)&\leftrightarrow & & &z&=0, \end{alignat*} for certain $m_1,m_2\in\Zplus$. If we put $\sigma_1=a_1+b_1+c_1$ and $\sigma_2=b_2+c_2$, then the numerical data for $\tau_1,\tau_2,\tau_3$ are $(m_1,\sigma_1),(m_2,\sigma_2),$ and $(0,1)$, respectively. \subsection{The candidate pole $s_0$ and the contributions to its residue} The aim of this section is to prove that $s_0$ is not a pole of \Zof; i.e., we want to demonstrate that \begin{equation*} R_1=\lim_{s\to s_0}\left(p^{1-\beta+y_As}-1\right)\Zof(s)=0. \end{equation*} Since we work with the local version of Igusa's $p$-adic zeta function, we only consider the compact faces of \Gf\ in the formula for $\Zof(s)$ for non-degenerated $f$. Of course, in order to find an expression for $R_1$, we only need to account those compact faces that contribute to $s_0$, i.e., the compact subfaces $A,B,$ and $[AB]$ of $\tau_0$: \begin{equation*} R_1=\lim_{s\to s_0}\left(p^{1-\beta+y_As}-1\right)\sum_{\tau=A,B,[AB]}L_{\tau}(s)S(\Dtu)(s). \end{equation*} As in Case~I, we note that vertices $A$ and $B$ may be contained in facets other than $\tau_i$; $i=0,\ldots,3$; and subsequently their associated cones $\Delta_A$ and $\Delta_B$ may be not simplicial. Therefore, instead of $\Delta_A$ and $\Delta_B$, we shall consider the simplicial cones \begin{equation*} \dA=\cone(v_0,v_1,v_3)\qquad\text{and}\qquad\dB=\cone(v_0,v_1,v_2) \end{equation*} as members of simplicial decompositions of $\Delta_A$ and $\Delta_B$, respectively. 
It follows as before that of all cones in these decompositions, only $\dA$ and $\dB$ are relevant in the calculation of $R_1$: \begin{multline*} R_1=L_A(s_0)\frac{\Sigma(\delta_A)(s_0)}{\Feen(p-1)}\\ +L_B(s_0)\frac{\Sigma(\delta_B)(s_0)}{\Feen\Ftwee}+L_{[AB]}(s_0)\frac{\Sigma(\Delta_{[AB]})(s_0)}{p^{\sigma_1+m_1s_0}-1}. \end{multline*} \subsection{The factors $L_{\tau}(s_0)$, the sums $\Sigma(\cdot)(s_0)$ and a new formula for $R_1$} As in Case~I we find easily that $N_A=N_B=0$ and $N_{[AB]}=(p-1)^2$. Hence the factors $L_{\tau}(s_0)$ are as follows: \begin{gather*} L_A(s_0)=L_B(s_0)=\left(\frac{p-1}{p}\right)^3\qquad\text{and}\\ L_{[AB]}(s_0)=\left(\frac{p-1}{p}\right)^3-\left(\frac{p-1}{p}\right)^2\frac{p^{s_0}-1}{p^{s_0+1}-1}. \end{gather*} Let us look at the multiplicities of $\dA,\dB,$ and $\Delta_{[AB]}$. For $\mult\delta_A$ we find \begin{equation*} \mu_A=\mult\delta_A=\#H(v_0,v_1,v_3)= \begin{Vmatrix} 0&1&-\beta\\a_1&b_1&c_1\\0&0&1 \end{Vmatrix}=a_1>0. \end{equation*} Although this non-compact edge does not appear in the formula for $R_1$, we also mention the multiplicity $\mu_l$ of the cone $\Delta_l$ associated to the half-line $l=\tau_0\cap\tau_2$: \begin{equation*} \mu_l=\mult\Delta_l=\#H(v_0(0,1,-\beta),v_2(0,b_2,c_2))= \begin{Vmatrix} 1&-\beta\\b_2&c_2 \end{Vmatrix}. \end{equation*} Since the coordinate system $(v_0,v_2)$ for the $yz$-plane has the opposite o\-ri\-en\-ta\-tion of the coordinate system $(e_y(0,1,0),e_z(0,0,1))$ we work in, we have that \begin{equation*} \mu_l= \begin{Vmatrix} 1&-\beta\\b_2&c_2 \end{Vmatrix}=- \begin{vmatrix} 1&-\beta\\b_2&c_2 \end{vmatrix}=-\beta b_2-c_2>0. \end{equation*} We see now that \begin{multline*} \mu_B=\mult\delta_B=\#H(v_0,v_1,v_2)\\= \begin{Vmatrix} 0&1&-\beta\\a_1&b_1&c_1\\0&b_2&c_2 \end{Vmatrix}=a_1 \begin{Vmatrix} 1&-\beta\\b_2&c_2 \end{Vmatrix}=a_1(-\beta b_2-c_2)=\mu_A\mu_l. \end{multline*} Finally, for $\mult\Delta_{[AB]}$ we obtain \begin{multline*} \mult\Delta_{[AB]}=\#H(v_0,v_1)=\gcd\left( \begin{Vmatrix} 0&1\\a_1&b_1 \end{Vmatrix}, \begin{Vmatrix} 0&-\beta\\a_1&c_1 \end{Vmatrix}, \begin{Vmatrix} 1&-\beta\\b_1&c_1 \end{Vmatrix} \right)\\ =\gcd(a_1,-\beta a_1,\abs{\beta b_1+c_1})=\gcd(a_1,-\beta a_1,\abs{\alpha}a_1)=a_1=\mu_A. \end{multline*} In the third to last equality we used that $\beta b_1+c_1=-\alpha a_1$, which follows from the fact that $\overrightarrow{AB}(\alpha,\beta,1)\perp v_1(a_1,b_1,c_1)$. Since $H(v_0,v_1,v_3)\supseteq H(v_0,v_1)$ and $\mu_A=\#H(v_0,v_1,v_3)=\#H(v_0,v_1)$, we have that \begin{equation*} H_A=H(v_0,v_1,v_3)=H(v_0,v_1), \end{equation*} and therefore, \begin{equation*} \Sigma_A=\Sigma(\delta_A)(s_0)=\Sigma(\Delta_{[AB]})(s_0)=\sum_{h\in H_A}p^{\sigma(h)+m(h)s_0}=\sum_{h\in H_A}p^{w\cdot h}, \end{equation*} with $w=(1,1,1)+s_0(x_B,y_B,1)\in\C^3$. Furthermore we denote \begin{gather*} H_l=H(v_0,v_2),\qquad H_B=H(v_0,v_1,v_2),\\ \Sigma_B=\Sigma(\delta_B)(s_0)=\sum_{h\in H_B}p^{\sigma(h)+m(h)s_0}=\sum_{h\in H_B}p^{w\cdot h},\\ F_1=p^{w\cdot v_1}-1=p^{\sigma_1+m_1s_0}-1,\qquad\text{and}\qquad F_2=p^{w\cdot v_2}-1=p^{\sigma_2+m_2s_0}-1. \end{gather*} The considerations above result in the following concrete formula for $R_1$: \begin{equation*} R_1=\left(\frac{p-1}{p}\right)^3\left[\frac{\Sigma_A}{F_1(p-1)}+\frac{\Sigma_B}{F_1F_2}+\frac{\Sigma_A}{F_1}\right] -\left(\frac{p-1}{p}\right)^2\frac{p^{s_0}-1}{p^{s_0+1}-1}\frac{\Sigma_A}{F_1}. 
\end{equation*} With $R_1'=(p/(p-1))^3R_1$, this can be simplified to \begin{equation}\label{formreenaccentcasetwee} R_1'=\frac{1}{1-p^{-s_0-1}}\frac{\Sigma_A}{F_1}+\frac{\Sigma_B}{F_1F_2}. \end{equation} Next, we will prove that $R_1'=0$. \subsection{Proof of $R_1'=0$} First, note that \begin{equation*} -b_2v_0+v_2=-b_2(0,1,-\beta)+(0,b_2,c_2)=-(0,0,-\beta b_2-c_2)=-(0,0,\mu_l) \end{equation*} yields \begin{equation}\label{vectidcasetwee} \frac{-b_2}{\mu_l}v_0+\frac{1}{\mu_l}v_2=(0,0,-1)\in\Z^3\quad\text{ and }\quad p^{\frac{-b_2(w\cdot v_0)+w\cdot v_2}{\mu_l}}=p^{-s_0-1}, \end{equation} with $w=(1,1,1)+s_0(x_B,y_B,1)$. Let us, as before, consider \begin{equation*} H_B=H(v_0,v_1,v_2)=\Z^3\cap\lozenge(v_0,v_1,v_2) \end{equation*} as a group, endowed with addition modulo $\Z v_0+\Z v_1+\Z v_2$. Then, by \eqref{vectidcasetwee} and Theorem~\ref{algfp}, there exists a $\xi_A\in\verA$ such that the elements of the subgroups $H_A=H(v_0,v_1)$ and $H_l=H(v_0,v_2)$ of $H_B$ are given by \begin{align*} \left\{\frac{i\xi_A}{\mu_A}\right\}v_0+\frac{i}{\mu_A}v_1;&\qquad i=0,\ldots,\mu_A-1;\\\shortintertext{and} \left\{\frac{-jb_2}{\mu_l}\right\}v_0+\frac{j}{\mu_l}v_2;&\qquad j=0,\ldots,\mu_l-1; \end{align*} respectively. Furthermore, we found above that in this special case \begin{equation*} \#H_B=\mu_B=\mu_A\mu_l=\#H_A\#H_l. \end{equation*} Hence $H_A\cap H_l=\{(0,0,0)\}$ implies that $H_B=H_A+H_l\cong H_A\oplus H_l$ and its elements are the following: \begin{equation*} \left\{\frac{i\xi_A\mu_l-jb_2\mu_A}{\mu_A\mu_l}\right\}v_0+\frac{i}{\mu_A}v_1+\frac{j}{\mu_l}v_2;\quad\ \ \ i=0,\ldots,\mu_A-1;\quad j=0,\ldots,\mu_l-1. \end{equation*} We can now easily calculate $\Sigma_A$ and $\Sigma_B$. If, for $h\in H_B$, we denote by $(h_0,h_1,h_2)$ the coordinates of $h$ with respect to the basis $(v_0,v_1,v_2)$ and keep in mind that $p^{w\cdot v_0}=1$, we obtain \begin{equation}\label{formsigmaacasetwee} \begin{aligned} \Sigma_A&=\sum_{h\in H_A}p^{w\cdot h}=\sum_hp^{h_0(w\cdot v_0)+h_1(w\cdot v_1)}\\ &=\sum_{i=0}^{\mu_A-1}\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}\Bigr)^i =\frac{p^{w\cdot v_1}-1}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}-1} =\frac{F_1}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}-1}, \end{aligned} \end{equation} while $\Sigma_B$ is given by \begin{align} \Sigma_B&=\sum_{h\in H_B}p^{w\cdot h}\notag\\ &=\sum_hp^{h_0(w\cdot v_0)+h_1(w\cdot v_1)+h_2(w\cdot v_2)}\notag\\ &=\sum_{i=0}^{\mu_A-1}\sum_{j=0}^{\mu_l-1}p^{\frac{i\xi_A\mu_l-jb_2\mu_A}{\mu_A\mu_l}(w\cdot v_0)+\frac{i}{\mu_A}(w\cdot v_1)+\frac{j}{\mu_l}(w\cdot v_2)}\notag\\ &=\sum_i\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}\Bigr)^i\sum_j\Bigl(p^{\frac{-b_2(w\cdot v_0)+w\cdot v_2}{\mu_l}}\Bigr)^j\notag\\ &=\frac{F_1}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}-1}\;\frac{F_2}{p^{\frac{-b_2(w\cdot v_0)+w\cdot v_2}{\mu_l}}-1}\notag\\ &=\frac{1}{p^{-s_0-1}-1}\;\frac{F_1F_2}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}-1},\label{formsigmabcasetwee} \end{align} where we used \eqref{vectidcasetwee} in the last step. By Equations~(\ref{formreenaccentcasetwee}, \ref{formsigmaacasetwee}, \ref{formsigmabcasetwee}) we have $R_1'=0$. This concludes Case~II. 
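For the reader's convenience we note that, written out, the cancellation is immediate: substituting \eqref{formsigmaacasetwee} and \eqref{formsigmabcasetwee} into \eqref{formreenaccentcasetwee} gives
\begin{equation*}
R_1'=\left(\frac{1}{1-p^{-s_0-1}}+\frac{1}{p^{-s_0-1}-1}\right)\frac{1}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}-1}=0,
\end{equation*}
independently of the precise value of $\xi_A$.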
\begin{figure} \centering \psset{unit=.05905773059\textwidth \subfigure[$B_1$-simplices $\tau_0$ and $\tau_1$, their subfaces and neighbor facets $\tau_2,\tau_3,$ and $\tau_4$]{ \begin{pspicture}(-7.54,-5.7)(7.53,4.16 \pstThreeDCoor[xMin=0,yMin=0,zMin=0,xMax=10,yMax=10,zMax=4,labelsep=5pt,linecolor=black,linewidth=1.0pt] { \psset{linecolor=black,linewidth=.5pt,linestyle=dashed,subticks=1} \pstThreeDPlaneGrid[planeGrid=xy](0,0)(9,4) \pstThreeDPlaneGrid[planeGrid=xy](0,0)(5,5) \pstThreeDPlaneGrid[planeGrid=xy](0,0)(2,8) \pstThreeDPlaneGrid[planeGrid=xz](0,0)(3,1) \pstThreeDPlaneGrid[planeGrid=yz](0,0)(3,1) \pstThreeDPlaneGrid[planeGrid=xy,planeGridOffset=1](0,0)(3,3) \pstThreeDPlaneGrid[planeGrid=xz,planeGridOffset=3](0,0)(3,1) \pstThreeDPlaneGrid[planeGrid=yz,planeGridOffset=3](0,0)(3,1) } \pstThreeDPut[pOrigin=c](5.01,3.69,0){\psframebox*[framesep=.6pt,framearc=0]{\phantom{$\tau_0$}}} \pstThreeDPut[pOrigin=c](3.54,5.33,0.33){\psframebox*[framesep=.7pt,framearc=1]{\phantom{$\tau_1$}}} { \psset{dotstyle=none,dotscale=1,drawCoor=false} \psset{linecolor=black,linewidth=1.5pt,linejoin=1} \psset{fillcolor=lightgray,opacity=.6,fillstyle=solid} \pstThreeDLine(3,3,1)(9,4,0)(5,5,0)(3,3,1)(2,8,0)(5,5,0) } \pstThreeDPut[pOrigin=t](9,4,-0.24){$A$} \pstThreeDPut[pOrigin=t](5,5,-0.25){$B$} \pstThreeDPut[pOrigin=t](2,8,-0.24){$C$} \pstThreeDPut[pOrigin=b](3,3,1.25){$D$} \pstThreeDPut[pOrigin=c](5,3.69,0){$\tau_0$} \pstThreeDPut[pOrigin=c](5,1.97,0){\psframebox*[framesep=1pt,framearc=0]{$\tau_3$}} \pstThreeDPut[pOrigin=c](3.5,5.33,0.33){$\tau_1$} \pstThreeDPut[pOrigin=c](1.55,4.89,0.33){$\tau_2$} \pstThreeDPut[pOrigin=l](7.45,7.5,0){$\ \tau_4$} \pstThreeDPut[pOrigin=br](0.06,-0.06,1.12){$1$} \end{pspicture} } \\[+3.8ex] \subfigure[Relevant cones associated to relevant faces of~\Gf]{ \psset{unit=.05890138981\textwidth \begin{pspicture}(-7.57,-3.85)(7.54,9.33 \pstThreeDCoor[xMin=0,yMin=0,zMin=0,xMax=10,yMax=10,zMax=10,nameZ={},labelsep=5pt,linecolor=gray,linewidth=1.0pt] \psset{linecolor=gray,linewidth=.5pt,linejoin=1,linestyle=dashed,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(10,0,0)(0,10,0)\pstThreeDLine(0,10,0)(0,0,10)\pstThreeDLine(0,0,10)(10,0,0) } \psset{linecolor=black,linewidth=1.0pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,0)(0,0,10 } \psset{labelsep=3pt} \uput[90](0,1.7){\psframebox*[framesep=1.5pt,framearc=0]{\darkgray$v_4$}} } \psset{linecolor=black,linewidth=1.0pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,0)(1.17,2.65,6.18 \pstThreeDLine(0,0,0)(2.63,1.58,5.79 \pstThreeDLine(0,0,0)(8.15,1.48,.370 \pstThreeDLine(0,0,0)(.573,6.87,2.58 } \psset{linecolor=darkgray,linewidth=1.2pt,linejoin=1,arrows=->,arrowscale=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,0)(.585,1.32,3.09 \pstThreeDLine(0,0,0)(2.10,1.27,4.62 \pstThreeDLine(0,0,0)(4.89,.888,.222 \pstThreeDLine(0,0,0)(.229,2.75,1.03 \pstThreeDLine(0,0,0)(0,0,2 } \psset{linecolor=white,linewidth=4pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(2.19,1.90,5.90)(1.75,2.22,6.03 \pstThreeDLine(2.43,2.11,5.47)(1.19,5.28,3.55 } \psset{linecolor=black,linewidth=1.0pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,10)(1.17,2.65,6.18 \pstThreeDLine(0,0,10)(2.63,1.58,5.79 \pstThreeDLine(.573,6.87,2.58)(1.17,2.65,6.18)(2.63,1.58,5.79)(8.15,1.48,.370 } \psset{linecolor=black,linewidth=1.0pt,linejoin=1,linestyle=dashed,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,10)(8.15,1.48,.370 \pstThreeDLine(0,0,10)(.573,6.87,2.58 
\pstThreeDLine(8.15,1.48,.370)(.573,6.87,2.58 \pstThreeDLine(2.63,1.58,5.79)(.573,6.87,2.58 } \psset{labelsep=3pt} \uput[-22](.275,1.1){\darkgray$v_0$} \uput[-60](-1.3,-.9){\darkgray$v_2$} \uput[80](.9,-0.1){\darkgray$v_3$} } \psset{labelsep=2.5pt} \uput[202](-.3,1.4){\darkgray$v_1$} } \psset{labelsep=5pt} \psdots[dotsize=10pt,linecolor=white](5.3,-.54) \uput[0](4.45,-.4){$\Delta_{\tau_3}$} } \psset{labelsep=5pt} \uput[180](-4.7,-3.1){$\Delta_{\tau_2}$} } \rput(1.9,3.75){$\delta_A$} \rput(.17,4.9){\psframebox*[framesep=1.5pt,framearc=0]{$\Delta_B$}} \rput(-1.43,3.5){$\delta_C$} \rput(1.48,2.6){$\delta_1$} \rput(1.85,1.55){\psframebox*[framesep=1.3pt,framearc=0]{$\delta_2$}} \rput(-1.3,.25){$\delta_3$} \pstThreeDNode(1.17,2.7,6.18){dt0} \pstThreeDNode(2.63,1.58,5.79){dt1} \pstThreeDNode(.585,1.32,8.10){AB} \pstThreeDNode(.870,4.76,4.38){AD} \pstThreeDNode(1.32,.790,7.90){BC} \pstThreeDNode(1.90,2.12,6.00){BD} \pstThreeDNode(5.40,1.53,3.08){CD} \rput[Bl](-7.6,2.65){\rnode{CDlabel}{$\Delta_{[CD]}$}} \rput[Bl](-7.6,8.94){\rnode{dt1label}{$\Delta_{\tau_1}$}} \rput[Bl](-4.36,8.94){\rnode{BClabel}{$\Delta_{[BC]}$}} \rput[B](0,8.94){$z,\Delta_{\tau_4}$} \rput[Br](4.36,8.94){\rnode{ABlabel}{$\Delta_{[AB]}$}} \rput[Br](7.6,8.94){\rnode{dt0label}{$\Delta_{\tau_0}$}} \rput[Br](7.6,4.8){\rnode{BDlabel}{$\Delta_{[BD]}$}} \rput[Br](7.6,.6){\rnode{ADlabel}{$\Delta_{[AD]}$}} \ncline[linewidth=.3pt,nodesepB=2pt,nodesepA=1pt]{->}{dt0label}{dt0} \ncline[linewidth=.3pt,nodesepB=2.5pt,nodesepA=2pt]{->}{dt1label}{dt1} \ncline[linewidth=.3pt,nodesepB=2pt,nodesepA=1.5pt]{->}{ABlabel}{AB} \ncline[linewidth=.3pt,nodesepB=2.5pt,nodesepA=1pt]{->}{ADlabel}{AD} \ncline[linewidth=.3pt,nodesepB=2pt,nodesepA=1pt]{->}{BClabel}{BC} \nccurve[linewidth=.3pt,nodesepB=2pt,nodesepA=1pt,angleA=217,angleB=-37]{->}{BDlabel}{BD} \ncline[linewidth=.3pt,nodesepB=2pt,nodesepA=2.5pt]{->}{CDlabel}{CD} \end{pspicture} } \caption{Case III: the only facets contributing to $s_0$ are the $B_1$-simplices $\tau_0$ and $\tau_1$} \label{figcase3} \end{figure} \section{Case~III: exactly two facets of \Gf\ contribute to $s_0$, and these two facets are both $B_1$-simplices with respect to a same variable and have an edge in common} \subsection{Figure and notations} Without loss of generality, we may assume that the $B_1$-simplices $\tau_0$ and $\tau_1$ contributing to $s_0$ are as drawn in Figure~\ref{figcase3}. Let us fix notations. We denote, as indicated in Figure~\ref{figcase3}, the vertices of $\tau_0$ and $\tau_1$ and their coordinates by \begin{equation*} A(x_A,y_A,0),\quad B(x_B,y_B,0),\quad C(x_C,y_C,0),\quad\text{and}\quad D(x_D,y_D,1). \end{equation*} We denote the neighbor facets of $\tau_0$ and $\tau_1$ by $\tau_2,\tau_3,\tau_4$. The unique primitive vectors perpendicular to $\tau_i$; $i=0,\ldots,4$; will be denoted by \begin{equation*} v_0(a_0,b_0,c_0),\quad v_1(a_1,b_1,c_1),\quad v_2(a_2,b_2,c_2),\quad v_3(a_3,b_3,c_3),\quad v_4(0,0,1), \end{equation*} respectively. In this way the affine supports of these facets have equations of the form \begin{alignat*}{2} \aff(\tau_i)&\leftrightarrow a_ix+b_iy+c_i&&z=m_i;\qquad i=0,\ldots,3;\\ \aff(\tau_4)&\leftrightarrow &&z=0; \end{alignat*} and we associate to them the numerical data \begin{alignat*}{2} (m_i,\sigma_i)&=(m(v_i),\sigma(v_i))&&=(a_ix_D+b_iy_D+c_i,a_i+b_i+c_i);\qquad i=0,\ldots,3;\\ (m_4,\sigma_4)&=(m(v_4),\sigma(v_4))&&=(0,1). \end{alignat*} We assume that $\tau_0$ and $\tau_1$ both contribute to the candidate pole $s_0$. 
With the present notations, this is, we assume that $p^{\sigma_0+m_0s_0}=p^{\sigma_1+m_1s_0}=1$, or equivalently, \begin{align*} \Re(s_0)&=-\frac{\sigma_0}{m_0}=-\frac{\sigma_1}{m_1}=-\frac{a_0+b_0+c_0}{a_0x_D+b_0y_D+c_0}=-\frac{a_1+b_1+c_1}{a_1x_D+b_1y_D+c_1}\qquad\text{and}\\ \Im(s_0)&=\frac{2n\pi}{\gcd(m_0,m_1)\log p}\qquad\text{for some $n\in\Z$.} \end{align*} \begin{figure} \centering \psset{unit=.03458213256\textwidth \begin{pspicture}(-8.25,-3.7)(9.1,9.7 {\footnotesize \psset{linecolor=gray,linewidth=.3pt,linejoin=1,linestyle=dashed,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(10,0,0)(0,10,0)\pstThreeDLine(0,10,0)(0,0,10)\pstThreeDLine(0,0,10)(10,0,0) } \psset{linecolor=black,linewidth=.7pt,linejoin=1,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,10)(1.17,2.65,6.18 \pstThreeDLine(0,0,10)(2.63,1.58,5.79 \pstThreeDLine(.573,6.87,2.58)(1.17,2.65,6.18)(2.63,1.58,5.79)(8.15,1.48,.370 } \psset{linecolor=black,linewidth=.7pt,linejoin=1,linestyle=dashed,fillcolor=lightgray,fillstyle=none} \pstThreeDLine(0,0,10)(8.15,1.48,.370 \pstThreeDLine(0,0,10)(.573,6.87,2.58 \pstThreeDLine(8.15,1.48,.370)(.573,6.87,2.58 \pstThreeDLine(2.63,1.58,5.79)(.573,6.87,2.58 } \psset{labelsep=3.8pt} \uput[0](4.45,-.4){\psframebox*[framesep=0.4pt,framearc=0]{$\Delta_{\tau_3}$}} } \psset{labelsep=3.0pt} \rput(-6.1,-3.0){\psframebox*[framesep=0.6pt,framearc=1]{\phantom{$\Delta$}}} \uput[180](-4.7,-3.1){$\Delta_{\tau_2}$} } \rput(1.94,3.72){\footnotesize$\delta_A$} \rput(.17,4.9){\psframebox*[framesep=0.3pt,framearc=1]{\footnotesize$\Delta_B$}} \rput(-1.43,3.5){\footnotesize$\delta_C$} \rput(1.48,2.6){\footnotesize$\delta_1$} \rput(1.85,1.55){\psframebox*[framesep=0.3pt,framearc=.3]{\footnotesize$\delta_2$}} \rput(-.3,.4){\footnotesize$\delta_3$} \rput[br](6.55,-3.35){\footnotesize\gray$\{x+y+z=1\}\cap\Rplus^3$} \pstThreeDNode(1.17,2.7,6.18){dt0} \pstThreeDNode(2.63,1.58,5.79){dt1} \pstThreeDNode(.585,1.32,8.10){AB} \pstThreeDNode(.870,4.76,4.38){AD} \pstThreeDNode(1.32,.790,7.90){BC} \pstThreeDNode(1.90,2.12,6.00){BD} \pstThreeDNode(5.40,1.53,3.08){CD} \rput[Bl](-8.3,3.075){\rnode{CDlabel}{$\Delta_{[CD]}$}} \rput[Bl](-8.3,9.075){\rnode{dt1label}{$\Delta_{\tau_1}$}} \rput[Bl](-4.85,9.075){\rnode{BClabel}{$\Delta_{[BC]}$}} \rput[B](0,9.075){$\Delta_{\tau_4}$} \rput[Bl](2.85,9.075){\rnode{ABlabel}{$\Delta_{[AB]}$}} \rput[Bl](7.1,9.075){\rnode{dt0label}{$\Delta_{\tau_0}$}} \rput[Bl](7.1,6.075){\rnode{BDlabel}{$\Delta_{[BD]}$}} \rput[Bl](7.1,3.075){\rnode{ADlabel}{$\Delta_{[AD]}$}} \ncline[linewidth=.3pt,nodesepB=2pt,nodesepA=1pt]{->}{dt0label}{dt0} \ncline[linewidth=.3pt,nodesepB=2.5pt,nodesepA=2pt]{->}{dt1label}{dt1} \ncline[linewidth=.3pt,nodesepB=2pt,nodesepA=1.5pt]{->}{ABlabel}{AB} \ncline[linewidth=.3pt,nodesepB=2.5pt,nodesepA=1pt]{->}{ADlabel}{AD} \ncline[linewidth=.3pt,nodesepB=2pt,nodesepA=1pt]{->}{BClabel}{BC} \nccurve[linewidth=.3pt,nodesepB=2pt,nodesepA=1pt,angleA=217,angleB=-37]{->}{BDlabel}{BD} \ncline[linewidth=.3pt,nodesepB=2pt,nodesepA=2.5pt]{->}{CDlabel}{CD} } \end{pspicture} \caption{Sketch of the intersection of the contributing cones with the plane $\{x+y+z=1\}$} \label{figcase3detail} \end{figure} Throughout this section we will consider the following thirteen simplicial cones: \begin{align*} \dA&=\cone(v_0,v_3,v_4),&\DAB&=\cone(v_0,v_4),&\Dtnul&=\cone(v_0),\\ \DB&=\cone(v_0,v_1,v_4),&\DBC&=\cone(v_1,v_4),&\Dteen&=\cone(v_1).\\ \dC&=\cone(v_1,v_2,v_4),&\DAD&=\cone(v_0,v_3),&&\\ \delta_1&=\cone(v_0,v_1,v_3),&\DBD&=\cone(v_0,v_1),&&\\ \delta_2&=\cone(v_1,v_3),&\DCD&=\cone(v_1,v_2),&&\\ 
\delta_3&=\cone(v_1,v_2,v_3),&&&& \end{align*} The $\Delta_{\tau}$ listed above are the simplicial cones associated to the faces $\tau$. The cones $\Delta_A,\Delta_C,\Delta_D$, associated to the respective vertices $A,C,D$, are generally not simplicial. Later in this section we will consider simplicial subdivisions (without creating new rays) of $\Delta_A,\Delta_C,$ and $\Delta_D$ that include $\{\dA\},\{\dC\},$ and $\{\delta_1,\delta_2,\delta_3\}$, respectively (cfr.\ Figure~\ref{figcase3detail}). Finally, let us fix notations for the vectors along the edges of $\tau_0$ and $\tau_1$: \begin{alignat*}{8} &\overrightarrow{AD}&&(x_D&&-x_A&&,y_D&&-y_A&&,1&&)&&=(\aA,\bA,1),\\ &\overrightarrow{BD}&&(x_D&&-x_B&&,y_D&&-y_B&&,1&&)&&=(\aB,\bB,1),\\ &\overrightarrow{CD}&&(x_D&&-x_C&&,y_D&&-y_C&&,1&&)&&=(\aC,\bC,1),\\ &\overrightarrow{AB}&&(x_B&&-x_A&&,y_B&&-y_A&&,0&&)&&=(\aA-\aB,\bA-\bB,0),\\ &\overrightarrow{BC}&&(x_C&&-x_B&&,y_C&&-y_B&&,0&&)&&=(\aB-\aC,\bB-\bC,0). \end{alignat*} The first three vectors are primitive; the last two are generally not. We put \begin{equation*} \fAB=\gcd(x_B-x_A,y_B-y_A)\qquad\text{and}\qquad\fBC=\gcd(x_C-x_B,y_C-y_B). \end{equation*} \subsection{Some relations between the variables}\label{srelbettvarcasdrie} In the same way as in Case~I we obtain that \begin{alignat*}{4} \begin{pmatrix} c_0\\c_3 \end{pmatrix} &=-\aA&& \begin{pmatrix} a_0\\a_3 \end{pmatrix} &&-\bA&& \begin{pmatrix} b_0\\b_3 \end{pmatrix},\\ \begin{pmatrix} c_0\\c_1 \end{pmatrix} &=-\aB&& \begin{pmatrix} a_0\\a_1 \end{pmatrix} &&-\bB&& \begin{pmatrix} b_0\\b_1 \end{pmatrix},\\ \begin{pmatrix} c_1\\c_2 \end{pmatrix} &=-\aC&& \begin{pmatrix} a_1\\a_2 \end{pmatrix} &&-\bC&& \begin{pmatrix} b_1\\b_2 \end{pmatrix}. \end{alignat*} A first consequence is that \begin{equation*} \gcd(a_i,b_i,c_i)=\gcd(a_i,b_i)=1;\qquad i=0,\ldots,3. \end{equation*} As a second consequence, we have \begin{alignat*}{4} \begin{vmatrix} a_0&c_0\\a_3&c_3 \end{vmatrix} &=-\bA&& \begin{vmatrix} a_0&b_0\\a_3&b_3 \end{vmatrix}, &\qquad\qquad\quad \begin{vmatrix} b_0&c_0\\b_3&c_3 \end{vmatrix} &=\aA&& \begin{vmatrix} a_0&b_0\\a_3&b_3 \end{vmatrix}, \\ \begin{vmatrix} a_0&c_0\\a_1&c_1 \end{vmatrix} &=-\bB&& \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix}, & \begin{vmatrix} b_0&c_0\\b_1&c_1 \end{vmatrix} &=\aB&& \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix}, \\ \begin{vmatrix} a_1&c_1\\a_2&c_2 \end{vmatrix} &=-\bC&& \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix}, & \begin{vmatrix} b_1&c_1\\b_2&c_2 \end{vmatrix} &=\aC&& \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix}. \end{alignat*} In the calculations that will follow it is often convenient (or necessary) to know the signs of certain determinants. 
Coordinate system orientation considerations show that \begin{align} \begin{vmatrix} a_0&b_0\\a_3&b_3 \end{vmatrix} &>0,& \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix} &<0,& \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix} &<0,\notag\\ \Psi= \begin{vmatrix} a_1&b_1\\a_3&b_3 \end{vmatrix} &>0,&-\Omega= \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix} &<0,&\Theta= \begin{vmatrix} a_2&b_2\\a_3&b_3 \end{vmatrix} &>0,\label{defPOT}\\ \begin{vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_3&b_3&c_3 \end{vmatrix} &>0,& \begin{vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{vmatrix} &>0,& \begin{vmatrix} a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3 \end{vmatrix} &>0.\notag \end{align} \subsection{Igusa's local zeta function} Since $f$ is non-degenerated over \Fp\ with respect to all the compact faces of its Newton polyhedron \Gf, by Theorem~\ref{formdenhoor} the local Igusa zeta function \Zof\ of $f$ is given by \begin{equation}\label{deflvilzfvmd} \Zof=\sum_{\substack{\tau\mathrm{\ compact}\\\mathrm{face\ of\ }\Gf}}L_{\tau}S(\Dtu), \end{equation} with \begin{gather*} L_{\tau}:s\mapsto L_{\tau}(s)=\left(\frac{p-1}{p}\right)^3-\frac{N_{\tau}}{p^2}\frac{p^s-1}{p^{s+1}-1},\\ N_{\tau}=\#\left\{(x,y,z)\in(\Fpcross)^3\;\middle\vert\;\fbart(x,y,z)=0\right\}, \end{gather*} and \begin{align} S(\Dtu):s\mapsto S(\Dtu)(s)&=\sum_{k\in\Z^3\cap\Delta_{\tau}}p^{-\sigma(k)-m(k)s}\notag\\ &=\sum_{i\in I}\frac{\Sigma(\delta_i)(s)}{\prod_{j\in J_i}(p^{\sigma(w_j)+m(w_j)s}-1)}.\label{deflvilzfvmdbis} \end{align} Here $\{\delta_i\}_{i\in I}$ denotes a simplicial decomposition without introducing new rays of the cone $\Delta_{\tau}$ associated to $\tau$. The simplicial cone $\delta_i$ is supposed to be strictly positively spanned by the linearly independent primitive vectors $w_j$, $j\in J_i$, in $\Zplusn\setminus\{0\}$, and $\Sigma(\delta_i)$ is the function \begin{equation*} \Sigma(\delta_i):s\mapsto \Sigma(\delta_i)(s)=\sum_hp^{\sigma(h)+m(h)s}, \end{equation*} where $h$ runs through the elements of the set \begin{equation*} H(w_j)_{j\in J_i}=\Z^3\cap\lozenge(w_j)_{j\in J_i}, \end{equation*} with \begin{equation*} \lozenge(w_j)_{j\in J_i}=\left\{\sum\nolimits_{j\in J_i}h_jw_j\;\middle\vert\;h_j\in[0,1)\text{ for all }j\in J_i\right\} \end{equation*} the fundamental parallelepiped spanned by the vectors $w_j$, $j\in J_i$. \subsection{The candidate pole $s_0$ and its residues} We want to prove that $s_0$ is not a pole of \Zof. Since $s_0$ is a candidate pole of expected order two (and therefore is a pole of actual order at most two), it is enough to prove that the coefficients $a_{-2}$ and $a_{-1}$ in the Laurent series \begin{equation*} \Zof(s)=\sum_{k=-2}^{\infty}a_k(s-s_0)^k \end{equation*} of \Zof\ centered at $s_0$, both equal zero. These coefficients are given by \begin{align*} a_{-2}&=\lim_{s\to s_0}(s-s_0)^2\Zof(s)\qquad\text{and}\\ a_{-1}=\Res(\Zof,s_0)&=\lim_{s\to s_0}\frac{d}{ds}\left[(s-s_0)^2\Zof(s)\right]. \end{align*} Alternatively (and consequently), it is sufficient to show that \begin{align*} R_2&=\lim_{s\to s_0}\left(p^{\sigma_0+m_0s}-1\right)\left(p^{\sigma_1+m_1s}-1\right)\Zof(s)\\ &=(\log p)^2m_0m_1a_{-2}\\\shortintertext{and} R_1&=\lim_{s\to s_0}\frac{d}{ds}\left[\left(p^{\sigma_0+m_0s}-1\right)\left(p^{\sigma_1+m_1s}-1\right)\Zof(s)\right]\\ &=(\log p)^2m_0m_1a_{-1}+\frac{1}{2}(\log p)^3m_0m_1(m_0+m_1)a_{-2} \end{align*} both vanish. We will in the rest of this section prove that $R_2=R_1=0$. 
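For completeness, let us indicate where the last two identities come from. Writing $u=s-s_0$ and using $p^{\sigma_i+m_is_0}=1$ for $i=0,1$, we have
\begin{equation*}
p^{\sigma_i+m_is}-1=p^{m_iu}-1=(\log p)m_iu+\tfrac{1}{2}(\log p)^2m_i^2u^2+O(u^3),
\end{equation*}
so that
\begin{equation*}
\left(p^{\sigma_0+m_0s}-1\right)\left(p^{\sigma_1+m_1s}-1\right)=(\log p)^2m_0m_1u^2+\tfrac{1}{2}(\log p)^3m_0m_1(m_0+m_1)u^3+O(u^4).
\end{equation*}
Multiplying this with the Laurent series $\Zof(s)=a_{-2}u^{-2}+a_{-1}u^{-1}+O(1)$ and letting $u\to0$, respectively first differentiating with respect to $s$ and then letting $u\to0$, yields the two expressions for $R_2$ and $R_1$ given above.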
\subsection{Terms contributing to $R_2$ and $R_1$}
We intend to calculate $R_2$ and $R_1$ based on Formula~\eqref{deflvilzfvmd} for \Zof. Precisely $11$ compact faces of \Gf\ contribute to the candidate pole $s_0$. These are the subfaces $A,B,C,D,[AB],[BC],[AD],[BD],[CD],\tau_0,$ and $\tau_1$ of the two compact facets $\tau_0$ and $\tau_1$ that have $s_0$ as an associated candidate pole. Only the terms of \eqref{deflvilzfvmd} associated to these faces need to be taken into account in the calculation of $R_1$. The other terms do not have $s_0$ as a pole and therefore do not contribute to the limit $R_1$.
Vertex $B$ is only contained in the facets $\tau_0,\tau_1,$ and $\tau_4$; hence its associated cone $\Delta_B$ is simplicial. The cones associated to the other vertices $A,C,$ and $D$ are generally not simplicial. To deal with $S_A$ and $S_C$, we will, just as in Case~I, consider simplicial decompositions of $\Delta_A$ and $\Delta_C$ that contain $\delta_A$ and $\delta_C$, respectively. Terms of \eqref{deflvilzfvmdbis} associated to cones other than $\delta_A$ and $\delta_C$ in these decompositions do not have a pole in $s_0$, hence do not contribute to $R_1$. Vertex $D$ is contained in at least four facets. We shall consider a decomposition of $\Delta_D$ into simplicial cones among which are $\delta_1,\delta_2,$ and $\delta_3$. Only the terms associated to these three cones should be taken into account when calculating $R_1$. This makes a total of $13$ terms contributing to $R_1$ (three coming from $D$ and one for every other contributing face).
The limit $R_2$ counts fewer contributions: the only terms of \eqref{deflvilzfvmd} and \eqref{deflvilzfvmdbis} that need to be considered are the ones that have a double pole in $s_0$. These are the terms associated to $B,[BD],$ and $\delta_1$. All other terms have at most a single pole in $s_0$ and do not contribute to $R_2$.
Let us write down these contributions explicitly. For $R_2$ we obtain
\begin{equation*} R_2=L_B(s_0)\frac{\Sigma(\Delta_B)(s_0)}{p-1}+L_D(s_0)\frac{\Sigma(\delta_1)(s_0)}{p^{\sigma_3+m_3s_0}-1}+L_{[BD]}(s_0)\Sigma(\Delta_{[BD]})(s_0). \end{equation*}
The thirteen terms making up $R_1$ are
\begin{multline*} R_1=\dds{L_A(s)\frac{\Feens\Sigma(\delta_A)(s)}{\Fdries(p-1)}} +\dds{L_B(s)\frac{\Sigma(\Delta_B)(s)}{p-1}}\\ +\dds{L_C(s)\frac{\Fnuls\Sigma(\delta_C)(s)}{\Ftwees(p-1)}} +\dds{L_D(s)\frac{\Sigma(\delta_1)(s)}{p^{\sigma_3+m_3s}-1}}\\ +\dds{L_D(s)\frac{\Fnuls\Sigma(\delta_2)(s)}{p^{\sigma_3+m_3s}-1}}\\ +\dds{L_D(s)\frac{\Fnuls\Sigma(\delta_3)(s)}{\Ftwees\Fdries}}\\ +\dds{L_{[AB]}(s)\frac{\Feens\Sigma(\Delta_{[AB]})(s)}{p-1}}\\ +\dds{L_{[BC]}(s)\frac{\Fnuls\Sigma(\Delta_{[BC]})(s)}{p-1}}\\ +\dds{L_{[AD]}(s)\frac{\Feens\Sigma(\Delta_{[AD]})(s)}{p^{\sigma_3+m_3s}-1}}\\ +\dds{L_{[BD]}(s)\Sigma(\Delta_{[BD]})(s)}\\ +\dds{L_{[CD]}(s)\frac{\Fnuls\Sigma(\Delta_{[CD]})(s)}{p^{\sigma_2+m_2s}-1}}\\ +\dds{L_{\tau_0}(s)\Feens\Sigma(\Delta_{\tau_0})(s)}\\ +\dds{L_{\tau_1}(s)\Fnuls\Sigma(\Delta_{\tau_1})(s)}.
\end{multline*} After simplification, $R_1$ is given by \begin{multline*} R_1=L_A(s_0)\frac{m_1(\log p)\Sigma(\delta_A)(s_0)}{\Fdrie(p-1)} +L_B'(s_0)\frac{\Sigma(\Delta_B)(s_0)}{p-1} +L_B(s_0)\frac{\Sigma(\Delta_B)'(s_0)}{p-1}\\ +L_C(s_0)\frac{m_0(\log p)\Sigma(\delta_C)(s_0)}{\Ftwee(p-1)} +L_D'(s_0)\frac{\Sigma(\delta_1)(s_0)}{p^{\sigma_3+m_3s_0}-1} +L_D(s_0)\frac{\Sigma(\delta_1)'(s_0)}{p^{\sigma_3+m_3s_0}-1}\\ -L_D(s_0)\frac{m_3(\log p)p^{\sigma_3+m_3s_0}\Sigma(\delta_1)(s_0)}{\Fdrie^2} +L_D(s_0)\frac{m_0(\log p)\Sigma(\delta_2)(s_0)}{p^{\sigma_3+m_3s_0}-1}\\ +L_D(s_0)\frac{m_0(\log p)\Sigma(\delta_3)(s_0)}{\Ftwee\Fdrie}\\ +L_{[AB]}(s_0)\frac{m_1(\log p)\Sigma(\Delta_{[AB]})(s_0)}{p-1} +L_{[BC]}(s_0)\frac{m_0(\log p)\Sigma(\Delta_{[BC]})(s_0)}{p-1}\\ +L_{[AD]}(s_0)\frac{m_1(\log p)\Sigma(\Delta_{[AD]})(s_0)}{p^{\sigma_3+m_3s_0}-1} +L_{[BD]}'(s_0)\Sigma(\Delta_{[BD]})(s_0)\\ +L_{[BD]}(s_0)\Sigma(\Delta_{[BD]})'(s_0) +L_{[CD]}(s_0)\frac{m_0(\log p)\Sigma(\Delta_{[CD]})(s_0)}{p^{\sigma_2+m_2s_0}-1}\\ +L_{\tau_0}(s_0)m_1(\log p)\Sigma(\Delta_{\tau_0})(s_0) +L_{\tau_1}(s_0)m_0(\log p)\Sigma(\Delta_{\tau_1})(s_0). \end{multline*} \subsection{The numbers $N_{\tau}$} Analogously to Case~I we obtain \begin{gather*} N_A=N_B=N_C=N_D=0,\\ N_{[AB]}=(p-1)N_0,\qquad N_{[BC]}=(p-1)N_1,\\ N_{[AD]}=N_{[BD]}=N_{[CD]}=(p-1)^2,\\ N_{\tau_0}=(p-1)^2-N_0,\qquad N_{\tau_1}=(p-1)^2-N_1, \end{gather*} with \begin{align*} N_0&=\#\left\{(x,y)\in(\Fpcross)^2\;\middle\vert\;\overline{f_{[AB]}}(x,y)=0\right\},\\ N_1&=\#\left\{(x,y)\in(\Fpcross)^2\;\middle\vert\;\overline{f_{[BC]}}(x,y)=0\right\}. \end{align*} \subsection{The factors $L_{\tau}(s_0)$ and $L_{\tau}'(s_0)$} For the $L_{\tau}(s_0)$ we obtain \begin{gather*} L_A(s_0)=L_B(s_0)=L_C(s_0)=L_D(s_0)=\left(\frac{p-1}{p}\right)^3,\\ \begin{alignedat}{3} L_{[AB]}(s_0)&=\left(\frac{p-1}{p}\right)^3&&-\frac{(p-1)N_0}{p^2}&&\frac{p^{s_0}-1}{p^{s_0+1}-1},\\ L_{[BC]}(s_0)&=\left(\frac{p-1}{p}\right)^3&&-\frac{(p-1)N_1}{p^2}&&\frac{p^{s_0}-1}{p^{s_0+1}-1}, \end{alignedat}\\ L_{[AD]}(s_0)=L_{[BD]}(s_0)=L_{[CD]}(s_0)=\left(\frac{p-1}{p}\right)^3-\left(\frac{p-1}{p}\right)^2\frac{p^{s_0}-1}{p^{s_0+1}-1},\\ \begin{alignedat}{3} L_{\tau_0}(s_0)&=\left(\frac{p-1}{p}\right)^3&&-\frac{(p-1)^2-N_0}{p^2}&&\frac{p^{s_0}-1}{p^{s_0+1}-1},\\ L_{\tau_1}(s_0)&=\left(\frac{p-1}{p}\right)^3&&-\frac{(p-1)^2-N_1}{p^2}&&\frac{p^{s_0}-1}{p^{s_0+1}-1}, \end{alignedat} \end{gather*} while the $L_{\tau}'(s_0)$ are given by \begin{gather*} L_B'(s_0)=L_D'(s_0)=0,\\ L_{[BD]}'(s_0)=-(\log p)\left(\frac{p-1}{p}\right)^3\frac{p^{s_0+1}}{\bigl(p^{s_0+1}-1\bigr)^2}. 
\end{gather*} \subsection{Multiplicities of the relevant simplicial cones} Based on Proposition~\ref{multipliciteit} and the relations obtained in Subsection~\ref{srelbettvarcasdrie}, we have, analogously to Case~I, that \begin{gather*} \mult\Delta_{[AB]}=\mult\Delta_{[BC]}=\mult\Delta_{\tau_0}=\mult\Delta_{\tau_1}=1,\\ \begin{alignedat}{6} \mu_A=\mult\delta_A&=\#H(v_0,v_3,v_4)&&=\mult\Delta_{[AD]}&&=\#H(v_0,v_3)&&=&& \begin{vmatrix} a_0&b_0\\a_3&b_3 \end{vmatrix} &&>0,\\ \mu_B=\mult\Delta_B&=\#H(v_0,v_1,v_4)&&=\mult\Delta_{[BD]}&&=\#H(v_0,v_1)&&=-&& \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix} &&>0,\\ \mu_C=\mult\delta_C&=\#H(v_1,v_2,v_4)&&=\mult\Delta_{[CD]}&&=\#H(v_1,v_2)&&=-&& \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix} &&>0, \end{alignedat}\\ \mu_2=\mult\delta_2=\#H(v_1,v_3)=\gcd\left(\Psi, \begin{Vmatrix} a_1&c_1\\a_3&c_3 \end{Vmatrix}, \begin{Vmatrix} b_1&c_1\\b_3&c_3 \end{Vmatrix} \right)>0, \end{gather*} with $\Psi>0$ as in \eqref{defPOT}. Although we did not choose $\delta_1'=\cone(v_0,v_1,v_2)$ to be part of a simplicial decomposition of $\Delta_D$, we will consider its multiplicity as well. As in Case~I, we then find that \begin{alignat*}{5} \mu_1&=\mult\delta_1&&=\#H(v_0,v_1,v_3)&&= \begin{vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_3&b_3&c_3 \end{vmatrix} &&=\mu_A\mu_B\fAB&&>0\qquad\text{and}\\ \mu_1'&=\mult\delta_1'&&=\#H(v_0,v_1,v_2)&&= \begin{vmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{vmatrix} &&=\mu_B\mu_C\fBC&&>0. \end{alignat*} Finally, we will derive a more useful formula for \begin{equation*} \mu_3=\mult\delta_3=\#H(v_1,v_2,v_3)= \begin{vmatrix} a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3 \end{vmatrix}>0, \end{equation*} similar to the ones for $\mu_1$ and $\mu_1'$, in Subsection~\ref{multmudrieformule}. \subsection{The sums $\Sigma(\cdot)(s_0)$ and $\Sigma(\cdot)'(s_0)$} Since the corresponding multiplicities equal one, we find that \begin{equation*} \Sigma(\Delta_{[AB]})(s_0)=\Sigma(\Delta_{[BC]})(s_0)=\Sigma(\Delta_{\tau_0})(s_0)=\Sigma(\Delta_{\tau_1})(s_0)=1. \end{equation*} From the overview of the multiplicities, it is also clear\footnote{See Case~I for more details.} that we may put \begin{gather*} \begin{alignedat}{2} H_A&=H(v_0,v_3,v_4)&&=H(v_0,v_3),\\ H_B&=H(v_0,v_1,v_4)&&=H(v_0,v_1),\\ H_C&=H(v_1,v_2,v_4)&&=H(v_1,v_2), \end{alignedat}\\ H_1=H(v_0,v_1,v_3),\qquad H_2=H(v_1,v_3),\qquad H_3=H(v_1,v_2,v_3). \end{gather*} It follows that \begin{gather*} \begin{alignedat}{3} \Sigma_A&=\Sigma(\delta_A)(s_0)&&=\Sigma(\Delta_{[AD]})(s_0)&&=\sum\nolimits_{h\in H_A}p^{\sigma(h)+m(h)s_0};\\ \Sigma_B&=\Sigma(\Delta_B)(s_0)&&=\Sigma(\Delta_{[BD]})(s_0)&&=\sum\nolimits_{h\in H_B}p^{\sigma(h)+m(h)s_0};\\ \Sigma_C&=\Sigma(\delta_C)(s_0)&&=\Sigma(\Delta_{[CD]})(s_0)&&=\sum\nolimits_{h\in H_C}p^{\sigma(h)+m(h)s_0}; \end{alignedat}\\ \Sigma_i=\Sigma(\delta_i)(s_0)=\sum\nolimits_{h\in H_i}p^{\sigma(h)+m(h)s_0};\qquad i=1,2,3;\\ \Sigma_B'=\Sigma(\Delta_B)'(s_0)=\Sigma(\Delta_{[BD]})'(s_0)=\dds{\sum\nolimits_{h\in H_B}p^{\sigma(h)+m(h)s}};\\ \Sigma_1'=\Sigma(\delta_1)'(s_0)=\dds{\sum\nolimits_{h\in H_1}p^{\sigma(h)+m(h)s}}. \end{gather*} Let us for the rest of this section denote by $w$ the vector \begin{equation*} w=(1,1,1)+s_0(x_D,y_D,1)\in\C^3. \end{equation*} Then since $\overline{\Delta_D}$ contains all points of $H_V$; $V=A,B,C,1,2,3$; we have moreover that \begin{alignat*}{2} \Sigma_V&=\sum\nolimits_{h\in H_V}p^{w\cdot h};&\qquad V&=A,B,C,1,2,3;\\ \text{and}\qquad\Sigma_W'&=(\log p)\sum\nolimits_{h\in H_W}m(h)p^{w\cdot h};&W&=B,1. 
\end{alignat*} \subsection{Simplified formulas for $R_2$ and $R_1$} Let us put \begin{equation*} F_2=p^{w\cdot v_2}-1=p^{\sigma_2+m_2s_0}-1\qquad\text{and}\qquad F_3=p^{w\cdot v_3}-1=p^{\sigma_3+m_3s_0}-1. \end{equation*} Then, exploiting the information above on the numbers $N_{\tau}$ and the multiplicities of the cones, we obtain the following new formulas for $R_2$ and $R_1$: \begin{gather} R_2=\left(\frac{p-1}{p}\right)^3\left(\frac{\Sigma_B}{1-p^{-s_0-1}}+\frac{\Sigma_1}{F_3}\right),\label{followingnewformulaRtweecasedrie}\\ \begin{multlined}[.85\textwidth] R_1=(\log p)\left(\frac{p-1}{p}\right)^3\cdot\\ \Biggl[\frac{1}{1-p^{-s_0-1}}\left(\frac{m_1\Sigma_A}{F_3}+\frac{\Sigma_B'}{\log p}-\frac{\Sigma_B}{p^{s_0+1}-1}+\frac{m_0\Sigma_C}{F_2}+m_0+m_1\right)\\ +\frac{\Sigma_1'}{(\log p)F_3}-\frac{m_3(F_3+1)\Sigma_1}{F_3^2}+\frac{m_0\Sigma_2}{F_3}+\frac{m_0\Sigma_3}{F_2F_3}\Biggr]. \end{multlined}\label{followingnewformulaReencasedrie} \end{gather} Note that the \lq unknown\rq\ numbers $N_0$ and $N_1$ disappear from the equation. \subsection{Vector identities} We will quite often use the following identities: \begin{alignat}{6} \begin{vmatrix} a_1&b_1\\a_3&b_3 \end{vmatrix} v_0&- \begin{vmatrix} a_0&b_0\\a_3&b_3 \end{vmatrix} v_1&&+ \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix} v_3&&=&\;\Psi v_0&\;-\;&\mu_A v_1&\;-\;&\mu_B v_3&=(0,0,\mu_1),\label{vi1c3}\\ \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix} v_0&- \begin{vmatrix} a_0&b_0\\a_2&b_2 \end{vmatrix} v_1&&+ \begin{vmatrix} a_0&b_0\\a_1&b_1 \end{vmatrix} v_2&&=&\;-\mu_C v_0&\;+\;&\Omega v_1&\;-\;&\mu_B v_2&=(0,0,\mu_1'),\label{vi2c3}\\ \begin{vmatrix} a_2&b_2\\a_3&b_3 \end{vmatrix} v_1&- \begin{vmatrix} a_1&b_1\\a_3&b_3 \end{vmatrix} v_2&&+ \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix} v_3&&=&\;\Theta v_1&\;-\;&\Psi v_2&\;-\;&\mu_C v_3&=(0,0,\mu_3).\label{vi3c3} \end{alignat} Hereby $\Psi,\Omega,\Theta>0$ are as introduced in \eqref{defPOT}. As also mentioned in Case~I, these equations simply express the equalities of the last rows of the identical matrices $(\adj M)M$ and $(\det M)I$ for $M$ the respective matrices \begin{equation*} \begin{pmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_3&b_3&c_3 \end{pmatrix},\qquad \begin{pmatrix} a_0&b_0&c_0\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{pmatrix},\qquad\text{and}\qquad \begin{pmatrix} a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3 \end{pmatrix} \end{equation*} with respective determinants $\mu_1,\mu_1',$ and $\mu_3$. 
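As a purely illustrative numerical check of this mechanism (with hypothetical vectors, unrelated to the situation at hand), take $v_0(1,1,2)$, $v_1(1,2,1)$, and $v_2(1,3,4)$; then
\begin{equation*}
\begin{vmatrix} 1&2\\1&3 \end{vmatrix}v_0-\begin{vmatrix} 1&1\\1&3 \end{vmatrix}v_1+\begin{vmatrix} 1&1\\1&2 \end{vmatrix}v_2=1\,(1,1,2)-2\,(1,2,1)+1\,(1,3,4)=(0,0,4),
\end{equation*}
and $4$ is indeed the determinant of the matrix with rows $v_0,v_1,v_2$.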
Useful consequences of (\ref{vi1c3}--\ref{vi3c3}) arise from making the dot product with $w=(1,1,1)+s_0(x_D,y_D,1)$ on all sides of the equations: \begin{alignat}{3} -\Psi(w\cdot v_0)&\;+\;&\mu_A(w\cdot v_1)&\;+\;&\mu_B(w\cdot v_3)&=\mu_1(-s_0-1),\label{dpi1c3}\\ \mu_C(w\cdot v_0)&\;-\;&\Omega(w\cdot v_1)&\;+\;&\mu_B(w\cdot v_2)&=\mu_1'(-s_0-1),\label{dpi2c3}\\ -\Theta(w\cdot v_1)&\;+\;&\Psi(w\cdot v_2)&\;+\;&\mu_C(w\cdot v_3)&=\mu_3(-s_0-1).\label{dpi3c3} \end{alignat} \subsection{Points of $H_A,H_B,H_C,H_2,H_1$ and additional relations}\label{pointsofandaddrel} Based on the discussion on integral points in fundamental parallelepipeds in Section~\ref{fundpar}, we can state that the points of $H_A,H_B,H_C,$ and $H_2$ are given by \begin{alignat}{2} \left\{\frac{i\xi_A}{\mu_A}\right\}v_0&+\frac{i}{\mu_A}v_3;&\qquad&i=0,\ldots,\mu_A-1;\label{pofAcasedrie}\\\shortintertext{by} \left\{\frac{j\xi_B}{\mu_B}\right\}v_0&+\frac{j}{\mu_B}v_1;&&j=0,\ldots,\mu_B-1;\label{pofBcasedrie}\\\shortintertext{by} \left\{\frac{i\xi_C}{\mu_C}\right\}v_1&+\frac{i}{\mu_C}v_2;&&i=0,\ldots,\mu_C-1;\label{pofCcasedrie}\\\shortintertext{and by} \left\{\frac{j\xi_2}{\mu_2}\right\}v_1&+\frac{j}{\mu_2}v_3;&&j=0,\ldots,\mu_2-1;\label{pof2casedrie} \end{alignat} respectively. Here $\xi_A$ denotes the unique element $\xi_A\in\verA$ such that $\xi_Av_0+v_3$ belongs to $\mu_A\Z^3$. It follows that $\xi_A$ is coprime to $\mu_A$. (Analogously for $\xi_B,\xi_C,$ and $\xi_2$.) In exactly the same way as we did in Case~I for the points of $H_C$ (cfr.\ Sub\-sec\-tion~\ref{descpointsHCgevaleen}), we obtain that the $\mu_1=\mu_A\mu_B\fAB$ points of $H_1=H(v_0,v_1,v_3)$ are precisely \begin{multline}\label{pof1casedrie} \left\{\frac{i\xi_A\mu_B\fAB+j\xi_B\mu_A\fAB-k\Psi}{\mu_1}\right\}v_0+\frac{j\fAB+k}{\mu_B\fAB}v_1+\frac{i\fAB+k}{\mu_A\fAB}v_3;\\ i=0,\ldots,\mu_A-1;\quad j=0,\ldots,\mu_B-1;\quad k=0,\ldots,\fAB-1. \end{multline} On the other hand, we also know from Section~\ref{fundpar} that $\mu_2\mid\mu_1$ and that when $h$ runs through the elements of $H_1$, its $v_0$-coordinate $h_0$ runs precisely $\mu_2$ times through the numbers \begin{equation*} \frac{l\mu_2}{\mu_1};\qquad l=0,\ldots,\frac{\mu_1}{\mu_2}-1. \end{equation*} This implies that\footnote{First, note that in the left-hand side of the equation, the inner curly brackets denote the reduction of the argument modulo $\mu_1$ (cfr.\ Notation~\ref{notatiemodulo}), while the outer curly brackets serve as set delimiters. Secondly, recall that the maps $i\mapsto\{i\xi_A\}_{\mu_A}$ and $j\mapsto\{j\xi_B\}_{\mu_B}$ are permutations of \verA\ and \verB, respectively, so that after reordering the elements of the set, we can indeed omit the $\xi_A$ and $\xi_B$ from the equation.} \begin{equation*} \bigl\{\{i\mu_B\fAB+j\mu_A\fAB-k\Psi\}_{\mu_1}\bigr\}_{i,j,k=0}^{\mu_A-1,\mu_B-1,\fAB-1}=\bigl\{l\mu_2\bigr\}_{l=0}^{\mu_1/\mu_2-1}. \end{equation*} From this equality of sets, we easily conclude that \begin{equation*} \mu_2\mid\mu_A\fAB,\mu_B\fAB,\Psi, \end{equation*} what we already knew\footnote{The fact that $\mu_2\mid\mu_A\fAB,\mu_B\fAB$ was shown in several ways in the proof of Theorem~\ref{algfp}(v), while it follows from Proposition~\ref{multipliciteit} that $\mu_2=\gcd\left(\Psi,\bigl\lVert\begin{smallmatrix}a_1&c_1\\a_3&c_3\end{smallmatrix}\bigr\rVert,\bigl\lVert\begin{smallmatrix}b_1&c_1\\b_3&c_3\end{smallmatrix}\bigr\rVert\right)$.}, but also that $\mu_2$ can be written as a linear combination with integer coefficients of $\mu_A\fAB,\mu_B\fAB,$ and $\Psi$. 
Hence \begin{equation*} \mu_2=\gcd(\mu_A\fAB,\mu_B\fAB,\Psi). \end{equation*} Recall from \eqref{vi1c3} that \begin{equation*} \Psi v_0-\mu_A v_1-\mu_B v_3=(0,0,\mu_1). \end{equation*} If we put $\gamma=\gcd(\mu_A\fAB,\Psi)$, we have $\gamma\mid\mu_1$, and therefore it follows that \begin{equation*} \mu_B\fAB v_3=\Psi\fAB v_0-\mu_A\fAB v_1-(0,0,\fAB\mu_1)\in\gamma\Z^3. \end{equation*} The primitivity of $v_3$ now implies that $\gamma\mid\mu_B\fAB$, and thus we obtain that \begin{align*} \mu_2&=\gcd(\mu_A\fAB,\mu_B\fAB,\Psi)\\ &=\gcd(\mu_A\fAB,\Psi)\\ &=\gcd(\mu_B\fAB,\Psi), \end{align*} the last equality due to the symmetry of the argument above. Finally, let us denote \begin{equation}\label{consnotc3} \begin{gathered} \frac{\mu_A\fAB}{\mu_2}=\frac{\mu_1}{\mu_B\mu_2}=\fBtwee\in\Zplusnul,\qquad \frac{\mu_B\fAB}{\mu_2}=\frac{\mu_1}{\mu_A\mu_2}=\fAtwee\in\Zplusnul,\\ \text{and}\qquad\frac{\Psi}{\mu_2}=\psi\in\Zplusnul, \end{gathered} \end{equation} resulting in \begin{equation*} \mu_1=\mu_A\mu_B\fAB=\mu_A\mu_2\fAtwee=\mu_B\mu_2\fBtwee. \end{equation*} \subsection{Investigation of the $\Sigma_{\bullet}$ and the $\Sigma_{\bullet}'$, except for $\Sigma_1'$, $\Sigma_3$} \subsubsection{The sum $\Sigma_B$}\label{sssSigmaBc3} Because in this case $\tau_0$ and $\tau_1$ both contribute to $s_0$, and therefore $p^{w\cdot v_0}=p^{w\cdot v_1}=1$, the term $\Sigma_B$ plays a special role. By \eqref{pofBcasedrie} and the fact that $p^{a(w\cdot v_0)}=p^{\{a\}(w\cdot v_0)}$ for every $a\in\R$, we have \begin{equation*} \Sigma_B=\sum_{h\in H_B}p^{w\cdot h}=\sum_{j=0}^{\mu_B-1}\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^j=\sum_{j=0}^{\mu_B-1}\Bigl(p^{w\cdot h^{\ast}}\Bigr)^j, \end{equation*} with \begin{equation*} h^{\ast}=\frac{\xi_B}{\mu_B}v_0+\frac{1}{\mu_B}v_1\in\Z^3, \end{equation*} a generating element of $H_B$ (if $\mu_B>1$). Unlike, e.g., $p^{(\xi_A(w\cdot v_0)+w\cdot v_3)/\mu_A}$, appearing in Formula~\eqref{formSigAalgc3} for $\Sigma_A$ below, the $\mu_B$th root of unity $p^{w\cdot h^{\ast}}=p^{(\xi_B(w\cdot v_0)+w\cdot v_1)/\mu_B}$ may equal one, but may as well differ from one. We need to distinguish between these two cases. As \begin{equation*} s_0=-\frac{\sigma_0}{m_0}+\frac{2n\pi i}{\gcd(m_0,m_1)\log p}=-\frac{\sigma_1}{m_1}+\frac{2n\pi i}{\gcd(m_0,m_1)\log p} \end{equation*} for some $n\in\Z$ and hence \begin{equation*} p^{w\cdot h^{\ast}}=p^{\sigma(h^{\ast})+m(h^{\ast})s_0}=\exp\frac{2nm(h^{\ast})\pi i}{\gcd(m_0,m_1)}, \end{equation*} we see that $p^{w\cdot h^{\ast}}=1$ if and only if \begin{equation*} n^{\ast}=\frac{\gcd(m_0,m_1)}{\gcd(m_0,m_1,m(h^{\ast}))}\mid n. \end{equation*} In this way we find \begin{equation}\label{formSigBc3} \Sigma_B=\sum_{j=0}^{\mu_B-1}\Bigl(p^{w\cdot h^{\ast}}\Bigr)^j= \begin{cases} {\displaystyle\sum\nolimits_j1=\mu_B,}&\text{if $n^{\ast}\mid n$;}\\[+2ex] {\displaystyle\frac{\left(p^{w\cdot h^{\ast}}\right)^{\mu_B}-1}{p^{w\cdot h^{\ast}}-1}=0,}&\text{otherwise.} \end{cases} \end{equation} Let us next look at $\Sigma_B'$. \subsubsection{The sum $\Sigma_B'$}\label{sssSigmaBaccentc3} As we know, the $\mu_B$ points of $H_B$ are given by \begin{equation*} \left\{\frac{j\xi_B}{\mu_B}\right\}v_0+\frac{j}{\mu_B}v_1;\qquad j=0,\ldots,\mu_B-1; \end{equation*} but if $\xi_B'$ denotes the unique element $\xi_B'\in\verB$ such that $\xi_B\xi_B'\equiv1\mod\mu_B$, they are as well given by \begin{equation*} \frac{j}{\mu_B}v_0+\left\{\frac{j\xi_B'}{\mu_B}\right\}v_1;\qquad j=0,\ldots,\mu_B-1. 
\end{equation*} Recall that we introduced $\Sigma_B'$ as \begin{align*} \Sigma_B'&=\Sigma(\DB)'(s_0)=\Sigma(\DBD)'(s_0)\\ &=\dds{\sum_{h\in H_B}p^{\sigma(h)+m(h)s}}\\ &=(\log p)\sum_{h\in H_B}m(h)p^{\sigma(h)+m(h)s_0}\\ &=(\log p)\sum_hm(h)p^{w\cdot h}. \end{align*} Hence if we write $h=h_0v_0+h_1v_1$ for $h\in H_B=H(v_0,v_1)$, we find \begin{align*} \frac{\Sigma_B'}{\log p}&=m_0\sum_{h\in H_B}h_0p^{h_0(w\cdot v_0)+h_1(w\cdot v_1)} +m_1\sum_{h\in H_B}h_1p^{h_0(w\cdot v_0)+h_1(w\cdot v_1)}\\ &=\frac{m_0}{\mu_B}\sum_{j=0}^{\mu_B-1}j\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^j +\frac{m_1}{\mu_B}\sum_{j=0}^{\mu_B-1}j\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^j. \end{align*} As \begin{equation}\label{teentottander} p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}=\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^{\xi_B'}\quad\text{and}\quad\ p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}=\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{\xi_B}, \end{equation} the numbers $p^{(\xi_B(w\cdot v_0)+w\cdot v_1)/\mu_B}$ and $p^{(w\cdot v_0+\xi_B'(w\cdot v_1))/\mu_B}$ are either both one (if $n^{\ast}\mid n$) or both different from one (if $n^{\ast}\nmid n$). We obtain \begin{equation*} \frac{\Sigma_B'}{\log p}= \begin{cases} {\displaystyle\frac{m_0+m_1}{\mu_B}\sum_{j=0}^{\mu_B-1}j=\frac{(m_0+m_1)(\mu_B-1)}{2},}&\text{if $n^{\ast}\mid n$;}\\[+2.5ex] {\displaystyle\frac{m_0}{p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1}+ \frac{m_1}{p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1},}&\text{otherwise.} \end{cases} \end{equation*} \subsubsection{The sums $\Sigma_A$, $\Sigma_C$, and $\Sigma_2$}\label{sssSASCS2c3} From (\ref{pofAcasedrie}, \ref{pofCcasedrie}, \ref{pof2casedrie}) and the fact that $p^{w\cdot v_0}=p^{w\cdot v_1}=1$, we obtain in the same way as in Case~I that \begin{alignat}{4} \Sigma_A&=\sum_{h\in H_A}p^{w\cdot h} &&=\sum_{i=0}^{\mu_A-1}&&\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}\Bigr)^i &&=\frac{F_3}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}-1},\label{formSigAalgc3}\\ \Sigma_C&=\sum_{h\in H_C}p^{w\cdot h} &&=\sum_{i=0}^{\mu_C-1}&&\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}\Bigr)^i &&=\frac{F_2}{p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}-1},\qquad\text{and}\notag\\ \Sigma_2&=\sum_{h\in H_2}p^{w\cdot h} &&=\sum_{j=0}^{\mu_2-1}&&\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}\Bigr)^j &&=\frac{F_3}{p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1}.\notag \end{alignat} In Case~I we also observed that $\xi_Av_0+v_3\in\mu_A\Z^3$, $\xi_Bv_0+v_1\in\mu_B\Z^3$, and \eqref{vi1c3} give rise to $(\xi_A\mu_B+\xi_B\mu_A+\Psi)v_0\in\mu_A\mu_B\Z^3$ and hence to \begin{equation}\label{inZetidABc3} \frac{\xi_A\mu_B+\xi_B\mu_A+\Psi}{\mu_A\mu_B}\in\Z. \end{equation} Using \eqref{dpi1c3} it follows that \begin{multline}\label{sigasigbidcasedrie} p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}} =p^{\frac{(\xi_A\mu_B+\xi_B\mu_A)(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_3)}{\mu_A\mu_B}}\\ =p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_3)}{\mu_A\mu_B}}=p^{\fAB(-s_0-1)}. 
\end{multline} Analogously, $v_0+\xi_B'v_1\in\mu_B\Z^3$, $\xi_Cv_1+v_2\in\mu_C\Z^3$, \eqref{vi2c3}, and \eqref{dpi2c3} yield \begin{multline}\label{sigcsigbidcasedrie} p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}} =p^{\frac{\mu_C(w\cdot v_0)+(\xi_B'\mu_C+\xi_C\mu_B)(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_B\mu_C}}\\ =p^{\frac{\mu_C(w\cdot v_0)-\Omega(w\cdot v_1)+\mu_B(w\cdot v_2)}{\mu_B\mu_C}}=p^{\fBC(-s_0-1)}, \end{multline} while $v_0+\xi_B'v_1\in\mu_B\Z^3$, $\xi_2v_1+v_3\in\mu_2\Z^3$, \eqref{vi1c3}, \eqref{dpi1c3}, and \eqref{consnotc3} lead to \begin{multline}\label{sig2sigbidcasedrie} \Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-\psi}p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}} =p^{\frac{-\Psi(w\cdot v_0)+(-\xi_B'\Psi+\xi_2\mu_B)(w\cdot v_1)+\mu_B(w\cdot v_3)}{\mu_B\mu_2}}\\ =p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_3)}{\mu_B\mu_2}}=p^{\fBtwee(-s_0-1)}. \end{multline} Consequently, if $n^{\ast}\mid n$ and hence $p^{(\xi_B(w\cdot v_0)+w\cdot v_1)/\mu_B}=p^{(w\cdot v_0+\xi_B'(w\cdot v_1))/\mu_B}=1$, one has that \begin{equation}\label{fSASCS2ifnsdnc3} \begin{gathered} \Sigma_A=\frac{F_3}{p^{\fAB(-s_0-1)}-1},\qquad\quad\ \qquad \Sigma_C=\frac{F_2}{p^{\fBC(-s_0-1)}-1},\\ \text{and}\qquad\Sigma_2=\frac{F_3}{p^{\fBtwee(-s_0-1)}-1}.\qquad\text{\phantom{and}} \end{gathered} \end{equation} \subsubsection{The sum $\Sigma_1$}\label{parberSig1c3} If for $h\in H_1=H(v_0,v_1,v_3)$, we denote by $(h_0,h_1,h_3)$ the coordinates of $h$ with respect to the basis $(v_0,v_1,v_3)$, then by \eqref{pof1casedrie} and $p^{w\cdot v_0}=1$ we have that \begin{align*} \Sigma_1&=\sum_{h\in H_1}p^{w\cdot h}\\ &=\sum_hp^{h_0(w\cdot v_0)+h_1(w\cdot v_1)+h_3(w\cdot v_3)}\\ &=\sum_{i=0}^{\mu_A-1}\sum_{j=0}^{\mu_B-1}\sum_{k=0}^{\fAB-1}p^{\frac{i\xi_A\mu_B\fAB+j\xi_B\mu_A\fAB-k\Psi}{\mu_1}(w\cdot v_0)+\frac{j\fAB+k}{\mu_B\fAB}(w\cdot v_1)+\frac{i\fAB+k}{\mu_A\fAB}(w\cdot v_3)}\\ &=\sum_i\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}\Bigr)^i\sum_j\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^j\sum_k\Bigl(p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_3)}{\mu_1}}\Bigr)^k\\ &=\Sigma_A\Sigma_B\,\frac{p^{\fAB(-s_0-1)}-1}{p^{-s_0-1}-1}, \end{align*} where in the last step we again used \eqref{dpi1c3}. It now follows from \eqref{formSigBc3} and \eqref{fSASCS2ifnsdnc3} that \begin{equation*} \Sigma_1= \begin{cases} {\displaystyle\frac{\mu_BF_3}{p^{-s_0-1}-1},}&\text{if $n^{\ast}\mid n$;}\\[+2ex] {\displaystyle0,}&\text{otherwise.} \end{cases} \end{equation*} \subsection{Proof of $R_2=0$ and a new formula for $R_1$} If we fill in the formulas for $\Sigma_B$ and $\Sigma_1$ in Formula~\eqref{followingnewformulaRtweecasedrie} for $R_2$, we obtain \begin{align*} R_2&=\left(\frac{p-1}{p}\right)^3\left(\frac{\Sigma_B}{1-p^{-s_0-1}}+\frac{\Sigma_1}{F_3}\right)\\ &=\left(\frac{p-1}{p}\right)^3\left(\frac{\mu_B}{1-p^{-s_0-1}}+\frac{\mu_B}{p^{-s_0-1}-1}\right)\\ &=0 \end{align*} in the case that $n^{\ast}\mid n$ and clearly the same result in the other case as well. Let us check how much progress we made on $R_1$. First of all, denote by $R_1'$ the third factor in Formula~\eqref{followingnewformulaReencasedrie} for $R_1$; i.e., put \begin{equation} R_1=(\log p)\left(\frac{p-1}{p}\right)^3R_1'. \end{equation} Obviously, we want to prove that $R_1'=0$. Secondly, let us from now on denote $p^{-s_0-1}$ by $q$. 
If we then fill in the formulas for $\Sigma_A,\Sigma_B,\Sigma_C,\Sigma_1,\Sigma_2,$ and $\Sigma_B'$ obtained above in the formula for $R_1'$, we find that in the case $n^{\ast}\mid n$, the \lq residue\rq\ $R_1'$ equals \begin{multline}\label{lastformulaforReenaccent} R_1'=\frac{1}{1-q}\left(\frac{m_0}{1-q^{-\fBC}}+\frac{m_1}{1-q^{-\fAB}}+\frac{\mu_B}{1-q^{-1}}+\frac{m_3\mu_B(F_3+1)}{F_3}\right.\\+\left.\frac{(m_0+m_1)(\mu_B-1)}{2}\right)+\frac{\Sigma_1'}{(\log p)F_3}+\frac{m_0}{q^{\fBtwee}-1}+\frac{m_0\Sigma_3}{F_2F_3}. \end{multline} In the complementary case we have \begin{align} R_1'&=\frac{m_1}{1-q}\Biggl(\frac{1}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}-1}+\frac{1}{p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1}+1\Biggr)\notag\\ &\qquad\qquad\quad\ \ +\frac{m_0}{1-q}\Biggl(\frac{1}{p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1}+\frac{1}{p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}-1}+1\Biggr)\notag\\ &\qquad\qquad\qquad\qquad\qquad\quad\,+\frac{\Sigma_1'}{(\log p)F_3}+\frac{m_0}{p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1}+\frac{m_0\Sigma_3}{F_2F_3},\notag\\ \shortintertext{and by (\ref{sigasigbidcasedrie}--\ref{sigcsigbidcasedrie}) this is,} R_1'&=\frac{m_1}{1-q}\frac{q^{\fAB}-1}{\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}-1\Bigr)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1\Bigr)}\notag\\ &\qquad\qquad\quad\ \ +\frac{m_0}{1-q}\frac{q^{\fBC}-1}{\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1\Bigr)\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}-1\Bigr)}\label{lastformulaforReenaccentdn}\\ &\qquad\qquad\qquad\qquad\qquad\quad\,+\frac{\Sigma_1'}{(\log p)F_3}+\frac{m_0}{p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1}+\frac{m_0\Sigma_3}{F_2F_3}.\notag \end{align} \subsection{Study of $\Sigma_1'$}\label{studysigmaeenacc3} The term $\Sigma_1'$ was defined as \begin{align*} \Sigma_1'&=\Sigma(\delta_1)'(s_0)\\ &=\dds{\sum_{h\in H_1}p^{\sigma(h)+m(h)s}}\\ &=(\log p)\sum_hm(h)p^{\sigma(h)+m(h)s_0}\\ &=(\log p)\sum_hm(h)p^{w\cdot h}. \end{align*} Writing $h=h_0v_0+h_1v_1+h_3v_3$ for $h\in H_1=H(v_0,v_1,v_3)$, we have that \begin{align} \frac{\Sigma_1'}{\log p}&=\sum_{h\in H_1}(h_0m_0+h_1m_1+h_3m_3)p^{w\cdot h}\notag\\ &=m_0\Sigma_1^{(0)}+m_1\Sigma_1^{(1)}+m_3\Sigma_1^{(3)},\label{formuleSigmaeenaccent}\\\shortintertext{with} \Sigma_1^{(i)}&=\sum_{h\in H_1}h_ip^{w\cdot h};\qquad i=0,1,3.\notag \end{align} We will now calculate $\Sigma_1^{(1)},\Sigma_1^{(3)},$ and $\Sigma_1^{(0)}$. \subsubsection{The sum $\Sigma_1^{(1)}$} With the notation $q=p^{-s_0-1}$, Identity~\eqref{dpi1c3} yields \begin{equation*} p^{\frac{-\Psi(w\cdot v_0)+\mu_A(w\cdot v_1)+\mu_B(w\cdot v_3)}{\mu_1}}=p^{-s_0-1}=q. 
\end{equation*} Hence, based on the description \eqref{pof1casedrie} of the points of $H_1$ and proceeding as in Paragraph~\ref{parberSig1c3}, we obtain \begin{align} \Sigma_1^{(1)}&=\sum_{h\in H_1}h_1p^{w\cdot h}\notag\\ &=\sum_{i=0}^{\mu_A-1}\sum_{j=0}^{\mu_B-1}\sum_{k=0}^{\fAB-1}\frac{j\fAB+k}{\mu_B\fAB}\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}\Bigr)^i\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^jq^k\notag\\ &=\frac{\Sigma_A}{\mu_B}\Biggl[\sum_jj\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^j\sum_kq^k+\frac{1}{\fAB}\sum_j\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^j\sum_kkq^k\Biggr].\label{termofS11c3} \end{align} For $n^{\ast}\mid n$, by Formula~\eqref{fSASCS2ifnsdnc3} for $\Sigma_A$, we then find \begin{align} \Sigma_1^{(1)}&=\frac{F_3}{\mu_B(q^{\fAB}-1)}\biggl(\frac{\mu_B(\mu_B-1)}{2}\frac{q^{\fAB}-1}{q-1}\notag\\ &\qquad\qquad\qquad\quad\ \ +\frac{1}{\fAB}\,\mu_B\,\frac{q^{\fAB}(\fAB q-\fAB-q)+q}{(q-1)^2}\biggr)\notag\\ &=\frac{F_3}{1-q}\left(-\frac{\mu_B-1}{2}-\frac{1}{1-q^{-\fAB}}+\frac{1}{\fAB(1-q^{-1})}\right),\label{forSig11deelt}\\\intertext{while in the complementary case, we conclude} \Sigma_1^{(1)}&=\frac{F_3}{\mu_B\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}-1\Bigr)}\frac{\mu_B}{p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1}\frac{q^{\fAB}-1}{q-1}\notag\\ &=\frac{F_3}{q-1}\frac{q^{\fAB}-1}{\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}-1\Bigr)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1\Bigr)}.\label{forSig11deeltnt} \end{align} Note that if $n^{\ast}\nmid n$, the second term of \eqref{termofS11c3} vanishes, as the sum over $j$ equals zero in this case. \subsubsection{The sum $\Sigma_1^{(3)}$} Similarly, $\Sigma_1^{(3)}$ is given by \begin{align*} \Sigma_1^{(3)}&=\sum_{h\in H_1}h_3p^{w\cdot h}\\ &=\sum_{i=0}^{\mu_A-1}\sum_{j=0}^{\mu_B-1}\sum_{k=0}^{\fAB-1}\frac{i\fAB+k}{\mu_A\fAB}\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}\Bigr)^i\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^jq^k. \end{align*} If $n^{\ast}\nmid n$, the sum over $j$ vanishes and $\Sigma_1^{(3)}=0$. In the other case, one has \begin{equation*} p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}=1\quad\ \ \text{and subsequently}\quad\ \ p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}=p^{\fAB(-s_0-1)}=q^{\fAB}, \end{equation*} leading to \begin{align} \Sigma_1^{(3)}&=\frac{\mu_B}{\mu_A\fAB}\sum_{i=0}^{\mu_A-1}\sum_{k=0}^{\fAB-1}(i\fAB+k)q^{i\fAB+k}\notag\\ &=\frac{\mu_B}{\mu_A\fAB}\sum_{l=0}^{\mu_A\fAB-1}lq^l\notag\\ &=\frac{\mu_B}{\mu_A\fAB}\frac{q^{\mu_A\fAB}(\mu_A\fAB q-\mu_A\fAB-q)+q}{(q-1)^2}\notag\\ &=\frac{\mu_B}{1-q}\left(-(F_3+1)+\frac{F_3}{\mu_A\fAB(1-q^{-1})}\right).\label{formSig13deelt} \end{align} Note that $q^{\mu_A\fAB}=p^{\xi_A(w\cdot v_0)+w\cdot v_3}=p^{w\cdot v_3}=F_3+1$ in this case. Let us now look at $\Sigma_1^{(0)}$. \subsubsection{The sum $\Sigma_1^{(0)}$} Still based on \eqref{pof1casedrie}, this time we ought to consider the following sum: \begin{align*} \Sigma_1^{(0)}&=\sum_{h\in H_1}h_0p^{w\cdot h}\\* &=\sum_{i=0}^{\mu_A-1}\sum_{j=0}^{\mu_B-1}\sum_{k=0}^{\fAB-1}\left\{\frac{i\xi_A\mu_B\fAB+j\xi_B\mu_A\fAB-k\Psi}{\mu_1}\right\}\cdot\\* &\quad\qquad\qquad\qquad\qquad\qquad\qquad\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}\Bigr)^i \Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^jq^k. 
\end{align*} If we put \begin{equation}\label{defjnulc3} j_0=\left\lfloor\frac{i\xi_A\mu_B\fAB-k\Psi}{\mu_A\fAB}\right\rfloor, \end{equation} we can write this sum as \begin{align} \Sigma_1^{(0)}&=\frac{1}{\mu_B}\sum_{i=0}^{\mu_A-1}\sum_{k=0}^{\fAB-1}\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}\Bigr)^iq^kS(i,k),\qquad\text{with}\label{sumoverjref}\\ S(i,k)&=\sum_{j=0}^{\mu_B-1}\left(\left\{\frac{i\xi_A\mu_B\fAB-k\Psi}{\mu_A\fAB}\right\}+\{j_0+j\xi_B\}_{\mu_B}\right)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^j.\notag \end{align} Since $\gcd(\xi_B,\mu_B)=1$, the map \begin{align*} &\verB\to\verB:j\mapsto\{j_0+j\xi_B\}_{\mu_B}\\ \intertext{is a permutation, and with $\xi_B'$ as before the unique element $\xi_B'\in\verB$ such that $\xi_B\xi_B'\equiv1\bmod\mu_B$, the inverse permutation is given by} &\verB\to\verB:j\mapsto\{(j-j_0)\xi_B'\}_{\mu_B}. \end{align*} Therefore, after reordering the terms, the sum $S(i,k)$ can be written as \begin{align} S(i,k)&=\sum_{j=0}^{\mu_B-1}\left(\left\{\frac{i\xi_A\mu_B\fAB-k\Psi}{\mu_A\fAB}\right\}+j\right)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^{\{(j-j_0)\xi_B'\}_{\mu_B}}\label{sumoverjex1}\\ &=\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-j_0}\sum_j\left(\left\{\frac{i\xi_A\mu_B\fAB-k\Psi}{\mu_A\fAB}\right\}+j\right)\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^j.\label{sumoverjex2} \end{align} Indeed, as $p^{(\xi_B(w\cdot v_0)+w\cdot v_1)/\mu_B}$ is a $\mu_B$th root of unity, one may omit the curly brackets $\{\cdot\}_{\mu_B}$ in the exponent in \eqref{sumoverjex1}. Expression~\eqref{sumoverjex2} is then obtained by \eqref{teentottander} and the fact that $j_0$ is independent of $j$. It is now a good time to make a case distinction between $n^{\ast}\mid n$ and $n^{\ast}\nmid n$. \paragraph{Case $n^{\ast}\mid n$} Since in this case \begin{equation*} p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}=p^{\fAB(-s_0-1)}=q^{\fAB}\qquad\text{and}\qquad p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}=1, \end{equation*} Equations~\eqref{sumoverjref} and \eqref{sumoverjex2} give rise to \begin{align} \Sigma_1^{(0)}&=\frac{1}{\mu_B}\sum_{i=0}^{\mu_A-1}\sum_{k=0}^{\fAB-1}q^{i\fAB+k} \sum_{j=0}^{\mu_B-1}\left(\left\{\frac{i\xi_A\mu_B\fAB-k\Psi}{\mu_A\fAB}\right\}+j\right)\notag\\ &=\frac{\mu_B-1}{2}\sum_{l=0}^{\mu_A\fAB-1}q^l+\sum_i\sum_k\left\{\frac{i\xi_A\mu_B\fAB-k\Psi}{\mu_A\fAB}\right\}q^{i\fAB+k}\label{defTtwee}\\ &=-\frac{\mu_B-1}{2}\frac{F_3}{1-q}+T_2.\label{formuleSigmaeennul} \end{align} The second term of \eqref{defTtwee}, which we temporarily denote by $T_2$, can be further simplified as follows. Either directly from $\xi_Av_0+v_3\in\mu_A\Z^3$ and \eqref{vi1c3}, or as a corollary of \eqref{inZetidABc3}, we have that \begin{equation*} \frac{\xi_A\mu_B+\Psi}{\mu_A}\in\Z. \end{equation*} This makes that we can replace $\xi_A\mu_B$ by $-\Psi$ in the second term $T_2$ of \eqref{defTtwee}: \begin{align*} T_2&=\sum_{i=0}^{\mu_A-1}\sum_{k=0}^{\fAB-1}\left\{\frac{i\xi_A\mu_B\fAB-k\Psi}{\mu_A\fAB}\right\}q^{i\fAB+k}\\ &=\sum_i\sum_k\left\{\frac{i(-\Psi)\fAB-k\Psi}{\mu_A\fAB}\right\}q^{i\fAB+k}\\ &=\sum_i\sum_k\left\{\frac{-(i\fAB+k)\Psi}{\mu_A\fAB}\right\}q^{i\fAB+k}\\ &=\sum_{l=0}^{\mu_A\fAB-1}\left\{\frac{-l\Psi}{\mu_A\fAB}\right\}q^l. \end{align*} In Subsection~\ref{pointsofandaddrel} we showed that $\gcd(\mu_A\fAB,\Psi)=\mu_2$. 
Therefore, if we recall that $\Psi=\psi\mu_2$ and $\mu_A\fAB=\mu_2\fBtwee$, we can write the fraction $-\Psi/\mu_A\fAB$ in lowest terms and continue: \begin{align*} T_2&=\sum_{l=0}^{\mu_A\fAB-1}\left\{\frac{-l\psi}{\fBtwee}\right\}q^l\\ &=\sum_{\iota=0}^{\mu_2-1}\sum_{\kappa=0}^{\fBtwee-1}\left\{\frac{-(\iota\fBtwee+\kappa)\psi}{\fBtwee}\right\}q^{\iota\fBtwee+\kappa}\\ &=\sum_{\iota}\left(q^{\fBtwee}\right)^{\iota}\sum_{\kappa}\left\{\frac{-\kappa\psi}{\fBtwee}\right\}q^{\kappa}\\ &=\frac{F_3}{q^{\fBtwee}-1}\sum_{\kappa}\left\{\frac{-\kappa\barpsi}{\fBtwee}\right\}q^{\kappa}, \end{align*} where $\barpsi=\{\psi\}_{\fBtwee}$ denotes the reduction of $\psi$ modulo $\fBtwee$. Obviously, we have $\barpsi\in\{0,\ldots,\fBtwee-1\}$ and still $\gcd(\barpsi,\fBtwee)=1$. Note also that $\barpsi=0$ if and only if $\fBtwee=1$, and that if $\fBtwee=1$, then $T_2=0$. In what follows, we study $T_2$ under the assumption that $\fBtwee>1$. For any real number $a$, one has that $\{-a\}=0$ if $a\in\Z$, and $\{-a\}=1-\{a\}$ otherwise. Since \barpsi\ and \fBtwee\ are coprime, the only $\kappa\in\{0,\ldots,\fBtwee-1\}$ for which \begin{equation*} \frac{\kappa\barpsi}{\fBtwee}\in\Z, \end{equation*} is $\kappa=0$. Consequently, \begin{equation*} T_2=\frac{F_3}{q^{\fBtwee}-1}\left(\sum_{\kappa=0}^{\fBtwee-1}\left(1-\left\{\frac{\kappa\barpsi}{\fBtwee}\right\}\right)q^{\kappa}-1\right), \end{equation*} and since $\{a\}=a-\lfloor a\rfloor$ for any $a\in\R$, we obtain that $T_2$ equals \begin{align*} &\,\frac{F_3}{q^{\fBtwee}-1}\left(\sum_{\kappa}q^{\kappa}-\frac{\barpsi}{\fBtwee}\sum_{\kappa}\kappa q^{\kappa}+\sum_{\kappa}\left\lfloor\frac{\kappa\barpsi}{\fBtwee}\right\rfloor q^{\kappa}-1\right)\\ &=\frac{F_3}{q^{\fBtwee}-1}\left(\frac{q^{\fBtwee}-1}{q-1}-\frac{\barpsi}{\fBtwee}\frac{q^{\fBtwee}(\fBtwee(q-1)-q)+q}{(q-1)^2}+\sum_{\kappa}\left\lfloor\frac{\kappa\barpsi}{\fBtwee}\right\rfloor q^{\kappa}-1\right)\\ &=\frac{F_3}{1-q}\left(\frac{\barpsi}{1-q^{-\fBtwee}}-\frac{\barpsi}{\fBtwee(1-q^{-1})}-1\right)+\frac{F_3}{q^{\fBtwee}-1}\left(\sum_{\kappa}\left\lfloor\frac{\kappa\barpsi}{\fBtwee}\right\rfloor q^{\kappa}-1\right). \end{align*} As we assume that $\fBtwee>1$, we have $\barpsi\in\{1,\ldots,\fBtwee-1\}$. Hence the finite sequence \begin{equation}\label{finseqkappa} \left(\left\lfloor\frac{\kappa\barpsi}{\fBtwee}\right\rfloor\right)_{\kappa=0}^{\fBtwee-1} \end{equation} of non-negative integers ascends from $0$ to $\barpsi-1$ with steps of zero or one. If we denote \begin{equation*} \kappa_{\rho}=\min\left\{\kappa\in\Zplus\;\middle\vert\;\left\lfloor\frac{\kappa\barpsi}{\fBtwee}\right\rfloor=\rho\right\}=\left\lceil\frac{\rho\fBtwee}{\barpsi}\right\rceil;\qquad\rho=0,\ldots,\barpsi; \end{equation*} then \begin{equation*} 0=\kappa_0<\kappa_1<\cdots<\kappa_{\barpsi-1}<\kappa_{\barpsi}=\fBtwee, \end{equation*} and $\kappa_1,\ldots,\kappa_{\barpsi-1}$ are the indices where a \lq jump\rq\ in the sequence \eqref{finseqkappa} takes place. Let us express \begin{equation*} \sum_{\kappa}\left\lfloor\frac{\kappa\barpsi}{\fBtwee}\right\rfloor q^{\kappa} \end{equation*} in terms of these numbers. 
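For instance, for the hypothetical values $\fBtwee=5$ and $\barpsi=3$ (any coprime pair with $0<\barpsi<\fBtwee$ would serve; these numbers are purely illustrative), the sequence \eqref{finseqkappa} reads $0,0,1,1,2$, the jumps occur at $\kappa_1=2$ and $\kappa_2=4$ (with $\kappa_0=0$ and $\kappa_3=\fBtwee=5$), and the sum in question equals $q^2+q^3+2q^4$; multiplying by $1-q$ indeed yields
\begin{equation*}
q^2+q^4-2q^5=q^{\kappa_1}+q^{\kappa_2}+q^{\kappa_3}-\barpsi q^{\fBtwee},
\end{equation*}
in line with the general computation below.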
We have \begin{align*} \left(\sum_{\kappa=0}^{\fBtwee-1}\left\lfloor\frac{\kappa\barpsi}{\fBtwee}\right\rfloor q^{\kappa}\right)(1-q)&=\sum_{\kappa=1}^{\fBtwee-1}\left(\left\lfloor\frac{\kappa\barpsi}{\fBtwee}\right\rfloor-\left\lfloor\frac{(\kappa-1)\barpsi}{\fBtwee}\right\rfloor\right)q^{\kappa}-(\barpsi-1)q^{\fBtwee}\\ &=\sum_{\rho=1}^{\barpsi-1}q^{\kappa_{\rho}}-(\barpsi-1)q^{\fBtwee}\\ &=\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}-\barpsi q^{\fBtwee}, \end{align*} and therefore, \begin{equation}\label{eindformuleTtwee} T_2=\frac{F_3}{1-q}\Biggl(\frac{1}{q^{\fBtwee}-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}-\frac{\barpsi}{\fBtwee(1-q^{-1})}-1\Biggr)-\frac{F_3}{q^{\fBtwee}-1}. \end{equation} If we agree that an empty sum equals zero, then the above formula stays valid for $\fBtwee=1$. \paragraph{Case $n^{\ast}\nmid n$} With Equations~\eqref{sumoverjref} and \eqref{sumoverjex2} as a starting point, we now calculate $\Sigma_1^{(0)}$ in the complementary case. First of all, as $p^{(w\cdot v_0+\xi_B'(w\cdot v_1))/\mu_B}$ is now a $\mu_B$th root of unity different from one, one has that \begin{equation*} \sum_{j=0}^{\mu_B-1}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^j=0, \end{equation*} and Expression~\eqref{sumoverjex2} simplifies to \begin{align*} S(i,k)&=\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-j_0}\sum_{j=0}^{\mu_B-1}j\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^j\\ &=\frac{\mu_B}{p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-j_0}. \end{align*} Next, let us recall from \eqref{defjnulc3} and \eqref{inZetidABc3} that \begin{equation*} j_0=\left\lfloor\frac{i\xi_A\mu_B\fAB-k\Psi}{\mu_A\fAB}\right\rfloor\qquad\text{and}\qquad\frac{\xi_A\mu_B+\xi_B\mu_A+\Psi}{\mu_A}\in\mu_B\Z. \end{equation*} We observe that \begin{equation*} -j_0\equiv-\left\lfloor\frac{i(-\xi_B\mu_A-\Psi)\fAB-k\Psi}{\mu_A\fAB}\right\rfloor=i\xi_B-\left\lfloor\frac{-(i\fAB+k)\Psi}{\mu_A\fAB}\right\rfloor\mod\mu_B, \end{equation*} which gives rise to \begin{equation}\label{sumoverjex3} S(i,k)=\frac{\mu_B}{p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1}\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}\Bigr)^i\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-\left\lfloor\frac{-(i\fAB+k)\Psi}{\mu_A\fAB}\right\rfloor}. \end{equation} Finally, in Paragraph~\ref{sssSASCS2c3} we obtained \begin{equation*} p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}=p^{\fAB(-s_0-1)}=q^{\fAB}; \end{equation*} using this identity, Formulas~\eqref{sumoverjref} and \eqref{sumoverjex3} for $\Sigma_1^{(0)}$ and $S(i,k)$ eventually yield \begin{equation*} \Sigma_1^{(0)}=\frac{1}{p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1}\sum_{i=0}^{\mu_A-1}\sum_{k=0}^{\fAB-1}q^{i\fAB+k}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-\left\lfloor\frac{-(i\fAB+k)\Psi}{\mu_A\fAB}\right\rfloor}. 
\end{equation*} Proceeding as in Case $n^{\ast}\mid n$, we write the double sum $DS$ in the expression above as \begin{align*} DS&=\sum_{l=0}^{\mu_A\fAB-1}q^l\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-\left\lfloor\frac{-l\psi}{\fBtwee}\right\rfloor}\\ &=\sum_{\iota=0}^{\mu_2-1}\sum_{\kappa=0}^{\fBtwee-1}q^{\iota\fBtwee+\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-\left\lfloor\frac{-(\iota\fBtwee+\kappa)\psi}{\fBtwee}\right\rfloor}\\ &=\sum_{\iota}\left[q^{\fBtwee}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{\psi}\right]^{\iota}\sum_{\kappa}q^{\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{-\left\lfloor\frac{-\kappa\psi}{\fBtwee}\right\rfloor}.\\ \intertext{By \eqref{sig2sigbidcasedrie} and the fact that $\kappa\psi/\fBtwee\notin\Z$ for $\kappa\in\{1,\ldots,\fBtwee-1\}$,\footnotemark\ we then have} DS&=\sum_{\iota}\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}\Bigr)^{\iota} \Biggl[\sum_{\kappa}q^{\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor+1}-p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}+1\Biggr], \end{align*} \footnotetext{Recall that $\psi$ and $\fBtwee$ are coprime.} and hence we conclude \begin{align} \Sigma_1^{(0)}&=\frac{p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\,F_3}{\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1\Bigr)\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1\Bigr)}\sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor}\notag\\ &\quad\,-\frac{F_3}{p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1}.\label{formSig10dnc3} \end{align} \subsubsection{A formula for $\Sigma_1'$} Bringing together Equations (\ref{formuleSigmaeenaccent}, \ref{forSig11deelt}, \ref{formSig13deelt}, \ref{formuleSigmaeennul}, and \ref{eindformuleTtwee}) for $\Sigma_1'$, $\Sigma_1^{(0)}$, $\Sigma_1^{(1)}$, $\Sigma_1^{(3)}$, and $T_2$, we find the following formula in case that $n^{\ast}\mid n$: \begin{align} \frac{\Sigma_1'}{(\log p)F_3}&=\frac{m_0\Sigma_1^{(0)}+m_1\Sigma_1^{(1)}+m_3\Sigma_1^{(3)}}{F_3}\notag\\ &=\frac{m_0}{1-q}\Biggl(-\frac{\mu_B-1}{2}+\frac{1}{q^{\fBtwee}-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}-\frac{\barpsi}{\fBtwee(1-q^{-1})}-1\Biggr)\notag\\ &\quad-\frac{m_0}{q^{\fBtwee}-1}+\frac{m_1}{1-q}\left(-\frac{\mu_B-1}{2}-\frac{1}{1-q^{-\fAB}}+\frac{1}{\fAB(1-q^{-1})}\right)\notag\\ &\quad+\frac{m_3\mu_B}{1-q}\left(-\frac{F_3+1}{F_3}+\frac{1}{\mu_A\fAB(1-q^{-1})}\right)\notag\\ &=\frac{1}{1-q}\Biggl(\frac{m_0}{q^{\fBtwee}-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}-\frac{(m_0+m_1)(\mu_B-1)}{2}-\frac{m_1}{1-q^{-\fAB}}\label{eindformulesigmaeenaccent}\\* &\quad\,+\frac{m_1\mu_A+m_3\mu_B-m_0\barpsi\mu_2}{\mu_A\fAB(1-q^{-1})}-\frac{m_3\mu_B(F_3+1)}{F_3}-m_0\Biggr)-\frac{m_0}{q^{\fBtwee}-1};\notag \end{align} if on the contrary $n^{\ast}\nmid n$, then Equations (\ref{formuleSigmaeenaccent}, \ref{forSig11deeltnt}, and \ref{formSig10dnc3}) yield \begin{gather} \frac{\Sigma_1'}{(\log p)F_3}=\frac{m_0\Sigma_1^{(0)}+m_1\Sigma_1^{(1)}+m_3\Sigma_1^{(3)}}{F_3}\notag\\ \begin{aligned} &=\frac{m_0p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}}{\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1\Bigr)\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1\Bigr)}\sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor}\\ 
&\quad\,-\frac{m_0}{p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1}+\frac{m_1}{q-1}\frac{q^{\fAB}-1}{\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_3}{\mu_A}}-1\Bigr)\Bigl(p^{\frac{\xi_B(w\cdot v_0)+w\cdot v_1}{\mu_B}}-1\Bigr)}. \end{aligned}\label{eindformulesigmaeenaccentdn} \end{gather} \subsection{An easier formula for the residue $R_1$} \subsubsection{Case $n^{\ast}\mid n$} If we fill in Formula~\eqref{eindformulesigmaeenaccent} in Equation~\eqref{lastformulaforReenaccent} for $R_1'$, the latter simplifies to \begin{equation*} \frac{1}{1-q}\Biggl(\frac{m_0}{q^{\fBtwee}-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}+\frac{m_0}{q^{\fBC}-1}+\frac{m_1\mu_A+m_3\mu_B+\mu_1-m_0\barpsi\mu_2}{\mu_A\fAB(1-q^{-1})}\Biggr)+\frac{m_0\Sigma_3}{F_2F_3}. \end{equation*} There is a very convenient interpretation of $m_1\mu_A+m_3\mu_B+\mu_1$ appearing in the equation above. Recall from \eqref{vi1c3} that \begin{equation*} \Psi v_0-\mu_A v_1-\mu_B v_3=(0,0,\mu_1). \end{equation*} Making the dot product with $D=(x_D,y_D,1)$ on both sides yields \begin{equation*} m_0\Psi-m_1\mu_A-m_3\mu_B=\Psi(D\cdot v_0)-\mu_A(D\cdot v_1)-\mu_B(D\cdot v_3)=D\cdot(0,0,\mu_1)=\mu_1, \end{equation*} and hence \begin{equation*} m_1\mu_A+m_3\mu_B+\mu_1=m_0\Psi=m_0\psi\mu_2. \end{equation*} It follows that \begin{equation*} \frac{m_1\mu_A+m_3\mu_B+\mu_1-m_0\barpsi\mu_2}{\mu_A\fAB(1-q^{-1})}=\frac{m_0(\psi-\barpsi)}{\fBtwee(1-q^{-1})}=\frac{m_0t}{1-q^{-1}}, \end{equation*} with \begin{equation*} t=\frac{\psi-\barpsi}{\fBtwee}=\frac{\psi-\{\psi\}_{\fBtwee}}{\fBtwee}=\left\lfloor\frac{\psi}{\fBtwee}\right\rfloor \end{equation*} the quotient of Euclidean division of $\psi$ by \fBtwee. Note that if $\fBtwee=1$, then $t=\psi$. If we now put $R_1''=R_1'/m_0$, it remains to prove that \begin{equation}\label{laatsteformulevrReen} R_1''=\frac{\Sigma_3}{F_2F_3}+\frac{1}{1-q}\Biggl(\frac{1}{q^{\fBtwee}-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}+\frac{1}{q^{\fBC}-1}+\frac{t}{1-q^{-1}}\Biggr) \end{equation} vanishes. \subsubsection{Case $n^{\ast}\nmid n$} In this case, according to \eqref{lastformulaforReenaccentdn} and \eqref{eindformulesigmaeenaccentdn}, it now comes to proving that \begin{multline}\label{laatsteformulevrReendn} R_1''=\frac{R_1'}{m_0}=\frac{\Sigma_3}{F_2F_3}-\frac{q^{\fBC}-1}{(q-1)\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1\Bigr)\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}-1\Bigr)}\\ +\frac{p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}}{\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1\Bigr)\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1\Bigr)}\sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor} \end{multline} equals zero. \subsection{Investigation of $\Sigma_3$}\label{multmudrieformule} First we try to find a useful formula for $\mu_3$. \subsubsection{Multiplicity $\mu_3$ of $\delta_3$} From our study in Section~\ref{fundpar}, we remember that \begin{equation*} \mu_C\mu_2=\#H(v_1,v_2)\#H(v_1,v_3)\mid\mu_3=\#H(v_1,v_2,v_3). \end{equation*} We look for more information on the factor $\fCtwee=\mu_3/\mu_C\mu_2$. Let us proceed in the same way as when interpreting $\mu_C/\mu_A\mu_B=\fAB$ in Case~I (cfr.\ Subsection~\ref{formulaformuCcaseeen}). 
One has \begin{align*} \mu_3&= \begin{vmatrix} a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3 \end{vmatrix}\\ &=a_3 \begin{vmatrix} b_1&c_1\\b_2&c_2 \end{vmatrix} -b_3 \begin{vmatrix} a_1&c_1\\a_2&c_2 \end{vmatrix} +c_3 \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix}\\ &= \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix} (a_3\alpha_C+b_3\beta_C+c_3)\\ &=-\mu_C\left(v_3\cdot\overrightarrow{CD}\right),\\\intertext{and since $v_3$ and $\overrightarrow{AD}$ are perpendicular, we can continue:} \mu_3&=\mu_C\left(v_3\cdot\overrightarrow{AB}+v_3\cdot\overrightarrow{BC}\right). \end{align*} The fact that $\overrightarrow{AB}\perp v_0$ and $\overrightarrow{BC}\perp v_1$ implies that \begin{equation*} \overrightarrow{AB}=\fAB(-b_0,a_0,0)\qquad\text{and}\qquad\overrightarrow{BC}=\fBC(-b_1,a_1,0). \end{equation*} Hence \begin{align*} \mu_3&=\mu_C\bigl(\fAB(a_0b_3-a_3b_0)+\fBC(a_1b_3-a_3b_1)\bigr)\\ &=\mu_C(\mu_A\fAB+\fBC\Psi)\\ &=\mu_C\mu_2\fCtwee, \end{align*} with \begin{equation}\label{deffCtwee} \fCtwee=\fBtwee+\fBC\psi. \end{equation} Note that \eqref{deffCtwee} and the coprimality of $\psi$ and \fBtwee\ imply $\psi\in\{0,\ldots,\fCtwee-1\}$ and $\gcd(\psi,\fCtwee)=1$. Next, we try to list all the $\mu_3=\mu_C\mu_2\fCtwee$ points of $H_3$. \subsubsection{Points of $H_3=H(v_1,v_2,v_3)$}\label{ssspoH3c3} We proceed in the same way as in Case~I for the points of $H_C$. As we know, the points of $H_C=H(v_1,v_2)$ and $H_2=H(v_1,v_3)$ can be presented as \begin{alignat*}{3} h(i,0,0)&=\left\{\frac{i\xi_C}{\mu_C}\right\}v_1&&+\frac{i}{\mu_C}v_2;&\qquad&i=0,\ldots,\mu_C-1;\\\intertext{and} h(0,j,0)&=\left\{\frac{j\xi_2}{\mu_2}\right\}v_1&&+\frac{j}{\mu_2}v_3;&&j=0,\ldots,\mu_2-1; \end{alignat*} respectively. To generate a complete list of points of $H_3$, it is now sufficient to find a set of representatives for the \fCtwee\ cosets of the subgroup $H_C+H_2$ of $H_3$. Recall that the cosets of $H_C+H_2$ can be described as\footnote{Here $h_3$ denotes the $v_3$-coordinate of $h$. We can as well, and completely similarly, describe these cosets in terms of the $v_2$-coordinate, but the choice for $h_3$ is more convenient in this case.} \begin{equation*} \mathcal{C}_k=\left\{h\in H_3\;\middle\vert\;\{h_3\}_{\frac{1}{\mu_2}}=\frac{k\mu_C}{\mu_3}=\frac{k}{\mu_2\fCtwee}\right\};\qquad k=0,\ldots,\fCtwee-1. \end{equation*} We will follow the approach of Section~\ref{fundpar} and select for each coset $\mathcal{C}_k$, as a representative, the unique element $h(0,0,k)\in\mathcal{C}_k$ with $v_3$-coordinate $h_3(0,0,k)=k/\mu_2\fCtwee$ and $v_2$-coordinate $h_2(0,0,k)<1/\mu_C$. We find $h(0,0,1)$ as follows. Recall from \eqref{vi3c3} that \begin{equation*} \Theta v_1-\Psi v_2-\mu_C v_3=(0,0,\mu_3), \end{equation*} with $\Theta=\begin{vsmallmatrix}a_2&b_2\\a_3&b_3\end{vsmallmatrix}>0$. It follows that \begin{equation*} h(0,0,1)=\left\{\frac{-\Theta}{\mu_3}\right\}v_1+\frac{\psi}{\mu_C\fCtwee}v_2+\frac{1}{\mu_2\fCtwee}v_3=\{(0,0,-1)\}\in\mathcal{C}_1 \end{equation*} is the representative for $\mathcal{C}_1$ we are looking for. Indeed, it follows from Equation~\eqref{deffCtwee} that $h_2(0,0,1)=\psi/\mu_C\fCtwee$ is not only reduced modulo $1$, it is also already reduced modulo $1/\mu_C$. 
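For instance, with the hypothetical values $\mu_C=2$, $\mu_2=1$, $\fBC=1$, $\fBtwee=2$, and $\psi=3$ (purely illustrative numbers, not attached to a concrete polyhedron), one gets $\fCtwee=\fBtwee+\fBC\psi=5$ and $\mu_3=\mu_C\mu_2\fCtwee=10$, and the representative $h(0,0,1)$ has $v_2$-coordinate $\psi/\mu_C\fCtwee=3/10<1/2=1/\mu_C$, as asserted.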
It is now natural to find all representatives $h(0,0,k)$ by considering the \fCtwee\ multiples $\{kh(0,0,1)\}$; $k=0,\ldots,\fCtwee-1$; of $h(0,0,1)$ in the group $H_3$, and adding to each multiple $\{kh(0,0,1)\}$ the unique element of $H_C$ such that the $v_2$-coordinate $h_2(0,0,k)$ of the sum $h(0,0,k)$ is reduced modulo $1/\mu_C$: \begin{multline*} h(0,0,k)=\left\{\frac{-k\Theta-\lfloor k\psi/\fCtwee\rfloor\xi_C\mu_2\fCtwee}{\mu_3}\right\}v_1+\frac{\{k\psi\}_{\fCtwee}}{\mu_C\fCtwee}v_2+\frac{k}{\mu_2\fCtwee}v_3\in\mathcal{C}_k;\\k=0,\ldots,\fCtwee-1. \end{multline*} Note that since $\psi$ and \fCtwee\ are coprime, $\{k\psi\}_{\fCtwee}$ runs, as expected, through the numbers $0,\ldots,\fCtwee-1$ when $k$ does so. All this leads to the following member list of $H_3$: \begin{multline*} \begin{aligned} h(i,j,k)&=\{h(i,0,0)+h(0,j,0)+h(0,0,k)\}\\ &=\left\{\frac{(i-\lfloor k\psi/\fCtwee\rfloor)\xi_C\mu_2\fCtwee+j\xi_2\mu_C\fCtwee-k\Theta}{\mu_3}\right\}v_1\\ &\qquad\qquad\qquad\qquad\qquad\qquad\,+\frac{i\fCtwee+\{k\psi\}_{\fCtwee}}{\mu_C\fCtwee}v_2+\frac{j\fCtwee+k}{\mu_2\fCtwee}v_3; \end{aligned}\\ i=0,\ldots,\mu_C-1;\quad j=0,\ldots,\mu_2-1;\quad k=0,\ldots,\fCtwee-1. \end{multline*} Finally, we will try to calculate $\Sigma_3$ based on the above description of $H_3$'s points. \subsubsection{Calculation of $\Sigma_3$}\label{ssscalcsigmadriecasedrie} Writing $h$ as $h=h_1v_1+h_2v_2+h_3v_3$ for $h\in H_3=H(v_1,v_2,v_3)$ and noting that $p^{w\cdot v_1}=1$, we find \begin{align} \Sigma_3&=\sum_{h\in H_3}p^{w\cdot h}\notag\\ &=\sum_{i=0}^{\mu_C-1}\sum_{j=0}^{\mu_2-1}\sum_{k=0}^{\fCtwee-1}\notag\\* &\qquad\ p^{\frac{(i-\lfloor k\psi/\fCtwee\rfloor)\xi_C\mu_2\fCtwee+j\xi_2\mu_C\fCtwee-k\Theta}{\mu_3}(w\cdot v_1)+\frac{i\fCtwee+\{k\psi\}_{\fCtwee}}{\mu_C\fCtwee}(w\cdot v_2)+\frac{j\fCtwee+k}{\mu_2\fCtwee}(w\cdot v_3)}\notag\\ &=\sum_i\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}\Bigr)^i\sum_j\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}\Bigr)^j\notag\\* &\qquad\qquad\qquad\qquad\;\;\,\sum_k\Bigl(p^{\frac{-\Theta(w\cdot v_1)+\Psi(w\cdot v_2)+\mu_C(w\cdot v_3)}{\mu_3}}\Bigr)^k\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}\Bigr)^{-\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}\notag\\ &=\frac{F_2F_3}{\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}-1\Bigr)\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1\Bigr)}\sum_{k=0}^{\fCtwee-1}q^k\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}\Bigr)^{-\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor},\label{formuleSigmadriedn} \end{align} where in the last step we used Identity~\eqref{dpi3c3}. In the special case that $n^{\ast}\mid n$, based on (\ref{sigcsigbidcasedrie}--\ref{fSASCS2ifnsdnc3}), we obtain the slightly simpler formula \begin{equation}\label{formuleSigmadrie} \Sigma_3=\frac{F_2F_3}{(q^{\fBC}-1)(q^{\fBtwee}-1)}\sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}. \end{equation} \subsection{Proof that the residue $R_1$ equals zero}\label{ssfinalsscasedrie} \subsubsection{Case $n^{\ast}\mid n$} According to Formula~\eqref{laatsteformulevrReen} for $R_1''$ and Formula~\eqref{formuleSigmadrie} for $\Sigma_3$, it now suffices to prove that \begin{equation}\label{finalcheckcas3} \sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}=\frac{q^{\fBC}-1}{q-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}+\frac{q^{\fBtwee}-1}{q-1}\left(tq\frac{q^{\fBC}-1}{q-1}+1\right). \end{equation} Let us do this now. 
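As a quick sanity check, take the hypothetical values $\fBC=1$, $\fBtwee=2$, and $\psi=3$ (so $t=1$, $\barpsi=1$, $\kappa_1=2$, and $\fCtwee=5$; these numbers serve for illustration only). The left-hand side of \eqref{finalcheckcas3} equals
\begin{equation*}
\sum_{k=0}^{4}q^{k-\left\lfloor\frac{3k}{5}\right\rfloor}=1+q+q+q^2+q^2=1+2q+2q^2,
\end{equation*}
while the right-hand side reduces to $q^{\kappa_1}+(q+1)(q+1)=1+2q+2q^2$ as well.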
First of all, if $\fBtwee=1$, then by \eqref{deffCtwee} we have $\fCtwee=\fBC\psi+1$, and \begin{align*} \sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}&=1+\sum_{k=1}^{\psi\fBC}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}\\ &=1+\sum_{r=0}^{\psi-1}\sum_{l=1}^{\fBC}q^{(r\fBC+l)-\fBC\left\lfloor\frac{(r\fBC+l)\psi}{\fBC\psi+1}\right\rfloor}\\ &=1+\sum_r\sum_lq^l\\ &=\psi q\frac{q^{\fBC}-1}{q-1}+1, \end{align*} which agrees with \eqref{finalcheckcas3} for $\fBtwee=1$. In what follows, we shall assume that $\fBtwee>1$ and thus that $\barpsi>0$. Since $\psi\in\{1,\ldots,\fCtwee-1\}$, the finite sequence \begin{equation}\label{finseqkkk} \left(\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor\right)_{k=0}^{\fCtwee-1} \end{equation} of non-negative integers ascends from $0$ to $\psi-1$ with steps of zero or one. Let us denote \begin{equation*} k_r=\min\left\{k\in\Zplus\;\middle\vert\;\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor=r\right\}=\left\lceil\frac{r\fCtwee}{\psi}\right\rceil;\qquad r=0,\ldots,\psi. \end{equation*} Then \begin{equation*} 0=k_0<k_1<\cdots<k_{\psi-1}<k_{\psi}=\fCtwee, \end{equation*} and obviously, \begin{equation*} \sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}=\sum_{r=0}^{\psi-1}\sum_{k=k_r}^{k_{r+1}-1}q^{k-\fBC r}. \end{equation*} We recall that \begin{equation}\label{efferecallen} \fCtwee=\fBtwee+\fBC\psi\qquad\text{and}\qquad\psi=t\fBtwee+\barpsi, \end{equation} with $t\in\Zplus$ and $\barpsi=\{\psi\}_{\fBtwee}\in\{1,\ldots,\fBtwee-1\}$. Remember also that for $\rho\in\{0,\ldots,\barpsi\}$, the number $\kappa_{\rho}$ denotes the smallest integer satisfying $\kappa_{\rho}\barpsi\geqslant\rho\fBtwee$. Let us first verify \eqref{finalcheckcas3} for $t=0$. In this case we have that $\psi=\barpsi$, and hence \begin{equation*} k_{\rho}=\left\lceil\frac{\rho\fCtwee}{\barpsi}\right\rceil=\left\lceil\frac{\rho(\fBtwee+\fBC\barpsi)}{\barpsi}\right\rceil=\fBC\rho+\left\lceil\frac{\rho\fBtwee}{\barpsi}\right\rceil=\fBC\rho+\kappa_{\rho}, \end{equation*} for all $\rho\in\{0,\ldots,\barpsi\}$. It follows that \begin{align*} \sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}&=\sum_{\rho=0}^{\barpsi-1}\sum_{k=k_{\rho}}^{k_{\rho+1}-1}q^{k-\fBC\rho}\\ &=\sum_{\rho=0}^{\barpsi-1}\sum_{k=\fBC\rho+\kappa_{\rho}}^{\fBC(\rho+1)+\kappa_{\rho+1}-1}q^{k-\fBC\rho}\\ &=\sum_{\rho=0}^{\barpsi-1}\sum_{\kappa=\kappa_{\rho}}^{\fBC+\kappa_{\rho+1}-1}q^{\kappa}\\ &=\sum_{\rho=0}^{\barpsi-1}\Biggl(\sum_{\kappa=\kappa_{\rho}}^{\kappa_{\rho+1}-1}q^{\kappa}+q^{\kappa_{\rho+1}}\sum_{l=0}^{\fBC-1}q^l\Biggr)\\ &=\sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}+\frac{q^{\fBC}-1}{q-1}\sum_{\rho=0}^{\barpsi-1}q^{\kappa_{\rho+1}}\\ &=\frac{q^{\fBC}-1}{q-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}+\frac{q^{\fBtwee}-1}{q-1}, \end{align*} which agrees with \eqref{finalcheckcas3} for $t=0$. Let us from now on assume that $t>0$. In the lemma below we express $k_r$ and $k_{r+1}-1$ explicitly as a function of $r$ after writing $r$ in a special form, but first we introduce the following notation. \begin{notation}[Iverson's convention]\label{iverson} Cfr.\ \cite{knuth92}. For any proposition $P$ we shall denote by \begin{equation*} [P]= \begin{cases} 1,&\text{if $P$ is true;}\\ 0,&\text{otherwise;} \end{cases} \end{equation*} the truth value of $P$. \end{notation} \begin{lemma}\label{lemmaspecialevorm} Assume that $t>0$. 
Then the map \begin{equation}\label{mapspecialevorm} \begin{multlined}[.89\textwidth] \bigl\{(\rho,\kappa,\lambda)\in\Zplus^3\mid\\ 0\leqslant\rho\leqslant\barpsi-1,\ \,\kappa_{\rho}\leqslant\kappa\leqslant\kappa_{\rho+1}-1,\ \,0\leqslant\lambda\leqslant t-[\kappa<\kappa_{\rho+1}-1]\bigr\}\\ \to\{0,\ldots,\psi-1\}:(\rho,\kappa,\lambda)\mapsto r=\kappa t+\rho+\lambda \end{multlined} \end{equation} is bijective, and for $r=\kappa t+\rho+\lambda\in\{0,\ldots,\psi-1\}$ written in this way, we have that \begin{align*} k_r&=\fBC r+\kappa+[\lambda>0]\qquad\text{and}\\ k_{r+1}-1&=\fBC(r+1)+\kappa. \end{align*} \end{lemma} We will prove this lemma shortly. If we accept it now, we obtain\footnote{Note that $r=\kappa t+\rho+\lambda$ in the second line.} \begin{align*} \sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}&=\sum_{r=0}^{\psi-1}\sum_{k=k_r}^{k_{r+1}-1}q^{k-\fBC r}\\ &=\sum_{\rho=0}^{\barpsi-1}\sum_{\kappa=\kappa_{\rho}}^{\kappa_{\rho+1}-1}\sum_{\lambda=0}^{t-[\kappa<\kappa_{\rho+1}-1]}\sum_{k=\fBC r+\kappa+[\lambda>0]}^{\fBC(r+1)+\kappa}q^{k-\fBC r}\\ &=\sum_{\rho=0}^{\barpsi-1}\sum_{\kappa=\kappa_{\rho}}^{\kappa_{\rho+1}-1}q^{\kappa}\sum_{\lambda=0}^{t-[\kappa<\kappa_{\rho+1}-1]}\sum_{l=[\lambda>0]}^{\fBC}q^l\\ &=\sum_{\rho=0}^{\barpsi-1}\sum_{\kappa=\kappa_{\rho}}^{\kappa_{\rho+1}-1}q^{\kappa}\sum_{\lambda=0}^{t-[\kappa<\kappa_{\rho+1}-1]}\left(q\frac{q^{\fBC}-1}{q-1}+[\lambda=0]\right)\\ &=\sum_{\rho=0}^{\barpsi-1}\sum_{\kappa=\kappa_{\rho}}^{\kappa_{\rho+1}-1}q^{\kappa}\left(\bigl(t+[\kappa=\kappa_{\rho+1}-1]\bigr)q\frac{q^{\fBC}-1}{q-1}+1\right)\\ &=\sum_{\rho=0}^{\barpsi-1}\Biggl[\left(tq\frac{q^{\fBC}-1}{q-1}+1\right)\sum_{\kappa=\kappa_{\rho}}^{\kappa_{\rho+1}-1}q^{\kappa}+q^{\kappa_{\rho+1}}\frac{q^{\fBC}-1}{q-1}\Biggr]\\ &=\left(tq\frac{q^{\fBC}-1}{q-1}+1\right)\sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}+\frac{q^{\fBC}-1}{q-1}\sum_{\rho=0}^{\barpsi-1}q^{\kappa_{\rho+1}}\\ &=\frac{q^{\fBC}-1}{q-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}+\frac{q^{\fBtwee}-1}{q-1}\left(tq\frac{q^{\fBC}-1}{q-1}+1\right), \end{align*} which agrees with \eqref{finalcheckcas3}. We conclude the proof of $R_1=0$ in Case $n^{\ast}\mid n$ by verifying Lemma~\ref{lemmaspecialevorm}. Since for all $(\rho,\kappa,\lambda)$ in the domain, we have that \begin{equation*} 0\leqslant\kappa t+\rho+\lambda\leqslant(\kappa_{\barpsi}-1)t+(\barpsi-1)+t=t\fBtwee+\barpsi-1=\psi-1, \end{equation*} the map \eqref{mapspecialevorm} is well-defined. We check that the map is onto. Let $r\in\{0,\ldots,\psi-1\}$. Because the finite sequence \begin{equation*} \bigl(\kappa_{\rho}t+\rho\bigr)_{\rho=0}^{\barpsi} \end{equation*} of non-negative integers strictly ascends from $\kappa_0t+0=0$ to $\kappa_{\barpsi}t+\barpsi=\psi$, there exists a (unique) $\rho\in\{0,\ldots,\barpsi-1\}$ such that \begin{equation*} \kappa_{\rho}t+\rho\leqslant r<\kappa_{\rho+1}t+(\rho+1). \end{equation*} If $r=\kappa_{\rho+1}t+\rho$, we can write $r$ as $r=(\kappa_{\rho+1}-1)t+\rho+t$, and $r$ is the image of $(\rho,\kappa_{\rho+1}-1,t)$ under the map \eqref{mapspecialevorm}. Otherwise we have that \begin{equation*} \kappa_{\rho}t\leqslant r-\rho<\kappa_{\rho+1}t, \end{equation*} and we can write $r-\rho$ (in a unique way) as $r-\rho=\kappa t+\lambda$ with $\kappa,\lambda\in\Z;$ $\kappa_{\rho}\leqslant\kappa\leqslant\kappa_{\rho+1}-1$; and $0\leqslant\lambda\leqslant t-1$. In this case $r=\kappa t+\rho+\lambda$ is the image of $(\rho,\kappa,\lambda)$ under the map \eqref{mapspecialevorm}. This proves surjectivity. 
The uniqueness of the representation $r=\kappa t+\rho+\lambda$ can either be checked directly, or by verifying that the cardinality of the domain, \begin{align*} \sum_{\rho=0}^{\barpsi-1}\sum_{\kappa=\kappa_{\rho}}^{\kappa_{\rho+1}-1}\sum_{\lambda=0}^{ t-[\kappa<\kappa_{\rho+1}-1]}1 &=\sum_{\rho=0}^{\barpsi-1}\sum_{\kappa=\kappa_{\rho}}^{\kappa_{\rho+1}-1}\bigl(t+[\kappa=\kappa_{\rho+1}-1]\bigr)\\ &=\sum_{\kappa=0}^{\fBtwee-1}t+\sum_{\rho=0}^{\barpsi-1}1\\ &=t\fBtwee+\barpsi\\ &=\psi, \end{align*} indeed equals the cardinality of the codomain $\{0,\ldots,\psi-1\}$. Let $r=\kappa t+\rho+\lambda\in\{0,\ldots,\psi-1\}$, written in the appropriate way. We prove the expression for $k_r$ stated in the lemma. On the one hand, because $\kappa\geqslant\kappa_{\rho}$, it holds that $\kappa\barpsi\geqslant\kappa_{\rho}\barpsi\geqslant\rho\fBtwee$, and since $\lambda\leqslant t$, we have \begin{equation*} (\rho+\lambda)\fBtwee\leqslant\kappa\barpsi+[\lambda>0](t\fBtwee+\barpsi). \end{equation*} On the other hand, since we assume $t>0$, it follows from $\kappa\leqslant\kappa_{\rho+1}-1$ that \begin{equation*} \kappa\barpsi\leqslant(\kappa_{\rho+1}-1)\barpsi<(\rho+1)\fBtwee\leqslant(\rho+\lambda)\fBtwee+[\lambda=0](t\fBtwee+\barpsi). \end{equation*} Hence \begin{equation*} \kappa\barpsi-[\lambda=0](t\fBtwee+\barpsi)<(\rho+\lambda)\fBtwee\leqslant\kappa\barpsi+[\lambda>0](t\fBtwee+\barpsi). \end{equation*} Adding $\kappa t\fBtwee$ in all sides of the equation, we get \begin{equation*} (\kappa-[\lambda=0])(t\fBtwee+\barpsi)<(\kappa t+\rho+\lambda)\fBtwee\leqslant(\kappa+[\lambda>0])(t\fBtwee+\barpsi). \end{equation*} If we apply \eqref{efferecallen} and the representation of $r$, we obtain \begin{equation*} (\kappa-[\lambda=0])\psi<r\fBtwee\leqslant(\kappa+[\lambda>0])\psi, \end{equation*} and after adding $\fBC r\psi$, we have \begin{equation*} (\fBC r+\kappa-[\lambda=0])\psi<r(\fBtwee+\fBC\psi)\leqslant(\fBC r+\kappa+[\lambda>0])\psi. \end{equation*} Using Formula~\eqref{efferecallen} for $\fCtwee$, we eventually obtain \begin{equation*} (\fBC r+\kappa+[\lambda>0]-1)\psi<r\fCtwee\leqslant(\fBC r+\kappa+[\lambda>0])\psi, \end{equation*} which proves that \begin{equation}\label{formulekaer} k_r=\fBC r+\kappa+[\lambda>0]. \end{equation} Finally, let us verify the expression for $k_{r+1}-1$. If $r=\psi-1$, then \begin{gather*} r=(\kappa_{\barpsi}-1)t+(\barpsi-1)+t\qquad\text{and}\\ \begin{multlined}[.95\textwidth] k_{r+1}-1=k_{\psi}-1=\fCtwee-1=\fBtwee+\fBC\psi-1\\ =\fBC\psi+(\kappa_{\barpsi}-1)=\fBC(r+1)+\kappa. \end{multlined} \end{gather*} Otherwise $r+1\leqslant\psi-1$ and we can use \eqref{formulekaer} to find $k_{r+1}$. First suppose that $\lambda<t-[\kappa<\kappa_{\rho+1}-1]$. Then we have \begin{gather*} r+1=\kappa t+\rho+(\lambda+1)\qquad\text{and}\\ k_{r+1}-1=\fBC(r+1)+\kappa+[\lambda+1>0]-1=\fBC(r+1)+\kappa. \end{gather*} If on the contrary $\lambda=t-[\kappa<\kappa_{\rho+1}-1]$, we have \begin{gather*} r+1=(\kappa+1)t+(\rho+[\kappa=\kappa_{\rho+1}-1])+0\qquad\text{and again}\\ k_{r+1}-1=\fBC(r+1)+(\kappa+1)+[0>0]-1=\fBC(r+1)+\kappa. \end{gather*} This ends the proof of the lemma and concludes Case $n^{\ast}\mid n$. 
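By way of illustration of Lemma~\ref{lemmaspecialevorm}, take once more the hypothetical values $\fBC=1$, $\fBtwee=2$, and $\psi=3$, so that $t=1$, $\barpsi=1$, $\fCtwee=5$, $\kappa_0=0$, and $\kappa_1=2$. The domain of the map \eqref{mapspecialevorm} then consists of the triples $(\rho,\kappa,\lambda)=(0,0,0),(0,1,0),(0,1,1)$, mapped to $r=0,1,2$, respectively, while $k_0,k_1,k_2,k_3=0,2,4,5$; one verifies immediately that $k_r=\fBC r+\kappa+[\lambda>0]$ and $k_{r+1}-1=\fBC(r+1)+\kappa$ in each of the three cases.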
\subsubsection{Case $n^{\ast}\nmid n$} By Equations~\eqref{laatsteformulevrReendn} and \eqref{formuleSigmadriedn} for $R_1''$ and $\Sigma_3$, proving $R_1''=0$ boils down to verifying that \begin{multline*} \Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}-1\Bigr)\sum_{k=0}^{\fCtwee-1}q^k\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}\Bigr)^{-\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}\\ +p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigl(p^{\frac{\xi_C(w\cdot v_1)+w\cdot v_2}{\mu_C}}-1\Bigr)\sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}}\Bigr)^{\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor}\\ =\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1\Bigr)\frac{q^{\fBC}-1}{q-1}. \end{multline*} Expressing everything in terms of \begin{equation*} q\qquad\text{and}\qquad\beta=p^{\frac{w\cdot v_0+\xi_B'(w\cdot v_1)}{\mu_B}} \end{equation*} by means of Identities~\eqref{sigcsigbidcasedrie} and \eqref{sig2sigbidcasedrie}, the above statement is equivalent to \begin{multline}\label{finalcheckcas3dn} (q-1)(\beta-1)\sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}\beta^{\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}+(q-1)(q^{\fBC}-\beta)\sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}\beta^{\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor}\\ =(q^{\fBC}-1)(q^{\fBtwee}\beta^{\psi}-1). \end{multline} This equality in fact turns out to be a polynomial identity in the variables $q$ and $\beta$, as we will show now. Both sequences, $\bigl(\lfloor k\psi/\fCtwee\rfloor\bigr)_{k=0}^{\fCtwee}$ and $\bigl(\lfloor\kappa\psi/\fBtwee\rfloor\bigr)_{\kappa=0}^{\fBtwee}$, ascend from $0$ to $\psi$. As $\psi<\fCtwee$ and $\psi$ may be strictly greater than $\fBtwee$, the first sequence adopts all values in $\{0,\ldots,\psi\}$, but the second one may not. We put \begin{alignat*}{3} k_r&=\min\left\{k\in\Zplus\;\middle\vert\;\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor=r\right\}&&=\left\lceil\frac{r\fCtwee}{\psi}\right\rceil&\qquad&\text{and}\\ \kappa_r&=\min\left\{\kappa\in\Zplus\;\middle\vert\;\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor\geqslant r\right\}&&=\left\lceil\frac{r\fBtwee}{\psi}\right\rceil;&\qquad&r=0,\ldots,\psi. \end{alignat*} The numbers $k_r$ are the same as in Case $n^{\ast}\mid n$, while the $\kappa_r$ are defined differently; note that the sequence $(\kappa_r)_r$ is still ascending, but no longer necessarily strictly. We have \begin{alignat*}{6} 0&=k_0&&<k_1&&<\cdots&&<k_{\psi-1}&&<k_{\psi}&&=\fCtwee\qquad\text{and}\\ 0&=\kappa_0&&<\kappa_1&&\leqslant\cdots&&\leqslant\kappa_{\psi-1}&&\leqslant\kappa_{\psi}&&=\fBtwee; \end{alignat*} furthermore, there is the following relation between the numbers $k_r$ and $\kappa_r$: \begin{equation*} k_r=\left\lceil\frac{r\fCtwee}{\psi}\right\rceil=\left\lceil\frac{r(\fBtwee+\fBC\psi)}{\psi}\right\rceil=\fBC r+\left\lceil\frac{r\fBtwee}{\psi}\right\rceil=\fBC r+\kappa_r, \end{equation*} for all $r\in\{0,\ldots,\psi\}$. Next, we use these data in rewriting both sums appearing in \eqref{finalcheckcas3dn}. 
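Before doing so, it may be instructive to check \eqref{finalcheckcas3dn} on the hypothetical values $\fBC=1$, $\fBtwee=2$, and $\psi=3$ used earlier. Here $k_0,\ldots,k_3=0,2,4,5$ and $\kappa_0,\ldots,\kappa_3=0,1,2,2$, so the sequence $(\kappa_r)_r$ is indeed not strictly ascending (the value $2$ is skipped by $\bigl(\lfloor\kappa\psi/\fBtwee\rfloor\bigr)_{\kappa}$), and \eqref{finalcheckcas3dn} specializes to
\begin{equation*}
(q-1)(\beta-1)\bigl(1+q+q\beta+q^2\beta+q^2\beta^2\bigr)+(q-1)(q-\beta)(1+q\beta)=(q-1)\bigl(q^2\beta^3-1\bigr),
\end{equation*}
which is readily verified by expanding both sides.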
If we adopt the convention that empty sums equal zero, then the first sum is given by \begin{align} \sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}\beta^{\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor} &=\sum_{r=0}^{\psi-1}\sum_{k=k_r}^{k_{r+1}-1}q^{k-\fBC r}\beta^r\notag\\ &=\sum_r\beta^r\sum_{k=\fBC r+\kappa_r}^{\fBC(r+1)+\kappa_{r+1}-1}q^{k-\fBC r}\notag\\ &=\sum_r\beta^r\sum_{\kappa=\kappa_r}^{\fBC+\kappa_{r+1}-1}q^{\kappa}\notag\\ &\overset{(\star)}{=}\sum_r\beta^r\Biggl(\sum_{\kappa=\kappa_r}^{\kappa_{r+1}-1}q^{\kappa}+q^{\kappa_{r+1}}\sum_{l=0}^{\fBC-1}q^l\Biggr)\notag\\ &=\sum_r\beta^r\sum_{\kappa}q^{\kappa}+\frac{q^{\fBC}-1}{q-1}\sum_r\beta^rq^{\kappa_{r+1}}.\label{firstsumc3} \end{align} Note that Equality~$(\star)$ holds even if $\kappa_r=\kappa_{r+1}$ for some $r$. With Notation~\ref{iverson}, the second sum can be written as \begin{equation}\label{secondsumc3} \sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}\beta^{\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor}=\sum_{r=0}^{\psi-1}\beta^r\sum_{\kappa}\left[\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor=r\right]q^{\kappa}=\sum_r\beta^r\sum_{\kappa=\kappa_r}^{\kappa_{r+1}-1}q^{\kappa}. \end{equation} Indeed, if there is no $\kappa\in\{0,\ldots,\fBtwee-1\}$ such that $\lfloor\kappa\psi/\fBtwee\rfloor=r$, then $\kappa_r=\kappa_{r+1}$ and $\sum_{\kappa=\kappa_r}^{\kappa_{r+1}-1}q^{\kappa}=0$, otherwise $\kappa_r<\kappa_{r+1}$ and $\kappa_r,\ldots,\kappa_{r+1}-1$ are precisely the indices $\kappa$ satisfying $\lfloor\kappa\psi/\fBtwee\rfloor=r$. From (\ref{firstsumc3}--\ref{secondsumc3}), it now follows that \begin{align*} &(q-1)(\beta-1)\sum_{k=0}^{\fCtwee-1}q^{k-\fBC\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}\beta^{\left\lfloor\frac{k\psi}{\fCtwee}\right\rfloor}+(q-1)(q^{\fBC}-\beta)\sum_{\kappa=0}^{\fBtwee-1}q^{\kappa}\beta^{\left\lfloor\frac{\kappa\psi}{\fBtwee}\right\rfloor}\\ &\quad=(q^{\fBC}-1)\left[(q-1)\sum_{r=0}^{\psi-1}\beta^r\sum_{\kappa=\kappa_r}^{\kappa_{r+1}-1}q^{\kappa}+(\beta-1)\sum_{r=0}^{\psi-1}\beta^rq^{\kappa_{r+1}}\right]\\ &\quad=(q^{\fBC}-1)\left[\sum_r\beta^rq^{\kappa_{r+1}}-\sum_r\beta^rq^{\kappa_r}+\sum_r\beta^{r+1}q^{\kappa_{r+1}}-\sum_r\beta^rq^{\kappa_{r+1}}\right]\\ &\quad=(q^{\fBC}-1)(q^{\fBtwee}\beta^{\psi}-1). \end{align*} Having verified that the calculations above make sense even if $\kappa_r=\kappa_{r+1}$ for some $r$, we achieve \eqref{finalcheckcas3dn} and therefore conclude Case~$n^{\ast}\nmid n$. This ends the proof of the main theorem in Case~III. \section{Case~IV: exactly two facets of \Gf\ contribute to $s_0$, and these two facets are both non-compact $B_1$-facets with respect to a same variable and have an edge in common} \subsection{Figure and notations} Let us assume that the two facets $\tau_0$ and $\tau_1$ contributing to $s_0$ are both $B_1$-facets with respect to the variable $z$. Note that $\tau_0$ and $\tau_1$ cannot be non-compact for the same variable unless they coincide. Therefore, we may assume that $\tau_0$ is non-compact for $x$, while $\tau_1$ is non-compact for $y$, and that $\tau_0$ and $\tau_1$ share their unique compact edge $[AB]$. Here $A(x_A,y_A,0)$ and $B(x_B,y_B,1)\in\Zplus^3$ denote the common vertices of $\tau_0$ and $\tau_1$ in the $xy$-plane and at \lq height\rq\ one, respectively. The situation is shown in Figure~\ref{figcase4}. 
\begin{figure}
\centering
% Drawing code (PSTricks) not reproduced here. Panel (a): the non-compact $B_1$-facets $\tau_0$ and $\tau_1$, their subfaces and neighbor facets $\tau_2,\tau_3,$ and $\tau_4$. Panel (b): the relevant cones $\Delta_A$, $\delta_1$, $\delta_2$, $\delta_3$, $\Delta_{[AB]}$, $\Delta_{\tau_0}$, $\Delta_{\tau_1}$, $\Delta_{l_x}$, $\Delta_{l_y}$, $\Delta_{\tau_2}$, $\Delta_{\tau_3}$, and $\Delta_{\tau_4}$ associated to the relevant faces of \Gf.
\caption{Case IV: the only facets contributing to $s_0$ are the non-compact $B_1$-facets $\tau_0$ and $\tau_1$}
\label{figcase4}
\end{figure}
If we put $\overrightarrow{AB}(x_B-x_A,y_B-y_A,1)=(\alpha,\beta,1)$ as usual, then $v_0(0,1,-\beta)$ and $v_1(1,0,-\alpha)$ are the unique primitive vectors in $\Zplus^3$ perpendicular to $\tau_0$ and $\tau_1$, respectively, while equations for the affine hulls of $\tau_0$ and $\tau_1$ are provided by
\begin{equation*}
\aff(\tau_0)\leftrightarrow y-\beta z=y_A\qquad\text{and}\qquad\aff(\tau_1)\leftrightarrow x-\alpha z=x_A.
\end{equation*}
Necessarily, we have that $\alpha,\beta<0$; i.e., $x_B<x_A$ and $y_B<y_A$. Given that the numerical data associated to $\tau_0$ and $\tau_1$ are $(m(v_0),\sigma(v_0))=(y_A,1-\beta)$ and $(m(v_1),\sigma(v_1))=(x_A,1-\alpha)$, respectively, we assume
\begin{equation*}
\Re(s_0)=\frac{\beta-1}{y_A}=\frac{\alpha-1}{x_A}\quad\ \ \text{and}\ \ \quad\Im(s_0)=\frac{2n\pi}{\gcd(x_A,y_A)\log p}\quad\ \ \text{for some $n\in\Z$.}
\end{equation*}
As indicated in Figure~\ref{figcase4}, we denote by $\tau_2$ and $\tau_3$ the non-compact facets of \Gf\ sharing with $\tau_1$ and $\tau_0$, respectively, a half-line with endpoint $B$, and by $\tau_4$ the facet lying in the $xy$-plane. Primitive vectors in $\Zplus^3$ perpendicular to $\tau_2,\tau_3,\tau_4$ will be denoted
\begin{equation*}
v_2(a_2,0,c_2),\quad v_3(0,b_3,c_3),\quad v_4(0,0,1),
\end{equation*}
respectively, and equations for the affine supports of these facets are given by
\begin{alignat*}{3}
\aff(\tau_2)&\leftrightarrow&\ a_2x&+c_2&z&=m_2,\\
\aff(\tau_3)&\leftrightarrow&\ b_3y&+c_3&z&=m_3,\\
\aff(\tau_4)&\leftrightarrow&\ & &z&=0
\end{alignat*}
for certain $m_2,m_3\in\Zplus$. Finally, the numerical data for $\tau_2,\tau_3,$ and $\tau_4$ are $(m_2,\sigma_2)$, $(m_3,\sigma_3),$ and $(0,1)$, respectively, with $\sigma_2=a_2+c_2$ and $\sigma_3=b_3+c_3$.
\subsection{The candidate pole $s_0$ and the contributions to its residues}
Again we want to prove that $s_0$ is not a pole of \Zof.
Since $s_0$ has expected order two as a candidate pole of \Zof, in order to do this, we will, as in Case~III, show that \begin{align*} R_2&=\lim_{s\to s_0}\left(p^{1-\beta+y_As}-1\right)\left(p^{1-\alpha+x_As}-1\right)\Zof(s)\qquad\text{and}\\ R_1&=\lim_{s\to s_0}\frac{d}{ds}\left[\left(p^{1-\beta+y_As}-1\right)\left(p^{1-\alpha+x_As}-1\right)\Zof(s)\right] \end{align*} both equal zero. In this case the (compact) faces contributing to $s_0$ are $A,B,$ and $[AB]$; i.e., we may in the above expressions for $R_2$ and $R_1$ replace $\Zof(s)$ by \begin{equation*} \sum_{\tau=A,B,[AB]}L_{\tau}(s)S(\Dtu)(s). \end{equation*} Vertex $A$ is exclusively contained in the facets $\tau_0,\tau_1,$ and $\tau_4$; its associated cone $\Delta_A$ is therefore simplicial. Vertex $B$, on the other hand, is contained in at least the facets $\tau_0,\tau_1,\tau_2,$ and $\tau_3$; hence $\Delta_B$ is certainly not simplicial. However, if we consider the cones $\delta_1,\delta_2,\delta_3$ defined below as members of a simplicial subdivision of $\Delta_B$, the relevant contributions to $s_0$ come from the simplicial cones \begin{align*} \DA&=\cone(v_0,v_1,v_4),&\delta_1&=\cone(v_0,v_1,v_3),&\DAB&=\cone(v_0,v_1).\\ & &\delta_2&=\cone(v_1,v_3), & & \\ & &\delta_3&=\cone(v_1,v_2,v_3),& & \end{align*} This way we find, similarly to Case~III, that $R_2$ and $R_1$ are explicitly given by \begin{gather*} \begin{aligned} R_2&=L_A(s_0)\frac{\Sigma(\Delta_A)(s_0)}{p-1} +L_B(s_0)\frac{\Sigma(\delta_1)(s_0)}{p^{\sigma_3+m_3s_0}-1} +L_{[AB]}(s_0)\Sigma(\Delta_{[AB]})(s_0),\\ R_1&=L_A'(s_0)\frac{\Sigma(\Delta_A)(s_0)}{p-1} +L_A(s_0)\frac{\Sigma(\Delta_A)'(s_0)}{p-1} +L_B'(s_0)\frac{\Sigma(\delta_1)(s_0)}{p^{\sigma_3+m_3s_0}-1} \end{aligned}\\ +L_B(s_0)\frac{\Sigma(\delta_1)'(s_0)}{p^{\sigma_3+m_3s_0}-1} -L_B(s_0)\frac{m_3(\log p)p^{\sigma_3+m_3s_0}\Sigma(\delta_1)(s_0)}{\Fdrie^2}\\ +L_B(s_0)\frac{y_A(\log p)\Sigma(\delta_2)(s_0)}{p^{\sigma_3+m_3s_0}-1} +L_B(s_0)\frac{y_A(\log p)\Sigma(\delta_3)(s_0)}{\Ftwee\Fdrie}\\ \hspace{.365\textwidth}+L_{[AB]}'(s_0)\Sigma(\Delta_{[AB]})(s_0) +L_{[AB]}(s_0)\Sigma(\Delta_{[AB]})'(s_0). \end{gather*} \subsection{Towards simplified formulas for $R_2$ and $R_1$} \subsubsection{The factors $L_{\tau}(s_0)$ and $L_{\tau}'(s_0)$} Since $N_A=N_B=0$ and $N_{[AB]}=(p-1)^2$, we obtain \begin{gather*} L_A(s_0)=L_B(s_0)=\left(\frac{p-1}{p}\right)^3,\qquad L_A'(s_0)=L_B'(s_0)=0,\\ \begin{aligned} L_{[AB]}(s_0)&=\left(\frac{p-1}{p}\right)^3-\left(\frac{p-1}{p}\right)^2\frac{p^{s_0}-1}{p^{s_0+1}-1},\qquad\text{and}\\ L_{[AB]}'(s_0)&=-(\log p)\left(\frac{p-1}{p}\right)^3\frac{p^{s_0+1}}{\bigl(p^{s_0+1}-1\bigr)^2}. 
\end{aligned} \end{gather*} \subsubsection{Cone multiplicities} We calculate the multiplicities of the five contributing simplicial cones, as well as the multiplicities $\mu_x$ and $\mu_y$ of the cones associated to the non-compact edges $l_x=\tau_0\cap\tau_3$ and $l_y=\tau_1\cap\tau_2$ (see Figure~\ref{figcase4}): \begin{gather*} \mult\Delta_A=\#H(v_0,v_1,v_4)= \begin{Vmatrix} 0&1&-\beta\\1&0&-\alpha\\0&0&1 \end{Vmatrix}=1,\\ \begin{alignedat}{6} \mu_x&=\mult\Delta_{l_x}&&=\#H(v_0,v_3)&&= \begin{Vmatrix} 1&-\beta\\b_3&c_3 \end{Vmatrix}&&=- \begin{vmatrix} 1&-\beta\\b_3&c_3 \end{vmatrix}&&=-\beta b_3-c_3&&>0,\\ \mu_y&=\mult\Delta_{l_y}&&=\#H(v_1,v_2)&&= \begin{Vmatrix} 1&-\alpha\\a_2&c_2 \end{Vmatrix}&&=- \begin{vmatrix} 1&-\alpha\\a_2&c_2 \end{vmatrix}&&=-\alpha a_2-c_2&&>0, \end{alignedat}\\ \begin{alignedat}{2} \mult\delta_1&=\#H(v_0,v_1,v_3)&&= \begin{Vmatrix} 0&1&-\beta\\1&0&-\alpha\\0&b_3&c_3 \end{Vmatrix}= \begin{Vmatrix} 1&-\beta\\b_3&c_3 \end{Vmatrix}=\mu_x,\\ \mu_3=\mult\delta_3&=\#H(v_1,v_2,v_3)&&= \begin{Vmatrix} 1&0&-\alpha\\a_2&0&c_2\\0&b_3&c_3 \end{Vmatrix}=b_3 \begin{Vmatrix} 1&-\alpha\\a_2&c_2 \end{Vmatrix}=b_3\mu_y, \end{alignedat}\\ \begin{alignedat}{2} \mult\delta_2&=\#H(v_1,v_3)&&=\gcd(b_3,c_3,-\alpha b_3)=1,\\ \mult\Delta_{[AB]}&=\#H(v_0,v_1)&&=\gcd(1,-\beta,-\alpha)=1. \end{alignedat} \end{gather*} \subsubsection{The sums $\Sigma(\cdot)(s_0)$ and $\Sigma(\cdot)'(s_0)$} Because the corresponding multiplicities are one, we have that \begin{gather*} \Sigma(\Delta_A)(s_0)=\Sigma(\delta_2)(s_0)=\Sigma(\Delta_{[AB]})(s_0)=1\qquad\text{and}\\ \Sigma(\Delta_A)'(s_0)=\Sigma(\Delta_{[AB]})'(s_0)=0. \end{gather*} Furthermore, since $H(v_0,v_3)\subseteq H(v_0,v_1,v_3)$ and \begin{equation*} \mu_x=\#H(v_0,v_3)=\#H(v_0,v_1,v_3), \end{equation*} we may put \begin{gather*} H_x=H(v_0,v_3)=H(v_0,v_1,v_3),\\ \Sigma_x=\Sigma(\delta_1)(s_0)=\sum\nolimits_{h\in H_x}p^{\sigma(h)+m(h)s_0}=\sum\nolimits_{h\in H_x}p^{w\cdot h},\qquad\text{and}\\ \Sigma_x'=\Sigma(\delta_1)'(s_0)=\dds{\sum\nolimits_{h\in H_x}p^{\sigma(h)+m(h)s}}=(\log p)\sum\nolimits_{h\in H_x}m(h)p^{w\cdot h}, \end{gather*} with $w=(1,1,1)+s_0(x_B,y_B,1)\in\C^3$. Finally we denote \begin{gather*} H_y=H(v_1,v_2),\qquad H_3=H(v_1,v_2,v_3),\qquad\text{and}\\ \Sigma_3=\Sigma(\delta_3)(s_0)=\sum\nolimits_{h\in H_3}p^{\sigma(h)+m(h)s_0}=\sum\nolimits_{h\in H_3}p^{w\cdot h}. \end{gather*} \subsubsection{New formulas for the residues} If we put \begin{gather*} R_2=\left(\frac{p-1}{p}\right)^3R_2',\qquad R_1=(\log p)\left(\frac{p-1}{p}\right)^3R_1',\\ F_2=p^{\sigma_2+m_2s_0}-1,\qquad F_3=p^{\sigma_3+m_3s_0}-1,\qquad\text{and}\qquad q=p^{-s_0-1}, \end{gather*} the observations above yield \begin{gather} R_2'=\frac{1}{1-q}+\frac{\Sigma_x}{F_3}\qquad\text{and}\label{formR2accasevier}\\ R_1'=-\frac{q}{(1-q)^2}+\frac{\Sigma_x'}{(\log p)F_3}-\frac{m_3(F_3+1)\Sigma_x}{F_3^2}+\frac{y_A}{F_3}+\frac{y_A\Sigma_3}{F_2F_3}.\label{formR1accasevier} \end{gather} We shall prove that $R_2'=R_1'=0$. 
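Before doing so, we record a small numerical illustration of the multiplicity computations above; the values are purely illustrative and are not tied to a particular polynomial $f$. Taking, for instance, $\beta=-3$, $b_3=2$, and $c_3=1$, so that $v_0=(0,1,3)$ and $v_3=(0,2,1)$, one finds
\begin{equation*}
\mu_x=
\begin{Vmatrix}
1&3\\2&1
\end{Vmatrix}
=\lvert1-6\rvert=5=-\beta b_3-c_3>0,
\end{equation*}
and, independently of $\alpha$,
\begin{equation*}
\mult\delta_1=
\begin{Vmatrix}
0&1&3\\1&0&-\alpha\\0&2&1
\end{Vmatrix}
=
\begin{Vmatrix}
1&3\\2&1
\end{Vmatrix}
=5=\mu_x,
\end{equation*}
in agreement with the formulas above.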
\subsection{Some vector identities and their consequences} Given the coordinates of $v_i$; $i=0,\ldots,3$; one easily checks that\footnote{As in the previous cases, the first two identities arise from $(\adj M)M=(\det M)I$ for $M=\begin{psmallmatrix}1&-\beta\\b_3&c_3\end{psmallmatrix},\begin{psmallmatrix}1&-\alpha\\a_2&c_2\end{psmallmatrix}$ with $\det M=\mu_x,\mu_y$, respectively, while the third one follows immediately from the other two.} \begin{align} b_3v_0-v_3&=(0,0,\mu_x),\label{id1c4}\\ a_2v_1-v_2&=(0,0,\mu_y),\qquad\text{and}\label{id2c4}\\ -\mu_3v_0+a_2\mu_xv_1-\mu_xv_2+\mu_yv_3&=(0,0,0).\label{id3c4} \end{align} Considering dot products with $w=(1,1,1)+s_0(x_B,y_B,1)$, it follows from \eqref{id1c4} and \eqref{id2c4} that \begin{equation}\label{pwvdriemuxc4} \frac{-b_3(w\cdot v_0)+w\cdot v_3}{\mu_x}=\frac{-a_2(w\cdot v_1)+w\cdot v_2}{\mu_y}=-s_0-1, \end{equation} whereas making the dot product with $B(x_B,y_B,1)$ on both sides of \eqref{id1c4} yields \begin{equation}\label{interpbdrieyammdriec4} y_Ab_3-m_3=\mu_x. \end{equation} Other consequences of (\ref{id1c4}--\ref{id3c4}) include \begin{align} \frac{-b_3}{\mu_x}v_0+\frac{1}{\mu_x}v_3&=(0,0,-1)\in\Z^3,\label{bp1c4}\\ \frac{-a_2}{\mu_y}v_1+\frac{1}{\mu_y}v_2&=(0,0,-1)\in\Z^3,\qquad\text{and}\label{bp2c4}\\ \frac{a_2\mu_x}{\mu_3}v_1+\frac{-\mu_x}{\mu_3}v_2+\frac{1}{b_3}v_3&=v_0\in\Z^3.\label{bp3c4} \end{align} \subsection{Points of $H_x,H_y,$ and $H_3$} It follows from \eqref{bp1c4} that the $\mu_x$ points of $H_x$ are given by \begin{alignat}{2} \left\{\frac{-jb_3}{\mu_x}\right\}v_0&+\frac{j}{\mu_x}v_3;&\qquad&j=0,\ldots,\mu_x-1;\label{pointshxc4}\\ \intertext{while it follows from \eqref{bp2c4} that the $\mu_y$ points of $H_y=H(v_1,v_2)$ are} \left\{\frac{-ia_2}{\mu_y}\right\}v_1&+\frac{i}{\mu_y}v_2;&&i=0,\ldots,\mu_y-1.\notag \end{alignat} Note that $b_3$ and $a_2$ are, as expected, coprime to $\mu_x$ and $\mu_y$, respectively.\footnote{This follows from $\mu_x=-\beta b_3-c_3$, $\mu_y=-\alpha a_2-c_2$, and the primitivity of $v_2$ and $v_3$.} If we consider $H_3=H(v_1,v_2,v_3)$ in the usual way as an additive group with subgroup\footnote{Recall that $H(v_1,v_3)$ is the trivial subgroup of $H_3$.} $H_y=H_y+H(v_1,v_3)$ of index $b_3$, then we see from \eqref{bp2c4} and \eqref{bp3c4} that the points \begin{equation*} \left\{\frac{-a_2\{-k\mu_x\}_{b_3}}{\mu_3}\right\}v_1+\frac{\{-k\mu_x\}_{b_3}}{\mu_3}v_2+\frac{k}{b_3}v_3\in H_3;\qquad k=0,\ldots,b_3-1; \end{equation*} can serve as representatives for the $b_3$ cosets of $H_y$ in $H_3$. Hence a complete list of the $\mu_3=b_3\mu_y$ points of $H_3$ is provided by \begin{multline}\label{pointsh3c4} \left\{\frac{-a_2(ib_3+\{-k\mu_x\}_{b_3})}{\mu_3}\right\}v_1+\frac{ib_3+\{-k\mu_x\}_{b_3}}{\mu_3}v_2+\frac{k}{b_3}v_3;\\ i=0,\ldots,\mu_y-1;\quad k=0,\ldots,b_3-1. \end{multline} These descriptions should allow us to find expressions for $\Sigma_x,\Sigma_x',$ and $\Sigma_3$ in the next subsection. 
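By way of illustration (again with purely illustrative values), if $\mu_x=5$ and $b_3=2$, then \eqref{pointshxc4} lists the five points
\begin{equation*}
0,\quad\tfrac{3}{5}v_0+\tfrac{1}{5}v_3,\quad\tfrac{1}{5}v_0+\tfrac{2}{5}v_3,\quad\tfrac{4}{5}v_0+\tfrac{3}{5}v_3,\quad\tfrac{2}{5}v_0+\tfrac{4}{5}v_3;
\end{equation*}
these are precisely the multiples, reduced modulo the lattice $\Z v_0+\Z v_3$, of the point $\tfrac{3}{5}v_0+\tfrac{1}{5}v_3$ obtained for $j=1$, and, as expected, $b_3=2$ is coprime to $\mu_x=5$.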
\subsection{Formulas for $\Sigma_x,\Sigma_x',$ and $\Sigma_3$} If for $h\in H_x=H(v_0,v_3)$, we denote by $(h_0,h_3)$ the coordinates of $h$ with respect to the basis $(v_0,v_3)$, then by \eqref{pwvdriemuxc4}, \eqref{pointshxc4}, and $p^{w\cdot v_0}=1$, we have \begin{equation}\label{formsigmaxcasevier} \Sigma_x=\sum_{h\in H_x}p^{w\cdot h}=\sum_{j=0}^{\mu_x-1}\Bigl(p^{\frac{-b_3(w\cdot v_0)+w\cdot v_3}{\mu_x}}\Bigr)^j=\sum_jq^j=\frac{F_3}{q-1}, \end{equation} whereas \begin{align} \frac{\Sigma_x'}{\log p}&=\sum_{h\in H_x}m(h)p^{w\cdot h}\notag\\ &=\sum_h(h_0y_A+h_3m_3)p^{w\cdot h}\notag\\ &=\sum_{j=0}^{\mu_x-1}\left(y_A\left\{\frac{-jb_3}{\mu_x}\right\} +m_3\frac{j}{\mu_x}\right)\Bigl(p^{\frac{-b_3(w\cdot v_0)+w\cdot v_3}{\mu_x}}\Bigr)^j\notag\\ &=y_A\sum_j\left\{\frac{-jb_3}{\mu_x}\right\}q^j+\frac{m_3}{\mu_x}\sum_jjq^j.\label{formsigmaxaccaseviertemp} \end{align} If $\mu_x=1$, then clearly $\Sigma_x'=0$. Let us find an expression for $\Sigma_x'$ in the complementary case. So from now on assume that $\mu_x>1$. Write $b_3$ as $b_3=t\mu_x+\overline{b_3}$ with $t\in\Zplus$ and $\overline{b_3}=\{b_3\}_{\mu_x}\in\{1,\ldots,\mu_x-1\}$; note that by the coprimality of $b_3$ and $\mu_x$, we have $\gcd(\overline{b_3},\mu_x)=1$ and hence $\overline{b_3}\neq0$. Furthermore, put \begin{equation}\label{defjkbarcase4} j_{\overline{k}}=\min\left\{j\in\Zplus\;\middle\vert\;\left\lfloor\frac{j\overline{b_3}}{\mu_x}\right\rfloor=\overline{k}\right\}=\left\lceil\frac{\overline{k}\mu_x}{\overline{b_3}}\right\rceil;\qquad \overline{k}=0,\ldots,\overline{b_3}; \end{equation} yielding \begin{equation*} 0=j_0<j_1<\cdots<j_{\overline{b_3}-1}<j_{\overline{b_3}}=\mu_x. \end{equation*} Then, proceeding as in Case~III (Subsection~\ref{studysigmaeenacc3}), we write \eqref{formsigmaxaccaseviertemp} as \begin{align*} &\,\frac{\Sigma_x'}{\log p}\\ &=y_A\sum_{j=0}^{\mu_x-1}\left\{\frac{-j\overline{b_3}}{\mu_x}\right\}q^j+\frac{m_3[q^{\mu_x}(\mu_x q-\mu_x-q)+q]}{\mu_x(q-1)^2}\\ &=\frac{y_A}{1-q}\Biggl(\sum_{\overline{k}=0}^{\overline{b_3}-1}q^{j_{\overline{k}}}-\frac{\overline{b_3}F_3}{\mu_x(1-q^{-1})}\Biggr)-y_A +\frac{m_3}{1-q}\left(-(F_3+1)+\frac{F_3}{\mu_x(1-q^{-1})}\right)\\ &=\frac{F_3}{1-q}\Biggl(\frac{y_A}{F_3}\sum_{\overline{k}}q^{j_{\overline{k}}}-\frac{y_A\overline{b_3}-m_3}{\mu_x(1-q^{-1})} -\frac{m_3(F_3+1)}{F_3}\Biggr)-y_A. \end{align*} Finally, if we use that $y_Ab_3-m_3=\mu_x$ (cfr.\ \eqref{interpbdrieyammdriec4}) and $b_3=t\mu_x+\overline{b_3}$, we obtain \begin{equation}\label{formsigmaxaccasevier} \frac{\Sigma_x'}{(\log p)F_3}=\frac{1}{1-q}\Biggl(\frac{y_A}{F_3}\sum_{\overline{k}=0}^{\overline{b_3}-1}q^{j_{\overline{k}}}+\frac{y_At-1}{1-q^{-1}} -\frac{m_3(F_3+1)}{F_3}\Biggr)-\frac{y_A}{F_3}\qquad\text{(if $\mu_x>1$).} \end{equation} Let us now calculate $\Sigma_3$. Using (\ref{pwvdriemuxc4}, \ref{pointsh3c4}) and $p^{w\cdot v_0}=p^{w\cdot v_1}=1$, we find \begin{align*} \Sigma_3&=\sum_{h\in H_3}p^{w\cdot h}\\ &=\sum_hp^{h_1(w\cdot v_1)+h_2(w\cdot v_2)+h_3(w\cdot v_3)}\\ &=\sum_{i=0}^{\mu_y-1}\sum_{k=0}^{b_3-1}p^{\frac{-a_2\left(ib_3+\{-k\mu_x\}_{b_3}\right)}{\mu_3}(w\cdot v_1)+\frac{ib_3+\{-k\mu_x\}_{b_3}}{\mu_3}(w\cdot v_2)+\frac{k}{b_3}(w\cdot v_3)}\\ &=\sum_i\Bigl(p^{\frac{-a_2(w\cdot v_1)+w\cdot v_2}{\mu_y}}\Bigr)^i\sum_kp^{\frac{-a_2(w\cdot v_1)+w\cdot v_2}{\mu_y}\left\{\frac{-k\mu_x}{b_3}\right\}+\frac{-b_3(w\cdot v_0)+w\cdot v_3}{\mu_x}\frac{k\mu_x}{b_3}}\\ &=\sum_iq^i\sum_kp^{(-s_0-1)\left(\left\{\frac{-k\mu_x}{b_3}\right\}+\frac{k\mu_x}{b_3}\right)}. 
\end{align*} Since $\mu_x$ and $b_3$ are coprime, one has $k\mu_x/b_3\notin\Z$ and $\{-k\mu_x/b_3\}=1-\{k\mu_x/b_3\}$ for $k$ not a multiple of $b_3$. Hence
\begin{equation*}
\left\{-\frac{k\mu_x}{b_3}\right\}+\frac{k\mu_x}{b_3}=
\begin{cases}
0,&\text{if $k=0$};\\
1+\left\lfloor\frac{k\mu_x}{b_3}\right\rfloor\in\Z,&\text{if $k\in\{1,\ldots,b_3-1\}$};
\end{cases}
\end{equation*}
and we obtain
\begin{equation}\label{formuleSigmadriecase4}
\Sigma_3=\frac{F_2}{q-1}\Biggl(1+q\sum_{k=1}^{b_3-1}q^{\left\lfloor\frac{k\mu_x}{b_3}\right\rfloor}\Biggr),
\end{equation}
with the understanding that the empty sum equals zero in case $b_3=1$.
\subsection{Proof of $R_2'=R_1'=0$}
As it follows immediately from \eqref{formR2accasevier} and \eqref{formsigmaxcasevier} that $R_2'=0$, we can focus on $R_1'$ from now on. Let us first assume that $\mu_x=1$. In this case, we found that $\Sigma_x'=0$, while it follows from \eqref{formuleSigmadriecase4} that
\begin{equation*}
\Sigma_3=\frac{F_2}{q-1}(1+(b_3-1)q);
\end{equation*}
furthermore, note that $F_3=q-1$ and hence $\Sigma_x=1$, while $y_Ab_3-m_3=1$ by \eqref{interpbdrieyammdriec4}. With these observations, \eqref{formR1accasevier} easily yields $R_1'=0$. From now on, suppose that $\mu_x>1$ and thus that $\overline{b_3}>0$. If we then fill in (\ref{formsigmaxcasevier}, \ref{formsigmaxaccasevier}, \ref{formuleSigmadriecase4}) in \eqref{formR1accasevier}, one sees that proving $R_1'=0$ eventually boils down to proving that
\begin{equation}\label{finalcheckcasevier}
\sum_{k=1}^{b_3-1}q^{\left\lfloor\frac{k\mu_x}{b_3}\right\rfloor}=\sum_{\overline{k}=1}^{\overline{b_3}-1}q^{j_{\overline{k}}\,-1}+t\frac{F_3}{q-1},
\end{equation}
whereby the sum over $\overline{k}$ is again understood to be zero if $\overline{b_3}=1$. Let us do this now. Recall that $b_3=t\mu_x+\overline{b_3}$ with $t\in\Zplus$ and $\overline{b_3}\in\{1,\ldots,\mu_x-1\}$. So if $t=0$, we have $b_3=\overline{b_3}$, and by \eqref{defjkbarcase4} and the coprimality of $\overline{b_3}$ and $\mu_x$, we then find
\begin{equation*}
\sum_{k=1}^{b_3-1}q^{\left\lfloor\frac{k\mu_x}{b_3}\right\rfloor}=\sum_{\overline{k}=1}^{\overline{b_3}-1}q^{\left\lfloor\frac{\overline{k}\mu_x}{\overline{b_3}}\right\rfloor}=\sum_{\overline{k}}q^{\left\lceil\frac{\overline{k}\mu_x}{\overline{b_3}}\right\rceil-1}=\sum_{\overline{k}}q^{j_{\overline{k}}\,-1},
\end{equation*}
which agrees with \eqref{finalcheckcasevier} for $t=0$. In what follows, we assume that $t>0$ and hence that $b_3>\mu_x$. Define the numbers
\begin{equation*}
k_j=\min\left\{k\in\Zplus\;\middle\vert\;\left\lfloor\frac{k\mu_x}{b_3}\right\rfloor=j\right\}=\left\lceil\frac{jb_3}{\mu_x}\right\rceil;\qquad j=0,\ldots,\mu_x;
\end{equation*}
and note that
\begin{equation*}
0=k_0<k_1<\cdots<k_{\mu_x-1}<k_{\mu_x}=b_3.
\end{equation*}
This gives rise to
\begin{equation*}
\sum_{k=1}^{b_3-1}q^{\left\lfloor\frac{k\mu_x}{b_3}\right\rfloor}=\sum_{j=0}^{\mu_x-1}\sum_{k=k_j}^{k_{j+1}-1}q^j-1=\sum_{\overline{k}=0}^{\overline{b_3}-1}\sum_{j=j_{\overline{k}}}^{j_{\overline{k}+1}-1}(k_{j+1}-k_j)q^j-1.
\end{equation*} Finally, observe that \begin{equation*} k_j=\left\lceil\frac{jb_3}{\mu_x}\right\rceil=\left\lceil\frac{j(t\mu_x+\overline{b_3})}{\mu_x}\right\rceil=jt+\left\lceil\frac{j\overline{b_3}}{\mu_x}\right\rceil=jt+\left\lfloor\frac{j\overline{b_3}}{\mu_x}\right\rfloor+1-[j=0]-[j=\mu_x] \end{equation*} for $j\in\{0,\ldots,\mu_x\}$; hence for $0\leqslant\overline{k}\leqslant\overline{b_3}-1$ and $j_{\overline{k}}\leqslant j\leqslant j_{\overline{k}+1}-1$, we have \begin{align*} &\;k_{j+1}-k_j\\ &=\bigl((j+1)t+(\overline{k}+[j+1=j_{\overline{k}+1}])+1-[j+1=\mu_x]\bigr)-\bigl(jt+\overline{k}+1-[j=0]\bigr)\\ &=t+[j=j_{\overline{k}+1}-1]+[j=0]-[j=\mu_x-1]. \end{align*} Therefore, \begin{align*} \sum_{k=1}^{b_3-1}q^{\left\lfloor\frac{k\mu_x}{b_3}\right\rfloor}&=\sum_{\overline{k}=0}^{\overline{b_3}-1}\sum_{j=j_{\overline{k}}}^{j_{\overline{k}+1}-1}(k_{j+1}-k_j)q^j-1\\ &=\sum_{\overline{k}}\sum_j\bigl(t+[j=j_{\overline{k}+1}-1]+[j=0]-[j=\mu_x-1]\bigr)q^j-1\\ &=t\sum_{j=0}^{\mu_x-1}q^j+\sum_{\overline{k}=0}^{\overline{b_3}-1}q^{j_{\overline{k}+1}-1}+q^0-q^{\mu_x-1}-1\\ &=\sum_{\overline{k}=1}^{\overline{b_3}-1}q^{j_{\overline{k}}\,-1}+t\frac{F_3}{q-1}, \end{align*} which agrees with \eqref{finalcheckcasevier}. This concludes Case~IV. \section{Case~V: exactly two facets of \Gf\ contribute to $s_0$; one of them is a non-compact $B_1$-facet, the other one a $B_1$-simplex; these facets are $B_1$ with respect to a same variable and have an edge in common} \subsection{Figure and notations} We assume that the two facets $\tau_0$ and $\tau_1$ contributing to $s_0$ are both $B_1$-facets with respect to the variable $z$. Let $\tau_0$ be non-compact, say for the variable $x$, and let $\tau_1$ be a $B_1$-simplex. Facet $\tau_0$ shares its unique compact edge $[AC]$ with $\tau_1$. We denote the vertices of $\tau_0$ and $\tau_1$ and their coordinates by \begin{equation*} A(x_A,y_A,0),\quad B(x_B,y_B,0),\quad C(x_C,y_C,1) \end{equation*} and the neighbor facets of $\tau_0$ and $\tau_1$ by $\tau_2,\tau_3,\tau_4$ as indicated in Figure~\ref{figcase5}. 
\begin{figure}
\centering
% Drawing code (PSTricks) not reproduced here. Panel (a): the non-compact $B_1$-facet $\tau_0$, the $B_1$-simplex $\tau_1$, their subfaces and neighbor facets $\tau_2,\tau_3,\tau_4$. Panel (b): the relevant cones $\Delta_A$, $\delta_B$, $\delta_1$, $\delta_2$, $\delta_3$, $\Delta_{\tau_1}$, $\Delta_{[AB]}$, $\Delta_{[AC]}$, $\Delta_{[BC]}$, $\Delta_{\tau_0}$, $\Delta_l$, $\Delta_{\tau_2}$, $\Delta_{\tau_3}$, and $\Delta_{\tau_4}$ associated to the relevant faces of \Gf.
\caption{Case V: the only facets contributing to $s_0$ are the non-compact $B_1$-facet $\tau_0$ and the $B_1$-simplex $\tau_1$}
\label{figcase5}
\end{figure}
Let us put
\begin{gather*}
\begin{alignedat}{10}
&\overrightarrow{AC}&&(x_C&&-x_A&&,y_C&&-y_A&&,1&&)&&=(\aA&&,\bA&&,1),\\
&\overrightarrow{BC}&&(x_C&&-x_B&&,y_C&&-y_B&&,1&&)&&=(\aB&&,\bB&&,1),
\end{alignedat}\\
\text{and}\qquad\,\fAB=\gcd(x_B-x_A,y_B-y_A)
\end{gather*}
as before. The unique primitive vector $v_0\in\Zplus^3$ perpendicular to $\tau_0$ is given by $v_0(0,1,-\bA)$; such vectors for the other relevant facets $\tau_1,\tau_2,\tau_3,\tau_4$ will be denoted
\begin{equation*}
v_1(a_1,b_1,c_1),\quad v_2(a_2,b_2,c_2),\quad v_3(0,b_3,c_3),\quad v_4(0,0,1),
\end{equation*}
respectively. Equations for the affine supports of $\tau_i$; $i=0,\ldots,4$; are given by
\begin{alignat*}{3}
\aff(\tau_0)&\leftrightarrow&\; y&\;-\;&\bA z&=y_A,\\
\aff(\tau_1)&\leftrightarrow&\;a_1x+b_1y&\;+\;& c_1z&=m_1,\\
\aff(\tau_2)&\leftrightarrow&\;a_2x+b_2y&\;+\;& c_2z&=m_2,\\
\aff(\tau_3)&\leftrightarrow&\; b_3y&\;+\;& c_3z&=m_3,\\
\aff(\tau_4)&\leftrightarrow&\; & & z&=0
\end{alignat*}
for certain $m_1,m_2,m_3\in\Zplus$, and to these facets we associate the respective numerical data
\begin{equation*}
(y_A,1-\bA),\quad (m_1,\sigma_1),\quad (m_2,\sigma_2),\quad (m_3,\sigma_3),\quad (0,1),
\end{equation*}
with $\sigma_i=a_i+b_i+c_i$; $i=1,2$; and $\sigma_3=b_3+c_3$.
Since we assume that $\tau_0$ and $\tau_1$ both contribute to the candidate pole $s_0$, we have that $p^{1-\bA+y_As_0}=p^{\sigma_1+m_1s_0}=1$; hence \begin{equation*} \Re(s_0)=\frac{\bA-1}{y_A}=-\frac{\sigma_1}{m_1}\quad\ \text{and}\ \quad\Im(s_0)=\frac{2n\pi}{\gcd(y_A,m_1)\log p}\quad\ \text{for some $n\in\Z$.} \end{equation*} \subsection{Contributions to the candidate pole $s_0$} The goal of this section is again to show that both \begin{align*} R_2&=\lim_{s\to s_0}\left(p^{1-\bA+y_As}-1\right)\left(p^{\sigma_1+m_1s}-1\right)\Zof(s)\qquad\text{and}\\ R_1&=\lim_{s\to s_0}\frac{d}{ds}\left[\left(p^{1-\bA+y_As}-1\right)\left(p^{\sigma_1+m_1s}-1\right)\Zof(s)\right] \end{align*} equal zero. The compact faces of \Gf\ contributing to $s_0$ are again the (seven) compact subfaces $A,B,C,[AB],[AC],[BC],$ and $\tau_1$ of the two contributing facets $\tau_0$ and $\tau_1$. Only three of them also contribute to the \lq residue\rq\ $R_2$: $A,C,$ and $[AC]$. If we consider the nine simplicial cones \begin{align*} \Dteen&=\cone(v_1), &\delta_1&=\cone(v_0,v_1,v_3),&\DAB&=\cone(v_1,v_4),\\ \DA&=\cone(v_0,v_1,v_4),&\delta_2&=\cone(v_1,v_3), &\DAC&=\cone(v_0,v_1),\\ \dB&=\cone(v_1,v_2,v_4),&\delta_3&=\cone(v_1,v_2,v_3),&\DBC&=\cone(v_1,v_2), \end{align*} the same approach as in Cases~III and IV leads to the following expressions for $R_2$ and $R_1$: \begin{gather*} \begin{aligned} R_2&=L_A(s_0)\frac{\Sigma(\Delta_A)(s_0)}{p-1} +L_C(s_0)\frac{\Sigma(\delta_1)(s_0)}{p^{\sigma_3+m_3s_0}-1} +L_{[AC]}(s_0)\Sigma(\Delta_{[AC]})(s_0),\\ R_1&=L_A'(s_0)\frac{\Sigma(\Delta_A)(s_0)}{p-1} +L_A(s_0)\frac{\Sigma(\Delta_A)'(s_0)}{p-1} +L_B(s_0)\frac{y_A(\log p)\Sigma(\delta_B)(s_0)}{\Ftwee(p-1)} \end{aligned}\\ +L_C'(s_0)\frac{\Sigma(\delta_1)(s_0)}{p^{\sigma_3+m_3s_0}-1} +L_C(s_0)\frac{\Sigma(\delta_1)'(s_0)}{p^{\sigma_3+m_3s_0}-1}\\ -L_C(s_0)\frac{m_3(\log p)p^{\sigma_3+m_3s_0}\Sigma(\delta_1)(s_0)}{\Fdrie^2} +L_C(s_0)\frac{y_A(\log p)\Sigma(\delta_2)(s_0)}{p^{\sigma_3+m_3s_0}-1}\\ +L_C(s_0)\frac{y_A(\log p)\Sigma(\delta_3)(s_0)}{\Ftwee\Fdrie} +L_{[AB]}(s_0)\frac{y_A(\log p)\Sigma(\Delta_{[AB]})(s_0)}{p-1}\\ +L_{[AC]}'(s_0)\Sigma(\Delta_{[AC]})(s_0) +L_{[AC]}(s_0)\Sigma(\Delta_{[AC]})'(s_0)\\ +L_{[BC]}(s_0)\frac{y_A(\log p)\Sigma(\Delta_{[BC]})(s_0)}{p^{\sigma_2+m_2s_0}-1} +L_{\tau_1}(s_0)y_A(\log p)\Sigma(\Delta_{\tau_1})(s_0). \end{gather*} \subsection{Towards simplified formulas for $R_2$ and $R_1$} \subsubsection{The factors $L_{\tau}(s_0)$ and $L_{\tau}'(s_0)$} In the usual way we obtain \begin{align*} L_A(s_0)=L_B(s_0)=L_C(s_0)&=\left(\frac{p-1}{p}\right)^3,\qquad L_A'(s_0)=L_C'(s_0)=0,\\ L_{[AB]}(s_0)&=\left(\frac{p-1}{p}\right)^3-\frac{(p-1)N}{p^2}\frac{p^{s_0}-1}{p^{s_0+1}-1},\\ L_{[AC]}(s_0)=L_{[BC]}(s_0)&=\left(\frac{p-1}{p}\right)^3-\left(\frac{p-1}{p}\right)^2\frac{p^{s_0}-1}{p^{s_0+1}-1},\\ L_{[AC]}'(s_0)&=-(\log p)\left(\frac{p-1}{p}\right)^3\frac{p^{s_0+1}}{\bigl(p^{s_0+1}-1\bigr)^2},\\ \text{and}\qquad L_{\tau_1}(s_0)&=\left(\frac{p-1}{p}\right)^3-\frac{(p-1)^2-N}{p^2}\frac{p^{s_0}-1}{p^{s_0+1}-1}, \end{align*} with \begin{equation*} N=\#\left\{(x,y)\in(\Fpcross)^2\;\middle\vert\;\overline{f_{[AB]}}(x,y)=0\right\}. \end{equation*} \subsubsection{Cone multiplicities} Let us investigate the multiplicities of the nine contributing simplicial cones. 
As we did before, we shall also consider the multiplicities $\mu_l$ and $\mu_1'$ of the respective simplicial cones $\Delta_l$ and $\delta_1'=\cone(v_0,v_1,v_2)$; the first cone is the cone associated to the half-line $l=\tau_0\cap\tau_3$ (see Figure~\ref{figcase5}), while the second one is a simplicial subcone of $\Delta_C$ that could have been chosen as a member of an alternative subdivision of $\Delta_C$. Proceeding as in the previous cases, we find \begin{gather*} \mult\Delta_{[AB]}=\mult\Delta_{\tau_1}=1,\\ \begin{alignedat}{5} \mu_A&=\mult\Delta_A&&=&\;\#H(v_0,v_1,v_4)&=\mult\Delta_{[AC]}&&=\#H(v_0,v_1)&&=a_1,\\ \mu_B&=\mult\delta_B&&=&\;\#H(v_1,v_2,v_4)&=\mult\Delta_{[BC]}&&=\#H(v_1,v_2)&&=- \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix},\\ &&&&\;\mu_l&=\mult\Delta_l&&=\#H(v_0,v_3)&&=- \begin{vmatrix} 1&-\bA\\b_3&c_3 \end{vmatrix}, \end{alignedat}\\ \begin{alignedat}{4} \mu_1&=\mult\delta_1\ \;&&=\#H(v_0,v_1,v_3)&&= \begin{vmatrix} 0&1&-\bA\\a_1&b_1&c_1\\0&b_3&c_3 \end{vmatrix} &&=-a_1 \begin{vmatrix} 1&-\bA\\b_3&c_3 \end{vmatrix}=\mu_A\mu_l,\\ \mu_1'&=\mult\delta_1'&&=\#H(v_0,v_1,v_2)&&= \begin{vmatrix} 0&1&-\bA\\a_1&b_1&c_1\\a_2&b_2&c_2 \end{vmatrix} &&=\mu_A\mu_B\fAB. \end{alignedat}\quad\ \, \end{gather*} We observe that \begin{equation*} \mu_1=\#H(v_0,v_1,v_3)=\mu_A\mu_l=\#H(v_0,v_1)\#H(v_0,v_3); \end{equation*} i.e., the factor $\phi_{Al}=\mu_1/\mu_A\mu_l$ equals one. Theorem~\ref{algfp}(v) now asserts that \begin{equation*} \mu_2=\mult\delta_2=\#H(v_1,v_3)=\gcd(\mu_A,\mu_l). \end{equation*} For \begin{equation*} \mu_3=\mult\delta_3=\#H(v_1,v_2,v_3)= \begin{vmatrix} a_1&b_1&c_1\\a_2&b_2&c_2\\0&b_3&c_3 \end{vmatrix}>0, \end{equation*} finally, we obtain in a similar way as in Case~III that \begin{align*} \mu_3&=-b_3 \begin{vmatrix} a_1&c_1\\a_2&c_2 \end{vmatrix} +c_3 \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix}\\ &= \begin{vmatrix} a_1&b_1\\a_2&b_2 \end{vmatrix} (b_3\bB+c_3)\\ &=-\mu_B\left(v_3\cdot\overrightarrow{BC}\right)\\ &=\mu_B\left(v_3\cdot\overrightarrow{AB}-v_3\cdot\overrightarrow{AC}\right)\\ &=\mu_B\bigl(v_3\cdot\fAB(-b_1,a_1,0)-v_3\cdot(\aA,\bA,1)\bigr)\\ &=\mu_B\bigl(\fAB\Psi+\mu_l\bigr)\\ &=\mu_B\mu_2\fBtwee, \end{align*} whereby \begin{gather*} \Psi=a_1b_3=b_3\mu_A,\qquad\fBtwee=\fAB\psi+\mu_l',\\ \psi=\frac{\Psi}{\mu_2}=b_3\mu_A'\in\Zplusnul,\qquad\mu_A'=\frac{\mu_A}{\mu_2}\in\Zplusnul,\qquad\text{and}\qquad\mu_l'=\frac{\mu_l}{\mu_2}\in\Zplusnul. \end{gather*} Note that the coprimality of $b_3$ and $c_3$ implies the coprimality of $b_3$ and $\mu_l$. Hence \begin{equation*} \mu_2=\gcd(\mu_A,\mu_l)=\gcd(\Psi,\mu_l) \end{equation*} and $\gcd(\psi,\mu_l')=\gcd(\psi,\fBtwee)=1$. \subsubsection{The sums $\Sigma(\cdot)(s_0)$ and $\Sigma(\cdot)'(s_0)$} We have of course that $\Sigma(\Delta_{[AB]})(s_0)=\Sigma(\Delta_{\tau_1})(s_0)=1$. 
As usual, we denote \begin{alignat*}{3} H_A&=H(v_0,v_1,v_4)&&=H(v_0,v_1),&\qquad\qquad\quad H_1&=H(v_0,v_1,v_3),\\ H_B&=H(v_1,v_2,v_4)&&=H(v_1,v_2),& H_2&=H(v_1,v_3),\\ H_l&=H(v_0,v_3), && & H_3&=H(v_1,v_2,v_3), \end{alignat*} and $w=(1,1,1)+s_0(x_C,y_C,1)\in\C^3$, yielding \begin{gather*} \begin{alignedat}{3} \Sigma_A &=\Sigma(\Delta_A)(s_0) &&=\Sigma(\Delta_{[AC]})(s_0) &&=\sum\nolimits_{h\in H_A}p^{w\cdot h};\\ \Sigma_B &=\Sigma(\delta_B)(s_0) &&=\Sigma(\Delta_{[BC]})(s_0) &&=\sum\nolimits_{h\in H_B}p^{w\cdot h};\\ \Sigma_A'&=\Sigma(\Delta_A)'(s_0)&&=\Sigma(\Delta_{[AC]})'(s_0)&&=(\log p)\sum\nolimits_{h\in H_A}m(h)p^{w\cdot h}; \end{alignedat}\\%[+1ex] \begin{alignedat}{2} \Sigma_l &=\Sigma(\Delta_l)(s_0) &&=\sum\nolimits_{h\in H_l}p^{w\cdot h};\\ \Sigma_i &=\Sigma(\delta_i)(s_0) &&=\sum\nolimits_{h\in H_i}p^{w\cdot h};\qquad i=1,2,3;\\ \Sigma_1'&=\Sigma(\delta_1)'(s_0)&&=(\log p)\sum\nolimits_{h\in H_1}m(h)p^{w\cdot h}. \end{alignedat} \end{gather*} \subsubsection{New formulas for the residues} Let us put \begin{gather*} R_2=\left(\frac{p-1}{p}\right)^3R_2',\qquad R_1=(\log p)\left(\frac{p-1}{p}\right)^3R_1',\\ F_2=p^{\sigma_2+m_2s_0}-1,\qquad F_3=p^{\sigma_3+m_3s_0}-1,\qquad\text{and}\qquad q=p^{-s_0-1}. \end{gather*} Our findings so far lead to the following expressions for $R_2'$ and $R_1'$: \begin{gather} R_2'=\frac{\Sigma_A}{1-q}+\frac{\Sigma_1}{F_3},\label{formR2accasevijf}\\ \begin{multlined}[.8\textwidth] R_1'=\frac{1}{1-q}\left(\frac{\Sigma_A'}{\log p}-\frac{\Sigma_A}{q^{-1}-1}+\frac{y_A\Sigma_B}{F_2}+y_A\right)\\ +\frac{\Sigma_1'}{(\log p)F_3}-\frac{m_3(F_3+1)\Sigma_1}{F_3^2}+\frac{y_A\Sigma_2}{F_3}+\frac{y_A\Sigma_3}{F_2F_3}. \end{multlined}\label{formR1accasevijf} \end{gather} We prove that $R_2'=R_1'=0$. \subsection{Investigation of the sums $\Sigma_{\bullet}$ and $\Sigma_{\bullet}'$} \subsubsection{Vector identities and consequences} The identities that will be useful to us in this case are \begin{align} b_3v_0-v_3&=(0,0,\mu_l),\label{vi1c5}\\ -\mu_Bv_0+a_2v_1-\mu_Av_2&=(0,0,\mu_1'),\qquad\text{and}\label{vi2c5}\\ \Theta v_1-\Psi v_2-\mu_Bv_3&=(0,0,\mu_3),\notag \end{align} whereby $\Theta=a_2b_3$ and $\Psi=a_1b_3$. These give rise to \begin{equation}\label{interpbdrieyammdriec5} y_Ab_3-m_3=\mu_l \end{equation} and to \begin{align} -b_3(w\cdot v_0)+w\cdot v_3&=\mu_l(-s_0-1),\label{dpi1c5}\\ \mu_B(w\cdot v_0)-a_2(w\cdot v_1)+\mu_A(w\cdot v_2)&=\mu_1'(-s_0-1),\qquad\text{and}\label{dpi2c5}\\ -\Theta(w\cdot v_1)+\Psi(w\cdot v_2)+\mu_B(w\cdot v_3)&=\mu_3(-s_0-1).\label{dpi3c5} \end{align} Moreover, they show that \begin{equation}\label{bpsc5} \frac{-b_3}{\mu_l}v_0+\frac{1}{\mu_l}v_3\in\Z^3\quad\text{ and }\quad\frac{-\Theta}{\mu_3}v_1+\frac{\psi}{\mu_B\fBtwee}v_2+\frac{1}{\mu_2\fBtwee}v_3\in\Z^3. \end{equation} (Recall that $\mu_3=\mu_B\mu_2\fBtwee$ and $\Psi=\psi\mu_2$.) \subsubsection{Points of $H_A,H_B,H_l,H_1,H_2,$ and $H_3$} The $\mu_A$ points of $H_A$ are given by \begin{alignat}{2} \left\{\frac{i\xi_A}{\mu_A}\right\}v_0+\frac{i}{\mu_A}v_1;&&\qquad i&=0,\ldots,\mu_A-1;\notag\\ \intertext{or, alternatively, by} \frac{i}{\mu_A}v_0+\left\{\frac{i\xi_A'}{\mu_A}\right\}v_1;&&\qquad i&=0,\ldots,\mu_A-1;\label{pofAaccentcasevijf}\\ \intertext{for certain $\xi_A,\xi_A'\in\verA$ with $\xi_A\xi_A'\equiv1\bmod\mu_A$. 
By \eqref{bpsc5}, the $\mu_l$ points of $H_l$ are} \left\{\frac{-jb_3}{\mu_l}\right\}v_0+\frac{j}{\mu_l}v_3;&&\qquad j&=0,\ldots,\mu_l-1;\label{poflcasevijf}\\ \intertext{while those of $H_B$ and $H_2$ are given by} \left\{\frac{i\xi_B}{\mu_B}\right\}v_1+\frac{i}{\mu_B}v_2;&&\qquad i&=0,\ldots,\mu_B-1;\label{pofBcasevijf}\\ \shortintertext{and by} \left\{\frac{j\xi_2}{\mu_2}\right\}v_1+\frac{j}{\mu_2}v_3;&&\qquad j&=0,\ldots,\mu_2-1;\label{pof2casevijf} \end{alignat} respectively, for unique $\xi_B\in\verB$ and $\xi_2\in\vertwee$, coprime to $\mu_B$ and $\mu_2$, respectively. Since $\mu_1=\mu_A\mu_l$, the description of the $\mu_1$ points of $H_1$ is rather easy: \begin{equation}\label{pof1casevijf} \left\{\frac{i\xi_A\mu_l-jb_3\mu_A}{\mu_1}\right\}v_0+\frac{i}{\mu_A}v_1+\frac{j}{\mu_l}v_3;\qquad i=0,\ldots,\mu_A-1;\quad j=0,\ldots,\mu_l-1. \end{equation} Based on (\ref{pofBcasevijf}--\ref{pof2casevijf}) and \eqref{bpsc5}, we find in exactly the same way as in Case~III (Paragraph~\ref{ssspoH3c3}) a complete list of the $\mu_3=\mu_B\mu_2\fBtwee$ points of $H_3$: \begin{multline}\label{pof3casevijf} \left\{\frac{(i-\lfloor k\psi/\fBtwee\rfloor)\xi_B\mu_2\fBtwee+j\xi_2\mu_B\fBtwee-k\Theta}{\mu_3}\right\}v_1\\ \shoveright{+\frac{i\fBtwee+\{k\psi\}_{\fBtwee}}{\mu_B\fBtwee}v_2+\frac{j\fBtwee+k}{\mu_2\fBtwee}v_3;}\\ i=0,\ldots,\mu_B-1;\quad j=0,\ldots,\mu_2-1;\quad k=0,\ldots,\fBtwee-1. \end{multline} \subsubsection{Formulas for $\Sigma_A,\Sigma_A',\Sigma_B,\Sigma_l,\Sigma_1,\Sigma_1',\Sigma_2,$ and $\Sigma_3$}As in Case~III, for some of the sums $\Sigma_{\bullet}$ and $\Sigma_{\bullet}'$, we will have to distinguish between two cases. Let us put \begin{gather*} n^{\ast}=\frac{\gcd(y_A,m_1)}{\gcd(y_A,m_1,m(h^{\ast}))},\\ \text{with}\qquad m(h^{\ast})=\frac{\xi_Ay_A+m_1}{\mu_A}\in\Zplusnul\qquad\text{and}\qquad h^{\ast}=\frac{\xi_A}{\mu_A}v_0+\frac{1}{\mu_A}v_1\in\Z^3, \end{gather*} a generating element of the group $H_A$ (if $\mu_A>1$). Then we have that \begin{equation*} p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}\qquad\qquad\text{and}\qquad\qquad\ p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}} \end{equation*} both equal one if $n^{\ast}\mid n$, while they both differ from one if $n^{\ast}\nmid n$. Proceeding as in Paragraphs~\ref{sssSigmaBc3}--\ref{sssSigmaBaccentc3}, we obtain that \begin{align} \Sigma_A&= \begin{dcases*} \mu_A,&if $n^{\ast}\mid n$;\\ 0,&otherwise; \end{dcases*}\qquad\qquad\text{and}\label{formSigAc5}\\ \frac{\Sigma_A'}{\log p}&= \begin{dcases*} \frac{(y_A+m_1)(\mu_A-1)}{2},&if $n^{\ast}\mid n$;\\[+.4ex] \frac{y_A}{p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}-1}+ \frac{m_1}{p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}-1},&otherwise. \end{dcases*}\label{formSigAaccentc5} \end{align} We continue as in Paragraph~\ref{sssSASCS2c3}. Based on \eqref{pofBcasevijf} and $p^{w\cdot v_1}=1$, we find that \begin{equation}\label{formsigmaBcase5} \Sigma_B= \begin{dcases*} \frac{F_2}{p^{\frac{\xi_B(w\cdot v_1)+w\cdot v_2}{\mu_B}}-1},&in any case;\\ \frac{F_2}{q^{\fAB}-1},&if $n^{\ast}\mid n$. \end{dcases*} \end{equation} The special formula for $n^{\ast}\mid n$ arises from \begin{equation}\label{sigbsigaidcasevijf} p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}p^{\frac{\xi_B(w\cdot v_1)+w\cdot v_2}{\mu_B}}=p^{\fAB(-s_0-1)}=q^{\fAB}, \end{equation} which in turn follows from $v_0+\xi_A'v_1\in\mu_A\Z^3$, $\xi_Bv_1+v_2\in\mu_B\Z^3$, \eqref{vi2c5}, and \eqref{dpi2c5}. 
For $\Sigma_l$ we use \eqref{poflcasevijf}, $p^{w\cdot v_0}=1$, and \eqref{dpi1c5} in order to conclude \begin{equation*} \Sigma_l=\sum_{h\in H_l}p^{w\cdot h}=\sum_{j=0}^{\mu_l-1}\Bigl(p^{\frac{-b_3(w\cdot v_0)+w\cdot v_3}{\mu_l}}\Bigr)^j=\sum_jq^j=\frac{F_3}{q-1}. \end{equation*} By \eqref{pofAaccentcasevijf}, \eqref{pof2casevijf}, and \eqref{vi1c5} we have that \begin{equation*} v_0+\xi_A'v_1\in\mu_A\Z^3,\qquad\xi_2v_1+v_3\in\mu_2\Z^3,\qquad\text{and}\qquad b_3v_0-v_3\in\mu_l\Z^3. \end{equation*} Since $\mu_2=\gcd(\mu_A,\mu_l)$, we obtain \begin{equation*} -b_3(v_0+\xi_A'v_1)+(\xi_2v_1+v_3)+(b_3v_0-v_3)=(-\xi_A'b_3+\xi_2)v_1\in\mu_2\Z^3, \end{equation*} and hence \begin{equation*} \frac{-\xi_A'b_3+\xi_2}{\mu_2}\in\Z. \end{equation*} Using $\psi\mu_2=\Psi=b_3\mu_A$, $p^{w\cdot v_1}=1$, \eqref{dpi1c5}, and $\mu_l/\mu_2=\mu_l'\in\Zplusnul$, it then follows that \begin{multline}\label{extraidentiteitc5} \Bigl(p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}\Bigr)^{-\psi}p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}} =p^{\frac{-\Psi(w\cdot v_0)+(-\xi_A'\Psi+\xi_2\mu_A)(w\cdot v_1)+\mu_A(w\cdot v_3)}{\mu_A\mu_2}}\\ =p^{\frac{-b_3(w\cdot v_0)+(-\xi_A'b_3+\xi_2)(w\cdot v_1)+w\cdot v_3}{\mu_2}} =p^{\frac{-b_3(w\cdot v_0)+w\cdot v_3}{\mu_2}}=p^{\frac{\mu_l(-s_0-1)}{\mu_2}}=q^{\mu_l'}. \end{multline} In this way \eqref{pof2casevijf} yields \begin{equation}\label{formsigma2case5} \Sigma_2=\sum_{h\in H_2}p^{w\cdot h}=\sum_{j=0}^{\mu_2-1}\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}\Bigr)^j= \begin{dcases*} \frac{F_3}{p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1},&in any case;\\ \frac{F_3}{q^{\mu_l'}-1},&if $n^{\ast}\mid n$. \end{dcases*} \end{equation} Keeping in mind that $p^{w\cdot v_0}=1$, we easily find from the description \eqref{pof1casevijf} of the points of $H_1$ that \begin{align} \Sigma_1=\sum_{h\in H_1}p^{w\cdot h} &=\sum_{i=0}^{\mu_A-1}\sum_{j=0}^{\mu_l-1}p^{\frac{i\xi_A\mu_l-jb_3\mu_A}{\mu_1}(w\cdot v_0)+\frac{i}{\mu_A}(w\cdot v_1)+\frac{j}{\mu_l}(w\cdot v_3)}\notag\\ &=\sum_i\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}\Bigr)^i\sum_j\Bigl(p^{\frac{-b_3(w\cdot v_0)+w\cdot v_3}{\mu_l}}\Bigr)^j\notag\\ &=\Sigma_A\Sigma_l=\begin{dcases*} \frac{\mu_AF_3}{q-1},&if $n^{\ast}\mid n$;\\[+.3ex] 0,&otherwise. \end{dcases*}\label{formsigma1case5} \end{align} To calculate $\Sigma_1'$ we follow the same process as in Case~III. Write $\psi$ as $\psi=t\mu_l'+\barpsi$ with $t\in\Zplus$ and $\barpsi=\{\psi\}_{\mu_l'}$. Clearly $\barpsi\in\{0,\ldots,\mu_l'-1\}$, and since $\gcd(\psi,\mu_l')=1$, we also have $\gcd(\barpsi,\mu_l')=1$. Hence $\barpsi=0$ occurs if and only if $\mu_l'=1$. Exclusively in the case that $\mu_l'>1$ we also introduce the numbers \begin{equation*} \kappa_{\rho}=\min\left\{\kappa\in\Zplus\;\middle\vert\;\left\lfloor\frac{\kappa\barpsi}{\mu_l'}\right\rfloor=\rho\right\}=\left\lceil\frac{\rho\mu_l'}{\barpsi}\right\rceil;\qquad\rho=0,\ldots,\barpsi. 
\end{equation*} Proceeding as in Subsection~\ref{studysigmaeenacc3} and applying \eqref{interpbdrieyammdriec5} in the end, we eventually obtain that \begin{equation}\label{formsigma1accentcase5} \frac{\Sigma_1'}{(\log p)F_3}= \begin{dcases*} \begin{multlined}[b][.68\textwidth] \frac{1}{1-q}\Biggl(\frac{y_A}{q^{\mu_l'}-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}-\frac{(y_A+m_1)(\mu_A-1)}{2}\\ +\frac{y_At-\mu_A}{1-q^{-1}}-\frac{m_3\mu_A(F_3+1)}{F_3}-y_A\Biggr)-\frac{y_A}{q^{\mu_l'}-1}, \end{multlined} &\!\text{if $n^{\ast}\mid n$;}\\[+.5ex] \begin{multlined}[b][.68\textwidth] \frac{y_Ap^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}}{\Bigl(p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}-1\Bigr)\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1\Bigr)}\cdot\\ \shoveright{\sum_{\kappa=0}^{\mu_l'-1}q^{\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}\Bigr)^{\left\lfloor\frac{\kappa\psi}{\mu_l'}\right\rfloor}}\\ -\frac{y_A}{p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1}+\frac{m_1}{(q-1)\Bigl(p^{\frac{\xi_A(w\cdot v_0)+w\cdot v_1}{\mu_A}}-1\Bigr)}, \end{multlined} &\!\text{if $n^{\ast}\nmid n$;} \end{dcases*} \end{equation} thereby adopting the convention that the empty sum over $\rho$ equals zero if $\mu_l'=1$. From the description \eqref{pof3casevijf} of the points of $H_3$, it is reasonable that also the calculation of $\Sigma_3$ is essentially not different from the one in Case~III; proceeding as in Paragraph~\ref{ssscalcsigmadriecasedrie}, thereby using Identity~\eqref{dpi3c5}, we find that \begin{equation}\label{formuleSigmadriednc5} \Sigma_3=\frac{F_2F_3}{\Bigl(p^{\frac{\xi_B(w\cdot v_1)+w\cdot v_2}{\mu_B}}-1\Bigr)\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1\Bigr)}\sum_{k=0}^{\fBtwee-1}q^k\Bigl(p^{\frac{\xi_B(w\cdot v_1)+w\cdot v_2}{\mu_B}}\Bigr)^{-\left\lfloor\frac{k\psi}{\fBtwee}\right\rfloor}. \end{equation} A simplified version of this formula, valid in the case that $n^{\ast}\mid n$ and justified by Equalities \eqref{sigbsigaidcasevijf} and \eqref{extraidentiteitc5}, is given by \begin{equation}\label{formuleSigmadriedeeltc5} \Sigma_3=\frac{F_2F_3}{(q^{\fAB}-1)(q^{\mu_l'}-1)}\sum_{k=0}^{\fBtwee-1}q^{k-\fAB\left\lfloor\frac{k\psi}{\fBtwee}\right\rfloor}. \end{equation} \subsection{Proof of $R_2'=R_1'=0$} On the one hand, it is clear from (\ref{formR2accasevijf}, \ref{formSigAc5}, and \ref{formsigma1case5}) that $R_2'=0$ in any case. 
On the other hand, if we fill in Formulas (\ref{formSigAc5}--\ref{formsigmaBcase5}, \ref{formsigma2case5}--\ref{formuleSigmadriedeeltc5}) for the $\Sigma(\cdot)(s_0)$ and the $\Sigma(\cdot)'(s_0)$ in Expression~\eqref{formR1accasevijf} for $R_1'$, we see that proving $R_1'=0$ comes down to verifying that \begin{equation*} \sum_{k=0}^{\fBtwee-1}q^{k-\fAB\left\lfloor\frac{k\psi}{\fBtwee}\right\rfloor}=\frac{q^{\fAB}-1}{q-1}\sum_{\rho=1}^{\barpsi}q^{\kappa_{\rho}}+\frac{q^{\mu_l'}-1}{q-1}\left(tq\frac{q^{\fAB}-1}{q-1}+1\right), \end{equation*} if $n^{\ast}\mid n$, and \begin{multline*} \Bigl(p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}-1\Bigr)\sum_{k=0}^{\fBtwee-1}q^k\Bigl(p^{\frac{\xi_B(w\cdot v_1)+w\cdot v_2}{\mu_B}}\Bigr)^{-\left\lfloor\frac{k\psi}{\fBtwee}\right\rfloor}\\* +p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}\Bigl(p^{\frac{\xi_B(w\cdot v_1)+w\cdot v_2}{\mu_B}}-1\Bigr)\sum_{\kappa=0}^{\mu_l'-1}q^{\kappa}\Bigl(p^{\frac{w\cdot v_0+\xi_A'(w\cdot v_1)}{\mu_A}}\Bigr)^{\left\lfloor\frac{\kappa\psi}{\mu_l'}\right\rfloor}\\* =\Bigl(p^{\frac{\xi_2(w\cdot v_1)+w\cdot v_3}{\mu_2}}-1\Bigr)\frac{q^{\fAB}-1}{q-1}, \end{multline*} otherwise. In order to obtain this last equation, we need to apply Identity~\eqref{sigbsigaidcasevijf} at some point. Since the analogous relations between the variables hold, e.g., \begin{equation*} \fBtwee=\fAB\psi+\mu_l'\qquad\text{and}\qquad\psi=t\mu_l'+\barpsi, \end{equation*} these final assertions can be proved in exactly the same way as in Subsection~\ref{ssfinalsscasedrie} of Case~III. Hence we conclude Case~V. \section{Case~VI: at least three facets of \Gf\ contribute to $s_0$; all of them are $B_1$-facets (compact or not) with respect to a same variable and they are \lq connected to each other by edges\rq} More precisely, we mean that we can denote the contributing $B_1$-facets by $\tau_0,\tau_1,\ldots,\tau_t$ with $t\geqslant2$ in such a way that facets $\tau_{i-1}$ and $\tau_i$ share an edge for all $i\in\{1,\ldots,t\}$. An example with $t=2$ is shown in Figure~\ref{figcase6}. 
\begin{figure}
\centering
% Drawing code (PSTricks) not reproduced here: the $B_1$-facets $\tau_0$, $\tau_1$, and $\tau_2$ with vertices $A$, $B$, $C$ and their common vertex $V(x_V,y_V,1)$ at \lq height\rq\ one.
\caption{Case VI: $B_1$-facets $\tau_0,\tau_1,$ and $\tau_2$ all contribute to $s_0$}
\label{figcase6}
\end{figure}
Let us assume that the contributing facets $\tau_0,\tau_1,\ldots,\tau_t$ are $B_1$ with respect to the variable $z$. Since the $\tau_i$ all contribute to the same candidate pole $s_0$, their affine supports intersect the diagonal of the first octant in the same point $(-1/s_0,-1/s_0,-1/s_0)$. As these affine supports share only one point, the aforementioned intersection point must be the contributing facets' common vertex $V$ at \lq height\rq\ one:
\begin{equation*}
\left(-\frac{1}{s_0},-\frac{1}{s_0},-\frac{1}{s_0}\right)=(x_V,y_V,1).
\end{equation*}
We conclude that $x_V=y_V=1$ and $s_0=-1$. Hence under the conditions of Theorem~\ref{maintheoartdrie}, Case~VI cannot occur.
\section{General case: several groups of $B_1$-facets contribute to $s_0$; every group is separately covered by one of the previous cases, and the groups have pairwise at most one point in common}\label{secgeval7art3}
As the different \lq clusters\rq\ of contributing $B_1$-facets pairwise share not more than one point, we can decompose each cone associated to a vertex of \Gf\ into simplicial cones in such a way that the relevant residues in $s_0$ split up into parts, each part corresponding to one of the preceding cases. In this way the general case follows immediately from the previous ones. Figure~\ref{figcase7} shows two possible configurations of $B_1$-facets that fall under the general case.\label{eindegrbew}
\begin{figure}
\centering
% Drawing code (PSTricks) not reproduced here. Panel (a): the non-compact $B_1$-facet $\tau_0$ and the $B_1$-simplex $\tau_1$ both contribute to $s_0$; as they have only one point $D=\tau_0\cap\tau_1$ in common, they form two separate clusters. Panel (b): the $B_1$-facets $\tau_0,\tau_1,\tau_2$ all contribute to $s_0$; we distinguish the clusters $\{\tau_0\}$ and $\{\tau_1,\tau_2\}$.
\caption{General Case: several \lq clusters\rq\ of $B_1$-facets contribute to the candidate-pole $s_0$}
\label{figcase7}
\end{figure}
\section{The main theorem for a non-trivial character of \Zpx}\label{sectkarakter}
In this section we consider Igusa's zeta function of a
polynomial $f(x_1,\ldots,x_n)\in\Zp[x_1,\ldots,x_n]$ and a character $\chi:\Zpx\to\Ccross$ of \Zpx, and we prove the analogue of Theorem~\ref{mcigusandss} for a non-trivial character. We start with the definition of this \lq twisted\rq\ $p$-adic zeta function. Let $p$ be a prime number and $a\in\Qp$. We denote the $p$-adic order of $a$ by $\ord_pa\in\Z\cup\{\infty\}$; we write $\abs{a}=p^{-\ord_pa}$ for the $p$-adic norm of $a$ and $\ac a=\abs{a}a$ for its angular component. As before, we denote by $\abs{dx}=\abs{dx_1\wedge\cdots\wedge dx_n}$ the Haar measure on \Qpn, normalized in such a way that \Zpn\ has measure one. \begin{definition}[local twisted Igusa zeta function]\label{defIZFkaraktartdrie} Cfr.\ \cite[Def.~1.1]{Hoo01}. Let $p$ be a prime number and $f(x)=f(x_1,\ldots,x_n)$ a polynomial in $\Zp[x_1,\ldots,x_n]$. Let $\chi:\Zpx\to\Ccross$ be a character of \Zpx, i.e., a multiplicative group homomorphism with finite image. We formally put $\chi(0)=0$. To $f$ and $\chi$ we associate the local Igusa zeta function \begin{equation*} Z_{f,\chi}^0:\{s\in\C\mid\Re(s)>0\}\to\C:s\mapsto\int_{p\Zpn}\chi(\ac f(x))\abs{f(x)}^s\abs{dx}. \end{equation*} \end{definition} If $\chi$ is the trivial character, we obtain the usual local Igusa zeta function of $f$. In this section we will deal with the non-trivial characters. The rationality result of Igusa \cite{Igu74} and Denef \cite{Den84} holds for the above version of Igusa's zeta function as well. From now on, by $Z_{f,\chi}^0$ we mean the meromorphic continuation to \C\ of the function defined in Definition~\ref{defIZFkaraktartdrie}. The goal is to verify the following analogue of Theorem~\ref{mcigusandss}. \begin{theorem}[Monodromy Conjecture for Igusa's zeta function of a non-degenerated surface singularity and a non-trivial character of \Zpx]\label{mcigusandsskarakter} Let $f(x,y,z)\in\Z[x,y,z]$ be a nonzero polynomial in three variables satisfying $f(0,0,0)=0$, and let $U\subset\C^3$ be a neighborhood of the origin. Suppose that $f$ is non-degenerated over \C\ with respect to all the compact faces of its Newton polyhedron, and let $p$ be a prime number such that $f$ is also non-degenerated over \Fp\ with respect to the same faces.\footnote{By Remark~\ref{verndcndfp}(i) this is the case for almost all prime numbers $p$.} Let $\chi:\Zpx\to\Ccross$ be a non-trivial character of \Zpx, and assume that $\chi$ is trivial on $1+p\Zp$. Suppose that $s_0$ is a pole of the local Igusa zeta function $Z_{f,\chi}^0$ associated to $f$ and $\chi$. Then $e^{2\pi i\Re(s_0)}$ is an eigenvalue of the local monodromy of $f$ at some point of $f^{-1}(0)\cap U$. \end{theorem} The reason that we restrict to characters $\chi$ that are trivial on $1+p\Zp$, is that in this case we have a nice analogue of Denef and Hoornaert's formula (Theorem~\ref{formdenhoor}) for $Z_{f,\chi}^0$. We give the formula below, but first we introduce a notation that simplifies the statement of the formula. \begin{notation}\label{notchibarartdrieh} Let $p$ be a prime number and $\chi:\Zpx\to\Ccross$ a character of \Zpx. Assume that $\chi$ is trivial on the (multiplicative) subgroup $1+p\Zp$ of \Zpx. We shall identify the quotient group $\Zpx/(1+p\Zp)$ with \Fpcross, and we shall denote by $\pi:\Zpx\to\Fpcross$ the natural surjective homomorphism. Since $1+p\Zp\subset\ker\chi$, there exists a unique homomorphism $\barchi:\Fpcross\to\Ccross$ such that $\chi=\barchi\circ\pi$. In order for $\barchi$ to be defined on the whole of \Fp, we shall formally put $\barchi(0)=0$. 
\end{notation} \begin{theorem}\label{formdenhoorkarakters} \textup{\cite[Thm.~3.4]{Hoo01}}. Let $p$ be a prime number; let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero polynomial in $\Zp[x_1,\ldots,x_n]$ satisfying $f(0)=0$. Suppose that $f$ is non-degenerated over \Fp\ with respect to all the compact faces of its Newton polyhedron \Gf. Let $\chi:\Zpx\to\Ccross$ be a non-trivial character of \Zpx, and assume that $\chi$ is trivial on $1+p\Zp$. Then the local Igusa zeta function associated to $f$ and $\chi$ is the meromorphic complex function \begin{equation*} Z_{f,\chi}^0:s\mapsto\sum_{\substack{\tau\mathrm{\ compact}\\\mathrm{face\ of\ }\Gf}}L_{\tau}S(\Dtu)(s), \end{equation*} with \begin{gather*} L_{\tau}=p^{-n}\sum_{x\in\Fpcrossn}\barchi\bigl(\fbart(x)\bigr)\\\shortintertext{and} S(\Dtu)(s)=\sum_{k\in\Zn\cap\Dtu}p^{-\sigma(k)-m(k)s} \end{gather*} for every compact face $\tau$ of \Gf. Hereby $\ft,\fbart$, and \barchi\ are defined as in Notations~\ref{notftauart3}, \ref{notftaubarart3}, and \ref{notchibarartdrieh}, respectively; the definitions of $\sigma(k),m(k)$, and \Dtu\ can be found in Notation~\ref{notsigmakartdrieintro} and Definitions~\ref{def_mfad} and \ref{def_Dfad}, respectively. The sums $S(\Dtu)(s)$ can be calculated in the same way as in Theorem~\ref{formdenhoor}. \end{theorem} Note that, contrary to the trivial character case, the $L_{\tau}$ do not depend on the variable $s$. Consequently, $Z_{f,\chi}^0$ for a non-trivial character $\chi$, has \lq fewer\rq\ candidate poles than \Zof. \begin{corollary} Let $f$ and $\chi$ be as in Theorem~\ref{formdenhoorkarakters}. Let $\gamma_1,\ldots,\gamma_r$ be all the facets of \Gf, and let $v_1,\ldots,v_r$ be the unique primitive vectors in $\Zplusn\setminus\{0\}$ that are perpendicular to $\gamma_1,\ldots,\gamma_r$, respectively. From Theorem~\ref{formdenhoorkarakters} and the rational expression for $S(\Dtu)(s)$ obtained in Theorem~\ref{formdenhoor}, it follows that the poles of $Z_{f,\chi}^0$ are among the numbers \begin{equation}\label{candpoleskarakterad} -\frac{\sigma(v_j)}{m(v_j)}+\frac{2k\pi i}{m(v_j)\log p}, \end{equation} with $j\in\{1,\ldots,r\}$ such that $m(v_j)\neq0$, and $k\in\Z$. We shall refer to these numbers as the candidate poles of $Z_{f,\chi}^0$. \end{corollary} Now suppose that $f,U,p,\chi$, and $s_0$ are as in Theorem~\ref{mcigusandsskarakter}. Then $s_0$ is one of the numbers \eqref{candpoleskarakterad}. Theorem~\ref{theoAenL} tells us that if $s_0$ is contributed (cfr.\ Definition~\ref{defcontrinlad}) by a facet of \Gf\ that is not a $B_1$-facet (cfr.\ Definition~\ref{defbeenfacetad}), then $e^{2\pi i\Re(s_0)}$ is an eigenvalue of monodromy of $f$ at some point of $f^{-1}(0)\cap U$. Proposition~\ref{propAenL} says that the same is true if $s_0$ is contributed by two $B_1$-facets of \Gf\ that are not $B_1$ for a same variable and that have an edge in common. Therefore, in order to obtain Theorem~\ref{mcigusandsskarakter}, it is sufficient to verify the following proposition. \begin{proposition}[On candidate poles of $Z_{f,\chi}^0$ only contributed by $B_1$-facets]\label{maintheoartdriekarakt} Let $p$ be a prime number and let $f(x,y,z)\in\Zp[x,y,z]$ be a nonzero polynomial in three variables with $f(0,0,0)=0$. Suppose that $f$ is non-degenerated over \Fp\ with respect to all the compact faces of its Newton polyhedron. Let $\chi:\Zpx\to\Ccross$ be a non-trivial character of \Zpx\ that is trivial on $1+p\Zp$. Suppose that $s_0$ is a candidate pole of $Z_{f,\chi}^0$ that is only contributed by $B_1$-facets of \Gf. 
Further assume that for any pair of contributing $B_1$-facets, we have that \begin{itemize} \item[-] either they are $B_1$-facets for a same variable, \item[-] or they have at most one point in common. \end{itemize} Then $s_0$ is not a pole of $Z_{f,\chi}^0$. \end{proposition} Let $p,f,\chi$, and $s_0$ be as in the proposition. Let us consider the same seven cases as in the proof of Theorem~\ref{maintheoartdrie}. The three observations below show that in every case, the relevant terms in the formula for $Z_{f,\chi}^0$ from Theorem~\ref{formdenhoorkarakters}, are either zero or they cancel in pairs. In what follows we shall use the notations of Theorem~\ref{formdenhoor}. First consider a vertex $V(x_V,y_V,1)$ of \Gf\ at \lq height\rq\ one. The corresponding polynomial $\overline{f_V}$ has the form $\overline{f_V}=\overline{a_V}x^{x_V}y^{y_V}z$ with $\overline{a_V}\in\Fpcross$. The factor $L_V$ is thus given by \begin{align} L_V&=p^{-3}\sum_{(x,y,z)\in(\Fpcross)^3}\barchi\bigl(\overline{a_V}x^{x_V}y^{y_V}z\bigr)\notag\\ &=p^{-3}\barchi(\overline{a_V})\sum_{x\in\Fpcross}\barchi^{x_V}(x)\sum_{y\in\Fpcross}\barchi^{y_V}(y)\sum_{z\in\Fpcross}\barchi(z).\label{lastsumkaraktad} \end{align} Since $\chi$ is non-trivial but trivial on $1+p\Zp$, the character $\barchi$ of \Fpcross\ is also non-trivial. It is well-known that in this case the last sum of \eqref{lastsumkaraktad} equals zero. Indeed, for any $u\in\Fpcross$ the map $\Fpcross\to\Fpcross:z\mapsto uz$ is a permutation. Consequently, \begin{equation*} \sum_{z\in\Fpcross}\barchi(z)=\sum_{z\in\Fpcross}\barchi(uz)=\barchi(u)\sum_{z\in\Fpcross}\barchi(z). \end{equation*} As $\barchi$ is non-trivial, there exists a $u\in\Fpcross$ with $\barchi(u)\neq1$, and for such $u$ the above equation implies that $\sum_{z\in\Fpcross}\barchi(z)=0$. We conclude that $L_V=0$ and the term associated to $V$ vanishes. Next let us consider a $B_1$-simplex $\tau_0$, say for the variable $z$. Let $A$ and $B$ be the two vertices of $\tau_0$ in the plane $\{z=0\}$, and let $C(x_C,y_C,1)$ be the vertex of $\tau_0$ at distance one of this plane. For $L_{[AB]}$ we find \begin{equation*} L_{[AB]}=p^{-3}\sum_{(x,y,z)\in(\Fpcross)^3}\barchi\bigl(\overline{f_{[AB]}}(x,y)\bigr)=p^{-3}(p-1)\sum_{(x,y)\in(\Fpcross)^2}\barchi\bigl(\overline{f_{[AB]}}(x,y)\bigr), \end{equation*} while $L_{\tau_0}$ is given by \begin{equation}\label{ltnulkarakad} L_{\tau_0}=p^{-3}\sum_{(x,y)\in(\Fpcross)^2}\sum_{z\in\Fpcross}\barchi\bigl(\overline{f_{[AB]}}(x,y)+\overline{a_C}x^{x_C}y^{y_C}z\bigr) \end{equation} for some $\overline{a_C}\in\Fpcross$. Fix $(x,y)\in(\Fpcross)^2$. If $z$ runs through \Fpcross, the argument of $\barchi$ in \eqref{ltnulkarakad} runs through all elements of the set $\Fp\setminus\bigl\{\overline{f_{[AB]}}(x,y)\bigr\}$. Consequently, \begin{align*} L_{\tau_0}&=p^{-3}\sum_{(x,y)\in(\Fpcross)^2}\Bigl(\sum\nolimits_{u\in\Fp}\barchi(u)-\barchi\bigl(\overline{f_{[AB]}}(x,y)\bigr)\Bigr)\\ &=-p^{-3}\sum_{(x,y)\in(\Fpcross)^2}\barchi\bigl(\overline{f_{[AB]}}(x,y)\bigr). \end{align*} Together with the fact that \begin{equation*} S(\Delta_{\tau_0})(s)=\frac{1}{p^{\sigma_0+m_0s}-1}\qquad\text{and}\qquad S(\Delta_{[AB]})(s)=\frac{1}{(p^{\sigma_0+m_0s}-1)(p-1)} \end{equation*} (with $(m_0,\sigma_0)$ the numerical data associated to $\tau_0$), we now easily find that the sum of the terms associated to $\tau_0$ and $[AB]$ equals zero. Finally, consider any $B_1$-facet $\tau_0$ (compact or not), and assume that $\tau_0$ is $B_1$ for the variable $z$. 
Let $A$ be a vertex of $\tau_0$ in the plane $\{z=0\}$ and $C(x_C,y_C,1)$ the vertex of $\tau_0$ at \lq height\rq\ one. Denote by $\tau_1$ the other facet of \Gf\ that contains the edge $[AC]$, and let $\tau_2$ be the facet in $\{z=0\}$. Denote by $\delta_A$ the simplicial subcone of $\Delta_A$ strictly positively spanned by the primitive vectors $v_0,v_1,v_2\in\Zplus^3\setminus\{0\}$ that are perpendicular to $\tau_0,\tau_1,\tau_2$, respectively. In the same way as in the previous paragraph we find that \begin{equation*} L_A=-(p-1)L_{[AC]}. \end{equation*} If we combine this identity with the expressions \begin{align*} S(\Delta_{[AC]})(s)&=\frac{\Sigma(\Delta_{[AC]})(s)}{(p^{\sigma_0+m_0s}-1)(p^{\sigma_1+m_1s}-1)}\qquad\text{and}\\ S(\delta_A)(s)&=\frac{\Sigma(\Delta_{[AC]})(s)}{(p^{\sigma_0+m_0s}-1)(p^{\sigma_1+m_1s}-1)(p-1)} \end{align*} (where $(m_0,\sigma_0)$ and $(m_1,\sigma_1)$ denote the numerical data of $\tau_0$ and $\tau_1$, re\-spec\-tive\-ly), we find again that the terms associated to $[AC]$ and $\delta_A$ cancel out. This concludes (the sketch of) the proof of Proposition~\ref{maintheoartdriekarakt} and Theorem~\ref{mcigusandsskarakter}. \section{The main theorem in the motivic setting}\label{sectmotivisch} \subsection{The local motivic zeta function and the motivic Monodromy Conjecture} The theory of motivic integration was invented by Kontsevich and further developed by a.o.\ Denef--Loeser \cite{DLmoteen,DLmottwee,DLmotdrie}, Loeser--Sebag \cite{LSmot,Sebag}, and Cluckers--Loeser \cite{CLmot}. Denef and Loeser introduced the motivic zeta function and the corresponding monodromy conjecture in \cite{DL98}. For an introduction to motivic integration, motivic zeta functions, and the (motivic) Monodromy Conjecture, we refer to \cite{nicaisemot} and \cite{veysmot}. In this section we will only give the definitions that are needed to state the results. In motivic integration theory, one associates to each algebraic variety $X$ over \C, and to each $l\in\Zplus$, a space $\mathcal{L}_l(X)$ of so-called $l$-jets on $X$. Informally speaking, this jet space $\mathcal{L}_l(X)$ is an algebraic variety over \C\ whose points with coordinates in \C\ correspond to points of $X$ with coordinates in $\C[t]/(t^{l+1})$, and vice versa. For all $l'\geqslant l$, there are natural \emph{truncation maps} $\pi_l^{l'}:\mathcal{L}_{l'}(X)\to\mathcal{L}_l(X)$, sending $l'$-jets to their reduction modulo $t^{l+1}$. Next one obtains the space $\mathcal{L}(X)$ of \emph{arcs} on $X$ as the inverse limit $\varprojlim\mathcal{L}_l(X)$ of the system $\bigl((\mathcal{L}_l(X))_{l\geqslant0},(\pi_l^{l'})_{l'\geqslant l\geqslant0}\bigr)$. The arc space $\mathcal{L}(X)$ should be thought of as an \lq algebraic variety of infinite dimension\rq\ over \C\ whose points with coordinates in \C\ agree with the points of $X$ with coordinates in $\C[[t]]$. It comes together with natural truncation maps $\pi_l:\mathcal{L}(X)\to\mathcal{L}_l(X)$, sending arcs to their reduction modulo $t^{l+1}$. In this section, the only algebraic variety we will consider, is the $n$-dimensional affine space $X=\mathbf{A}^n(\C)$. In this case, $\mathcal{L}_l(\mathbf{A}^n(\C))\cong\mathbf{A}^{n(l+1)}(\C)$ and $\mathcal{L}(\mathbf{A}^n(\C))$ can be identified with $(\C[t]/(t^{l+1}))^n$ and $(\C[[t]])^n$, respectively. We will use these identifications throughout the section. 
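To fix ideas, here is a purely illustrative instance of these identifications (not needed in the sequel): for $n=1$ and $l=2$, a $2$-jet on $\mathbf{A}^1(\C)$ is a truncated power series $\phi_0+\phi_1t+\phi_2t^2+(t^3)\in\C[t]/(t^3)$, which corresponds to the point $(\phi_0,\phi_1,\phi_2)$ of $\mathcal{L}_2(\mathbf{A}^1(\C))\cong\mathbf{A}^3(\C)$, while an arc on $\mathbf{A}^1(\C)$ is a full power series $\sum_{\kappa\geqslant0}\phi_{\kappa}t^{\kappa}\in\C[[t]]$, sent by $\pi_2$ to its reduction modulo $t^3$.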
The truncation maps are as expected: \begin{align*} \pi_l^{l'}:&\ (\C[t]/(t^{l'+1}))^n\to(\C[t]/(t^{l+1}))^n:\bigl(\phi_{\rho}+(t^{l'+1})\bigr)_{\rho}\mapsto\bigl(\phi_{\rho}+(t^{l+1})\bigr)_{\rho},\\ \pi_l:&\ (\C[[t]])^n\to(\C[t]/(t^{l+1}))^n:\bigl({\textstyle\sum_{\kappa}}\phi_{\rho,\kappa}t^{\kappa}\bigr)_{\rho}\mapsto\bigl({\textstyle\sum_{\kappa=0}^{l}}\phi_{\rho,\kappa}t^{\kappa}+(t^{l+1})\bigr)_{\rho}. \end{align*} In motivic integration, the discrete valuation ring $\C[[t]]$ and its uniformizer $t$ play the role that $\Zp$ and $p$ play in $p$-adic integration. The Grothendieck group of (complex) algebraic varieties is the abelian group $K_0(Var_{\C})$ generated by the isomorphism classes $[X]$ of algebraic varieties $X$, modulo the relations $[X]=[X\setminus Y]+[Y]$ if $Y$ is Zariski-closed in $X$. The Grothendieck group is turned into a Grothendieck ring by putting $[X]\cdot[Y]=[X\times Y]$ for all algebraic varieties $X$ and $Y$. The class of a (complex) algebraic variety in the Grothendieck ring is the universal invariant of an algebraic variety with respect to the additive and multiplicative relations above; it is a refinement of the topological Euler characteristic. We call a subset $C$ of an algebraic variety $X$ constructible if it can be written as a finite disjoint union of locally closed\footnote{w.r.t.\ the Zariski-topology on $X$} subvarieties $Y_1,\ldots,Y_r$ of $X$. For such a constructible subset $C=\bigsqcup_jY_j$, the class $[C]=\sum_j[Y_j]$ of $C$ in the Grothendieck ring is well-defined, i.e., is independent of the chosen decomposition. We denote the class of a point by $1$ and the class of the affine line $\mathbf{A}^1(\C)$ by \LL. Finally, we denote by $\MC=K_0(Var_{\C})[\LL^{-1}]$ the localization of $K_0(Var_{\C})$ with respect to \LL. It is known that $K_0(Var_{\C})$ is not a domain \cite{Poonen}; however, it is still an open question whether \MC\ is a domain or not. We shall call a subset $A$ of $(\C[[t]])^n$ cylindric if $A=\pi_l^{-1}(C)$ for some $l\in\Zplus$ and some constructible subset $C$ of $(\C[t]/(t^{l+1}))^n$. For such a cylindric subset $A=\pi_l^{-1}(C)$, one has that \begin{gather*} \pi_{l'}(A)\cong C\times\mathbf{A}^{n(l'-l)}(\C)\qquad\text{for all }l'\geqslant l;\\\shortintertext{therefore,} \mu(A)=[C]\LL^{-n(l+1)}=\lim_{l'\to\infty}[\pi_{l'}(A)]\LL^{-n(l'+1)}\in\MC \end{gather*} is independent of $l$. We call $\mu(A)$ the naive motivic measure of $A$. Its definition and in particular the chosen normalization are inspired by the $p$-adic Haar measure; note that $\mu((t^l\C[[t]])^n)=\LL^{-nl}$ for all $l\in\Zplus$. For $\phi_{\rho}=\phi_{\rho,0}+\phi_{\rho,1}t+\phi_{\rho,2}t^2+\cdots\in\C[[t]]\setminus\{0\}$, we define $\ord_t\phi_{\rho}$ as the smallest $\kappa\in\Zplus$ such that $\phi_{\rho,\kappa}\neq0$; additionally, we agree that $\ord_t0=\infty$. If $\phi=(\phi_1,\ldots,\phi_n)\in(\C[[t]])^n$, then we put \begin{equation*} \ord_t\phi=(\ord_t\phi_1,\ldots,\ord_t\phi_n)\in(\Zplus\cup\{\infty\})^n. \end{equation*} Let us recall the definition of the local $p$-adic zeta function. If $f(x)=f(x_1,\ldots,x_n)$ is a nonzero polynomial in $\Zp[x_1,\ldots,x_n]$ with $f(0)=0$, then \begin{align*} \Zofs&=\int_{p\Zpn}\abs{f(x)}^s\abs{dx}\\ &=\sum_{l\geqslant1}\mu(\{x\in p\Zpn\mid\ord_pf(x)=l\})p^{-ls}\\ &=p^{-n}\sum_{l\geqslant1}\#\bigl\{x+p^{l+1}\Zpn\in\bigl(p\Zp/p^{l+1}\Zp\bigr)^n\mid\ord_pf(x)=l\bigr\}\cdot(p^{-n}p^{-s})^l, \end{align*} with $\mu(\cdot)$ the Haar measure on \Qpn, so normalized that $\mu(\Zpn)=1$. 
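As a quick sanity check (a purely illustrative example, not used later), take $n=1$ and $f(x)=x$. Exactly $p-1$ of the $p^l$ residue classes $x+p^{l+1}\Zp\in\bigl(p\Zp/p^{l+1}\Zp\bigr)$ satisfy $\ord_px=l$, so the last expression becomes
\begin{equation*}
\Zofs=p^{-1}\sum_{l\geqslant1}(p-1)\bigl(p^{-1}p^{-s}\bigr)^l=\frac{\bigl(1-p^{-1}\bigr)p^{-1-s}}{1-p^{-1-s}},
\end{equation*}
in agreement with a direct computation of $\int_{p\Zp}\abs{x}^s\abs{dx}$.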
This is the motivation for the following definition. \begin{definition}[Local motivic zeta function]\label{deflocmotzf} Let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero polynomial in $\C[x_1,\ldots,x_n]$ satisfying $f(0)=0$. Put \begin{equation*} \mathcal{X}_l^0=\bigl\{\phi+\bigl(t^{l+1}\C[t]\bigr)^n\in\bigl(t\C[t]/t^{l+1}\C[t]\bigr)^n\mid\ord_tf(\phi)=l\bigr\} \end{equation*} for $l\in\Zplusnul$. Then the local motivic zeta function $\Zmotof(s)$ associated to $f$ is by definition the following element of $\MC[[\LL^{-s}]]$: \begin{align*} \Zmotof(s)&=\sum_{l\geqslant1}\mu(\{\phi\in(t\C[[t]])^n\mid\ord_tf(\phi)=l\})(\LL^{-s})^l\\ &=\LL^{-n}\sum_{l\geqslant1}[\mathcal{X}_l^0](\LL^{-n}\LL^{-s})^l\in\MC[[\LL^{-s}]]. \end{align*} Here $\LL^{-s}$ should be seen as a formal indeterminate. In what follows we shall always denote $\LL^{-s}$ by $T$; i.e., we define the local motivic zeta function \Zmotoft\ of $f$ as \begin{equation*} \Zmotoft=\LL^{-n}\sum_{l\geqslant1}[\mathcal{X}_l^0](\LL^{-n}T)^l\in\MC[[T]]. \end{equation*} \end{definition} The (local) motivic zeta function \Zmotoft\ is thus by definition a formal power series in $T$ with coefficients in \MC. By means of resolutions of singularities, Denef and Loeser proved that it is also a rational function in $T$. More precisely, they proved that there exists a finite set $S\subset\Zplusnul^2$ such that \begin{equation*} \Zmotoft\in\MC\left[\frac{\LL^{-\sigma}T^m}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in S}\subset\MC[[T]]. \end{equation*} Denef and Loeser also formulated a motivic version of the Monodromy Conjecture. One should be careful, however, when translating the $p$-adic (or topological) statement of the conjecture to the motivic setting; since it is not known whether \MC\ is a domain or not, the notion of pole of \Zmotoft\ is not straightforward. \begin{conjecture}[Motivic Monodromy Conjecture] Let $f(x)=f(x_1,\ldots,x_n)$ be a non\-zero polynomial in $\C[x_1,\ldots,x_n]$ satisfying $f(0)=0$. Then there exists a finite set $S\subset\Zplusnul^2$ such that \begin{equation*} \Zmotoft\in\MC[T]\left[\frac{1}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in S}\subset\MC[[T]], \end{equation*} and such that, for each $(m,\sigma)\in S$, the complex number $e^{-2\pi i\sigma/m}$ is an eigenvalue of the local monodromy of $f$ at some point of the complex zero locus $f^{-1}(0)\subset\C^n$ close to the origin. \end{conjecture} The goal of this section is to prove the motivic Monodromy Conjecture for a polynomial in three variables that is non-degenerated over \C\ with respect to its Newton polyhedron, i.e., to prove the following motivic version of Theorem~\ref{mcigusandss}. \begin{theorem}[Monodromy Conjecture for the local motivic zeta function of a non-degenerated surface singularity]\label{mcmotivndss} Let $f(x,y,z)\in\C[x,y,z]$ be a nonzero polynomial in three variables satisfying $f(0,0,0)=0$, and let $U\subset\C^3$ be a neighborhood of the origin. Suppose that $f$ is non-degenerated over \C\ with respect to all the compact faces of its Newton polyhedron. Then there exists a finite set $S\subset\Zplusnul^2$ such that \begin{equation*} \Zmotoft\in\MC[T]\left[\frac{1}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in S}, \end{equation*} and such that, for each $(m,\sigma)\in S$, the complex number $e^{-2\pi i\sigma/m}$ is an eigen\-value of the local monodromy of $f$ at some point of $f^{-1}(0)\cap U\subset\C^3$. \end{theorem} We discuss a proof of Theorem~\ref{mcmotivndss} in Subsection~\ref{finsspmtms}. 
The essential formula for this proof is treated in the next subsection. \subsection{A formula for the local motivic zeta function of a non-degenerated polynomial} We will prove a combinatorial formula \`{a} la Denef--Hoornaert \cite{DH01} for the local motivic zeta function associated to a polynomial that is non-degenerated over the complex numbers. This was also done (in less detail) by Guibert \cite{guibert}. We state the formula below, but first we recall the precise notion of non-degeneracy we will be dealing with. \begin{definition}[Non-degenerated over \C] Let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero polynomial in $\C[x_1,\ldots,x_n]$ satisfying $f(0)=0$. We say that $f$ is non-degenerated over \C\ with respect to all the compact faces of its Newton polyhedron \Gf, if for every compact face $\tau$ of \Gf, the zero locus $\ft^{-1}(0)\subset\C^n$ of \ft\ has no singularities in \Ccrossn\ (cfr.\ Notation~\ref{notftauart3}). \end{definition} Looking for an analogue for the motivic zeta function of Denef and Hoornaert's formula for Igusa's $p$-adic zeta function, one roughly expects to recover their formula with $p$, $p^{-s}$, and $N_{\tau}$ replaced by \LL, $T$, and the class of $\{x\in\Ccrossn\mid\ft(x)=0\}$ in the Grothendieck ring of complex varieties, respectively. We have to be careful, however. Neither $T^{-1}$ nor $(1-\LL^{-1})^{-1}$ is an element of $\MC[[T]]$; in particular, whereas $\sum_{\lambda=0}^{\infty}p^{-\lambda}=(1-p^{-1})^{-1}$ in \R, the corresponding $\sum_{\lambda=0}^{\infty}\LL^{-\lambda}=(1-\LL^{-1})^{-1}$ does not make sense in $\MC[[T]]$. To avoid the appearance of $T^{-1}$ in the formula, we adopt a slightly different notion of fundamental parallelepiped; to avoid dividing by $1-\LL^{-1}$, we have to treat compact faces lying in coordinate hyperplanes differently.\footnote{The reason is that for such a face $\tau$, at least one $v$ among the primitive vectors spanning \Dtu\ has numerical data $(m(v),\sigma(v))=(0,1)$.} \begin{theorem}\label{formlocmotzf} Let $f(x)=f(x_1,\ldots,x_n)$ be a nonzero polynomial in $\C[x_1,\ldots,x_n]$ satisfying $f(0)=0$. Suppose that $f$ is non-degenerated over \C\ with respect to all the compact faces of its Newton polyhedron \Gf. Then the local motivic zeta function associated to $f$ is given by \begin{multline*} \Zmotoft=\\ \sum_{\substack{\tau\mathrm{\ compact\ face\ of\ }\Gf,\\\tau\nsubseteq\{x_{\rho}=0\}\mathrm{\ for\ all\ }\rho}}\!L_{\tau}S(\Dtu)+\sum_{\substack{\tau\mathrm{\ compact\ face\ of\ }\Gf,\\\tau\subset\{x_{\rho}=0\}\mathrm{\ for\ some\ }\rho}}\!L_{\tau}'S(\Dtu)'\in\MC[[T]], \end{multline*} where the $L_{\tau},S(\Dtu),L_{\tau}',S(\Dtu)'$ are as defined below. For $\tau$ not contained in any coordinate hyperplane, we have \begin{gather*} L_{\tau}=\bigl(1-\LL^{-1}\bigr)^n-\LL^{-n}[\mathcal{X}_{\tau}]\frac{1-T}{1-\LL^{-1}T}\in\MC[[T]],\\\shortintertext{with} \mathcal{X}_{\tau}=\left\{x\in\Ccrossn\;\middle\vert\;\ft(x)=0\right\},\\\shortintertext{and} S(\Dtu)=\sum_{k\in\Zn\cap\Delta_{\tau}}\LL^{-\sigma(k)}T^{m(k)}\in\MC[[T]]. \end{gather*} By $S(\Dtu)\in\MC[[T]]$ we mean, more precisely, the following. First choose a decomposition $\{\delta_i\}_{i\in I}$ of the cone \Dtu\ into simplicial cones $\delta_i$ without introducing new rays, and put $S(\Dtu)=\sum_{i\in I}S(\delta_i)$, with \begin{equation*} S(\delta_i)=\sum_{k\in\Zn\cap\delta_i}\LL^{-\sigma(k)}T^{m(k)}\in\MC[[T]] \end{equation*} for all $i\in I$.
Then assuming that the cone $\delta_i$ is strictly positively spanned by the linearly independent primitive vectors $v_j$, $j\in J_i$, in $\Zplusn\setminus\{0\}$, the element $S(\delta_i)\in\MC[[T]]$ is defined as\,\footnote{Since $\tau$ is not contained in any coordinate hyperplane, all $m(v_j)$ are positive integers. Hence $\left(1-\LL^{-\sigma(v_j)}T^{m(v_j)}\right)^{-1}=\sum_{\lambda=0}^{\infty}\left(\LL^{-\sigma(v_j)}T^{m(v_j)}\right)^{\lambda}\in\MC[[T]]$ for all $j\in\bigcup_{i\in I}J_i$.} \begin{equation*} S(\delta_i)=\frac{\tilde{\Sigma}(\delta_i)}{\prod_{j\in J_i}\bigl(1-\LL^{-\sigma(v_j)}T^{m(v_j)}\bigr)}\in\MC[[T]], \end{equation*} with \begin{equation*} \tilde{\Sigma}(\delta_i)=\sum_h\LL^{-\sigma(h)}T^{m(h)}\in\MC[T], \end{equation*} where $h$ runs through the elements of the set \begin{equation*} \tilde{H}(v_j)_{j\in J_i}=\Z^n\cap\tilde{\lozenge}(v_j)_{j\in J_i}, \end{equation*} with \begin{equation*} \tilde{\lozenge}(v_j)_{j\in J_i}=\left\{\sum\nolimits_{j\in J_i}h_jv_j\;\middle\vert\;h_j\in(0,1]\text{ for all }j\in J_i\right\} \end{equation*} the fundamental parallelepiped\,\footnote{with opposite boundaries as before} spanned by the vectors $v_j$, $j\in J_i$. Suppose now that the compact face $\tau$ of \Gf\ is contained in at least one coordinate hyperplane. Define $P_{\tau}\subset\{1,\ldots,n\}$ such that $\rho\in P_{\tau}$ if and only if $\tau\subset\{(x_1,\ldots,x_n)\in\Rplusn\mid x_{\rho}=0\}$. Note that $1\leqslant\abs{P_{\tau}}\leqslant n-1$ and that \ft\ only depends on the variables $x_{\rho}$, $\rho\not\in P_{\tau}$. If we put \begin{equation*} \mathcal{X}_{\tau}'=\left\{(x_{\rho})_{\rho\not\in P_{\tau}}\in(\C^{\times})^{n-\abs{P_{\tau}}}\;\middle\vert\;\ft(x_{\rho})_{\rho\not\in P_{\tau}}=0\right\}, \end{equation*} then we have \begin{equation*} L_{\tau}'=\bigl(1-\LL^{-1}\bigr)^{n-\abs{P_{\tau}}}-\LL^{-(n-\abs{P_{\tau}})}[\mathcal{X}_{\tau}']\frac{1-T}{1-\LL^{-1}T}\in\MC[[T]]. \end{equation*} Denoting the standard basis of \Rn\ by $(e_{\rho})_{1\leqslant\rho\leqslant n}$, it follows that \Dtu\ is strictly positively spanned by the vectors $e_{\rho}$, $\rho\in P_{\tau}$, and one or more other primitive vectors $v_j$, $j\in J_{\tau}$, in $\Zplusn\setminus\{0\}$.\footnote{Indeed, as $\tau$ is compact and contained in $\bigcap_{\rho\in P_{\tau}}\{x_{\rho}=0\}$, we have that $\dim\tau\leqslant n-\abs{P_{\tau}}-1$; hence $\dim\Dtu\geqslant\abs{P_{\tau}}+1$.} Choose a decomposition $\{\delta_i\}_{i\in I}$ of the cone \Dtu\ into simplicial cones $\delta_i$ without introducing new rays, and assume that $\delta_i$ is strictly positively spanned by the linearly independent primitive vectors $e_{\rho},v_j$; $\rho\in P_i,j\in J_i$; with $\emptyset\subset P_i\subset P_{\tau}$ and $\emptyset\varsubsetneq J_i\subset J_{\tau}$. For $i\in I$, put \begin{align*} \delta_i'&=\tilde{\lozenge}(e_{\rho})_{\rho\in P_i}+\cone(v_j)_{j\in J_i}\\ &=\left\{\sum\nolimits_{\rho\in P_i}h_{\rho}e_{\rho}+\sum\nolimits_{j\in J_i}\lambda_jv_j\;\middle\vert\;h_{\rho}\in(0,1],\lambda_j\in\Rplusnul\text{ for all }\rho,j\right\}\subset\delta_i. 
\end{align*} Then $S(\Dtu)'$ is given by\,\footnote{Note again that all $m(v_j)$ are positive; therefore, $\left(1-\LL^{-\sigma(v_j)}T^{m(v_j)}\right)^{-1}\in\MC[[T]]$ for all $j\in J_{\tau}=\bigcup_{i\in I}J_i$.} \begin{align*} S(\Dtu)'&=\sum_{i\in I}\bigl(1-\LL^{-1}\bigr)^{\abs{P_{\tau}}-\abs{P_i}}\sum_{k\in\Zn\cap\delta_i'}\LL^{-\sigma(k)}T^{m(k)}\\ &=\sum_{i\in I}\bigl(1-\LL^{-1}\bigr)^{\abs{P_{\tau}}-\abs{P_i}}\frac{\tilde{\Sigma}(\delta_i)}{\prod_{j\in J_i}\bigl(1-\LL^{-\sigma(v_j)}T^{m(v_j)}\bigr)}\in\MC[[T]], \end{align*} with \begin{equation*} \tilde{\Sigma}(\delta_i)=\sum_h\LL^{-\sigma(h)}T^{m(h)}\in\MC[T], \end{equation*} where $h$ runs through the elements of the set \begin{equation*} \tilde{H}(e_{\rho},v_j)_{\rho,j}=\Z^n\cap\left\{\sum\nolimits_{\rho\in P_i}h_{\rho}e_{\rho}+\sum\nolimits_{j\in J_i}h_jv_j\;\middle\vert\;h_{\rho},h_j\in(0,1]\text{ for all }\rho,j\right\}. \end{equation*} \end{theorem} The formula as stated above is obtained from Denef and Hoornaert's formula by first replacing $p$, $p^{-s}$, and $N_{\tau}$ by their proper analogues and then rewriting the formula in such a way that everything lives in $\MC[[T]]$. The proof is naturally similar to its $p$-adic counterpart, but we have to make some adaptations due to restrictions that are not present in the $p$-adic case. One important restriction is that the (naive) motivic measure is not $\sigma$-additive. As mentioned earlier, we can no longer give meaning to countable sums of measures as $\sum_{\lambda=0}^{\infty}\LL^{-\lambda}$ in $\MC$. This results in a necessarily different treatment of compact faces that are contained in coordinate hyperplanes. It also means that we have---in some sense---fewer measurable subsets and therefore less freedom in the way we calculate things. For example, where in the $p$-adic case we start the proof by splitting up the integration domain $p\Zpn$ according to the $p$-order of its elements, we cannot copy this approach in the present setting, as it would give rise to non-measurable sets. Another example is the following. In the $p$-adic case, when calculating $\int_{p\Zpn}\abs{f(x)}^s\abs{dx}$, we could ignore the $x\in p\Zpn$ with one or more coordinates equal to zero, because this part of the integration domain has measure zero. In the motivic setting, working with the naive motivic measure, we don't have this luxury; the corresponding $\{(\phi_1,\ldots,\phi_n)\in(t\C[[t]])^n\mid\phi_{\rho}=0\text{ for some }\rho\}$ is not a cylindric subset of $(t\C[[t]])^n$, hence is not measurable. In what follows, we adapt some familiar notions to better describe this new situation. We consider the extended non-negative real numbers $\Rplusbar=\Rplus\cup\{\infty\}$ with the usual order \lq$\leqslant$\rq\ and addition \lq$+$\rq. We extend the usual multiplication in \Rplus\ to a multiplication in \Rplusbar\ by putting $\infty\cdot0=0\cdot\infty=0$ and $\infty\cdot x=x\cdot\infty=\infty$ for $x\in\Rplusnulbar=\Rplusnul\cup\{\infty\}$. This allows us to also extend the dot product on \Rplusn\ to a dot product \begin{equation*} \cdot:\Rplusbarn\times\Rplusbarn\to\Rplusbar:\bigl((x_{\rho})_{\rho},(y_{\rho})_{\rho}\bigr)\mapsto(x_{\rho})_{\rho}\cdot(y_{\rho})_{\rho}=\sum\nolimits_{\rho=1}^nx_{\rho}y_{\rho} \end{equation*} on \Rplusbarn.
The motivation for this definition is that, in this way, \begin{equation*} \ord_t\phi^{\omega}=\ord_t\phi_1^{\omega_1}\cdots\phi_n^{\omega_n}=(\ord_t\phi_1,\ldots,\ord_t\phi_n)\cdot(\omega_1,\ldots,\omega_n)=(\ord_t\phi)\cdot\omega \end{equation*} for $\phi=(\phi_1,\ldots,\phi_n)\in(t\C[[t]])^n$ and $\omega=(\omega_1,\ldots,\omega_n)\in\Rplusn$, even if $\phi_{\rho}=0$ for some $\rho$. Next we extend $m(\cdot)$ and $F(\cdot)$ to \Rplusbarn\ in the expected way: \begin{equation*} m(k)=\inf_{x\in\Gf}k\cdot x=\min_{\omega\in\supp(f)}k\cdot\omega\in\Rplusbar,\qquad F(k)=\{x\in\Gf\mid k\cdot x=m(k)\} \end{equation*} for $k\in\Rplusbarn$. We have the following properties. \begin{proposition} Let $k\in\Rplusbarn$ and put $P_k=\{\rho\mid k_{\rho}=\infty\}\subset\{1,\ldots,n\}$. \begin{enumerate} \item If $k=(0,\ldots,0)$ or $m(k)=\infty$, then $F(k)=\Gf$, otherwise $F(k)$ is a proper face of \Gf; \item the face $F(k)$ is compact if and only if $k\in\Rplusnulbarn$ and $m(k)<\infty$; \item if $P_k\neq\emptyset$ and $m(k)<\infty$, then $F(k)$ is contained in $\bigcap_{\rho\in P_k}\{x_{\rho}=0\}$. \end{enumerate} \end{proposition} The map $F:\Rplusbarn\to\{\text{faces of }\Gf\}$ induces an equivalence relation on \Rplusbarn. For every face $\tau$ of \Gf, we put $\Dti=F^{-1}(\tau)$ and call it the (extended) cone associated to $\tau$. These equivalence classes are subject to the following properties. \begin{proposition} Let $\tau$ be a face of \Gf\ and put $\emptyset\subset P_{\tau}=\{\rho\mid\tau\subset\{x_{\rho}=0\}\}\subsetneq\{1,\ldots,n\}$. Suppose that \Dtu\ is strictly positively spanned by the primitive vectors $e_{\rho},v_j$; $\rho\in P_{\tau},j\in J_{\tau}$; in $\Zplusn\setminus\{0\}$.\footnote{We agree that $J_{\tau}=\emptyset$ if $\tau=\Gf$; this is, $\Delta_{\Gf}=\{(0,\ldots,0)\}$ is strictly positively spanned by the empty set.} Then we have \begin{enumerate} \item $\Dtu=\Dti\cap\Rplusn$; \item if $\tau=\Gf$, then $\Dti=\{(0,\ldots,0)\}\cup\{k\in\Rplusbarn\mid m(k)=\infty\}$; \item if $\tau$ is a proper face of \Gf, then \begin{equation*} \Dti=\left\{\sum\nolimits_{\rho\in P_{\tau}}\bar{\lambda}_{\rho}e_{\rho}+\sum\nolimits_{j\in J_{\tau}}\lambda_jv_j\;\middle\vert\;\bar{\lambda}_{\rho}\in\Rplusnulbar,\lambda_j\in\Rplusnul\text{ for all }\rho,j\right\}; \end{equation*} \item in particular, if $\tau$ is a proper face not contained in any coordinate hyperplane, then $\Dti=\Dtu$. \end{enumerate} Furthermore, \begin{enumerate} \setcounter{enumi}{4} \item the family $\{\Dti\mid\tau\text{ is a face of }\Gf\}$ of all extended cones forms a partition of \Rplusbarn, while \item $\{\Dti\mid\tau\text{ is a compact face of }\Gf\}$ partitions $\{k\in\Rplusnulbarn\mid m(k)<\infty\}$. \end{enumerate} \end{proposition} Let us do some more preliminary work to facilitate the actual proof of the theorem. In the lemmas and corollaries that follow we calculate the (naive) motivic measure of some cylindric subsets of $(t\C[[t]])^n$, but first we introduce a notation. \begin{notation} For $K\subset\Zplusnulbarn=(\Zplusnul\cup\{\infty\})^n$ and $l\in\Zplusnul$, we put \begin{equation*} X_{K,l}=\{\phi\in(t\C[[t]])^n\mid\ord_t\phi\in K\text{ and }\ord_tf(\phi)=l\}. \end{equation*} If $k\in\Zplusnuln$, then we usually write $X_{k,l}$ instead of $X_{\{k\},l}$. \end{notation} \begin{lemma}\label{motlemmaeen} Let $f$ be as in Theorem~\ref{formlocmotzf}. Suppose that $\tau$ is a compact face of \Gf, and put $\mathcal{X}_{\tau}=\left\{x\in\Ccrossn\;\middle\vert\;\ft(x)=0\right\}$. 
Let $k\in\Zn\cap\Dtu$ and $l\in\Zplusnul$. Then $k\in\Zplusnuln$, $m(k)\in\Zplusnul$, and \begin{equation*} \mu(X_{k,l})= \begin{cases} {\ds0,}&\text{if \,$l<m(k)$};\\ {\ds\bigl((\LL-1)^n-[\mathcal{X}_{\tau}]\bigr)\LL^{-n-\sigma(k)},}&\text{if \,$l=m(k)$};\\ {\ds[\mathcal{X}_{\tau}](\LL-1)\LL^{-n+m(k)-\sigma(k)-l},}&\text{if \,$l>m(k)$}. \end{cases} \end{equation*} \end{lemma} \begin{proof} Let $\phi=(\phi_1,\ldots,\phi_n)\in(t\C[[t]])^n$ with $\ord_t\phi=k=(k_1,\ldots,k_n)$, and let $\psi=(\psi_1,\ldots,\psi_n)\in(\C[[t]]^{\times})^n$ be such that $\phi_{\rho}=t^{k_{\rho}}\psi_{\rho}$ for all $\rho$. Then, for $\omega=(\omega_1,\ldots,\omega_n)\in\Zplusn$, we have that \begin{equation*} \phi^{\omega}=\phi_1^{\omega_1}\cdots\phi_n^{\omega_n}=t^{k_1\omega_1}\psi_1^{\omega_1}\cdots t^{k_n\omega_n}\psi_n^{\omega_n}=t^{k\cdot\omega}\psi^{\omega}. \end{equation*} Write \begin{equation*} f(x)=\sum_{\omega\in\Zplusn}a_{\omega}x^{\omega}\qquad\text{and}\qquad\ft(x)=\sum_{\omega\in\Zn\cap\tau}a_{\omega}x^{\omega}. \end{equation*} It follows from $k\in\Dtu$ that $k\cdot\omega=m(k)$ for all $\omega\in\supp(f)\cap\tau$,\footnote{Recall that $\supp(f)=\{\omega\in\Zplusn\mid a_{\omega}\neq0\}$.} whereas $k\cdot\omega\geqslant m(k)+1$ for $\omega\in\supp(f)\setminus\tau$. Hence we can write $f(\phi)$ as \begin{gather*} f(\phi)=t^{m(k)}\bigl(\ft(\psi)+t\tilde{f}_{\tau,k}(t,\psi)\bigr),\\\shortintertext{with} \tilde{f}_{\tau,k}(t,\psi)=\sum_{\omega\in\supp(f)\setminus\tau}a_{\omega}t^{k\cdot\omega-m(k)-1}\psi^{\omega}. \end{gather*} First of all, we see that $\ord_tf(\phi)\geqslant m(k)$; hence $\mu(X_{k,l})=0$ for $l<m(k)$. Secondly, we observe that $\ord_tf(\phi)=m(k)$ if and only if $\ord_t\ft(\psi)=0$. If we write $\psi=(\psi_{\rho,0}+\psi_{\rho,1}t+\psi_{\rho,2}t^2+\cdots)_{1\leqslant\rho\leqslant n}$, then $\ft(\psi)\in\ft(\psi_{1,0},\ldots,\psi_{n,0})+t\C[[t]]$. Consequently, the set \begin{align*} \widetilde{X}_{k,m(k)}&=\{\psi\in(\C[[t]]^{\times})^n\mid\ord_tf(\phi)=m(k)\}\\ &=\{\psi\in(\C[[t]]^{\times})^n\mid\ft(\psi_{1,0},\ldots,\psi_{n,0})\neq0\}\\ &=\pi_0^{-1}\bigl(\pi_0\bigl(\widetilde{X}_{k,m(k)}\bigr)\bigr) \end{align*} is a cylindric subset of $(\C[[t]])^n$ of motivic measure \begin{equation*} \mu\bigl(\widetilde{X}_{k,m(k)}\bigr)=\bigl[\pi_0\bigl(\widetilde{X}_{k,m(k)}\bigr)\bigr]\LL^{-n}=[\Ccrossn\setminus\mathcal{X}_{\tau}]\LL^{-n}=\bigl((\LL-1)^n-[\mathcal{X}_{\tau}]\bigr)\LL^{-n}. \end{equation*} The corresponding set $X_{k,m(k)}$ has motivic measure \begin{equation*} \mu\bigl(X_{k,m(k)}\bigr)=\LL^{-\sigma(k)}\mu\bigl(\widetilde{X}_{k,m(k)}\bigr)=\bigl((\LL-1)^n-[\mathcal{X}_{\tau}]\bigr)\LL^{-n-\sigma(k)}. \end{equation*} Suppose now that $l>m(k)$. Let us first calculate the measure of \begin{equation*} X_{k,\geqslant l}=\{\phi\in(t\C[[t]])^n\mid\ord_t\phi=k\text{ and }\ord_tf(\phi)\geqslant l\}. \end{equation*} From our expression for $f(\phi)$ we see that $\ord_tf(\phi)\geqslant l$ if and only if \begin{equation*} \ord_t\bigl(\ft(\psi)+t\tilde{f}_{\tau,k}(t,\psi)\bigr)\geqslant l'=l-m(k)\geqslant1, \end{equation*} or, equivalently, if and only if \begin{equation}\label{motcondle} \ft(\psi)+t\tilde{f}_{\tau,k}(t,\psi)\equiv0\mod t^{l'}\C[[t]]. \end{equation} Whether $\psi$ satisfies the above condition, only depends on the complex numbers \begin{equation*} \psi_{\rho,\kappa};\qquad\rho=1,\ldots,n;\ \kappa=0,\ldots,l'-1. \end{equation*} Clearly, for $\psi$ to satisfy \eqref{motcondle}, it is necessary that $\ft(\psi_{1,0},\ldots,\psi_{n,0})=0$. 
Fix such an $n$-tuple $(\psi_{1,0},\ldots,\psi_{n,0})$. Since $f$ is non-degenerated over \C\ with respect to $\tau$, there exists a $\rho_0$ such that $(\partial\ft/\partial x_{\rho_0})(\psi_{1,0},\ldots,\psi_{n,0})\neq0$. Therefore, Hensel's lifting lemma returns, for every free choice of $(n-1)(l'-1)$ complex numbers \begin{equation*} \psi_{\rho,\kappa};\qquad\rho=1,\ldots,\rho_0-1,\rho_0+1,\ldots,n;\ \kappa=1,\ldots,l'-1; \end{equation*} unique $\psi_{\rho_0,1},\ldots,\psi_{\rho_0,l'-1}\in\C$ such that $\psi$ satisfies \eqref{motcondle}. It follows that \begin{align*} \widetilde{X}_{k,\geqslant l}&=\{\psi\in(\C[[t]]^{\times})^n\mid\ord_tf(\phi)\geqslant l\}\\ &=\{\psi\in(\C[[t]]^{\times})^n\mid\ft(\psi)+t\tilde{f}_{\tau,k}(t,\psi)\equiv0\bmod t^{l'}\C[[t]]\}\\ &=\pi_{l'-1}^{-1}\bigl(\pi_{l'-1}\bigl(\widetilde{X}_{k,\geqslant l}\bigr)\bigr) \end{align*} is a cylindric subset of $(\C[[t]])^n$ of motivic measure \begin{multline*} \mu\bigl(\widetilde{X}_{k,\geqslant l}\bigr)=\bigl[\pi_{l'-1}\bigl(\widetilde{X}_{k,\geqslant l}\bigr)\bigr]\LL^{-nl'}\\ =\bigl[\mathcal{X}_{\tau}\times\C^{(n-1)(l'-1)}\bigr]\LL^{-nl'}=[\mathcal{X}_{\tau}]\LL^{-n+m(k)-l+1}. \end{multline*} The corresponding set $X_{k,\geqslant l}$ therefore has motivic measure \begin{equation*} \mu(X_{k,\geqslant l})=\LL^{-\sigma(k)}\mu\bigl(\widetilde{X}_{k,\geqslant l}\bigr)=[\mathcal{X}_{\tau}]\LL^{-n+m(k)-\sigma(k)-l+1}. \end{equation*} By additivity of the motivic measure, finally, we obtain that \begin{gather*} \mu(X_{k,l})=\mu(X_{k,\geqslant l}\setminus X_{k,\geqslant l+1})=\mu(X_{k,\geqslant l})-\mu(X_{k,\geqslant l+1})=\\ [\mathcal{X}_{\tau}]\LL^{-n+m(k)-\sigma(k)-l+1}-[\mathcal{X}_{\tau}]\LL^{-n+m(k)-\sigma(k)-l}=[\mathcal{X}_{\tau}](\LL-1)\LL^{-n+m(k)-\sigma(k)-l}, \end{gather*} which concludes the proof of the lemma. \end{proof} \begin{corollary}\label{motcoroleen} Let $f$ be as in Theorem~\ref{formlocmotzf} and suppose that $\tau$ is a com\-pact face of \Gf\ that is not contained in any coordinate hyperplane. Let $l\in\Zplusnul$. Then $X_{\Zn\cap\Dtu,l}$ is a cylindric subset of $(t\C[[t]])^n$; i.e., $\mu(X_{\Zn\cap\Dtu,l})$ exists. \end{corollary} \begin{proof} Clearly, $X_{\Zn\cap\Dtu,l}$ equals the disjoint union \begin{equation}\label{disjuncoreen} X_{\Zn\cap\Dtu,l}=\bigsqcup_{k\in\Zn\cap\Dtu}X_{k,l}. \end{equation} We know that $X_{k,l}=\emptyset$ for $k\in\Zplusnuln$ with $m(k)>l$; hence we may restrict the above union to $k$ satisfying $m(k)\leqslant l$. Choose $x\in\tau\cap\Rplusnuln\neq\emptyset$. Then $m(k)=k\cdot x$ for all $k\in\Dtu$. Moreover, $\{k\in\Rplusn\mid k\cdot x\leqslant l\}$ is a closed and bounded subset of \Rn, containing finitely many integral points. The union \eqref{disjuncoreen} so boils down to a finite disjoint union of sets $X_{k,l}$ that, by Lemma~\ref{motlemmaeen}, are cylindric subsets of $(t\C[[t]])^n$. Consequently, \begin{equation*} \mu(X_{\Zn\cap\Dtu,l})=\sum_{\substack{k\in\Zn\cap\Dtu,\\m(k)\leqslant l}}\mu(X_{k,l}) \end{equation*} is well-defined. \end{proof} \begin{lemma}\label{motlemmatwee} Let $f$ be as in Theorem~\ref{formlocmotzf} and suppose that $\tau$ is a compact face of \Gf\ that is contained in at least one coordinate hyperplane. Define $P_{\tau}\subset\{1,\ldots,n\}$ such that $\rho\in P_{\tau}$ if and only if $\tau\subset\{x_{\rho}=0\}$, and denote \begin{equation*} \mathcal{X}_{\tau}'=\left\{(x_{\rho})_{\rho\not\in P_{\tau}}\in(\C^{\times})^{n-\abs{P_{\tau}}}\;\middle\vert\;\ft(x_{\rho})_{\rho\not\in P_{\tau}}=0\right\}. 
\end{equation*} Let $k\in\Zn\cap\Dtu$, $\emptyset\subset P\subset P_{\tau}$, and put \begin{equation*} k\vee P=k+\sum_{\rho\in P}\Zplusbar e_{\rho}\subset\Zplusnulbarn\cap\Dti.\footnote{Hereby $(e_{\rho})_{1\leqslant\rho\leqslant n}$ denotes the standard basis of \Rn, and $\Zplusbar=\Zplus\cup\{\infty\}\subset\Rplusbar$.} \end{equation*} Note that $m(k')=m(k)\in\Zplusnul$ for all $k'\in k\vee P$. Finally, let $l\in\Zplusnul$. Then \begin{multline*} \mu(X_{k\vee P,l})=\\ \begin{cases} {\ds0,}&\text{if \,$l<m(k)$};\\ {\ds\bigl((\LL-1)^{n-\abs{P_{\tau}}}-[\mathcal{X}_{\tau}']\bigr)(\LL-1)^{\abs{P_{\tau}}-\abs{P}}\LL^{-n+\abs{P}-\sigma(k)},}&\text{if \,$l=m(k)$};\\ {\ds[\mathcal{X}_{\tau}'](\LL-1)^{\abs{P_{\tau}}-\abs{P}+1}\LL^{-n+\abs{P}+m(k)-\sigma(k)-l},}&\text{if \,$l>m(k)$}. \end{cases} \end{multline*} \end{lemma} \begin{proof} The proof is analogous to the proof of Lemma~\ref{motlemmaeen}. Essential is that \ft\ only depends on the variables $x_{\rho}$, $\rho\not\in P_{\tau}$. The measure of \begin{gather*} X_{k\vee P,\geqslant l}=\{\phi\in(t\C[[t]])^n\mid\ord_t\phi\in k\vee P\text{ and }\ord_tf(\phi)\geqslant l\}\\\shortintertext{equals} \mu(X_{k\vee P,\geqslant l})=[\mathcal{X}_{\tau}'](\LL-1)^{\abs{P_{\tau}}-\abs{P}}\LL^{-n+\abs{P}+m(k)-\sigma(k)-l+1} \end{gather*} for $l>m(k)$. \end{proof} \begin{corollary}\label{motcoroltwee} Let $f$ be as in Theorem~\ref{formlocmotzf} and suppose that $\tau$ is a compact face of \Gf\ that is contained in at least one coordinate hyperplane. Let $l\in\Zplusnul$. Then $X_{\Zplusnulbarn\cap\Dti,l}$ is a cylindric subset of $(t\C[[t]])^n$; i.e., $\mu(X_{\Zplusnulbarn\cap\Dti,l})$ exists. \end{corollary} \begin{proof} Put $\emptyset\subsetneq P_{\tau}=\{\rho\mid\tau\subset\{x_{\rho}=0\}\}\subsetneq\{1,\ldots,n\}$ as usual, and suppose that \Dtu\ is strictly positively spanned by the primitive vectors $e_{\rho},v_j$; $\rho\in P_{\tau},j\in J_{\tau}\neq\emptyset$; in $\Zplusn\setminus\{0\}$. Choose a decomposition $\{\delta_i\}_{i\in I}$ of the cone \Dtu\ into simplicial cones $\delta_i$ without introducing new rays, and assume that $\delta_i$ is strictly positively spanned by the linearly independent primitive vectors $e_{\rho},v_j$; $\rho\in P_i,j\in J_i$; with $\emptyset\subset P_i\subset P_{\tau}$ and $\emptyset\varsubsetneq J_i\subset J_{\tau}$. Then the \emph{extended simplicial cones} \begin{equation*} \delta_i^{\infty}=\left\{\sum\nolimits_{\rho\in P_i}\bar{\lambda}_{\rho}e_{\rho}+\sum\nolimits_{j\in J_i}\lambda_jv_j\;\middle\vert\;\bar{\lambda}_{\rho}\in\Rplusnulbar,\lambda_j\in\Rplusnul\text{ for all }\rho,j\right\},\quad i\in I, \end{equation*} clearly partition \Dti, and so we are looking at the finite disjoint union \begin{equation*} X_{\Zplusnulbarn\cap\Dti,l}=\bigsqcup_{i\in I}X_{\Zplusnulbarn\cap\delta_i^{\infty},l}. \end{equation*} Next we decompose $\Zplusnulbarn\cap\delta_i^{\infty}$, and subsequently $X_{\Zplusnulbarn\cap\delta_i^{\infty},l}$, as \begin{equation}\label{disjuncortwee} \Zplusnulbarn\cap\delta_i^{\infty}=\bigsqcup_{k\in\Zn\cap\delta_i'}k\vee P_i\quad\text{ and }\quad X_{\Zplusnulbarn\cap\delta_i^{\infty},l}=\bigsqcup_{k\in\Zn\cap\delta_i'}X_{k\vee P_i,l}, \end{equation} with \begin{equation*} \delta_i'=\left\{\sum\nolimits_{\rho\in P_i}h_{\rho}e_{\rho}+\sum\nolimits_{j\in J_i}\lambda_jv_j\;\middle\vert\;h_{\rho}\in(0,1],\lambda_j\in\Rplusnul\text{ for all }\rho,j\right\}\subset\delta_i. \end{equation*} Recall that $X_{k\vee P_i,l}=\emptyset$ for $k\in\Zplusnuln$ with $m(k)>l$. 
We may therefore restrict the second union of \eqref{disjuncortwee} to $k$ satisfying $m(k)\leqslant l$. Choose $x=(x_1,\ldots,x_n)\in\tau$ with $x_{\rho}>0$ for all $\rho\not\in P_{\tau}$. Then $m(k)=k\cdot x$ for all $k\in\delta_i'\subset\Dtu$, and $\{k\in\delta_i'\mid k\cdot x\leqslant l\}$ is a bounded subset of \Rn, containing finitely many integral points. It follows that the second union of \eqref{disjuncortwee} is actually a finite disjoint union of cylindric\footnote{See Lemma~\ref{motlemmatwee}.} subsets $X_{k\vee P_i,l}$ of $(t\C[[t]])^n$. We conclude that the finite sum \begin{equation*} \mu\bigl(X_{\Zplusnulbarn\cap\Dti,l}\bigr)=\sum_{i\in I}\sum_{\substack{k\in\Zn\cap\delta_i',\\m(k)\leqslant l}}\mu(X_{k\vee P_i,l}) \end{equation*} is well-defined. \end{proof} \begin{proof}[Proof of Theorem~\ref{formlocmotzf}] By definition we have\footnote{Note the difference between $\mathcal{X}_l^0$ (see Definition~\ref{deflocmotzf}) and $X_l^0$.} \begin{gather*} \Zmotoft=\LL^{-n}\sum_{l\geqslant1}[\mathcal{X}_l^0](\LL^{-n}T)^l=\sum_{l\geqslant1}\mu(X_l^0)T^l,\\\shortintertext{with} X_l^0=\{\phi\in(t\C[[t]])^n\mid\ord_tf(\phi)=l\},\qquad l\in\Zplusnul. \end{gather*} If $\phi\in X_l^0$, then $\ord_t\phi\in\Zplusnulbarn$ and $m(\ord_t\phi)\leqslant\ord_tf(\phi)=l<\infty$. Further, $\{\Dti\mid\tau\text{ is a compact face of }\Gf\}$ forms a partition of $\{k\in\Rplusnulbarn\mid m(k)<\infty\}$. Hence we may write each $X_l^0$ as the finite disjoint union \begin{equation*} X_l^0=\bigsqcup_{\tau}X_{\Zplusnulbarn\cap\Dti,l}=\bigsqcup_{\substack{\tau,\\P_{\tau}=\emptyset}}X_{\Zn\cap\Dtu,l}\sqcup\bigsqcup_{\substack{\tau,\\P_{\tau}\neq\emptyset}}X_{\Zplusnulbarn\cap\Dti,l}, \end{equation*} where all unions are over compact faces $\tau$ of \Gf, and $P_{\tau}=\{\rho\mid\tau\subset\{x_{\rho}=0\}\}$ as usual. By Corollaries~\ref{motcoroleen} and \ref{motcoroltwee}, all $X_{\Zn\cap\Dtu,l}$ and $X_{\Zplusnulbarn\cap\Dti,l}$ are cylindric subsets of $(t\C[[t]])^n$, which allows us to write $\mu(X_l^0)$ as the finite sum \begin{equation*} \mu(X_l^0)=\sum_{\substack{\tau,\\P_{\tau}=\emptyset}}\mu\bigl(X_{\Zn\cap\Dtu,l}\bigr)+\sum_{\substack{\tau,\\P_{\tau}\neq\emptyset}}\mu\bigl(X_{\Zplusnulbarn\cap\Dti,l}\bigr). \end{equation*} This leads to \begin{equation*} \Zmotoft=\sum_{\substack{\tau,\\P_{\tau}=\emptyset}}\sum_{l\geqslant1}\mu\bigl(X_{\Zn\cap\Dtu,l}\bigr)T^l+\sum_{\substack{\tau,\\P_{\tau}\neq\emptyset}}\sum_{l\geqslant1}\mu\bigl(X_{\Zplusnulbarn\cap\Dti,l}\bigr)T^l. 
\end{equation*} If $\tau$ is not contained in any coordinate hyperplane, then by Corollary~\ref{motcoroleen}, we have \begin{align} \sum_{l\geqslant1}\mu\bigl(X_{\Zn\cap\Dtu,l}\bigr)T^l&\notag\\ &\hspace{-1cm}=\sum_{l\geqslant1}\,\sum_{\substack{k\in\Zn\cap\Dtu,\\m(k)\leqslant l}}\mu(X_{k,l})T^l\in\MC[[T]]\label{motexpeen}\\ &\hspace{-1cm}=\sum_{k\in\Zn\cap\Dtu}\,\sum_{l\geqslant m(k)}\mu(X_{k,l})T^l\label{motexptwee}\\ &\hspace{-1cm}=\sum_{k\in\Zn\cap\Dtu}\mu\bigl(X_{k,m(k)}\bigr)T^{m(k)}+\sum_{k\in\Zn\cap\Dtu}\,\sum_{l\geqslant m(k)+1}\mu(X_{k,l})T^l.\notag \end{align} Replacing the motivic measures $\mu(\cdot)$ by the expressions found in Lemma~\ref{motlemmaeen}, we obtain \begin{align*} \sum_{l\geqslant1}\mu\bigl(X_{\Zn\cap\Dtu,l}\bigr)T^l&=\bigl((\LL-1)^n-[\mathcal{X}_{\tau}]\bigr)\LL^{-n}\sum_{k\in\Zn\cap\Dtu}\LL^{-\sigma(k)}T^{m(k)}\\ &\quad\,+[\mathcal{X}_{\tau}](\LL-1)\LL^{-n}\sum_{k\in\Zn\cap\Dtu}\LL^{-\sigma(k)}\sum_{l\geqslant m(k)+1}\LL^{m(k)-l}T^l, \end{align*} and since \begin{equation*} \sum_{l\geqslant m(k)+1}\LL^{m(k)-l}T^l=T^{m(k)}\sum_{l\geqslant1}\LL^{-l}T^l=T^{m(k)}\frac{\LL^{-1}T}{1-\LL^{-1}T}\in\MC[[T]], \end{equation*} we eventually find \begin{multline*} \sum_{l\geqslant1}\mu\bigl(X_{\Zn\cap\Dtu,l}\bigr)T^l\\ =\left(\bigl(1-\LL^{-1}\bigr)^n-\LL^{-n}[\mathcal{X}_{\tau}]\frac{1-T}{1-\LL^{-1}T}\right)\sum_{k\in\Zn\cap\Dtu}\LL^{-\sigma(k)}T^{m(k)}. \end{multline*} This last sum, denoted $S(\Dtu)$, can be calculated as follows. First choose a decomposition $\{\delta_i\}_{i\in I}$ of the cone \Dtu\ into simplicial cones $\delta_i$ without introducing new rays. Then $S(\Dtu)=\sum_{i\in I}S(\delta_i)$, whereby \begin{equation*} S(\delta_i)=\sum_{k\in\Zn\cap\delta_i}\LL^{-\sigma(k)}T^{m(k)} \end{equation*} for all $i\in I$. Next assume that the cone $\delta_i$ is strictly positively spanned by the linearly independent primitive vectors $v_j$, $j\in J_i$, in $\Zplusn\setminus\{0\}$. Then $\Zn\cap\delta_i$ equals the finite disjoint union $\bigsqcup_hh+\sum_{j\in J_i}\Zplus v_j$, where $h$ runs through the elements of \begin{equation*} \tilde{H}(v_j)_{j\in J_i}=\Z^n\cap\tilde{\lozenge}(v_j)_{j\in J_i}=\Z^n\cap\left\{\sum\nolimits_{j\in J_i}h_jv_j\;\middle\vert\;h_j\in(0,1]\text{ for all }j\in J_i\right\}. \end{equation*} Consequently, \begin{align*} S(\delta_i)&=\adjustlimits\sum_{h\in\tilde{H}(v_j)_j}\sum_{(\lambda_j)_j\in\Zplus^{\abs{J_i}}}\LL^{-\sigma\left(h+\sum_j\lambda_jv_j\right)}T^{m\left(h+\sum_j\lambda_jv_j\right)};\\\intertext{then exploiting the linearity\footnotemark\ of $m(\cdot)$ on $\overbar{\Dtu}\supset\delta_i$, we find} S(\delta_i)&=\sum_{h\in\tilde{H}(v_j)_j}\LL^{-\sigma(h)}T^{m(h)}\prod_{j\in J_i}\sum_{\lambda_j\geqslant0}\left(\LL^{-\sigma(v_j)}T^{m(v_j)}\right)^{\lambda_j}\\ &=\frac{\sum_{h\in\tilde{H}(v_j)_j}\LL^{-\sigma(h)}T^{m(h)}}{\prod_{j\in J_i}\bigl(1-\LL^{-\sigma(v_j)}T^{m(v_j)}\bigr)}\in\MC[[T]]. \end{align*}\footnotetext{Recall that for any $x\in\tau$ we have that $m(k)=k\cdot x$ for all $k\in\overbar{\Dtu}$. Note that since all $m(v_j)$ are positive, we indeed obtain an element of $\MC[[T]]$. The eventual formula for $\sum_{l\geqslant1}\mu\bigl(X_{\Zn\cap\Dtu,l}\bigr)T^l$ is thus \begin{equation*} \left(\bigl(1-\LL^{-1}\bigr)^n-\LL^{-n}[\mathcal{X}_{\tau}]\frac{1-T}{1-\LL^{-1}T}\right)\sum_{i\in I}\frac{\sum_{h\in\tilde{H}(v_j)_{j\in J_i}}\LL^{-\sigma(h)}T^{m(h)}}{\prod_{j\in J_i}\bigl(1-\LL^{-\sigma(v_j)}T^{m(v_j)}\bigr)}\in\MC[[T]], \end{equation*} as announced in the theorem. 
To rigorously prove that \eqref{motexpeen} equals this last expression in $\MC[[T]]$, in particular to defend the change of summation order in going from \eqref{motexpeen} to \eqref{motexptwee}, one compares the coefficients of $T^l$ in both elements and finds twice the same finite sum in $\MC$. From now suppose that $\tau$ is contained in at least one coordinate hyperplane; i.e., $P_{\tau}\neq\emptyset$. Let \Dtu\ be strictly positively spanned by the primitive vectors $e_{\rho},v_j$; $\rho\in P_{\tau},j\in J_{\tau}\neq\emptyset$; in $\Zplusn\setminus\{0\}$. Choose a decomposition $\{\delta_i\}_{i\in I}$ of \Dtu\ into simplicial cones $\delta_i$ without introducing new rays, and assume that $\delta_i$ is strictly positively spanned by the linearly independent primitive vectors $e_{\rho},v_j$; $\rho\in P_i,j\in J_i$; with $\emptyset\subset P_i\subset P_{\tau}$ and $\emptyset\varsubsetneq J_i\subset J_{\tau}$. Finally, put $\delta_i'=\tilde{\lozenge}(e_{\rho})_{\rho\in P_i}+\cone(v_j)_{j\in J_i}$ as before. We proceed as in the $P_{\tau}=\emptyset$ case. Corollary~\ref{motcoroltwee} yields \begin{align*} \sum_{l\geqslant1}\mu\bigl(X_{\Zplusnulbarn\cap\Dti,l}\bigr)T^l&\\ &\hspace{-2.6cm}=\sum_{l\geqslant1}\sum_{i\in I}\sum_{\substack{k\in\Zn\cap\delta_i',\\m(k)\leqslant l}}\mu(X_{k\vee P_i,l})T^l\in\MC[[T]]\\ &\hspace{-2.6cm}=\sum_{i\in I}\sum_{k\in\Zn\cap\delta_i'}\mu\bigl(X_{k\vee P_i,m(k)}\bigr)T^{m(k)}+\sum_{i\in I}\sum_{k\in\Zn\cap\delta_i'}\,\sum_{l\geqslant m(k)+1}\mu(X_{k\vee P_i,l})T^l. \end{align*} Then applying Lemma~\ref{motlemmatwee}, we find \begin{align*} &\,\sum_{l\geqslant1}\mu\bigl(X_{\Zplusnulbarn\cap\Dti,l}\bigr)T^l\\ &=\bigl((\LL-1)^{n-\abs{P_{\tau}}}-[\mathcal{X}_{\tau}']\bigr)\LL^{-n}\sum_{i\in I}(\LL-1)^{\abs{P_{\tau}}-\abs{P_i}}\LL^{\abs{P_i}}\sum_{k\in\Zn\cap\delta_i'}\LL^{-\sigma(k)}T^{m(k)}\\ &\hspace{.7cm}+[\mathcal{X}_{\tau}']\LL^{-n}\sum_{i\in I}(\LL-1)^{\abs{P_{\tau}}-\abs{P_i}+1}\LL^{\abs{P_i}}\sum_{k\in\Zn\cap\delta_i'}\LL^{-\sigma(k)}\sum_{l\geqslant m(k)+1}\LL^{m(k)-l}T^l\\ &=\left(\bigl(1-\LL^{-1}\bigr)^{n-\abs{P_{\tau}}}-\LL^{-(n-\abs{P_{\tau}})}[\mathcal{X}_{\tau}']\frac{1-T}{1-\LL^{-1}T}\right)\\ &\hspace{4.87cm}\cdot\sum_{i\in I}\bigl(1-\LL^{-1}\bigr)^{\abs{P_{\tau}}-\abs{P_i}}\sum_{k\in\Zn\cap\delta_i'}\LL^{-\sigma(k)}T^{m(k)}. \end{align*} This last double sum, which we denote by $S(\Dtu)'$, can be calculated in the same way as we calculated $S(\Dtu)$ in the $P_{\tau}=\emptyset$ case. We obtain \begin{equation*} S(\Dtu)'=\sum_{i\in I}\bigl(1-\LL^{-1}\bigr)^{\abs{P_{\tau}}-\abs{P_i}}\frac{\sum_h\LL^{-\sigma(h)}T^{m(h)}}{\prod_{j\in J_i}\bigl(1-\LL^{-\sigma(v_j)}T^{m(v_j)}\bigr)}\in\MC[[T]], \end{equation*} where $h$ runs through the elements of the set \begin{equation*} \tilde{H}(e_{\rho},v_j)_{\rho,j}=\Z^n\cap\left\{\sum\nolimits_{\rho\in P_i}h_{\rho}e_{\rho}+\sum\nolimits_{j\in J_i}h_jv_j\;\middle\vert\;h_{\rho},h_j\in(0,1]\text{ for all }\rho,j\right\}. \end{equation*} Note again that since all $m(v_j)$ are positive, we indeed find an element of $\MC[[T]]$. This concludes the proof of Theorem~\ref{formlocmotzf}. \end{proof} \subsection{A proof of the main theorem in the motivic setting}\label{finsspmtms} In this final subsection we explain why (and how) Theorem~\ref{mcmotivndss} can be proved in the same way as Theorem~\ref{mcigusandss}. Let us start with a small overview. Let $f$ be as in Theorem~\ref{mcmotivndss}. 
By the general rationality result of Denef--Loeser, we know that there exists a finite set $\widetilde{S}\subset\Zplusnul^2$ such that \begin{equation*} \Zmotoft\in\MC\left[\frac{\LL^{-\sigma}T^m}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in\widetilde{S}}\subset\MC[[T]]. \end{equation*} Our formula for non-degenerated $f$ (Theorem~\ref{formlocmotzf}), on the other hand, yields \begin{equation*} \Zmotoft\in\MC[T]\left[\frac{1}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in S}\subset\MC[[T]], \end{equation*} whereby \begin{multline}\label{defsetS} S=\{(1,1)\}\cup\{(m(v),\sigma(v))\mid\text{$v$ is the primitive vector associated to a}\\ \text{facet of \Gf\ that is not contained in any coordinate hyperplane}\}\subset\Zplusnul^2. \end{multline} Now we want to prove that there exists a subset $S'\subset S$ such that \begin{equation}\label{tbpfinmotmaintheo} \Zmotoft\in\MC[T]\left[\frac{1}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in S'}, \end{equation} and such that $e^{-2\pi i\sigma/m}$ is an eigenvalue of monodromy (in the sense of the theorem) for each $(m,\sigma)\in S'$. Let us introduce some notations and terminology. Consider the set $S$ from \eqref{defsetS}, and put $Q=\{\sigma/m\mid(m,\sigma)\in S\}\subset\Qplusnul$. Let $q\in Q$ and let $\tau$ be a facet of \Gf\ that is not contained in any coordinate hyperplane. We say that $\tau$ contributes to $q$ if $\sigma(v)/m(v)=q$, with $v$ the unique primitive vector in $\Zplusn\setminus\{0\}$ perpendicular to $\tau$. We shall call a ratio $q\in Q$ \emph{good} if \begin{itemize} \item $q=1$, \item or $q$ is contributed by a facet of \Gf\ that is not a $B_1$-facet, \item or $q$ is contributed by two $B_1$-facets of \Gf\ that are \underline{not} $B_1$ for a same variable and that have an edge in common. \end{itemize} We shall call $q\in Q$ \emph{bad} if $q$ is not good, i.e., if \begin{itemize} \item $q\neq1$; \item and $q$ is only contributed by $B_1$-facets of \Gf; \item and for any pair of contributing $B_1$-facets, we have that \begin{itemize} \item either they are $B_1$-facets for a same variable, \item or they have at most one point in common. \end{itemize} \end{itemize} Finally, we shall call a facet $\tau$ of \Gf\ \emph{bad} if it contributes to a bad $q\in Q$. This implies that $\tau$ is a $B_1$-facet. Let us now define \begin{equation*} S'=\{(m,\sigma)\in S\mid\sigma/m\text{ is good}\}\subset S. \end{equation*} Then by Theorem~\ref{theoAenL} and Proposition~\ref{propAenL} by Lemahieu and Van Proeyen, we know that $e^{-2\pi i\sigma/m}$ is an eigenvalue of monodromy for each $(m,\sigma)\in S'$. It remains to prove that \eqref{tbpfinmotmaintheo} holds for the $S'$ proposed above. The formula for \Zmotoft\ in Theorem~\ref{formlocmotzf} associates a term to every compact face $\tau$ of \Gf. If $\tau$ is not contained in any bad facet, then its associated term clearly belongs to \begin{equation*} \MC[T]\left[\frac{1}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in S'}. \end{equation*} Hence it suffices to consider the sum of the terms associated to bad $B_1$-simplices or compact subfaces of bad $B_1$-facets. We will refer to this sum as the relevant part of \Zmotoft. If we look at the formula carefully, we see that it is a rational expression (with integer coefficients) in \LL\ and $T$, except for the presence of $[\mathcal{X}_{\tau}]$ and $[\mathcal{X}_{\tau}']$ in $L_{\tau}$ and $L_{\tau}'$, respectively. Fortunately, for the relevant faces, these classes have a fairly simple form. For any vertex $V$, we have $[\mathcal{X}_V]=[\mathcal{X}_V']=0$. 
If $[CD]$ is any edge with one vertex in a coordinate hyperplane and the other vertex at distance one of this hyperplane, then $[\mathcal{X}_{[CD]}]=(\LL-1)^2$ if $[CD]$ is not contained in any coordinate hyperplane, and $[\mathcal{X}_{[CD]}']=\LL-1$ otherwise. Lastly, let $\tau_0$ be a $B_1$-simplex with a base\footnote{If a $B_1$-simplex $\tau_0$ has two vertices $A$ and $B$ in a coordinate hyperplane and one vertex at distance one of this hyperplane, then we shall call $[AB]$ a base of $\tau_0$. A $B_1$-simplex has by definition at least one base, but can have several.} $[AB]$. Then we have the relation \begin{equation}\label{beensconid} [\mathcal{X}_{\tau_0}]=(\LL-1)^2-[\mathcal{X}_{[AB]}']. \end{equation} Let us write down the contributions of $\tau_0$ and $[AB]$ to \Zmotoft. If we denote by $v_0$ the unique primitive vector in $\Zplusn\setminus\{0\}$ perpendicular to $\tau_0$, then \begin{align} &L_{\tau_0}S(\Delta_{\tau_0})+L_{[AB]}'S(\Delta_{[AB]})'\notag\\ &\ \ \;=\left[\bigl(1-\LL^{-1}\bigr)^3-\LL^{-3}\bigl((\LL-1)^2-[\mathcal{X}_{[AB]}']\bigr)\frac{1-T}{1-\LL^{-1}T}\right]\frac{\LL^{-\sigma(v_0)}T^{m(v_0)}}{1-\LL^{-\sigma(v_0)}T^{m(v_0)}}\notag\\ &\qquad\hspace{1.692cm}+\left[\bigl(1-\LL^{-1}\bigr)^2-\LL^{-2}[\mathcal{X}_{[AB]}']\frac{1-T}{1-\LL^{-1}T}\right]\frac{\LL^{-\sigma(v_0)-1}T^{m(v_0)}}{1-\LL^{-\sigma(v_0)}T^{m(v_0)}}\notag\\ &\ \ \;=\frac{\bigl(1-\LL^{-1}\bigr)^3}{1-\LL^{-1}T}\,\frac{\LL^{-\sigma(v_0)}T^{m(v_0)}}{1-\LL^{-\sigma(v_0)}T^{m(v_0)}}.\label{conaftcanc} \end{align} Like we observed in the $p$-adic case, Identity~\eqref{beensconid}, together with the fact that $\mult\Delta_{[AB]}=1$, causes the cancellation of $[\mathcal{X}_{[AB]}']$. We shall call \eqref{conaftcanc} the contribution of $\tau_0$ and $[AB]$ to \Zmotoft\ after cancellation. After these cancellations (one for every bad $B_1$-simplex), the relevant part of \Zmotoft\ is indeed a rational expression in \LL\ and $T$. More precisely, it is an element of the ring \begin{equation}\label{motringeen} \Z[\LL,\LL^{-1}][T]\left[\frac{1}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in S}\subset\MC[T]\left[\frac{1}{1-\LL^{-\sigma}T^m}\right]_{(m,\sigma)\in S}, \end{equation} whereby $\Z[\LL,\LL^{-1}]\subset\MC$ denotes the smallest subring of \MC\ containing $\Z,\LL$, and $\LL^{-1}$. We can now replace \LL\ by a new indeterminate $S$ and study the relevant part of \Zmotoft\ in the ring \begin{equation}\label{motringtwee} \Z[S,S^{-1}][T]\left[\frac{1}{1-S^{-\sigma}T^m}\right]_{(m,\sigma)\in S}\subset\Z[S,S^{-1}](T), \end{equation} where $\Z[S,S^{-1}]$ is the ring of formal Laurent polynomials over \Z. The advantage is that the coefficients of $T$ now live in the unique factorization domain $\Z[S,S^{-1}]$. There clearly exists a surjective ring morphism from \eqref{motringtwee} to \eqref{motringeen}; so if we can prove equality in \eqref{motringtwee}, equality in \eqref{motringeen} follows. The goal is now to prove that the relevant part of \Zmotoft\ (seen as an element in this new ring) also belongs to \begin{equation*} \Z[S,S^{-1}][T]\left[\frac{1}{1-S^{-\sigma}T^m}\right]_{(m,\sigma)\in S'}. \end{equation*} The advantage of working in a unique factorization domain is that we may now choose a bad $q\in Q$ randomly and restrict ourselves to proving that the relevant part of \Zmotoft\ is an element of \begin{equation*} \Z[S,S^{-1}][T]\left[\frac{1}{1-S^{-\sigma}T^m}\right]_{(m,\sigma)\in S\setminus S_q}, \end{equation*} with $S_q=\{(m,\sigma)\in S\mid\sigma/m=q\}\subset S$ and $S'\subset S\setminus S_q\subset S$. 
Indeed, if $\sigma_1/m_1\neq\sigma_2/m_2$, then $1-S^{-\sigma_1}T^{m_1}$ and $1-S^{-\sigma_2}T^{m_2}$ have no common irreducible factors in $\Z[S,S^{-1}][T]$. So from now on, $q$ is a fixed bad ratio in $Q$. We define a \emph{$q$-cluster} as a family $\mathcal{C}$ of (bad $B_1$-) facets contributing to $q$, such that for any two facets $\tau,\tau'\in\mathcal{C}$, there exists a chain $\tau=\tau_0,\tau_1,\ldots,\tau_t=\tau'$ of $B_1$-facets in $\mathcal{C}$ with the property that $\tau_{j-1}$ and $\tau_j$ share an edge for all $j\in\{1,\ldots,t\}$. A \emph{maximal $q$-cluster} is a $q$-cluster that is not contained in a strictly bigger one. Note that every facet contributing to $q$ is contained in precisely one maximal $q$-cluster. Also note that the supports\footnote{By the support of a $q$-cluster we mean the union of its facets.} of two distinct maximal $q$-clusters may share a vertex of \Gf, but never share an edge. Let $V$ be a vertex of \Gf, and let $\tau_j$, $j\in J$, be all the facets of \Gf\ that contain $V$. Denote for each $j\in J$, by $v_j$ the unique primitive vectors in $\Zplusn\setminus\{0\}$ perpendicular to $\tau_j$. Then $\Delta_V$ is the cone strictly positively spanned by the vectors $v_j$, $j\in J$. Let $\{\delta_i\}_{i\in I}$ be a decomposition of $\Delta_V$ into simplicial cones $\delta_i$ without introducing new rays, and assume that $\delta_i$ is strictly positively spanned by the vectors $v_j$, $j\in J_i$. We shall say that a cone $\delta_i$ \emph{meets} a $q$-cluster $\mathcal{C}$ if $\{\tau_j\mid j\in J_i\}\cap\mathcal{C}\neq\emptyset$. We shall call $\{\delta_i\}_{i\in I}$ a \emph{nice} decomposition if every $\delta_i$ meets at most one maximal $q$-cluster. By construction of the maximal $q$-clusters, a nice decomposition of $\Delta_V$ always exists. Let us now choose a nice decomposition $\{\delta_{V,i}\}_{i\in I_V}$ of $\Delta_V$ for every relevant vertex $V$ of \Gf. The relevant part of \Zmotoft\ contains a term for every such $V$. According to the formula in Theorem~\ref{formlocmotzf}, this term can be split up into terms, one for each simplicial cone $\delta_{V,i}$ in the decomposition of $\Delta_V$. Let $\mathcal{C}$ be a maximal $q$-cluster. We define the part of \Zmotoft\ associated to $\mathcal{C}$ as the sum of the following terms: \begin{itemize} \item for each $B_1$-simplex $\tau\in\mathcal{C}$ with chosen base $b_{\tau}$, the contribution of $\tau$ and $b_{\tau}$ to \Zmotoft\ after cancellation; \item the terms associated to the other compact edges of the $B_1$-facets in $\mathcal{C}$; \item the terms associated to the simplicial cones $\delta_{V,i}$ that meet $\mathcal{C}$. \end{itemize} Note that in this way no term is assigned to more than one maximal $q$-cluster. It follows that the relevant part of \Zmotoft\ is given by \begin{equation*} \sum_{\substack{\text{$\mathcal{C}$ maximal}\\\text{$q$-cluster}}}(\text{part of \Zmotoft\ associated to $\mathcal{C}$})+(\text{sum of remaining terms}). \end{equation*} By construction the sum of the remaining terms is certainly an element of \begin{equation}\label{motringdrie} \Z[S,S^{-1}][T]\left[\frac{1}{1-S^{-\sigma}T^m}\right]_{(m,\sigma)\in S\setminus S_q}. \end{equation} The problem is therefore reduced to proving that for every maximal $q$-cluster $\mathcal{C}$, the part of \Zmotoft\ associated to $\mathcal{C}$ belongs to \eqref{motringdrie}. A maximal $q$-cluster contains no more than two $B_1$-facets, otherwise $q$ would equal one (see Case~VI). 
Moreover, two $B_1$-facets belonging to the same maximal $q$-cluster, are always $B_1$ for a same variable, otherwise $q$ would be good. This leaves us five possible configurations of a maximal $q$-cluster $\mathcal{C}$; it consists of \begin{enumerate} \item one $B_1$-simplex $\tau_0$, \item or one non-compact $B_1$-facet $\tau_0$, \item or two $B_1$-simplices $\tau_0$ and $\tau_1$ for a same variable, \item or two non-compact $B_1$-facets $\tau_0$ and $\tau_1$ for a same variable, \item or one non-compact $B_1$-facet $\tau_0$ and one $B_1$-simplex $\tau_1$ for a same variable. \end{enumerate} Pictures can be found in Figures~\ref{figcase1}, \ref{figcase2}, \ref{figcase3}, \ref{figcase4}, and \ref{figcase5}, respectively. In Cases~(i) and (ii), the part of \Zmotoft\ associated to $\mathcal{C}$ has the form \begin{align*} \frac{N_1(T)}{(1-S^{-1}T)F_0F_1F_2}&\in\Z[S,S^{-1}][T]\left[\frac{1}{1-S^{-\sigma}T^m}\right]_{(m,\sigma)\in S}\subset\Z[S,S^{-1}](T),\\\intertext{while in Cases~(iii)--(v), it has the form} \frac{N_2(T)}{(1-S^{-1}T)F_0F_1F_2F_3}&\in\Z[S,S^{-1}][T]\left[\frac{1}{1-S^{-\sigma}T^m}\right]_{(m,\sigma)\in S}\subset\Z[S,S^{-1}](T). \end{align*} Hereby $N_1(T)$ and $N_2(T)$ are polynomials in $T$ with coefficients in $\Z[S,S^{-1}]$, and so are \begin{equation*} F_j=1-S^{-\sigma_j}T^{m_j}\subset\Z[S,S^{-1}][T];\qquad j=0,\ldots,3. \end{equation*} In Cases~(i) and (ii), the factor $F_0$ corresponds to $\tau_0$, while $F_1$ and $F_2$ correspond to neighbor facets\footnote{By a neighbor facet we mean a facet sharing an edge. A factor will appear in the denominator for every neighbor facet that does not lie in a coordinate hyperplane.}\addtocounter{footnote}{-1} of $\tau_0$. It follows that $\sigma_0/m_0=q$ and $\sigma_j/m_j\neq q$ for $j=1,2$. In Cases~(iii)--(v), factors $F_0$ and $F_1$ correspond to $\tau_0$ and $\tau_1$, where $F_2$ and $F_3$ come from neighbor facets\footnotemark\ of $\tau_0$ and $\tau_1$. We have $\sigma_0/m_0=\sigma_1/m_1=q$ and $\sigma_j/m_j\neq q$ for $j=2,3$. Finally everything boils down to proving that (depending on the case) \begin{equation}\label{motdivcond} F_0\mid N_1(T)\qquad\text{or}\qquad F_0F_1\mid N_2(T) \end{equation} in the polynomial ring $\Z[S,S^{-1}][T]$. As $F_0$ and $F_1$ are monic polynomials (in the sense that their leading coefficients are units of $\Z[S,S^{-1}]$), the divisibility conditions \eqref{motdivcond} can be investigated equivalently over the fraction field $\Q(S)$ of $\Z[S,S^{-1}]$. Now we can decide divisibility by looking at the roots of $F_0$ and $F_1$ in some algebraic closure of the coefficient field $\Q(S)$. We shall consider the field $\overbar{\Q}\{\{S\}\}$ of formal Puiseux series over the field $\overbar{\Q}$ of algebraic numbers. The polynomial $F_j=1-S^{-\sigma_j}T^{m_j}$ has $m_j$ distinct roots \begin{equation*} T_k^{(j)}=S^{\frac{\sigma_j}{m_j}}e^{\frac{2k\pi i}{m_j}};\qquad k=0,1,\ldots,m_j-1; \end{equation*} in $\overbar{\Q}\{\{S\}\}$ for $j=0,1$. Hence $F_0$ divides $N_1(T)$ if and only if $N_1\bigl(T_k^{(0)}\bigr)=0$ in $\overbar{\Q}\{\{S\}\}$ for all $k$. In Cases~(iii)--(v), we may conclude that $F_0F_1\mid N_2(T)$ as soon as $N_2\bigl(T_k^{(j)}\bigr)=0$ for all $k$ and $j=0,1$ \emph{and} $N_2'(T)$ vanishes in all common roots \begin{equation*} T_k^{(0,1)}=S^qe^{\frac{2k\pi i}{\gcd(m_0,m_1)}};\qquad k=0,1,\ldots,\gcd(m_0,m_1)-1; \end{equation*} of $F_0$ and $F_1$ in $\overbar{\Q}\{\{S\}\}$. 
The proof of each of the identities $N_1\bigl(T_k^{(0)}\bigr)=0$, $N_2\bigl(T_k^{(j)}\bigr)=0$, $N_2'\bigl(T_k^{(0,1)}\bigr)=0$ is identical to one of the \lq residue vanishing proofs\rq\ in Cases~I--V. For example, in Case~(iii) of the current proof, the proof of $N_2\bigl(T_k^{(j)}\bigr)=0$ for a simple root $T_k^{(j)}$ of $F_0F_1$ corresponds to the proof of $R_1=0$ in Case~I. For a double root $T_k^{(0,1)}$ of $F_0F_1$, the proofs of $N_2\bigl(T_k^{(0,1)}\bigr)=0$ and $N_2'\bigl(T_k^{(0,1)}\bigr)=0$ are completely analogous to the proofs of $R_2=0$ and $R_1=0$, respectively, in Case~III. This ends the sketch of the proof of the main theorem in the motivic setting.
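As a side check, not needed for the argument above, the cancellation identity \eqref{conaftcanc} can be verified symbolically. The sketch below uses Python with SymPy; it abbreviates $t=\LL^{-\sigma(v_0)}T^{m(v_0)}$ and treats $\LL$, $T$ and the class $[\mathcal{X}_{[AB]}']$ as free symbols, which is purely a convention introduced for this verification.
\begin{verbatim}
# Symbolic check of the cancellation identity (conaftcanc).
# L stands for \LL, X for the class [\mathcal{X}_{[AB]}'],
# and t abbreviates L^{-sigma(v_0)} T^{m(v_0)}.
import sympy as sp

L, T, X, t = sp.symbols('L T X t')

# contribution of the B_1-simplex tau_0
simplex = ((1 - 1/L)**3
           - L**-3*((L - 1)**2 - X)*(1 - T)/(1 - T/L)) * t/(1 - t)

# contribution of its base [AB]
# (extra factor L^{-1} since mult Delta_{[AB]} = 1)
base = ((1 - 1/L)**2
        - L**-2*X*(1 - T)/(1 - T/L)) * (t/L)/(1 - t)

# claimed result after cancellation of X
claimed = (1 - 1/L)**3/(1 - T/L) * t/(1 - t)

assert sp.simplify(simplex + base - claimed) == 0
\end{verbatim}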
\section{I. Introduction} Most materials expand upon heating, while those that shrink are much less common. Recent interest in these materials with negative thermal expansion (NTE) is also driven by technological applications that require materials with zero thermal expansion across a desired temperature range \cite{nte_review_1,nte_review_2,nte_review_3}. Even though NTE is an unusual phenomenon, it is relatively common for materials near structural phase transitions, and is typically associated with soft phonons and strong anharmonicity \cite{nte_review_1,nte_review_2,nte_review_3}. The Gr{\"u}neisen theory \cite{Bryce,Keblinski,PhysRevB.71.205214,thermexpSnSe,PhysRevB.94.054307,Arash_NTE} is the standard approach to calculate thermal expansion from first principles, using density functional theory. In this method, anharmonicity of the crystal potential is described via mode Gr{\"u}neisen parameters (GP's), which represent the changes of phonon frequencies with volume \citep{Bryce,Keblinski,PhysRevB.71.205214,thermexpSnSe,PhysRevB.94.054307,Arash_NTE}. Negative GP's of certain phonon modes are commonly identified as the source of NTE \cite{NTE1,NTE2, NTE3, NTE3, Arash_NTE}. Phonon frequencies and mode GP's are usually calculated using the harmonic approximation. First principles methods that describe phonon frequency renormalization due to anharmonicity have been recently developed, such as the self consistent harmonic approximation (SCHA) {\citep{SCPA}} and temperature dependent effective potentials (TDEP) \cite{STDEP}. These and related approaches were recently used to describe the negative thermal expansion of ScF$_{3}$~\cite{Ambroaz} and Si~\cite{PNAS_Hellman_2018}. In principle, these methods are capable of modeling thermal expansion of materials near phase transitions. However, to the best of our knowledge, no previous work has investigated this possibility. GeTe is the simplest ferroelectric material that exhibits NTE near the phase transition \cite{main, newmain, Marchenkov1994, abrikosov}. This makes it an ideal test case for identifying the physical effects leading to NTE. At low temperatures, GeTe crystallizes in a rhombohedral structure \cite{main,newmain,Goldack}, characterized by the Te internal atomic displacement along the [111] direction from its high symmetry position in the rocksalt phase, (0.5,0.5,0.5) in reduced coordinates, see Fig.~\ref{fig1}. The angle between the primitive lattice vectors of the rhombohedral structure also differs from 60$^{\circ}$ for the rocksalt phase. GeTe experiences a structural phase transition from a rhombohedral to the rocksalt structure at $\sim 600-700$ K depending on the carrier concentration \cite{abrikosov}. This phase transition is mediated by softening of the zone center transverse optical (TO) mode \cite{STEIGMEIER19701275,Wdowik}, which corresponds to the frozen-in Te internal atomic displacement along the [111] axis. \begin{figure}[h] \begin{center} \includegraphics[width = 0.49\textwidth]{fig1.eps} \end{center} \caption{Primitive unit cell of GeTe at (a) 0 K and (b) above the Curie temperature, generated using \textsc{VESTA} software \cite{VESTA}. 
The low temperature rhombohedral structure becomes more similar to the rocksalt structure as temperature increases: the angle between the primitive lattice vectors $\theta$ becomes closer to 60$^{\circ}$ and the Te internal atomic position $(\tau,\tau,\tau)$ approaches $(0.5,0.5,0.5)$ in reduced coordinates.} \label{fig1} \end{figure} The proximity to the ferroelectric phase transition also makes GeTe a very good thermoelectric material, either in the pure \cite{levin,gete-jacs,yaniv-gete-jap-16,biswas-gete-rev,natureasia-gete,pei-joule-gete,ZT_PT_PNAS_2018} or alloyed form \cite{GeSbte1,GeSbte2,GeSbte3, TAGS1,TAGS2,TAGS3, TAGS4, leadalloy1, ronanleadalloy}. Its soft TO modes interact strongly with acoustic modes which carry most heat, thus leading to the low lattice thermal conductivity \cite{ronanleadalloy} and the high thermoelectric figure of merit. The same mechanism is responsible for the exceptionally low lattice thermal conductivity of PbTe \cite{ssc148-417,nmat10-614,prb85-155203,ronanscatt}. GeTe can be driven closer to the soft TO mode phase transition not only by changing the temperature but also by alloying with PbTe \cite{pbgete}. We have recently shown that the acoustic-TO coupling is strongest for those (Pb,Ge)Te alloy compositions that are very near the phase transition, and leads to the minimal lattice thermal conductivity when mass disorder is neglected \cite{ronanleadalloy}. In this paper, we present a first principles method to compute the thermal expansion of the rhombohedral phase of GeTe up to the Curie temperature. We calculate the structural parameters by minimizing the total free energy with respect to each structural parameter in the spirit of the Gr{\"u}neisen theory. We explicitly include internal atomic position as an independent variable in the minimization process. Although this effect was included to some extent in previous calculations of thermal expansion \cite{Bryce,PhysRevB.71.205214,thermexpSnSe,PhysRevB.94.054307,Arash_NTE} by relaxing atomic positions due to applied strain, this may not be sufficient for materials near phase transitions. Our approach enables us to determine the temperature dependence of the static elastic energy variations with structural parameters, which we find is the key to correctly describing the thermal expansion of GeTe near the phase transition. We show that our calculated thermal evolution of the structural parameters of GeTe agrees well with experiments. Negative volumetric thermal expansion of GeTe near the phase transition is also well described in our model. We find that the coupling between acoustic and soft TO modes is the dominant mechanism leading not only to the low lattice thermal conductivity of GeTe, as shown previously, but also to its NTE. \section{II. Method} We model the thermal expansion of rhombohedral GeTe using the ideas of the Gr{\"u}neisen theory within the elastic and harmonic approximations for the mechanical and vibrational properties of solids, respectively. A rhombohedral unit cell is defined with the primitive lattice vectors $a(b,0,c)$, $a(-\frac{b}{2},\frac{b\sqrt{3}}{2},c)$ and $a(-\frac{b}{2},-\frac{b\sqrt{3}}{2},c)$. Here $a$ is the lattice constant, and $b$ and $c$ are defined as: \begin{align*} b &= \sqrt{\frac{2}{3}(1-\cos\theta)}, \\ c &= \sqrt{\frac{1}{3}(1+2\cos\theta)}, \numberthis \end{align*} where $\theta$ is the angle between the primitive lattice vectors. The reduced atomic positions of GeTe within this unit cell are: ($0,0,0$) for the Ge atom and ($\tau ,\tau ,\tau $) for the Te atom.
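As a quick numerical illustration of this parametrization (a sketch added for clarity, not part of the derivation), one can check that the three primitive vectors defined above all have length $a$ and pairwise angle $\theta$; the input values below are the \textsc{GGA-PBE} structural parameters listed in Table~\ref{tb1}.
\begin{verbatim}
# Consistency check of the rhombohedral cell parametrization:
# the primitive vectors a*(b,0,c), a*(-b/2, b*sqrt(3)/2, c),
# a*(-b/2,-b*sqrt(3)/2, c) have length a and pairwise angle theta.
import numpy as np

a, theta = 4.381, np.deg2rad(57.776)   # GGA-PBE values of Table tb1

b = np.sqrt(2.0/3.0*(1.0 - np.cos(theta)))
c = np.sqrt(1.0/3.0*(1.0 + 2.0*np.cos(theta)))

a1 = a*np.array([ b,                   0.0,            c])
a2 = a*np.array([-b/2.0,  b*np.sqrt(3.0)/2.0,          c])
a3 = a*np.array([-b/2.0, -b*np.sqrt(3.0)/2.0,          c])

for v in (a1, a2, a3):
    assert np.isclose(np.linalg.norm(v), a)        # |a_i| = a
assert np.isclose(np.arccos(np.dot(a1, a2)/a**2), theta)  # angle = theta
print(np.dot(a1, np.cross(a2, a3)))   # unit-cell volume, ~56.4 A^3 here
\end{verbatim}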
The temperature dependence of these structural parameters is implicit. The Helmholtz total free energy of a rhombohedral crystal per unit cell is defined as \cite{srivastava}: \begin{align*} &F(a,\theta,\tau,T) = E_{el}(a,\theta,\tau) + F_{vib}(a,\theta,\tau,T), \numberthis \end{align*} where $E_{el}(a,\theta,\tau)$ and $F_{vib}(a,\theta,\tau,T)$ correspond to the static elastic and vibrational free energy at temperature $T$, respectively. The values of all the structural parameters at a certain temperature can be found by minimizing the total free energy with respect to each structural parameter $u$, $u\in (a,\theta,\tau)$: \begin{align*} &\frac{\partial F}{\partial u} =\frac{\partial E_{el}}{\partial u} + \frac{\partial F_{vib}}{\partial u} = 0. \numberthis \label{eq2} \end{align*} Within the harmonic approximation, vibrational free energy is given as \cite{srivastava}: \begin{align*} &F_{vib}=\sum_{\textbf{q},s}\left[ \frac{\hbar \omega _{s}(\textbf{q})}{2} + k_{B}T\text{ln}\left(1-\exp\left(-\frac{\hbar \omega _{s}(\textbf{q})}{k_{B}T}\right)\right)\right],\numberthis \end{align*} where $\omega _{s}(\textbf{q})$ is the phonon frequency of mode $s$ and wave vector $\textbf{q}$, and $k_B$ is the Boltzmann constant. The derivative of vibrational free energy with respect to one of the structural parameters $u$ reads: \begin{align*} \frac{\partial F_{vib}}{\partial u} =& -\frac{1}{u}\sum_{\textbf{q},s} \hbar \omega _{s}(\textbf{q}) \left(n(\omega _{s}(\textbf{q})) + \frac{1}{2}\right)\gamma ^u _{s} (\textbf{q}), \numberthis \label{eq4} \end{align*} where $n(\omega _{s}(\textbf{q}))$ is the Bose-Einstein occupation factor at temperature $T$ for a phonon with frequency $\omega _{s}(\textbf{q})$. We define the generalized Gr{\"u}neisen parameters with respect to each structural parameter as: \begin{align*} \gamma ^u _{s}(\textbf{q}) = -\frac{u}{\omega _{s}(\textbf{q})}\frac{\partial \omega _{s}(\textbf{q})}{\partial u}. \numberthis \label{eq5} \end{align*} We note that the generalized GP's $\gamma ^{u}_{s} (\mathbf{q})$ are computed without the relaxation of atomic positions with applied strain, in contrast to previous GP calculations~\cite{Bryce,Keblinski,PhysRevB.71.205214,thermexpSnSe,PhysRevB.94.054307}. This difference in the GP's definitions will lead to differences between our calculated GP values and those of prior work for GeTe~\cite{Bernuskoni}. Nonetheless, we account for the atomic relaxation effects via the generalized GP's $\gamma ^\tau _{s}(\textbf{q})$. This separation of variables allows us to explicitly track the coupling between the soft TO mode and strain, as we will show. Phonon frequencies and generalized Gr{\"u}neisen parameters can be computed either using the harmonic approximation, or accounting for the phonon frequency renormalization due to anharmonicity and the temperature variation of structural parameters. Here we calculate phonon frequencies and generalized GP's for the values of the structural parameters $a$, $\theta$ and $\tau$ at $0$ K. This is a reasonable approximation since only the soft TO modes close to the zone center will have a considerable temperature dependence in GeTe. We expect that the temperature induced renormalization of soft TO modes will have a substantial effect on thermal expansion only very close to the phase transition. The static elastic part of total free energy can be expanded in a Taylor series as: \begin{align*} E_{el} = E_{0} + \sum_{u} K_{u}\Delta u + \sum_{u,v} K_{uv} \Delta u\Delta v. 
\numberthis \label{eq6} \end{align*} $\Delta u$ and $\Delta v$ represent the small deviations of the structural parameters $u$ and $v$ from their equilibrium values for temperature $T$ ($u,v \in \{a,\theta,\tau\}$, $u \geq v$). We define the first and second order coefficients as the changes of $E_{el}$ with respect to the changes of structural parameters: $K_{u} = \frac{\partial E_{el}}{\partial u}$ and $K_{uv} =(1-\frac{1}{2} \delta _{uv} )\frac{\partial ^2 E_{el}}{\partial u \partial v}$. The relationship between these coefficients and elastic constants is discussed in Appendix A. The final form for the derivative of static elastic energy with respect to one of the structural parameters reads: \begin{align*} \frac{\partial E_{el}}{\partial u} =& K_{u} + \sum_{v} (1+\delta_{vu})K_{vu}\Delta v. \numberthis \label{eq7} \end{align*} Coefficients $K$ change with temperature due to the contribution of the higher order terms in the Taylor expansion of static elastic energy. If we label the changes of the structural parameters at temperature $T$ with respect to their values at 0 K as: \begin{align*} \centering \delta a &= a - a_{0}, \\ \delta \theta &= \theta - \theta _{0}, \numberthis \\ \delta \tau &= \tau - \tau _{0}, \end{align*} we can expand static elastic energy as: \begin{align*} \label{eq8} &E_{el} = \sum_{u,v} K^{0}_{uv} (\Delta u + \delta u)(\Delta v + \delta v) + \\ & \sum_{u,v,w} K^{0}_{uvw} (\Delta u + \delta u)(\Delta v + \delta v)(\Delta w + \delta w) \numberthis + \\ & \sum_{u,v,w,t} K^{0}_{uvwt} (\Delta u + \delta u)(\Delta v + \delta v)\times\\&(\Delta w + \delta w)(\Delta t + \delta t). \end{align*} $K^{0} _{uv}$, $K^{0} _{uvw}$ and $K^{0} _{uvwt}$ are the second, third and fourth order coefficients defined for the changes of structural parameters calculated at 0 K, and $\delta u\in \{\delta a,\delta \theta,\delta \tau\}$ ($u \ge v \ge w \ge t$). From Eqs.~(\ref{eq6}) and (\ref{eq8}), we obtain coefficients $K_{u}$ and $K_{uv}$ that depend on the changes $\delta u$ from the $0$ K values, e.g.: \begin{align*} \label{eq9} & K_{a}= 2K^{0} _{aa}\delta a + K^{0} _{a\tau}\delta \tau + K^{0} _{a\theta}\delta \theta + 3K^{0} _{aaa}\delta a^2 + \\ & 2(K^{0} _{aa\tau}\delta \tau + K^{0} _{aa\theta}\delta \theta)\delta a + K^{0} _{a\tau\tau}\delta \tau ^2 +\\ &K^{0} _{a\theta\theta}\delta \theta^2 + K^{0} _{a\theta\tau}\delta\theta\delta\tau + \text{terms with 4th order}~K^0, \numberthis \\ &K_{aa} = K^{0} _{aa} + 3K^{0} _{aaa}\delta a + K^{0} _{aa\theta}\delta \theta + K^{0} _{aa\tau} \delta \tau +\\ &6K^{0} _{aaaa}\delta a^2 + 3(K^{0} _{aaa\theta}\delta \theta + K^{0} _{aaa\tau} \delta \tau)\delta a + K^{0} _{aa\theta\theta}\delta \theta ^2 + \\ &K^{0} _{aa\tau\tau}\delta \tau ^2 + K^{0} _{aa\theta\tau}\delta \theta \delta \tau. \end{align*} The temperature dependence of elastic coefficients $K_{uv}$ described by Eq.~\eqref{eq9} is directly related to the strength of anharmonic interactions involving very long wavelength acoustic and optical phonons. We thus effectively capture the anharmonic coupling between different zone center phonon modes up to the second order, including that between acoustic and soft transverse optical modes. Anharmonicity of the generalized GP's is taken into account only in the lowest order. We will show that this treatment of anharmonic effects is sufficient to describe the NTE of GeTe near the phase transition. 
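To make the role of the coefficients $K^0$ concrete, the following sketch illustrates how they could be extracted from a grid of static DFT energies by a linear least-squares fit of Eq.~(\ref{eq8}). This is only a schematic outline rather than an actual implementation: the arrays \texttt{deltas} and \texttt{energies} and the routine \texttt{fit\_K0} are hypothetical names, and the energies are assumed to be given relative to the 0~K equilibrium structure.
\begin{verbatim}
# Schematic least-squares extraction of the coefficients K^0 of Eq. (eq8).
# deltas:   (N, 3) array of offsets (da, dtheta, dtau) from the 0 K structure
# energies: (N,) static DFT energies relative to the 0 K equilibrium energy
import itertools
import numpy as np

def fit_K0(deltas, energies, max_order=4):
    # keep all monomials da^i dtheta^j dtau^k with 2 <= i+j+k <= max_order,
    # i.e. the second- to fourth-order terms retained in Eq. (eq8)
    powers = [p for p in itertools.product(range(max_order + 1), repeat=3)
              if 2 <= sum(p) <= max_order]
    A = np.column_stack([np.prod(deltas**np.array(p), axis=1) for p in powers])
    coeffs, *_ = np.linalg.lstsq(A, energies, rcond=None)
    return dict(zip(powers, coeffs))

# e.g. the entry with key (1, 0, 1) plays the role of K^0_{a tau},
# i.e. the static acoustic-TO coupling term
\end{verbatim}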
Substituting Eqs.~(\ref{eq4}) and (\ref{eq7}) into Eq.~(\ref{eq2}), we obtain: {\small \setlength{\abovedisplayskip}{6pt} \setlength{\belowdisplayskip}{\abovedisplayskip} \setlength{\abovedisplayshortskip}{0pt} \setlength{\belowdisplayshortskip}{3pt} \begin{align*} \Delta u = \sum_{v} S_{vu} \left[ \sum_{\textbf{q},s}\hbar \omega _{s}(\textbf{q})\left( n(\omega _{s}(\textbf{q})) + \frac{1}{2}\right)\frac{\gamma^{v} _{s}(\textbf{q})}{v} - K_{v}\right]. \numberthis \label{eq10} \end{align*} } $S_{vu}$ are the elements of the matrix defined as an inverse of the matrix of coefficients $\hat{K}$: \begin{equation} \hat{K} = \begin{bmatrix} 2K_{aa} & K_{a\theta} & K_{a\tau} \\ K_{a\theta} & 2K_{\theta\theta} & K_{\theta\tau} \\ K_{a\tau} & K_{\theta\tau} & 2K_{\tau\tau} \end{bmatrix}. \end{equation} The matrix $\hat{S}$ is related to the compliance matrix which represents an inverse of the elastic constants matrix. We note that coefficients $K_u$ and $K_{uv}$ are functions of the structural parameter changes, $\delta a$, $\delta \theta$ and $\delta \tau$, see Eq.~(\ref{eq9}). We solve Eq.~(\ref{eq10}) for $\delta a$, $\delta \theta$ and $\delta \tau$ at each temperature by requiring that $\Delta u = 0$, which gives the thermal equilibrium structure. To do this, we construct an iterative solution as $\delta u_{i+1}=\delta u_i+\Delta u_i(\delta a_i, \delta \theta_i, \delta \tau_i)$, where $\Delta u_i$ is given by Eq.~(\ref{eq10}). This is iterated until $\Delta u_i\approx 0$. We note that the presented method to calculate thermal expansion is inexpensive and straightforward to implement. Its implementation requires: (i) the density functional theory (DFT) calculations of the phonon frequencies and generalized Gr{\"u}neisen parameters for the $0$ K values of the structural parameters, (ii) the calculation of the DFT energy surface for a range of structural parameter values, whose fitting gives coefficients $K^0$ (Eq.~(\ref{eq8})), and (iii) the iterative solution for $\delta a$, $\delta \theta$ and $\delta \tau$ in Eq.~(\ref{eq10}) until $\Delta a$, $\Delta \theta$ and $\Delta \tau$ become zero for a range of temperatures. Our approach for obtaining the thermal expansion of rhombohedral materials near soft optical mode phase transitions can be linked to the standard method based on the Gr{\"u}neisen theory \cite{Bryce,Keblinski}, as shown in Appendix A. The standard approach finds the minimum of the total free energy of the system with respect to strain, rather than structural parameters. It includes the influence of atomic positions on total free energy by accounting for their relaxation due to applied strain. Far from the phase transition, our method fully corresponds to the standard one. However, the standard approach does not track the temperature dependence of internal atomic position and the corresponding static elastic energy changes, which are important for the accurate description of thermal expansion near the phase transition. More details about these differences can be found in Appendix A. On the other hand, establishing the precise relationship between our method and statistical mechanics approaches \cite{Rabe,Rabe2,FerroelecFunct} is less straightforward and requires further study.
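The iterative solution of Eq.~(\ref{eq10}) described in step (iii) can be summarized in a few lines. The sketch below is illustrative only and is not the actual implementation: \texttt{phonon\_term}, \texttt{K\_first} and \texttt{K\_second} are hypothetical helper routines returning, respectively, the weighted Gr{\"u}neisen sums appearing in Eq.~(\ref{eq10}), the coefficients $K_u$, and the matrix $\hat{K}$, all evaluated at the current offsets via Eq.~(\ref{eq9}).
\begin{verbatim}
# Schematic self-consistent solution of Eq. (eq10) at temperature T.
import numpy as np

def solve_structure(T, phonon_term, K_first, K_second,
                    tol=1e-10, max_iter=200):
    d = np.zeros(3)                      # (da, dtheta, dtau) relative to 0 K
    for _ in range(max_iter):
        S = np.linalg.inv(K_second(d))   # \hat{S} = \hat{K}^{-1}, K symmetric
        rhs = phonon_term(T) - K_first(d)
        delta_u = S @ rhs                # Delta u of Eq. (eq10)
        d = d + delta_u                  # delta u_{i+1} = delta u_i + Delta u_i
        if np.max(np.abs(delta_u)) < tol:
            break                        # Delta u ~ 0: equilibrium structure at T
    return d
\end{verbatim}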
\section{III. Computational details} DFT calculations were performed using the plane wave basis set, the generalized gradient approximation with Perdew-Burke-Ernzerhof \cite{GGAPBE} parametrization (\textsc{GGA-PBE}) for the exchange-correlation potential and the Hartwigsen-Goedecker-Hutter (HGH) pseudopotentials \cite{HGHpseudo} as implemented in \textsc{ABINIT} code \cite{ABINIT}. For the ground state and static elastic energy calculations, we used a 32 Hartree energy cutoff for plane waves and a four shifted $12\times 12\times 12$ \textbf{k}-point grid for Brillouin zone sampling of electronic states. Harmonic interatomic force constants at zero temperature were calculated from Hellmann-Feynman forces obtained by the finite difference supercell approach using \textsc{PHONOPY} code \cite{phonopy}. Forces were computed using 128-atom supercells ($4\times4\times4$ rhombohedral unit cells) with a 24 Hartree cutoff and a four shifted $3\times 3 \times 3$ \textbf{k}-point grid. Phonon frequencies were calculated using a $20\times 20\times 20$ \textbf{q}-point grid for vibrational modes. We obtained generalized Gr{\"u}neisen parameters using a finite difference method, taking the finite displacement to be smaller than 1\% for $a$, and smaller than 1\% of the difference between the 0 K rhombohedral and high temperature rocksalt structures for $\theta$ and $\tau$. For the calculation of coefficients $K^0$ in Eq.~(\ref{eq8}), we parametrized the energy surface on uniform grids for the values of structural parameters $a$, $\theta$ and $\tau$ from the 0 K rhombohedral structure to the high temperature rocksalt structure. \section{IV. Results and discussion} We calculated the structural parameters of GeTe at 0 K using \textsc{DFT} and two different exchange-correlation functionals, local density approximation (\textsc{LDA}) \cite{HGHpseudo} and \textsc{GGA-PBE}, see Table \ref{tb1}. To our knowledge, the measured values of the structural parameters at zero temperature are not available. Nevertheless, it is likely that, as the temperature is reduced from 295 K to 0 K, the angle and internal atomic position would deviate further from the high-symmetry (rocksalt) values, and would agree better with the GGA-PBE calculation than with the LDA. Since our goal is to describe the temperature dependence of structural parameters near the phase transition, where internal atomic position plays a crucial role, we use the \textsc{GGA-PBE} functional in all further calculations. Our values of structural parameters are also in good agreement with previous DFT calculations \cite{PhysRevB.95.024311, Wdowik}. \begin{table}[h] \begin{center} \begin{tabularx}{0.5\textwidth}{ c | Y | Y | Y | Y } \hline \hline & $a$ [\r{A}] &$\theta$ [deg]&$\tau$&$V_{0}$ [\r{A}$ ^{3}$] \\ \hline \textsc{LDA} & 4.207 & 58.788 & 0.524 & 51.193 \\ \hline \textsc{GGA-PBE} & 4.381 & 57.776 & 0.530 & 56.420 \\ \hline Experiment (295 K) & 4.299 & 57.931 & 0.525 & 53.513 \\ \hline \hline \end{tabularx} \end{center} \caption{Lattice parameters of GeTe at 0 K, calculated using \textsc{LDA} and \textsc{GGA-PBE} functionals, and compared with experimental results \cite{main}. $a$ stands for lattice constant, $\theta$ for angle, and $\tau$ for internal atomic coordinate.} \label{tb1} \end{table} The phonon dispersion of GeTe at 0 K is given in Fig. \ref{fig2}(a), together with the experimental results for the frequencies of the Raman active zone center modes \cite{STEIGMEIER19701275, Fons}.
Large intrinsic concentrations of charge carriers in real GeTe samples (1- 20$\times$10$^{20}$ cm$^{-1}$\cite{natureasia-gete}) completely screen long range interactions \cite{STEIGMEIER19701275}. We roughly estimate this effect by setting Born effective charges to zero in the calculation of phonon frequencies (see dashed red lines in Fig.~\ref{fig2}(a)). To evaluate the importance of screening, we also neglect this effect in the phonon calculation by using Born effective charge values obtained using density functional perturbation theory (DFPT) (solid black lines in Fig.~\ref{fig2}(a)). Using both approaches, our calculated phonon frequencies at the zone center agree very well with experimental results \cite{STEIGMEIER19701275, Fons}. Fig.~\ref{fig2}(b) illustrates that our computed phonon densities of states (DOS) of GeTe at 0 K compare fairly well with experiments \cite{Wdowik, Pereira}. Since there are no appreciable differences in the calculated phonon DOS if we exclude or roughly include screening effects, we neglect screening in all further calculations \footnote{We verified explicitly that our treatment of screening produces a very small effect on the values of structural parameters with respect to the unscreened case. We expect that a more sophisticated treatment of screening will change these values more substantially, as observed experimentally in GeTe samples with different carrier concentrations~\cite{Marchenkov1994, abrikosov}.}. Our phonon dispersions of GeTe also agree well with a previous DFPT calculation \cite{PhysRevB.95.024311}. \begin{figure} \begin{minipage}{0.48\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig2a.eps} \end{center} \end{minipage} \begin{minipage}{0.48\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig2b.eps} \end{center} \end{minipage} \caption{(a) Phonon dispersion of GeTe calculated using \textsc{GGA-PBE} exchange-correlation functional neglecting and accounting for screening (solid black lines and dashed red lines, respectively). The frequencies of the zone centre Raman active modes were taken from the measurements of Ref.~\cite{STEIGMEIER19701275} (red circles) and Ref.~\cite{Fons} (green squares). (b) Phonon density of states of GeTe calculated neglecting and including screening (solid black line and dashed red line, respectively) and measured by Ref.~\cite{Wdowik} (blue circles and red squares) and Ref.~\cite{Pereira} (green triangles). The integral of the density of states over frequency is normalized to unity.} \label{fig2} \end{figure} \begin{figure}[ht!] \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig3a.eps} \end{center} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig3b.eps} \end{center} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig3c.eps} \end{center} \end{minipage} \caption{Structural parameters of GeTe as a function of temperature: (a) lattice constant $a$, (b) angle $\theta$, and (c) internal atomic coordinate $\tau$. Solid black lines represent our calculations. Red circles and blue squares correspond to the measurements of Refs.~\cite{main} and \cite{newmain}, respectively. 
Dashed black lines represent our calculations shifted by the difference between our calculated values and the experimental values of Ref.~\cite{main} at 300 K.} \label{fig3} \end{figure} The temperature dependence of all structural parameters of rhombohedral GeTe (lattice constant, angle and internal atomic coordinate $\tau$) are illustrated in Fig. \ref{fig3}. Solid lines represent our calculations, while symbols show the measurements of Refs.~\cite{main,newmain}. The experimental values were transformed from the pseudocubic to the rhombohedral unit cell for comparison with our results. The computed temperature variation of structural parameters is in good agreement with experiments, despite the small discrepancy between the \textsc{GGA-PBE} and the room temperature experimental structural parameters (see Table \ref{tb1}). Dashed lines in Fig.~\ref{fig3} represent our calculations shifted by the difference between our values and the experimental values of Ref.~\cite{main} at 300 K. The calculated temperature dependence of the zone center TO mode frequency (see Appendix B) is also in very good agreement with experiment \cite{STEIGMEIER19701275}. We highlight that all these agreements are obtained fully from first principles, without any empirical parameters. Our calculated structural parameters of rhombohedral GeTe show clear indications of the ferroelectric phase transition near 700 K, see Fig. \ref{fig3}. As temperature increases, the angle $\theta$ and the internal atomic coordinate $\tau$ tend to their high symmetry values, 60$^0$ and 0.5, respectively. Moreover, the temperature dependence of all structural parameters diverges from a linear behavior at high temperatures (500-700 K), which signals the proximity to the phase transition. The thermal evolution of the structural parameters of GeTe is correctly captured only when the total free energy is minimized with respect to all structural parameters, and the temperature dependence of coefficients $K_u$ and $K_{uv}$ defined in Eq.~(\ref{eq7}) is taken into account. Fig.~\ref{fig4} shows the comparison between the calculations obtained using our approach and the standard approach \cite{Bryce,Keblinski}, where the free energy is not minimized with respect to the internal atomic coordinate $\tau$ and elastic constants do not vary with temperature. Even though internal atomic position is relaxed as strain is applied in the standard method, this approach gives qualitatively very different trends compared to our model and experiments \cite{main,newmain}. These results highlight the importance of improving the standard method, to include the critical physical effects occurring near the phase transition, as shown here. \begin{figure}[ht!] \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig4a.eps} \end{center} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig4b.eps} \end{center} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig4c.eps} \end{center} \end{minipage} \caption{Structural parameters of GeTe as a function of temperature: (a) lattice constant, (b) angle, and (c) internal atomic coordinate. 
Solid black lines represent the results obtained using our approach, while dashed red lines correspond to the standard method (see text for full explanation).} \label{fig4} \end{figure} Most interestingly, GeTe exhibits negative volumetric thermal expansion near the phase transition at $\sim 700$ K, which has been observed experimentally \cite{main,newmain,abrikosov,Marchenkov1994} and reproduced in our calculations, see Fig.~\ref{fig5}(a). In contrast, the standard approach gives a positive volume expansion of GeTe in the whole temperature range considered. The volumetric contraction close to the phase transition is due to the NTE of the lattice constant shown in Fig.~\ref{fig3}(a). We note that the sign of the volumetric thermal expansion depends strongly on the exact composition of samples, as does the Curie temperature. Positive volumetric thermal expansion occurs in samples with more than $50.6$\% Te, as measured in Refs. \cite{Marchenkov1994, abrikosov}. Samples with less than $50.6$\% of Te exhibit NTE at the phase transition \cite{Marchenkov1994, abrikosov}, which is in agreement with our calculation for stoichiometric GeTe ($50\%$ Te). Analyzing all the physical quantities that determine the structural parameters (coefficients $K$ and generalized Gr{\"u}neisen parameters entering Eq.~(\ref{eq2})), we found that only $K_{a\tau}$, $K_{\theta\tau}$ and $K_{\tau\tau}$ change substantially near the phase transition.~(Elastic constants also vary considerably close to the transition, see Appendix C). $K_{a\tau}$ and $K_{\theta\tau}$ reflect static elastic energy variations with respect to simultaneous changes of the structural parameters related to acoustic strain ($a$ and $\theta$) and the TO mode ($\tau$). Consequently, $K_{a\tau}$ and $K_{\theta\tau}$ quantify acoustic-TO coupling, and indicate its large variation close to the phase transition. \begin{figure}[h!] \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig5a.eps} \end{center} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig5b.eps} \end{center} \end{minipage} \caption{(a) Volumetric thermal expansion of GeTe: our calculation (solid black line), experiment \cite{main} (red circles), and our calculation shifted by the difference between our and the experimental value at 300 K (dashed black line). (b) Computed volumetric thermal expansion including and neglecting acoustic-soft optical mode coupling, shown in solid black and dashed red lines, respectively.} \label{fig5} \end{figure} Acoustic-soft TO mode coupling that increases considerably near the phase transition causes the negative thermal expansion of GeTe. In our computational method, we can artificially turn off this coupling by setting $K_{a\tau}$ and $K_{\theta\tau}$ to zero, as shown in Fig.~\ref{fig5}(b). The volume calculated by neglecting acoustic-TO coupling does not exhibit a negative thermal expansion. We thus conclude that strong acoustic-TO phonon coupling is the origin of the NTE of GeTe at the phase transition. \begin{figure}[h!] 
\begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig6a.eps} \end{center} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig6b.eps} \end{center} \end{minipage} \caption{Temperature dependence of: (a) average generalized Gr{\"u}neisen parameters defined for each structural parameter ($a$ - lattice constant, $\theta$ - angle, $\tau$ - internal atomic coordinate), and (b) normalized compliance matrix elements (see text for full explanation).} \label{fig6} \end{figure} The most commonly cited cause of negative thermal expansion in the literature is a negative mode Gr{\"u}neisen parameter \cite{NTE1, NTE2, NTE3, NTE4}. Here we investigate the role of generalized Gr{\"u}neisen parameters in establishing the NTE of GeTe. We define average generalized Gr{\"u}neisen parameters for $u\in\{a,\theta,\tau\}$ as: \begin{align*} \left<\gamma ^{u}\right> =\frac{1}{\hbar \omega _{D}} \sum_{\textbf{q},s} \hbar\omega_{s}(\textbf{q})\left(n(\omega _{s}(\textbf{q})) + \frac{1}{2}\right)\gamma ^{u}_{s}(\textbf{q}), \numberthis \label{eq17} \end{align*} where $\omega _{D}$ is the Debye frequency \cite{debye}. The temperature dependence of $\left<\gamma ^{u}\right>$ is shown in Fig.~\ref{fig6}(a). Fig.~\ref{fig6}(b) illustrates the compliance elements that determine the value of the lattice constant in Eq.~(\ref{eq10}), normalized as $S_{aa}^*=S_{aa}/a^2$, $S_{a\theta}^*=S_{a\theta}/a\theta$, and $S_{a\tau}^*=S_{a\tau}/a\tau$. The linear temperature dependence of the average generalized Gr{\"u}neisen parameters stems from the Bose-Einstein occupation factor. In contrast, the compliance elements change dramatically with temperature near the phase transition, due to large temperature variations of $K_{a\tau}$, $K_{\theta\tau}$ and $K_{\tau\tau}$. Since the lattice constant expansion is proportional to $S_{aa}\left<\gamma ^{a}\right>+S_{a\theta}\left<\gamma ^{\theta}\right>+S_{a\tau}\left<\gamma ^{\tau}\right>$ (Eq.~(\ref{eq10})), its negative sign is partially due to negative $\left<\gamma ^{\tau}\right>$, which physically corresponds to the anharmonicity of the TO mode. Nevertheless, negative $\left<\gamma ^{\tau}\right>$ is not the main reason for NTE: it has to be accompanied by a large change of $S_{a\tau}$, i.e., large acoustic-TO coupling, so that the expansion becomes negative. Furthermore, $S_{a\theta}$ is also negative and its absolute value increases more rapidly at the phase transition, resulting in an additional negative contribution to thermal expansion. This analysis confirms the dominant role of acoustic-TO coupling in establishing the NTE of GeTe near the phase transition. We expect that this conclusion will remain valid even when the temperature dependence of phonon frequencies and generalized Gr{\"u}neisen parameters $\gamma ^{u}_{s}(\textbf{q})$ is accounted for. This would make the temperature changes of $\left<\gamma ^{\tau}\right>$ near the phase transition somewhat larger than those calculated here, due to the temperature variations of the frequencies of soft TO modes close to the zone center. There is an ongoing debate in the literature about the true nature of the phase transition in GeTe (displacive vs order-disorder). Our method directly applies only to a displacive phase transition. The experimental support for the displacive transition in GeTe was reported in Refs.~\cite{main, newmain,displejsiv}.
This is challenged by recent works of Fons \textit{et al.} \citep{Fons} and Matsunaga \textit{et al.} \cite{macunaga}, whose findings support the order-disorder picture. Our calculations show that the thermal expansion near the phase transition in GeTe can be well described with a purely displacive model. However, further investigation of order-disorder effects is needed for the complete description of the phase transition of GeTe. \section{V. Conclusion} We developed a first principles method that accurately describes the temperature dependence of all structural parameters for the rhombohedral phase of GeTe up to the Curie temperature of $\sim 700$ K. The key new features of our approach with respect to the standard method based on the Gr{\"u}neisen theory are the minimization of free energy with respect to all structural parameters, including internal atomic displacement, and the temperature dependence of static elastic energy. Our computed thermal expansion is in very good qualitative agreement with experiment. We showed that the coupling between acoustic and soft transverse optical modes is the main reason for the negative volumetric thermal expansion of GeTe near the phase transition. \begin{table*}[t] \begin{center} \begin{tabularx}{\textwidth}{ Y | c | Y | Y | Y | Y | Y } \hline \hline & $C_{11} + C_{12}$ [GPa] &$C_{13}$[GPa]&$C_{33}$[GPa]&$K_{aa}$[eV]&$K_{a\theta}$[eV] & $K_{\theta\theta}$[eV]\\ \hline DFPT & 114.756 & 29.962 & 63.899 & 72.764 & 74.514 & 27.243 \\ \hline Finite diff. DFT & 116.555 & 29.942 & 60.543 & 72.789 & 76.133 & 27.667 \\ \hline \hline \end{tabularx} \end{center} \caption{Calculated elastic constants of GeTe using density functional perturbation theory (DFPT), and density functional theory (DFT) combined with a finite difference method. $C_{11} + C_{12}$, $C_{13}$ and $C_{33}$ were calculated directly using DFPT, and transformed into $K_{aa}$, $K_{a\theta}$ and $K_{\theta\theta}$ using Eq.~(\ref{eq15}). $K_{aa}$, $K_{a\theta}$ and $K_{\theta\theta}$ were computed using DFT, and transformed into $C_{11} + C_{12}$, $C_{13}$ and $C_{33}$ by inverting Eq.~(\ref{eq15}).} \label{tb2} \end{table*} \section{Acknowledgements} This work was supported by Science Foundation Ireland (SFI) under Investigators Programme 15/1A/3160. We acknowledge the Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities. \section{Appendix A: Connection between our approach and standard approach to thermal expansion} Our approach for calculating the thermal expansion of rhombohedral materials can be linked to the standard method based on the Gr{\"u}neisen theory \cite{Bryce,Keblinski}. In contrast to our approach, the standard method minimizes the total free energy with respect to strain. Neglecting the contribution of internal atomic coordinate $\tau$, the elastic coefficients $K$ defined as the static elastic energy changes with respect to structural parameters in our method can be transformed into elastic constants. In the Voigt notation, the elastic matrix of a rhombohedral crystal reads: \begin{equation} \hat{C} = \begin{bmatrix} C_{11} & C_{12} & C_{13} & C_{14} & 0 & 0 \\ C_{12} & C_{11} & C_{13} & -C_{14} & 0 & 0 \\ C_{13} & C_{13} & C_{33} & 0 & 0 & 0 \\ C_{14} & -C_{14} & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{44} & C_{14} \\ 0 & 0 & 0 & 0 & C_{14} & C_{66} \end{bmatrix}, \end{equation} where $C_{66} = \frac{1}{2}(C_{11}-C_{12})$. 
Our coefficients $K$ can be converted to elastic constants via: \begin{align*} \label{eq15} K_{aa} =& C_{11} + C_{12} + 2C_{13} + C_{33}/2,\\ K_{\theta\theta} =& Q_{1}^2 (C_{11} + C_{12}) -2Q_{1}Q_{2}C_{13} + Q_{2}^2 C_{33}/2, \numberthis \\ K_{a\theta} =& 2Q_{1}(C_{11}+C_{12}) + 2(Q_{1}-Q_{2})C_{13} -Q_{2}C_{33}. \end{align*} The coefficients $Q_{1}$ and $Q_{2}$ represent the dilatation of the hexagonal structure parameters (lattice constants perpendicular and parallel to the [111] axis) for unit dilatation of angle: \begin{align*} Q_{1} =& \frac{\sin \theta}{2(1-\cos \theta)}, \\ Q_{2} =& \frac{\sin \theta}{1+2\cos \theta}. \numberthis \end{align*} We computed the elastic constant matrix $\hat{C}$ using density functional perturbation theory (\textsc{DFPT}) and \textsc{ABINIT} code, and transformed them into $K_{aa}$, $K_{a\theta}$ and $K_{\theta\theta}$ using Eq.~(\ref{eq15}). We also calculated $K_{aa}$, $K_{a\theta}$ and $K_{\theta\theta}$ using \textsc{DFT} and a finite difference method, and converted them into $\hat{C}$ by inverting Eq.~(\ref{eq15}). All elastic constants and coefficients $K$ obtained from \textsc{DFPT} and \textsc{DFT} calculations are in very good agreement, see Table \ref{tb2}. To the best of our knowledge, there are no reported experimental values for the elastic constants of GeTe. If we use the Voigt average for calculating the bulk modulus as: \begin{align*} 9B = 2(C_{11} + C_{12}) + 4C_{13} + C_{33} = 2K_{aa}, \numberthis \end{align*} we obtain the value of $B = 45.92$ GPa at 0~K, which is in a good agreement with the experimental value of 49.9 GPa at 300~K \cite{bulk}. We note that the elastic constants discussed above correspond to the clamped-ion elastic tensor $\hat{C}$, where the internal atomic coordinate is not relaxed in the presence of strain. We explicitly define the coefficients $K$ that take into account the relaxation of the internal atom: $K_{\tau\tau}$, $K_{a\tau}$, and $K_{\theta\tau}$. $K_{\tau\tau}$ represents the soft TO mode. $K_{a\tau}$ and $K_{\theta\tau}$ are related to the elements of the force-response internal-strain tensor as defined in Ref. \cite{intstrain}, and physically correspond to acoustic-soft optical mode coupling. Now we identify the main differences between our approach and the standard approach to thermal expansion in the case of materials near phase transitions. Mode GP's in the standard approach are computed as: \begin{align} \frac{d \omega _{\lambda} (\mathbf{q})}{d \epsilon _{de}} = \frac{\partial \omega _{\lambda} (\mathbf{q})}{\partial \epsilon _{de}} + \frac{\partial \omega _{\lambda} (\mathbf{q})}{\partial \tau}\frac{\partial \tau}{\partial \epsilon _{de}} \label{eq19} \end{align} where $\epsilon _{de}$ is a component of the strain tensor~\cite{Bryce,Keblinski}. We consider a simplified expression for the total free energy of a rhombohedral system: \begin{align} F_{tot} = K_{\tau\tau}\tau ^2 + K_{a\tau}a\tau + K_{\theta\tau}\theta\tau. \end{align} To find the value of $\tau$ at thermal equilibrium, we minimize this function with respect to $\tau$: \begin{align} \frac{\partial F_{tot}}{\partial \tau} &= 2K_{\tau\tau}\tau + K_{a\tau}a + K_{\theta\tau}\theta = 0, \\ \tau &= -\frac{K_{a\tau}a + K_{\theta\tau}\theta}{2K_{\tau\tau}}. \end{align} We estimate the terms that correspond to the term $\partial \tau/\partial \epsilon _{de}$ in Eq.~\eqref{eq19} by replacing $\epsilon _{de}$ with $a$ (or $\theta$): \begin{align} \frac{\partial \tau}{\partial a} = -\frac{K_{a\tau}}{2K_{\tau\tau}}. 
\end{align} The coefficient $K_{\tau\tau}$ corresponds to the zone center soft TO mode, and becomes zero at the phase transition. Our calculations show that $K_{a\tau}$ is finite at the phase transition. Consequently, the factor $\partial \tau/\partial a$ diverges at the phase transition. Our method captures the temperature dependence of elastic coefficients $K_{uv}$, $u,v\in \{a,\theta,\tau\}$, and thus the temperature dependence of the terms $\partial \tau/\partial a$ and $\partial \tau/\partial \theta$. In contrast, the standard method gives the corresponding terms only at 0~K. Both methods ignore the temperature dependence of the terms $\partial \omega _{\lambda} (\mathbf{q})/\partial \epsilon _{de}$ and $\partial \omega _{\lambda} (\mathbf{q})/\partial \tau$ in Eq.~\eqref{eq19}. We stress that the temperature dependence of elastic coefficients is critical for the description of the NTE of GeTe near the phase transition. This can be obtained straightforwardly by explicitly accounting for $\tau$ in the free energy minimization, as done in our method. \section{Appendix B: Soft optical mode frequency} Since $K_{\tau\tau}$ is the second derivative of total energy with respect to internal atomic coordinate, we can calculate the TO mode frequency using \cite{srivastava}: \begin{align} \omega _{TO} ^2 = \frac{2K_{\tau\tau}}{\mu a_{||}^2}, \end{align} where $\mu$ is reduced mass of the unit cell and $a_{||}$ is the length of the unit cell in the [111] direction. The temperature dependent elastic coefficient $K_{\tau\tau}$ is computed as: \begin{align} K_{\tau\tau} &= K^{0} _{\tau\tau} + 3K^{0} _{\tau\tau\tau}\delta \tau + K^{0} _{\tau\tau\theta}\delta \theta + K^{0} _{\tau\tau a} \delta a\nonumber\\ & + 6K^{0} _{\tau\tau\tau\tau}\delta \tau ^2 + 3(K^{0} _{\tau\tau\tau\theta}\delta \theta + K^{0} _{\tau\tau\tau a} \delta a)\delta \tau \\ &+ K^{0} _{\tau\tau\theta\theta}\delta \theta ^2 + K^{0} _{\tau\tau aa}\delta a ^2 + K^{0} _{\tau\tau\theta a}\delta \theta \delta a.\nonumber \end{align} Consequently, the anharmonic contribution to the zone center TO mode frequency is explicitly accounted for in our model up to the second order. The coefficients $K^{0} _{\tau\tau\tau}$ and $K^{0} _{\tau\tau\tau\tau}$ describe the anharmonicity of the soft TO mode energy potential. The coefficients such as $K^{0} _{\tau\tau a}$, $K^{0}_{\tau\tau\theta}$, $K^{0} _{\tau\tau\tau a}$ etc. describe anharmonic acoustic-soft TO mode coupling. We also account for the temperature dependence of $a_{||}$. As result, we can track the softening of TO mode as a function of temperature and compare it to measurements~\cite{STEIGMEIER19701275}, as shown in Fig.~\ref{fig8}. We find a very good agreement between our calculated TO frequency and experiment. \begin{figure}[h] \begin{center} \includegraphics[width=0.441\textwidth]{fig7.eps} \caption{TO mode frequency versus temperature: our calculation (solid black line) and experiment \cite{STEIGMEIER19701275} (red circles).} \label{fig8} \end{center} \end{figure} \section{Appendix C: Elastic constants near the phase transition} In our calculations, the values of all elastic constants have a steep change at the phase transition, which is in agreement with experimental observations in Sn$_{x}$Ge$_{1-x}$Te \cite{elastSn} and Pb$_{x}$Ge$_{1-x}$Te \cite{elastPb}. Fig. \ref{fig7} shows how $C_{11} + C_{12}$, $C_{13}$ and $C_{33}$ vary with temperature. $C_{11} + C_{12}$ increases rapidly at the phase transition, as observed in \cite{elastPb, elastSn}. 
Experimental values of $C_{13}$ and $C_{33}$ were not reported, but our calculations correctly capture their expected behaviour. In the high-symmetry rocksalt phase, $C_{33}$ and $C_{11}$, as well as $C_{12}$ and $C_{13}$, should have the same values. In the low-symmetry rhombohedral phase, $C_{33}$ has a lower value than $C_{11}$ (Table \ref{tb1}), so we expect $C_{33}$ to increase towards the phase transition to become equal to $C_{11}$. On the other hand, $C_{13}$ is larger than $C_{12}$, and it will decrease towards the phase transition to become equal to $C_{12}$. Both of these trends are observed in our results. We attempted to verify whether our calculated values of the elastic constants satisfy the Born criteria for mechanical stability: \begin{align*} &C_{11} - C_{12} > 0, \\ &C_{44} > 0, \numberthis \\ &C_{11} + 2C_{12} > 0. \end{align*} In our calculations, which are restricted to rhombohedral symmetry structures, we cannot separately calculate the elastic constants $C_{11}$ and $C_{12}$, and can only track their sum. Our \textsc{DFPT} calculation at 0~K gives $C_{11} \gg C_{12}$ ($C_{11} = 93.33$ GPa, $C_{12} = 21.43$ GPa). Since $C_{11}+C_{12}$ does not vary substantially with temperature, see Fig.~\ref{fig7}(a), it is likely that $C_{11}$ and $C_{12}$ individually exhibit a similar trend. This suggests that the relations $C_{11} \gg C_{12}>0$, $C_{11} - C_{12} > 0$ and $C_{11} + 2C_{12} > 0$ should remain valid up to the Curie temperature. We cannot track the elastic coefficient $C_{44}$, which is related to shear strain, since we do not allow symmetry-lowering strains. \begin{figure}[h!] \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig8a.eps} \end{center} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[width = 0.9\textwidth]{fig8b.eps} \end{center} \end{minipage} \caption{Elastic constants of GeTe as functions of temperature: (a) $C_{11} + C_{12}$, (b) $C_{13}$ (solid black line) and $C_{33}$ (dashed red line).} \label{fig7} \end{figure}
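For illustration, the following minimal Python sketch spells out the conversion of Eq.~(\ref{eq15}), the Voigt average, and the two Born relations that can be checked from these quantities. Only $C_{11}$ and $C_{12}$ are the 0~K \textsc{DFPT} values quoted above; $C_{13}$, $C_{33}$ and the rhombohedral angle are hypothetical placeholders inserted solely to exercise the formulas, and are not results of this work.
\begin{verbatim}
import numpy as np

# Illustrative inputs: C11, C12 are the 0 K DFPT values quoted in the text;
# C13, C33 and the rhombohedral angle theta are hypothetical placeholders.
C11, C12 = 93.33, 21.43            # GPa
C13, C33 = 25.0, 85.0              # GPa (placeholders)
theta = np.deg2rad(58.0)           # rad (placeholder rhombohedral angle)

# Dilatation of the hexagonal cell parameters per unit dilatation of angle.
Q1 = np.sin(theta) / (2.0 * (1.0 - np.cos(theta)))
Q2 = np.sin(theta) / (1.0 + 2.0 * np.cos(theta))

# Eq. (15): clamped-ion elastic constants -> coefficients K (in GPa).
K_aa = C11 + C12 + 2.0 * C13 + C33 / 2.0
K_thth = Q1**2 * (C11 + C12) - 2.0 * Q1 * Q2 * C13 + Q2**2 * C33 / 2.0
K_ath = 2.0 * Q1 * (C11 + C12) + 2.0 * (Q1 - Q2) * C13 - Q2 * C33

# Voigt average: 9B = 2(C11 + C12) + 4 C13 + C33 = 2 K_aa.
B = 2.0 * K_aa / 9.0

# Born relations that involve only C11 and C12.
stable = (C11 - C12 > 0.0) and (C11 + 2.0 * C12 > 0.0)

print(f"K_aa = {K_aa:.2f}, K_theta_theta = {K_thth:.2f}, K_a_theta = {K_ath:.2f}")
print(f"Voigt bulk modulus B = {B:.2f} GPa; Born relations satisfied: {stable}")
\end{verbatim}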
\section{} \label{app:pf} \subsection{Proof of Theorem~\ref{thm:chernoff}} \label{app:pfThmChernoff} \begin{proof} We present the proof of~\eqref{eqn:chernoffUpper}. The proof of~\eqref{eqn:chernoffLower} is similar and is omitted. For any $n\geq 2$, denote by $\mathcal{F}_n$ the $\sigma$-algebra generated by $Z_1, \ldots, Z_n$. For any $s>0$, $\eta>0$, and $n\geq 2$, we denote the following random variable \begin{align} W_n \triangleq \exp\left ( s\sum_{i= 1}^{n-1} \left (U_i Z_{i+1} - \eta U_i^2\right )\right ). \end{align} By the Chernoff bound, we have \begin{align} \prob{\hat{a}_{\text{ML}}(\bfU) - a \geq \eta} \leq \inf_{s>0}~\mathbb{E}\left[W_n\right]. \end{align} To compute $\mathbb{E}\left[W_n\right]$, we first consider the conditional expectation $\mathbb{E}\left[W_n|\mathcal{F}_{n-1}\right]$. Since $Z_n$ is the only term in $W_n$ that does not belong to $\mathcal{F}_{n-1}$, we have \begin{align} & \mathbb{E} \left[W_n\right] \notag\\ = & \mathbb{E}\left [ W_{n-1}\cdot\mathbb{E} \left [ \exp\left (s\left (U_{n-1}Z_n-\eta U_{n-1}^2\right )\right )|\mathcal{F}_{n-1} \right ]\right ] \label{steps:0}\\ = & \EX{ W_{n-1}\cdot \exp\left (\alpha_1 U_{n-1}^2 \right ) },\label{steps:1} \end{align} where $\alpha_1$ is the deterministic function of $s$ and $\eta$ defined in~\eqref{alpha1}, and~\eqref{steps:1} follows from the moment generating function of $Z_n$. To obtain a recursion, we then consider the conditional expectation $\mathbb{E}\left[W_{n-1}\cdot \exp\left (\alpha_1 U_{n-1}^2 \right ) | \mathcal{F}_{n-2} \right]$. Since $U_{n-1}^2$ and $U_{n-2}Z_{n-1}$ are the only two terms in $W_{n-1}\cdot \exp(\alpha_1 U_{n-1}^2)$ that do not belong to $\mathcal{F}_{n-2}$, we use the relation $U_{n-1} = aU_{n-2} + Z_{n-1}$ and we complete squares in $Z_{n-1}$ to obtain \begin{align} & W_{n-1}\cdot \exp\left (\alpha_1 U_{n-1}^2\right )\notag \\ = & W_{n-2}\cdot\exp\Bigg ( \alpha_1\left(Z_{n-1}+\left (a+ \frac{s}{ 2\alpha_1} \right )U_{n-2}\right)^2 + \notag \\ &\quad \left (a^2\alpha_1 - s\eta\right )U_{n-2}^2 - \alpha_1\left(a+\frac{s}{2\alpha_1}\right)^2 U_{n-2}^2\Bigg). \end{align} Furthermore, using the formula for the moment generating function of the noncentral $\chi^2$-distributed random variable \begin{align} \left(Z_{n-1}+\left(a+ \frac{s}{ 2\alpha_1}\right)U_{n-2}\right)^2 \end{align} with 1 degree of freedom, we obtain \begin{align} & \EX{ W_{n-1}\cdot \exp\left (\alpha_1 U_{n-1}^2\right ) } \notag \\ = & \frac{1}{\sqrt{1-2\sigma^2 \alpha_1}}\EX{W_{n-2}\cdot \exp\left (\alpha_2 U_{n-2}^2\right)}.\label{diff} \end{align} This is where our method diverges from Rantzer~\cite[Lem. 5]{rantzer2018concentration}, who chooses $s = \frac{\eta}{\sigma^2}$ and bounds $\alpha_2\leq \alpha_1$ (due to Property A4 in Appendix~\ref{app:seqA} below) in~\eqref{diff}. Instead, by conditioning on $\mathcal{F}_{n-3}$ in~\eqref{diff} and repeating the above recursion for another $n-2$ times, we compute $\EX{W_{n}}$ exactly using the sequence $\{\alpha_\ell\}$: \begin{align} \EX{W_{n}} = \exp\left ( -\frac{1}{2}\sum_{\ell = 1}^{n-1}\log\left (1-2\sigma^2\alpha_{\ell} \right )\right ). \end{align} If $s\not\in \mathcal{S}_n^+$, then by the definition of the set $\mathcal{S}_n^+$ we have $\EX{W_n} = +\infty$. Therefore, \begin{align} \inf_{s>0}~\mathbb{E}\left[W_n\right] = \inf_{s\in \mathcal{S}_n^+}~\mathbb{E}\left[W_n\right]. 
\end{align} \end{proof} \subsection{Properties of the Sequence $\alpha_{\ell}$} \label{app:seqA} We derive several important elementary properties of the sequences $\alpha_{\ell}$ and $\beta_{\ell}$. First, we consider $\alpha_{\ell}$. We find the two fixed points $r_1 < r_2$ of the recursive relation~\eqref{alphaEll} by solving the following quadratic equation in $x$: \begin{align} 2\sigma^2 x^2 + [a^2 + 2\sigma^2 s(a + \eta) - 1]x + \alpha_1 = 0. \label{eqn:sol1} \end{align} \subsubsection*{Property A1} \label{sec:pA1} For any $s>0$ and $\eta>0$,~\eqref{eqn:sol1} has two roots $r_1< r_2$, and $r_1<0$. The two roots $r_1$ and $r_2$ are given by \begin{align} r_1 &= \frac{-[a^2 + 2\sigma^2 (a+\eta)s - 1] - \sqrt{\Delta}}{4\sigma^2},\label{root1} \\ r_2 &= \frac{-[a^2 + 2\sigma^2 (a+\eta)s - 1] + \sqrt{\Delta}}{4\sigma^2},\label{root2} \end{align} where $\Delta$ denotes the discriminant of~\eqref{eqn:sol1}: \begin{align} \Delta &= 4\sigma^4 [(a+\eta)^2 - 1] s^2 + \notag \\ &\quad 4\sigma^2 [(a+\eta)(a^2 -1) + 2\eta] s+ (a^2 - 1)^2.\label{discriminantalpha} \end{align} \begin{proof} Note that the discriminant $\Delta$ satisfies \begin{align} \Delta > (a^2 - 1)^2 > 0, \label{positiveD} \end{align} where we used $a>1$. Then,~\eqref{root1} implies $r_1 < 0$. \end{proof} \subsubsection*{Property A2} \label{sec:pA2} For $ \frac{2\eta}{\sigma^2} \neq s>0$ and $\eta>0$, the sequence $\frac{\alpha_\ell - r_1}{\alpha_\ell - r_2}$ is a geometric sequence with common ratio \begin{align} q \triangleq \frac{[a^2 + 2\sigma^2 s(a+\eta)] + 2\sigma^2 r_1}{[a^2 + 2\sigma^2 s(a+\eta)] + 2\sigma^2 r_2}. \label{cr} \end{align} Furthermore, \begin{align} q \in (0,1), \label{crrange} \end{align} and it follows immediately that \begin{align} \alpha_{\ell} & = r_1 + \frac{(r_1 - r_2)\frac{\alpha_1 - r_1}{\alpha_1-r_2}q^{\ell-1}}{1 - \frac{\alpha_1 - r_1}{\alpha_1-r_2}q^{\ell-1}}, \label{eqn:expAlpha} \\ &= r_2 + \frac{r_2 - r_1}{ \frac{\alpha_1 - r_1}{\alpha_1-r_2}q^{\ell-1} - 1}. \label{eqn:expAlpha2} \end{align} \begin{proof} Using the recursion~\eqref{alphaEll} and the fact that $r_1$ and $r_2$ are the fixed points of~\eqref{alphaEll}, one can verify that $\frac{\alpha_\ell - r_1}{\alpha_\ell - r_2}$ is a geometric sequence with common ratio $q$ given by~\eqref{cr}. The relation~\eqref{crrange} is verified by direct computations using~\eqref{root1} and~\eqref{root2}. \end{proof} \subsubsection*{Property A3} \label{sec:pA3} For any $\frac{2\eta}{\sigma^2} \neq s>0$ and $\eta>0$, we have \begin{align} \lim_{\ell\rightarrow \infty}\alpha_{\ell} = r_1. \label{seqLim} \end{align} For $s = \frac{2\eta}{\sigma^2}$, we have $\alpha_{\ell} = 0 = r_2 > r_1,~\forall\ell \geq 1$. \begin{proof} The limit~\eqref{seqLim} follows from~\eqref{crrange} and~\eqref{eqn:expAlpha}. Plugging $s = \frac{2\eta}{\sigma^2}$ into~\eqref{alpha1} yields $\alpha_1 = 0$, which implies by~\eqref{alphaEll} that $\alpha_{\ell} = 0$ for $\ell\geq 1$. \end{proof} \subsubsection*{Property A4} \label{sec:pA4} For any $0 < s\leq \frac{2\eta}{\sigma^2}$, we have $\alpha_\ell < 0$ and $\alpha_{\ell}$ decreases to $r_1$ geometrically. For $s > \frac{2\eta}{\sigma^2}$,~\eqref{seqLim} still holds, but the convergence is not monotone: there exists an $\ell^\star\geq 1$ such that $\alpha_\ell > 0$ and increases to $\alpha_{\ell^\star}$ for $1\leq \ell \leq \ell^\star$; and $\alpha_\ell < 0$ and increases to $r_1$ for $\ell > \ell^\star$. 
\begin{proof} Due to~\eqref{eqn:expAlpha2}, the monotonicity of $\alpha_{\ell}$ depends on the signs of $r_2 - r_1$ and $\frac{\alpha_1 - r_1}{\alpha_1 - r_2}$. Note that $r_2 - r_1 > 0$ by Property A1. Plugging $x = \alpha_1$ into~\eqref{eqn:sol1}, we have \begin{align} (\alpha_1 - r_1)(\alpha_1 - r_2) = (a+\sigma^2 s)^2 \alpha_1. \label{pr} \end{align} Since for $0 < s\leq \frac{2\eta}{\sigma^2}$, we have $\alpha_1 < 0$ by~\eqref{alpha1}; we must also have $\frac{\alpha_1 - r_1}{\alpha_1 - r_2} < 0$ by~\eqref{pr}. Due to~\eqref{eqn:expAlpha} and~\eqref{eqn:expAlpha2}, this immediately implies that $\alpha_{\ell}$ decreases to $r_1$. Therefore, $\alpha_{\ell}\leq \alpha_1 <0,~\forall\ell\geq 1$. For any $s > \frac{2\eta}{\sigma^2}$, we have $\alpha_1 > 0$ and $\frac{\alpha_1 - r_1}{\alpha_1 - r_2} > 0$. In fact, since $r_1 < 0$, we have $\alpha_1 > r_2$, which implies $\frac{\alpha_1 - r_1}{\alpha_1 - r_2} > 1$. Therefore, the conclusion follows from~\eqref{eqn:expAlpha2}. \end{proof} \subsubsection*{Property A5} \label{sec:pA5} For any $\eta > 0$, the root $r_1$ in~\eqref{root1} is a decreasing function in $s > 0$. \begin{proof} Direct computations using~\eqref{root1},~\eqref{discriminantalpha} and the assumption that $a>1$. \end{proof} \subsection{Properties of the Sequence $\beta_{\ell}$} \label{app:seqB} The sequence $\beta_{\ell}$ is analyzed similarly, although it is slightly more involved than $\alpha_{\ell}$. We only consider $0<s\leq \frac{2\eta}{\sigma^2}$ in the rest of this section. We find the two fixed points $t_1 < t_2$ of the recursive relation~\eqref{betaEll} by solving the following quadratic equation in $x$: \begin{align} 2\sigma^2 x^2 + [a^2 + 2\sigma^2 s(-a + \eta) - 1]x + \beta_1 = 0. \label{eqn:sol2} \end{align} \subsubsection*{Property B1} \label{sec:pB1} For $s = \frac{2\eta}{\sigma^2}$, we have $\beta_{\ell} = 0,~\forall\ell\geq 1$. For any $\eta > 0$ and $0 < s\leq \frac{2\eta}{\sigma^2}$,~\eqref{eqn:sol2} has two distinct roots $t_1 < 0 < t_2$, given by \begin{align} t_1 &= \frac{-[a^2+2\sigma^2 s(-a +\eta) - 1] - \sqrt{\Gamma}}{4\sigma^2}, \label{t1} \\ t_2 &= \frac{-[a^2+2\sigma^2 s(-a +\eta) - 1] + \sqrt{\Gamma}}{4\sigma^2} \label{t2}, \end{align} where the discriminant $\Gamma$ of~\eqref{eqn:sol2} is \begin{align} \Gamma &= 4\sigma^4 [(-a+\eta)^2 - 1] s^2 + \notag \\ &\quad 4\sigma^2 [(-a+\eta)(a^2 -1) + 2\eta] s+ (a^2 - 1)^2. \label{discriminantbeta} \end{align} \begin{proof} We verify that $\Gamma > 0$ for any $\eta > 0$ and $0 < s\leq \frac{2\eta}{\sigma^2}$. The reason that $\Gamma > 0$ is not as obvious as~\eqref{positiveD} is due to the subtle difference between~\eqref{discriminantalpha} and~\eqref{discriminantbeta} in the negative sign of $a$. Note that $\Gamma$ in~\eqref{discriminantbeta} is a quadratic equation in $s$ and the discriminant of~$\Gamma$ is given by \begin{align} \gamma = 16\sigma^4 (2a\eta - a^2 + 1)^2 \geq 0. \end{align} Hence, in general,~\eqref{discriminantbeta} has two roots (distinct when $\eta\neq \frac{a^2 - 1}{2a}$) and $\Gamma$ could be positive or negative. However, an analysis of two cases $(-a+\eta)^2 -1 \geq 0$ and $(-a+\eta)^2 -1 <0$ reveals that $\Gamma > 0$ for any $\eta > 0$ and $0 < s\leq \frac{2\eta}{\sigma^2}$. Therefore,~\eqref{eqn:sol2} has two distinct roots $t_1 < t_2$ given in~\eqref{t1} and~\eqref{t2} above. From~\eqref{eqn:sol2}, we have $t_1t_2 = \frac{\beta_1}{2\sigma^2}$, which is negative for $0 < s\leq \frac{2\eta}{\sigma^2}$. Therefore, we have $t_1 < 0 < t_2$. 
\end{proof} \subsubsection*{Property B2} \label{sec:pB2} For any $\eta > 0$ and $0 < s\leq \frac{2\eta}{\sigma^2}$, the sequence $\frac{\beta_{\ell} - t_1}{\beta_{\ell} - t_2}$ is a geometric sequence with common ratio \begin{align} p \triangleq \frac{[a^2 + 2\sigma^2 s(-a+\eta)] + 2\sigma^2 t_1}{[a^2 + 2\sigma^2 s(-a+\eta)] + 2\sigma^2 t_2}. \label{crbeta} \end{align} In addition, for any $\eta > 0$ and $0 < s\leq \frac{2\eta}{\sigma^2}$, we also have \begin{align} p\in (0,1). \label{crbetarange} \end{align} It follows immediately that \begin{align} \beta_{\ell} & = t_1 + \frac{(t_1 - t_2)\frac{\beta_1 - t_1}{\beta_1-t_2}p^{\ell-1}}{1 - \frac{\beta_1 - t_1}{\beta_1-t_2}p^{\ell-1}}, \label{eqn:expBeta} \\ &= t_2 + \frac{t_2 - t_1}{ \frac{\beta_1 - t_1}{\beta_1-t_2}p^{\ell-1} - 1}. \label{eqn:expBeta2} \end{align} \begin{proof} Similar to that of Property A2 above for $\alpha_\ell$. \end{proof} \subsubsection*{Property B3} \label{sec:pB3} For any $\eta > 0$ and $0 < s\leq \frac{2\eta}{\sigma^2}$, we have $\beta_{\ell} \leq \beta_1 < 0$, and $\beta_{\ell}$ decreases to $t_1$ geometrically: \begin{align} \lim_{\ell\rightarrow \infty} \beta_{\ell} = t_1. \end{align} \begin{proof} This can be verified using~\eqref{eqn:expBeta} and~\eqref{eqn:expBeta2} by noticing that $t_2 - t_1 > 0$ and that for $0 < s\leq \frac{2\eta}{\sigma^2}$, \begin{align} (\beta_1 - t_1)(\beta_1 - t_2) = (a-\sigma^2 s)^2 \beta_1 < 0. \end{align} \end{proof} \subsubsection*{Property B4} \label{sec:pB4} For any constant $a>1$, with the two thresholds $\eta_1$ and $\eta_2$ defined in~\eqref{eta1} and~\eqref{eta2}, respectively, the following hold: \begin{enumerate} \item When $0< \eta \leq \eta_1$, the root $t_1$ in~\eqref{t1} is an increasing function in $s \in \mathcal{I}_\eta$. \item When $\eta \geq \eta_2$, $t_1$ is a decreasing function in $s \in \mathcal{I}_\eta$. \item When $\eta_1 < \eta < \eta_2$, $t_1$ is a decreasing function in $s \in (0, s^\star)$ and an increasing function in $s\in \left(s^\star, \frac{2\eta}{\sigma^2}\right)$, where $s^\star$ is the unique solution in the interval $\mathcal{I}_\eta$ to \begin{align} \frac{d t_1}{ds} \Big |_{s = s^\star} = 0,\label{eqn:dt1} \end{align} and $s^\star$ is given by \begin{align} s^\star \triangleq \frac{a\eta (\eta - \eta_1)}{\sigma^2 (1 - (\eta -a)^2)}.\label{def:sstar} \end{align} \end{enumerate} \begin{proof} Using~\eqref{t1} and~\eqref{discriminantbeta}, we compute the derivatives of $t_1$ as follows: \begin{align} \frac{d t_1}{ds} &= -\frac{\eta - a}{2} - \frac{1}{\sqrt{\Gamma}}\Bigg\{ \sigma^2 [(-a+\eta)^2 - 1]s \notag \\ & \quad\quad + \frac{1}{2} [(-a+\eta)(a^2 - 1) + 2\eta]\Bigg\}, \label{eqn:firstderivative}\\ \frac{d^2 t_1}{ds^2} &= \frac{\sigma^2(2a\eta - a^2 + 1)^2}{\Gamma^{\frac{3}{2}}}\geq 0.\label{eqn:doublederivative} \end{align} To simplify notation, denote by $L(s)$ the first derivative: \begin{align} L(s)\triangleq \frac{dt_1}{ds}(s). \end{align} From~\eqref{eqn:firstderivative}, we have \begin{align} L(0) = \frac{-a^2\left(\eta - \eta_1\right)}{a^2 - 1}, \end{align} and \begin{align} & L\left(\frac{2\eta}{\sigma^2}\right) = \notag \\ & \begin{cases} \frac{-2(2\eta - a)(\eta - \eta_2)(\eta - \eta_2')}{(a-2\eta)^2 - 1}, & \eta \in \left( 0, \frac{a-1}{2} \right)\cup \left( \frac{a+1}{2}, +\infty\right) \\ \frac{\eta}{1 - (a-2\eta)^2}, & \eta\in \left( \frac{a-1}{2} , \frac{a+1}{2}\right), \end{cases}\label{eqn:Lright} \end{align} where $\eta_2'$ is given by \begin{align} \eta_2'\triangleq \frac{3a - \sqrt{a^2 + 8}}{4}.
\label{def:T2p} \end{align} Since $L(s)$ is an increasing function in $s$ due to~\eqref{eqn:doublederivative}, to determine the monotonicity of $t_1$, we only need to consider the following three cases. a) When $L(0)\geq 0$, or equivalently, $0 < \eta \leq \eta_1$, we have $L(s)\geq 0$ for any $s\in\mathcal{I}_\eta$. Hence, $t_1$ is an increasing function in $s$. b) When $L\left(\frac{2\eta}{\sigma^2}\right) \leq 0$, we have $L(s)\leq 0$ for any $s\in\mathcal{I}_\eta$. Hence, $t_1$ is a decreasing function in $s$. We now show that $L\left(\frac{2\eta}{\sigma^2}\right) \leq 0$ is equivalent to $\eta \geq \eta_2$. When $\eta \in \left(\frac{a-1}{2}, \frac{a+1}{2}\right)$, we have $L\left(\frac{2\eta}{\sigma^2}\right) > 0$ by~\eqref{eqn:Lright} and $\eta > 0$. When $\eta \in \left( 0, \frac{a-1}{2} \right)\cup \left( \frac{a+1}{2}, +\infty\right)$, it is easy to see from~\eqref{eqn:Lright} that $L\left(\frac{2\eta}{\sigma^2}\right) \leq 0$ is equivalent to $\eta \in [\eta_2', a/2]\cup [\eta_2,+\infty)$. Hence, the equivalent condition for $L\left(\frac{2\eta}{\sigma^2}\right) \leq 0$ is $\eta \in [\eta_2,+\infty)$. c) When $L(0) < 0$ and $L\left(\frac{2\eta}{\sigma^2}\right) > 0$, or equivalently, $\eta \in (\eta_1, \eta_2)$, solving~\eqref{eqn:dt1} using~\eqref{eqn:firstderivative} yields~\eqref{def:sstar}. Since $L(s)$ is monotonically increasing due to~\eqref{eqn:doublederivative}, we know that $s^\star$ given by~\eqref{def:sstar} is the unique solution to~\eqref{eqn:dt1} in $\mathcal{I}_\eta$, and $L(s)\leq 0$ for $s\in (0, s^\star]$ and $L(s) > 0$ for $s\in (s^\star, 2\eta / \sigma^2 )$. \end{proof} \subsection{Proof of Lemma~\ref{lemma:limiting}} \label{app:pfLemLimiting} \begin{proof} We first show the monotone decreasing property. The set $\mathcal{S}_{n+1}^{+}$ contains all $s>0$ such that $a_1,...,a_n, a_{n+1}$ are all less than $1/2\sigma^2$, while the set $\mathcal{S}_{n}^{+}$ contains all $s>0$ such that $a_1,...,a_n$ are all less than $1/2\sigma^2$, hence $\mathcal{S}_{n+1}^{+} \subseteq \mathcal{S}_{n}^{+} $. The same argument yields the conclusion for $\mathcal{S}_n^-$. We then prove that $\mathcal{S}_{\infty}^+=\left(0, 2\eta / \sigma^2\right]$. Property A4 above in Appendix~\ref{app:seqA} implies that for any $0< s\leq 2\eta / \sigma^2$, we have $\alpha_{\ell} \leq 0< \frac{1}{2\sigma^2}$. Hence $ \left(0, 2\eta / \sigma^2\right] \subseteq\mathcal{S}_n^+$ for any $n\geq 1$. To show the other direction, it suffices to show that for any $s > \frac{2\eta}{\sigma^2}$, there exists $n\in\mathbb{N}$ such that $\alpha_n \geq \frac{1}{2\sigma^2}$. Let $\ell^\star$ be the integer defined in Property A4 above. Then, $\ell^\star$ satisfies the following two conditions \begin{align} &\frac{\alpha_1 - r_1}{\alpha_1 - r_2} q^{\ell^\star - 1} \geq 1, \label{ab1}\\ &\frac{\alpha_1 - r_1}{\alpha_1 - r_2} q^{\ell^\star} < 1.\label{bel1} \end{align} We show that $\alpha_{\ell^\star}\geq \frac{1}{2\sigma^2}$, which would complete the proof. Due to $r_2 - r_1 >0$, using~\eqref{eqn:expAlpha2} and~\eqref{bel1}, we have \begin{align} \alpha_{\ell^\star} &\geq r_2 + \frac{r_2 - r_1}{\frac{1}{q} - 1} \\ &= \frac{r_2 - r_1 q}{1 - q} \label{eqn:2sigmapre} \\ &= \frac{1}{2\sigma^2}, \label{eqn:2sigma} \end{align} where~\eqref{eqn:2sigma}~\footnote{It is pretty amazing that~\eqref{eqn:2sigma} is in fact an equality.} is by plugging~\eqref{root1},~\eqref{root2} and~\eqref{cr} into~\eqref{eqn:2sigmapre}. 
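As a numerical illustration of the identity~\eqref{eqn:2sigma} and of the behavior described in Property A4, the following Python sketch (not part of the proof) evaluates the closed form~\eqref{eqn:expAlpha2} for one choice of parameters with $s>\frac{2\eta}{\sigma^2}$; it takes $\alpha_1=\sigma^2 s^2/2-s\eta$, a choice consistent with the discriminant~\eqref{discriminantalpha}, since the defining equation of $\alpha_1$ is not restated here.
\begin{verbatim}
import numpy as np

# Illustrative parameters with s > 2*eta/sigma^2 (so alpha_1 > 0).
a, sigma2, eta, s = 1.5, 1.0, 0.2, 0.9
alpha1 = sigma2 * s**2 / 2.0 - s * eta   # assumed form of alpha_1 (hedged)

# Roots r1 < r2 of 2*sigma2*x^2 + [a^2 + 2*sigma2*s*(a+eta) - 1]*x + alpha1 = 0.
B = a**2 + 2.0 * sigma2 * s * (a + eta) - 1.0
sqrt_disc = np.sqrt(B**2 - 8.0 * sigma2 * alpha1)
r1, r2 = (-B - sqrt_disc) / (4.0 * sigma2), (-B + sqrt_disc) / (4.0 * sigma2)

# Common ratio q of the geometric sequence (alpha_l - r1)/(alpha_l - r2).
A = a**2 + 2.0 * sigma2 * s * (a + eta)
q = (A + 2.0 * sigma2 * r1) / (A + 2.0 * sigma2 * r2)

# Check of the identity (r2 - r1*q)/(1 - q) = 1/(2*sigma2).
print((r2 - r1 * q) / (1.0 - q), "vs", 1.0 / (2.0 * sigma2))

# alpha_l from the closed form: a non-monotone excursion above 1/(2*sigma2),
# followed by convergence to r1 (Properties A3 and A4).
ratio = (alpha1 - r1) / (alpha1 - r2)
alpha = lambda l: r2 + (r2 - r1) / (ratio * q**(l - 1) - 1.0)
print([round(alpha(l), 4) for l in (1, 2, 3, 5, 20)], " r1 =", round(r1, 4))
\end{verbatim}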
Finally, to show~\eqref{minus}, for any $0<s\leq 2\eta / \sigma^2$, we have $\beta_{\ell} \leq 0 < \frac{1}{2\sigma^2},~\forall \ell \geq 1$, hence $\left(0, 2\eta / \sigma^2\right] \subseteq \mathcal{S}_{\infty}^-$. The other direction cannot hold since there are many counterexamples, e.g., $a = 1.2$, $\sigma^2 = 1$, $\eta = 0.15$ and $s = 0.35 > \frac{2\eta}{\sigma^2}$, where the sequence $\beta_\ell$ increases monotonically to $t_1 \approx 0.0411 < \frac{1}{2\sigma^2}$. Hence, in this case, $0.35 \in \mathcal{S}_{\infty}^-$ but $0.35\not\in \left(0, \frac{2\eta}{\sigma^2}\right]$. \end{proof} \subsection{Proof of Theorem~\ref{thm:upperLDP}} \label{app:pfupperLDP} \begin{proof} Theorem~\ref{thm:chernoff} and Lemma~\ref{lemma:limiting} imply that for any $s\in\mathcal{I}_\eta$, \begin{align} \liminf_{n\rightarrow \infty} P^+(n, a, \eta) \geq \lim_{n\rightarrow \infty} \frac{1}{2n}\sum_{\ell = 1}^{n-1}\log (1 - 2\sigma^2\alpha_{\ell}). \end{align} Recall that $\alpha_{\ell}$ depends on $s$. By~\eqref{seqLim}, the continuity of the function $x\mapsto \log (1 - x)$ and the Ces{\`a}ro mean convergence, we have \begin{align} \liminf_{n\rightarrow \infty} P^+(n, a, \eta) \geq \frac{1}{2}\log(1 - 2\sigma^2 r_1), \label{pf:fixeds} \end{align} where $r_1$ depends on $s$ via~\eqref{root1}. Since~\eqref{pf:fixeds} holds for any $s\in\mathcal{I}_\eta$, using Property A5 in Appendix~\ref{app:seqA} above and supremizing~\eqref{pf:fixeds} over $s\in\mathcal{I}_\eta$, we obtain~\eqref{eqn:PplusL}. Specifically, the supremum of \eqref{pf:fixeds} over $s\in\mathcal{I}_\eta$ is achieved in the limit of $s$ going to the right end point $2\eta / \sigma^2$. Plugging $s = 2\eta / \sigma^2$ into~\eqref{root1}, we obtain the corresponding value for $r_1$: \begin{align} -\frac{(a+2\eta)^2 - 1}{2\sigma^2}, \end{align} which is further substituted into~\eqref{pf:fixeds} to yield~\eqref{eqn:PplusL}. Similarly, to show~\eqref{eqn:PminusL}, using Property B3 in Appendix~\ref{app:seqB} above, we have \begin{align} \liminf_{n\rightarrow \infty} P^-(n, a, \eta) \geq \sup_{s\in\mathcal{I}_\eta}\frac{1}{2}\log(1 - 2\sigma^2 t_1). \label{fixedss} \end{align} Then, by Property B4 in Appendix~\ref{app:seqB} above, the supermizer $s'$ in~\eqref{fixedss} is given by \begin{align} s' = \begin{cases} 0, & 0< \eta \leq \eta_1\\ s^\star,& \eta_1 < \eta < \eta_2\\ \frac{2\eta}{\sigma^2}, & \eta \geq \eta_2, \end{cases} \label{super} \end{align} where $s^\star$ is given by~\eqref{def:sstar}. Plugging~\eqref{super} into~\eqref{fixedss} yields~\eqref{eqn:PminusL}. Finally, the bound~\eqref{eqn:PL} follows from~\eqref{eqn:PplusL} and~\eqref{eqn:PminusL}, since \begin{align} & \prob{\lrabs{\hat{a}_{\text{ML}}(\bfU)-a}>\eta} \notag \\ =~& \prob{\LRB{\hat{a}_{\text{ML}}(\bfU)-a}>\eta} + \prob{\LRB{\hat{a}_{\text{ML}}(\bfU)-a} < -\eta} \end{align} and \begin{align} & \liminf_{n\rightarrow \infty} P(n, a, \eta)\notag \\ =~& \liminf_{n\rightarrow \infty}\min\left\{ P^+(n, a, \eta), P^-(n, a, \eta)\right\} \\ \geq ~& I^-(a, \eta). \end{align} \end{proof} \subsection{Proof of Theorem~\ref{thm:decreasingeta}} \label{app:pfdecreasingeta} \begin{proof} For any sequence $\eta_n$, the proof of Theorem~\ref{thm:chernoff} in Appendix~\ref{app:pfThmChernoff} above remains valid with $\alpha_{\ell}$ replaced by $\alpha_{n, \ell}$ defined in~\eqref{alphaElln} in Section~\ref{subsec:apade} above. We present the proof of~\eqref{eqn:decreasingBDp}, and omit that of~\eqref{eqn:decreasingBDm}, which is similar. 
In this regime, for each $n\geq 1$, the proof of Lemma~\ref{lemma:limiting} implies that \begin{align} \LRB{0, \frac{2\eta_n}{\sigma^2}} \subseteq \mathcal{S}_n^+. \end{align} Then, in~\eqref{eqn:chernoffUpper}, we choose \begin{align} s = s_n = \frac{\eta_n}{\sigma^2} \in \mathcal{S}_n^+. \label{choice:sn} \end{align} First, using~\eqref{root1}-\eqref{root2},~\eqref{cr} and the choice~\eqref{choice:sn}, we can determine the asymptotic behavior of quantities involved in determining $\alpha_{n, \ell}$ in~\eqref{eqn:expAlpha} and~\eqref{eqn:expAlpha2} (with $\eta$ replaced by $\eta_n$ and $s$ replaced by $s_n$), summarized in TABLE~\ref{table:dec}. \begin{table}[h!] \centering \begin{tabular}{||c | c | c | c | c | c ||} \hline $\alpha_1$ & $r_1$ & $r_2$ & $r_2 - r_1$ & $q$ & $-\frac{\alpha_1 - r_1}{\alpha_1 - r_2}$ \\ \hline $-\Theta(\eta_n^2)$ & $-\Theta(1)$ & $\Theta(\eta_n^2)$ & $\Theta(1)$ & $\Theta(1)$ & $\Theta(1/\eta_n^2)$\\ \hline \end{tabular} \caption{Order dependence in $\eta_n$ of the quantities involved in determining $\alpha_{n, \ell}$ in~\eqref{eqn:expAlpha} and~\eqref{eqn:expAlpha2}.} \label{table:dec} \end{table} We make two remarks before proceeding further. It can be easily verified from~\eqref{cr} that the common ratio $q$ is a constant belonging to $(0,1)$ and \begin{align} \lim_{\eta_n \rightarrow 0}~q = \frac{1}{a^2} \in (0, 1). \end{align} Hence, for all large $n$, $q$ is bounded by positive constants between 0 and 1. Besides, from~\eqref{root1}, we have \begin{align} \lim_{\eta_n\rightarrow 0} r_1 = -\frac{a^2 - 1}{2\sigma^2}. \label{eqn:limR1} \end{align} Second, from~\eqref{eqn:expAlpha},~\eqref{eqn:chernoffUpper} and the choice~\eqref{choice:sn}, we have \begin{align} & P^+(n, a, \eta_n) \geq \frac{n-1}{2n}\log\left (1 - 2\sigma^2 r_1\right ) + \\ & \quad \frac{1}{2n}\sum_{\ell = 1}^{n-1}\log\LRB{1 - \frac{2\sigma^2 (r_2 - r_1)}{1 - 2\sigma^2 r_1}\cdot \frac{\LRB{-\frac{\alpha_1 - r_1}{\alpha_1 - r_2}}q^{\ell - 1}}{1 + \LRB{-\frac{\alpha_1 - r_1}{\alpha_1 - r_2}}q^{\ell - 1}}},\notag \end{align} where $r_1, r_2$ and $q$ in this regime depend on $\eta_n$ with order dependence given in TABLE~\ref{table:dec} above. Using the inequality $\log(1 - x) \geq \frac{x}{x - 1},~\forall x\in (0,1)$, we have \begin{align} & P^+(n, a, \eta_n) \geq \frac{n-1}{2n}\log\left (1 - 2\sigma^2 r_1\right ) + \\ &\quad\frac{1}{2n}\sum_{\ell = 1}^{n-1}\frac{-1}{\frac{1-2\sigma^2 r_2}{2\sigma^2 (r_2 - r_1)} + \frac{1-2\sigma^2 r_1}{2\sigma^2 (r_2 - r_1)} \cdot \frac{1}{\LRB{-\frac{\alpha_1 - r_1}{\alpha_1 - r_2}}q^{\ell - 1}}}.\notag \end{align} Since $1 - 2\sigma^2 r_2 >0$ due to~\eqref{root2}, we can further bound $P^+(n, a, \eta_n)$ as \begin{align} P^+(n, a, \eta_n) & \geq \frac{n-1}{2n}\log(1 - 2\sigma^2 r_1) - \\ &~ \quad \frac{1}{n}\LRB{\sum_{\ell = 1}^{n-1} q^{\ell - 1}} \frac{2\sigma^2 (r_2 - r_1)}{1 - 2\sigma^2 r_1}\cdot \LRB{-\frac{\alpha_1 - r_1}{\alpha_1 - r_2}} \notag \\ & \geq \frac{n-1}{2n}\log(1 - 2\sigma^2 r_1) - \\ &~ \quad \frac{1}{n} \frac{2\sigma^2 (r_2 - r_1)}{(1 - 2\sigma^2 r_1)(1-q)}\cdot \LRB{-\frac{\alpha_1 - r_1}{\alpha_1 - r_2}} \notag \\ &~ = \frac{n-1}{2n}\log(1 - 2\sigma^2 r_1) - \frac{1}{n\Theta(\eta_n^2)}, \end{align} where in the last step we used the results in TABLE~\ref{table:dec}. Due to the assumption~\eqref{assumption:etan} on $\eta_n$ and~\eqref{eqn:limR1}, we obtain~\eqref{eqn:decreasingBDp}. 
\end{proof} \subsection{Proof of Theorem~\ref{thm:subgaussian}} \label{app:subG} \begin{proof} We point out the proof changes in generalizing our results to the sub-Gaussian case. There are two changes to be made in the proof of Theorem~\ref{thm:chernoff} in Appendix~\ref{app:pfThmChernoff} above: the equality from~\eqref{steps:0} to~\eqref{steps:1} is replaced by $\leq$ since $Z_n$ is $\sigma$-sub-Gaussian; the equality in~\eqref{diff} is replaced by $\leq$ due to~Lemma~\ref{lem:subG}. The rest of the proof for Theorem~\ref{thm:chernoff} remains the same for the sub-Gaussian case. Since Lemma~\ref{lemma:limiting} and Theorems~\ref{thm:upperLDP},~\ref{thm:decreasingeta} depend only on the properties of the sequences $\alpha_{\ell}$ and $\beta_{\ell}$, and~\eqref{eqn:chernoffUpper}-\eqref{eqn:chernoffLower} continue to hold for sub-Gaussian $Z_n$'s, the proofs of Lemma~\ref{lemma:limiting} and Theorems~\ref{thm:upperLDP},~\ref{thm:decreasingeta} remain exactly the same for the sub-Gaussian case. \end{proof} \section{} \label{app:PfNRDF} \subsection{Proof of Lemma~\ref{lemma:infdis}} \label{app:LemInfdis} \begin{proof} In view of~\eqref{eqn:Xi}, we take the variances of both sides of~\eqref{eqn:dtiexp} to obtain \begin{align} \mathbb{V}_U(d) = \limsup_{n\rightarrow \infty}~\frac{1}{2n}\sum_{i=1}^n\min\left[1,~\left(\frac{\sigma_{n,i}^2}{\theta_n}\right)^2\right]. \label{pfeqn:VD} \end{align} Note that $\lim_{n\rightarrow\infty}\theta_n = \theta$, where $\theta > 0$ is the water level given by~\eqref{eqn:RWD}. Applying Theorem~\ref{thm:LimitingThm} in Section~\ref{subsubsec:diff} to~\eqref{pfeqn:VD} with the function \begin{align} F(t) \triangleq \frac{1}{2} \min\left[1,~\left(\frac{\sigma^2}{\theta t}\right)^2\right], \label{pfeqn:Ft} \end{align} which is continuous at $t = 0$, we obtain~\eqref{eqn:dispersion}. \end{proof} \subsection{An Integral} \label{app:TwoIntegrals} We present the computation of an interesting integral that is useful in obtaining the value of $\mathbb{R}_{U}(d_{\mathrm{max}})$. \begin{lemma} \label{lemma:TwoIntegrals} For any constant $r \in [-1,1]$, it holds that \begin{align} \int_{-\pi}^{\pi} \log (1 - r\cos(w))~dw &= 4\pi \log \frac{\sqrt{1 + r} + \sqrt{1 - r}}{2}. \label{eqn:1stIntegral} \end{align} \end{lemma} \begin{proof} Denote \begin{align} I(r)\triangleq \int_{-\pi}^{\pi} \log (1 - r\cos(w))~dw. \end{align} By Leibniz's rule for differentiation under the integral sign, we have \begin{align} \frac{\mathrm{d} I(r)}{\mathrm{d}r} &= \int_{-\pi}^{\pi} \frac{\partial }{\partial r}\log (1 - r\cos(w))~dw \\ &= -2\cdot \int_{0}^{\pi} \frac{\cos w}{1 - r\cos w}~dw. \label{int_derivative} \end{align} With the change of variable $u = \tan\left(w / 2\right)$ and partial-fraction decomposition, we obtain the closed-form solution to the integral in~\eqref{int_derivative}: \begin{align} \frac{\mathrm{d} I(r)}{\mathrm{d}r} = \frac{2\pi}{r} - \frac{2\pi}{r\sqrt{1 - r^2}}. \label{eqn:dIR} \end{align} It can be easily verified by directly taking derivatives that the right-side of~\eqref{eqn:1stIntegral} is indeed the antiderivative of~\eqref{eqn:dIR}. \end{proof} \subsection{Derivation of $\mathbb{R}_{U}(d_{\mathrm{max}})$ in~\eqref{rdfdmax}} \label{app:rudmax} We present two ways to obtain~\eqref{rdfdmax}. The first one is to directly use~\eqref{rdfHA} in Section~\ref{subsubsec:diff}. For $\theta = \theta_{\max}$, we have $\mathbb{R}_{\text{K}}(d_{\mathrm{max}}) = 0$ in~\eqref{Kol}, then~\eqref{rdfdmax} immediately follows from~\eqref{rdfHA}. 
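Before turning to the second method, we record a quick numerical sanity check of Lemma~\ref{lemma:TwoIntegrals} (an illustrative Python sketch; it is not needed for the derivations):
\begin{verbatim}
import numpy as np

# Check: int_{-pi}^{pi} log(1 - r cos w) dw = 4 pi log((sqrt(1+r)+sqrt(1-r))/2).
N = 100000
w = -np.pi + (np.arange(N) + 0.5) * (2.0 * np.pi / N)   # midpoint grid
for r in (0.3, 0.7, 0.95):
    lhs = 2.0 * np.pi * np.mean(np.log(1.0 - r * np.cos(w)))
    rhs = 4.0 * np.pi * np.log((np.sqrt(1.0 + r) + np.sqrt(1.0 - r)) / 2.0)
    print(f"r = {r}: quadrature {lhs:.6f} vs closed form {rhs:.6f}")
\end{verbatim}
Setting $r = \frac{2a}{1+a^2}$ in~\eqref{eqn:1stIntegral} also recovers the identity $\frac{1}{\pi}\int_{0}^{\pi}\log(1+a^2-2a\cos(w))\,dw = 2\log a$ used in the proof of Lemma~\ref{lemma:eigScaling} below.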
The second method relies on~\eqref{eqn:RWR}. For $\theta = \theta_{\max}$, observe from~\eqref{eqn:RWR} that \begin{align} \mathbb{R}_{U}(d_{\mathrm{max}}) = \frac{1}{4\pi}\int_{-\pi}^{\pi}\log(g(w))~dw. \label{rdfdmaxint} \end{align} Then, computing the integral~\eqref{rdfdmaxint} using~Lemma~\ref{lemma:TwoIntegrals} in Appendix~\ref{app:TwoIntegrals} yields~\eqref{rdfdmax}. \subsection{Proof of Lemma~\ref{lemma:eigScaling}} \label{app:proof_eigScaling} \begin{proof} The bound~\eqref{eqn:eig2N} is obtained by partitioning $\mtx{F}' \mtx{F}$~\eqref{def:A} into its leading principal submatrix of order $n-1$ and then applying the Cauchy interlacing theorem to that partition, see~\cite[Lem. 1]{dispersionJournal} for details. To obtain~\eqref{eqn:eig1}, observe from~\eqref{eqn:prodMu} \begin{align} \mu_{n,1} = \left(\prod_{i = 2}^n \mu_{n,i}\right)^{-1}. \label{eqn:inv} \end{align} Combining~\eqref{eqn:inv} and~\eqref{eqn:eig2N} yields \begin{align} L_n \geq -\frac{1}{n}\log \mu_{n,1} \geq R_n, \label{eqn:twosums} \end{align} where \begin{align} L_n\triangleq \frac{1}{n}\sum_{i = 2}^{n} \log \xi_{n, i} \quad \text{and}\quad R_n \triangleq \frac{1}{n}\sum_{i = 1}^{n-1} \log \xi_{n-1, i}. \label{LnRn} \end{align} Plugging~\eqref{eqn:xini} into~\eqref{LnRn} and then taking the limit, we obtain \begin{align} \lim_{n\rightarrow\infty}L_n &= \lim_{n\rightarrow\infty} R_n \notag\\ &= \frac{1}{\pi}\int_{0}^{\pi} \log (1 + a^2 - 2a\cos(w))~dw \\ &= 2\log a, \label{limsum} \end{align} where the last equality is due to Lemma~\ref{lemma:TwoIntegrals} in Appendix~\ref{app:TwoIntegrals} above. In the rest of the proof, we obtain the following refinement of~\eqref{limsum}: for any $n\geq 1$, \begin{align} R_n &\geq 2\log a - \frac{c_1}{n}, \label{Rn}\\ L_n &\leq 2\log a + \frac{c_2}{n}, \label{Ln} \end{align} where $c_1$ and $c_2$ are the constants given by~\eqref{int:c1} and~\eqref{int:c2} in Lemma~\ref{lemma:eigScaling}, respectively. Then,~\eqref{eqn:eig1} will follow directly from~\eqref{eqn:twosums},~\eqref{Rn} and~\eqref{Ln}. The proofs of the refinements~\eqref{Rn} and~\eqref{Ln} are similar, and both are based on the elementary relations between Riemann sums and their corresponding integrals. We present the proof of~\eqref{Rn}, and omit that of~\eqref{Ln}. Note that the function $h(w)\triangleq \frac{1}{\pi}\log (1 + a^2 - 2a\cos(w))$ is an increasing function in $w\in [0, \pi]$, and its derivative is bounded above by $M_1 \triangleq \frac{2a}{\pi (a^2-1)}$ for any fixed $a > 1$. Therefore, from~\eqref{eqn:xini} and~\eqref{LnRn}, we have \begin{align} \left|R_n + \frac{1}{n}\log(a+1)^2 - \frac{1}{\pi}\int_{0}^{\pi} \log (g(w))~dw\right| \leq \frac{M_1\pi^2}{2n}, \end{align} and~\eqref{Rn} follows immediately. \end{proof} \subsection{Proof of Theorem~\ref{thm:nonasymEig}} \label{app:nonasymEig} \begin{proof} From Lemma~\ref{lemma:eigScaling}, we know that $\alpha' = 0 < \alpha $ (recall~\eqref{ma} and~\eqref{mprime}). Since $g(w)$ is an even function, we have \begin{align} I & \triangleq \frac{1}{2\pi} \int_{-\pi}^{\pi} F(g(w))~dw \\ & = \frac{1}{\pi} \int_{0}^{\pi} F(g(w))~dw. \end{align} Denote the maximum absolute value of $F$ over the interval~\eqref{interval_t} by $T>0$. It is easy to check that the function $F(g(w))$ is $2aL$-Lipschitz since $F(\cdot)$ is $L$-Lipschitz and the derivative of $g(w) $ is bounded by $2a$. 
For the following Riemann sum \begin{align} S_n \triangleq \frac{1}{n}\sum_{i = 1}^n F\left(g\left(\frac{i\pi}{n}\right)\right), \end{align} the Lipschitz property implies that \begin{align} \lrabs{S_n - I} \leq \frac{2aL}{\pi n}. \end{align} For $i\geq 2$, rewrite~\eqref{eqn:xini} and~\eqref{eqn:eig2N} as \begin{align} g\LRB{\frac{(i-1)\pi}{n}} \leq \mu_{n,i} \leq g\LRB{\frac{i\pi}{n+1}}. \label{pfeqn:mu} \end{align} Denote the sum in~\eqref{eqn:nonasymEig} as \begin{align} Q_n \triangleq \frac{1}{n}\sum_{i=1}^n F(\mu_{n,i}). \end{align} Then, separating $F(\mu_{n,1})$ from $Q_n$ and applying~\eqref{pfeqn:mu}, we have \begin{align} Q_n &\geq S_n - \frac{2T}{n}, \\ Q_n & \leq \frac{n+1}{n} S_{n+1} + \frac{3T}{n}. \end{align} Therefore, there is a constant $C_L>0$ depending on $L$ and $T$ such that~\eqref{eqn:nonasymEig} holds. \end{proof} \section{} \label{app:pfDispersion} We gather the frequently used notations in this section as follows. For any given distortion threshold $d>0$, \begin{itemize} \item let $\theta>0$ be the water level corresponding to $d$ in the limiting reverse waterfilling~\eqref{eqn:RWD}; \item for each $n\geq 1$, let $\theta_n$ be the water level corresponding to $d$ in the $n$-th order reverse waterfilling~\eqref{eqn:NRDF2}; \item let $d_n$ be the distortion associated to the water level $\theta$ in the $n$-th order reverse waterfilling~\eqref{eqn:NRDF2}. \end{itemize} For clarity, we explicitly write down the relations between $d$ and $\theta_n$, and between $d_n$ and $\theta$: \begin{align} d &= \frac{1}{n}\sum_{i=1}^n \min(\theta_n,~\sigma_{n,i}^2), \label{pfeqn:thetaprime}\\ d_n &= \frac{1}{n}\sum_{i=1}^n \min(\theta,~\sigma_{n,i}^2), \label{pfeqn:dprime} \end{align} where $\sigma_{n,i}^2$'s are given in~\eqref{eqn:sigmai}. Note that $d$ and $\theta$ are constants independent of $n$, while $d_n$ and $\theta_n$ are functions of $n$, and there is no direct reverse waterfilling relation between $d_n$ and $\theta_n$. Applying Theorem~\ref{thm:LimitingThm} in Section~\ref{subsubsec:diff} above to the function $t\mapsto \min(\theta, \sigma^2 / t)$, we have \begin{align} \lim_{n\rightarrow\infty} d_n = d, \label{eqn:ddn} \end{align} and \begin{align} \lim_{n\rightarrow\infty}\theta_n = \theta.\label{eqn:thetathetan} \end{align} Theorem~\ref{thm:nonasymEig} in Section~\ref{subsec:spectrum} then implies that the speed of convergence in~\eqref{eqn:ddn} and~\eqref{eqn:thetathetan} is in the order of $1/n$. \subsection{Expectation and Variance of the $\mathsf{d}$-tilted Information} \label{app:EV} \begin{proposition} \label{prop:approx} For any $d\in (0, d_{\mathrm{max}})$ and $n\geq 1$, let $d_n$ be defined in~\eqref{pfeqn:dprime} above. Then, the expectation and variance of the $\mathsf{d}$-tilted information $\jmath_{U_1^n}(U_1^n, d_n)$ at distortion level $d_n$ satisfy \begin{align} \lrabs{ \frac{1}{n} \mathbb{E}\left[\jmath_{U_1^n}(U_1^n, d_n)\right] - \mathbb{R}_{U}(d) } &\leq \frac{C_1}{n},\label{eqn:appEXP}\\ \lrabs{ \frac{1}{n} \mathbb{V}\left[\jmath_{U_1^n}(U_1^n, d_n)\right] - \mathbb{V}_{U}(d) } &\leq \frac{C_2}{n},\label{eqn:appVAR} \end{align} where $\mathbb{R}_{U}(d)$ is the rate-distortion function given in~\eqref{eqn:RWR}, $\mathbb{V}_{U}(d)$ is the informational dispersion given in~\eqref{eqn:dispersion} and $C_1$, $C_2$ are positive constants. 
\end{proposition} \begin{proof} Using the same derivation as that of~\eqref{eqn:dtiexp}, one can obtain the following representation of the $\mathsf{d}$-tilted information $\jmath_{U_1^n}(U_1^n, d_n)$ at distortion level $d_n$: \begin{align} \jmath_{U_1^n}(U_1^n, d_n) &= \sum_{i = 1}^n \frac{\min(\theta,~\sigma_{n, i}^2)}{2\theta}\LRB{\frac{X_i^2}{\sigma_{n,i}^2} - 1} + \notag \\ & \frac{1}{2}\sum_{i=1}^n \log \frac{\max(\theta,~\sigma_{n,i}^2)}{\theta}, \label{pfeqn:DTId} \end{align} where $X_1^n$ is the decorrelation of $U_1^n$ defined in~\eqref{decor}. Note that the difference between~\eqref{eqn:dtiexp} and~\eqref{pfeqn:DTId} is that $\theta_n$ is replaced by $\theta$. Using~\eqref{eqn:Xi} and taking expectations and variances of both sides of~\eqref{pfeqn:DTId}, we arrive at \begin{align} \frac{1}{n} \mathbb{E}\left[\jmath_{U_1^n}(U_1^n, d_n)\right] &= \frac{1}{2n}\sum_{i=1}^n \log\max\left(1,~ \frac{\sigma_{n,i}^2}{\theta}\right), \label{pfeqn:expsum}\\ \frac{1}{n} \var{\jmath_{U_1^n}(U_1^n, d_n)} &= \frac{1}{2n}\sum_{i=1}^n \min\left(1,~\frac{\sigma_{n,i}^4}{\theta^2}\right) \label{pfeqn:varsum}. \end{align} Applying Theorem~\ref{thm:nonasymEig} in Section~\ref{subsec:spectrum} to~\eqref{pfeqn:expsum} with the function $F_{\text{G}}(t)$ defined in~\eqref{func:gray} yields~\eqref{eqn:appEXP}. Similarly, applying Theorem~\ref{thm:nonasymEig} to~\eqref{pfeqn:varsum} with the function~\eqref{pfeqn:Ft} yields~\eqref{eqn:appVAR}. \end{proof} Proposition~\ref{prop:approx} is one of the key lemmas that will be used in both converse and achievability proofs. Proposition~\ref{prop:approx} and its proof are similar to those of~\cite[Eq. (95)--(96)]{dispersionJournal}. The difference is that we apply Theorem~\ref{thm:nonasymEig}, which is the nonstationary version of~\cite[Th. 4]{dispersionJournal}, to a different function in \eqref{pfeqn:expsum}. \subsection{Approximation of the $\mathsf{d}$-tilted Information} \label{app:appDTI} The following proposition gives a probabilistic characterization of the accuracy of approximating the $\mathsf{d}$-tilted information $\jmath_{U_1^n} \left(U_1^n, d \right)$ at distortion level $d$ using the $\mathsf{d}$-tilted information $\jmath_{U_1^n} \left(U_1^n, d_n \right)$ at distortion level $d_n$. \begin{proposition} \label{prop:gapDTI} For any $d \in (0, d_{\mathrm{max}})$, there exists a constant $\tau>0$ (depending on $d$ only) such that for all $n$ large enough \begin{align} \mathbb{P}\left[\left|\jmath_{U_1^n} \left(U_1^n, d \right) - \jmath_{U_1^n} \left(U_1^n, d_n \right)\right| > \tau \right] \leq \frac{1}{n}, \end{align} where $d_n$ is defined in~\eqref{pfeqn:dprime}. \end{proposition} \begin{proof} The proof in~\cite[App. D-B]{dispersionJournal} works for the nonstationary case as well, since the proof~\cite[App. D-B]{dispersionJournal} only relies on the convergences in~\eqref{eqn:ddn} and~\eqref{eqn:thetathetan} being both in the order of $1/n$, which continues to hold for the nonstationary case. \end{proof} \begin{remark} The following high probability set is used in our converse and achievability proofs: \begin{align} \mathcal{A} \triangleq \lbpara{\left|\jmath_{U_1^n} \left(U_1^n, d \right) - \jmath_{U_1^n} \left(U_1^n, d_n \right)\right| \leq \tau}. \end{align} Proposition~\ref{prop:gapDTI} implies that $\mathbb{P}[\mathcal{A}]\geq 1 - 1 / n$ for all $n$ large enough. 
\label{rem:setE} \end{remark} \section{Converse Proof} \label{app:pfConverse} \begin{proof}[Proof of Theorem~\ref{thm:converse}] Using the general converse by Kostina and Verd{\'u}~\cite[Th. 7]{kostina2012fixed} and our established Propositions~\ref{prop:approx} and~\ref{prop:gapDTI} in Appendix~\ref{app:pfDispersion}, the proof is the same as the converse proof in the asymptotically stationary case~\cite[Th. 7, Eq. (97)--(109)]{dispersionJournal}. For completeness, we give a proof sketch. Choosing $\gamma = ( \log n ) / 2$ and setting $X$ to be $U_1^n$ in~\cite[Th. 7]{kostina2012fixed}, we know that any $(n, M, d, \epsilon)$ code for the Gauss-Markov source must satisfy \begin{align} \epsilon &\geq \prob{\jmath_{U_1^n}(U_1^n, d)\geq \log M + ( \log n ) / 2 } - \frac{1}{\sqrt{n}}. \end{align} By conditioning on the high probability set $\mathcal{A}$ defined in~Remark~\ref{rem:setE} above, we can further bound $\epsilon$ from below by \begin{align} & \left(1-\frac{1}{n}\right )\cdot \prob{\jmath_{U_1^n}(U_1^n, d_n)\geq \log M + (\log n ) / 2 + \tau } \notag \\ & \qquad\qquad\qquad \qquad\qquad\qquad \qquad\qquad\qquad - \frac{1}{\sqrt{n}}. \end{align} From~\eqref{pfeqn:DTId}, we know that $\jmath_{U_1^n}(U_1^n, d_n)$ is a sum of independent random variables, whose mean and variance are bounded (within the order of $1/n$ due to Proposition~\ref{prop:approx}) by the rate-distortion function $\mathbb{R}_{U}(d)$ and the informational dispersion $\mathbb{V}_U(d)$. Choosing $M$ as in~\cite[Eq. (103)]{dispersionJournal} and applying the Berry-Esseen theorem to $\jmath_{U_1^n}(U_1^n, d_n)$, we obtain the converse in Theorem~\ref{thm:converse}. \end{proof} \section{Achievability Proof} \label{app:pfAchievability} \begin{proof}[Proof of Theorem~\ref{thm:achievability}] With our lossy AEP for the nonstationary Gauss-Markov source and Propositions~\ref{prop:approx} and~\ref{prop:gapDTI}, the proof is similar to the one for the stationary Gauss-Markov source in~\cite[Sec. V-C]{dispersionJournal}. Here, we streamline the proof. As elucidated in Section~\ref{subsec:dispersion} above, the standard random coding argument~\cite[Cor. 11]{kostina2012fixed} implies that for any $n$, there exists an $(n, M, d, \epsilon')$ code such that \begin{align} \epsilon'\leq \inf_{P_{V_1^n}}~\mathbb{E}\lbrac{\exp\lpara{-M\cdot P_{V_{1}^n}(\mathcal{B}(U_1^n, d))}}. \label{app:rcbound} \end{align} Choosing $V_1^n$ to be $V_1^{\star n}$ (the random variable that attains the minimum in~\eqref{eqn:nRDF} with $X_1^n$ there replaced by $U_1^n$), the bound~\eqref{app:rcbound} can be relaxed to \begin{align} \epsilon'\leq \mathbb{E}\lbrac{\exp\lpara{-M\cdot P_{V_{1}^{\star n}}(\mathcal{B}(U_1^n, d))}}. \end{align} To simplify notations, in the following, we denote by $C$ a constant that might be different from line to line. Given any constant $\epsilon\in (0,1)$, define $\epsilon_n$ as \begin{align} \epsilon_n \triangleq \epsilon - \frac{C}{\sqrt{n}} - \frac{1}{q(n)} - \frac{1}{n}, \label{app:en} \end{align} where $q(n)$ is defined in~\eqref{eqn:qn} above. Note that for all $n$ large enough, we have $\epsilon_n\in (0, 1)$. We choose $M$ as \begin{align} \log M\triangleq~& n\mathbb{R}_{U}(d) + \sqrt{n \mathbb{V}_U(d)} Q^{-1}(\epsilon_n) + \notag \\ &\quad \log(\log n / 2) + p(n) + C + \tau, \label{app:logm} \end{align} where $p(n)$ is defined in~\eqref{eqn:pn} and $\tau$ is from Proposition~\ref{prop:gapDTI} above. 
We also define the random variable $G_n$ as \begin{align} G_n \triangleq \log M - \jmath_{U_1^n}(U_1^n, d_n) - p(n) - C - \tau, \end{align} where $d_n$ is defined in~\eqref{pfeqn:dprime} above. Note that all the randomness in $G_n$ is from $U_1^n$, hence we will also use the notation $G_n(u_1^n)$ to indicate one realization of the random variable $G_n$. By bounding the deterministic part, that is, $\log M$, of $G_n$ using Proposition~\ref{prop:approx}, we know that with probability 1, \begin{align} G_n\geq \mathbb{E} + Q^{-1}(\epsilon_n)\sqrt{\mathbb{V}} - \jmath_{U_1^n}(U_1^n, d_n) + \log(\log n / 2), \end{align} where we use $\mathbb{E}$ and $\mathbb{V}$ to denote the expectation and variance of the $\mathsf{d}$-tilted information $\jmath_{U_1^n}(U_1^n, d_n)$ at distortion level $d_n$. Define the set $\mathcal{G}_n$ as \begin{align} \mathcal{G}_n \triangleq \lbpara{u_1^n\in\mathbb{R}^n\colon G_n(u_1^n) < \log(\log n / 2) }. \end{align} Then, in view of~\eqref{pfeqn:DTId}, the $\mathsf{d}$-tilted information $\jmath_{U_1^n}(U_1^n, d_n)$ is a sum of independent random variables with bounded moments, and we apply the Berry-Esseen theorem to obtain \begin{align} P_{U_1^n}(\mathcal{G}_n) \leq \epsilon_n +\frac{C}{\sqrt{n}}. \end{align} We define one more set $\mathcal{L}_n$ as \begin{align} \mathcal{L}_n \triangleq \lbpara{u_1^n\in\mathbb{R}^n\colon \log\frac{1}{P_{V_1^{\star n}}(\mathcal{B}(u_1^n, d))} < \log M - G_n(u_1^n) }. \end{align} Then, by the lossy AEP in Lemma~\ref{lemma:LossyAEP} in Section~\ref{subsec:dispersion} above and Proposition~\ref{prop:gapDTI}, we have \begin{align} P_{U_1^n}(\mathcal{L}_n) \geq 1 - \frac{1}{q(n)} - \frac{1}{n}. \label{app:proln} \end{align} Finally, for any constant $\epsilon\in (0,1)$ and $n$ large enough, we define $\epsilon_n$ as in~\eqref{app:en} above and set $M$ as in~\eqref{app:logm}. Then, there exists an $(n, M, d, \epsilon')$ code such that \begin{align} \epsilon' \leq &~ \mathbb{E}\lbrac{\exp\lpara{-M\cdot P_{V_{1}^{\star n}}(\mathcal{B}(U_1^n, d))\cdot 1\{\mathcal{L}_n\}}} + \notag \\ &~\mathbb{E}\lbrac{\exp\lpara{-M\cdot P_{V_{1}^{\star n}}(\mathcal{B}(U_1^n, d))}\cdot 1\{\mathcal{L}_n^c\}} \\ \leq &~\mathbb{E}\lbrac{\exp(e^{-G_n})} + \frac{1}{q(n)} + \frac{1}{n}, \end{align} where the last inequality is due to the definition of $\mathcal{L}_n$ and~\eqref{app:proln}. By further conditioning on $\mathcal{G}_n$, we conclude that there exists an $(n, M, d, \epsilon')$ code such that \begin{align} \epsilon' & \leq \epsilon_n + \frac{C}{\sqrt{n}} + \frac{1}{n} + \frac{1}{q(n)} \\ &= \epsilon. \end{align} Therefore, by the choice of $M$ in~\eqref{app:logm}, the minimum achievable source coding rate $R(n, d, \epsilon)$ must satisfy \begin{align} R(n, d, \epsilon) & \leq \mathbb{R}_{U} (d) + \sqrt{\frac{\mathbb{V}_U(d)}{n}} Q^{-1}(\epsilon) + \notag \\ &\quad \frac{K_1 \log\log n}{n} + \frac{p(n)}{n} + \frac{K_2}{\sqrt{n} q(n)} \label{eqn:generalrelation} \end{align} for all $n$ large enough, where $K_1> 0$ is a universal constant and $K_2$ is a constant depending on $\epsilon$. Here we change from $Q^{-1}(\epsilon_n)$ to $Q^{-1}(\epsilon)$ using a Taylor expansion. Therefore, Theorem~\ref{thm:achievability} follows immediately from~\eqref{eqn:generalrelation} with the choices of $p(n)$ and $q(n)$ given by~\eqref{eqn:pn} and~\eqref{eqn:qn}, respectively, in the lossy AEP in Lemma~\ref{lemma:LossyAEP} in Section~\ref{subsec:dispersion} above. We have $O(\cdot)$ in~\eqref{abound} since $K_2$ could be positive or negative.
\end{proof} \section{Proof of Lossy AEP} \label{app:Achievability} \subsection{Notations} \label{app:lossyAEPnotations} For the optimization problem $\mathbb{R}(A_1^n, B_1^n, d)$ in~\eqref{eqn:crem}, the generalized tilted information defined in~\cite[Eq. (28)]{kostina2012fixed} in $a_1^n$ (a realization of $A_1^n$) is given by \begin{align} \Lambda_{B_1^n}(a_1^n, \delta, d) \triangleq -\delta n d - \log \EX{\exp(-n\delta \dis{a_1^n}{B_1^n})}, \end{align} where $\delta>0$ and $d\in(0,d_{\mathrm{max}})$. For properties of the generalized tilted information, see~\cite[App. D]{kostina2012fixed}. For clarity, we list the notations used throughout this section: \begin{enumerate} \item $X_1^n$ denotes the decorrelation of $U_1^n$ defined in~\eqref{decor}; \item $\hat{X}_1^n$ is the proxy random variable of $X_1^n$ defined in Definition~\ref{def:proxy} in Section~\ref{subsec:LossyAEPandPE} above; \item For $Y_1^{\star n}$ that achieves $\mathbb{R}_{X_1^n}(d)$ in~\eqref{eqn:nRDF}, $\hat{F}_1^{\star n}$ is the random vector that achieves $\mathbb{R}\LRB{\hat{X}_1^n, Y_1^{\star n}, d}$; \item We denote by $\lambda^\star_n$ the negative slope of $\mathbb{R}_{X_1^n}(d)$ (the same notation used in~\eqref{dtiltedGM}): \begin{align} \lambda^\star_n \triangleq -\mathbb{R}'_{X_1^n}(d). \label{lambdaOpt} \end{align} It is shown in~\cite[Lem. 5]{dispersionJournal} that $\lambda^\star_n$ is related to the $n$-th order water level $\theta_n$ in~\eqref{eqn:NRDF2} by \begin{align} \lambda^\star_n = \frac{1}{2\theta_n}. \label{eqn:lambdatheta} \end{align} Given any source outcome $u_1^n$, let $x_1^n$ be the decorrelation of $u_1^n$. Define $\hat{\lambda}_n$ as the negative slope of $\mathbb{R}(\hat{X}_1^n, Y_1^{\star n}, d)$ w.r.t. $d$: \begin{align} \hat{\lambda}_n \triangleq -\mathbb{R}'(\hat{X}_1^n, Y_1^{\star n}, d).\label{lambdaXOpt} \end{align} \item Comparing the definitions of $\mathsf{d}$-tilted information and the generalized tilted information, one can see that~\cite[Eq. (18)]{dispersionJournal} \begin{align} \jmath_{X_1^n}(x_1^n, d) = \Lambda_{Y_1^{\star n}}(x_1^n, \lambda^\star_n, d). \end{align} \item Recalling~\eqref{eqn:Xi} and applying the reverse waterfilling result~\cite[Th. 10.3.3]{cover2012elements}, we know that the coordinates of $Y_1^{\star n}$ are independent and satisfy \begin{align} Y_i^\star \sim \mathcal{N}(0,~\nu_{n,i}^2), \label{Yi} \end{align} where \begin{align} \nu_{n,i}^2 \triangleq \max(0,~\sigma_{n,i}^2 - \theta_n), \label{nui} \end{align} with $\theta_n>0$ given in~\eqref{pfeqn:thetaprime}. \end{enumerate} \subsection{Parametric Representation of the Gaussian Conditional Relative Entropy Minimization} \label{app:paraCREM} Various aspects of the optimization problem~\eqref{eqn:crem} have been discussed in~\cite[Sec. II-B]{dispersionJournal}. In particular, let $B_1^{\star n}$ be the optimizer of $\mathbb{R}_{A_1^n}(d)$, then we have \begin{align} \mathbb{R}(A_1^n, B_1^{\star n}, d) = \mathbb{R}_{A_1^n}(d), \end{align} where $\mathbb{R}_{A_1^n}(d)$ is in~\eqref{eqn:nRDF}. Another useful result on the optimization problem~\eqref{eqn:crem} is the following: when $A_1^n$ and $B_1^n$ are independent Gaussian random vectors, the next theorem gives parametric characterizations for the optimizer and optimal value of~\eqref{eqn:crem}. 
\begin{theorem} \label{th:gaussiancrem} Let $A_1,\ldots, A_n$ be independent random variables with \begin{align} A_i\sim \mathcal{N}(0, \alpha_i^2),\label{eq:Aalphai} \end{align} and $B_1,\ldots, B_n$ be independent random variables with \begin{align} B_i\sim \mathcal{N}(0, \beta_i^2),\label{eq:Bbetai}. \end{align} For any $d$ such that \begin{align} 0< d < \frac{1}{n}\sum_{i=1}^n (\alpha_i^2 + \beta_i^2), \label{eq:ranged} \end{align} we have the following parametric representation for $\mathbb{R}(A_1^n, B_1^n, d)$: \begin{align} \mathbb{R}(A_1^n, B_1^n, d) & = -\lambda d ~+ \label{eq:rlambdadcrem}\\ &\quad \frac{1}{2n}\sum_{i=1}^n \log (1 + 2\lambda \beta_i^2) + \frac{1}{n}\sum_{i=1}^n \frac{\lambda \alpha_i^2}{1 + 2\lambda \beta_i^2} \notag \end{align} \begin{align} d &= \frac{1}{n}\sum_{i=1}^n \frac{\alpha_i^2 + \beta_i^2 (1 + 2\lambda \beta_i^2)}{(1 + 2\lambda \beta_i^2)^2},\label{eq:dlambdadcrem} \end{align} where $\lambda > 0$ is the parameter. Furthermore, $\lambda$ equals the negative slope of $\mathbb{R}(A_1^n, B_1^n, d)$ w.r.t. $d$: \begin{align} \lambda = -\mathbb{R}'(A_1^n, B_1^n, d). \label{app:llambda} \end{align} \end{theorem} Similar results to Theorem~\ref{th:gaussiancrem} have appeared previously in the literature~\cite{dembo1999asymptotics,yang1999redundancy,dembo2002source}. See~\cite[Example 1 and Th. 2]{dembo2002source} for the case of $n=1$. For completeness, we present a proof. \begin{proof} Fix any $d$ that satisfies~\eqref{eq:ranged}, and let $\lambda$ be such that~\eqref{eq:dlambdadcrem} is satisfied. Note from~\eqref{eq:dlambdadcrem} that $d$ is a strictly decreasing function in $\lambda$ (unless $\beta_i = 0$ for all $i\in [n]$), hence such $\lambda$ is unique. The upper bound on $d$ in~\eqref{eq:ranged} guarantees that $\lambda > 0$. We first show the $\leq$ direction in~\eqref{eq:rlambdadcrem}. For $A_1^n = a_1^n\in\mathbb{R}^n$, define the conditional distribution $P_{F_i |A_i = a_i} (f_i)$ as \begin{align} \mathcal{N}\lpara{\frac{2\lambda \beta_{i}^2 a_i}{1+2\lambda \beta_{i}^2}, \frac{\beta_{i}^2}{1+2\lambda \beta_{i}^2}}.\label{eq:cupper} \end{align} We then define the joint distribution $P_{A_1^n, F_1^n}$ as \begin{align} P_{A_1^n, F_1^n} \triangleq \prod_{i = 1}^n P_{F_i | A_i} P_{A_i}. \label{eq:prodchoice} \end{align} Using~\eqref{eq:dlambdadcrem}, we can check that with such a choice of $P_{A_1^n, F_1^n}$, the expected distortion between $A_1^n$ and $F_1^n$ equals $d$. The details follow. \begin{align} \mathbb{E}\left[\dis{A_1^n}{F_1^n}\right] = &~ \mathbb{E}\left[ \mathbb{E}[\dis{A_1^n}{F_1^n} | A_1^n]\right] \\ = &~\frac{1}{n}\sum_{i=1}^n \mathbb{E}\left[ \mathbb{E}[(F_i - A_i)^2| A_i] \right]\\ = &~ \frac{1}{n}\sum_{i=1}^n \frac{\beta_{i}^2}{1+2\lambda \beta_{i}^2} + \frac{\alpha_i^2}{(1 + 2\lambda\beta_i^2)^2} \label{ssstep:step1}\\ = &~ d, \label{ssstep:step2} \end{align} where~\eqref{ssstep:step1} is from the relation $\mathbb{E}[(X - t)^2] = \text{Var}[X] + (\mathbb{E}[X] - t)^2$ and~\eqref{ssstep:step2} is due to~\eqref{eq:dlambdadcrem}. Therefore, the choice of $P_{F_1^n|A_1^n}$ in~\eqref{eq:cupper} and~\eqref{eq:prodchoice} is feasible for the optimization problem in defining $\mathbb{R}(A_1^n, B_1^n, d)$. 
Hence, \begin{align} \mathbb{R}(A_1^n, B_1^n, d) &\leq \frac{1}{n} D\left (P_{F_1^n | A_1^n} || P_{B_1^n} | P_{A_1^n}\right ) \\ &= \frac{1}{n} \sum_{i=1}^n \mathbb{E} \left [D\left (P_{F_i | A_i}(\cdot | A_i) || P_{B_i} \right )\right ].\label{eq:sumkl} \end{align} It is straightforward to verify that the Kullback-Leibler divergence between two Gaussian distributions $X\sim \mathcal{N}(\mu_X, \sigma_X^2)$ and $Y\sim \mathcal{N}(\mu_Y, \sigma_Y^2)$ is given by \begin{align} D(P_X||P_Y) =\frac{\sigma_X^2 + (\mu_X - \mu_Y)^2}{2\sigma_Y^2} - \frac{1}{2}\log\frac{\sigma_X^2}{\sigma_Y^2} - \frac{1}{2}. \label{eq:klgaussians} \end{align} Using~\eqref{eq:klgaussians} and~\eqref{eq:cupper}, we see that~\eqref{eq:sumkl} equals the right-hand side of~\eqref{eq:rlambdadcrem}. To prove the other direction, we use the Donsker-Varadhan representation of the Kullback-Leibler divergence~\cite[Th. 3.5]{polyanskiy2014lecture}: \begin{align} D(P||Q) = \sup_{g}~\mathbb{E}_P[g(X)] - \log \mathbb{E}_Q[\exp{g(X)}],\label{eq:dv} \end{align} where the supremum is over all functions $g$ from the sample space to $\mathbb{R}$ such that both expectations in~\eqref{eq:dv} are finite. Fix any $P_{F_1^n|A_1^n}$ such that $\mathbb{E}[\dis{A_1^n}{F_1^n}] \leq d$. For any $A_1^n = a_1^n$, in~\eqref{eq:dv}, we choose $P$ to be $P_{F_1^n|A_1^n = a_1^n}$, $Q$ to be $P_{B_1^n}$ and $g$ to be $g(f_1^n)\triangleq -n \lambda \mathsf{d}(f_1^n, a_1^n)$ for any $f_1^n\in\mathbb{R}^n$, then we have \begin{align} D(P_{F_1^n|A_1^n = a_1^n} ||P_{B_1^n}) & \geq - n\lambda \mathbb{E}_{P_{F_1^n|A_1^n = a_1^n}}[\mathsf{d}(F_1^n, a_1^n)] \label{eq:dvgaussian}\\ &\quad - \log \mathbb{E}_{P_{B_1^n}}[\exp\left(-n\lambda \dis{B_1^n}{a_1^n}\right)]. \notag \end{align} Taking expectations on both sides of~\eqref{eq:dvgaussian} with respect to $P_{A_1^n}$ and then normalizing by $n$, we have \begin{align} \mathbb{R}(A_1^n, B_1^n, d) &\geq -\lambda \mathbb{E}[\dis{A_1^n}{F_1^n}] \label{eq:dvbound} \\ & \quad- \mathbb{E}_{P_{A_1^n}}\log \mathbb{E}_{P_{B_1^n}}[\exp\left(-n\lambda \dis{B_1^n}{A_1^n}\right)]. \notag \end{align} Using the formula for the moment generating function for noncentral $\chi^2$ distributions, we can compute \begin{align} & \mathbb{E}_{P_{B_1^n}}[\exp\left(-n\lambda \dis{B_1^n}{a_1^n}\right)] \notag \\ = &~ \prod_{i = 1}^n \frac{1}{\sqrt{1 + 2\lambda\beta_i^2}} \exp\lpara{\frac{-\lambda a_i^2}{1 + 2\lambda\beta_i^2}}. \label{eq:mgfchisq} \end{align} Plugging~\eqref{eq:mgfchisq} into~\eqref{eq:dvbound} and using $\mathbb{E}[\dis{A_1^n}{F_1^n}] \leq d$, we conclude that $\mathbb{R}(A_1^n, B_1^n, d)$ is greater than or equal to the right-hand side of~\eqref{eq:rlambdadcrem}. Finally,~\eqref{app:llambda} is obtained by taking derivative of~\eqref{eq:rlambdadcrem} w.r.t. $d$, where we need to use the chain rule for derivatives since $\lambda$ is a function of $d$ given by~\eqref{eq:dlambdadcrem}. \end{proof} Our next result states that for fixed $\beta_i^2$'s satisfying certain mild conditions, if we change the variances from $\alpha_i^2$'s to $\hat{\alpha}_i^2$'s, then the perturbation on the corresponding $\lambda$'s is controlled by the perturbation on $\alpha_i^2$'s. \begin{theorem}[Variance perturbation] \label{th:vp} Let $\alpha_i^2$'s and $\beta_i^2$'s be in~\eqref{eq:Aalphai} and \eqref{eq:Bbetai} above, respectively. For a fixed $d$ satisfying~\eqref{eq:ranged}, let $\lambda$ be given by~\eqref{eq:dlambdadcrem}. 
Suppose that the $\alpha_i^2$'s and $\beta_i^2$'s are such that \begin{align} \frac{1}{n}\sum_{i=1}^n\frac{1}{(1 + 2\lambda \beta_i^2)^4} \label{aspt:beta1} \end{align} is bounded above by a positive constant and \begin{align} \frac{1}{n}\sum_{i=1}^n \frac{2\beta_i^2\left[2\alpha_i^2 + \beta_i^2(1 + 2\lambda\beta_i^2)\right]}{(1 + 2\lambda\beta_i^2)^3} \label{aspt:beta2} \end{align} is bounded below by a positive constant, where neither constant depends on $n$. Let $\hat{A}_1,\ldots, \hat{A}_n$ be independent random variables with \begin{align} \hat{A}_i\sim \mathcal{N}(0, \hat{\alpha}_i^2).\label{eq:Ahatalphai} \end{align} Let $\hat{\lambda}$ be such that \begin{align} d &= \frac{1}{n}\sum_{i=1}^n \frac{\hat{\alpha}_i^2 + \beta_i^2 (1 + 2\hat{\lambda} \beta_i^2)}{(1 + 2\hat{\lambda} \beta_i^2)^2}.\label{eq:dhatlambdadcrem} \end{align} Then, there is a constant $C>0$ such that \begin{align} \abs{\hat{\lambda} - \lambda } \leq C \max_{1\leq i\leq n}~\abs{\hat{\alpha}_i^2 - \alpha_i^2}. \end{align} \end{theorem} \begin{proof} We can view~\eqref{eq:dlambdadcrem} as an equation of the form $f(\alpha_1^2, \ldots, \alpha_n^2, \lambda) = 0$. Then, by the implicit function theorem, we know that there exists a unique continuously differentiable function $h$ such that \begin{align} \lambda = h(\alpha_1^2,\ldots,\alpha_n^2), \end{align} and \begin{align} \frac{\partial h }{\partial \alpha_i^2} = \lbpara{\frac{1}{n}\sum_{j=1}^n \frac{2\beta_j^2 [ 2\alpha_j^2 + \beta_j^2(1 + 2\lambda\beta_j^2)]}{(1 + 2\lambda\beta_j^2)^3}}^{-1} \frac{1}{n(1 + 2\lambda \beta_i^2)^2}. \end{align} Hence, \begin{align} \pnorm{2}{\nabla h} = & \lbpara{\frac{1}{n}\sum_{i=1}^n \frac{2\beta_i^2\left[2\alpha_i^2 + \beta_i^2(1 + 2\lambda\beta_i^2)\right]}{(1 + 2\lambda\beta_i^2)^3}}^{-1} \times \\ & \quad\quad \sqrt{\frac{1}{n^2}\sum_{i=1}^n\frac{1}{(1 + 2\lambda \beta_i^2)^4}}.\notag \end{align} By the assumptions~\eqref{aspt:beta1} and~\eqref{aspt:beta2}, we know that there exists a constant $C>0$ such that \begin{align} \pnorm{2}{\nabla h} \leq \frac{C}{\sqrt{n}}. \end{align} Hence, by the mean value theorem, we have \begin{align} \abs{\hat{\lambda} - \lambda } & \leq \pnorm{2}{\nabla h} \pnorm{2}{(\alpha_1^2,\ldots, \alpha_n^2) - (\hat{\alpha}_1^2,\ldots, \hat{\alpha}_n^2)} \\ &\leq C \max_{1\leq i\leq n}~\abs{\hat{\alpha}_i^2 - \alpha_i^2}. \end{align} \end{proof} \subsection{Proof of Theorem~\ref{thm:typical}} \label{app:PfThTS} The proof is similar to~\cite[Th. 12]{dispersionJournal}. We streamline the proof and point out the differences. We use the notation defined in Appendix~\ref{app:lossyAEPnotations} above. Our Corollary~\ref{cor:disp} implies that for all $n$ large enough the condition~\eqref{eqn:cond1} is violated with probability at most $2e^{-cn}$ for a constant $c> \log (a) / 2$. This is much stronger than the bound $\Theta\left(1 / \text{poly}\log n \right)$ in the stationary case~\cite[Th. 6]{dispersionJournal}. In view of~\eqref{eqn:Xi}, the random variables $X_i / \sigma_{n,i}$ for $i = 1,\ldots, n$, are i.i.d. standard normal random variables, and their $2k$-th moments equal $(2k-1)!!$. The Berry-Esseen theorem implies that the condition~\eqref{eqn:cond2} is violated with probability at most $\Theta \left(1 / \sqrt{n}\right)$. This is the same as in the stationary case~\cite[Eq. (279)--(280)]{dispersionJournal}. We use the following procedure to show that the condition~\eqref{eqn:cond3} is violated with probability at most $\Theta\left(1/\log n\right)$: \begin{itemize} \item We approximate $m_i(u_1^n)$ by another random variable $\bar{m}_{i}(u_1^n)$ that is easier to analyze. 
\item We show that~\eqref{eqn:cond3} with $m_{i}(u_1^n)$ replaced by $\bar{m}_i(u_1^n)$ holds with probability at least $1 - \Theta(1 / \log n)$. \item We then control the difference between $m_i(u_1^n)$ and $\bar{m}_i(u_1^n)$. \end{itemize} To carry out the above program, we first give an expression for $m_i(u_1^n)$ by applying~\cite[Lem. 4]{dispersionJournal} (see also the proof of Theorem~\ref{th:gaussiancrem}) on $\mathbb{R}(\hat{X}_1^n, Y_1^{\star n}, d)$. Note that $\hat{X}_1^n$ and $Y_1^{\star n}$ are Gaussian random vectors with independent coordinates with variances given by~\eqref{eqn:sigmaihat} and~\eqref{Yi}, respectively. Then,~\cite[Lem. 4]{dispersionJournal} implies that the optimizer $P_{\hat{F}_1^{\star n} | \hat{X}_1^n}$ for $\mathbb{R}(\hat{X}_1^n, Y_1^{\star n}, d)$ satisfies \begin{align} P_{\hat{F}_1^{\star n} | \hat{X}_1^n = \hat{x}_1^n} = \prod_{i=1}^n P_{\hat{F}_i^{\star} | \hat{X}_i = \hat{x}_i}, \end{align} where the conditional distributions $\hat{F}_i^{\star} | \hat{X}_i = \hat{x}_i$ are Gaussian: \begin{align} \mathcal{N}\lpara{\frac{2\hat{\lambda}_n \nu_{n,i}^2 \hat{x}_i}{1+2\hat{\lambda}_n \nu_{n,i}^2}, \frac{\nu_{n,i}^2}{1+2\hat{\lambda}_n \nu_{n,i}^2}}, \label{FiXi} \end{align} where $\nu_{n,i}^2$'s are defined in~\eqref{nui}, and $\hat{\lambda}_n$ is defined in~\eqref{lambdaXOpt}. Then, using the definition of $m_i(u_1^n)$ in~\eqref{mi} and~\eqref{FiXi}, we obtain \begin{align} m_i(u_1^n) = \frac{\nu_{n,i}^2}{1+2\hat{\lambda}_n \nu_{n,i}^2} + \frac{x_i^2}{(1+2\hat{\lambda}_n \nu_{n,i}^2)^2}, \label{eqn:miexp} \end{align} where $x_1^n = \mtx{S}' u_1^n$. The random variable $m_i(u_1^n)$ in the form of~\eqref{eqn:miexp} is hard to analyze since we do not have a simple expression for $\hat{\lambda}_n$. By replacing $\hat{\lambda}_n$ with $\lambda^\star_n$, we define another random variable $\bar{m}_{i}(u_1^n)$ that turns out to be easier to analyze: \begin{align} \bar{m}_i(u_1^n) \triangleq \frac{\nu_{n,i}^2}{1+2\lambda^\star_n \nu_{n,i}^2} + \frac{x_i^2}{(1+2\lambda^\star_n \nu_{n,i}^2)^2}. \label{eqn:mibarexp} \end{align} Plugging~\eqref{eqn:lambdatheta} and~\eqref{nui} into~\eqref{eqn:mibarexp}, we obtain \begin{align} \bar{m}_i(u_1^n)= \frac{\min(\sigma_{n,i}^2, \theta_n)^2}{\sigma_{n,i}^2}\lpara{\frac{x_i^2}{\sigma_{n,i}^2} - 1} + \min(\sigma_{n,i}^2, \theta_n), \label{eqn:mibarexp1} \end{align} where $\theta_n$ is the $n$-th order water level in~\eqref{eqn:NRDF2} and $x_1^n = \mtx{S}' u_1^n$. The random variable $\bar{m}_i(U_1^n)$ is much easier to analyze since $X_i / \sigma_{n,i}$'s are i.i.d. standard normal random variables. Moreover, in view of~\eqref{eqn:NRDF2}, their expectations satisfy \begin{align} \frac{1}{n}\sum_{i=1}^n \EX{\bar{m}_i(U_1^n)} = \frac{1}{n}\sum_{i=1}^n \min(\sigma_{n,i}^2, \theta_n) = d. \end{align} Since $X_i / \sigma_{n,i}$ has bounded moments, from the Berry-Esseen theorem, we know that there exists a constant $\omega > 0$ such that for all $n$ large enough \begin{align} \prob{\abs{\frac{1}{n}\sum_{i=1}^n \bar{m}_i(U_1^n) - d} > \omega \eta_n} \leq \frac{C_1}{\log n} + \frac{C_2}{\sqrt{n}}, \label{mbarbound} \end{align} where $\eta_n$ is in~\eqref{eqn:etan} above, and $C_1, C_2$ are positive constants. In the last step of the program, we control the difference between $m_{i}(U_1^n)$ and $\bar{m}_i(U_1^n)$. 
From~\eqref{eqn:miexp}--\eqref{eqn:mibarexp}, we have \begin{align} & \frac{1}{n}\sum_{i=1}^n \bar{m}_i(u_1^n) - \frac{1}{n}\sum_{i = 1}^n m_i(u_1^n) \notag\\ = & \frac{1}{n}\sum_{i = 1}^n \frac{2\nu_{n,i}^4 (\hat{\lambda}_n - \lambda^\star_n)}{(1 + 2 \hat{\lambda}_n \nu_{n,i}^2)(1 + 2\lambda^\star_n \nu_{n,i}^2)} ~+ \label{eqn:Modified} \\ &\quad \frac{1}{n}\sum_{i = 1}^n \frac{2x_i^2\nu_{n,i}^2 (2 + 2\hat{\lambda}_n \nu_{n,i}^2 +2 \lambda^\star_n \nu_{n,i}^2)(\hat{\lambda}_n - \lambda^\star_n)}{(1 + 2 \hat{\lambda}_n \nu_{n,i}^2)^2(1 + 2\lambda^\star_n \nu_{n,i}^2)^2}.\notag \end{align} For $i=1$, we have $\nu_{n,1}^2 = \sigma_{n,1}^2 - \theta_n = \Theta\left(a^{2n}\right)$, $\hat{\lambda}_n = \Theta(1)$ and $\lambda^\star_n = \Theta(1)$. This implies that the summands in~\eqref{eqn:Modified} for $i=1$ are both of order $O(1/n)$ for any $x_1^2 = O(a^{4n})$. For $2\leq i\leq n$, the condition~\eqref{eqn:cond1} and the variance perturbation result in Theorem~\ref{th:vp} imply that every summand in~\eqref{eqn:Modified} for $i\geq 2$ is of the order of $\eta_n$. Hence,~\eqref{eqn:Modified} is of the order of $\eta_n$. Finally, combining~\eqref{mbarbound} and~\eqref{eqn:Modified}, we conclude that, conditioned on~\eqref{eqn:cond1} and~\eqref{eqn:cond2}, the condition~\eqref{eqn:cond3} is violated with probability at most $\Theta(1 / \log n)$. \qed \subsection{Auxiliary Lemmas} \label{app:AL} \begin{lemma}[Lower bound on the probability of distortion balls] \label{lemma:shell} Fix $d\in (0,d_{\mathrm{max}})$. For any $n$ large enough and any $u_1^n\in \mathcal{T}(n, p)$ defined in Definition~\ref{def:TS} in Section~\ref{subsec:LossyAEPandPE} above, and $\gamma$ defined by \begin{align} \gamma \triangleq \frac{(\log n)^{B_4}}{n} \label{gammaFS} \end{align} for a constant $B_4 > 0$ specified in~\eqref{B4} below, it holds that \begin{align} \mathbb{P}\left[d - \gamma \leq \mathsf{d}\left(x_1^n, \hat{F}_1^{\star n}\right)\leq d ~| \hat{X}_1^n = x_1^n\right] \geq \frac{K_1}{\sqrt{n}}, \label{eqn:lowerShell} \end{align} where $K_1>0$ is a constant and $\hat{F}_1^{\star n}$ is in Appendix~\ref{app:lossyAEPnotations} above. \end{lemma} The proof is in Appendix~\ref{app:pfLemmaShell}. \begin{lemma} \label{lemma:generalized} Fix $d\in (0,d_{\mathrm{max}})$ and $\epsilon\in (0,1)$. There exist constants $C>0$ and $K_2>0$ such that for all $n$ large enough, \begin{align} & \prob{\Lambda_{Y_1^{\star n}}\left(X_1^n, \hat{\lambda}_n, d\right) \leq \Lambda_{Y_1^{\star n}}\left(X_1^n, \lambda^\star_n, d\right) + C\log n} \notag \\ \geq & 1 - \frac{K_2}{\sqrt{n}}, \end{align} where $\lambda^\star_n$ and $\hat{\lambda}_n$ are defined in~\eqref{lambdaOpt} and~\eqref{lambdaXOpt}, respectively. \end{lemma} \begin{proof} The proof of Lemma~\ref{lemma:generalized} is the same as~\cite[Eq.~(314)--(333)]{dispersionJournal} except that we strengthen the right side of~\cite[Eq.~(322)]{dispersionJournal} to be $\Theta(e^{-cn})$ for a constant $c> \log(a) / 2$ due to Corollary~\ref{cor:disp}. \end{proof} \subsection{Proof of Lemma~\ref{lemma:LossyAEP}} \label{app:LossyAEP} Using Lemmas~\ref{lemma:shell} and~\ref{lemma:generalized} in Appendix~\ref{app:AL} above, the proof of Lemma~\ref{lemma:LossyAEP} is almost the same as that in the stationary case~\cite[Eq. (270)-(278)]{dispersionJournal}. For completeness, we sketch the proof here. We weaken the bound~\cite[Lem. 
1]{kostina2012fixed} by setting $P_{\hat{X}}$ as $P_{\hat{X}_1^n}$ and $P_Y$ as $P_{Y_1^{\star n}}$ to obtain that for any $x_1^n\in \mathbb{R}^n$, \begin{align} & \log\frac{1}{P_{Y_1^{\star n}}\left (\mathcal{B}(x_1^n, d)\right )}\leq \inf_{\gamma > 0}\Bigg\{ \Lambda_{Y_1^{\star n}}(x_1^n, \hat{\lambda}_n, d) + \hat{\lambda}_n n \gamma - \notag \\ &\quad \log \prob{d - \gamma \leq \dis{x_1^n}{\hat{F}_1^{\star n} } \leq d| \hat{X}_1^n = x_1^n} \Bigg\},\label{pfeqn:lemmaK} \end{align} where $\hat{\lambda}_n$ in~\eqref{lambdaXOpt} depends on $X_1^n$. Let $\mathcal{E}$ denote the event inside the square brackets in~\eqref{eqn:lossyAEP}. Then, \begin{align} & \mathbb{P}[\mathcal{E}] \notag \\ =~& \mathbb{P}[\mathcal{E} \cap \mathcal{T}(n, p)] + \mathbb{P}[\mathcal{E} \cap \mathcal{T}(n, p)^c] \\ \leq ~& \mathbb{P} \Big [ \Lambda_{Y_1^{\star n}}(X_1^n, \hat{\lambda}_n, d) \geq \Lambda_{Y_1^{\star n}}(X_1^n, \lambda^\star_n, d) + p(n) - \hat{\lambda}_n n \gamma- \notag \\ & \quad\quad \frac{1}{2}\log n + \log K_1,~\mathcal{T}(n, p) \Big ] + \mathbb{P}[ \mathcal{T}(n, p)^c] \label{appEQN:lossy}\\ \leq ~& \mathbb{P} \Big [ \Lambda_{Y_1^{\star n}}(X_1^n, \hat{\lambda}_n, d) \geq \Lambda_{Y_1^{\star n}}(X_1^n, \lambda^\star_n, d) + C\log n \Big ] + \notag \\ &\quad\quad \mathbb{P}[ \mathcal{T}(n, p)^c] \label{appEQN:lossyL}\\ \leq ~& \frac{1}{q(n)}, \label{appEQN:lossyM} \end{align} where \begin{itemize} \item \eqref{appEQN:lossy} is due to~\eqref{pfeqn:lemmaK} and Lemma~\ref{lemma:shell}; \item From~\eqref{appEQN:lossy} to~\eqref{appEQN:lossyL}, we used the fact that for $u_1^n\in \mathcal{T}(n, p)$, $\hat{\lambda}_n$ can be bounded by \begin{align} \lrabs{\hat{\lambda}_n - \frac{1}{2\theta}}\leq B_1, \label{eqn:lambdax} \end{align} where $B_1>0$ is a constant and $\theta > 0$ is given by~\eqref{eqn:RWD}. The bound~\eqref{eqn:lambdax} is obtained by the same argument as that in the stationary case~\cite[Eq. (273)]{dispersionJournal}; $\gamma$ is chosen in~\eqref{gammaFS} above; the constants $c_i$'s, $i = 1,...4$ in~\eqref{eqn:pn} are chosen as \begin{align} c_1 &= B_1 + \frac{1}{2\theta},\\ c_2 &= B_4,\\ c_3 &= C+ \frac{1}{2},\\ c_4 &= -\log K_1, \end{align} where $B_4>0$ is given in~\eqref{B4} below, and $K_1$ and $C$ are the constants in Lemmas~\ref{lemma:shell} and~\ref{lemma:generalized}, respectively. \item \eqref{appEQN:lossyM} is due to Lemma~\ref{lemma:generalized} and Theorem~\ref{thm:typical}. \end{itemize} \qed \subsection{Proof of Lemma~\ref{lemma:shell}} \label{app:pfLemmaShell} \begin{proof} The proof is similar to the stationary case~\cite[Lem. 10]{dispersionJournal}. We streamline the proof and point out the differences. Conditioned on $\hat{X}_1^n = x_1^n$, the random variable \begin{align} \mathsf{d}\left(x_1^n, \hat{F}_1^{\star n}\right) = \frac{1}{n}\sum_{i = 1}^n \left(\hat{F}_i^\star - x_i\right)^2 \label{dsum} \end{align} follows a noncentral $\chi^2$-distribution with (at most) $n$ degrees of freedom, since it is shown in~\cite[Eq. (282) and Lem. 4]{dispersionJournal} that conditioned on $\hat{X}_1^n = x_1^n$, the distribution of the random variable $\hat{F}_{i}^\star - x_i$ is given by \begin{align} \mathcal{N}\left(\frac{-x_i}{1 +2 \hat{\lambda}_n \nu_{n,i}^2},~ \frac{\nu_{n,i}^2}{1 + 2 \hat{\lambda}_n \nu_{n,i}^2}\right), \label{pfeqn:summand} \end{align} where $\nu_{n,i}^2$'s are given in~\eqref{nui}. 
Then, the conditional expectation is given by \begin{align} \EX{\mathsf{d}\left(x_1^n, \hat{F}_1^{\star n}\right) | \hat{X}_1^n = x_1^n} = \frac{1}{n}\sum_{i = 1}^n m_i(u_1^n), \label{pfeqn:EXDXF} \end{align} where $m_i(u_1^n)$ is defined in~\eqref{mi} in Section~\ref{subsec:dispersion} above. In view of~\eqref{dsum},~\eqref{pfeqn:EXDXF} and~\eqref{eqn:cond3}, we expect that $\mathsf{d}\left(x_1^n, \hat{F}_1^{\star n}\right)$ concentrates around $d$ conditioned on $\hat{X}_1^n = x_1^n$ for $u_1^n\in\mathcal{T}(n,p)$. Note that the proof of Theorem~\ref{thm:typical} related to~\eqref{eqn:cond3} is different from the one in the stationary case, see Appendix~\ref{app:PfThTS} above for the details. To simplify notations, we denote the variances as \begin{align} V_i(x_1^n) &\triangleq \text{Var}\left[\left(\hat{F}_i^\star - x_i\right)^2 | \hat{X}_1^n = x_1^n\right], \\ V(x_1^n) &\triangleq \sqrt{\frac{1}{n}\sum_{i = 1}^n V_i(x_1^n)}. \end{align} Due to~\eqref{pfeqn:summand} and~\eqref{eqn:cond3}, we see $(\hat{F}_i^\star - x_i)^2$'s have finite second- and third- order absolute moments. That is, we have \begin{align} V(x_1^n) = \Theta(1), \label{eqn:VVX} \end{align} for $u_1^n\in \mathcal{T}(n,p)$. Therefore, we can apply the Berry-Esseen theorem. Hence, \begin{align} & \mathbb{P}\left[d - \gamma \leq \mathsf{d}\left(x_1^n, \hat{F}_1^{\star n}\right)\leq d ~| \hat{X}_1^n = x_1^n\right] \notag \\ =~ & \mathbb{P}\Bigg[\quad\quad \frac{n(d - \gamma) -\sum_{i = 1}^n m_i(u_1^n)}{\sqrt{n} V(x_1^n)} \notag \\ &\quad\quad \leq \frac{1}{\sqrt{n} V(x_1^n)} \sum_{i = 1}^n \left[\left(\hat{F}_i^\star - x_i\right)^2 - m_i(u_1^n)\right] \notag \\ &\quad\quad \leq \frac{nd -\sum_{i = 1}^n m_i(u_1^n)}{\sqrt{n} V(x_1^n)} ~|~ \hat{X}_1^n = x_1^n\quad \Bigg] \\ \geq~ & \Phi\left(\frac{nd -\sum_{i = 1}^n m_i(u_1^n)}{\sqrt{n} V(x_1^n)}\right) \notag \\ & \quad - \Phi\left(\frac{n(d-\gamma) -\sum_{i = 1}^n m_i(u_1^n)}{\sqrt{n} V(x_1^n)}\right) - \frac{2B_1}{\sqrt{n}} \label{pfsteps:PHI1}\\ = ~& \frac{\sqrt{n} \gamma}{V(x_1^n)} \Phi'(\xi) - \frac{2B_1}{\sqrt{n}},\label{pfsteps:PHI2} \end{align} where \begin{itemize} \item \eqref{pfsteps:PHI1} follows from the Berry-Esseen theorem; $B_1 >0$ is a constant, and \begin{align} \Phi(t) \triangleq \frac{1}{\sqrt{2\pi}}\int_{-\infty}^t e^{-\frac{\tau^2}{2}}~d\tau \end{align} is the cumulative distribution function of the standard Gaussian distribution; \item \eqref{pfsteps:PHI2} is due to the mean value theorem and \begin{align} \Phi'(t) = \frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}}; \end{align} \item In \eqref{pfsteps:PHI2}, $\xi$ satisfies \begin{align} \frac{n(d-\gamma) -\sum_{i = 1}^n m_i(u_1^n)}{\sqrt{n} V(x_1^n)} \leq \xi \leq \frac{nd -\sum_{i = 1}^n m_i(u_1^n)}{\sqrt{n} V(x_1^n)}. \label{xixixi} \end{align} \end{itemize} By~\eqref{eqn:cond3} and~\eqref{eqn:VVX}, we see that there is a constant $B_2 > 0$ such that \begin{align} \lrabs{\frac{nd - \sum_{i = 1}^n m_i(u_1^n)}{\sqrt{n} V(x_1^n)}} \leq B_2\sqrt{\log\log n}. \end{align} Hence, as long as $\gamma$ in~\eqref{xixixi} satisfies \begin{align} \gamma \leq O(\eta_n), \label{cond:gamma} \end{align} where $\eta_n$ is defined in~\eqref{eqn:etan}, there exists a constant $B_3>0$ such that \begin{align} |\xi| \leq B_3 \sqrt{\log\log n}. \label{xixi} \end{align} Let $B_4 > 0$ be a constant such that \begin{align} B_4 \geq \frac{B_3^2}{2} + 1,\label{B4} \end{align} and choose $\gamma$ as in~\eqref{gammaFS}, which satisfies~\eqref{cond:gamma}. 
Then, plugging the bounds~\eqref{eqn:VVX},~\eqref{xixi},~\eqref{B4} and~\eqref{gammaFS} into~\eqref{pfsteps:PHI2}, we conclude that there exists a constant $K_1>0$ such that~\eqref{pfsteps:PHI2} is further bounded from below by $\frac{K_1}{\sqrt{n}}$. \end{proof} \section{Conclusion} \label{sec:con} In this paper, we obtain nonasymptotic (Theorem~\ref{thm:chernoff}) and asymptotic (Theorem~\ref{thm:upperLDP}) bounds on the estimation error of the maximum likelihood estimator of the parameter $a$ of the nonstationary scalar Gauss-Markov process. Numerical simulations in Fig.~\ref{fig:compare} show that our estimation error bounds are tighter than those in previous works. As an application of the estimation error bound (Corollary~\ref{cor:disp}), we find the dispersion for lossy compression of nonstationary Gauss-Markov sources (Theorems~\ref{thm:converse} and~\ref{thm:achievability}). Future research directions include generalizing the error exponent bounds in this paper, which apply to the identification of scalar dynamical systems, to vector systems, and finding the dispersion of the Wiener process. \section{Introduction} \subsection{Overview} \IEEEPARstart{W}{e consider} two related problems that concern a scalar Gauss-Markov process $\{U_i\}_{i = 1}^{\infty}$, defined by $U_0 = 0$ and \begin{align} U_i = a U_{i-1} + Z_i, \quad\forall i\geq 1, \label{eqn:GMmodel} \end{align} where $Z_i$'s are independent Gaussian random variables with zero mean and variance $\sigma^2$. The first problem is parameter estimation: given samples $u_1^n$ drawn from the Gauss-Markov source, we seek to design and analyze estimators for the unknown system parameter $a$. The consistency and asymptotic distribution of the maximum likelihood (ML) estimator have been studied in the literature~\cite{mann1943statistical, rubin1950consistency, white1958limiting, anderson1959asymptotic, rissanen1979strong, chan1987asymptotic}. Our main contribution is a large deviation bound on the estimation error of the ML estimator. Our numerical experiments indicate that our new bound is tighter than previously known results~\cite{bercu1997large, worms2001large, rantzer2018concentration}. The second problem is the nonasymptotic performance of the optimal lossy compressor of the Gauss-Markov process. An encoder outputs $nR$ bits for each realization $u_1^n$. Once the decoder receives the $nR$ bits, it produces $\hat{u}_1^n$ as a reproduction of $u_1^n$. The distortion between $u_1^n$ and $\hat{u}_1^n$ is measured by the mean squared error (MSE). Two commonly used criteria to quantify the distortion of a lossy compression scheme are the average distortion criterion and the excess-distortion probability criterion. The rate-distortion theory, initiated by Shannon~\cite{shannon1959coding} and further developed in~\cite{goblick1969coding, kolmogorov1956shannon, berger1968rate, berger1970information, berger1971rate, gray1970information, gray1971markov, gray2008note, wyner1971bounds, marton1974error, hashimoto1980rate}, studies the optimal tradeoff between the rate $R$ and the distortion. In the limit of large blocklength $n$, the minimum rate $R$ required to achieve average distortion $d$ is given by the rate-distortion function. The nonasymptotic version of the rate-distortion problem~\cite{marton1974error, zhang1997redundancy, yang1999redundancy, ingber2011dispersion, kostina2012fixed} studies the rate-distortion tradeoff for finite blocklength $n$. 
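As a concrete, illustrative aside (not part of the formal development), the source model~\eqref{eqn:GMmodel} and the estimation problem can be simulated in a few lines of Python. The sketch below assumes that the estimator $\hat{a}_{\text{ML}}$ in~\eqref{eqn:MLEintro} takes the least-squares form, which coincides with the maximum likelihood estimate under Gaussian noise; the parameter values are arbitrary choices made only for illustration.
\begin{verbatim}
import numpy as np

def simulate_gauss_markov(n, a, sigma, rng):
    # Generate U_1, ..., U_n from U_i = a * U_{i-1} + Z_i with U_0 = 0 (eqn:GMmodel).
    u = np.empty(n)
    prev = 0.0
    for i, z in enumerate(rng.normal(0.0, sigma, size=n)):
        prev = a * prev + z
        u[i] = prev
    return u

def a_hat_ls(u):
    # Least-squares estimate of a (assumed form of (eqn:MLEintro)):
    # it maximizes the Gaussian likelihood of the samples given U_0 = 0.
    u_prev = np.concatenate(([0.0], u[:-1]))
    return float(np.dot(u, u_prev) / np.dot(u_prev, u_prev))

rng = np.random.default_rng(0)
u = simulate_gauss_markov(n=100, a=1.2, sigma=1.0, rng=rng)
print(a_hat_ls(u))   # typically very close to 1.2 when a > 1
\end{verbatim}
The rapid concentration of this estimate around the true $a$ in the nonstationary case is precisely what the error bounds developed below quantify.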
Our main contribution is a coding theorem that characterizes the gap between the rate-distortion function and the minimum rate $R$ at blocklength $n$ for the nonstationary Gauss-Markov source ($a>1$), under the excess-distortion probability criterion. We leverage our result on the ML estimator to analyze lossy compression. Namely, we apply our bound on the estimation error of the ML estimator to construct a typical set of sequences whose estimated parameter is close to the true $a$. We then use the typical set in our achievability proof of the nonasymptotic coding theorem. Without loss of generality, we assume that $a\geq 0$ in this paper, since, otherwise, we can consider another random process $\{U'_i\}_{i = 1}^{\infty}$ defined by the invertible mapping $U'_i \triangleq (-1)^{i} U_i$ that satisfies $ U'_i = (-a) U'_{i-1} + (-1)^i Z_{i}$, where $(-1)^i Z_{i}$'s are also independent zero-mean Gaussian random variables with variance $\sigma^2$. We distinguish the following three cases: \begin{itemize} \item $0< a <1$: the asymptotically stationary case; \item $a=1$: the unit-root case; \item $a>1$: the nonstationary case. \end{itemize} In this paper, we mostly focus on the nonstationary case. \subsection{Motivations} Estimation of parameters of stochastic processes from their realizations has many applications. In the statistical analysis of economic time series~\cite{mann1943statistical, haavelmo1943statistical, koopmans1942serial}, the Gauss-Markov process $\{U_i\}_{i=1}^{\infty}$ is used to model the time-varying price of a certain commodity, and the ML estimate of the unknown coefficient $a$ is then used to predict future prices. In~\cite{gould1974stochastic} and~\cite[Sec. 5]{dickey1979distribution}, the Gauss-Markov process with $a = 1$ is used to model the stochastic structure of the velocity of money. The Gauss-Markov process, also known as the autoregressive process of order 1 (AR(1)), is a special case of the general autoregressive-moving-average (ARMA) model~\cite{whittle1951hypothesis, box1970time}, for which various estimation and prediction procedures have been proposed, e.g., the Box-Jenkins method~\cite{box1970time}. The Gauss-Markov process is also a special case of the linear state-space model (e.g.~\cite[Chap. 5]{kailath2000linear}) that is popular in control theory. One of the problems in control is system identification~\cite{ljung1987system}, which is the problem of building mathematical models of unknown dynamical systems from measured data. Parameter estimation is one of the common methods used in system identification, where the dynamical system is modeled by a state-space model~\cite[Chap. 7]{ljung1987system} with unknown parameters. In modern data-driven control systems, where the goal is to control an unknown nonstationary system given measured data, parameter estimation methods are used as a first step in designing controllers~\cite{rantzer2018concentration}~\cite[Sec. 1.2]{tu2019sample}. In speech signal processing, the linear predictive coding algorithm~\cite{atal1971speech} relies on parameter estimation (the ordinary least squares estimate, or, equivalently, the maximum likelihood estimate assuming Gaussian noise) to fit a higher-order Gauss-Markov process; see~\cite[App. C]{atal1971speech}. A fine-grained analysis of the ML estimate is instrumental in optimizing the design of all these systems. 
Our nonasymptotic analysis leading up to a large deviation bound for the ML estimate in our simple setting can provide insights for analyzing more complex random processes, e.g., higher-order autoregressive processes and vector systems. Understanding finite-blocklength lossy compression of the Gauss-Markov process fits into a continuing effort by many researchers to advance the rate-distortion theory of information sources with memory, see~\cite{kolmogorov1956shannon, berger1968rate, berger1970information, gray1970information, gray1971markov, wyner1971bounds, hashimoto1980rate, kontoyiannis2000pointwise, dembo2002source, kontoyiannis2003pattern, kontoyiannis2006mismatched, venkataramanan2007source, gray2008note, kontoyiannis2002arbitrary, dembo1999asymptotics, madiman2004minimum}, as well as into a newer push~\cite{marton1974error, zhang1997redundancy, yang1999redundancy, ingber2011dispersion, kostina2012fixed, kostina2013lossy, tan2014dispersions, watanabe2017second, dispersionJournal, zhou2016discrete, zhou2017second} to understand the fundamental limits of low latency communication. There is a tight connection between lossy compression of the nonstationary Gauss-Markov process and control of an unstable linear system under communication constraints~\cite{tatikonda2004stochastic, kostina2019rate}. Namely, the minimum channel capacity needed to achieve a given LQG (linear quadratic Gaussian) cost for the plant~\cite[Eq. (1)]{tatikonda2004stochastic} is lower-bounded by the causal rate-distortion function of the Gauss-Markov process~\cite[Eq. (9)]{tatikonda2004stochastic}. See~\cite[Th. 1]{kostina2019rate} for more details. Being more restrictive on the coding schemes, the causal rate-distortion function is further lower-bounded by the traditional rate-distortion function. The result in this paper on the rate-distortion tradeoff in the finite blocklength regime provides a lower bound on the minimum communication rate required to ensure that the LQG cost stays below a desired threshold with desired probability at the end of a finite horizon. Finally, the aforementioned linear predictive coding algorithm~\cite{atal1971speech} is connected to lossy compression of autoregressive processes, see a recent historical note by Gray~\cite[p.2]{gray2020in}. \subsection{Notations} For $n\in \mathbb{N}$, we use $[n]$ to denote the set $\{1, 2, ..., n\}$. We use the standard notations for the asymptotic behaviors $O(\cdot), o(\cdot)$, $\Theta(\cdot)$, $\Omega(\cdot)$ and $\omega(\cdot)$. Namely, let $f(n)$ and $g(n)$ be two functions of $n$, then $f(n) =O(g(n))$ means that there exists a constant $c>0$ and $n_0\in\mathbb{N}$ such that $|f(n)|\leq c |g(n)|$ for any $n\geq n_0$; $f(n) = o(g(n))$ means $\lim_{n\rightarrow\infty} f(n) /g(n) = 0$; $f(n) = \Theta(g(n))$ means there exist positive constants $c_1, c_2$ and $n_0\in\mathbb{N}$ such that $c_1 g(n) \leq f(n) \leq c_2 g(n)$ for any $n\geq n_0$; $f(n) = \Omega(g(n))$ if and only if $g(n) = O(f(n))$; and $f(n) = \omega(g(n))$ if and only if $\lim_{n\rightarrow\infty} f(n) / g(n) = +\infty$. For a matrix $\mtx{M}$, we denote by $\mtx{M}'$ its transpose, by $\|\mtx{M}\|$ its operator norm (the largest singular value) and by $\mu_1(\mtx{M}) \leq \ldots \leq \mu_n(\mtx{M})$ its eigenvalues listed in nondecreasing order. We use $\set{S}^c$ to denote the complement of a set $\set{S}$. All logarithms and exponentials are base $e$. 
\section{Parameter Estimation} \label{sec:mainresults} \subsection{Nonasymptotic Lower Bounds} \label{subsec:pa} We first present our nonasymptotic bounds on $P^+(n, a, \eta)$ and $P^-(n, a, \eta)$, defined in~\eqref{def:pplus} and~\eqref{def:pminus} above, respectively. We define two sequences $\{\alpha_\ell\}_{\ell\in\mathbb{N}}$ and $\{\beta_\ell\}_{\ell\in\mathbb{N}}$ as follows. Let $\sigma^2 > 0$ and $a>1$ be fixed constants. For $\eta>0$ and a parameter $s>0$, let $\alpha_\ell$ be the following sequence \begin{align} \alpha_1 &\triangleq \frac{\sigma^2s^2 - 2\eta s}{2}, \label{alpha1}\\ \alpha_\ell & = \frac{\left [ a^2 + 2\sigma^2s(a+\eta)\right ]\alpha_{\ell - 1} + \alpha_1}{1 - 2\sigma^2\alpha_{\ell-1}},\quad \forall \ell\geq 2. \label{alphaEll} \end{align} Similarly, let $\beta_\ell$ be the following sequence \begin{align} \beta_1 &\triangleq \frac{\sigma^2s^2 - 2\eta s}{2}, \label{beta1}\\ \beta_\ell & = \frac{\left [a^2 + 2\sigma^2s(-a+\eta)\right ]\beta_{\ell - 1} + \beta_1}{1 - 2\sigma^2\beta_{\ell-1}},\quad \forall \ell\geq 2.\label{betaEll} \end{align} Note the subtle difference between~\eqref{alphaEll} and~\eqref{betaEll}: there is a negative sign in the numerator in \eqref{betaEll}. Both sequences depend on $\eta$ and $s$. We derive closed-form expressions and analyze the convergence properties of $\alpha_{\ell}$ and $\beta_{\ell}$ in Appendices~\ref{app:seqA} and~\ref{app:seqB} below. For $\eta>0$ and $n\in\mathbb{N}$, we define the following sets \begin{align} \mathcal{S}_n^+ \triangleq \left\{s\in\mathbb{R}\colon s>0,~\alpha_{\ell} < \frac{1}{2\sigma^2},~\forall\ell\in [n]\right\}, \label{Splus}\\ \mathcal{S}_n^- \triangleq \left\{s\in\mathbb{R}\colon s>0,~\beta_{\ell} < \frac{1}{2\sigma^2},~\forall\ell\in [n]\right\}.\label{Sminus} \end{align} \begin{theorem} \label{thm:chernoff} For any constant $\eta > 0$, the estimator~\eqref{eqn:MLEintro} satisfies for any $n\geq 2$, \begin{align} P^+(n, a, \eta) &\geq \sup_{s \in \mathcal{S}_n^+}~ \frac{1}{2n}\sum_{\ell = 1}^{n-1} \log\LRB{1 - 2\sigma^2\alpha_\ell}, \label{eqn:chernoffUpper}\\ P^-(n, a, \eta) &\geq \sup_{s \in\mathcal{S}_n^-}~ \frac{1}{2n}\sum_{\ell = 1}^{n-1} \log\LRB{1 - 2\sigma^2\beta_\ell}, \label{eqn:chernoffLower} \end{align} where $\alpha_{\ell}$ and $\beta_{\ell}$ are defined in~\eqref{alphaEll} and~\eqref{betaEll}, respectively, and $\mathcal{S}_n^+$ and $\mathcal{S}_n^-$ are defined in~\eqref{Splus} and~\eqref{Sminus}, respectively. \end{theorem} Theorem~\ref{thm:chernoff} is a useful result for numerically computing lower bounds on $P^+(n, a, \eta)$ and $P^-(n, a, \eta)$. In Fig.~\ref{fig:compare}, we plot our lower bounds in Theorem~\ref{thm:chernoff}, previous results in~\eqref{eqn:rantzer} by Rantzer and~\eqref{eqn:bercu} by Bercu and Touati, and a simulation result. As one can see, our bound in Theorem~\ref{thm:chernoff} is much tighter than previous results. The proof of Theorem~\ref{thm:chernoff}, presented in Appendix~\ref{app:pfThmChernoff} below, is a detailed analysis of the Chernoff bound using the tower property of conditional expectations. The proof is motivated by~\cite[Lem. 5]{rantzer2018concentration}, but our analysis is more accurate and the result is significantly tighter, see Fig.~\ref{fig:compare} and Fig.~\ref{fig:compareLim} for comparisons. 
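As an illustration of how the right side of~\eqref{eqn:chernoffUpper} can be evaluated in practice, the following Python sketch iterates the recursion~\eqref{alpha1}--\eqref{alphaEll}, enforces membership in $\mathcal{S}_n^+$ from~\eqref{Splus}, and takes the supremum over a finite grid of $s$ values; the grid and the parameter values are illustrative choices only, and any grid point outside $\mathcal{S}_n^+$ is simply discarded.
\begin{verbatim}
import numpy as np

def chernoff_lower_bound(n, a, sigma2, eta, s_grid):
    # Evaluates the right side of (eqn:chernoffUpper):
    #   sup over s in S_n^+ of (1/(2n)) * sum_{l=1}^{n-1} log(1 - 2*sigma2*alpha_l),
    # with alpha_l given by the recursion (alpha1)-(alphaEll).
    best = -np.inf
    for s in s_grid:
        alpha1 = (sigma2 * s ** 2 - 2.0 * eta * s) / 2.0   # (alpha1)
        alpha = alpha1
        feasible = alpha < 1.0 / (2.0 * sigma2)             # membership in S_n^+
        log_sum = 0.0
        for ell in range(2, n + 1):
            if not feasible:
                break
            log_sum += np.log(1.0 - 2.0 * sigma2 * alpha)    # term l = ell - 1
            alpha = ((a ** 2 + 2.0 * sigma2 * s * (a + eta)) * alpha + alpha1) / (
                1.0 - 2.0 * sigma2 * alpha)                  # (alphaEll)
            feasible = alpha < 1.0 / (2.0 * sigma2)
        if feasible:
            best = max(best, log_sum / (2.0 * n))
    return best

# Illustrative values matching Fig. (fig:compare): a = 1.2, eta = 1e-3, sigma^2 = 1.
print(chernoff_lower_bound(n=200, a=1.2, sigma2=1.0, eta=1e-3,
                           s_grid=np.linspace(1e-5, 4e-3, 400)))
\end{verbatim}
A grid search suffices here because every $s\in\mathcal{S}_n^+$ yields a valid lower bound on $P^+(n, a, \eta)$, so refining the grid can only improve the computed value.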
One recovers Rantzer's lower bound~\eqref{eqn:rantzer} by setting $s = \eta / \sigma^2$ and bounding $\alpha_{\ell}$ as $\alpha_{\ell} \leq \alpha_1$ (due to the monotonicity of $\alpha_{\ell}$ shown in Appendix~\ref{app:seqA} below) in Theorem~\ref{thm:chernoff}. We explicitly state where we diverge from \cite[Lem. 5]{rantzer2018concentration} in the proof in Appendix~\ref{app:pfThmChernoff} below. \begin{remark} In view of the G{\"a}rtner-Ellis theorem~\cite[Th. 2.3.6]{dembo1994zeitouni}, we conjecture that the bounds~\eqref{eqn:chernoffUpper} and~\eqref{eqn:chernoffLower} can be reversed in the limit of large $n$: \begin{align} \limsup_{n\to\infty} P^+(n, a, \eta) \leq \limsup_{n\to\infty}\sup_{s \in \mathcal{S}_n^+}~ \frac{1}{2n}\sum_{\ell = 1}^{n-1} \log\LRB{1 - 2\sigma^2\alpha_\ell}, \end{align} and similarly for~\eqref{eqn:chernoffLower}. \end{remark} \begin{figure}[!htb] \centering \psfrag{a}[lc][][0.8][0]{simulation} \psfrag{b}[lc][][0.8][0]{Bercu and Touati~\eqref{eqn:bercu}} \psfrag{c}[lc][][0.8][0]{Rantzer~\eqref{eqn:rantzer}} \psfrag{d}[lc][][0.8][0]{\eqref{eqn:chernoffUpper} in Theorem~\ref{thm:chernoff}} \psfrag{e}[lc][][0.8][0]{\eqref{eqn:PplusL} in Theorem~\ref{thm:upperLDP}} \includegraphics[width=0.48\textwidth]{compare.eps} \caption{Numerical simulations and lower bounds on $P^+(n, a, \eta)$. We choose $a = 1.2$ and $\eta = 10^{-3}$. The ``simulation" curve is obtained as follows. For each $n$, we generate $N = 10^6$ independent samples $u_1^n$ from the Gauss-Markov process~\eqref{eqn:GMmodel}. We approximate $P^+(n, a, \eta)$ by $-\frac{1}{n}\log \left(\frac{1}{N}\#\left\{\text{samples }u_1^n\text{ with } \hat{a}_{\text{ML}}(\bfu) - a > \eta\right\}\right)$, which is shown by the ``simulation" curve. } \label{fig:compare} \end{figure} \subsection{Asymptotic Lower Bounds} \label{subsec:apa} We next present our bounds on the error exponents, that is, the limits of $P^{+}(n, a, \eta)$, $P^{-}(n, a, \eta)$ and $P(n, a, \eta)$ as $n$ tends to infinity. To take limits using~\eqref{eqn:chernoffUpper} and~\eqref{eqn:chernoffLower}, we need to understand the two sequences of sets $\mathcal{S}_n^+$ and $\mathcal{S}_n^-$. Define the limits of the sets as \begin{align} \mathcal{S}_\infty^{+} &\triangleq \bigcap_{n\geq 1}\mathcal{S}_n^+,\\ \mathcal{S}_\infty^{-} &\triangleq \bigcap_{n\geq 1}\mathcal{S}_n^-. \end{align} We have the following properties. \begin{lemma} \label{lemma:limiting} Fix any constant $\eta>0$. \begin{itemize} \item (Monotone decreasing sets) For any $n\geq 1$, we have \begin{align} \mathcal{S}_{n+1}^{+} \subseteq \mathcal{S}_{n}^{+},\quad \mathcal{S}_{n+1}^{-} \subseteq \mathcal{S}_{n}^{-} . \end{align} \item (Limits of the sets) It holds that \begin{align} \mathcal{S}_\infty^{+} = \left(0,~\frac{2\eta}{\sigma^2}\right], \label{plus} \end{align} \begin{align} \mathcal{S}_\infty^{-} \supsetneqq\left(0,~\frac{2\eta}{\sigma^2}\right]. \label{minus} \end{align} \end{itemize} \end{lemma} The proof of Lemma~\ref{lemma:limiting} is presented in Appendix~\ref{app:pfLemLimiting} below. The exact characterization of $\mathcal{S}_{n}^+$ and $\mathcal{S}_n^-$ for each $n$ using $\eta$ is involved. One can see from the definitions~\eqref{Splus} and~\eqref{Sminus} that \begin{align} \mathcal{S}_1^+ = \mathcal{S}_1^- = \left\{s\in\mathbb{R}\colon 0 < s < \frac{\eta + \sqrt{1 +\eta^2}}{\sigma^2}\right\}. 
\end{align} To obtain the set $\mathcal{S}_{n+1}^+$ from $\mathcal{S}_{n}^+$, we need to solve $\alpha_{n+1} < \frac{1}{2\sigma^2}$, which is equivalent to solving an additional inequality involving a polynomial of degree $n+2$ in $s$ (using the closed-form expression for $\alpha_{n+1}$ in~\eqref{eqn:expAlpha} in Appendix~\ref{app:seqA} below). Fig.~\ref{fig:set} presents a plot of $\mathcal{S}_n^+$ for $n= 1, ..., 5$. Despite the complexity of the sets $\mathcal{S}_n^+$ and $\mathcal{S}_n^-$, Lemma~\ref{lemma:limiting} shows their monotonicity property and limits. \begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth]{set.eps} \caption{Numerical computation of the sets $\mathcal{S}_n^+$ for $a = 1.2$ and $\eta = 0.1$. Each horizontal line corresponds to $n= 1, ..., 5$ in the bottom-up order. Within each horizontal line, the red thick parts denote the ranges of $s$ for which $\alpha_{n} < \frac{1}{2\sigma^2}$, and the blue thin region is where $\alpha_{n} \geq \frac{1}{2\sigma^2}$. The plot for $\mathcal{S}_n^-$ is similar.} \label{fig:set} \end{figure} Combining Theorem~\ref{thm:chernoff} and Lemma~\ref{lemma:limiting}, we obtain the following lower bounds on the error exponents. The proof is given in Appendix~\ref{app:pfupperLDP} below. \begin{theorem} \label{thm:upperLDP} Fix any constant $\eta>0$. For the ML estimator~\eqref{eqn:MLEintro}, the following three inequalities hold: \begin{align} \liminf_{n\rightarrow \infty}~P^+(n, a, \eta) & \geq I^+(a, \eta) \triangleq \log (a + 2\eta ), \label{eqn:PplusL} \\ \liminf_{n\rightarrow \infty}~P^-(n, a, \eta) &\geq I^-(a, \eta), \label{eqn:PminusL} \\ \liminf_{n\rightarrow \infty}~P(n, a, \eta) &\geq I^-(a, \eta), \label{eqn:PL} \end{align} where \begin{align} I^-(a, \eta) \triangleq \begin{cases} \log a, & 0< \eta \leq \eta_1,\\ \frac{1}{2}\log \frac{2a\eta - (a^2 - 1)}{1-(\eta-a)^2}, & \eta_1 < \eta < \eta_2, \\ \log(2\eta - a),& \eta \geq \eta_2, \end{cases} \end{align} with the thresholds $\eta_1$ and $\eta_2$ given by \begin{align} \eta_1 &\triangleq \frac{a^2 - 1}{a}, \label{eta1}\\ \eta_2 &\triangleq \frac{3a + \sqrt{a^2+8}}{4}.\label{eta2} \end{align} \end{theorem} \begin{remark} The results in~\eqref{plus}-\eqref{minus} and~\eqref{eqn:PplusL}-\eqref{eqn:PminusL} indicate the asymmetry between $P^+(n, a, \eta)$ and $P^-(n, a, \eta)$: the set $\mathcal{S}_{\infty}^-$ has a larger range than $\mathcal{S}_{\infty}^+$, and $I^+(a,\eta) > I^-(a,\eta)$, which suggests that the maximum likelihood estimator $\hat{a}_{\text{ML}}(\bfU)$ is more likely to underestimate $a$ than to overestimate it. \end{remark} Fig.~\ref{fig:compareLim} presents a comparison of~\eqref{eqn:PL}, Rantzer's bound~\eqref{eqn:rantzer} and Bercu and Touati~\eqref{eqn:bercu}. Our bound~\eqref{eqn:PL} is tighter than both of them for any $\eta>0$. \begin{figure}[!htb] \centering \psfrag{a}[lc][][0.8][0]{Rantzer~\eqref{eqn:rantzer}} \psfrag{b}[lc][][0.8][0]{Bercu and Touati~\eqref{eqn:bercu}} \psfrag{c}[lc][][0.8][0]{\eqref{eqn:PL} in Theorem~\ref{thm:upperLDP}} \includegraphics[width=0.48\textwidth]{compareLimn.eps} \caption{Lower bounds on $\liminf_{n\to\infty}P(n, a, \eta)$ for $a = 1.2$.} \label{fig:compareLim} \end{figure} \subsection{Decreasing Error Thresholds} \label{subsec:apade} When the number of samples $n$ increases, it is natural to have error threshold $\eta$ decrease. In this section, we consider the regime where the error threshold $\eta = \eta_n>0$ is a sequence decreasing to 0. 
In this setting, Theorem~\ref{thm:chernoff} still holds and the proof stays the same, except that we replace $\alpha_{\ell}$ and $\beta_{\ell}$ by the length-$n$ sequences $\alpha_{n, \ell}$ and $\beta_{n, \ell}$ for $\ell = 1, \ldots, n$, respectively, where $\alpha_{n, \ell}$ and $\beta_{n, \ell}$ now depend on $\eta_n$ instead of a constant $\eta$: \begin{align} \alpha_{n,1} &\triangleq \frac{\sigma^2s^2 - 2\eta_n s}{2}, \label{alpha1n}\\ \alpha_{n,\ell} & = \frac{[a^2 + 2\sigma^2s(a+\eta_n)]\alpha_{n, \ell - 1} + \alpha_{n,1}}{1 - 2\sigma^2\alpha_{n, \ell-1}},\quad \forall \ell = 2,\ldots, n. \label{alphaElln} \end{align} The sequence $\beta_{n,\ell}$ is defined in a similar way. For Theorem~\ref{thm:upperLDP} to remain valid, we require that $\eta_n$ decay more slowly than $1/\sqrt{n}$, which ensures that the right sides of~\eqref{eqn:chernoffUpper}-\eqref{eqn:chernoffLower} still converge to the right sides of~\eqref{eqn:PplusL}-\eqref{eqn:PminusL}, respectively. Let $\eta_n$ be a positive sequence such that \begin{align} \eta_n = \omega\left(\frac{1}{\sqrt{n}}\right). \label{assumption:etan} \end{align} \begin{theorem} \label{thm:decreasingeta} For any $\sigma^2>0$ and $a>1$, let $\eta_n>0$ be a positive sequence satisfying~\eqref{assumption:etan}. Then, Theorem~\ref{thm:chernoff} holds with $\alpha_{\ell}$ replaced by $\alpha_{n, \ell}$, and $\beta_{\ell}$ by $\beta_{n, \ell}$, and Theorem~\ref{thm:upperLDP} holds with~\eqref{eqn:PplusL} and~\eqref{eqn:PminusL} replaced, respectively, by \begin{align} \liminf_{n\rightarrow \infty} P^+(n, a, \eta_n) &\geq \log a, \label{eqn:decreasingBDp}\\ \liminf_{n\rightarrow \infty} P^-(n, a, \eta_n) &\geq \log a.\label{eqn:decreasingBDm} \end{align} \end{theorem} The proof of Theorem~\ref{thm:decreasingeta} is presented in Appendix~\ref{app:pfdecreasingeta} below. Theorem~\ref{thm:decreasingeta} is quite a strong result, as it states that even if the error threshold is a sequence decreasing to zero, as long as~\eqref{assumption:etan} is satisfied, the probability of the estimation error exceeding such decreasing thresholds is still exponentially small, with exponent at least $\log a$. \begin{corollary} \label{cor:disp} For any $\sigma^2 > 0$ and any $a>1$, there exists a constant $c\geq \frac{1}{2}\log(a)$ such that for all $n$ large enough, \begin{align} \prob{|\hat{a}_{\text{ML}}(\bfU)- a| \geq \sqrt{\frac{\log\log n}{n}}} \leq 2e^{-cn}. \end{align} \end{corollary} Corollary~\ref{cor:disp} is used in Section~\ref{subsec:dispersion} below to derive the dispersion of nonstationary Gauss-Markov sources. The proof of Corollary~\ref{cor:disp} is by applying Theorem~\ref{thm:decreasingeta} with $\eta_n$ chosen as \begin{align} \eta_n = \sqrt{\frac{\log\log n}{n}}. \end{align} \subsection{Generalization to sub-Gaussian $Z_i$'s} \label{subsec:gen2sub} In this section, we generalize the above results to the case where $Z_i$'s in~\eqref{eqn:GMmodel} are zero-mean sub-Gaussian random variables. This general result is of independent interest and will not be used in the rest of the paper. \begin{definition}[sub-Gaussian random variable, e.g. {\cite[Def. 2.7]{wainwright2019}}] Fix $\sigma>0$. A random variable $Z\in\mathbb{R}$ with mean $\mu$ is said to be $\sigma$-sub-Gaussian with variance proxy $\sigma^2$ if its moment-generating function (MGF) satisfies \begin{align} \mathbb{E}[e^{s(Z - \mu)}] \leq e^{\frac{\sigma^2 s^2}{2}}, \label{def:subG} \end{align} for all $s \in \mathbb{R}$. 
\end{definition} One important property of $\sigma$-sub-Gaussian random variables is the following well-known bound on the MGF of quadratic functions of $\sigma$-sub-Gaussian random variables. \begin{lemma}[{\cite[Prop. 2]{rantzer2018concentration}}] \label{lem:subG} Let $Z$ be a $\sigma$-sub-Gaussian random variable with mean $\mu$. Then \begin{align} \mathbb{E} \left[ \exp (s Z^2) \right] \leq \frac{1}{\sqrt{1 - 2\sigma^2 s}}\exp\left(\frac{s \mu^2}{1 - 2\sigma^2 s}\right)\label{property:subG} \end{align} for any $s < \frac{1}{2\sigma^2}$. \end{lemma} Equality holds in~\eqref{def:subG} and~\eqref{property:subG} when $Z$ is Gaussian. In particular, the right side of~\eqref{property:subG} is the MGF of the noncentral $\chi^2$-distributed random variable $Z^2$. \begin{theorem}[Generalization to sub-Gaussian case] \label{thm:subgaussian} Theorems~\ref{thm:chernoff}--\ref{thm:decreasingeta} and Lemma~\ref{lemma:limiting} remain valid for the estimator~\eqref{eqn:MLEintro} when $Z_i$'s in~\eqref{eqn:GMmodel} are i.i.d. zero-mean $\sigma$-sub-Gaussian random variables. \end{theorem} The generalizations of Theorems~\ref{thm:chernoff}--\ref{thm:decreasingeta} and Lemma~\ref{lemma:limiting} from Gaussian to sub-Gaussian $Z_i$'s only require minor changes in the corresponding proofs. See Appendix~\ref{app:subG} for the details. \section{The Dispersion of a Nonstationary Gauss-Markov Source} \label{subsec:preliminaries} \subsection{Rate-distortion functions} \label{subsubsec:RW} For a generic random process $\{X_i\}_{i=1}^{\infty}$, the $n$-th order (informational) rate-distortion function $\mathbb{R}_{X_1^n}(d)$ is defined as \begin{align} \mathbb{R}_{X_1^n}(d) \triangleq \inf_{\substack{P_{Y_1^n | X_1^n}:\\\EX{\dis{X_1^n}{Y_1^n}}\leq d}} ~\frac{1}{n}I(X_1^n; Y_1^n), \label{eqn:nRDF} \end{align} where $X_1^n \triangleq (X_1, \ldots, X_n)'$ is the $n$-dimensional random vector determined by the random process, $I(X_1^n; Y_1^n)$ is the mutual information between $X_1^n$ and $Y_1^n$, $d$ is a given distortion threshold, and $\dis{\cdot}{\cdot}$ is the distortion measure defined in~\eqref{eqn:disdef} in Sec.~\ref{subsec:nrdt} above. The rate-distortion function $\mathbb{R}_X(d)$ is defined as \begin{align} \mathbb{R}_X(d) \triangleq \limsup_{n\rightarrow \infty}~\mathbb{R}_{X_1^n}(d). \end{align} For a wide class of sources, $\mathbb{R}_X(d)$ has been shown to be equal to the minimum achievable source coding rate under the average distortion criterion, in the limit of $n\to\infty$, see~\cite{shannon1959coding} for discrete memoryless sources and~\cite{goblick1969coding} for general ergodic sources. In particular, Gray's coding theorem~\cite[Th. 2]{gray1970information} for the Gaussian autoregressive processes directly implies that for the Gauss-Markov source $\{U_i\}_{i=1}^{\infty}$ in~\eqref{eqn:GMmodel} for any $a\in\mathbb{R}$, its rate-distortion function $\mathbb{R}_{U}(d)$ equals the minimum achievable source coding rate under the average distortion criterion as $n$ tends to infinity. 
The $n$-th order rate-distortion function $\mathbb{R}_{U_1^n}(d)$ of the Gauss-Markov source is given by the $n$-th order reverse waterfilling, e.g.~\cite[Eq.~(22)]{gray1970information}: \begin{align} \mathbb{R}_{U_1^n} (d) &= \frac{1}{n}\sum_{i = 1}^n \frac{1}{2}\log \max\left(\mu_{n, i},~\frac{\sigma^2}{\theta_n}\right), \label{eqn:NRDF1}\\ d &= \frac{1}{n}\sum_{i = 1}^n \min\left(\theta_n, \frac{\sigma^2}{\mu_{n,i}}\right),\label{eqn:NRDF2} \end{align} where $\theta_n > 0$ is the $n$-th order water level, and $\mu_{n, i}$'s for $i\in [n]$ (sorted in nondecreasing order) are the eigenvalues of the $n\times n$ matrix $\mtx{F}'\mtx{F}$ with $\mtx{F}$ being an $n\times n$ lower triangular matrix defined as \begin{align} (\mtx{F})_{ij} \triangleq \begin{cases} 1, & i=j, \\ -a, & i = j+1,\\ 0, & \text{otherwise.} \end{cases} \label{def:A} \end{align} One can check that $\sigma^2 (\mtx{F}'\mtx{F})^{-1}$ is the covariance matrix of $U_1^n$. The way that one uses~\eqref{eqn:NRDF1}-\eqref{eqn:NRDF2} is to first solve the $n$-th order water level $\theta_n$ using~\eqref{eqn:NRDF2} for a given distortion threshold $d$, and then to plug that water level into~\eqref{eqn:NRDF1} to obtain $\mathbb{R}_{U_1^n}(d)$. The rate-distortion function $\mathbb{R}_{U}(d)$ of the Gauss-Markov source is given by the limiting reverse waterfilling: \begin{align} \mathbb{R}_U(d) &= \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{2}\log\max\LRB{g(w),~\frac{\sigma^2}{\theta}}~dw, \label{eqn:RWR}\\ d &= \frac{1}{2\pi}\int_{-\pi}^{\pi}\min\LRB{\theta,~\frac{\sigma^2}{g(w)}}~dw, \label{eqn:RWD} \end{align} where $\theta>0$ is the limiting water level and $g(w)$ is a function from $[-\pi, \pi]$ to $\mathbb{R}$ given by \begin{align} g(w) &\triangleq 1 + a^2 -2a\cos(w). \label{eqn:g} \end{align} The rate-distortion function of the Gaussian memoryless source $\{Z_i\}_{i=1}^{\infty}$ (the special case when $a$ is set to 0 in the Gauss-Markov model) is~\cite{shannon1959coding} \begin{align} \mathbb{R}_Z(d) = \max\left(0, ~\frac{1}{2}\log\frac{\sigma^2}{d}\right). \label{rdfZ} \end{align} One can obtain~\eqref{rdfZ} from~\eqref{eqn:RWR}-\eqref{eqn:RWD} by noting that $g(w) = 1$ for $a = 0$, which further simplifies~\eqref{eqn:RWD} to $d = \theta$, and~\eqref{eqn:RWR} to~\eqref{rdfZ}. See Fig.~\ref{fig:RD} for a plot of $\mathbb{R}_U(d)$ and $\mathbb{R}_Z(d)$. \begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth]{RD.eps} \caption{Rate-distortion functions: $\mathbb{R}_{U}(d)$ in~\eqref{eqn:RWR} with $a = 1.2$, and $\mathbb{R}_{Z}(d)$ in~\eqref{rdfZ}.} \label{fig:RD} \end{figure} \subsection{Operational Dispersion} \label{subsubsec:dispersion} To characterize the convergence rate of the minimum achievable source coding rate $R(n,d, \epsilon)$ (defined in~\eqref{def:Rnde} in Section~\ref{subsec:nrdt} above) to the rate-distortion function, we define the operational dispersion $V_U(d)$ for the Gauss-Markov source as \begin{align} V_U(d) \triangleq \lim_{\epsilon\rightarrow 0} \limsup_{n\rightarrow \infty} n\LRB{\frac{R(n,d, \epsilon) - \mathbb{R}_U(d)}{Q^{-1}(\epsilon)}}^2, \label{eqn:opdis} \end{align} where $Q^{-1}$ denotes the inverse Q-function. The main result in the second part of this paper gives $V_U(d)$ for the nonstationary Gauss-Markov source. \subsection{Informational Dispersion} \label{subsubsec:dtiltedinfo} The $\mathsf{d}$-tilted information~\cite[Def. 6]{kostina2012fixed} is the key random variable in our nonasymptotic analysis of $R(n, d, \epsilon)$. 
Under other names, the $\mathsf{d}$-tilted information has also been studied by Blahut~\cite[Th. 4]{blahut1972computation} and Kontoyiannis~\cite[Sec. III-A]{kontoyiannis2000pointwise}. Using the definition in~\cite[Def. 6]{kostina2012fixed}, the $\mathsf{d}$-tilted information $\jmath_{U_1^n}(u_1^n, d)$ in $u_1^n$ is \begin{align} \jmath_{U_1^n}(u_1^n, d) \triangleq -\lambda_n^\star d - \log\mathbb{E}\exp\lpara{-\lambda_n^\star\mathsf{d}(u_1^n, V_1^{\star n})}, \label{dtiltedGM} \end{align} where $\lambda_n^\star$ is the negative slope of $\mathbb{R}_{U_1^n}(d)$ at the distortion level $d$ and $V_1^{\star n}$ is the random variable that achieves the infimum in~\eqref{eqn:nRDF} for $U_1^n$. In~\cite[Lem. 7, Eq. (228)]{dispersionJournal}, by a decorrelation argument, we obtained the following expression for the $\mathsf{d}$-tilted information for the Gauss-Markov source: for any $a\in\mathbb{R}$ and any $n\in\mathbb{N}$, \begin{align} \jmath_{U_1^n}\left(u_1^n, d\right) & = \sum_{i = 1}^n \frac{\min(\theta_n, \sigma_{n, i}^2)}{2\theta_n}\left(\frac{x_i^2}{\sigma_{n,i}^2} - 1\right) + \notag \\&\quad \frac{1}{2}\sum_{i = 1}^n \log\frac{\max(\theta_n, \sigma_{n, i}^2)}{\theta_n}, \label{eqn:dtiexp} \end{align} where $\theta_n>0$ is given by~\eqref{eqn:NRDF2}, $x_1^n\triangleq \mtx{S}' u_1^n$ with $\mtx{S}$ being an $n\times n$ orthonormal matrix that diagonalizes $(\mtx{F}'\mtx{F})^{-1}$, and \begin{align} \sigma_{n,i}^2\triangleq \frac{\sigma^2}{\mu_{n,i}} \label{eqn:sigmai} \end{align} with $\mu_{n,i}$'s being the eigenvalues of the $n\times n$ matrix $\mtx{F}'\mtx{F}$. We refer to the random variable $X_1^n$, defined by \begin{align} X_1^n \triangleq \mtx{S}' U_1^n, \label{decor} \end{align} as the decorrelation of $U_1^n$. Note that the decorrelation $X_1^n$ has independent coordinates and \begin{align} X_i \sim\mathcal{N}(0, \sigma_{n, i}^2).\label{eqn:Xi} \end{align} Using~\eqref{eqn:NRDF1}-\eqref{eqn:NRDF2} and~\eqref{eqn:Xi}, one can show~\cite[Eq. (55) and (228)]{dispersionJournal} that the $\mathsf{d}$-tilted information $\jmath_{U_1^n}(u_1^n, d)$ in $u_1^n$ for the Gauss-Markov source satisfies $\jmath_{U_1^n}(u_1^n, d) = \jmath_{X_1^n}(x_1^n, d)$. The minimum achievable source coding rates (defined in~\eqref{def:Rnde}) for lossy compression of $U_1^n$ and $X_1^n$ are equal, as are their rate-distortion functions: $\mathbb{R}_{U_1^n}(d) = \mathbb{R}_{X_1^n}(d)$, see~\cite[Sec. III.A]{dispersionJournal} for the details. It is known~\cite[Property 1]{kostina2012fixed} that the $\mathsf{d}$-tilted information $\jmath_{U_1^n}(u_1^n, d)$ satisfies (by the Karush-Kuhn-Tucker conditions for the optimization problem~\eqref{eqn:nRDF}) \begin{align} \mathbb{E}\lbrac{\jmath_{U_1^n}(U_1^n, d)} = \mathbb{R}_{U_1^n}(d). \end{align} The informational dispersion $\mathbb{V}_U(d)$ is defined as the limit of the variance of the $\mathsf{d}$-tilted information normalized by $n$: \begin{align} \mathbb{V}_U(d) \triangleq \limsup_{n\rightarrow \infty}~\frac{1}{n}\var{\jmath_{U_1^n}(U_1^n, d)}. \label{eqn:infdis} \end{align} By decorrelating the Gauss-Markov source $U_1^n$ and analyzing the limiting behavior of the eigenvalues of the covariance matrix of $U_1^n$, we obtain the following reverse waterfilling representation for the informational dispersion. The proof is given in Appendix~\ref{app:LemInfdis} below. 
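As an aside, the finite-$n$ quantities above are straightforward to evaluate numerically. The Python sketch below (for sanity checks only; it plays no role in the proofs) solves the reverse waterfilling~\eqref{eqn:NRDF1}--\eqref{eqn:NRDF2} by bisection and evaluates the $\mathsf{d}$-tilted information~\eqref{eqn:dtiexp} at a given realization; all numerical choices in it are illustrative.
\begin{verbatim}
import numpy as np

def nth_order_waterfilling(n, a, sigma2, d):
    # Solves the n-th order reverse waterfilling (eqn:NRDF1)-(eqn:NRDF2).
    F = np.eye(n) - a * np.eye(n, k=-1)        # lower bidiagonal matrix in (def:A)
    mu, S = np.linalg.eigh(F.T @ F)            # eigenvalues mu_{n,i} and eigenvectors
    var = sigma2 / mu                          # sigma_{n,i}^2 in (eqn:sigmai)
    lo, hi = 0.0, var.max()                    # (eqn:NRDF2) is nondecreasing in theta
    for _ in range(100):
        theta = 0.5 * (lo + hi)
        if np.mean(np.minimum(theta, var)) < d:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    rate = np.mean(0.5 * np.log(np.maximum(mu, sigma2 / theta)))   # (eqn:NRDF1)
    return theta, rate, var, S

def d_tilted_information(u, a, sigma2, d):
    # Evaluates (eqn:dtiexp) for a realization u_1^n of the Gauss-Markov source.
    n = len(u)
    theta, _, var, S = nth_order_waterfilling(n, a, sigma2, d)
    x = S.T @ u                                # decorrelation x_1^n = S' u_1^n, (decor)
    return (np.sum(np.minimum(theta, var) / (2.0 * theta) * (x ** 2 / var - 1.0))
            + 0.5 * np.sum(np.log(np.maximum(theta, var) / theta)))
\end{verbatim}
Setting $a = 0$ reduces the waterfilling to $\theta_n = d$ and the computed rate to~\eqref{rdfZ} for $d < \sigma^2$, which serves as a quick consistency check.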
\begin{lemma} \label{lemma:infdis} The informational dispersion of the nonstationary Gauss-Markov source is given by \begin{align} \mathbb{V}_U(d) = \frac{1}{4\pi}\int_{-\pi}^{\pi} \min\left[1,~\LRB{\frac{\sigma^2}{\theta g(w)}}^2\right]~dw, \label{eqn:dispersion} \end{align} where $\theta>0$ is given in~\eqref{eqn:RWD}, and $g$ is in~\eqref{eqn:g}. \end{lemma} Notice that the informational dispersion in the nonstationary case is given by the same expression as in the stationary case~\cite[Eq. (57)]{dispersionJournal}. It is known, e.g.~\cite[Eq. (94)]{kostina2012fixed} and~\cite[Sec. IV]{ingber2011dispersion}, that the informational dispersion for the Gaussian memoryless source $\{Z_i\}_{i=1}^{\infty}$ is \begin{align} \mathbb{V}_Z(d) = \frac{1}{2},\quad\forall d\in (0, \sigma^2). \label{disZ} \end{align} See Fig.~\ref{fig:DD} for a plot of $\mathbb{V}_U(d)$ and $\mathbb{V}_Z(d)$. \begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth]{DD.eps} \caption{Dispersions :$\mathbb{V}_{U}(d)$ in~\eqref{eqn:dispersion} with $a = 1.2$, and $\mathbb{V}_{Z}(d)$ in~\eqref{disZ}.} \label{fig:DD} \end{figure} \subsection{A Few Remarks} \label{subsubsec:CMD} In view of~\eqref{eqn:RWD}, there are two special water levels $\theta_{\min}$ and $\theta_{\max}$, defined as follows: \begin{align} \theta_{\min} \triangleq \min_{w\in [-\pi, \pi]}~\frac{\sigma^2}{g(w)} = \frac{\sigma^2}{(a+1)^2} \end{align} and \begin{align} \theta_{\max} \triangleq \max_{w\in [-\pi, \pi]}~\frac{\sigma^2}{g(w)} = \frac{\sigma^2}{(a-1)^2}. \end{align} The critical distortion $d_c$ is defined as the distortion corresponding to the water level $\theta_{\min}$. By~\eqref{eqn:RWD}, we have \begin{align} d_c = \theta_{\min} = \frac{\sigma^2}{(a+1)^2}. \end{align} The maximum distortion $d_{\max}$ is defined as the distortion corresponding to the water level $\theta_{\max}$. By~\eqref{eqn:RWD}, we have \begin{align} d_{\max} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\sigma^2}{g(w)}~dw. \label{eqn:integraldmax} \end{align} Using similar techniques as in~\cite[Eq. (169)--(172)]{dispersionJournal}, one can compute the integral in~\eqref{eqn:integraldmax} as \begin{align} d_{\max} = \frac{\sigma^2}{a^2 - 1}. \end{align} In this paper, we always consider a fixed distortion threshold $d$ such that $0 < d < d_{\max}$. \begin{remark} Gray~\cite[Eq. (24)]{gray1970information} showed the following relation between the rate-distortion function $\mathbb{R}_{U}(d)$ of the Gauss-Markov source and $\mathbb{R}_Z(d)$ of the Gaussian memoryless source: \begin{align} \begin{cases} \mathbb{R}_{U}(d) = \mathbb{R}_Z(d), & d\in (0, d_c], \\ \mathbb{R}_{U}(d) > \mathbb{R}_Z(d), & d\in ( d_c, d_{\mathrm{max}}). \end{cases} \label{rdf:comp} \end{align} Using Lemma~\ref{lemma:infdis} above, one can easily show (in the same way as~\cite[Cor. 1]{dispersionJournal}) that their dispersions are also comparable: \begin{align} \begin{cases} \mathbb{V}_{U}(d) = \mathbb{V}_Z(d), & d\in (0, d_c], \\ \mathbb{V}_{U}(d) < \mathbb{V}_Z(d), & d\in ( d_c, \sigma^2). \end{cases} \label{ddf:comp} \end{align} The results in~\eqref{rdf:comp}-\eqref{ddf:comp} imply that for low distortions $d\in (0, d_c)$, the minimum achievable source coding rate in compressing the Gauss-Markov source and the Gaussian memoryless source are the same up to second-order terms, a phenomenon we observed in the stationary case as well~\cite[Cor. 1]{dispersionJournal}. See Fig.~\ref{fig:RD} and Fig.~\ref{fig:DD} for a visualization of~\eqref{rdf:comp} and~\eqref{ddf:comp}, respectively. 
\end{remark} \begin{remark} For the function $\mathbb{R}_{U}(d)$, we show that \begin{align} \mathbb{R}_{U}(d_{\mathrm{max}}) = \log a. \label{rdfdmax} \end{align} This result has an interesting connection to the problem of control under communication constraints: in~\cite{wong1999systems}, \cite[Th. 1]{baillieul1999feedback}, and~\cite[Prop. 3.1]{tatikonda2004control}, it was shown that the minimum rate to asymptotically stabilize a linear, discrete-time, scalar system is also $\log a$. The result in~\eqref{rdfdmax} implies that stability cannot be attained with any rate lower than $\log a$ even if an infinite lookahead is allowed. The derivation of~\eqref{rdfdmax} is presented in Appendix~\ref{app:rudmax} below. \end{remark} \begin{remark} Let $P_1$ and $P_2$ be the two special points on the curve $\mathbb{V}_{U}(d)$ at distortions $d_c$ and $d_{\mathrm{max}}$, respectively. Then, the coordinates of $P_1$ and $P_2$ are given by \begin{align} P_1 = (d_c, 1/2), \quad P_2 = \LRB{d_{\mathrm{max}}, \frac{(1+a^2)(a-1)}{2(a+1)^3}}. \label{p1p2} \end{align} The derivation for $P_2$ is the same as that in the stationary case~\cite[Eq. (61)]{dispersionJournal} except that we need to compute the residue at $1/a$ instead of at $a$ since we now have $a>1$; see~\cite[App. B-A]{dispersionJournal} for details. \end{remark} \subsection{Second-order Coding Theorem} \label{subsec:dispersion} Our main result establishes the equality between the operational dispersion and the informational dispersion. \begin{theorem}[Gaussian approximation] \label{thm:dispersion} For the Gauss-Markov source~\eqref{eqn:GMmodel} with $a>1$, any fixed excess-distortion probability $\epsilon\in (0,1)$, and distortion threshold $d\in(0,d_{\mathrm{max}})$, it holds that \begin{align} V_U(d) = \mathbb{V}_U(d). \label{eqn:opinf} \end{align} \end{theorem} Specifically, we have the following converse and achievability results. \begin{theorem}[Converse] \label{thm:converse} For the Gauss-Markov source with $a>1$, any fixed excess-distortion probability $\epsilon\in (0,1)$, and distortion threshold $d\in(0,d_{\mathrm{max}})$, the minimum achievable source coding rate $R(n, d, \epsilon)$ satisfies \begin{align} R(n, d, \epsilon) \geq \mathbb{R}_{U}(d) + \sqrt{\frac{\mathbb{V}_U(d)}{n}} Q^{-1}(\epsilon) -\frac{\log n}{2n} + O\left(\frac{1}{n}\right), \end{align} where $Q^{-1}$ denotes the inverse Q-function, $\mathbb{R}_U(d)$ is the rate-distortion function given in~\eqref{eqn:RWR}, and $\mathbb{V}_U(d)$ is the informational dispersion given by Lemma~\ref{lemma:infdis} above. \end{theorem} The converse proof is similar to that in the asymptotically stationary case in~\cite[Th. 7]{dispersionJournal}. See Appendix~\ref{app:pfConverse} for the details. \begin{theorem}[Achievability] \label{thm:achievability} In the setting of Theorem~\ref{thm:converse}, the minimum achievable source coding rate $R(n, d, \epsilon)$ satisfies \begin{align} R(n, d, \epsilon) \leq \mathbb{R}_{U} (d) + \sqrt{\frac{\mathbb{V}_U(d)}{n}} Q^{-1}(\epsilon) + O\left(\frac{1}{\sqrt{n}\log n}\right). \label{abound} \end{align} \end{theorem} Theorem~\ref{thm:dispersion} follows immediately from Theorems~\ref{thm:converse} and~\ref{thm:achievability}. Central to the achievability proof of Theorem~\ref{thm:achievability} is the following random coding bound: there exists an $(n, M, d, \epsilon)$ code such that~\cite[Cor. 
11]{kostina2012fixed} \begin{align} \epsilon\leq \inf_{P_{V_1^n}}~\mathbb{E}\lbrac{\exp\lpara{-M\cdot P_{V_{1}^n}(\mathcal{B}(U_1^n, d))}}, \label{rcbound} \end{align} where the infimization is over all probability distributions $P_{V_1^n}$ on $\mathbb{R}^n$ and $\mathcal{B}(u_1^n, d)$ denotes the distortion $d$-ball around $u_1^n$: \begin{align} \mathcal{B}(u_1^n, d) \triangleq \lbpara{z_1^n\in\mathbb{R}^n\colon \mathsf{d}(u_1^n, z_1^n)\leq d}. \end{align} To obtain the achievability in~\eqref{abound} from~\eqref{rcbound}, we need to bound from below the probability $P_{V_{1}^n}(\mathcal{B}(U_1^n, d))$ that $V_1^n$ falls within the distortion $d$-ball $\mathcal{B}(U_1^n, d)$, where $V_1^n$ and $U_1^n$ are independent, in terms of the informational dispersion. This connection is made via the following second-order refinement of the ``lossy AEP'' (asymptotic equipartition property~\cite[Lem. 1]{shannon1959coding}~\cite[Th. 1]{dembo2002source}~\cite[Lem. 2]{kostina2012fixed}) that applies to the nonstationary Gauss-Markov sources. \begin{lemma}[Second-order lossy AEP for the nonstationary Gauss-Markov sources] \label{lemma:LossyAEP} For the Gauss-Markov source with $a>1$, let $P_{V_1^{\star n}}$ be the distribution that attains the minimum in~\eqref{eqn:nRDF} with $X_1^n$ there replaced by $U_1^n$. It holds that \begin{align} &\mathbb{P}\left[\log\frac{1}{P_{V_1^{\star n}}\left(\mathcal{B}\left(U_1^n, d\right)\right)} \geq \jmath_{U_1^n}(U_1^n, d) + p(n)\right] \leq \frac{1}{q(n)}, \label{eqn:lossyAEP} \end{align} where \begin{align} p(n) &\triangleq c_1 (\log n)^{c_2} + c_3 \log n + c_4,\label{eqn:pn}\\ q(n) &\triangleq \Theta ( \log n ), \label{eqn:qn} \end{align} and $c_i$'s, $i = 1,\ldots,4$, are positive constants depending only on $a$ and $d$. \end{lemma} The proof of Lemma~\ref{lemma:LossyAEP} is presented in Appendix~\ref{app:LossyAEP} below. The proof of Theorem~\ref{thm:achievability}, which uses the random coding bound~\eqref{rcbound} and Lemma~\ref{lemma:LossyAEP}, is presented in Appendix~\ref{app:pfAchievability} below. \subsection{The Connection between Lossy AEP and Parameter Estimation} \label{subsec:LossyAEPandPE} The proof of lossy AEP in the form of Lemma~\ref{lemma:LossyAEP} is technical even for stationary memoryless sources~\cite[Lem. 2]{kostina2012fixed}. A lossy AEP for stationary $\alpha$-mixing processes was derived in~\cite[Cor. 17]{dembo2002source}. For stationary memoryless sources with single-letter distribution $P_{X}$, the idea in~\cite[Lem. 2]{kostina2012fixed} is to form a typical set $\mathcal{F}_n$ of source outcomes~\cite[Lem. 4]{kostina2012fixed} using the product of the empirical distributions~\cite[Eq. (270)]{kostina2012fixed}: $P_{\hat{X}} \times\ldots\times P_{\hat{X}}$, where $P_{\hat{X}}(x)\triangleq \frac{1}{n}\sum_{i=1}^n\mathbbm{1}\{x_i = x\}$ is the empirical distribution of a given source sequence $x_1^n$, and then to show that the inequality inside the bracket in~\eqref{eqn:lossyAEP} can hold only for $x_1^n\in \mathcal{F}_n^c$ and that the probability of the complement set $\mathcal{F}_n^c$ is at most $1/q(n)$, where $p(n) = C\log n + c$ and $q(n) = \sqrt{n}/K$~\cite[Lem. 2]{kostina2012fixed}. The Gauss-Markov source is not memoryless, and it is nonstationary for $a>1$. To form a typical set of source outcomes, we define the following proxy random variables using the estimator $\hat{a}_{\text{ML}}(\bfu)$ in~\eqref{eqn:MLEintro}.
\begin{definition}[Proxy random variables] \label{def:proxy} For each sequence $u_1^n$ of length $n$ generated by the Gauss-Markov source, define the proxy random variable $\hat{X}_1^n$ as an $n$-dimensional Gaussian random vector with independent coordinates, each of which follows the distribution $\mathcal{N}(0, \hat{\sigma}_{n,i}^2)$ with \begin{align} \hat{\sigma}_{n,1}^2 &\triangleq \sigma^2\hat{a}_{\text{ML}}(\bfu)^{2n}, \label{eqn:sigma1hat}\\ \hat{\sigma}_{n,i}^2 &\triangleq \frac{\sigma^2}{1 +\hat{a}_{\text{ML}}(\bfu)^2 - 2\hat{a}_{\text{ML}}(\bfu)\cos\frac{i\pi}{n+1} }, \quad 2\leq i\leq n,\label{eqn:sigmaihat} \end{align} where $\hat{a}_{\text{ML}}(\bfu)$ is in~\eqref{eqn:MLEintro} above. \end{definition} \begin{remark} The proxy random variable in Definition~\ref{def:proxy} differs from that in~\cite[Eq. (119)]{dispersionJournal} for the stationary case in the behavior of the largest variance $\hat{\sigma}_{n,1}^2$. For each realization $u_1^n$, we construct the Gaussian random vector $\hat{X}_1^n$ according to~\eqref{eqn:sigma1hat}-\eqref{eqn:sigmaihat}, which is a proxy to the decorrelation $X_1^n$ in~\eqref{decor} above. The variances of $\hat{X}_i$ and $X_i$ are very close due to the closeness of $\hat{a}_{\text{ML}}(\bfu)$ to $a$ (Corollary~\ref{cor:disp}). \end{remark} \begin{remark} Since the proxy random variable $\hat{X}_1^n$ depends on the realization of $U_1^n$, Definition~\ref{def:proxy} defines the joint distribution of $(X_1^n, \hat{X}_1^n)$, where $X_1^n$ is the decorrelation of $U_1^n$ in~\eqref{decor} above. \end{remark} The following convex optimization problem will be instrumental: for two generic random vectors $A_1^n$ and $B_1^n$ with distributions $P_{A_1^n}$ and $P_{B_1^n}$, respectively, define \begin{align} \mathbb{R}(A_1^n, B_1^n, d) \triangleq \inf_{\substack{P_{F_1^n | A_1^n}:\\\EX{\dis{A_1^n}{F_1^n}}\leq d}} ~\frac{1}{n}D(P_{F_1^n |A_1^n} || P_{B_1^n}|P_{A_1^n}), \label{eqn:crem} \end{align} where $D(P_{F_1^n |A_1^n} || P_{B_1^n}|P_{A_1^n})$ is the conditional relative entropy. See~Appendix~\ref{app:paraCREM} for detailed discussions on this optimization problem. For each realization $u_1^n$ (equivalently, each $x_1^n = \mtx{S}' u_1^n$ with the $n\times n$ matrix $\mtx{S}$ defined in the text above~\eqref{eqn:sigmai}), we define $n$ random variables $m_i(u_1^n)~, i = 1,\ldots,n$ as follows. \begin{itemize} \item Let $X_1^n$ be the decorrelation of $U_1^n$ in~\eqref{decor} above. Let $Y_1^{\star n}$ be the random variable that attains the infimum in $\mathbb{R}_{X_1^n}(d)$. \item For each $u_1^n$, choose $A_1^n$ in~\eqref{eqn:crem} to be the proxy random variable $\hat{X}_1^n$, and choose $B_1^n$ to be $Y_1^{\star n}$. Let $\hat{F}_1^{\star n}$ be the random variable that attains the infimum in $\mathbb{R}(\hat{X}_1^n, Y_1^{\star n}, d)$. \end{itemize} Then, for each $i = 1, \ldots, n$, define \begin{align} m_i(u_1^n)\triangleq \EX{(\hat{F}^\star_i- x_i)^2 | \hat{X}_i = x_i}. \label{mi} \end{align} Denote \begin{align} \eta_n\triangleq \sqrt{\frac{\log\log n}{n}}. \label{eqn:etan} \end{align} The typical set for the Gauss-Markov source is then defined as follows. 
\begin{definition}[Typical set] \label{def:TS} For any $d\in (0,d_{\mathrm{max}})$, $n\geq 2$ and a constant $p>0$, define $\mathcal{T}(n,p)$ to be the set of vectors $u_1^n \in \mathbb{R}^n$ that satisfy the following conditions: \begin{align} \lrabs{\hat{a}_{\text{ML}}(\bfu) - a} &\leq \eta_n, \label{eqn:cond1} \\ \lrabs{\frac{1}{n}\sum_{i=1}^n \LRB{\frac{x_i^2}{\sigma_{n,i}^2}}^k - (2k-1)!!} &\leq 2,\quad k=1,2,3, \label{eqn:cond2}\\ \lrabs{\frac{1}{n}\sum_{i=1}^n m_i(u_1^n) - d} &\leq p\eta_n,\label{eqn:cond3} \end{align} where $x_1^n = \mtx{S}'u_1^n$ is the decorrelation~\eqref{decor} and $\sigma_{n,i}^2$'s are defined in~\eqref{eqn:sigmai} above. \end{definition} The typical set in Definition~\ref{def:TS} is in the same form as that in the stationary case~\cite[Def. 2]{dispersionJournal}, but the definitions of proxy random variables and the analyses are different. \begin{theorem} \label{thm:typical} For any $d\in (0,d_{\mathrm{max}})$, there exists a constant $p>0$ such that the probability that the Gauss-Markov source produces a typical sequence satisfies \begin{align} \prob{U_1^n \in \mathcal{T}(n,p) } \geq 1 - \Theta\LRB{\frac{1}{\log n}}. \end{align} \end{theorem} Corollary~\ref{cor:disp} is essential to the proof of Theorem~\ref{thm:typical}. See the details in Appendix~\ref{app:PfThTS}. Let $\mathcal{E}$ denote the event inside the square bracket in~\eqref{eqn:lossyAEP}. To prove Lemma~\ref{lemma:LossyAEP}, we intersect $\mathcal{E}$ with the typical set $\mathcal{T}(n,p)$ and the complement $\mathcal{T}(n,p)^{c}$, respectively, and then we bound the probability of the two intersections separately. See Appendix~\ref{app:LossyAEP} for the details. \section{Discussion} \label{sec:discussions} \subsection{Stationary and Nonstationary Gauss-Markov Processes} \label{subsubsec:diff} It took several decades~\cite{kolmogorov1956shannon, berger1970information, gray1970information,hashimoto1980rate,gray2008note} to completely understand the difference in rate-distortion functions between stationary and nonstationary Gaussian autoregressive sources. We briefly summarize this subtle difference here to make the point that generalizing results from the stationary case to the nonstationary one is natural but nontrivial. Since $\det(\mtx{F}) = 1$, the eigenvalues $\mu_{n,i}$'s of $\mtx{F}' \mtx{F}$ satisfy \begin{align} \prod_{i = 1}^n \mu_{n,i} = 1. \label{eqn:prodMu} \end{align} Using~\eqref{eqn:prodMu}, we can equivalently rewrite~\eqref{eqn:NRDF1} as \begin{align} \mathbb{R}_{U_1^n}(d) = \frac{1}{n}\sum_{i = 1}^n \max\LRB{0,~\frac{1}{2}\log\frac{\sigma_{n,i}^2}{\theta_n}}, \label{KNRDF} \end{align} where $\theta_n>0$ is in~\eqref{eqn:NRDF2} and $\sigma_{n,i}^2$'s are in~\eqref{eqn:sigmai}. Both~\eqref{eqn:NRDF1} and~\eqref{KNRDF} are valid expressions for the $n$-th order rate-distortion function $\mathbb{R}_{U_1^n}(d)$, regardless of whether the source is stationary or nonstationary. The classical Kolmogorov reverse waterfilling result~\cite[Eq. (18)]{kolmogorov1956shannon}, obtained by taking the limit in~\eqref{KNRDF}, implies that the rate-distortion function of the \emph{stationary} Gauss-Markov source ($0<a<1$) is given by (the subscript K stands for Kolmogorov) \begin{align} \mathbb{R}_{\text{K}}(d) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\max\LRB{0,~\frac{1}{2}\log\frac{\sigma^2}{\theta g(w)}}~dw, \label{Kol} \end{align} where $\theta>0$ is given in~\eqref{eqn:RWD} and $g(w)$ is given in~\eqref{eqn:g}. 
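For concreteness, these reverse-waterfilling quantities can be evaluated numerically. The following sketch is ours, not part of the original analysis: it assumes the relation $d = \frac{1}{2\pi}\int_{-\pi}^{\pi}\min\big(\theta, \sigma^2/g(w)\big)\,dw$ for the water level in~\eqref{eqn:RWD} and the parametrization $g(w) = 1 + a^2 - 2a\cos w$, both of which are consistent with the closed forms for $\theta_{\min}$, $\theta_{\max}$, $d_c$ and $d_{\max}$ in Section~\ref{subsubsec:CMD}; all function and variable names are our own.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a, sigma2 = 1.2, 1.0                          # source parameters, a > 1

def g(w):                                     # assumed form of g(w) in (eqn:g)
    return 1.0 + a**2 - 2.0*a*np.cos(w)

def distortion(theta):                        # reverse waterfilling, cf. (eqn:RWD)
    return quad(lambda w: min(theta, sigma2/g(w)), -np.pi, np.pi)[0]/(2*np.pi)

def rate_K(theta):                            # Kolmogorov's formula (Kol)
    return quad(lambda w: max(0.0, 0.5*np.log(sigma2/(theta*g(w)))),
                -np.pi, np.pi)[0]/(2*np.pi)

def dispersion(theta):                        # informational dispersion (eqn:dispersion)
    return quad(lambda w: min(1.0, (sigma2/(theta*g(w)))**2),
                -np.pi, np.pi)[0]/(4*np.pi)

d_c, d_max = sigma2/(a+1)**2, sigma2/(a**2-1)  # critical and maximum distortion
d = 0.5*(d_c + d_max)                          # any target distortion in (0, d_max)
theta = brentq(lambda t: distortion(t) - d, 1e-9, sigma2/(a-1)**2)
print(rate_K(theta), dispersion(theta))        # R_K(d) and V_U(d), in nats
\end{verbatim}

Sweeping $d$ over $(0, d_{\max})$ in this sketch should reproduce the dispersion curve plotted in Fig.~\ref{fig:DD} for $a = 1.2$.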
While~\eqref{eqn:RWR} and~\eqref{eqn:RWD} are valid for both stationary and nonstationary cases, Hashimoto and Arimoto~\cite{hashimoto1980rate} noticed in 1980 that~\eqref{Kol} is incorrect for the nonstationary Gaussian autoregressive source. The reason is the different asymptotic behaviors of the eigenvalues $\mu_{n,i}$'s of $\mtx{F}' \mtx{F}$~\eqref{def:A} in the stationary and nonstationary cases: while in the stationary case, the spectrum is bounded away from zero, in the nonstationary case, the smallest eigenvalue $\mu_{n,1}$ approaches 0, causing a discontinuity. By treating that smallest eigenvalue in a special way, Hashimoto and Arimoto~\cite[Th. 2]{hashimoto1980rate} showed that \begin{align} \mathbb{R}_{\text{HA}}(d) = \mathbb{R}_{\text{K}}(d) + \log(\max(a,1))\label{rdfHA} \end{align} is the correct rate-distortion function for both stationary and nonstationary Gauss-Markov sources, where the subscript HA stands for the authors of~\cite{{hashimoto1980rate}}. For the general higher-order Gaussian autoregressive source, the correction term needed in~\eqref{rdfHA} depends on the unstable roots of the characteristic polynomial of the source, see~\cite[Th. 2]{hashimoto1980rate} for the details. In 2008, Gray and Hashimoto~\cite{gray2008note} showed the equivalence between $\mathbb{R}_{\text{HA}}(d)$ in~\eqref{rdfHA}, obtained by taking a limit in~\eqref{KNRDF}, and Gray's result~$\mathbb{R}_{U}(d)$ in~\eqref{eqn:RWR}, obtained by taking a limit in~\eqref{eqn:NRDF1}. The tool that allows one to take limits in~\eqref{KNRDF} and~\eqref{eqn:NRDF1} is the following theorem on the asymptotic eigenvalue distribution of the almost Toeplitz matrix $\mtx{F}' \mtx{F}$, which is the (rescaled) inverse of the covariance matrix of $U_1^n$. Denote \begin{align} \alpha\triangleq \min_{w\in [-\pi, \pi]}~g(w) = (a-1)^2, \label{ma} \end{align} and \begin{align} \beta\triangleq \max_{w\in [-\pi, \pi]}~g(w) = (a+1)^2. \label{Mb} \end{align} Gray~\cite[Th. 2.4]{gray2006toeplitz} generalized the result of Grenander and Szeg{\"o}~\cite[Th. in Sec. 5.2]{grenander1984toeplitz} on the asymptotic eigenvalue distribution of Toeplitz forms to that of matrices that are asymptotically equivalent to Toeplitz forms, see~\cite[Chap. 2.3]{gray2006toeplitz} for the details. Define \begin{align} \alpha'\triangleq \inf_{n\in\mathbb{N},~i\in[n]}~\mu_{n,i}. \label{mprime} \end{align} \begin{theorem}[Gray~{\cite[Eq. (19)]{gray1970information}}, Hashimoto and Arimoto~{\cite[Th. 1]{hashimoto1980rate}}] For any continuous function $F(t)$ over the interval \begin{align} t\in\left[\alpha',~\beta\right], \label{interval_t} \end{align} the eigenvalues $\mu_{n,i}$'s of $\mtx{F}' \mtx{F}$ with $\mtx{F}$ in~\eqref{def:A} satisfy \begin{align} \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i = 1}^n F(\mu_{n,i}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} F\left(g(w)\right)~dw, \label{eqn:limiting_equality} \end{align} where $g(w)$ is defined in~\eqref{eqn:g}. \label{thm:LimitingThm} \end{theorem} The eigenvalues $\mu_{n,i}$'s behave quite differently in the following three cases, leading to the subtle difference in the corresponding rate-distortion functions. \begin{enumerate} \item For the stationary case $a\in (0,1)$, it can be easily shown~\cite[Eq. (71)]{dispersionJournal} that $\alpha' = \alpha > 0$ and all eigenvalues $\mu_{n,i}$'s lie in between $\alpha$ and $\beta$. 
Kolmogorov's formula~\eqref{Kol} is obtained by applying Theorem~\ref{thm:LimitingThm} to~\eqref{KNRDF} using the function \begin{align} F_{\text{K}}(t) \triangleq \max\left(0,~\frac{1}{2}\log\frac{\sigma^2}{\theta t}\right), \label{FK} \end{align} where $\theta>0$ is given by~\eqref{eqn:RWD}. \item For the Wiener process ($a = 1$), closed-form expressions of $\mu_{n,i}$'s are given by Berger~\cite[Eq. (2)]{berger1970information}. Those results imply that the smallest eigenvalue $\mu_{n,1}$ is of order $\Theta\LRB{\frac{1}{n^2}}$, and thus $\alpha' = \alpha = 0$. Using the same function as in~\eqref{FK}, Berger obtained the rate-distortion functions for the Wiener process~\cite[Eq. 4]{berger1970information}~\footnote{To be precise, although the rate-distortion function for the Wiener process is correct in~\cite[Eq. 4]{berger1970information}, the proof there is not rigorous since in this case $\alpha' = \alpha = 0$ but $F_{\text{K}}(t)$ is not continuous at $t = 0$ as pointed out in~\cite[Eq. (23)]{gray2008note}. Therefore, the limit leading to~\cite[Eq. 4]{berger1970information} needs extra justifications.}. \item For the nonstationary case $a >1$, we have $\alpha' = 0 < \alpha$, the smallest eigenvalue $\mu_{n,1}$ is of order $\Theta(a^{-2n})$ and the other $n-1$ eigenvalues lie in between $\alpha$ and $\beta$. This behavior of eigenvalues was shown by Hashimoto and Arimoto~\cite[Lemma]{hashimoto1980rate} for higher-order Gaussian autoregressive sources, and we will show a refined version for the Gauss-Markov source in Lemma~\ref{lemma:eigScaling} below. As pointed out in~\cite[Th. 1]{hashimoto1980rate}, an application of Theorem~\ref{thm:LimitingThm} using the function~\eqref{FK} fails to yield the correct rate-distortion function for nonstationary sources due to the discontinuity of $F_{\text{K}}(t)$ at 0. Gray~\cite[Eq. (22)]{gray1970information} and Hashimoto and Arimoto~\cite{hashimoto1980rate} circumvent this difficulty in two different ways, which lead to~\eqref{eqn:RWR} and~\eqref{rdfHA}, respectively. Gray~\cite[]{gray1970information} applied Theorem~\ref{thm:LimitingThm} on~\eqref{eqn:NRDF1} using the function \begin{align} F_{\text{G}}(t) = \frac{1}{2}\log\max\left(t,~\frac{\sigma^2}{\theta}\right), \label{func:gray} \end{align} which is indeed continuous at $0$, while Hashimoto and Arimoto~\cite[Th. 2]{hashimoto1980rate} still use the function $F_{\text{K}}(t)$ but consider $\mu_{n, 1}$ and $\mu_{n,i},~i\geq 2$ separately: \begin{align} \frac{1}{n}\sum_{i = 2}^n F_{\text{K}}(\mu_{n,i}) + \frac{1}{n} F_{\text{K}}(\mu_{n,1}), \end{align} which in the limit yields~\eqref{rdfHA} by plugging $\mu_{n,1} = \Theta(a^{-2n})$ into~\eqref{FK}. \end{enumerate} \subsection{New Results on the Spectrum of the Covariance Matrix} \label{subsec:spectrum} The following result on the scaling of the eigenvalues $\mu_{n,i}$'s refines~\cite[Lemma]{hashimoto1980rate}. Its proof is presented in Appendix~\ref{app:proof_eigScaling}. \begin{lemma} \label{lemma:eigScaling} Fix $a>1$. For any $i = 2, \ldots, n$, the eigenvalues of $\mtx{F}' \mtx{F}$~\eqref{def:A} are bounded as \begin{align} \xi_{n-1, i - 1} \leq \mu_{n,i} \leq \xi_{n, i}, \label{eqn:eig2N} \end{align} where \begin{align} \xi_{n, i} \triangleq 1 + a^2 - 2a \cos\left(\frac{i\pi}{n +1}\right). 
\label{eqn:xini} \end{align} The smallest eigenvalue is bounded as \begin{align} 2\log a +\frac{c_2}{n} \geq -\frac{1}{n}\log \mu_{n,1} \geq 2\log a-\frac{c_1}{n}, \label{eqn:eig1} \end{align} where $c_1>0$ and $c_2$ are constants given by \begin{align} c_1 &= 2\log (a+1) + \frac{a\pi}{a^2 -1}, \label{int:c1} \\ c_2 &= 2\log\frac{a}{a^2 -1} + \frac{2a\pi}{a^2 - 1}. \label{int:c2} \end{align} \end{lemma} \begin{remark} The constant $c_1$ in~\eqref{int:c1} is positive, while $c_2$ in~\eqref{int:c2} can be positive, zero or negative, depending on the value of $a>1$. Lemma~\ref{lemma:eigScaling} indicates that $a^{-2n}$ is a good approximation to $\mu_{n,1}$. Using~\eqref{eqn:eig2N}--\eqref{eqn:xini}, we deduce that for $i= 2, \ldots, n$, \begin{align} \mu_{n,i} \in [\alpha, \beta]. \end{align} \end{remark} Based on Lemma~\ref{lemma:eigScaling}, we obtain a nonasymptotic version of Theorem~\ref{thm:LimitingThm}, which is useful in the analysis of the dispersion, in particular, in deriving Proposition~\ref{prop:approx} in Appendix~\ref{app:EV} below. \begin{theorem} \label{thm:nonasymEig} Fix any $a>1$. For any bounded, $L$-Lipschitz and nondecreasing function (or nonincreasing function) $F(t)$ over the interval~\eqref{interval_t} and any $n\geq 1$, the eigenvalues $\mu_{n,i}$'s of $\mtx{F}' \mtx{F}$~\eqref{def:A} satisfy \begin{align} \lrabs{ \frac{1}{n}\sum_{i = 1}^n F(\mu_{n,i}) - \frac{1}{2\pi}\int_{-\pi}^{\pi} F\left(g(w)\right)~dw} \leq \frac{C_L}{n}, \label{eqn:nonasymEig} \end{align} where $g(w)$ is defined in~\eqref{eqn:g} and $C_L > 0$ is a constant that depends on $L$ and the maximum absolute value of $F$. \end{theorem} The proof of Theorem~\ref{thm:nonasymEig} is in Appendix~\ref{app:nonasymEig}. \section{Previous Works} \label{sec:PF} \subsection{Parameter Estimation} The maximum likelihood (ML) estimate $\hat{a}_{\text{ML}}(\bfu)$ of the parameter $a$ given samples $u_1^n= (u_1, \ldots, u_n)'$ drawn from the Gauss-Markov source is given by \begin{align} \hat{a}_{\text{ML}}(u_1^n)= \frac{\sum_{i = 1}^{n-1} u_{i}u_{i+1}}{\sum_{i = 1}^{n-1} u_{i}^2}. \label{eqn:MLEintro} \end{align} The derivation of~\eqref{eqn:MLEintro} is straightforward, e.g.~\cite[App. F-A]{dispersionJournal}. The problem is to provide performance guarantees of $\hat{a}_{\text{ML}}(\bfu)$. This simply formulated problem has been widely studied in the literature. Our main contribution in this paper is a nonasymptotic fine-grained large deviations analysis of the estimation error. The estimate $\hat{a}_{\text{ML}}(\bfu)$ in~\eqref{eqn:MLEintro} has been extensively studied in the statistics~\cite{white1958limiting, rissanen1979strong} and economics~\cite{mann1943statistical, rubin1950consistency} communities. Mann and Wald~\cite{mann1943statistical} and Rubin~\cite{rubin1950consistency} showed that the estimation error $\hat{a}_{\text{ML}}(\bfU) - a$ converges to 0 in probability for any $a\in \mathbb{R}$. Rissanen and Caines~\cite{rissanen1979strong} later proved that $\hat{a}_{\text{ML}}(\bfU) - a$ converges to 0 almost surely for $0< a<1$. To better understand the finer scaling of the error $\hat{a}_{\text{ML}}(\bfU)-a$, researchers turned to study the limiting distribution of the normalized estimation error $h(n)(\hat{a}_{\text{ML}}(\bfU) - a)$ for a careful choice of the standardizing function $h(n)$: \begin{align} h(n) \triangleq \begin{cases} \sqrt{\frac{n}{1 - a^2}}, & |a| < 1,\\ \frac{n}{\sqrt{2}}, & |a| = 1, \\ \frac{|a|^n}{a^2 - 1}, & |a| > 1. 
\end{cases} \end{align} With the above choices of $h(n)$, Mann and Wald~\cite{mann1943statistical} and White~\cite{white1958limiting} showed that the distribution of the normalized estimation error $h(n)(\hat{a}_{\text{ML}}(\bfU)- a)$ converges to $\mathcal{N}(0, 1)$ for $|a|<1$; to the standard Cauchy distribution for $|a|>1$; and for $|a|=1$, to the distribution of \begin{align} \frac{B^2(1) - 1}{2\int_{0}^1 B^2(t)~dt}, \end{align} where $\{B(t):t\in [0,1]\}$ is a Brownian motion. Generalizations of the above results in several directions have also been investigated. In~\cite[Sec. 4]{mann1943statistical}, the maximum likelihood estimator for $p$-th order stationary autoregressive processes with the $Z_i$'s being i.i.d. zero-mean random variables with bounded moments (not necessarily Gaussian) was shown to be weakly consistent, and the scaled estimation errors $\sqrt{n}(\hat{a}_j - a_j)$ for $j = 1, \ldots, p$ were shown to converge in distribution to Gaussian random variables as $n$ tends to infinity. Anderson~\cite[Sec. 3]{anderson1959asymptotic} studied the limiting distribution of the maximum likelihood estimator for a nonstationary vector version of the process~\eqref{eqn:GMmodel}. Chan and Wei~\cite{chan1987asymptotic} studied the behavior of the estimation error when $a$ is not a constant but approaches 1 from below at a rate of order $1/n$. Estimating $a$ from a block of outcomes of the Gauss-Markov source~\eqref{eqn:GMmodel} is one of the simplest versions of the problem of system identification, where the goal is to learn system parameters of a dynamical system from the observations~\cite{simchowitz2018learning, oymak2018non, sarkar2018fast, faradonbeh2018finite, rantzer2018concentration}. One objective of those studies is to obtain tight performance bounds on the least-squares estimates of the system parameters $\mtx{A}, \mtx{B}, \mtx{C}, \mtx{D}$ from a single input/output trajectory $\{W_i, Y_i\}_{i=1}^{n}$ in the following state-space model, e.g.~\cite[Eq. (1)--(2)]{oymak2018non}: \begin{align} X_{i+1} &= \mtx{A}X_i + \mtx{B}W_{i} + Z_i, \\ Y_i &= \mtx{C}X_i + \mtx{D}W_i +V_i, \end{align} where $X_i, W_i, Z_i,V_i$'s are random vectors of certain dimensions and the system parameters $\mtx{A}, \mtx{B}, \mtx{C}, \mtx{D}$ are matrices of appropriate dimensions. The Gauss-Markov process in~\eqref{eqn:GMmodel} can be written as the state-space model by choosing $\mtx{A} = a$ to be a scalar, $\mtx{B} = \mtx{D} = 0$, $\mtx{C} = 1$ and $V_i = 0$. For stable vector systems, that is, $\|\mtx{A}\| < 1$, Oymak and Ozay~\cite[Thm. 3.1]{oymak2018non} showed that the estimation error in spectral norm is $O(1/\sqrt{n})$ with high probability, where $n$ is the number of samples. For the subclass of regular unstable systems~\cite[Def. 3]{faradonbeh2018finite}, Faradonbeh et al.~\cite[Thm. 1]{faradonbeh2018finite} proved that the probability of the estimation error exceeding a positive threshold in spectral norm decays exponentially in $n$. For the Gauss-Markov processes considered in the present paper, Simchowitz et al.~\cite[Thm. B.1]{simchowitz2018learning} and Sarkar and Rakhlin~\cite[Prop. 4.1]{sarkar2018fast} presented tail bounds on the estimation error of the ML estimate. Another line of work closely related to this paper is the large deviation principle (LDP)~\cite[Ch. 1.2]{dembo1994zeitouni} on $\hat{a}_{\text{ML}}(\bfU) - a$.
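Before turning to the large-deviation quantities defined next, the estimator~\eqref{eqn:MLEintro} and the error scalings reviewed above can be illustrated with a minimal simulation sketch. The sketch below is our own (it assumes the source is initialized at $U_0 = 0$, and all names are ours): it generates trajectories of the Gauss-Markov source with $a>1$, computes $\hat{a}_{\text{ML}}$, and normalizes the error by $h(n)$; it can also be used to empirically estimate the deviation probabilities introduced below.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a, sigma, n, trials = 1.2, 1.0, 50, 2000

def a_ml(u):                                   # ML estimate, cf. (eqn:MLEintro)
    return np.dot(u[:-1], u[1:])/np.dot(u[:-1], u[:-1])

errors = np.empty(trials)
for t in range(trials):
    u = np.zeros(n)
    for i in range(1, n):                      # U_i = a U_{i-1} + Z_i, U_0 = 0 (assumed)
        u[i] = a*u[i-1] + sigma*rng.standard_normal()
    errors[t] = a_ml(u) - a

h = a**n/(a**2 - 1)                            # standardizing function h(n) for |a| > 1
print(np.median(np.abs(errors)))               # raw error: exponentially small in n
print(np.median(np.abs(h*errors)))             # normalized error: order one (Cauchy-like)
print(np.mean(np.abs(errors) > 0.1))           # empirical excess probability at eta = 0.1
\end{verbatim}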
Given an error threshold $\eta > 0$, define $P^+(n, a, \eta)$ and $P^-(n, a, \eta)$ as follows: \begin{align} P^+(n, a, \eta) &\triangleq -\frac{1}{n}\log\prob{\hat{a}_{\text{ML}}(\bfU) - a > \eta},\label{def:pplus}\\ P^-(n, a, \eta) &\triangleq -\frac{1}{n}\log\prob{\hat{a}_{\text{ML}}(\bfU)- a < -\eta}.\label{def:pminus} \end{align} We also define $P(n, a, \eta)$ as \begin{align} P(n, a, \eta) \triangleq -\frac{1}{n}\log \prob{ |\hat{a}_{\text{ML}}(\bfU) - a | > \eta}.\label{def:p} \end{align} The large deviation theory studies the rate functions, defined as the limits of $P^+(n, a, \eta)$, $P^-(n, a, \eta)$ and $P(n, a, \eta)$, as $n$ goes to infinity. Bercu et al.~\cite[Prop. 8]{bercu1997large} found the rate function for the case of $0<a<1$. For $a\geq 1$, Worms~\cite[Thm. 1]{worms2001large} proved that the rate functions can be bounded from below implicitly by the optimal value of an optimization problem. These studies of the limiting distribution and the LDP of the estimation error are asymptotic. In this paper, we develop a nonasymptotic analysis of the estimation error. Two nonasymptotic lower bounds on $P^+(n, a, \eta)$ and $P^-(n, a, \eta)$ are available in the literature. For any $a\in\mathbb{R}$, Rantzer~\cite[Th. 4]{rantzer2018concentration} showed that \begin{align} P^+(n, a, \eta)~~\left (\text{and }P^-(n, a, \eta)\right )~\geq \frac{1}{2}\log (1 + \eta^2).\label{eqn:rantzer} \end{align} Bercu and Touati~\cite[Cor. 5.2]{bercu2008exponential} proved that \begin{align} P^+(n, a, \eta)~~\left (\text{and }P^-(n, a, \eta)\right )~\geq \frac{\eta^2}{2(1 + y_\eta)},\label{eqn:bercu} \end{align} where $y_\eta$ is the unique positive solution to $(1+x)\log (1 + x)-x -\eta^2 = 0$ in $x$. Both bounds~\eqref{eqn:rantzer} and~\eqref{eqn:bercu} do not capture the dependence on $a$ and $n$, and are the same for $P^+(n, a, \eta)$ and $P^-(n, a, \eta)$. The bounds in~\cite{simchowitz2018learning, oymak2018non, sarkar2018fast, faradonbeh2018finite, rantzer2018concentration} either are optimal only order-wise or involve implicit constants. Our main result on parameter estimation is a tight nonasymptotic lower bound on $P^+(n, a, \eta)$ and $P^-(n, a, \eta)$. For larger $a$, the lower bound becomes larger, which suggests that unstable systems are easier to estimate than stable ones, an observation consistent with~\cite{simchowitz2018learning}. The proof is inspired by Rantzer~\cite[Lem. 5]{rantzer2018concentration}, but our result improves Rantzer's result~\eqref{eqn:rantzer} and Bercu and Touati's result~\eqref{eqn:bercu}, see Fig.~\ref{fig:compare} for a comparison. Most of our results generalize to the case where $Z_i$'s are i.i.d. sub-Gaussian random variables, see Theorem~\ref{thm:subgaussian} in Section~\ref{subsec:gen2sub} below. \subsection{Nonasymptotic Rate-distortion Theory} \label{subsec:nrdt} The rate-distortion theory studies the problem of compressing a generic random process $\{X_i\}_{i=1}^{\infty}$ with minimum distortion. Given a distortion threshold $d>0$, an excess-distortion probability $\epsilon \in (0,1)$ and the number of codewords $M\in \mathbb{N}$, an $(n, M, d, \epsilon)$ lossy compression code for a random vector $X_1^n$ consists of an encoder $\mathsf{f}_n \colon \mathbb{R}^n \rightarrow [M]$, and a decoder $\mathsf{g}_n\colon [M] \rightarrow \mathbb{R}^n$, such that $\mathbb{P}\left[\mathsf{d}\left(X_1^n, \mathsf{g}_n\left(\mathsf{f}_n(X_1^n)\right)\right) > d\right] \leq \epsilon$, where $\mathsf{d}(\cdot, \cdot)$ is the distortion measure. 
This paper considers the mean squared error (MSE) distortion: $\forall~ x_1^n,~y_1^n\in \mathbb{R}^n$, \begin{align} \mathsf{d}(x_1^n, y_1^n) \triangleq \frac{1}{n}\sum_{i = 1}^n (x_i - y_i)^2. \label{eqn:disdef} \end{align} The minimum achievable code size and source coding rate are defined respectively by \begin{align} M^\star (n, d, \epsilon) &\triangleq \min\left\{M\in \mathbb{N} \colon \exists~ (n, M, d, \epsilon) \text{ code}\right\}, \\ R(n, d, \epsilon) &\triangleq \frac{1}{n} \log M^\star (n, d, \epsilon). \label{def:Rnde} \end{align} In this paper, we approximate the nonasymptotic coding rate $R(n, d, \epsilon)$ for the nonstationary Gauss-Markov source. Another related and widely studied setting is compression under the average distortion criterion. Given a distortion threshold $d>0$ and the number of codewords $M\in \mathbb{N}$, an $(n, M, d)$ lossy compression code for a random vector $X_1^n$ consists of an encoder $\mathsf{f}_n \colon \mathbb{R}^n \rightarrow [M]$, and a decoder $\mathsf{g}_n\colon [M] \rightarrow \mathbb{R}^n$, such that $\mathbb{E}\left[\mathsf{d}\left(X_1^n, \mathsf{g}_n\left(\mathsf{f}_n(X_1^n)\right)\right) \right] \leq d$. Similarly, one can define $M^\star(n, d)$ and $R(n, d)$ as the minimum achievable code size and source coding rate, respectively, under the average distortion criterion. The traditional rate-distortion theory~\cite{shannon1959coding, goblick1969coding,gray1970information,gray1971markov,berger1970information,berger1971rate} showed that the limit of the operational source coding rate $R(n, d)$ as $n$ tends to infinity equals the informational rate-distortion function for a wide class of sources. For discrete memoryless sources, Zhang, Yang and Wei in~\cite{zhang1997redundancy} showed that $R(n, d)$ approaches the rate-distortion function as $\log n / 2n + o(\log n / n )$. For abstract alphabet memoryless sources, Yang and Zhang in~\cite[Th. 2]{yang1999redundancy} showed a similar convergence rate. Under the excess-distortion probability criterion, one can also study the nonasymptotic behavior of the minimum achievable excess-distortion probability $\epsilon^\star (n, d, M)$: \begin{align} \epsilon^\star (n, d, M) &\triangleq \inf\left\{\epsilon > 0\colon \exists~ (n, M, d, \epsilon) \text{ code}\right\}. \label{def:Pndm} \end{align} Marton's excess distortion exponent~\cite[Th. 1, Eq. (2)-(3), (20)]{marton1974error} showed that for discrete memoryless sources $P_{X}$, it holds that \begin{align} -\frac{1}{n}\log \epsilon^\star (n, d, M) = \min_{P_{\hat{X}}}~D( P_{\hat{X}} || P_{X}) + O\left(\frac{\log n}{n}\right), \label{eqn:marton} \end{align} where the minimization is over all probability distributions $P_{\hat{X}}$ such that $\mathbb{R}_{\hat{X}}(d)\geq \frac{\log M}{n}$, where $M$ is such that $\frac{\log M}{n}$ is a constant, $\mathbb{R}_{\hat{X}}(d)$ denotes the rate-distortion function of a discrete memoryless source with single-letter distribution $P_{\hat{X}}$, and $D(\cdot||\cdot)$ denotes the Kullback-Leibler divergence. As pointed out by~\cite[p.~2]{ingber2011dispersion}, for fixed $d>0$ and $\epsilon\in (0,1)$, even the limit of $R(n,d, \epsilon)$ as $n$ goes to infinity is unanswered by Marton's bound in~\eqref{eqn:marton}. 
Ingber and Kochman~\cite{ingber2011dispersion} (for finite-alphabet and Gaussian sources) and Kostina and Verd{\'u}~\cite{kostina2012fixed} (for abstract sources) showed that the minimum achievable source coding rate $R(n,d,\epsilon)$ admits the following expansion, known as Gaussian approximation~\cite{polyanskiy2010channel}. \begin{align} R(n, d, \epsilon) = \mathbb{R}_{X}(d) + Q^{-1}(\epsilon)\sqrt{\frac{\mathbb{V}(d)}{n}} + O\left(\frac{\log n}{n}\right), \label{eq:gauapp} \end{align} where $\mathbb{V}(d)$ is the dispersion of the source (defined as the variance of the tilted information random variable, details later) and $Q^{-1}$ denotes the inverse Q-function. In this paper, by extending our previous analysis~\cite[Th. 1]{dispersionJournal} of the stationary Gauss-Markov source to the nonstationary one, we establish the Gaussian approximation in the form of~\eqref{eq:gauapp} for the nonstationary Gauss-Markov sources. One of the key ideas behind this extension is to construct a typical set using the ML estimate of $a$, and to use our estimation error bound to probabilistically characterize that set.
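To give a sense of how the right-hand side of~\eqref{eq:gauapp} behaves, the short sketch below (ours, not from the cited works) evaluates the two leading terms for the Gaussian memoryless source, for which the standard closed forms $\mathbb{R}_Z(d) = \frac{1}{2}\log(\sigma^2/d)$ under the MSE distortion and $\mathbb{V}_Z(d) = 1/2$ from~\eqref{disZ} are available; the inverse Q-function is evaluated as \texttt{norm.ppf(1-eps)}.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

sigma2, d, eps = 1.0, 0.25, 0.1
R_Z, V_Z = 0.5*np.log(sigma2/d), 0.5           # rate-distortion function and dispersion (nats)
for n in (100, 1000, 10000):
    approx = R_Z + np.sqrt(V_Z/n)*norm.ppf(1.0 - eps)   # two-term expansion in (eq:gauapp)
    print(n, round(approx, 4))
\end{verbatim}

The second-order term decays like $1/\sqrt{n}$, which is precisely the gap that the dispersion quantifies.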
\section{Introduction} At its core, general relativity is a theory of gravity phrased operationally in terms of measurements of distances and time using classical rulers and clocks. Quantizing these notions has been a major problem of theoretical physics for the past century and, as of today, there is still no complete theory of quantum gravity. Nevertheless, there are multiple effective tools that can be used in order to better understand the relationship between gravity and quantum physics in low energy regimes. In particular, the behaviour of quantum fields in curved spacetimes can be well described using a semiclassical theory, where the background is classical and the matter fields are quantum. Although this approach does not provide a full theory of quantum gravity, it gives important results, such as the Unruh and Hawking effects~\cite{fullingUnruhEffect,HawkingRadiation,Unruh1976,Unruh-Wald} and the model of inflation~\cite{inflation}, which describes the universe a fraction of a second after its creation. A more recent application of this semiclassical theory is to rephrase classical notions of space and time intervals in terms of properties of quantum fields~\cite{achim,achimQGInfoCMB,achim2,mygeometry}. As argued in~\cite{achim2,mygeometry}, this rephrasing might lead to a quantum theory of spacetime, which could redefine the notions of distance and time close to the Planck scale. In order to relate the spacetime geometry with properties of a quantum field theory (QFT), it is necessary to study the specific way that the background spacetime affects a quantum field. The effect of curvature on the correlation function of a QFT has been thoroughly studied in the literature~\cite{Wald2,kayWald,achim}. In fact, it is possible to show that, at short scales, the behaviour of the correlations of a quantum field can be written as the correlations in flat spacetime plus correction terms due to curvature~\cite{birrell_davies,DeWittExpansion}. This suggests that if one finds a mechanism to locally probe these correlations, one would then be able to recover the geometry of spacetime. One way of probing quantum fields locally, and recovering their correlation functions, is through the use of particle detector models~\cite{pipo,mygeometry}. Broadly speaking, particle detector models are localized non-relativistic quantum systems that couple to a quantum field. Examples of physical realizations of these range from atoms probing the electromagnetic field~\cite{Pozas2016,Nicho1,richard} to nucleons interacting with the neutrino fields~\cite{neutrinos,antiparticles,fermionicharvesting}. After their first introduction by Unruh and DeWitt in~\cite{Unruh1976,DeWitt}, these models found many different uses for studying a wide range of phenomena of quantum field theories in both flat and curved spacetimes. There are several applications of these models, such as the study of entanglement in quantum fields~\cite{Valentini1991,Reznik1,reznik2,Retzker2005,Pozas-Kerstjens:2015,Pozas2016}, the Unruh effect~\cite{Unruh-Wald,HawkingRadiation,fullingHadamard,matsasUnruh,mine} and Hawking radiation~\cite{HawkingRadiation,HawkingMain,WaldRadiation,detRadiation,detectorsCavitiesFalling}. Moreover, particle detectors can be used to provide a measurement framework for quantum field theory~\cite{FVdebunked,chicken}, to probe the topology~\cite{topology} and geometry of spacetime~\cite{mygeometry}, among other applications~\cite{pipo,teleportation,adam}.
In this manuscript we show how it is in principle possible to recover the curvature of spacetime using smeared particle detectors ultra rapidly coupled to a quantum field~\cite{deltaCoupled,nogo}. Smeared particle detectors have a finite spatial extension, which can be controlled to probe the quantum field in different directions. The effect of the spacetime curvature in the correlation function of the quantum field then affects the transition probabilities of the detector. We precisely quantify how curvature affects the response of particle detectors, so that by comparing the response of a detector in curved spacetimes with what would be seen in Minkowski, one can infer the spacetime curvature. Using particle detectors with different shapes then gives access to the spacetime curvature in different directions, so that it is possible to reconstruct the full Riemann tensor at the center of the detector's trajectory, and all geometrical quantities derived from it. Our results are another instance of rephrasing geometrical properties of spacetime in terms of measurements of observable quantities of quantum fields~\cite{achim,achimQGInfoCMB,achim2,mygeometry}. We argue that such rephrasing is an important step towards understanding the relationship between quantum theory and gravity. This sets the grounds for future works which might provide a detailed answer to how to define the notions of space and time in scales where the classical notions provided by general relativity fail to work. This paper is organized as follows. In Section \ref{sec:detectors} we describe the coupling and the dynamics of a particle detector ultra rapidly coupled to a massless scalar field. In Section \ref{sec:curvature} we write the excitation probability of an ultra rapidly coupled particle detector as an expansion in the detector size, with coefficients related to the spacetime curvature. In Section \ref{sec:recovery} we provide a protocol such that one can recover the spacetime curvature from the excitation probability of particle detectors. The conclusions of our work can be found in Section \ref{sec:conclusion}. \section{Ultra rapid sampling of quantum fields}\label{sec:detectors} In this section we describe the particle detector model that will be used in this manuscript. We consider a two-level Unruh-DeWitt (UDW) detector model coupled to a free massless scalar quantum field $\hat{\phi}(\mf x)$ in a \mbox{$D = n+1$} dimensional spacetime $\mathcal{M}$ with metric $g$. The Lagrangian associated with the field can be written as \begin{equation} \mathcal{L} = - \frac{1}{2} \nabla_\mu \phi \nabla^\mu \phi \end{equation} where $\nabla$ is the Levi-Civita connection. We will not be concerned with the details of the quantization of the field here. However, we will assume that the state of the field is a Hadamard state, for reasons that will become clear in Section \ref{sec:curvature}. Moreover, Hadamard states are those for which it is possible to associate a finite value to the stress-energy tensor of the quantum field~\cite{birrell_davies,fewsterNecessityHadamard}, which makes them appealing from a physical perspective. The detector is modelled by a two-level system undergoing a timelike trajectory $\mf z(\tau)$ in $\mathcal{M}$ with four-velocity $u^\mu(\tau)$ and proper time parameter $\tau$. We pick Fermi normal coordinates $(\tau,\bm{x})$ around $\mf z(\tau)$ (for more details we refer the reader to~\cite{poisson,us,us2}). 
We assume the proper energy gap of the two-level system to be $\Omega$, such that its free Hamiltonian in its proper frame is given by \mbox{$\hat{H}_D = \Omega \hat{\sigma}^+\hat{\sigma}^-$}, where $\hat{\sigma}^\pm$ are the standard raising/lowering ladder operators. The interaction with the scalar field is prescribed by the scalar interaction Hamiltonian density \begin{equation} \hat{h}_I(\mf x) = \tilde{\lambda} \Lambda(\mf x) \hat{\mu}(\tau)\hat{\phi}(\mf x), \end{equation} where \mbox{$\hat{\mu}(\tau) = e^{- i \Omega \tau} \hat{\sigma}^- +e^{i \Omega \tau} \hat{\sigma}^+ $} is the detector's monopole moment, $\tilde{\lambda}$ is the coupling constant, and $\Lambda(\mf x)$ is a scalar function that defines the spacetime profile of the interaction. This setup defines the interaction of a UDW detector with a real scalar quantum field, and has been thoroughly studied in the literature~\cite{birrell_davies,pipo,mygeometry,antiparticles,fermionicharvesting,Unruh1976,DeWitt,Pozas-Kerstjens:2015,us,us2}. This model also has a physical appeal, as it has been shown to reproduce realistic models, such as atoms interacting with the electromagnetic field~\cite{Pozas2016,Nicho1,richard} and nucleons with the neutrino fields~\cite{neutrinos,antiparticles,fermionicharvesting}. Under the assumption that the shape of the interaction between the detector and the field is constant in the detector's frame, i.e., a rigid detector, we can write the spacetime smearing function as $\Lambda(\mf x) = \chi(\tau)f(\bm x)$, where now $f(\bm x)$ (the smearing function) defines the shape of the interaction and $\chi(\tau)$ (the switching function) controls the strength and the duration of the coupling. This decomposition also allows one to control the proper time duration of the interaction by considering $\chi(\tau) = \eta\, \varphi(\tau/T)/T$ for a positive compactly supported function $\varphi$ that is $L^1(\mathbb{R})$ normalized and symmetric with respect to the origin. Here $\eta$ and $T$ are parameters with units of time, which ensure that $\chi(\tau)$ is dimensionless. In this manuscript we will be particularly interested in an ultra rapid coupling~\cite{deltaCoupled,nogo}, which is obtained when $T\longrightarrow 0$ and $\chi(\tau) \longrightarrow \eta\, \delta(\tau)$. The evolution of the system after an interaction is implemented by the time evolution operator \begin{equation}\label{U1} \hat{U} = \mathcal{T}_\tau \exp\left(-i \int_\mathcal{M} \dd V \hat{h}_I(\mf x)\right), \end{equation} where $\mathcal{T}_\tau$ denotes time ordering with respect to $\tau$ and $\dd V = \sqrt{-g} \,\dd^{D}x$ is the invariant spacetime volume element. In the case of ultra rapid coupling, where \mbox{$\chi(\tau) = \eta \, \delta(\tau)$}, Eq. \eqref{U1} simplifies to \begin{align} \hat{U} &= e^{-i \hat{\mu}\, \hat{Y}(f)} = \text{cos}(\hat{Y}) -i \hat{\mu}\, \text{sin}(\hat{Y}), \end{align} where \begin{align} \hat{Y}(f)&= \lambda \int \dd^n\bm x \sqrt{-g} f(\bm x) \hat{\phi}(\bm x), \end{align} with $\hat{\mu} = \hat{\mu}(0)= \hat{\sigma}^+ + \hat{\sigma}^- $, $\lambda =\tilde{\lambda}\eta$ and \mbox{$\hat{\phi}(\bm x) = \hat{\phi}(\tau = 0,\bm x)$} the field evaluated on the rest space associated with the interaction time, $\tau = 0$. In the equation above the integral is performed over the rest space of the system at $\tau = 0$. We consider a setup where the detector starts in the ground state $\hat{\rho}_{\textrm{d},0}= \hat{\sigma}^-\hat{\sigma}^{+}$ and the field starts in a given Hadamard state $\omega$.
The final state of the detector after the interaction, $\hat{\rho}_{\textrm{d}}$, will then be given by the partial trace over the field degrees of freedom. It can be written as \begin{align} \hat{\rho}_{\textrm{d}} =& \omega(\hat{U}\hat{\rho}_{\textrm{d},0} \hat{U}^\dagger) \nonumber\\ =& \omega(\cos^2(\hat{Y})) \hat{\sigma}^-\hat{\sigma}^+ + \omega(\sin^2(\hat{Y})) \hat{\sigma}^+ \hat{\sigma}^-\nonumber\\ =& \frac{1}{2}\left(\openone + \omega(\cos{}(2\hat{Y}))\hat{\sigma}_z\right), \end{align} where we used $\hat{\mu} \hat{\rho}_{\text{d},0} \hat{\mu} = \hat{\sigma}^+ \hat{\sigma}^-$ and $\omega(\text{sin}(\hat{Y})\text{cos}(\hat{Y})) = 0$ due to the fact that this operator is odd in the field $\hat{\phi}$, and $\omega$ is a Hadamard state, so that it is quasifree~\cite{kayWald} and all of its odd point-functions vanish. In particular, notice that $\tr(\hat{\rho}_{\textrm{d}}) = \omega(\cos^2(\hat{Y})+\sin^2(\hat{Y})) = 1$, as expected. The excitation probability of the detector is then given by \begin{equation} P = \tr(\hat{\rho}_{\text{d}}\hat{\sigma}^+ \hat{\sigma}^-) = \omega(\sin^2(\hat{Y})) = \frac{1}{2}\left(1 - \omega(e^{2 i \hat{Y}})\right), \end{equation} where we used $\sin^2\theta = \frac{1}{2}\left(1 - \cos(2\theta)\right)$ and the fact that $\omega(\cos{}(2\hat{Y})) = \omega(\text{exp}({2 i \hat{Y}}))$, because only the even part of the exponential contributes. Moreover, there is a simple expression for the excitation probability in terms of a smeared integral of the field's Wightman function $W(\mf x,\mf x') = \omega(\hat{\phi}(\mf x)\hat{\phi}(\mf x'))$. Let \begin{equation}\label{eq:L} \mathcal{L} = \lambda^2 \int \dd^n\bm x \dd^n \bm x'\,\sqrt{-g}\sqrt{-g'}\, f(\bm x) f(\bm x') W(\bm x,\bm x'), \end{equation} where $W(\bm x, \bm x') = W(\tau\! =\! 0, \bm x,\tau'\!=\! 0, \bm x')$. Then, we show in Appendix \ref{app:L} that if $\omega$ is a quasifree state, \mbox{$\omega(e^{2 i \hat{Y}}) = e^{-2 \mathcal{L}}$}, so that the excitation probability of the delta-coupled detector can be written as \begin{equation}\label{eq:P} P = \frac{1}{2}\left(1 - e^{-2 \mathcal{L}}\right). \end{equation} Notice that in the pointlike limit the detector is essentially sampling the field correlator at a single point. In this case, $\hat{\rho}_{\text{d}}\rightarrow \frac{1}{2}\openone$ and no information about the quantum field can be obtained. By considering finite-sized detectors, it is then possible to sample the field in local regions, allowing one to recover information about both the field and its background spacetime. \section{The effect of curvature on the excitation probability}\label{sec:curvature} In this section we will derive an expansion for the excitation probability of a particle detector rapidly interacting with a quantum field in curved spacetimes.
From now on, we will focus on the case of (3+1) dimensions. Our expansion will relate $P$ in Eq. \eqref{eq:P} to the excitation probability of a delta-coupled particle detector in Minkowski spacetime. By comparing these results, we will later be able to rewrite the components of the Riemann curvature tensor as a function of the excitation probability of the detector. Notice that the detector's excitation probability in Eq. \eqref{eq:P} is entirely determined by $\mathcal{L}$ in \mbox{Eq. \eqref{eq:L}}, so that in order to obtain an expansion for the excitation probability, it is enough to expand $\mathcal{L}$. The first step of our expansion is to write the Wightman function in curved spacetimes as its flat-spacetime analog plus an expansion in terms of curvature. Assuming the field state $\omega$ to be a Hadamard state, it can be shown that the correlation function of a quantum field can be written \mbox{as~\cite{fullingHadamard,fullingHadamard2,kayWald,equivalenceHadamard,fewsterNecessityHadamard}} \begin{align} W(\mf x,\mf x') \!=\! \frac{1}{8\pi^2}\! \frac{\Delta^{1/2}( \mf x,\mf x')}{\sigma( \mf x, \mf x')} \!+\! v( \mf x,\mf x') \text{ln}|\sigma(\mf x, \mf x')|\!+\!w(\mf x,\mf x'),\label{Whada} \end{align} where $v(\mf x,\mf x')$ and $w(\mf x,\mf x')$ are regular functions in the limit $\mf x'\rightarrow \mf x$, $\Delta(\mf x,\mf x')$ is the Van-Vleck determinant (see~\cite{poisson}) and $\sigma(\mf x,\mf x')$ is Synge's world function, corresponding to one half of the squared geodesic distance between the events $\mf x$ and $\mf x'$. In Eq. \eqref{Whada}, the function $w(\mf x,\mf x')$ contains the state dependence, while $v(\mf x,\mf x')$ is fully determined by the properties of both the field and the spacetime. We can then write \begin{align} W(\mf x,\mf x') = \frac{1}{8\pi^2\sigma} \bigg[\Delta^{1/2} \!+ \! 8 \pi^2v_0\,\sigma\,&\text{ln}|\sigma|+8\pi^2 w_0\,\sigma \label{Wexpsigma}\\ &\:\:\:\:\:\:\:\:\:\:+ \mathcal{O}(\sigma^2\,\text{ln}|\sigma|)\bigg],\nonumber \end{align} where $v_0 = v_0(\mf x,\mf x')$ and $w_0 = w_0(\mf x,\mf x')$ are the leading-order terms of an expansion of $v$ and $w$ in powers of $\sigma$~\cite{DeWittExpansion}. Notice that we have factored out the Minkowski spacetime Wightman function for a massless field, $W_0(\mf x,\mf x') = \frac{1}{8\pi^2\sigma}$, in Eq. \eqref{Wexpsigma}. In~\cite{DeWittExpansion,poisson} it was shown that for a massless field, $v_0(\mf x,\mf x') = R(\mf x)/6 + \mathcal{O}(\sqrt{\sigma})$, so that the leading order contribution for the expansion is given by the Ricci scalar.
The same is true for the state dependent part of the Wightman function, $w(\mf x,\mf x') = \omega_0(\mf x) + \mathcal{O}(\sqrt{\sigma})$ for a given function $\omega_0(\mf x)$ which determines the state contribution to $W(\mf x,\mf x')$ to leading order in $\sigma$. Moreover, the Van-Vleck determinant admits the following expansion \begin{equation} \Delta^{\frac{1}{2}}(\mf x,\mf x') = 1 + \frac{1}{12}R_{\alpha \beta}(\mf x) \sigma^\alpha(\mf x,\mf x') \sigma^\beta(\mf x,\mf x'), \end{equation} where $\sigma^{\alpha}(\mf x,\mf x')$ denotes the tangent vector to the geodesic that connects $\mf x$ and $\mf x'$ such that its length corresponds to the spacetime separation between $\mf x$ and $\mf x'$. $\sigma_\alpha$ also corresponds to $\partial_\alpha \sigma$~\cite{poisson}. Combining the results above, we find that the Wightman function of a quantum field in a Hadamard state can be approximated as \begin{align} W(\mf x,\mf x') \approx W_0(\mf x,\mf x')\Big(1 +& \frac{1}{12}R_{\alpha\beta}(\mf x)\sigma^\alpha(\mf x,\mf x') \sigma^\beta(\mf x,\mf x') \nonumber\\ &\!\!\!+\frac{4\pi^2}{3}R(\mf x)\,\sigma(\mf x,\mf x')\,\text{ln}|\sigma(\mf x,\mf x')|\nonumber\\ &\:\:\:\:\:\:+8 \pi^2\omega_0(\mf x) \sigma(\mf x,\mf x')\Big),\label{WW0partial} \end{align} where $W_0(\mf x,\mf x')$ is the Wightman function in Minkowski spacetime. Equation \eqref{WW0partial} allows one to locally relate the Wightman function in curved spacetimes with its Minkowski counterpart. However, we wish to have an expansion in terms of the proper distance from the center of the interaction, $\mf z$. This proper distance can be expressed naturally in terms of Synge's world function in Fermi normal coordinates due to the fact that $x^i = \sigma^i(\mf z,\mf x)$, so $\sigma(\mf x,\mf x') = \frac{1}{2}\sigma^i(\mf z,\mf x)\sigma_i(\mf z,\mf x) = \frac{1}{2}x^ix_i$. Thus, considering $\mf x$ and $\mf x'$ sufficiently close to the point $\mf z$, we can use the following approximations \begin{align} \sigma(\mf x,\mf x') &\approx \sigma(\mf z,\mf x) - \sigma_\alpha(\mf z,\mf x) \sigma^\alpha(\mf z,\mf x') + \sigma(\mf z,\mf x')\nonumber,\\ \sigma^\alpha(\mf x',\mf x) &\approx \sigma^\alpha(\mf z,\mf x) - \sigma^\alpha(\mf z,\mf x') = (\mf x-\mf x')^\alpha. \end{align} It is also possible to expand the Ricci scalar and the Ricci tensor according to $R(\mf x) = R(\mf z) + \mathcal{O}(r)$ and \mbox{$R_{\alpha\beta}(\mf x) = R_{\alpha\beta}(\mf z) + \mathcal{O}(r)$}, where $\mathcal{O}(r)$ denotes terms of order $r = \sqrt{x^i x_i}$. Analogously, we can expand the state dependent term as $\omega_0(\mf x) = \omega_0(\mf z)+\mathcal{O}(r)$. In the end we obtain an expression that relates $W(\mf x,\mf x')$ with $W_0(\mf x,\mf x')$, tensors evaluated at $\mf z$, and the effective separation vector between $\mf z$ and $\mf x$/$\mf x'$: \begin{align} W(\mf x,\mf x') \approx W_0(\mf x,\mf x')\!\bigg[1&\! + \!\frac{1}{12}R_{ij}(x-x')^i(x-x')^j \label{WW0} \\ &\!\!\!+\frac{2\pi^2}{3}R\,(\mf x-\mf x')^2\,\text{ln}\left|\tfrac{1}{2}(\mf x-\mf x')^2\right|\nonumber\\ &\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:+4\pi^2\omega_0(\mf z)(\mf x-\mf x')^2 \bigg],\nonumber \end{align} where the curvature tensors are all evaluated at the center point of the interaction, $\mf z$. In Eq. \eqref{WW0}, \mbox{$(x-x')^i = x^i - x'{}^i$} denotes the difference in Fermi normal coordinates of the points $\mf x$ and $\mf x'$ and \mbox{$(\mf x-\mf x')^2 = (x-x')^i(x-x')_i = r^2$}. 
The last ingredient not yet accounted for in our expansion is the factor $\sqrt{-g}\sqrt{-g'}$ that shows up in the definition of $\mathcal{L}$ in Eq. \eqref{eq:L}. If the detector size is small enough compared to the radius of curvature of spacetime, we can employ the expansion of the metric determinant detailed in Appendix \ref{ap:fermi} around the center of the interaction $\mf z$. We then have \begin{equation}\label{sqrtg} \sqrt{-g} = 1 + a_i x^i +\tfrac{1}{2}M_{ij} x^i x^j + \mathcal{O}(r^3), \end{equation} where $r = \sqrt{\delta_{ij}x^i x^j}$ corresponds to the proper distance from a point to $\mf z(0)$, $a_i$ is the four-acceleration of the trajectory at $\tau = 0$ and the tensor $M_{ij}$ is evaluated at $\mf z$. This tensor is explicitly given by \begin{equation} M_{ij} = \tfrac{2}{3}R_{\tau i \tau j} - \tfrac{1}{3} R_{ij}. \end{equation} At this stage, we have all the tools required to expand the excitation probability. Combining the results of Eqs. \eqref{WW0} and \eqref{sqrtg}, we can write the excitation probability of a smeared delta-coupled particle detector in a curved spacetime as the following short-scale expansion \begin{widetext} \begin{align}\label{PcsPflat} P \approx P_0 + e^{-2 \mathcal{L}_0}\left( M_{ij}\mathcal{Q}^{ij}+2a_i\mathcal{D}^i+\frac{1}{12}R_{ij}\mathcal{L}^{ij} +\frac{2\pi^2}{3} R\, \mathcal{L}_R +4 \pi^2 \omega_0 \mathcal{L}_{\omega}\right), \end{align} \end{widetext} where $P_0 = \frac{1}{2}\left(1 -e^{-2 \mathcal{L}_0}\right)$ and we have defined: \begin{align} \mathcal{L}_0 \!&=\! \lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x'),\nonumber\\ \mathcal{Q}^{ij} \!&= \!\lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')x^ix^j,\nonumber\\ \mathcal{D}^{i} \!&= \!\lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')x^i,\nonumber\\ \mathcal{L}^{ij} \!&= \!\lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')(x\!-\!x')^i(x\!-\!x')^j,\nonumber\\ \mathcal{L}_R \!&= \!\lambda^2 \!\!\! \int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')\nonumber\\ &\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\times (\bm x-\bm x')^2\:\text{ln}\left|\tfrac{1}{2}(\bm x-\bm x')^2\right|,\nonumber\\ \mathcal{L}_{\omega}\!&= \!\lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')(\bm x - \bm x')^2.\label{Ls} \end{align} Notice that $P_0$ corresponds to the excitation probability of the detector if it were interacting with the vacuum of Minkowski spacetime. Eq. \eqref{PcsPflat} contains all corrections to the excitation probability of the detector up to second order in the detector size, as we have considered all terms of this order or lower in our computations. In Eq. \eqref{PcsPflat} we see corrections arising from five different fronts. The $M_{ij}\mathcal{Q}^{ij}$ term is associated with the spacetime volume element in the rest frame of the trajectory where the detector interacts with the quantum field. The $a_i\mathcal{D}^i$ term is the effect that the instantaneous acceleration of the detector has on the shape of its rest surface. The $R_{ij}\mathcal{L}^{ij}$ term is related to the corrections to the correlation function due to the Van-Vleck determinant, associated with the determinant of the parallel propagator. The $R \,\mathcal{L}_R$ term is associated with the corrections to the correlation function due to spacetime curvature.
Finally, the $\omega_0 \,\mathcal{L}_\omega$ term is associated with the state of the quantum field. We highlight that this is the only term in Eq. \eqref{PcsPflat} whose coefficient is not independent of the other ones, given that we can write $\mathcal{L}_\omega = \delta_{ij}\mathcal{L}^{ij}$. The expansion of Eq. \eqref{PcsPflat} contains the effect of the curvature of spacetime on the excitation probability of a smeared delta-coupled UDW detector. Moreover, this expansion works for a large class of spacetimes under weak assumptions on the quantum field, provided that the detector size is small compared to the curvature radius of spacetime. It is also important to mention that the integral for $\mathcal{L}$ in Eq. \eqref{eq:L} is not solvable analytically in most spacetimes, and can demand great computational power to be performed numerically. However, the expressions for $\mathcal{L}_0$, $\mathcal{Q}^{ij}$, $\mathcal{D}^i$, $\mathcal{L}^{ij}$, $\mathcal{L}_R$ and $\mathcal{L}_\omega$ can be computed analytically for a large class of smearing functions (see, for instance, Appendix \ref{app:L-terms}). In this sense, the expansion presented in this section can be used to simplify the study of sufficiently small particle detectors in curved spacetimes. Overall, the expansion in Eq. \eqref{PcsPflat} shows the different ways that the background geometry manifests itself in ultra rapid localized measurements of a quantum field. \section{The curvature of spacetime in terms of the excitation probability}\label{sec:recovery} In this section we will use the results of Section \ref{sec:curvature} in order to build a protocol by which one can obtain the curvature of spacetime from the excitation probability of delta-coupled particle detectors. In order to do so, we will consider explicit shapes for the detectors, so that we can explicitly compute the $\mathcal{L}_0$, $\mathcal{Q}^{ij}$, $\mathcal{D}^{i}$, $\mathcal{L}^{ij}$, $\mathcal{L}_R$ and $\mathcal{L}_{\omega}$ terms of Eq. \eqref{PcsPflat}, and obtain the curvature-dependent terms $M_{ij}$, $R_{ij}$ and $R$ from the excitation probabilities. Before outlining the operational protocol which will allow us to recover the spacetime curvature, it is important to discuss the effect that the detector size has on the excitation probability in flat spacetimes. Consider a pointlike detector in Minkowski spacetime. After the ultra rapid coupling with the quantum field, this detector will be in a maximally mixed state, with excitation probability equal to $1/2$. The physical reason behind the detector ending up in a maximally mixed state is that it instantaneously probes all of the field modes. This generates a great amount of noise, which results in the detector state containing no information about the field. Overall, the size of the detector determines the smallest wavelength (largest energy modes) that it is sensitive to. Thus, increasing the size of the detector makes it sensitive to less energetic modes, which then decreases the excitation probability, according to Eq. \eqref{eq:P}. This allows one to obtain information about the field modes up to a cutoff determined by the inverse of the detector's size. The discussion of the last paragraph can also be extended to curved spacetimes. In particular, the fact that a point-like detector delta-coupled to a quantum field ends up in a maximally mixed state also holds in general spacetimes.
In fact, in the pointlike limit one ends up sampling smaller and smaller regions that are locally flat, and too small to be affected by curvature. This can be explicitly seen from Eq. \eqref{PcsPflat}, where all the correction terms are proportional to some power of the detector size (Eq. \eqref{Ls}). Similarly, as discussed in the case of flat spacetimes, a finite-sized particle detector will then couple to field modes of finite wavelengths, and the effect of these modes on the particle detector will change its excitation probability. Moreover, the curvature in different directions will affect the modes that propagate in these directions differently. This implies that probing the quantum field with smeared delta-coupled particle detectors with different shapes should allow one to recover the spacetime curvature in different directions. We are now at a stage where we can explicitly formulate a protocol by which spacetime curvature can be recovered from ultra rapid local measurements of a quantum field. In order to do this, we will first have to make assumptions about the spacetime $\mathcal{M}$ and the events where we sample the field. As one would expect, in order to recover the classical curvature of spacetime in terms of expectation values of quantum systems, one would require many samplings of the quantum field in similar conditions. Thus, we will require our spacetime to be locally stationary for the duration of the experiment\footnote{This is a strong condition that could be relaxed, as we only need spacetime not to vary too much in the frame of one timelike curve during the experiments, but we will assume this stronger version in order to build an explicit protocol.}, so that it contains a local timelike Killing field $\xi$ localized in the region where the experiments take place. Moreover, we will assume that the center points of the interactions of the particle detectors with the quantum field can all be connected by the flow of $\xi$. This will ensure that the curvature tensor $R_{\mu\nu\alpha\beta}$ and all other tensors derived from it are the same for all interactions considered, so that the expansion of Eq. \eqref{PcsPflat} has constant coefficients $M_{ij}$, $a_i$, $R_{ij}$ and $R$. The final assumption for our setup is that the different centers of the interactions are sufficiently separated in time so that the backreaction that each coupling of the detectors has on the field can dissipate away. This is a key assumption, which implies that the field state being probed remains approximately the same throughout the interaction. Equivalently, this implies that the state dependent part of the Wightman function expansion in Eq. \eqref{PcsPflat}, $\omega_0$, will remain approximately constant within the detectors' smearings. We note that we are considering a massless field, so that field excitations propagate at light-speed. Thus, the assumption that $\omega_0$ is approximately constant translates into the different interactions being separated in time by more than the detectors' light-crossing time. Overall, this is a reasonable assumption for any experimental setup. In order to build an explicit protocol, we will consider the detectors' smearing functions to be given by ellipsoidal Gaussians in their respective Fermi normal frames. By considering ellipsoidal Gaussians as the shape of the detectors, we will then be able to select the modes that they are sensitive to in each spatial direction.
Explicitly, we consider smearing functions of the form \begin{equation}\label{eq:f} f(\bm x) = \frac{\sqrt{\det(a_{ij})}}{(2\pi)^\frac{3}{2}}e^{- \frac{1}{2}{a_{ij} x^i x^j}}, \end{equation} where $a_{ij}$ is a positive symmetric bilinear map. We assume $\sqrt{\det(a_{ij})} = \mathcal{O}(L^{-3})$, where $L$ is a constant with units of length that determines the approximate size of the detectors and dictates the smallest wavelengths that they are sensitive to. The smearing function is prescribed in the detector's rest space in terms of the Fermi normal coordinates $\bm x = (x^1,x^2,x^3)$. With the explicit choice of detectors shapes in Eq. \eqref{eq:f}, it is possible to compute most coefficients from the expansion of Eq. \eqref{PcsPflat} analytically. In fact, in Appendix \ref{app:L-terms}, we show that with the choice of elliptic Gaussian for $f(\bm x)$, $\mathcal{L}_0$, $\mathcal{Q}^{ij}$, $\mathcal{D}^i$, $\mathcal{L}^{ij}$ and $\mathcal{L}_\omega$ can be computed analytically. Moreover, $\mathcal{D}^i = 0$ in this case, so that the expansion in Eq. \eqref{PcsPflat} can be written as \begin{equation} P = P_0 + e^{- 2 \mathcal{L}_0}\left(M_{ij}\mathcal{Q}^{ij}+N_{ij}\mathcal{L}^{ij} + \frac{2\pi^2}{3} R \,\mathcal{L}_R \right),\label{PP0} \end{equation} where $M_{ij} = \frac{2}{3} R_{\tau i \tau j} - \frac{1}{3}R_{ij}$ and $N_{ij} = \frac{1}{12}R_{ij} + 4\pi^2 \omega_0 \delta_{ij}$. In Appendix \ref{app:L-terms} we also show that $\mathcal{Q}^{ij}$, $\mathcal{L}^{ij}$ and $\mathcal{L}_R$ can all be varied independently due to their different non-linear dependence on $a_{ij}$ (or equivalently, on the shape of the detector). In this sense, Eq. \eqref{PP0} particularizes the expansion in Eq. \eqref{PcsPflat} for this specific setup and explicitly shows the independent coefficients $\mathcal{Q}^{ij}$, $\mathcal{L}^{ij}$ and $\mathcal{L}_R$ determined by the detectors' shape. We are now at a stage where we can pick different detector sizes and shapes in order to recover information about the curvature of spacetime from their excitation probabilities. First, we consider the case where the detector's trajectory $\mf z(\tau)$ is the flow of the Killing vector field $\xi$. In this case, we expect to recover the tensors $M_{ij}$ and $N_{ij}$ and the scalar $R$ by sampling the probability $P$ in Eq. \eqref{PcsPflat} for different shapes of detectors (or, correspondingly, for different values of $a_{ij}$). That is, we perform measurements using different detectors with different shapes placed in different orientations, so that we ``sample the effect of curvature'' in each direction. In order to fully recover these tensors, it is necessary to sample the field using at least $13$ different values of $a_{ij}$ which give a set of $13$ \emph{linearly independent} coefficients $\mathcal{Q}^{ij}$ (with $6$ independent components), $\mathcal{L}^{ij}$ (with $6$ independent components) and $\mathcal{L}_R$. We then need a total of $13 = 6+6+1$ measurements in order to be able to write $M_{ij}$, $N_{ij}$ and $R$ in terms of the different probabilities. From the tensors $M_{ij}$, $N_{ij}$ and $R$ it is possible to recover $R_{ij}$, $R_{\tau i \tau j}$ and $\omega_0$. In fact, using \mbox{$M_{i}{}^i = \frac{2}{3} R_{\tau \tau} - \frac{1}{3}R_{i}{}^i$} and \mbox{$R = -R_{\tau\tau} + R_{i}{}^i$}, we obtain \mbox{$R_{i}{}^i = 2R + 3M_i{}^i$}. We can then obtain the state dependent term, \mbox{$\omega_0 = \frac{1}{12\pi^2}\left(N_{i}{}^i - \frac{1}{12}R_{i}{}^i\right)$}. 
Finally, the curvature tensors can be written as \mbox{$R_{ij} = 12(N_{ij} - 4\pi^2 \omega_0 \delta_{ij})$} and \mbox{$R_{\tau i \tau j} = \frac{3}{2}M_{ij} + \frac{1}{2} R_{ij}$}. This protocol then allows one to recover $13$ independent terms: we recover all the space components of the Ricci tensor $R_{ij}$, all components of the Riemann tensor of the form $R_{\tau i \tau j}$ and the state dependent term $\omega_0$. In particular, from $R_{ij}$ and $R_{\tau i \tau j}$, it is possible to obtain $R_{\tau \tau}$ and the Ricci scalar $R$. The protocol outlined above then allows one to recover information about the spacetime geometry using only $13$ different couplings of detectors with the field. Moreover, if the spacetime whose geometry we wish to recover has known symmetries, it might be possible to require even fewer than $13$ samplings by exploiting these symmetries. At this stage it should be clear that it is possible to recover some information about the spacetime geometry from the excitation probability of ultra rapid coupled particle detectors. However, it is still not possible to recover the full Ricci tensor, or the full Riemann curvature tensor from the setup described so far. In fact, it is not possible to write the components $R_{\tau i}$, $R_{\tau i j k}$ or $R_{ijkl}$ in terms of $M^{ij}$, $N^{ij}$ and $R$. However, it is possible to recover these tensors by considering detectors in different states of motion such that the center of their interactions with the quantum field is at events that still lie along the same flow of the Killing field $\xi$. For concreteness, consider a second detector which has a relative velocity $v$ in a (Fermi normal) coordinate direction $x^i$ with respect to the previous setup. In this case, it is possible to write the instantaneous four-velocity of the second detector at the point of the interaction as $\mf u' = \gamma(\mf u+v \mf e_i)$, where $\mf u$ is the four-velocity of the flow of the $\xi$ trajectory, $\mf e_i$ is the frame vector associated with the Fermi normal coordinates in the direction $i$, and $v$ is the magnitude of the instantaneous relative three-velocity between the trajectories. Then, performing the same protocol described above for detectors with relative velocity $v$ at the interaction points, we will obtain the tensors $R_{i'j'}$, $R_{\tau'i'\tau'j'}$ and the scalar $R_{\tau'\tau'}$, where the primed coordinates are associated with the components with respect to the Fermi frame of the trajectory $\mf u'$. Using the standard Lorentz coordinate transformation between these frames at the interaction points, it is possible to write \mbox{$R_{\tau\tau} = \gamma^2(R_{\tau'\tau'} - 2 v R_{\tau' i'} + v^2 R_{i'i'})$}. This expression now allows us to write the components $R_{\tau'i'}$ in terms of other previously obtained tensor components. An analogous procedure can also be carried over to the Riemann curvature tensor, allowing one to obtain $R_{\tau'i'j'k'}$ and $R_{i'j'k'l'}$ by considering frames with relative motion with respect to the flow of $\xi$. With this protocol, we are then able to recover all components of the Riemann curvature tensor. We have particularized this protocol for specific ellipsoidal Gaussian detector shapes, so that their proper acceleration did not play any role in the expansion of Eq. \eqref{PcsPflat}. That is, this choice allows one to recover the geometry of spacetime regardless of the instantaneous proper acceleration of the detectors.
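To make the linear-algebra step of this protocol concrete, the following short sketch (ours, written in Python with NumPy; all inputs are random placeholders rather than coefficients computed for a specific spacetime or family of detector shapes) stacks $13$ excitation probabilities into the linear system implied by Eq.~\eqref{PP0} and solves it for the $6+6+1$ unknowns $M_{ij}$, $N_{ij}$ and $R$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_meas = 13

# Placeholder coefficients: in practice Q^{ij}, L^{ij}, L_R and L_0
# are computed analytically for each of the 13 detector shapes a_ij;
# random stand-ins are used here only to exercise the inversion.
Q  = rng.normal(size=(n_meas, 6))   # Q^{ij}, order (11,12,13,22,23,33)
Lt = rng.normal(size=(n_meas, 6))   # L^{ij}, same ordering
LR = rng.normal(size=n_meas)        # L_R
L0 = rng.uniform(0.1, 1.0, size=n_meas)

# Off-diagonal components of symmetric tensors appear twice in the
# contractions M_ij Q^ij and N_ij L^ij.
w = np.array([1.0, 2.0, 2.0, 1.0, 2.0, 1.0])
A = np.hstack([Q * w, Lt * w, (2.0 * np.pi**2 / 3.0) * LR[:, None]])

# Synthesize "measured" probabilities from known (M_ij, N_ij, R),
# then recover them by solving the 13 x 13 linear system.
x_true = rng.normal(size=13)
P0 = 0.5 * (1.0 - np.exp(-2.0 * L0))
P_meas = P0 + np.exp(-2.0 * L0) * (A @ x_true)

x = np.linalg.solve(A, np.exp(2.0 * L0) * (P_meas - P0))
assert np.allclose(x, x_true)
M_ij, N_ij, R = x[:6], x[6:12], x[12]
print("recovered R:", R)
\end{verbatim}
With exact coefficients and noiseless probabilities the $13\times 13$ solve is exact; with more measurements or noisy data, a least-squares fit would be used instead.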
However, it is possible to generalize this procedure using general detector shapes, provided that one finds linearly independent coefficients for the terms $\mathcal{Q}^{ij}$, $\mathcal{D}^i$, $\mathcal{L}^{ij}$ and $\mathcal{L}_R$. In fact, if we had considered detectors with nontrivial $\mathcal{D}^i$ terms, the acceleration of the detector would also play a role in the expansion of Eq. \eqref{PcsPflat}. Then, with $16$ couplings it would be possible to recover $M_{ij}$, $a_i$, $N_{ij}$ and $R$. An analogous protocol could then be performed in order to recover the full Riemann curvature tensor of spacetime. Overall, we have shown that it is possible to write the components of the curvature tensors in terms of the excitation probabilities of smeared delta-coupled particle detectors of different shapes in different states of motion. In order to do so, we assume that the spacetime geometry is approximately unchanged for the duration of the experiments. Intuitively, by varying the shape of the detector in different directions, the detector will couple to different field modes, which are affected by curvature in specific ways according to Eq. \eqref{PcsPflat}. Having the specific dependence of these modes on curvature then allows one to associate the excitation probability of the particle detectors with the geometry of spacetime. \section{Conclusion}\label{sec:conclusion} We have expressed the spacetime curvature in terms of the excitation probability of smeared particle detectors delta-coupled to a quantum field. Specifically, we devised a protocol in which one considers particle detectors of specific shapes and with specific states of motion which repeatedly interact with the quantum field. Under the assumption that the background geometry is approximately unchanged during these measurements, one can then recover the components of the Riemann curvature tensor associated with the directions in which each detector is more smeared. With the protocol we have devised, it is then possible to recover all components of the Riemann curvature tensor, and thus all information about the spacetime geometry, from measurable quantities of particle detectors. Overall, we have devised a protocol by which one can write the geometry of spacetime in terms of the expectation values of quantum observables. This represents yet another step towards obtaining a theory of spacetime and gravity which is compatible with quantum theory and rephrases classical notions of spacetime and curvature entirely in terms of properties of quantum fields. \section*{Acknowledgements} The authors thank Bruno de S. L. Torres and Barbara \v{S}oda for insightful discussions and Erickson Tjoa for reviewing the manuscript. A.S. thanks Prof. Robert Mann for his supervision. T. R. P. thanks Profs. David Kubiz\v{n}\'ak and Eduardo Mart\'in-Mart\'inez for funding through their NSERC Discovery grants.
Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Industry Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
\section{\label{sec:level1}First-level heading:\protect\\ The line break was forced \lowercase{via} \textbackslash\textbackslash} This sample document demonstrates proper use of REV\TeX~4.2 (and \LaTeXe) in manuscripts prepared for submission to AIP journals. Further information can be found in the documentation included in the distribution or available at \url{http://authors.aip.org} and in the documentation for REV\TeX~4.2 itself. When commands are referred to in this example file, they are always shown with their required arguments, using normal \TeX{} format. In this format, \verb+#1+, \verb+#2+, etc. stand for required author-supplied arguments to commands. For example, in \verb+\section{#1}+ the \verb+#1+ stands for the title text of the author's section heading, and in \verb+\title{#1}+ the \verb+#1+ stands for the title text of the paper. Line breaks in section headings at all levels can be introduced using \textbackslash\textbackslash. A blank input line tells \TeX\ that the paragraph has ended. \subsection{\label{sec:level2}Second-level heading: Formatting} This file may be formatted in both the \texttt{preprint} (the default) and \texttt{reprint} styles; the latter format may be used to mimic final journal output. Either format may be used for submission purposes; however, for peer review and production, AIP will format the article using the \texttt{preprint} class option. Hence, it is essential that authors check that their manuscripts format acceptably under \texttt{preprint}. Manuscripts submitted to AIP that do not format correctly under the \texttt{preprint} option may be delayed in both the editorial and production processes. The \texttt{widetext} environment will make the text the width of the full page, as on page~\pageref{eq:wideeq}. (Note the use of \verb+\pageref{#1}+ to get the page number right automatically.) The width-changing commands only take effect in \texttt{twocolumn} formatting. They have no effect if \texttt{preprint} formatting is chosen instead. \subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes} Citations in text refer to entries in the Bibliography; they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+. Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly, its entire repertoire of commands is available in your document; see the \verb+natbib+ documentation for further details. The argument of \verb+\cite+ is a comma-separated list of \emph{keys}; a key may consist of letters and numerals. By default, citations are numerical; \cite{feyn54} author-year citations are an option. To give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}). REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate. REV\TeX\ provides the ability to properly punctuate textual citations in author-year style; this facility works correctly with numerical citations only with \texttt{natbib}'s compress option turned off. To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983}, and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}). Note that, when numerical citations are used, the references are sorted into the same order in which they appear in the bibliography. A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command, where the argument is the citation key mentioned above. \verb+\bibitem{#1}+ commands may be crafted by hand or, preferably, generated by using Bib\TeX.
The AIP styles for REV\TeX~4 include Bib\TeX\ style files \verb+aipnum.bst+ and \verb+aipauth.bst+, appropriate for numbered and author-year bibliographies, respectively. REV\TeX~4 will automatically choose the style appropriate for the document's selected class options: the default is numerical, and you obtain the author-year style by specifying a class option of \verb+author-year+. This sample file demonstrates a simple use of Bib\TeX\ via a \verb+\bibliography+ command referencing the \verb+aipsamp.bib+ file. Running Bib\TeX\ (in this case \texttt{bibtex aipsamp}) after the first pass of \LaTeX\ produces the file \verb+aipsamp.bbl+ which contains the automatically formatted \verb+\bibitem+ commands (including extra markup information via \verb+\bibinfo+ commands). If not using Bib\TeX, the \verb+thebibliography+ environment should be used instead. \paragraph{Fourth-level heading is run in.}%
Footnotes are produced using the \verb+\footnote{#1}+ command. Numerical style citations put footnotes into the bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}. Author-year and numerical author-year citation styles (each for its own reason) cannot use this method. Note: due to the method used to place footnotes in the bibliography, \emph{you must re-run BibTeX every time you change any of your document's footnotes}. \section{Math and Equations} Inline math may be typeset using the \verb+$+ delimiters. Bold math symbols may be achieved using the \verb+bm+ package and the \verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and Blackboard (or open face or double struck) characters should be typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands respectively. Both are supplied by the \texttt{amssymb} package. For example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and \verb+$\mathfrak{G}$+ gives $\mathfrak{G}$. In \LaTeX\ there are many different ways to display equations, and a few preferred ways are noted below. Displayed math will center by default. Use the class option \verb+fleqn+ to flush equations left. Below we have numbered single-line equations, the most common kind: \begin{eqnarray} \chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2} \left( \begin{array}{c} |{\bf p}|+p_z\\ p_x+ip_y \end{array}\right)\;, \\ \left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}. \end{eqnarray} Note the open one in Eq.~(\ref{eq:one}). Not all numbered equations will fit within a narrow column this way. The equation number will move down automatically if it cannot fit on the same line with a one-line equation: \begin{equation} \left\{ ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2}%
\right\}. \end{equation} When the \verb+\label{#1}+ command is used [cf. input for Eq.~(\ref{eq:one})], the equation can be referred to in text without knowing the equation number that \TeX\ will assign to it. Just use \verb+\ref{#1}+, where \verb+#1+ is the same name used in the \verb+\label{#1}+ command. Unnumbered single-line equations can be typeset using the \verb+\[+, \verb+\]+ format: \[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \] \subsection{Multiline equations} Multiline equations are obtained by using the \verb+eqnarray+ environment.
Use the \verb+\nonumber+ command at the end of each line to avoid assigning a number: \begin{eqnarray} {\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1} \delta_{\sigma_1,-\sigma_2} (g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\ &&\times [\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1), \end{eqnarray} \begin{eqnarray} \sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2} (N^2-1)\nonumber \\ & &\times \left( \sum_{i<j}\right) \sum_{\text{perm}} \frac{1}{S_{12}} \frac{1}{S_{12}} \sum_\tau c^f_\tau~. \end{eqnarray} \textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline equation if \verb+\nonumber+ is also used on that line. Incorrect cross-referencing will result. Notice the use \verb+\text{#1}+ for using a Roman font within a math environment. To set a multiline equation without \emph{any} equation numbers, use the \verb+\begin{eqnarray*}+, \verb+\end{eqnarray*}+ format: \begin{eqnarray*} \sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2} (N^2-1)\\ & &\times \left( \sum_{i<j}\right) \left( \sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}} \right) \frac{1}{S_{12}}~. \end{eqnarray*} To obtain numbers not normally produced by the automatic numbering, use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired equation number. For example, to get an equation number of (\ref{eq:mynum}), \begin{equation} g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum} \end{equation} A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires \texttt{amsmath}. The \verb+\tag{#1}+ must come before the \verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is \textit{transparent} to the automatic numbering in REV\TeX{}; therefore, the number must be known ahead of time, and it must be manually adjusted if other equations are added. \verb+\tag{#1}+ works with both single-line and multiline equations. \verb+\tag{#1}+ should only be used in exceptional case - do not use it to number all equations in a paper. Enclosing single-line and multiline equations in \verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce a set of equations that are ``numbered'' with letters, as shown in Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below: \begin{subequations} \label{eq:whole} \begin{equation} \left\{ abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2} \right\},\label{subeq:1} \end{equation} \begin{eqnarray} {\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1} (g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\ &&\times [\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2} \end{eqnarray} \end{subequations} Putting a \verb+\label{#1}+ command right after the \verb+\begin{subequations}+, allows one to reference all the equations in a subequations environment. For example, the equations in the preceding subequations environment were Eqs.~(\ref{eq:whole}). \subsubsection{Wide equations} The equation that follows is set in a wide format, i.e., it spans across the full page. The wide format is reserved for long equations that cannot be easily broken into four lines or less: \begin{widetext} \begin{equation} {\cal R}^{(\text{d})}= g_{\sigma_2}^e \left( \frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2} +\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2} \right) + x_WQ_e \left( \frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2} +\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2} \right)\;. 
\label{eq:wideeq} \end{equation} \end{widetext} This is typed to show the output is in wide format. (Since there is no input line between \verb+\equation+ and this paragraph, there is no paragraph indent for this paragraph.) \section{Cross-referencing} REV\TeX{} will automatically number sections, equations, figure captions, and tables. In order to reference them in text, use the \verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a particular page, use the \verb+\pageref{#1}+ command. The \verb+\label{#1}+ should appear in a section heading, within an equation, or in a table or figure caption. The \verb+\ref{#1}+ command is used in the text where the citation is to be displayed. Some examples: Section~\ref{sec:level1} on page~\pageref{sec:level1}, Table~\ref{tab:table1},%
\begin{table} \caption{\label{tab:table1}This is a narrow table which fits into a text column when using \texttt{twocolumn} formatting. Note that REV\TeX~4 adjusts the intercolumn spacing so that the table fills the entire width of the column. Table captions are numbered automatically. This table illustrates left-aligned, centered, and right-aligned columns. } \begin{ruledtabular} \begin{tabular}{lcr} Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\ \hline 1 & 2 & 3\\ 10 & 20 & 30\\ 100 & 200 & 300\\ \end{tabular} \end{ruledtabular} \end{table} and Fig.~\ref{fig:epsart}. \section{Figures and Tables} Figures and tables are typically ``floats''; \LaTeX\ determines their final position via placement rules. \LaTeX\ isn't always successful in automatically placing floats where you wish them. Figures are marked up with the \texttt{figure} environment, the content of which imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+). The argument of the latter command should itself contain a \verb+\label+ command if you wish to refer to your figure with \verb+\ref+. Import your image using either the \texttt{graphics} or \texttt{graphicx} packages. These packages both define the \verb+\includegraphics{#1}+ command, but they differ in the optional arguments for specifying the orientation, scaling, and translation of the figure. Fig.~\ref{fig:epsart}%
\begin{figure} \includegraphics{fig_1}
\caption{\label{fig:epsart} A figure caption. The figure captions are automatically numbered.} \end{figure} is small enough to fit in a single column, while Fig.~\ref{fig:wide}%
\begin{figure*} \includegraphics{fig_2}
\caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide figure, spanning the page in \texttt{twocolumn} formatting.} \end{figure*} is too wide for a single column, so instead the \texttt{figure*} environment has been used. The analog of the \texttt{figure} environment is \texttt{table}, which uses the same \verb+\caption+ command. However, you should type your caption command first within the \texttt{table}, instead of last as you did for \texttt{figure}. The heart of any table is the \texttt{tabular} environment, which represents the table content as a (vertical) sequence of table rows, each containing a (horizontal) sequence of table cells. Cells are separated by the \verb+&+ character; the row terminates with \verb+\\+. The required argument for the \texttt{tabular} environment specifies how data are displayed in each of the columns. For instance, a column may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+), or aligned on a decimal point (\verb+d+).
(Table~\ref{tab:table4}% \begin{table} \caption{\label{tab:table4}Numbers in columns Three--Five have been aligned by using the ``d'' column specifier (requires the \texttt{dcolumn} package). Non-numeric entries (those entries without a ``.'') in a ``d'' column are aligned on the decimal point. Use the ``D'' specifier for more complex layouts. } \begin{ruledtabular} \begin{tabular}{ccddd} One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\ \hline one&two&\mbox{three}&\mbox{four}&\mbox{five}\\ He&2& 2.77234 & 45672. & 0.69 \\ C\footnote{Some tables require footnotes.} &C\footnote{Some tables need more than one footnote.} & 12537.64 & 37.66345 & 86.37 \\ \end{tabular} \end{ruledtabular} \end{table} illustrates the use of decimal column alignment.) Extra column-spacing may be be specified as well, although REV\TeX~4 sets this spacing so that the columns fill the width of the table. Horizontal rules are typeset using the \verb+\hline+ command. The doubled (or Scotch) rules that appear at the top and bottom of a table can be achieved by enclosing the \texttt{tabular} environment within a \texttt{ruledtabular} environment. Rows whose columns span multiple columns can be typeset using \LaTeX's \verb+\multicolumn{#1}{#2}{#3}+ command (for example, see the first row of Table~\ref{tab:table3}).% \begin{table*} \caption{\label{tab:table3}This is a wide table that spans the page width in \texttt{twocolumn} mode. It is formatted using the \texttt{table*} environment. It also demonstrates the use of \textbackslash\texttt{multicolumn} in rows with entries that span more than one column.} \begin{ruledtabular} \begin{tabular}{ccccc} &\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\ Ion&1st alternative&2nd alternative&lst alternative &2nd alternative\\ \hline K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\ Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.} &$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\ Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page width in \texttt{twocolumn} mode. It is supposed to set on the full width of the page, just as the caption does. } &$(4e)^{\text{a}}$\\ He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\ Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\ \end{tabular} \end{ruledtabular} \end{table*} The tables in this document illustrate various effects. Tables that fit in a narrow column are contained in a \texttt{table} environment. Table~\ref{tab:table3} is a wide table, therefore set with the \texttt{table*} environment. Lengthy tables may need to break across pages. A simple way to allow this is to specify the \verb+[H]+ float placement on the \texttt{table} or \texttt{table*} environment. Alternatively, using the standard \LaTeXe\ package \texttt{longtable} gives more control over how tables break and allows headers and footers to be specified for each page of the table. An example of the use of \texttt{longtable} can be found in the file \texttt{summary.tex} that is included with the REV\TeX~4 distribution. There are two methods for setting footnotes within a table (these footnotes will be displayed directly below the table rather than at the bottom of the page or in the bibliography). The easiest and preferred method is just to use the \verb+\footnote{#1}+ command. This will automatically enumerate the footnotes with lowercase roman letters. However, it is sometimes necessary to have multiple entries in the table share the same footnote. 
In this case, create the footnotes using \verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+. \texttt{\#1} is a numeric value. Each time the same value for \texttt{\#1} is used, the same mark is produced in the table. The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular} environment. Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and \ref{tab:table2}% \begin{table} \caption{\label{tab:table2}A table with more columns still fits properly in a column. Note that several entries share the same footnote. Inspect the \LaTeX\ input for this table to see exactly how it is done.} \begin{ruledtabular} \begin{tabular}{cccccccc} &$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$& &$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\ \hline Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1] & 0.680 & 1.870 & 3.700 \\ Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2] & 0.450 & 1.930 & 3.760 \\ Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3] & 0.750 & 2.170 & 3.560 \\ Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4] & 0.900 & 2.370 & 3.720 \\ Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2] & 0.380 & 1.730 & 2.830 \\ Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5] & 0.760 & 2.110 & 3.120 \\ Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5] & 1.120 & 2.620 & 3.480 \\ Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3] & 1.330 & 2.800 & 3.590 \\ Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4] & 1.420 & 3.030 & 3.740 \\ In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5] & 0.960 & 2.460 & 3.780 \\ Tl& 0.480 & 18.90 & 3.550 & & & & \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.} \footnotetext[2]{Here's the second.} \footnotetext[3]{Here's the third.} \footnotetext[4]{Here's the fourth.} \footnotetext[5]{And etc.} \end{table} for an illustration. All AIP journals require that the initial citation of figures or tables be in numerical order. \LaTeX's automatic numbering of floats is your friend here: just put each \texttt{figure} environment immediately following its first reference (\verb+\ref+), as we have done in this example file. \begin{acknowledgments} We wish to acknowledge the support of the author community in using REV\TeX{}, offering suggestions and encouragement, testing new versions, \dots. \end{acknowledgments} \section{Introduction} Kohn--Sham density-functional theory (KS-DFT)~\cite{KStheory_1965} has become over the last two decades the method of choice for computational chemistry and physics studies, essentially because it often provides a relatively accurate description of the electronic structure of large molecular or extended systems at a low computational cost. The major simplification of the electronic structure problem in KS-DFT lies in the fact that the ground-state energy is evaluated, in principle exactly, from a non-interacting single-configuration wave function, which is simply referred to as the KS determinant. The latter is obviously not the exact solution to the Schr\"{o}dinger equation. However, its density matches the exact interacting ground-state density, so that the Hartree-exchange-correlation (Hxc) energy of the physical system, which is induced by the electronic repulsion, can be recovered from an appropriate (in principle exact and universal) Hxc density functional. Despite the success of KS-DFT, standard density-functional approximations still fail in describing strongly correlated electrons. 
To overcome this issue, various strategies have been explored and improved over the years, both in condensed matter physics~\cite{Anisimov_1997_lda_plus_U,Anisimov_1997,PRB98_Lichtenstein_LDA_plus_DMFT,kotliar2006reviewDMFT,Haule_2ble_counting_DMFT-DFT_2015,requist2019model} and quantum chemistry~\cite{CR18_Truhlar_Multiconf_DFTs}. Note that, in the latter case, in-principle-exact multi-determinantal extensions of DFT based on the adiabatic connection formalism have been developed~\cite{savinbook,toulouse2004long,sharkas2011double,fromager2015exact}. In these approaches, the KS system is only referred to in the design of density-functional approximations. In practice, a single (partially-interacting) many-body wave function is calculated self-consistently and the complement to the partial interaction energy is described with an appropriate density functional (which differs from the conventional xc one). In other words, there is no KS construction in the actual calculation. Some of these concepts have been reused in the study of model lattice Hamiltonians~\cite{fromager2015exact,senjean2018site}. A similar strategy will be adopted in the present work, with an important difference though. The {\it reduced-in-size} correlated density-functional many-body wave function that we will introduce will be extracted from a quantum embedding theory where the KS determinant of the full system is a key ingredient that must be evaluated explicitly.\\ Quantum embedding theory~\cite{IJQC20_Adam-Michele_embedding_special_issue} is at first sight a completely different approach to the strong electron correlation problem. Interestingly, some of its implementations, like the \textit{density matrix embedding theory} (DMET)~\cite{knizia2012density,knizia2013density,tsuchimochi2015density,welborn2016bootstrap,sun2016quantum,wouters2016practical,wu2019projected,JCTC20_Chan_ab-initio_DMET,faulstich2022_vrep}, rely on a reference Slater determinant that is computed for the full system. This is also the case in practical embedding calculations based on the exact factorization formalism~\cite{PRL20_Lacombe_embedding_via_EF,PRL21_Requist_EF_electrons}. Unlike the well-established \textit{dynamical mean-field theory} (DMFT)~\cite{georges1992hubbard,georges1996limitdimension,kotliar2004strongly,held2007electronic,zgid2011DMFTquantum}, which relies on the one-electron Green's function, DMET is a static theory of ground electronic states. Most importantly, the bath, in which a fragment of the original system (referred to as impurity when it is a single localized orbital) is embedded, is drastically reduced in size in DMET. As a result, the ``impurity+bath'' embedding cluster can be accurately (if not exactly) described with wave function-based quantum chemical methods. The authors have shown recently that the Schmidt decomposition of the reference Slater determinant, which is central in DMET, can be recast into a (one-electron reduced) density-matrix functional Householder transformation~\cite{sekaran2021}, which is much simpler to implement. The approach, in which the bath orbitals can in principle be correlated directly through the density matrix~\cite{sekaran2021}, is referred to as \textit{ Householder~transformed~density matrix~functional~embedding~theory} (Ht-DMFET). 
Since the seminal work of Knizia and Chan on DMET~\cite{knizia2012density}, various connections with DMFT and related approaches have been established~\cite{ayral2017dynamical,lee2019rotationally,fertitta2018rigorous,JCP19_Booth_Ew-DMET_hydrogen_chain, PRB21_Booth_effective_dynamics_static_embedding,PRX21_Lee_SlaveBoson_resp_functions-superconductivity}. Connections with DFT have been less explored, and only at the approximate level of theory. We can refer to the \textit{density embedding theory} (DET) of Bulik {\it et al.}~\cite{bulik2014density}, which is a simplified version of DMET where only the diagonal elements of the embedded density matrix are mapped onto the reference Slater determinant of the full system. More recently, Senjean~\cite{senjean2019projected} combined DFT for lattices~\cite{lima2003density,DFT_ModelHamiltonians} with DMET, and Mordovina {\it et al.}~\cite{mordovina2019self} (see also Ref.~\cite{Theophilou_2021}) proposed a {\it self-consistent density-functional embedding} (SDE), where the KS determinant is explicitly used as the reference wave function in the DMET algorithm.\\ In the present work, an in-principle-exact combination of KS-DFT with DMET is derived for the one-dimensional (1D) Hubbard lattice, as a proof of concept. For that purpose, we use the density-matrix functional Householder transformation introduced recently by the authors~\cite{sekaran2021}. On the basis of well-identified density-functional approximations, we propose and implement a {\it local potential functional embedding theory} (LPFET) where the Hxc potential is evaluated self-consistently in the lattice by ``learning'' from the embedding cluster at each iteration of the optimization process. LPFET can be seen as a flavor of KS-DFT where no density functional is actually used.\\ The paper is organized as follows. After a short introduction to the 1D Hubbard model in Sec.~\ref{subsec:1D_hub}, a detailed review of Ht-DMFET is presented in Sec.~\ref{subsec:review_Ht-DMFET}, for clarity and completeness. An exact density-functional reformulation of the theory is then proposed in Sec.~\ref{subsec:exact_dfe_dft}. The resulting approximate LPFET and its comparison with SDE are detailed in Secs.~\ref{sec:LPFET} and \ref{subsec:sde_comparison}, respectively. The LPFET algorithm is summarized in Sec.~\ref{sec:lpfet_algo}. Results obtained for a 1000-site Hubbard ring are presented and discussed in Sec.~\ref{sec:results}. The conclusion and perspectives are finally given in Sec.~\ref{sec:conclusion}. \section{Theory}\label{sec:theory} \subsection{One-dimensional Hubbard lattice}\label{subsec:1D_hub} By analogy with Ref.~\cite{sekaran2021}, various quantum embedding strategies will be discussed in the following within the simple but nontrivial uniform 1D Hubbard model. The corresponding lattice Hamiltonian (for an $L$-site ring) reads as \begin{eqnarray}\label{eq:1D_Hubbard_Hamilt} \hat{H}=\hat{T}+\hat{U}+v_{\rm ext}\hat{N}, \end{eqnarray} where the hopping operator (written in second quantization), \begin{eqnarray}\label{eq:hopping_operator} \hat{T}=-t\sum^{L-1}_{i=0}\sum_{\sigma=\uparrow,\downarrow}\left(\hat{c}^\dagger_{i\sigma}\hat{c}_{(i+1)\sigma} +\hat{c}_{(i+1)\sigma}^\dagger\hat{c}_{i\sigma} \right), \end{eqnarray} with parameter $t$, is the analog for lattices of the kinetic energy operator. For convenience, we will systematically use periodic boundary conditions, {\it{i.e.}}, $\hat{c}^\dagger_{L\sigma}\equiv \hat{c}^\dagger_{0\sigma}$.
On-site repulsions only are taken into account in the two-electron repulsion operator $\hat{U}$, {\it{i.e.}}, \begin{eqnarray} \hat{U}=\sum^{L-1}_{i=0}\hat{U}_i, \end{eqnarray} where $\hat{U}_i=U\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}$, $U$ is the parameter that controls the strength of the interaction, and $\hat{n}_{i\sigma}=\hat{c}_{i\sigma}^\dagger\hat{c}_{i\sigma}$ is a site occupation operator for spin $\sigma$. Since the lattice is uniform, the local external potential (which would correspond to the nuclear potential in a conventional quantum chemical calculation) operator is proportional to the electron counting operator [see the last term on the right-hand side of Eq.~(\ref{eq:1D_Hubbard_Hamilt})], \begin{eqnarray} \hat{N}=\sum^{L-1}_{i=0}\sum_{\sigma=\uparrow,\downarrow}\hat{n}_{i\sigma}. \end{eqnarray} The uniform value of the external potential can be rewritten as \begin{eqnarray}\label{eq:constant_ext_pot} v_{\rm ext}=-\mu, \end{eqnarray} where the chemical potential $\mu$ controls the number of electrons $N$ or, equivalently, the uniform density $n=N/L$ in the lattice. In this case, $\hat{H}$ is actually a (zero-temperature) grand canonical Hamiltonian. For convenience, we rewrite the hopping operator as follows, \begin{eqnarray}\label{eq:hopping_final_form_sumij} \hat{T}\equiv\sum^{L-1}_{i,j=0}\sum_{\sigma=\uparrow,\downarrow}t_{ij}\hat{c}^\dagger_{i\sigma}\hat{c}_{j\sigma}, \end{eqnarray} where \begin{eqnarray}\label{eq:hopping_matrix} t_{ij}=-t\left(\delta_{j(i+1)}+\delta_{i(j+1)}\right), \end{eqnarray} and $t_{(L-1)0}=t_{0(L-1)}=-t$. From now on the bounds in the summations over the full lattice will be dropped, for simplicity: \begin{eqnarray} \sum_i\equiv \sum^{L-1}_{i=0}. \end{eqnarray} Note that the quantum embedding strategies discussed in the present work can be extended to more general (quantum chemical, in particular) Hamiltonians~\cite{wouters2016practical}. For that purpose, the true {\it ab initio} Hamiltonian should be written in a localized molecular orbital basis, thus leading to the more general Hamiltonian expression, \begin{eqnarray} \hat{H}=\sum_{\sigma}\sum_{ij}h_{ij}\hat{c}^\dagger_{i\sigma}\hat{c}_{j\sigma}+\dfrac{1}{2}\sum_{\sigma,\sigma'}\sum_{ijkl}\langle ij\vert kl\rangle \hat{c}^\dagger_{i\sigma}\hat{c}^\dagger_{j\sigma'}\hat{c}_{l\sigma'}\hat{c}_{k\sigma}, \end{eqnarray} where $h_{ij}$ and $\langle ij\vert kl\rangle$ are the (kinetic and nuclear attraction) one-electron and two-electron repulsion integrals, respectively. Using a localized orbital basis allows for the decomposition of the molecule under study into fragments that can be embedded afterward~\cite{wouters2016practical}. In the following, we will work with the simpler Hamiltonian of Eq.~(\ref{eq:1D_Hubbard_Hamilt}), as a proof of concept.\\ \subsection{Review of Ht-DMFET}\label{subsec:review_Ht-DMFET} For the sake of clarity and completeness, a review of Ht-DMFET~\cite{sekaran2021} is presented in the following subsections. Various ingredients (operators and reduced quantities) that will be used later on in Sec.~\ref{subsec:exact_dfe_dft} in the derivation of a formally exact density-functional embedding theory (which is the main outcome of this work) are introduced. Real algebra will be used. For simplicity, we focus on the embedding of a single impurity. A multiple-impurity extension of the theory can be obtained from a block Householder transformation~\cite{sekaran2021,AML99_Rotella_Block_Householder_transf}. 
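Before reviewing the embedding itself, it may be useful to give a minimal numerical illustration (ours; the parameters are arbitrary) of the lattice setup of Sec.~\ref{subsec:1D_hub}. The sketch below, written in Python with NumPy, builds the hopping matrix of Eq.~(\ref{eq:hopping_matrix}) for a small ring and checks that the non-interacting ($U=0$) ground-state density is uniform; the one-electron density matrix computed here is precisely the ingredient on which the Householder construction reviewed below operates.
\begin{verbatim}
import numpy as np

# Arbitrary illustrative parameters: small ring, closed-shell filling.
L, N, t = 12, 6, 1.0

# Hopping matrix t_ij with periodic boundary conditions.
tmat = np.zeros((L, L))
for i in range(L):
    tmat[i, (i + 1) % L] = -t
    tmat[(i + 1) % L, i] = -t

# Non-interacting (U = 0) ground state: occupy the N/2 lowest
# orbitals in each spin channel.
eps, C = np.linalg.eigh(tmat)
gamma = C[:, :N // 2] @ C[:, :N // 2].T   # 1-RDM, per spin

# The ring is uniform: every site carries the filling N/(2L) per spin.
assert np.allclose(np.diag(gamma), N / (2 * L))
print("per-site filling (per spin):", gamma[0, 0])
\end{verbatim}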
Unlike in the exact reformulation of the theory which is proposed in the following Sec.~\ref{subsec:exact_dfe_dft} and where the chemical potential $\mu$ controls the density of the uniform lattice, the total number of electrons will be {\it fixed} to the value $N$ in the present section. In other words, the uniform density is set to $n=N/L$ and $\mu$ is an arbitrary constant (that could be set to zero). \subsubsection{Exact non-interacting embedding}\label{subsubsec:non_int_embedding} Let us first consider the particular case of a non-interacting ($U=0$) lattice for which Ht-DMFET is exact~\cite{sekaran2021}. As it will be applied later on (in Sec.~\ref{subsec:exact_dfe_dft}) to the auxiliary KS lattice, it is important to highlight the key features of the non-interacting embedding. Following Ref.~\cite{sekaran2021}, we label as $i=0$ one of the localized (lattice site in the present case) spin-orbitals $\ket{\chi^\sigma_0}\equiv \hat{c}_{0\sigma}^\dagger\ket{\rm vac}$ [we denote by $\ket{\rm vac}$ the vacuum state of second quantization] that, ultimately, will become the so-called {\it embedded impurity}. The ingredient that is central in Ht-DMFET is the (one-electron reduced) density matrix of the full system in the lattice representation, {\it{i.e.}}, \begin{eqnarray}\label{eq:1RDM_lattice} {\bm \gamma}^{\uparrow}={\bm \gamma}^{\downarrow}={\bm \gamma}\equiv\gamma_{ij}=\mel{\Phi}{\hat{c}_{i\sigma}^\dagger\hat{c}_{j\sigma}}{\Phi}, \end{eqnarray} where we restrict ourselves to closed-shell singlet ground states $\ket{\Phi}$, for simplicity. Note that \begin{eqnarray}\label{eq:gamma_00_filling} \gamma_{00}=\dfrac{n}{2}=\dfrac{N}{2L} \end{eqnarray} is the uniform lattice filling per spin. Since the full lattice will always be described with a single Slater determinant in the following, the density matrix ${\bm \gamma}$ will always be {\it idempotent}. The latter is used to construct the Householder unitary transformation which, once it has been applied to the one-electron lattice space, defines the so-called {\it bath} spin-orbital with which the impurity will ultimately be exclusively entangled. More explicitly, the Householder transformation matrix \begin{eqnarray}\label{eq:P_from_HH_vec} {\bm P}={\bm I}-2{\mathbf{v}}\bv^{\dagger}\equiv P_{ij}=\delta_{ij}-2{\rm v}_i{\rm v}_j, \end{eqnarray} where ${\bm I}$ is the identity matrix, is a functional of the density matrix, {\it{i.e.}}, \begin{eqnarray}\label{eq:P_functional_oneRDM} {\bm P}\equiv{\bm P}\left[{\bm \gamma}\right], \end{eqnarray} where the density-matrix-functional Householder vector components read as~\cite{sekaran2021} \begin{eqnarray}\label{eq:v0_zero} {\rm v}_0&=&0, \\ \label{eq:HH_vec_compt1} {\rm v}_1&=&\dfrac{\gamma_{10}-\tilde{\gamma}_{10}}{\sqrt{2\tilde{\gamma}_{10}\left(\tilde{\gamma}_{10}-\gamma_{10}\right)}}, \\ \label{eq:HH_vec_compt_i_larger2} {\rm v}_i&\underset{i\geq 2}{=}&\dfrac{\gamma_{i0}}{\sqrt{2\tilde{\gamma}_{10}\left(\tilde{\gamma}_{10}-\gamma_{10}\right)}}, \end{eqnarray} with \begin{eqnarray}\label{eq:gamma01_tilde} \tilde{\gamma}_{10}=-{\rm sgn}\left(\gamma_{10}\right)\sqrt{\sum_{j>0}\gamma^2_{j0}}, \end{eqnarray} and \begin{eqnarray}\label{eq:normalization_HH_vec} {{\mathbf{v}}}^\dagger{\mathbf{v}}=\sum_{i\geq 1}{\rm v}^2_i=1. \end{eqnarray} Note that, in the extreme case of a two-site lattice, the denominator in Eqs.~(\ref{eq:HH_vec_compt1}) and (\ref{eq:HH_vec_compt_i_larger2}) is still well defined and does not vanish.
Indeed, by construction [see Eq.~(\ref{eq:gamma01_tilde})], \begin{eqnarray} \tilde{\gamma}_{10}&\underset{\tiny\left\{\gamma_{j0}\overset{j>1}{=}0\right\}}{=}&-{\rm sgn}\left(\gamma_{10}\right)\abs{\gamma_{10}}=-\gamma_{10} \end{eqnarray} in this case, thus leading to $\tilde{\gamma}_{10}\left(\tilde{\gamma}_{10}-\gamma_{10}\right)=2\gamma_{10}^2>0$. Note also that ${\bm P}$ is hermitian and unitary, {\it{i.e.}}, ${\bm P}={\bm P}^\dagger$ and \begin{eqnarray}\label{eq:unitary_transf} {\bm P}^2={\bm P}{\bm P}^\dagger={\bm P}^\dagger{\bm P}={\bm I}. \end{eqnarray} The bath spin-orbital $\ket{\varphi^\sigma_{\rm bath}}$ is then constructed as follows in second quantization, \begin{eqnarray} \ket{\varphi^\sigma_{\rm bath}}:=\hat{d}_{1\sigma}^\dagger\ket{\rm vac}, \end{eqnarray} where, according to Eqs.~(\ref{eq:P_from_HH_vec}) and (\ref{eq:v0_zero}), \begin{eqnarray}\label{eq:bath_expansion} \begin{split} \hat{d}_{1\sigma}^\dagger&=\sum_{k}P_{1k}\hat{c}_{k\sigma}^\dagger \\ &=\hat{c}_{1\sigma}^\dagger-2{\rm v}_1\sum_{k\geq 1}{\rm v}_k\hat{c}_{k\sigma}^\dagger. \end{split} \end{eqnarray} More generally, the entire lattice space can be Householder-transformed as follows, \begin{eqnarray}\label{eq:Householder_creation_ops} \hat{d}_{i\sigma}^\dagger\underset{0\leq i\leq L-1}{=}\sum_k P_{ik}\hat{c}_{k\sigma}^\dagger, \end{eqnarray} and the back transformation simply reads as \begin{eqnarray}\label{eq:from_HH_to_lattice_rep} \sum_i P_{li}\hat{d}_{i\sigma}^\dagger=\sum_{ik}P_{li}P_{ik}\hat{c}_{k\sigma}^\dagger=\sum_{k}\left[{\bm P}^2\right]_{lk}\hat{c}_{k\sigma}^\dagger=\hat{c}_{l\sigma}^\dagger. \end{eqnarray} We stress that the impurity is invariant under the Householder transformation, {\it{i.e.}}, \begin{eqnarray}\label{eq:imp_invariant_underHH_transf} \hat{d}_{0\sigma}^\dagger=\hat{c}_{0\sigma}^\dagger, \end{eqnarray} and, according to the Appendix, the Householder-transformed density matrix elements involving the impurity can be simplified as follows, \begin{eqnarray}\label{eq:simplified_tilde_gamma0j} \mel{\Phi}{\hat{d}_{j\sigma}^\dagger\hat{d}_{0\sigma}}{\Phi}=\gamma_{j0}-{\rm v}_j\sqrt{2\tilde{\gamma}_{10}\left(\tilde{\gamma}_{10}-\gamma_{10}\right)}. \end{eqnarray} As readily seen from Eqs.~(\ref{eq:HH_vec_compt1}) and (\ref{eq:simplified_tilde_gamma0j}), the matrix element $\tilde{\gamma}_{10}$ introduced in Eq.~(\ref{eq:gamma01_tilde}) is in fact the bath-impurity element of the density matrix in the Householder representation: \begin{eqnarray} \mel{\Phi}{\hat{d}_{1\sigma}^\dagger\hat{d}_{0\sigma}}{\Phi}=\tilde{\gamma}_{10}. \end{eqnarray} If we denote \begin{eqnarray}\label{eq:1RDM_with_d_ops} \tilde{\bm \gamma}\equiv \tilde{\gamma}_{ij}= \mel{\Phi}{\hat{d}_{i\sigma}^\dagger\hat{d}_{j\sigma}}{\Phi}=\sum_{kl}P_{ik}\gamma_{kl}P_{lj}\equiv {\bm P}{\bm \gamma}{\bm P} \end{eqnarray} the full Householder-transformed density matrix, we do readily see from Eqs.~(\ref{eq:HH_vec_compt_i_larger2}) and (\ref{eq:simplified_tilde_gamma0j}) that the impurity is exclusively entangled with the bath, {\it{i.e.}}, \begin{eqnarray}\label{eq:imp_disconnected_from_Henv} \tilde{\gamma}_{i0}\underset{i\geq 2}{=}0, \end{eqnarray} by construction~\cite{sekaran2021}. 
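The construction above is straightforward to verify numerically. The following self-contained sketch (ours, in Python with NumPy; it reuses the illustrative ring of the previous snippet) builds the Householder vector and matrix of Eqs.~(\ref{eq:P_from_HH_vec})--(\ref{eq:normalization_HH_vec}) from ${\bm \gamma}$ and checks Eqs.~(\ref{eq:simplified_tilde_gamma0j}) and (\ref{eq:imp_disconnected_from_Henv}), {\it{i.e.}}, that in the Householder representation the impurity column of the density matrix couples to the bath only:
\begin{verbatim}
import numpy as np

# Same illustrative non-interacting ring as in the previous sketch.
L, N, t = 12, 6, 1.0
tmat = np.zeros((L, L))
for i in range(L):
    tmat[i, (i + 1) % L] = tmat[(i + 1) % L, i] = -t
eps, C = np.linalg.eigh(tmat)
gamma = C[:, :N // 2] @ C[:, :N // 2].T   # idempotent lattice 1-RDM

# Householder vector: v_0 = 0, v_1 and v_{i>=2} built from the
# impurity (site 0) column of gamma.
g10 = gamma[1, 0]
gt10 = -np.sign(g10) * np.sqrt(np.sum(gamma[1:, 0] ** 2))
d = np.sqrt(2.0 * gt10 * (gt10 - g10))
v = np.zeros(L)
v[1] = (g10 - gt10) / d
v[2:] = gamma[2:, 0] / d
assert np.isclose(v @ v, 1.0)             # normalization

# Householder transformation and transformed density matrix.
P = np.eye(L) - 2.0 * np.outer(v, v)
gamma_t = P @ gamma @ P

# The impurity is entangled with the bath (index 1) only, and the
# bath-impurity element equals the expected value.
assert np.allclose(gamma_t[2:, 0], 0.0)
assert np.isclose(gamma_t[1, 0], gt10)
print("bath-impurity coupling:", gamma_t[1, 0])
\end{verbatim}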
As $\tilde{\bm \gamma}$ inherits the idempotency of ${\bm \gamma}$ through the unitary Householder transformation, we deduce from Eq.~(\ref{eq:imp_disconnected_from_Henv}) that \begin{eqnarray} \tilde{\gamma}_{i0}=\left[\tilde{\bm \gamma}^2\right]_{i0}=\sum_j\tilde{\gamma}_{ij}\tilde{\gamma}_{j0}=\tilde{\gamma}_{i0}\tilde{\gamma}_{00}+\tilde{\gamma}_{i1}\tilde{\gamma}_{10}, \end{eqnarray} or, equivalently, \begin{eqnarray} \tilde{\gamma}_{i1}=\dfrac{\tilde{\gamma}_{i0}\left(1-\tilde{\gamma}_{00}\right)}{\tilde{\gamma}_{10}}, \end{eqnarray} thus leading to [see Eq.~(\ref{eq:imp_disconnected_from_Henv})] \begin{eqnarray}\label{eq:bath_entangled_with_imp} \tilde{\gamma}_{i1}\underset{i\geq 2}{=}0, \end{eqnarray} and \begin{eqnarray}\label{eq:2e_in_cluster} \tilde{\gamma}_{00}+\tilde{\gamma}_{11}=1. \end{eqnarray} Eqs.~(\ref{eq:bath_entangled_with_imp}) and (\ref{eq:2e_in_cluster}) simply indicate that, by construction~\cite{sekaran2021}, the bath is itself entangled exclusively with the impurity, and the Householder ``impurity+bath'' cluster, which is disconnected from its environment, contains exactly two electrons (one per spin). Therefore, the Householder cluster sector of the density matrix can be described exactly by a {\it two-electron} Slater determinant $\Phi^{\mathcal{C}}$: \begin{eqnarray}\label{eq:cluster_sector_Psi_representable} \tilde{\gamma}_{ij}\underset{0\leq i,j\leq 1}{=}\mel{\Phi^{\mathcal{C}}}{\hat{d}_{i\sigma}^\dagger\hat{d}_{j\sigma}}{\Phi^{\mathcal{C}}}. \end{eqnarray} Note that, in the Householder representation, the lattice ground-state determinant reads as $\Phi\equiv \Phi^{\mathcal{C}}\Phi_{\rm core}$, where the cluster's determinant $\Phi^{\mathcal{C}}$ is disentangled from the core one $\Phi_{\rm core}$. Once the cluster's block of the density matrix has been diagonalized, we obtain the sole occupied orbital that overlaps with the impurity, exactly like in DMET~\cite{wouters2016practical}. In other words, for non-interacting (or mean-field-like descriptions of) electrons, the Ht-DMFET construction of the bath is equivalent (although simpler) to that of DMET. We refer the reader to Ref.~\cite{sekaran2021} for a more detailed comparison of the two approaches.\\ \subsubsection{Non-interacting embedding Hamiltonian} As the Householder cluster is strictly disconnected from its environment in the non-interacting case, it is exactly described by the two-electron ground state $\ket{\Phi^{\mathcal{C}}}$ of the Householder-transformed hopping operator (that we refer to as kinetic energy operator from now on, like in DFT for lattices~\cite{DFT_ModelHamiltonians,senjean2018site}) projected onto the cluster~\cite{sekaran2021}, {\it{i.e.}}, \begin{eqnarray}\label{eq:non-int_cluster_SE} \hat{\mathcal{T}}^{\mathcal{C}}\ket{\Phi^{\mathcal{C}}}=\mathcal{E}_{\rm s}^{\mathcal{C}}\ket{\Phi^{\mathcal{C}}}, \end{eqnarray} where, according to Eqs.~(\ref{eq:hopping_final_form_sumij}) and (\ref{eq:from_HH_to_lattice_rep}), \begin{eqnarray} \hat{\mathcal{T}}^{\mathcal{C}}=\sum_{ij}\sum_{\sigma=\uparrow,\downarrow}t_{ij} \sum^1_{k,l=0}P_{ik}P_{jl}\hat{d}_{k\sigma}^\dagger\hat{d}_{l\sigma}.
\end{eqnarray} For convenience, we will separate in $\hat{\mathcal{T}}^{\mathcal{C}}$ the physical per-site kinetic energy operator [see Eq.~(\ref{eq:hopping_operator})], \begin{eqnarray} \hat{t}_{01}=-t\sum_{\sigma=\uparrow,\downarrow}\left(\hat{c}^\dagger_{0\sigma}\hat{c}_{1\sigma} +\hat{c}_{1\sigma}^\dagger\hat{c}_{0\sigma} \right), \end{eqnarray} from the correction induced (within the cluster) by the Householder transformation: \begin{eqnarray}\label{eq:hc_t01_plus_corr} \hat{\tau}^{\mathcal{C}}= \hat{\mathcal{T}}^{\mathcal{C}}-\hat{t}_{01}. \end{eqnarray} Note that, since $t_{00}=0$, $\hat{\tau}^{\mathcal{C}}$ can be expressed more explicitly as follows, \begin{eqnarray} \begin{split} \hat{\tau}^{\mathcal{C}} &=\sum_{\sigma=\uparrow,\downarrow}\left(\sum_{ij}P_{i1}P_{j0}t_{ij}\right)\left[\hat{d}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{d}_{0\sigma}\right]\\ &\quad +\sum_{\sigma=\uparrow,\downarrow}\left(\sum_{ij}P_{i1}P_{j1}t_{ij}\right)\hat{d}_{1\sigma}^\dagger\hat{d}_{1\sigma}-\hat{t}_{01} \\ &=\sum_{\sigma=\uparrow,\downarrow}\left(\sum_iP_{i1}t_{i0}\right)\left[\hat{c}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad +\sum_{\sigma=\uparrow,\downarrow}\left(\sum_{ij}P_{i1}P_{j1}t_{ij}\right)\hat{d}_{1\sigma}^\dagger\hat{d}_{1\sigma}-\hat{t}_{01} \\ &=\sum_{\sigma=\uparrow,\downarrow}t_{10}\left[\hat{c}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad -2{\rm v}_1\sum_{\sigma=\uparrow,\downarrow}\left(\sum_i{\rm v}_it_{i0}\right)\left[\hat{c}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad +\sum_{\sigma=\uparrow,\downarrow}\left(\sum_{ij}P_{i1}P_{j1}t_{ij}\right)\hat{d}_{1\sigma}^\dagger\hat{d}_{1\sigma}-\hat{t}_{01}, \end{split} \end{eqnarray} thus leading to \begin{eqnarray}\label{eq:simplified_tauC} \begin{split} \hat{\tau}^{\mathcal{C}} &=2t{\rm v}_1\sum_{\sigma=\uparrow,\downarrow}\sum_{k\geq 1}{\rm v}_k\left[\hat{c}_{0\sigma}^\dagger\hat{c}_{k\sigma}+\hat{c}_{k\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad -2{\rm v}_1\sum_{\sigma=\uparrow,\downarrow}\left(\sum_i{\rm v}_it_{i0}\right)\left[\hat{c}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad+4\left(\sum_{ij}{\rm v}_i{\rm v}_j\left({\rm v}_1^2-\delta_{j1}\right)t_{ij}\right)\sum_{\sigma=\uparrow,\downarrow}\hat{d}_{1\sigma}^\dagger\hat{d}_{1\sigma}, \end{split} \end{eqnarray} where we used Eqs.~(\ref{eq:P_from_HH_vec}) and (\ref{eq:bath_expansion}), as well as the fact that $t_{11}=0$ and $t_{10}=-t$. Note that, when no Householder transformation is performed ({\it{i.e.}}, when ${\rm v}_i=0$ for $0\leq i\leq L-1$), the bath site simply corresponds to the nearest neighbor ($i=1$) of the impurity in the lattice [see Eq.~(\ref{eq:bath_expansion})] and, as readily seen from Eqs.~(\ref{eq:hc_t01_plus_corr}) and (\ref{eq:simplified_tauC}), the non-interacting cluster's Hamiltonian $\hat{\mathcal{T}}^{\mathcal{C}}$ reduces to $\hat{t}_{01}$.\\ Unlike in the interacting case, which is discussed in Sec.~\ref{sec:approx_int_embedding}, it is unnecessary to introduce an additional potential on the embedded impurity in order to ensure that it reproduces the correct lattice filling. 
Indeed, according to Eqs.~(\ref{eq:gamma_00_filling}), (\ref{eq:v0_zero}), (\ref{eq:simplified_tilde_gamma0j}), (\ref{eq:1RDM_with_d_ops}), and (\ref{eq:cluster_sector_Psi_representable}), \begin{eqnarray}\label{eq:occ_embedded_imp_lattice_equal} \mel{\Phi^{\mathcal{C}}}{\hat{c}_{0\sigma}^\dagger\hat{c}_{0\sigma}}{\Phi^{\mathcal{C}}}=\mel{\Phi^{\mathcal{C}}}{\hat{d}_{0\sigma}^\dagger\hat{d}_{0\sigma}}{\Phi^{\mathcal{C}}}=n/2. \end{eqnarray} This constraint is automatically fulfilled when Householder transforming the kinetic energy operator $\hat{T}$ of the full lattice, thanks to the local potential contribution on the bath [see the last term on the right-hand side of Eq.~(\ref{eq:simplified_tauC})]. Interestingly, the true (non-interacting in this case) per-site energy of the lattice can be determined solely from $\Phi^{\mathcal{C}}$. Indeed, according to Eq.~(\ref{eq:1RDM_lattice}), the per-site kinetic energy can be evaluated from the lattice ground-state wave function $\Phi$ as follows, \begin{eqnarray}\label{eq:true_per-site_ener_lattice} \mel{\Phi}{\hat{t}_{01}}{\Phi}=-4t\gamma_{10}. \end{eqnarray} When rewritten in the Householder representation, Eq.~(\ref{eq:true_per-site_ener_lattice}) gives [see Eqs.~(\ref{eq:from_HH_to_lattice_rep}), (\ref{eq:imp_disconnected_from_Henv}), and (\ref{eq:cluster_sector_Psi_representable})] \begin{eqnarray}\label{eq:ener_from_lattice_to_cluster} \begin{split} \mel{\Phi}{\hat{t}_{01}}{\Phi} &=-4t\sum_iP_{1i}\tilde{\gamma}_{i0} \\ &=-4t\sum_{0\leq i\leq 1}P_{1i}\tilde{\gamma}_{i0} \\ &=-4t\sum_{0\leq i\leq 1}P_{1i}\mel{\Phi^{\mathcal{C}}}{\hat{d}_{i\sigma}^\dagger\hat{d}_{0\sigma}}{\Phi^{\mathcal{C}}} \\ &=-4t\sum_{i}P_{1i}\mel{\Phi^{\mathcal{C}}}{\hat{d}_{i\sigma}^\dagger\hat{c}_{0\sigma}}{\Phi^{\mathcal{C}}}, \end{split} \end{eqnarray} where we used Eq.~(\ref{eq:imp_invariant_underHH_transf}) and the fact that $\hat{d}_{i\sigma}\ket{\Phi^{\mathcal{C}}}\overset{i>1}{=}0$, since $\Phi^{\mathcal{C}}$ is constructed within the cluster. We finally recover from Eq.~(\ref{eq:ener_from_lattice_to_cluster}) the following equality~\cite{sekaran2021}, \begin{eqnarray}\label{eq:per_site_kin_ener_from_cluster} \begin{split} \mel{\Phi}{\hat{t}_{01}}{\Phi}&=-4t\mel{\Phi^{\mathcal{C}}}{\hat{c}_{1\sigma}^\dagger\hat{c}_{0\sigma}}{\Phi^{\mathcal{C}}} \\ &=\mel{\Phi^{\mathcal{C}}}{\hat{t}_{01}}{\Phi^{\mathcal{C}}}, \end{split}\end{eqnarray} which drastically (and exactly) simplifies the evaluation of non-interacting energies for lattices. \subsubsection{Approximate interacting embedding}\label{sec:approx_int_embedding} The simplest (approximate) extension of Ht-DMFET to interacting electrons consists in introducing the on-impurity-site two-electron repulsion operator $\hat{U}_0$ into the non-interacting Householder cluster's Hamiltonian of Eq.~(\ref{eq:non-int_cluster_SE}), by analogy with DMET~\cite{knizia2012density,sekaran2021}. In such a (standard) scheme, the interaction is treated {\it on top} of the non-interacting embedding.
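For illustration (the following matrix representation is not written out explicitly above), in the two-electron sector (one electron per spin) spanned by $\big\{\hat{d}_{0\uparrow}^\dagger\hat{d}_{0\downarrow}^\dagger\ket{\rm vac},\,\hat{d}_{0\uparrow}^\dagger\hat{d}_{1\downarrow}^\dagger\ket{\rm vac},\,\hat{d}_{1\uparrow}^\dagger\hat{d}_{0\downarrow}^\dagger\ket{\rm vac},\,\hat{d}_{1\uparrow}^\dagger\hat{d}_{1\downarrow}^\dagger\ket{\rm vac}\big\}$, the resulting cluster operator $\hat{\mathcal{T}}^{\mathcal{C}}+\hat{U}_0$ is represented by the matrix
\begin{eqnarray}
\left(
\begin{array}{cccc}
U & \tilde{t} & \tilde{t} & 0\\
\tilde{t} & \tilde{\varepsilon} & 0 & \tilde{t}\\
\tilde{t} & 0 & \tilde{\varepsilon} & \tilde{t}\\
0 & \tilde{t} & \tilde{t} & 2\tilde{\varepsilon}
\end{array}
\right),
\end{eqnarray}
where $\tilde{t}=\sum_{i}P_{i1}t_{i0}$ and $\tilde{\varepsilon}=\sum_{ij}P_{i1}P_{j1}t_{ij}$ are the impurity-bath hopping and the bath on-site potential that appear in the derivation of Eq.~(\ref{eq:simplified_tauC}), and where we used $t_{00}=0$. This is nothing but the Hamiltonian of an asymmetric Hubbard dimer~\cite{sekaran2021}, whose two-electron ground state is readily obtained by diagonalizing a $4\times 4$ matrix. The impurity chemical potential introduced below simply shifts the first three diagonal elements by $-2\tilde{\mu}^{\rm imp}$, $-\tilde{\mu}^{\rm imp}$, and $-\tilde{\mu}^{\rm imp}$, respectively.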
Unlike in the non-interacting case, it is necessary to introduce a chemical potential $\tilde{\mu}^{\rm imp}$ on the embedded impurity in order to ensure that it reproduces the correct lattice filling $N/L$~\cite{sekaran2021}, {\it{i.e.}}, \begin{eqnarray} \expval{\hat{n}_0}_{\Psi^{\mathcal{C}}}=N/L, \end{eqnarray} where the two-electron cluster's ground-state wave function $\Psi^{\mathcal{C}}$ fulfills the following interacting Schr\"{o}dinger equation: \begin{eqnarray}\label{eq:int_cluster_SE} \left(\hat{\mathcal{T}}^{\mathcal{C}}+\hat{U}_0-\tilde{\mu}^{\rm imp}\hat{n}_0\right)\ket{\Psi^{\mathcal{C}}}=\mathcal{E}^{\mathcal{C}}\ket{\Psi^{\mathcal{C}}}. \end{eqnarray} The physical per-site energy (from which we remove the chemical potential contribution) is then evaluated as follows: \begin{eqnarray}\label{eq:per_site_ener_HtDMFET} \left(E+\mu N\right)/{L}\underset{\rm{Ht-DMFET}}{\approx}\mel{\Psi^{\mathcal{C}}}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}}. \end{eqnarray} Let us stress that, in Ht-DMFET, the cluster is designed from a single determinantal (non-interacting in the present case) lattice wave function, like in regular DMET calculations~\cite{wouters2016practical}. In other words, the Householder transformation is constructed from an idempotent density matrix. Moreover, the interacting cluster is described as a {\it closed} (two-electron) subsystem. As shown for small Hubbard rings, the exact interacting cluster is in principle an open subsystem~\cite{sekaran2021}. It rigorously contains two electrons only at half filling, as a consequence of the hole-particle symmetry of the Hubbard lattice Hamiltonian~\cite{sekaran2021}.\\ Note finally that, if we Householder transform the two-electron repulsion operator $\hat{U}$ of the full lattice, one can in principle take into account its complete projection onto the cluster. It means that the interaction on the bath site could be added to the Hamiltonian in Eq.~(\ref{eq:int_cluster_SE}). For simplicity, we will focus in the following on the (so-called) non-interacting bath formulation of the theory, which is described by Eq.~(\ref{eq:int_cluster_SE}). Let us finally mention that, in the present single-impurity embedding, DMET, DET, and Ht-DMFET are equivalent~\cite{sekaran2021}. \subsection{Exact density-functional embedding}\label{subsec:exact_dfe_dft} We will show in the following that, once it has been merged with KS-DFT, Ht-DMFET can be made formally exact. For clarity, we start with reviewing briefly KS-DFT for lattice Hamiltonians in Sec.~\ref{subsubsec:KS-DFT_lattices}. A multi-determinantal extension of the theory based on the interacting Householder cluster's wave function is then proposed in Sec.~\ref{subsubsec:exact_DFE}. \subsubsection{KS-DFT for uniform lattices}\label{subsubsec:KS-DFT_lattices} According to the Hohenberg--Kohn (HK) variational principle~\cite{hktheo}, which is applied in this work to lattice Hamiltonians~\cite{DFT_ModelHamiltonians}, the ground-state energy of the full lattice can be determined as follows, \begin{eqnarray}\label{eq:HK_var_principle_full_lattice} E=\min_n\left\{F(n)+v_{\rm ext}nL\right\}, \end{eqnarray} where the HK density functional reads as \begin{eqnarray} F(n)=\mel{\Psi(n)}{\hat{T}+\hat{U}}{\Psi(n)}, \end{eqnarray} and $\ket{\Psi(n)}$ is the lattice ground state with uniform density profile $n\overset{0\leq i< L}{=}\mel{\Psi(n)}{\hat{n}_i}{\Psi(n)}$. 
Strictly speaking, $F(n)$ is a function of the site occupation $n$, hence the name {\it site occupation functional theory} often given to DFT for lattices~\cite{DFT_ModelHamiltonians,senjean2018site}. Note that the ground-state energy $E$ is in fact a (zero-temperature) grand canonical energy since a change in uniform density $n$ induces a change in the number $N=nL$ of electrons. In the thermodynamic $N\rightarrow+\infty$ and $L\rightarrow+\infty$ limit, with $N/L$ fixed to $n$, one can in principle describe {\it continuous} variations in $n$ with a pure-state wave function ${\Psi(n)}$. The derivations that follow will be based on this assumption. If we introduce the per-site analog of the HK functional, \begin{eqnarray}\label{eq:per_site_HK_func} f(n)=F(n)/L=\mel{\Psi(n)}{\hat{t}_{01}+\hat{U}_0}{\Psi(n)}, \end{eqnarray} and use the notation of Eq.~(\ref{eq:constant_ext_pot}), then Eq.~(\ref{eq:HK_var_principle_full_lattice}) becomes \begin{eqnarray} E/L\equiv E(\mu)/L=\min_n\left\{f(n)-\mu n\right\}, \end{eqnarray} and the minimizing density $n(\mu)$ fulfills the following stationarity condition: \begin{eqnarray}\label{eq:true_chem_pot_from_DFT} \mu=\left.\dfrac{\partial f(n)}{\partial n}\right|_{n=n(\mu)}. \end{eqnarray} In the conventional KS formulation of DFT, the per-site HK functional is decomposed as follows, \begin{eqnarray}\label{eq:KS_decomp} f(n)=t_{\rm s}(n)+e_{\rm Hxc}(n), \end{eqnarray} where \begin{eqnarray}\label{eq:per_site_ts} t_{\rm s}(n)=\mel{\Phi(n)}{\hat{t}_{01}}{\Phi(n)}=\dfrac{1}{L}\mel{\Phi(n)}{\hat{T}}{\Phi(n)} \end{eqnarray} is the (per-site) analog for lattices of the non-interacting kinetic energy functional, and the Hxc density functional reads as~\cite{DFT_ModelHamiltonians} \begin{eqnarray} e_{\rm Hxc}(n)=\dfrac{U}{4}n^2+e_{\rm c}(n), \end{eqnarray} where $e_{\rm c}(n)$ is the exact (per-site) correlation energy functional of the interacting lattice. The (normalized) density-functional lattice KS determinant ${\Phi(n)}$ fulfills the (non-interacting) KS equation \begin{eqnarray} \left(\hat{T}-\mu_{\rm s}(n)\hat{N}\right)\ket{\Phi(n)}=\mathcal{E}_{\rm s}(n)\ket{\Phi(n)}, \end{eqnarray} so that [see Eq.~(\ref{eq:per_site_ts})] \begin{eqnarray} \begin{split} \dfrac{\partial t_{\rm s}(n)}{\partial n}&=\dfrac{2}{L}\mel{\frac{\partial \Phi(n)}{\partial n}}{\hat{T}}{\Phi(n)} \\ &=\dfrac{2\mu_{\rm s}(n)}{L}\mel{\frac{\partial \Phi(n)}{\partial n}}{\hat{N}}{\Phi(n)} \\ &=\dfrac{\mu_{\rm s}(n)}{L}\dfrac{\partial (nL)}{\partial n} \\ &=\mu_{\rm s}(n), \end{split} \end{eqnarray} since $\mel{\Phi(n)}{\hat{N}}{\Phi(n)}=N=nL$. Thus we recover from Eqs.~(\ref{eq:true_chem_pot_from_DFT}) and (\ref{eq:KS_decomp}) the well-known relation between the physical and KS chemical potentials: \begin{eqnarray}\label{eq:KS_pot_decomp} \mu_{\rm s}(n(\mu))\equiv \mu_{\rm s}=\mu-v_{\rm Hxc}, \end{eqnarray} where the density-functional Hxc potential reads as $v_{\rm Hxc}=v_{\rm Hxc}(n(\mu))$ with \begin{eqnarray}\label{eq:Hxc_pot_from_eHxc} v_{\rm Hxc}(n)= \dfrac{\partial e_{\rm Hxc}(n)}{\partial n}. \end{eqnarray} Note that the exact non-interacting density-functional chemical potential can be expressed analytically as follows~\cite{lima2003density}: \begin{eqnarray}\label{eq:KS_dens_func_chemical_potential} \mu_{\rm s}(n) = -2t\cos\left(\frac{\pi}{2}n\right). 
\end{eqnarray} Capelle and coworkers~\cite{lima2003density,DFT_ModelHamiltonians} have designed a local density approximation (LDA) to $e_{\rm Hxc}(n)$ on the basis of exact Bethe Ansatz (BA) solutions~\cite{lieb_absence_1968} (the functional is usually referred to as BALDA).\\ Unlike in conventional {\it ab initio} DFT, the Hxc functional of lattice Hamiltonians is not truly universal: it is universal only for a given choice of one-electron (hopping) and two-electron repulsion operators. In other words, the Hxc functional does not depend on the (possibly non-uniform) one-electron local potential operator $\sum_i v_{{\rm ext},i}\hat{n}_i$, which is the analog for lattices of the nuclear potential in molecules, but it is $t$- and $U$-dependent and, in the present case, it should be designed specifically for the 1D Hubbard model. Even though BALDA can be extended to higher dimensions~\cite{Vilela_2019}, there is no general strategy for constructing (localized) orbital-occupation functional approximations, thus preventing direct applications to quantum chemistry~\cite{fromager2015exact}, for example. Turning ultimately to a potential-functional theory, as proposed in Sec.~\ref{sec:LPFET}, is appealing in this respect. With this change of paradigm, which is the second key result of the paper, the Hxc energy and potential become implicit functionals of the density, and they can be evaluated from a (few-electron) correlated wave function through a quantum embedding procedure. \subsubsection{Density-functional interacting cluster}\label{subsubsec:exact_DFE} We propose in this section an alternative formulation of DFT based on the interacting Householder cluster introduced in Sec.~\ref{sec:approx_int_embedding}. For that purpose, we consider the following {\it exact} decomposition, \begin{eqnarray}\label{eq:decomp_f_from_cluster} f(n)=f^{\mathcal{C}}(n)+\overline{e}_{\rm c}(n), \end{eqnarray} where the Householder cluster HK functional \begin{eqnarray}\label{eq:f_cluster_func} f^{\mathcal{C}}(n)=\mel{\Psi^{\mathcal{C}}(n)}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}(n)} \end{eqnarray} is evaluated from the two-electron cluster density-functional wave function ${\Psi^{\mathcal{C}}(n)}$, and $\overline{e}_{\rm c}(n)$ is the complementary correlation density functional that describes the correlation effects of the Householder cluster's environment on the embedded impurity. Note that, according to Sec.~\ref{sec:approx_int_embedding}, $\ket{\Psi^{\mathcal{C}}(n)}$ fulfills the following Schr\"{o}dinger-like equation, \begin{eqnarray}\label{eq:dens_func_int_cluster_SE} \hat{\mathcal{H}}^{\mathcal{C}}(n)\ket{\Psi^{\mathcal{C}}(n)}=\mathcal{E}^{\mathcal{C}}(n)\ket{\Psi^{\mathcal{C}}(n)}, \end{eqnarray} where (we use the same notations as in Sec.~\ref{sec:approx_int_embedding}) \begin{eqnarray}\label{eq:dens_func_cluster_hamilt} \hat{\mathcal{H}}^{\mathcal{C}}(n)\equiv \hat{\mathcal{T}}^{\mathcal{C}}(n)+\hat{U}_0-\tilde{\mu}^{\rm imp}(n)\,\hat{n}_0 \end{eqnarray} and \begin{eqnarray}\label{eq:Tcluster_op_dens_func} \hat{\mathcal{T}}^{\mathcal{C}}(n)\equiv\hat{t}_{01}+\hat{\tau}^{\mathcal{C}}(n).
\end{eqnarray} The dependence in $n$ of the (projected-onto-the-cluster) Householder-transformed kinetic energy operator $\hat{\mathcal{T}}^{\mathcal{C}}(n)$ comes from the fact that the KS lattice density matrix ${\bm \gamma}(n)\equiv\mel{\Phi(n)}{\hat{c}_{i\sigma}^\dagger\hat{c}_{j\sigma}}{\Phi(n)}$ (on which the Householder transformation is based) is, like the KS determinant $\Phi(n)\equiv \Phi^{\mathcal{C}}(n)\Phi_{\rm core}(n)$ of the lattice, a functional of the uniform density $n$. On the other hand, for a given uniform lattice density $n$, the local potential $-\tilde{\mu}^{\rm imp}(n)$ is adjusted on the embedded impurity such that the interacting cluster reproduces $n$, {\it{i.e.}}, \begin{eqnarray}\label{eq:dens_constraint_int_cluster} \mel{\Psi^{\mathcal{C}}(n)}{\hat{n}_0}{\Psi^{\mathcal{C}}(n)}=n. \end{eqnarray} Interestingly, on the basis of the two decompositions in Eqs.~(\ref{eq:KS_decomp}) and (\ref{eq:decomp_f_from_cluster}), and Eq.~(\ref{eq:f_cluster_func}), we can relate the exact Hxc functional to the density-functional Householder cluster as follows, \begin{eqnarray} e_{\rm Hxc}(n)&=\mel{\Psi^{\mathcal{C}}(n)}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}(n)} -t_{\rm s}(n)+\overline{e}_{\rm c}(n), \end{eqnarray} where, as shown in Eq.~(\ref{eq:per_site_kin_ener_from_cluster}), the per-site non-interacting kinetic energy can be determined exactly from the two-electron cluster's part $\Phi^{\mathcal{C}}(n)$ of the KS lattice determinant $\Phi(n)$, {\it{i.e.}}, \begin{eqnarray} t_{\rm s}(n)=\mel{\Phi^{\mathcal{C}}(n)}{\hat{t}_{01}}{\Phi^{\mathcal{C}}(n)}, \end{eqnarray} thus leading to the final expression \begin{eqnarray}\label{eq:final_exp_eHxc_from_cluster} \begin{split} e_{\rm Hxc}(n)&=\mel{\Psi^{\mathcal{C}}(n)}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}(n)} -\mel{\Phi^{\mathcal{C}}(n)}{\hat{t}_{01}}{\Phi^{\mathcal{C}}(n)}+\overline{e}_{\rm c}(n). \end{split} \end{eqnarray} Note that, according to Eqs.~(\ref{eq:non-int_cluster_SE}) and (\ref{eq:hc_t01_plus_corr}), $\Phi^{\mathcal{C}}(n)$ fulfills the KS-like equation \begin{eqnarray}\label{eq:dens_func_cluster_KS_eq} \left(\hat{t}_{01}+\hat{\tau}^{\mathcal{C}}(n)\right)\ket{\Phi^{\mathcal{C}}(n)}=\mathcal{E}_{\rm s}^{\mathcal{C}}(n)\ket{\Phi^{\mathcal{C}}(n)}, \end{eqnarray} where the Householder transformation ensures that $\mel{\Phi^{\mathcal{C}}(n)}{\hat{n}_0}{\Phi^{\mathcal{C}}(n)}=n$ [see Eq.~(\ref{eq:occ_embedded_imp_lattice_equal})].\\ We will now establish a clearer connection between the KS lattice system and the Householder cluster {\it via} the evaluation of the Hxc density-functional potential in the lattice. 
According to Eqs.~(\ref{eq:Hxc_pot_from_eHxc}) and (\ref{eq:final_exp_eHxc_from_cluster}), the latter can be expressed as follows, \begin{eqnarray} \begin{split} v_{\rm Hxc}(n)&=2\mel{\frac{\partial\Psi^{\mathcal{C}}(n)}{\partial n}}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}(n)} \\ &\quad -2\mel{\frac{\partial\Phi^{\mathcal{C}}(n)}{\partial n}}{\hat{t}_{01}}{\Phi^{\mathcal{C}}(n)}+\frac{\partial \overline{e}_{\rm c}(n)}{\partial n}, \end{split} \end{eqnarray} or, equivalently [see Eqs.~(\ref{eq:dens_func_int_cluster_SE}), (\ref{eq:dens_constraint_int_cluster}), and (\ref{eq:dens_func_cluster_KS_eq})], \begin{eqnarray}\label{eq:Hxc_dens_pot_final} \begin{split} v_{\rm Hxc}(n)&= \tilde{\mu}^{\rm imp}(n) -2\mel{\frac{\partial\Psi^{\mathcal{C}}(n)}{\partial n}}{\hat{\tau}^{\mathcal{C}}(n)}{\Psi^{\mathcal{C}}(n)} \\ &\quad +2\mel{\frac{\partial\Phi^{\mathcal{C}}(n)}{\partial n}}{\hat{\tau}^{\mathcal{C}}(n)}{\Phi^{\mathcal{C}}(n)}+\frac{\partial \overline{e}_{\rm c}(n)}{\partial n}. \end{split} \end{eqnarray} If we introduce the following bi-functional of the density, \begin{eqnarray}\label{eq:kinetic_corr_bi_func} \begin{split} \tau^{\mathcal{C}}_{\rm c}(n,\nu)&=\mel{\Psi^{\mathcal{C}}(\nu)}{\hat{\tau}^{\mathcal{C}}(n)}{\Psi^{\mathcal{C}}(\nu)} -\mel{\Phi^{\mathcal{C}}(\nu)}{\hat{\tau}^{\mathcal{C}}(n)}{\Phi^{\mathcal{C}}(\nu)}, \end{split} \end{eqnarray} which can be interpreted as a kinetic correlation energy induced within the density-functional cluster by the Householder transformation and the interaction on the impurity, we obtain the final {\it exact} expression \begin{eqnarray}\label{eq:final_vHxc_exp_from_muimp} v_{\rm Hxc}(n)= \tilde{\mu}^{\rm imp}(n)-\left.\frac{\partial \tau^{\mathcal{C}}_{\rm c}(n,\nu)}{\partial \nu}\right|_{\nu=n}+\frac{\partial \overline{e}_{\rm c}(n)}{\partial n}, \end{eqnarray} which is the first key result of this paper.\\ Before turning Eq.~(\ref{eq:final_vHxc_exp_from_muimp}) into a practical self-consistent embedding method (see Sec.~\ref{sec:LPFET}), let us briefly discuss its physical meaning and connection with Ht-DMFET. As pointed out in Sec.~\ref{subsubsec:non_int_embedding}, the (density-functional) operator $\hat{\tau}^{\mathcal{C}}(n)$ is an auxiliary correction to the true per-site kinetic energy operator $\hat{t}_{01}$ which originates from the Householder-transformation-based embedding of the impurity. It is not physical and its impact on the impurity chemical potential $\tilde{\mu}^{\rm imp}(n)$, which is determined in the presence of $\hat{\tau}^{\mathcal{C}}(n)$ in the cluster's Hamiltonian [see Eqs.~(\ref{eq:dens_func_int_cluster_SE})-(\ref{eq:Tcluster_op_dens_func})], should be removed when evaluating the Hxc potential of the true lattice, hence the minus sign in front of the second term on the right-hand side of Eq.~(\ref{eq:final_vHxc_exp_from_muimp}). Finally, the complementary correlation potential $\partial \overline{e}_{\rm c}(n)/\partial n$ is in charge of recovering the electron correlation effects that were lost when considering an interacting cluster that is disconnected from its environment~\cite{sekaran2021}. 
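As an elementary consistency check, which is not spelled out in the text, consider the non-interacting $U\rightarrow 0$ limit. In this case $\Psi^{\mathcal{C}}(n)=\Phi^{\mathcal{C}}(n)$ and, as pointed out in Sec.~\ref{subsubsec:non_int_embedding}, no impurity potential is needed to reproduce the lattice filling, {\it{i.e.}}, $\tilde{\mu}^{\rm imp}(n)=0$. Since $f(n)$ and $f^{\mathcal{C}}(n)$ then both reduce to $t_{\rm s}(n)$ [see Eqs.~(\ref{eq:decomp_f_from_cluster}) and (\ref{eq:per_site_kin_ener_from_cluster})], it follows that
\begin{eqnarray}
\tau^{\mathcal{C}}_{\rm c}(n,\nu)\overset{U=0}{=}0,\hspace{0.5cm}\overline{e}_{\rm c}(n)\overset{U=0}{=}0,
\end{eqnarray}
so that Eq.~(\ref{eq:final_vHxc_exp_from_muimp}) correctly reduces to $v_{\rm Hxc}(n)=0$, as it should since $e_{\rm Hxc}(n)$ vanishes in this limit.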
We should stress at this point that, in Ht-DMFET (which is equivalent to DMET or DET when a single impurity is embedded~\cite{sekaran2021}), the following density-functional approximation is made: \begin{eqnarray}\label{eq:DFA_in_HtDMFET} \overline{e}_{\rm c}(n)\underset{\rm Ht-DMFET}{\approx}0, \end{eqnarray} so that the physical density-functional chemical potential is evaluated as follows~\cite{sekaran2021}, \begin{eqnarray}\label{eq:true_chem_pot_Ht-DMFET} \mu(n)\underset{\rm Ht-DMFET}{\approx}\frac{\partial f^{\mathcal{C}}(n)}{\partial n}. \end{eqnarray} Interestingly, even though it is never computed explicitly in this context, the corresponding (approximate) Hxc potential simply reads as \begin{eqnarray} v_{\rm Hxc}(n)\underset{\rm{Ht-DMFET}}{\approx}\frac{\partial (f^{\mathcal{C}}(n)-t_{\rm s}(n))}{\partial n}, \end{eqnarray} or, equivalently [see Eqs.~(\ref{eq:final_vHxc_exp_from_muimp}) and (\ref{eq:DFA_in_HtDMFET})], \begin{eqnarray}\label{eq:approx_Hxc_pot_HtDMFET} v_{\rm Hxc}(n)\underset{\rm{Ht-DMFET}}{\approx} \tilde{\mu}^{\rm imp}(n)-\left.\frac{\partial \tau^{\mathcal{C}}_{\rm c}(n,\nu)}{\partial \nu}\right|_{\nu=n}. \end{eqnarray} Therefore, Ht-DMFET can be seen as an approximate formulation of KS-DFT where the Hxc potential is determined solely from the density-functional Householder cluster. \subsection{Local potential functional embedding theory}\label{sec:LPFET} Until now the Householder transformation has been described as a functional of the uniform density $n$ or, more precisely, as a functional of the KS density matrix, which is itself a functional of the density. If we opt for a potential-functional reformulation of the theory, as suggested in the following, the Householder transformation becomes a functional of the KS chemical potential $\mu_{\rm s}$ instead, and, consequently, the Householder correction to the per-site kinetic energy operator within the cluster [see Eq.~(\ref{eq:Tcluster_op_dens_func})] is also a functional of $\mu_{\rm s}$: \begin{eqnarray} \hat{\tau}^{\mathcal{C}}(n)\rightarrow \hat{\tau}^{\mathcal{C}}(\mu_{\rm s}). \end{eqnarray} Similarly, the interacting cluster's wave function becomes a bi-functional of the KS {\it and} interacting embedded impurity chemical potentials: \begin{eqnarray}\label{eq:bifunctional_cluster_wfn} \Psi^{\mathcal{C}}(n)\rightarrow \Psi^{\mathcal{C}}\left(\mu_{\rm s},\tilde{\mu}^{\rm imp}\right). 
\end{eqnarray} In the exact theory, for a given chemical potential value $\mu$ in the true interacting lattice, both the KS lattice and the embedded impurity reproduce the interacting lattice density $n(\mu)$, {\it{i.e.}}, \begin{eqnarray}\label{eq:exact_dens_mapping_KS_cluster} n(\mu)=n_{\rm lattice}^{\rm KS}\left(\mu-v_{\rm Hxc}\right)=n^{\mathcal{C}}\left(\mu-v_{\rm Hxc},\tilde{\mu}^{\rm imp}\right), \end{eqnarray} where \begin{eqnarray} n_{\rm lattice}^{\rm KS}(\mu_{\rm s})\equiv\expval{\hat{n}_0}_{\hat{T}-\mu_{\rm s}\hat{N}}, \end{eqnarray} and \begin{eqnarray} \begin{split} n^{\mathcal{C}}\left(\mu_{\rm s},\tilde{\mu}^{\rm imp}\right) &=\expval{\hat{n}_0}_{\Psi^{\mathcal{C}}\left(\mu_{\rm s},\tilde{\mu}^{\rm imp}\right)} \\ &\equiv \expval{\hat{n}_0}_{\hat{t}_{01}+\hat{\tau}^{\mathcal{C}}(\mu_{\rm s})+\hat{U}_0-\tilde{\mu}^{\rm imp}\hat{n}_0}, \end{split} \end{eqnarray} with, according to Eq.~(\ref{eq:final_vHxc_exp_from_muimp}), \begin{eqnarray}\label{eq:exact_muimp_expression} \begin{split} \tilde{\mu}^{\rm imp}&=\tilde{\mu}^{\rm imp}(n(\mu)) \\ &=v_{\rm Hxc}-\left[\frac{\partial \overline{e}_{\rm c}(\nu)}{\partial \nu} -\frac{\partial \tau^{\mathcal{C}}_{\rm c}(n(\mu),\nu)}{\partial \nu}\right]_{\nu=n(\mu)}. \end{split} \end{eqnarray} The density constraint of Eq.~(\ref{eq:exact_dens_mapping_KS_cluster}) combined with Eq.~(\ref{eq:exact_muimp_expression}) allows for an in-principle-exact evaluation of the Hxc potential $v_{\rm Hxc}$. Most importantly, these two equations can be used for designing an alternative (and self-consistent) embedding strategy on the basis of well-identified density-functional approximations. Indeed, in Ht-DMFET, the second term on the right-hand side of Eq.~(\ref{eq:exact_muimp_expression}) is simply dropped, for simplicity [see Eq.~(\ref{eq:DFA_in_HtDMFET})]. If, in addition, we neglect the Householder kinetic correlation density-bi-functional potential correction $\partial \tau^{\mathcal{C}}_{\rm c}(n,\nu)/\partial \nu$ [last term on the right-hand side of Eq.~(\ref{eq:exact_muimp_expression})], we obtain from Eq.~(\ref{eq:exact_dens_mapping_KS_cluster}) the following self-consistent equation, \begin{eqnarray}\label{eq:sc_LPFET_eq} n_{\rm lattice}^{\rm KS}\left(\mu-\tilde{v}_{\rm Hxc}\right)=n^{\mathcal{C}}\left(\mu-\tilde{v}_{\rm Hxc},\tilde{v}_{\rm Hxc}\right), \end{eqnarray} from which an approximation $\tilde{v}_{\rm Hxc}\equiv \tilde{v}_{\rm Hxc}(\mu)$ to the Hxc potential can be determined. Eq.~(\ref{eq:sc_LPFET_eq}) is the second main result of this paper. Since $\tilde{v}_{\rm Hxc}$ is now the to-be-optimized quantity on which the embedding fully relies, we refer to the approach as {\it local potential functional embedding theory} (LPFET), in which the key density-functional approximation that is made reads as \begin{eqnarray}\label{eq:LPFET_approx_to_vHxc} {v}_{\rm Hxc}(n)\underset{\rm LPFET}{\approx}\tilde{\mu}^{\rm imp}(n). \end{eqnarray} The approach is graphically summarized in Fig.~\ref{Fig:self-consistent-loop-scheme}. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.4]{Figure1.pdf} \caption{Graphical representation of the LPFET procedure. Note that the {\it same} Hxc potential $\tilde{v}_{\rm Hxc}$ is used in the KS lattice and the embedding Householder cluster. It is optimized self-consistently in order to fulfill the density constraint of Eq.~(\ref{eq:sc_LPFET_eq}). 
See text for further details.} \label{Fig:self-consistent-loop-scheme} \end{center} \end{figure} In order to verify that the first HK theorem~\cite{hktheo} still holds at the LPFET level of approximation, let us assume that two chemical potentials $\mu$ and $\mu+\Delta\mu$ lead to the same density. If so, the converged Hxc potentials should differ by $\tilde{v}_{\rm Hxc}\left(\mu+\Delta\mu\right)-\tilde{v}_{\rm Hxc}(\mu)=\Delta\mu$, so that both calculations give the same KS chemical potential value [see Eq.~(\ref{eq:KS_pot_decomp})]. According to Eqs.~(\ref{eq:sc_LPFET_eq}) and (\ref{eq:LPFET_approx_to_vHxc}), it would imply that two different values of the interacting embedded impurity chemical potential can give the same density, which is impossible~\cite{sekaran2021,senjean2017local}. Therefore, when convergence is reached in Eq.~(\ref{eq:sc_LPFET_eq}), we can generate an approximate map \begin{eqnarray} \mu\rightarrow n(\mu) \underset{\rm LPFET}{\approx} n_{\rm lattice}^{\rm KS}\left(\mu-\tilde{v}_{\rm Hxc}\right) = \expval{\hat{n}_0}_{\Psi^{\mathcal{C}}\left(\mu-\tilde{v}_{\rm Hxc},\tilde{v}_{\rm Hxc}\right)} , \end{eqnarray} and compute approximate per-site energies as follows, \begin{eqnarray}\label{eq:per_site_ener_LPFFET} \frac{E(\mu)}{L}+\mu n(\mu)\underset{\rm{LPFET}}{\approx}\expval{\hat{t}_{01}+\hat{U}_0}_{\Psi^{\mathcal{C}}\left(\mu-\tilde{v}_{\rm Hxc},\tilde{v}_{\rm Hxc}\right)}, \end{eqnarray} since the approximation in Eq.~(\ref{eq:DFA_in_HtDMFET}) is also used in LPFET, as discussed above.\\ Note that Ht-DMFET and LPFET use the same per-site energy expression [see Eq.~(\ref{eq:per_site_ener_HtDMFET})], which is a functional of the interacting cluster's wave function. In both approaches, the latter and the non-interacting lattice share the same density. Therefore, if the per-site energy is evaluated as a function of the lattice filling $n$, both methods will give exactly the same result. However, different energies will be obtained if they are evaluated as functions of the chemical potential value $\mu$ in the interacting lattice. The reason is that Ht-DMFET and LPFET will give different densities. Indeed, as shown in Sec.~\ref{subsubsec:exact_DFE}, Ht-DMFET can be viewed as an approximation to KS-DFT where the Hxc density-functional potential of Eq.~(\ref{eq:approx_Hxc_pot_HtDMFET}) is employed. As readily seen from Eq.~(\ref{eq:LPFET_approx_to_vHxc}), the LPFET and Ht-DMFET Hxc potentials differ by the Householder kinetic correlation potential (which is neglected in LPFET). If the corresponding KS densities were the same then the Hxc potential, the Householder transformation, and, therefore, the chemical potential on the interacting embedded impurity would be the same, which is impossible according to Eqs.~(\ref{eq:approx_Hxc_pot_HtDMFET}) and (\ref{eq:LPFET_approx_to_vHxc}).\\ \subsection{Comparison with SDE}\label{subsec:sde_comparison} At this point we should stress that LPFET is very similar to the SDE approach of Mordovina {\it et al.}~\cite{mordovina2019self}. The major difference between SDE and LPFET (in addition to the fact that LPFET has a clear connection with a formally exact density-functional embedding theory based on the Householder transformation) is that no KS construction is made within the cluster. Instead, the Hxc potential is directly updated in the KS lattice, on the basis of the correlated embedded impurity density. 
This becomes even more clear when rewriting Eq.~(\ref{eq:sc_LPFET_eq}) as follows, \begin{eqnarray}\label{eq:LPFET_Hxc_pot_from_dens_cluster} \tilde{v}_{\rm Hxc}=\mu-\left[n_{\rm lattice}^{\rm KS}\right]^{-1}\left(n^{\mathcal{C}}\left(\mu-\tilde{v}_{\rm Hxc},\tilde{v}_{\rm Hxc}\right)\right), \end{eqnarray} where $\left[n_{\rm lattice}^{\rm KS}\right]^{-1}:\,n\rightarrow \mu_{\rm s}(n)$ is the inverse of the non-interacting chemical-potential-density map. A practical advantage of such a procedure (which remains feasible since the full system is treated at the non-interacting KS level only) lies in the fact that the KS construction within the cluster is automatically (and exactly) generated by the Householder transformation, once the density has been updated in the KS lattice (see Eq.~(\ref{eq:occ_embedded_imp_lattice_equal}) and the comment that follows). Most importantly, the density in the KS lattice and the density of the non-interacting KS embedded impurity (which, unlike the embedded {\it interacting} impurity, is not used in the actual calculation) will match {\it at each iteration} of the Hxc potential optimization process, as it should when convergence is reached. If, at a given iteration, the KS construction were made directly within the cluster, there would always be a ``delay'' in density between the KS lattice and the KS cluster, which would only disappear at convergence. Note that, when the latter is reached, the (approximate) Hxc potential of the lattice should match the one extracted from the cluster, which is defined in SDE as the difference between the KS cluster Hamiltonian and the one-electron part of the interacting cluster's Hamiltonian~\cite{mordovina2019self}, both reproducing the density of the KS lattice. Therefore, according to Eqs.~(\ref{eq:dens_func_cluster_hamilt}), (\ref{eq:Tcluster_op_dens_func}) and (\ref{eq:dens_func_cluster_KS_eq}), the converged Hxc potential will simply correspond to the chemical potential on the interacting embedded impurity, exactly like in LPFET [see Eq.~(\ref{eq:LPFET_approx_to_vHxc})].\\ Note finally that the simplest implementation of LPFET, as suggested by Eq.~(\ref{eq:LPFET_Hxc_pot_from_dens_cluster}), can be formally summarized as follows: \begin{eqnarray}\label{eq:LPFET_algo} \begin{split} \tilde{v}^{(i+1)}_{\rm Hxc}&=\mu-\left[n_{\rm lattice}^{\rm KS}\right]^{-1}\left(n^{\mathcal{C}}\left(\mu-\tilde{v}^{(i)}_{\rm Hxc},\tilde{v}^{(i)}_{\rm Hxc}\right)\right), \\ \tilde{v}^{(i=0)}_{\rm Hxc}&=0. \end{split} \end{eqnarray} A complete description of the algorithm is given in the next section. \section{LPFET algorithm}\label{sec:lpfet_algo} The LPFET approach introduced in Sec.~\ref{sec:LPFET} aims at computing the interacting chemical-potential-density $\mu \rightarrow n(\mu)$ map through the self-consistent optimization of the uniform Hxc potential. A schematics of the algorithm is provided in Fig.~\ref{Fig:self-consistent-loop-convergence}. It can be summarized as follows. \newline \\1. We start by diagonalizing the one-electron Hamiltonian ({\it{i.e.}}, the hopping in the present case) matrix ${\bm t}\equiv t_{ij}$ [see Eq.~(\ref{eq:hopping_matrix})]. Thus we obtain the ``molecular'' spin-orbitals and their corresponding energies. We fix the chemical potential of the interacting lattice to some value $\mu$ and (arbitrarily) initialize the Hxc potential to $\tilde{v}_{\rm Hxc}=0$. Therefore, at the zeroth iteration, the KS chemical potential $\mu_{\rm s}$ equals $\mu$. \newline \\ 2. 
We occupy all the molecular spin-orbitals with energies below $\mu_{\rm s}=\mu-\tilde{v}_{\rm Hxc}$ and construct the corresponding density matrix (in the lattice representation). The latter provides the uniform KS density (denoted $n^{\rm KS}_{\rm lattice}$ in Fig.~\ref{Fig:self-consistent-loop-convergence}) and the embedding Householder cluster Hamiltonian [see Eq.~(\ref{eq:int_cluster_SE})] in which the impurity chemical potential is set to $\tilde{\mu}^{\rm imp}=\tilde{v}_{\rm Hxc}$ [see Eq.~(\ref{eq:LPFET_approx_to_vHxc})]. \newline \\ 3. We solve the interacting Schr\"{o}dinger equation for the two-electron Householder cluster and deduce the occupation of the embedded impurity (which is denoted $n^{\mathscr{C}}$ in Fig.~\ref{Fig:self-consistent-loop-convergence}). This can be done analytically since the Householder cluster is an asymmetric Hubbard dimer~\cite{sekaran2021}. \newline \\ 4. We verify that the density in the KS lattice $n^{\rm KS}_{\rm lattice}$ and the occupation of the interacting embedded impurity $n^{\mathscr{C}}$ match (the convergence threshold is set to $10^{-4}$). If this is the case, the calculation has converged and $n^{\mathscr{C}}$ is interpreted as (an approximation to) the density $n(\mu)$ in the true interacting lattice. If the two densities do not match, the Hxc potential $\tilde{v}_{\rm Hxc}$ is adjusted in the KS lattice such that the latter reproduces $n^{\mathscr{C}}$ [see Eq.~(\ref{eq:LPFET_algo})] or, equivalently, such that the KS lattice contains $Ln^{\mathscr{C}}$ electrons. We then return to step 2. A minimal numerical sketch of the complete loop is given below.\\ \color{black} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.27]{Figure2.pdf} \caption{Schematics of the LPFET algorithm. The (one-electron reduced) density matrix of the KS lattice is referred to as the 1RDM. See text for further details.} \label{Fig:self-consistent-loop-convergence} \end{center} \end{figure} \section{Results and discussion}\label{sec:results} In the following, LPFET is applied to a uniform Hubbard ring with a large number of sites ($L=1000$) in order to approach the thermodynamic limit. Periodic boundary conditions have been used. The hopping parameter is set to $t$ = 1. As pointed out in Sec.~\ref{sec:LPFET}, plotting the Ht-DMFET (which is equivalent to DMET or DET for a single embedded impurity) and LPFET per-site energies as functions of the lattice filling $n$ would give exactly the same results (we refer the reader to Ref.~\cite{sekaran2021} for a detailed analysis of these results). However, the chemical-potential-density $\mu\rightarrow n(\mu)$ maps obtained with both methods are expected to differ since they rely on different density-functional approximations [see Eqs.~(\ref{eq:approx_Hxc_pot_HtDMFET}) and (\ref{eq:LPFET_approx_to_vHxc})]. We focus on the self-consistent evaluation of the LPFET map in the following. Comparison is made with Ht-DMFET and the exact BA results.\\ As illustrated by the strongly correlated results of Figs.~\ref{fig:convergence_density} and \ref{fig:convergence_vHxc}, the LPFET self-consistency loop converges smoothly in a few iterations. The same observation is made in weaker correlation regimes (not shown). The deviation in density between the KS lattice and the embedded impurity is drastically reduced after the first iteration (see Fig.~\ref{fig:convergence_density}). This is also reflected in the large variation of the Hxc potential from the zeroth to the first iteration (see Fig.~\ref{fig:convergence_vHxc}).
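For concreteness, the loop of Sec.~\ref{sec:lpfet_algo} can be condensed into the following minimal Python sketch. It is not the code used to produce the figures of this section: for compactness, the analytic map of Eq.~(\ref{eq:KS_dens_func_chemical_potential}) replaces the explicit occupation of the lattice spin-orbitals in step 2, a smaller ring with $L=400$ sites is used to build the Householder cluster, the values $U/t=4$ and $\mu/t=-0.5$ are arbitrary illustrative choices, and the plain iteration of Eq.~(\ref{eq:LPFET_algo}) is applied without any damping:
\begin{verbatim}
import numpy as np

# Minimal LPFET sketch (illustrative; not the production code).

L, t, U, mu = 400, 1.0, 4.0, -0.5

tmat = np.zeros((L, L))
for i in range(L):
    tmat[i, (i + 1) % L] = tmat[(i + 1) % L, i] = -t
eps, C = np.linalg.eigh(tmat)

def cluster_h(mu_s):
    # Householder "impurity+bath" one-electron (2x2) matrix built from the
    # KS lattice 1RDM at chemical potential mu_s (assumed inside the band).
    occ = eps < mu_s
    gamma = C[:, occ] @ C[:, occ].T
    x = gamma[1:, 0]
    v = np.zeros(L)
    v[1:] = x
    v[1] += np.copysign(np.linalg.norm(x), x[0])
    v /= np.linalg.norm(v)
    P = np.eye(L) - 2.0 * np.outer(v, v)
    return (P @ tmat @ P)[:2, :2]

def impurity_occupation(h, U0, mu_imp):
    # Exact two-electron ground state of the (asymmetric Hubbard dimer)
    # cluster in the basis {|00>, |01>, |10>, |11>} (impurity = 0, bath = 1);
    # returns the occupation <n_0> of the embedded impurity.
    e0, e1, g = h[0, 0] - mu_imp, h[1, 1], h[0, 1]
    H = np.array([[2 * e0 + U0, g,       g,       0.0   ],
                  [g,           e0 + e1, 0.0,     g     ],
                  [g,           0.0,     e0 + e1, g     ],
                  [0.0,         g,       g,       2 * e1]])
    psi = np.linalg.eigh(H)[1][:, 0]
    return 2 * psi[0]**2 + psi[1]**2 + psi[2]**2

def n_ks(mu_s):
    # Thermodynamic-limit KS map, inverse of mu_s(n) = -2t cos(pi n / 2).
    return (2.0 / np.pi) * np.arccos(np.clip(-mu_s / (2 * t), -1.0, 1.0))

v_hxc = 0.0                                               # step 1
for it in range(100):
    mu_s = mu - v_hxc
    n_lattice = n_ks(mu_s)                                # step 2 (KS lattice)
    n_c = impurity_occupation(cluster_h(mu_s), U, v_hxc)  # step 3 (cluster)
    if abs(n_lattice - n_c) < 1e-4:                       # step 4 (matching)
        break
    v_hxc = mu + 2.0 * t * np.cos(0.5 * np.pi * n_c)      # Hxc potential update

print(it, n_c, v_hxc)
\end{verbatim}
In such a scheme, as observed in Fig.~\ref{fig:convergence_vHxc}, the largest variation of $\tilde{v}_{\rm Hxc}$ occurs at the very first iteration.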
It originates from the fact that, at the zeroth iteration, the Hxc potential is set to zero in the lattice while, in the embedding Householder cluster, the interaction on the impurity site is ``turned on''. As shown in Fig.~\ref{fig:convergence_density}, the occupation of the interacting embedded impurity is already at the zeroth iteration a good estimate of the self-consistently converged density. A few additional iterations are needed to refine the result. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.6]{Figure3.pdf} \caption{Comparison of the KS lattice and embedded impurity densities at each iteration of the LPFET calculation. The interaction strength and chemical potential values are set to $U/t=8$ and $\mu/t = - 0.97$, respectively. As shown in the inset, convergence is reached after five iterations.} \label{fig:convergence_density} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.6]{Figure4.pdf} \caption{Convergence of the LPFET Hxc potential for $U/t=8$ and $\mu/t = - 0.97$.} \label{fig:convergence_vHxc} \end{center} \end{figure} The converged LPFET densities are plotted in Fig.~\ref{fig:Mott-Hubbard} as functions of the chemical potential $\mu$ in various correlation regimes. The non-interacting $U=0$ curve describes the KS lattice at the zeroth iteration of the LPFET calculation. Thus we can visualize, as $U$ deviates from zero, how much the KS lattice learns from the interacting two-electron Householder cluster. LPFET is actually quite accurate (even more than Ht-DMFET, probably because of error cancellations) in the low-density regime. Even though LPFET deviates from Ht-DMFET when electron correlation is strong, as expected, their chemical-potential-density maps are quite similar. This is an indication that neglecting the Householder kinetic correlation potential contribution to the Hxc potential, as done in LPFET, is not a crude approximation, even in the strongly correlated regime. As expected~\cite{knizia2012density,sekaran2021}, LPFET and Ht-DMFET poorly perform when approaching half filling. They are unable to describe the density-driven Mott--Hubbard transition ({\it{i.e.}}, the opening of the gap). As discussed in Ref.~\cite{sekaran2021}, this might be related to the fact that, in the exact theory, the Householder cluster is not disconnected from its environment and it contains a fractional number of electrons, away from half filling, unlike in the (approximate) Ht-DMFET and LPFET schemes. In the language of KS-DFT, modeling the gap opening is equivalent to modeling the derivative discontinuity in the density-functional correlation potential $v_{\rm c}(n)=\mu(n)-\mu_{\rm s}(n)-\frac{U}{2}n$ at half filling. As clearly shown in Fig.~\ref{fig:Hartree-exchange-correlation-potential}, Ht-DMFET and LPFET do not reproduce this feature. In the language of the exact density-functional embedding theory derived in Sec.~\ref{subsec:exact_dfe_dft}, both Ht-DMFET and LPFET approximations neglect the complementary density-functional correlation energy $\overline{e}_{\rm c}(n)$ that is induced by the environment of the (closed) density-functional Householder cluster. As readily seen from Eq.~(\ref{eq:final_vHxc_exp_from_muimp}), it should be possible to describe the density-driven Mott--Hubbard transition with a single statically embedded impurity, provided that we can model the derivative discontinuity in $\partial\overline{e}_{\rm c}(n)/\partial n$ at half filling. 
This is obviously a challenging task that is usually bypassed by embedding more impurities~\cite{knizia2012density,sekaran2021}. The implementation of a multiple-impurity LPFET as well as its generalization to higher-dimension lattice or quantum chemical Hamiltonians is left for future work. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.6]{Figure5.pdf} \caption{Converged LPFET densities (red solid lines) plotted as functions of the chemical potential $\mu$ in various correlation regimes. Comparison is made with the exact BA (black solid lines) and Ht-DMFET (blue dotted lines) results. In the latter case, the chemical potential is evaluated {\it via} the numerical differentiation of the density-functional Ht-DMFET per-site energy [see Eqs.~(\ref{eq:f_cluster_func}) and (\ref{eq:true_chem_pot_Ht-DMFET})]. The non-interacting ($U=0$) chemical-potential-density map [see Eq.~(\ref{eq:KS_dens_func_chemical_potential})] is shown for analysis purposes.} \label{fig:Mott-Hubbard} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.6]{Figure6.pdf} \caption{Correlation potential $v_{\rm c}(n)=\mu(n)-\mu_{\rm s}(n)-\frac{U}{2}n$ plotted as a function of the lattice filling $n$ at the Ht-DMFET (blue dashed line) and LPFET (red solid line) levels of approximation for $U/t=8$. Comparison is made with the exact BA correlation potential (black solid line).} \label{fig:Hartree-exchange-correlation-potential} \end{center} \end{figure} \section{Conclusion and perspectives}\label{sec:conclusion} An in-principle-exact density-functional reformulation of the recently proposed {\it Householder transformed density matrix functional embedding theory} (Ht-DMFET)~\cite{sekaran2021} has been derived for the uniform 1D Hubbard Hamiltonian with a single embedded impurity. On that basis, an approximate {\it local potential functional embedding theory} (LPFET) has been proposed and implemented. Ht-DMFET, which is equivalent to DMET or DET in the particular case of a single impurity, is reinterpreted in this context as an approximation to DFT where the complementary density-functional correlation energy $\overline{e}_{\rm c}(n)$ induced by the environment of the embedding ``impurity+bath'' cluster is neglected. LPFET neglects, in addition, the kinetic correlation effects induced by the Householder transformation on the impurity chemical potential. We have shown that combining the two approximations is equivalent to approximating the latter potential with the Hxc potential of the full lattice. Thus an approximate Hxc potential can be determined {\it self-consistently} for a given choice of external (chemical in the present case) potential in the true interacting lattice. The self-consistency loop, which does not exist in regular single-impurity DMET or DET~\cite{PRB21_Booth_effective_dynamics_static_embedding}, emerges naturally in LPFET from the exact density constraint, {\it{i.e.}}, by forcing the KS lattice and interacting embedded impurity densities to match. In this context, the energy becomes a functional of the Hxc potential. In this respect, LPFET can be seen as a flavor of KS-DFT where no density functional is used. LPFET is very similar to SDE~\cite{mordovina2019self}. The two approaches essentially differ in the optimization of the potential. In LPFET, no KS construction is made within the embedding cluster, unlike in SDE. Instead, the Hxc potential is directly updated in the lattice. 
As a result, the KS cluster (which is not used in the actual calculation) can be automatically generated with the correct density by applying the Householder transformation to the KS lattice Hamiltonian.\\ LPFET and Ht-DMFET chemical-potential-density maps have been computed for a 1000-site Hubbard ring. Noticeable differences appear in the strongly correlated regime. LPFET is more accurate than Ht-DMFET in the low-density regime, probably because of error cancellations. As expected from previous works~\cite{sekaran2021,knizia2012density}, their performance deteriorates as we approach half filling. It appears that, in the language of density-functional embedding theory, it should be possible to describe the density-driven Mott--Hubbard transition ({\it{i.e.}}, the opening of the gap), provided that the complementary correlation potential $\partial\overline{e}_{\rm c}(n)/\partial n$ exhibits a derivative discontinuity at half filling. Since the latter is neglected in both methods, the gap opening is not reproduced. The missing correlation effects might be recovered by applying a multi-reference G\"{o}rling--Levy-type perturbation theory on top of the correlated cluster calculation~\cite{sekaran2021}. Extending LPFET to multiple impurities by means of a block Householder transformation is another viable strategy~\cite{sekaran2021}. Note that, like DMET or SDE, LPFET is in principle applicable to quantum chemical Hamiltonians written in a localized molecular orbital basis. Work is currently in progress in these directions. \section*{Acknowledgments} The authors thank Saad Yalouz (for his comments on the manuscript and many fruitful discussions) and Martin Rafael Gulin (for stimulating discussions). The authors also thank LabEx CSC (ANR-10-LABX-0026-CSC) and ANR (ANR-19-CE29-0002 DESCARTES and ANR-19-CE07-0024-02 CoLab projects) for funding.
Hence, it is essential that authors check that their manuscripts format acceptably under \texttt{preprint}. Manuscripts submitted to AAPM that do not format correctly under the \texttt{preprint} option may be delayed in both the editorial and production processes. The \texttt{widetext} environment will make the text the width of the full page, as on page~\pageref{eq:wideeq}. (Note the use the \verb+\pageref{#1}+ to get the page number right automatically.) The width-changing commands only take effect in \texttt{twocolumn} formatting. It has no effect if \texttt{preprint} formatting is chosen instead. \subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes} Citations in text refer to entries in the Bibliography; they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+. Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly, its entire repertoire of commands are available in your document; see the \verb+natbib+ documentation for further details. The argument of \verb+\cite+ is a comma-separated list of \emph{keys}; a key may consist of letters and numerals. By default, AAPM citations are numerical; \cite{feyn54} to give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}). REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate. To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983}, and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}). Note that, when numerical citations are used, the references were sorted into the same order they appear in the bibliography. A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command, where the argument is the citation key mentioned above. \verb+\bibitem{#1}+ commands may be crafted by hand or, preferably, generated by using Bib\TeX. The AAPM styles for REV\TeX~4 include Bib\TeX\ style file \verb+aapmrev4-2.bst+, appropriate for numbered bibliography. REV\TeX~4 will automatically choose the style appropriate for the document's selected class options: the default is numerical. This sample file demonstrates a simple use of Bib\TeX\ via a \verb+\bibliography+ command referencing the \verb+aapmsamp.bib+ file. Running Bib\TeX\ (in this case \texttt{bibtex aapmsamp}) after the first pass of \LaTeX\ produces the file \verb+aapmsamp.bbl+ which contains the automatically formatted \verb+\bibitem+ commands (including extra markup information via \verb+\bibinfo+ commands). If not using Bib\TeX, the \verb+thebibiliography+ environment should be used instead. \paragraph{Fourth-level heading is run in.}% Footnotes are produced using the \verb+\footnote{#1}+ command. Numerical style citations put footnotes into the bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}. Note: due to the method used to place footnotes in the bibliography, \emph{you must re-run BibTeX every time you change any of your document's footnotes}. \section{Math and Equations} Inline math may be typeset using the \verb+$+ delimiters. Bold math symbols may be achieved using the \verb+bm+ package and the \verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and Blackboard (or open face or double struck) characters should be typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands respectively. Both are supplied by the \texttt{amssymb} package. 
For example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and \verb+$\mathfrak{G}$+ gives $\mathfrak{G}$ In \LaTeX\ there are many different ways to display equations, and a few preferred ways are noted below. Displayed math will flush left by default. Below we have numbered single-line equations, the most common kind: \begin{eqnarray} \chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2} \left( \begin{array}{c} |{\bf p}|+p_z\\ px+ip_y \end{array}\right)\;, \\ \left\{% \openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2}% \right\}% \label{eq:one}. \end{eqnarray} Note the open one in Eq.~(\ref{eq:one}). Not all numbered equations will fit within a narrow column this way. The equation number will move down automatically if it cannot fit on the same line with a one-line equation: \begin{equation} \left\{ ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2}% \right\}. \end{equation} When the \verb+\label{#1}+ command is used [cf. input for Eq.~(\ref{eq:one})], the equation can be referred to in text without knowing the equation number that \TeX\ will assign to it. Just use \verb+\ref{#1}+, where \verb+#1+ is the same name that used in the \verb+\label{#1}+ command. Unnumbered single-line equations can be typeset using the \verb+\[+, \verb+\]+ format: \[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \] \subsection{Multiline equations} Multiline equations are obtained by using the \verb+eqnarray+ environment. Use the \verb+\nonumber+ command at the end of each line to avoid assigning a number: \begin{eqnarray} {\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1} \delta_{\sigma_1,-\sigma_2} (g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\ &&\times [\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1), \end{eqnarray} \begin{eqnarray} \sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2} (N^2-1)\nonumber \\ & &\times \left( \sum_{i<j}\right) \sum_{\text{perm}} \frac{1}{S_{12}} \frac{1}{S_{12}} \sum_\tau c^f_\tau~. \end{eqnarray} \textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline equation if \verb+\nonumber+ is also used on that line. Incorrect cross-referencing will result. Notice the use \verb+\text{#1}+ for using a Roman font within a math environment. To set a multiline equation without \emph{any} equation numbers, use the \verb+\begin{eqnarray*}+, \verb+\end{eqnarray*}+ format: \begin{eqnarray*} \sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2} (N^2-1)\\ & &\times \left( \sum_{i<j}\right) \left( \sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}} \right) \frac{1}{S_{12}}~. \end{eqnarray*} To obtain numbers not normally produced by the automatic numbering, use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired equation number. For example, to get an equation number of (\ref{eq:mynum}), \begin{equation} g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum} \end{equation} A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires \texttt{amsmath}. The \verb+\tag{#1}+ must come before the \verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is \textit{transparent} to the automatic numbering in REV\TeX{}; therefore, the number must be known ahead of time, and it must be manually adjusted if other equations are added. \verb+\tag{#1}+ works with both single-line and multiline equations. 
\verb+\tag{#1}+ should only be used in exceptional case - do not use it to number all equations in a paper. Note the equation number gets reset again: \begin{equation} g^+g^+g^+ \rightarrow g^+g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \end{equation} Enclosing single-line and multiline equations in \verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce a set of equations that are ``numbered'' with letters, as shown in Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below: \begin{subequations} \label{eq:whole} \begin{equation} \left\{ abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2} \right\},\label{subeq:1} \end{equation} \begin{eqnarray} {\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1} (g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\ &&\times [\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2} \end{eqnarray} \end{subequations} Putting a \verb+\label{#1}+ command right after the \verb+\begin{subequations}+, allows one to reference all the equations in a subequations environment. For example, the equations in the preceding subequations environment were Eqs.~(\ref{eq:whole}). \subsubsection{Wide equations} The equation that follows is set in a wide format, i.e., it spans across the full page. The wide format is reserved for long equations that cannot be easily broken into four lines or less: \begin{widetext} \begin{equation} {\cal R}^{(\text{d})}= g_{\sigma_2}^e \left( \frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2} +\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2} \right) + x_WQ_e \left( \frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2} +\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2} \right)\;. \label{eq:wideeq} \end{equation} \end{widetext} This is typed to show the output is in wide format. (Since there is no input line between \verb+\equation+ and this paragraph, there is no paragraph indent for this paragraph.) \section{Cross-referencing} REV\TeX{} will automatically number sections, equations, figure captions, and tables. In order to reference them in text, use the \verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a particular page, use the \verb+\pageref{#1}+ command. The \verb+\label{#1}+ should appear in a section heading, within an equation, or in a table or figure caption. The \verb+\ref{#1}+ command is used in the text where the citation is to be displayed. Some examples: Section~\ref{sec:level1} on page~\pageref{sec:level1}, Table~\ref{tab:table1},% \begin{table} \caption{\label{tab:table1}This is a narrow table which fits into a text column when using \texttt{twocolumn} formatting. Note that REV\TeX~4 adjusts the intercolumn spacing so that the table fills the entire width of the column. Table captions are numbered automatically. This table illustrates left-aligned, centered, and right-aligned columns. } \begin{ruledtabular} \begin{tabular}{lcr} Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\ \hline 1 & 2 & 3\\ 10 & 20 & 30\\ 100 & 200 & 300\\ \end{tabular} \end{ruledtabular} \end{table} and Fig.~\ref{fig:epsart}. \section{Figures and Tables} Figures and tables are typically ``floats''; \LaTeX\ determines their final position via placement rules. \LaTeX\ isn't always successful in automatically placing floats where you wish them. Figures are marked up with the \texttt{figure} environment, the content of which imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+). 
The argument of the latter command should itself contain a \verb+\label+ command if you wish to refer to your figure with \verb+\ref+. Import your image using either the \texttt{graphics} or \texttt{graphix} packages. These packages both define the \verb+\includegraphics{#1}+ command, but they differ in the optional arguments for specifying the orientation, scaling, and translation of the figure. Fig.~\ref{fig:epsart}% \begin{figure} \includegraphics{fig_1 \caption{\label{fig:epsart} A figure caption. The figure captions are automatically numbered.} \end{figure} is small enough to fit in a single column, while Fig.~\ref{fig:wide}% \begin{figure*} \includegraphics{fig_2 \caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide figure, spanning the page in \texttt{twocolumn} formatting.} \end{figure*} is too wide for a single column, so instead the \texttt{figure*} environment has been used. The analog of the \texttt{figure} environment is \texttt{table}, which uses the same \verb+\caption+ command. However, you should type your caption command first within the \texttt{table}, instead of last as you did for \texttt{figure}. The heart of any table is the \texttt{tabular} environment, which represents the table content as a (vertical) sequence of table rows, each containing a (horizontal) sequence of table cells. Cells are separated by the \verb+&+ character; the row terminates with \verb+\\+. The required argument for the \texttt{tabular} environment specifies how data are displayed in each of the columns. For instance, a column may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+), or aligned on a decimal point (\verb+d+). (Table~\ref{tab:table4}% \begin{table} \caption{\label{tab:table4}Numbers in columns Three--Five have been aligned by using the ``d'' column specifier (requires the \texttt{dcolumn} package). Non-numeric entries (those entries without a ``.'') in a ``d'' column are aligned on the decimal point. Use the ``D'' specifier for more complex layouts. } \begin{ruledtabular} \begin{tabular}{ccddd} One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\ \hline one&two&\mbox{three}&\mbox{four}&\mbox{five}\\ He&2& 2.77234 & 45672. & 0.69 \\ C\footnote{Some tables require footnotes.} &C\footnote{Some tables need more than one footnote.} & 12537.64 & 37.66345 & 86.37 \\ \end{tabular} \end{ruledtabular} \end{table} illustrates the use of decimal column alignment.) Extra column-spacing may be be specified as well, although REV\TeX~4 sets this spacing so that the columns fill the width of the table. Horizontal rules are typeset using the \verb+\hline+ command. The doubled (or Scotch) rules that appear at the top and bottom of a table can be achieved by enclosing the \texttt{tabular} environment within a \texttt{ruledtabular} environment. Rows whose columns span multiple columns can be typeset using \LaTeX's \verb+\multicolumn{#1}{#2}{#3}+ command (for example, see the first row of Table~\ref{tab:table3}).% \begin{table*} \caption{\label{tab:table3}This is a wide table that spans the page width in \texttt{twocolumn} mode. It is formatted using the \texttt{table*} environment. 
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the page width in \texttt{twocolumn} mode. It is formatted using the \texttt{table*} environment. It also demonstrates the use of \textbackslash\texttt{multicolumn} in rows with entries that span more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative &2nd alternative\\
\hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.} &$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page width in \texttt{twocolumn} mode. It is supposed to be set on the full width of the page, just as the caption does.} &$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The tables in this document illustrate various effects. Tables that fit in a narrow column are contained in a \texttt{table} environment. Table~\ref{tab:table3} is a wide table, therefore set with the \texttt{table*} environment. Lengthy tables may need to break across pages. A simple way to allow this is to specify the \verb+[H]+ float placement on the \texttt{table} or \texttt{table*} environment. Alternatively, using the standard \LaTeXe\ package \texttt{longtable} gives more control over how tables break and allows headers and footers to be specified for each page of the table. An example of the use of \texttt{longtable} can be found in the file \texttt{summary.tex} that is included with the REV\TeX~4 distribution.

There are two methods for setting footnotes within a table (these footnotes will be displayed directly below the table rather than at the bottom of the page or in the bibliography). The easiest and preferred method is just to use the \verb+\footnote{#1}+ command. This will automatically enumerate the footnotes with lowercase letters (a,~b,~c, \dots). However, it is sometimes necessary to have multiple entries in the table share the same footnote. In this case, create the footnotes using \verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+. \texttt{\#1} is a numeric value. Each time the same value for \texttt{\#1} is used, the same mark is produced in the table. The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular} environment.
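A minimal sketch of this shared-footnote pattern (the entries and footnote texts below are placeholders) is:
\begin{verbatim}
\begin{table}
\caption{\label{tab:fnsketch}Entries sharing a footnote.}
\begin{ruledtabular}
\begin{tabular}{lc}
Element&Value\\
\hline
A\footnotemark[1]&1.0\\
B\footnotemark[1]&2.0\\
C\footnotemark[2]&3.0\\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Shared by the first two rows.}
\footnotetext[2]{Used only once.}
\end{table}
\end{verbatim}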
Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and \ref{tab:table2}%
\begin{table}
\caption{\label{tab:table2}A table with more columns still fits properly in a column. Note that several entries share the same footnote. Inspect the \LaTeX\ input for this table to see exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$& &$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1] & 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2] & 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3] & 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4] & 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2] & 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5] & 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5] & 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3] & 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4] & 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5] & 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{And so on.}
\end{table}
for an illustration. All AAPM journals require that the initial citation of figures or tables be in numerical order. \LaTeX's automatic numbering of floats is your friend here: just put each \texttt{figure} environment immediately following its first reference (\verb+\ref+), as we have done in this example file.

\begin{acknowledgments}
We wish to acknowledge the support of the author community in using REV\TeX{}, offering suggestions and encouragement, testing new versions, \dots.
\end{acknowledgments}
Inspect the \LaTeX\ input for this table to see exactly how it is done.} \begin{ruledtabular} \begin{tabular}{cccccccc} &$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$& &$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\ \hline Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1] & 0.680 & 1.870 & 3.700 \\ Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2] & 0.450 & 1.930 & 3.760 \\ Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3] & 0.750 & 2.170 & 3.560 \\ Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4] & 0.900 & 2.370 & 3.720 \\ Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2] & 0.380 & 1.730 & 2.830 \\ Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5] & 0.760 & 2.110 & 3.120 \\ Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5] & 1.120 & 2.620 & 3.480 \\ Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3] & 1.330 & 2.800 & 3.590 \\ Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4] & 1.420 & 3.030 & 3.740 \\ In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5] & 0.960 & 2.460 & 3.780 \\ Tl& 0.480 & 18.90 & 3.550 & & & & \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.} \footnotetext[2]{Here's the second.} \footnotetext[3]{Here's the third.} \footnotetext[4]{Here's the fourth.} \footnotetext[5]{And etc.} \end{table} for an illustration. All AIP journals require that the initial citation of figures or tables be in numerical order. \LaTeX's automatic numbering of floats is your friend here: just put each \texttt{figure} environment immediately following its first reference (\verb+\ref+), as we have done in this example file. \begin{acknowledgments} We wish to acknowledge the support of the author community in using REV\TeX{}, offering suggestions and encouragement, testing new versions, \dots. \end{acknowledgments} \section{Introduction} Kohn--Sham density-functional theory (KS-DFT)~\cite{KStheory_1965} has become over the last two decades the method of choice for computational chemistry and physics studies, essentially because it often provides a relatively accurate description of the electronic structure of large molecular or extended systems at a low computational cost. The major simplification of the electronic structure problem in KS-DFT lies in the fact that the ground-state energy is evaluated, in principle exactly, from a non-interacting single-configuration wave function, which is simply referred to as the KS determinant. The latter is obviously not the exact solution to the Schr\"{o}dinger equation. However, its density matches the exact interacting ground-state density, so that the Hartree-exchange-correlation (Hxc) energy of the physical system, which is induced by the electronic repulsion, can be recovered from an appropriate (in principle exact and universal) Hxc density functional. Despite the success of KS-DFT, standard density-functional approximations still fail in describing strongly correlated electrons. To overcome this issue, various strategies have been explored and improved over the years, both in condensed matter physics~\cite{Anisimov_1997_lda_plus_U,Anisimov_1997,PRB98_Lichtenstein_LDA_plus_DMFT,kotliar2006reviewDMFT,Haule_2ble_counting_DMFT-DFT_2015,requist2019model} and quantum chemistry~\cite{CR18_Truhlar_Multiconf_DFTs}. Note that, in the latter case, in-principle-exact multi-determinantal extensions of DFT based on the adiabatic connection formalism have been developed~\cite{savinbook,toulouse2004long,sharkas2011double,fromager2015exact}. In these approaches, the KS system is only referred to in the design of density-functional approximations. 
In practice, a single (partially-interacting) many-body wave function is calculated self-consistently and the complement to the partial interaction energy is described with an appropriate density functional (which differs from the conventional xc one). In other words, there is no KS construction in the actual calculation. Some of these concepts have been reused in the study of model lattice Hamiltonians~\cite{fromager2015exact,senjean2018site}. A similar strategy will be adopted in the present work, with an important difference though. The {\it reduced-in-size} correlated density-functional many-body wave function that we will introduce will be extracted from a quantum embedding theory where the KS determinant of the full system is a key ingredient that must be evaluated explicitly.\\ Quantum embedding theory~\cite{IJQC20_Adam-Michele_embedding_special_issue} is at first sight a completely different approach to the strong electron correlation problem. Interestingly, some of its implementations, like the \textit{density matrix embedding theory} (DMET)~\cite{knizia2012density,knizia2013density,tsuchimochi2015density,welborn2016bootstrap,sun2016quantum,wouters2016practical,wu2019projected,JCTC20_Chan_ab-initio_DMET,faulstich2022_vrep}, rely on a reference Slater determinant that is computed for the full system. This is also the case in practical embedding calculations based on the exact factorization formalism~\cite{PRL20_Lacombe_embedding_via_EF,PRL21_Requist_EF_electrons}. Unlike the well-established \textit{dynamical mean-field theory} (DMFT)~\cite{georges1992hubbard,georges1996limitdimension,kotliar2004strongly,held2007electronic,zgid2011DMFTquantum}, which relies on the one-electron Green's function, DMET is a static theory of ground electronic states. Most importantly, the bath, in which a fragment of the original system (referred to as impurity when it is a single localized orbital) is embedded, is drastically reduced in size in DMET. As a result, the ``impurity+bath'' embedding cluster can be accurately (if not exactly) described with wave function-based quantum chemical methods. The authors have shown recently that the Schmidt decomposition of the reference Slater determinant, which is central in DMET, can be recast into a (one-electron reduced) density-matrix functional Householder transformation~\cite{sekaran2021}, which is much simpler to implement. The approach, in which the bath orbitals can in principle be correlated directly through the density matrix~\cite{sekaran2021}, is referred to as \textit{ Householder~transformed~density matrix~functional~embedding~theory} (Ht-DMFET). Since the seminal work of Knizia and Chan on DMET~\cite{knizia2012density}, various connections with DMFT and related approaches have been established~\cite{ayral2017dynamical,lee2019rotationally,fertitta2018rigorous,JCP19_Booth_Ew-DMET_hydrogen_chain, PRB21_Booth_effective_dynamics_static_embedding,PRX21_Lee_SlaveBoson_resp_functions-superconductivity}. Connections with DFT have been less explored, and only at the approximate level of theory. We can refer to the \textit{density embedding theory} (DET) of Bulik {\it et al.}~\cite{bulik2014density}, which is a simplified version of DMET where only the diagonal elements of the embedded density matrix are mapped onto the reference Slater determinant of the full system. 
More recently, Senjean~\cite{senjean2019projected} combined DFT for lattices~\cite{lima2003density,DFT_ModelHamiltonians} with DMET, and Mordovina {\it et al.}~\cite{mordovina2019self} (see also Ref.~\cite{Theophilou_2021}) proposed a {\it self-consistent density-functional embedding} (SDE), where the KS determinant is explicitly used as the reference wave function in the DMET algorithm.\\
In the present work, an in-principle-exact combination of KS-DFT with DMET is derived for the one-dimensional (1D) Hubbard lattice, as a proof of concept. For that purpose, we use the density-matrix functional Householder transformation introduced recently by the authors~\cite{sekaran2021}. On the basis of well-identified density-functional approximations, we propose and implement a {\it local potential functional embedding theory} (LPFET) where the Hxc potential is evaluated self-consistently in the lattice by ``learning'' from the embedding cluster at each iteration of the optimization process. LPFET can be seen as a flavor of KS-DFT where no density functional is actually used.\\
The paper is organized as follows. After a short introduction to the 1D Hubbard model in Sec.~\ref{subsec:1D_hub}, a detailed review of Ht-DMFET is presented in Sec.~\ref{subsec:review_Ht-DMFET}, for clarity and completeness. An exact density-functional reformulation of the theory is then proposed in Sec.~\ref{subsec:exact_dfe_dft}. The resulting approximate LPFET and its comparison with SDE are detailed in Secs.~\ref{sec:LPFET} and \ref{subsec:sde_comparison}, respectively. The LPFET algorithm is summarized in Sec.~\ref{sec:lpfet_algo}. Results obtained for a 1000-site Hubbard ring are presented and discussed in Sec.~\ref{sec:results}. The conclusion and perspectives are finally given in Sec.~\ref{sec:conclusion}. \section{Theory}\label{sec:theory} \subsection{One-dimensional Hubbard lattice}\label{subsec:1D_hub} By analogy with Ref.~\cite{sekaran2021}, various quantum embedding strategies will be discussed in the following within the simple but nontrivial uniform 1D Hubbard model. The corresponding lattice Hamiltonian (for an $L$-site ring) reads as \begin{eqnarray}\label{eq:1D_Hubbard_Hamilt} \hat{H}=\hat{T}+\hat{U}+v_{\rm ext}\hat{N}, \end{eqnarray} where the hopping operator (written in second quantization), \begin{eqnarray}\label{eq:hopping_operator} \hat{T}=-t\sum^{L-1}_{i=0}\sum_{\sigma=\uparrow,\downarrow}\left(\hat{c}^\dagger_{i\sigma}\hat{c}_{(i+1)\sigma} +\hat{c}_{(i+1)\sigma}^\dagger\hat{c}_{i\sigma} \right), \end{eqnarray} with parameter $t$, is the analog for lattices of the kinetic energy operator. For convenience, we will systematically use periodic boundary conditions, {\it{i.e.}}, $\hat{c}^\dagger_{L\sigma}\equiv \hat{c}^\dagger_{0\sigma}$. On-site repulsions only are taken into account in the two-electron repulsion operator $\hat{U}$, {\it{i.e.}}, \begin{eqnarray} \hat{U}=\sum^{L-1}_{i=0}\hat{U}_i, \end{eqnarray} where $\hat{U}_i=U\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}$, $U$ is the parameter that controls the strength of the interaction, and $\hat{n}_{i\sigma}=\hat{c}_{i\sigma}^\dagger\hat{c}_{i\sigma}$ is a site occupation operator for spin $\sigma$.
Since the lattice is uniform, the local external potential (which would correspond to the nuclear potential in a conventional quantum chemical calculation) operator is proportional to the electron counting operator [see the last term on the right-hand side of Eq.~(\ref{eq:1D_Hubbard_Hamilt})], \begin{eqnarray} \hat{N}=\sum^{L-1}_{i=0}\sum_{\sigma=\uparrow,\downarrow}\hat{n}_{i\sigma}. \end{eqnarray} The uniform value of the external potential can be rewritten as \begin{eqnarray}\label{eq:constant_ext_pot} v_{\rm ext}=-\mu, \end{eqnarray} where the chemical potential $\mu$ controls the number of electrons $N$ or, equivalently, the uniform density $n=N/L$ in the lattice. In this case, $\hat{H}$ is actually a (zero-temperature) grand canonical Hamiltonian. For convenience, we rewrite the hopping operator as follows, \begin{eqnarray}\label{eq:hopping_final_form_sumij} \hat{T}\equiv\sum^{L-1}_{i,j=0}\sum_{\sigma=\uparrow,\downarrow}t_{ij}\hat{c}^\dagger_{i\sigma}\hat{c}_{j\sigma}, \end{eqnarray} where \begin{eqnarray}\label{eq:hopping_matrix} t_{ij}=-t\left(\delta_{j(i+1)}+\delta_{i(j+1)}\right), \end{eqnarray} and $t_{(L-1)0}=t_{0(L-1)}=-t$. From now on the bounds in the summations over the full lattice will be dropped, for simplicity: \begin{eqnarray} \sum_i\equiv \sum^{L-1}_{i=0}. \end{eqnarray} Note that the quantum embedding strategies discussed in the present work can be extended to more general (quantum chemical, in particular) Hamiltonians~\cite{wouters2016practical}. For that purpose, the true {\it ab initio} Hamiltonian should be written in a localized molecular orbital basis, thus leading to the more general Hamiltonian expression, \begin{eqnarray} \hat{H}=\sum_{\sigma}\sum_{ij}h_{ij}\hat{c}^\dagger_{i\sigma}\hat{c}_{j\sigma}+\dfrac{1}{2}\sum_{\sigma,\sigma'}\sum_{ijkl}\langle ij\vert kl\rangle \hat{c}^\dagger_{i\sigma}\hat{c}^\dagger_{j\sigma'}\hat{c}_{l\sigma'}\hat{c}_{k\sigma}, \end{eqnarray} where $h_{ij}$ and $\langle ij\vert kl\rangle$ are the (kinetic and nuclear attraction) one-electron and two-electron repulsion integrals, respectively. Using a localized orbital basis allows for the decomposition of the molecule under study into fragments that can be embedded afterward~\cite{wouters2016practical}. In the following, we will work with the simpler Hamiltonian of Eq.~(\ref{eq:1D_Hubbard_Hamilt}), as a proof of concept.\\ \subsection{Review of Ht-DMFET}\label{subsec:review_Ht-DMFET} For the sake of clarity and completeness, a review of Ht-DMFET~\cite{sekaran2021} is presented in the following subsections. Various ingredients (operators and reduced quantities) that will be used later on in Sec.~\ref{subsec:exact_dfe_dft} in the derivation of a formally exact density-functional embedding theory (which is the main outcome of this work) are introduced. Real algebra will be used. For simplicity, we focus on the embedding of a single impurity. A multiple-impurity extension of the theory can be obtained from a block Householder transformation~\cite{sekaran2021,AML99_Rotella_Block_Householder_transf}. Unlike in the exact reformulation of the theory which is proposed in the following Sec.~\ref{subsec:exact_dfe_dft} and where the chemical potential $\mu$ controls the density of the uniform lattice, the total number of electrons will be {\it fixed} to the value $N$ in the present section. In other words, the uniform density is set to $n=N/L$ and $\mu$ is an arbitrary constant (that could be set to zero). 
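To make this setup concrete before turning to the embedding itself, the following minimal sketch (written in Python with NumPy; the helper names \texttt{hopping\_matrix} and \texttt{lattice\_density\_matrix} are ours and purely illustrative, and the sketch is not the implementation used for the results reported below) builds the hopping matrix of Eq.~(\ref{eq:hopping_matrix}) for a ring with periodic boundary conditions, together with the per-spin density matrix of the corresponding single-determinant ground state at fixed electron number $N$; these are the two ingredients on which the Householder construction reviewed below relies.
\begin{verbatim}
import numpy as np

def hopping_matrix(L, t=1.0):
    # Hopping matrix t_ij of the L-site ring (periodic boundary conditions).
    tmat = np.zeros((L, L))
    for i in range(L):
        tmat[i, (i + 1) % L] = -t
        tmat[(i + 1) % L, i] = -t
    return tmat

def lattice_density_matrix(tmat, N):
    # Per-spin one-electron density matrix gamma_ij of the non-interacting
    # (or KS) lattice determinant with the N/2 lowest orbitals occupied.
    eps, C = np.linalg.eigh(tmat)
    occ = C[:, :N // 2]
    return occ @ occ.T

# Example: L = 1000 sites and N = 498 electrons (closed shell), i.e. n = N/L.
L, t, N = 1000, 1.0, 498
gamma = lattice_density_matrix(hopping_matrix(L, t), N)
print(gamma[0, 0], N / (2 * L))   # gamma_00 = n/2 for the uniform ring
\end{verbatim}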
\subsubsection{Exact non-interacting embedding}\label{subsubsec:non_int_embedding} Let us first consider the particular case of a non-interacting ($U=0$) lattice for which Ht-DMFET is exact~\cite{sekaran2021}. As it will be applied later on (in Sec.~\ref{subsec:exact_dfe_dft}) to the auxiliary KS lattice, it is important to highlight the key features of the non-interacting embedding. Following Ref.~\cite{sekaran2021}, we label as $i=0$ one of the localized (lattice site in the present case) spin-orbital $\ket{\chi^\sigma_0}\equiv \hat{c}_{0\sigma}^\dagger\ket{\rm vac}$ [we denote $\ket{\rm vac}$ the vacuum state of second quantization] that, ultimately, will become the so-called {\it embedded impurity}. The ingredient that is central in Ht-DMFET is the (one-electron reduced) density matrix of the full system in the lattice representation, {\it{i.e.}}, \begin{eqnarray}\label{eq:1RDM_lattice} {\bm \gamma}^{\uparrow}={\bm \gamma}^{\downarrow}={\bm \gamma}\equiv\gamma_{ij}=\mel{\Phi}{\hat{c}_{i\sigma}^\dagger\hat{c}_{j\sigma}}{\Phi}, \end{eqnarray} where we restrict ourselves to closed-shell singlet ground states $\ket{\Phi}$, for simplicity. Note that \begin{eqnarray}\label{eq:gamma_00_filling} \gamma_{00}=\dfrac{n}{2}=\dfrac{N}{2L} \end{eqnarray} is the uniform lattice filling per spin. Since the full lattice will always be described with a single Slater determinant in the following, the density matrix ${\bm \gamma}$ will always be {\it idempotent}. The latter is used to construct the Householder unitary transformation which, once it has been applied to the one-electron lattice space, defines the so-called {\it bath} spin-orbital with which the impurity will ultimately be exclusively entangled. More explicitly, the Householder transformation matrix \begin{eqnarray}\label{eq:P_from_HH_vec} {\bm P}={\bm I}-2{\mathbf{v}}\bv^{\dagger}\equiv P_{ij}=\delta_{ij}-2{\rm v}_i{\rm v}_j, \end{eqnarray} where ${\bm I}$ is the identity matrix, is a functional of the density matrix, {\it{i.e.}}, \begin{eqnarray}\label{eq:P_functional_oneRDM} {\bm P}\equiv{\bm P}\left[{\bm \gamma}\right], \end{eqnarray} where the density-matrix-functional Householder vector components read as~\cite{sekaran2021} \begin{eqnarray}\label{eq:v0_zero} {\rm v}_0&=&0, \\ \label{eq:HH_vec_compt1} {\rm v}_1&=&\dfrac{\gamma_{10}-\tilde{\gamma}_{10}}{\sqrt{2\tilde{\gamma}_{10}\left(\tilde{\gamma}_{10}-\gamma_{10}\right)}}, \\ \label{eq:HH_vec_compt_i_larger2} {\rm v}_i&\underset{i\geq 2}{=}&\dfrac{\gamma_{i0}}{\sqrt{2\tilde{\gamma}_{10}\left(\tilde{\gamma}_{10}-\gamma_{10}\right)}}, \end{eqnarray} with \begin{eqnarray}\label{eq:gamma01_tilde} \tilde{\gamma}_{10}=-{\rm sgn}\left(\gamma_{10}\right)\sqrt{\sum_{j>0}\gamma^2_{j0}}, \end{eqnarray} and \begin{eqnarray}\label{eq:normalization_HH_vec} {{\mathbf{v}}}^\dagger{\mathbf{v}}=\sum_{i\geq 1}{\rm v}^2_i=1. \end{eqnarray} Note that, in the extreme case of a two-site lattice, the denominator in Eqs.~(\ref{eq:HH_vec_compt1}) and (\ref{eq:HH_vec_compt_i_larger2}) is still well defined and it does not vanish. Indeed, by construction [see Eq.~(\ref{eq:gamma01_tilde})], \begin{eqnarray} \tilde{\gamma}_{10}&\underset{\tiny\left\{\gamma_{j0}\overset{j>1}{=}0\right\}}{=}&-{\rm sgn}\left(\gamma_{10}\right)\abs{\gamma_{10}}=-\gamma_{10} \end{eqnarray} in this case, thus leading to $\tilde{\gamma}_{10}\left(\tilde{\gamma}_{10}-\gamma_{10}\right)=2\gamma_{10}^2>0$. 
Note also that ${\bm P}$ is hermitian and unitary, {\it{i.e.}}, ${\bm P}={\bm P}^\dagger$ and \begin{eqnarray}\label{eq:unitary_transf} {\bm P}^2={\bm P}{\bm P}^\dagger={\bm P}^\dagger{\bm P}={\bm I}. \end{eqnarray} The bath spin-orbital $\ket{\varphi^\sigma_{\rm bath}}$ is then constructed as follows in second quantization, \begin{eqnarray} \ket{\varphi^\sigma_{\rm bath}}:=\hat{d}_{1\sigma}^\dagger\ket{\rm vac}, \end{eqnarray} where, according to Eqs.~(\ref{eq:P_from_HH_vec}) and (\ref{eq:v0_zero}), \begin{eqnarray}\label{eq:bath_expansion} \begin{split} \hat{d}_{1\sigma}^\dagger&=\sum_{k}P_{1k}\hat{c}_{k\sigma}^\dagger \\ &=\hat{c}_{1\sigma}^\dagger-2{\rm v}_1\sum_{k\geq 1}{\rm v}_k\hat{c}_{k\sigma}^\dagger. \end{split} \end{eqnarray} More generally, the entire lattice space can be Householder-transformed as follows, \begin{eqnarray}\label{eq:Householder_creation_ops} \hat{d}_{i\sigma}^\dagger\underset{0\leq i\leq L-1}{=}\sum_k P_{ik}\hat{c}_{k\sigma}^\dagger, \end{eqnarray} and the back transformation simply reads as \begin{eqnarray}\label{eq:from_HH_to_lattice_rep} \sum_i P_{li}\hat{d}_{i\sigma}^\dagger=\sum_{ik}P_{li}P_{ik}\hat{c}_{k\sigma}^\dagger=\sum_{k}\left[{\bm P}^2\right]_{lk}\hat{c}_{k\sigma}^\dagger=\hat{c}_{l\sigma}^\dagger. \end{eqnarray} We stress that the impurity is invariant under the Householder transformation, {\it{i.e.}}, \begin{eqnarray}\label{eq:imp_invariant_underHH_transf} \hat{d}_{0\sigma}^\dagger=\hat{c}_{0\sigma}^\dagger, \end{eqnarray} and, according to the Appendix, the Householder-transformed density matrix elements involving the impurity can be simplified as follows, \begin{eqnarray}\label{eq:simplified_tilde_gamma0j} \mel{\Phi}{\hat{d}_{j\sigma}^\dagger\hat{d}_{0\sigma}}{\Phi}=\gamma_{j0}-{\rm v}_j\sqrt{2\tilde{\gamma}_{10}\left(\tilde{\gamma}_{10}-\gamma_{10}\right)}. \end{eqnarray} As readily seen from Eqs.~(\ref{eq:HH_vec_compt1}) and (\ref{eq:simplified_tilde_gamma0j}), the matrix element $\tilde{\gamma}_{10}$ introduced in Eq.~(\ref{eq:gamma01_tilde}) is in fact the bath-impurity element of the density matrix in the Householder representation: \begin{eqnarray} \mel{\Phi}{\hat{d}_{1\sigma}^\dagger\hat{d}_{0\sigma}}{\Phi}=\tilde{\gamma}_{10}. \end{eqnarray} If we denote \begin{eqnarray}\label{eq:1RDM_with_d_ops} \tilde{\bm \gamma}\equiv \tilde{\gamma}_{ij}= \mel{\Phi}{\hat{d}_{i\sigma}^\dagger\hat{d}_{j\sigma}}{\Phi}=\sum_{kl}P_{ik}\gamma_{kl}P_{lj}\equiv {\bm P}{\bm \gamma}{\bm P} \end{eqnarray} the full Householder-transformed density matrix, we do readily see from Eqs.~(\ref{eq:HH_vec_compt_i_larger2}) and (\ref{eq:simplified_tilde_gamma0j}) that the impurity is exclusively entangled with the bath, {\it{i.e.}}, \begin{eqnarray}\label{eq:imp_disconnected_from_Henv} \tilde{\gamma}_{i0}\underset{i\geq 2}{=}0, \end{eqnarray} by construction~\cite{sekaran2021}. 
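As a purely illustrative numerical check of this construction (a minimal sketch assuming NumPy and reusing the \texttt{gamma} array of the previous sketch; the function name \texttt{householder} is ours), one can build ${\mathbf{v}}$ and ${\bm P}$ from the first column of ${\bm \gamma}$ according to Eqs.~(\ref{eq:P_from_HH_vec}) and (\ref{eq:v0_zero})--(\ref{eq:normalization_HH_vec}), and verify that the transformed density matrix ${\bm P}{\bm \gamma}{\bm P}$ couples the impurity to the bath orbital only:
\begin{verbatim}
import numpy as np

def householder(gamma):
    # Householder vector v and matrix P = I - 2 v v^T built from the first
    # column of the per-spin density matrix gamma (site 0 = impurity).
    L = gamma.shape[0]
    col = gamma[:, 0]
    gt10 = -np.sign(col[1]) * np.sqrt(np.sum(col[1:] ** 2))
    denom = np.sqrt(2.0 * gt10 * (gt10 - col[1]))
    v = np.zeros(L)
    v[1] = (col[1] - gt10) / denom
    v[2:] = col[2:] / denom
    return v, np.eye(L) - 2.0 * np.outer(v, v)

v, P = householder(gamma)                        # gamma from the previous sketch
print(np.isclose(v @ v, 1.0))                    # normalization of the Householder vector
print(np.allclose((P @ gamma @ P)[2:, 0], 0.0))  # impurity entangled with the bath only
\end{verbatim}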
As $\tilde{\bm \gamma}$ inherits the idempotency of ${\bm \gamma}$ through the unitary Householder transformation, we deduce from Eq.~(\ref{eq:imp_disconnected_from_Henv}) that \begin{eqnarray} \tilde{\gamma}_{i0}=\left[\tilde{\bm \gamma}^2\right]_{i0}=\sum_j\tilde{\gamma}_{ij}\tilde{\gamma}_{j0}=\tilde{\gamma}_{i0}\tilde{\gamma}_{00}+\tilde{\gamma}_{i1}\tilde{\gamma}_{10}, \end{eqnarray} or, equivalently, \begin{eqnarray} \tilde{\gamma}_{i1}=\dfrac{\tilde{\gamma}_{i0}\left(1-\tilde{\gamma}_{00}\right)}{\tilde{\gamma}_{10}}, \end{eqnarray} thus leading to [see Eq.~(\ref{eq:imp_disconnected_from_Henv})] \begin{eqnarray}\label{eq:bath_entangled_with_imp} \tilde{\gamma}_{i1}\underset{i\geq 2}{=}0, \end{eqnarray} and \begin{eqnarray}\label{eq:2e_in_cluster} \tilde{\gamma}_{00}+\tilde{\gamma}_{11}=1. \end{eqnarray} Eqs.~(\ref{eq:bath_entangled_with_imp}) and (\ref{eq:2e_in_cluster}) simply indicate that, by construction~\cite{sekaran2021}, the bath is itself entangled exclusively with the impurity, and the Householder ``impurity+bath'' cluster, which is disconnected from its environment, contains exactly two electrons (one per spin). Therefore, the Householder cluster sector of the density matrix can be described exactly by a {\it two-electron} Slater determinant $\Phi^{\mathcal{C}}$: \begin{eqnarray}\label{eq:cluster_sector_Psi_representable} \tilde{\gamma}_{ij}\underset{0\leq i,j\leq 1}{=}\mel{\Phi^{\mathcal{C}}}{\hat{d}_{i\sigma}^\dagger\hat{d}_{j\sigma}}{\Phi^{\mathcal{C}}}. \end{eqnarray} Note that, in the Householder representation, the lattice ground-state determinant reads as $\Phi\equiv \Phi^{\mathcal{C}}\Phi_{\rm core}$, where the cluster's determinant $\Phi^{\mathcal{C}}$ is disentangled from the core one $\Phi_{\rm core}$. Once the cluster's block of the density matrix has been diagonalized, we obtain the sole occupied orbital that overlaps with the impurity, exactly like in DMET~\cite{wouters2016practical}. In other words, for non-interacting (or mean-field-like descriptions of) electrons, the Ht-DMFET construction of the bath is equivalent (although simpler) to that of DMET. We refer the reader to Ref.~\cite{sekaran2021} for a more detailed comparison of the two approaches.\\ \subsubsection{Non-interacting embedding Hamiltonian} As the Householder cluster is strictly disconnected from its environment in the non-interacting case, it is exactly described by the two-electron ground state $\ket{\Phi^{\mathcal{C}}}$ of the Householder-transformed hopping operator (which we refer to as the kinetic energy operator from now on, like in DFT for lattices~\cite{DFT_ModelHamiltonians,senjean2018site}) projected onto the cluster~\cite{sekaran2021}, {\it{i.e.}}, \begin{eqnarray}\label{eq:non-int_cluster_SE} \hat{\mathcal{T}}^{\mathcal{C}}\ket{\Phi^{\mathcal{C}}}=\mathcal{E}_{\rm s}^{\mathcal{C}}\ket{\Phi^{\mathcal{C}}}, \end{eqnarray} where, according to Eqs.~(\ref{eq:hopping_final_form_sumij}) and (\ref{eq:from_HH_to_lattice_rep}), \begin{eqnarray} \hat{\mathcal{T}}^{\mathcal{C}}=\sum_{ij}\sum_{\sigma=\uparrow,\downarrow}t_{ij} \sum^1_{k,l=0}P_{ik}P_{jl}\hat{d}_{k\sigma}^\dagger\hat{d}_{l\sigma}.
\end{eqnarray} For convenience, we will separate in $\hat{\mathcal{T}}^{\mathcal{C}}$ the physical per-site kinetic energy operator [see Eq.~(\ref{eq:hopping_operator})], \begin{eqnarray} \hat{t}_{01}=-t\sum_{\sigma=\uparrow,\downarrow}\left(\hat{c}^\dagger_{0\sigma}\hat{c}_{1\sigma} +\hat{c}_{1\sigma}^\dagger\hat{c}_{0\sigma} \right), \end{eqnarray} from the correction induced (within the cluster) by the Householder transformation: \begin{eqnarray}\label{eq:hc_t01_plus_corr} \hat{\tau}^{\mathcal{C}}= \hat{\mathcal{T}}^{\mathcal{C}}-\hat{t}_{01}. \end{eqnarray} Note that, since $t_{00}=0$, $\hat{\tau}^{\mathcal{C}}$ can be expressed more explicitly as follows, \begin{eqnarray} \begin{split} \hat{\tau}^{\mathcal{C}} &=\sum_{\sigma=\uparrow,\downarrow}\left(\sum_{ij}P_{i1}P_{j0}t_{ij}\right)\left[\hat{d}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{d}_{0\sigma}\right]\\ &\quad +\sum_{\sigma=\uparrow,\downarrow}\left(\sum_{ij}P_{i1}P_{j1}t_{ij}\right)\hat{d}_{1\sigma}^\dagger\hat{d}_{1\sigma}-\hat{t}_{01} \\ &=\sum_{\sigma=\uparrow,\downarrow}\left(\sum_iP_{i1}t_{i0}\right)\left[\hat{c}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad +\sum_{\sigma=\uparrow,\downarrow}\left(\sum_{ij}P_{i1}P_{j1}t_{ij}\right)\hat{d}_{1\sigma}^\dagger\hat{d}_{1\sigma}-\hat{t}_{01} \\ &=\sum_{\sigma=\uparrow,\downarrow}t_{10}\left[\hat{c}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad -2{\rm v}_1\sum_{\sigma=\uparrow,\downarrow}\left(\sum_i{\rm v}_it_{i0}\right)\left[\hat{c}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad +\sum_{\sigma=\uparrow,\downarrow}\left(\sum_{ij}P_{i1}P_{j1}t_{ij}\right)\hat{d}_{1\sigma}^\dagger\hat{d}_{1\sigma}-\hat{t}_{01}, \end{split} \end{eqnarray} thus leading to \begin{eqnarray}\label{eq:simplified_tauC} \begin{split} \hat{\tau}^{\mathcal{C}} &=2t{\rm v}_1\sum_{\sigma=\uparrow,\downarrow}\sum_{k\geq 1}{\rm v}_k\left[\hat{c}_{0\sigma}^\dagger\hat{c}_{k\sigma}+\hat{c}_{k\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad -2{\rm v}_1\sum_{\sigma=\uparrow,\downarrow}\left(\sum_i{\rm v}_it_{i0}\right)\left[\hat{c}_{0\sigma}^\dagger\hat{d}_{1\sigma}+\hat{d}_{1\sigma}^\dagger\hat{c}_{0\sigma}\right] \\ &\quad+4\left(\sum_{ij}{\rm v}_i{\rm v}_j\left({\rm v}_1^2-\delta_{j1}\right)t_{ij}\right)\sum_{\sigma=\uparrow,\downarrow}\hat{d}_{1\sigma}^\dagger\hat{d}_{1\sigma}, \end{split} \end{eqnarray} where we used Eqs.~(\ref{eq:P_from_HH_vec}) and (\ref{eq:bath_expansion}), as well as the fact that $t_{11}=0$ and $t_{10}=-t$. Note that, when no Householder transformation is performed ({\it{i.e.}}, when ${\rm v}_i=0$ for $0\leq i\leq L-1$), the bath site simply corresponds to the nearest neighbor ($i=1$) of the impurity in the lattice [see Eq.~(\ref{eq:bath_expansion})] and, as readily seen from Eqs.~(\ref{eq:hc_t01_plus_corr}) and (\ref{eq:simplified_tauC}), the non-interacting cluster's Hamiltonian $\hat{\mathcal{T}}^{\mathcal{C}}$ reduces to $\hat{t}_{01}$.\\ Unlike in the interacting case, which is discussed in Sec.~\ref{sec:approx_int_embedding}, it is unnecessary to introduce an additional potential on the embedded impurity in order to ensure that it reproduces the correct lattice filling. 
Indeed, according to Eqs.~(\ref{eq:gamma_00_filling}), (\ref{eq:v0_zero}), (\ref{eq:simplified_tilde_gamma0j}), (\ref{eq:1RDM_with_d_ops}), and (\ref{eq:cluster_sector_Psi_representable}), \begin{eqnarray}\label{eq:occ_embedded_imp_lattice_equal} \mel{\Phi^{\mathcal{C}}}{\hat{c}_{0\sigma}^\dagger\hat{c}_{0\sigma}}{\Phi^{\mathcal{C}}}=\mel{\Phi^{\mathcal{C}}}{\hat{d}_{0\sigma}^\dagger\hat{d}_{0\sigma}}{\Phi^{\mathcal{C}}}=n/2. \end{eqnarray} This constraint is automatically fulfilled when Householder transforming the kinetic energy operator $\hat{T}$ of the full lattice, thanks to the local potential contribution on the bath [see the last term on the right-hand side of Eq.~(\ref{eq:simplified_tauC})]. Interestingly, the true (non-interacting in this case) per-site energy of the lattice can be determined solely from $\Phi^{\mathcal{C}}$. Indeed, according to Eq.~(\ref{eq:1RDM_lattice}), the per-site kinetic energy can be evaluated from the lattice ground-state wave function $\Phi$ as follows, \begin{eqnarray}\label{eq:true_per-site_ener_lattice} \mel{\Phi}{\hat{t}_{01}}{\Phi}=-4t\gamma_{10}. \end{eqnarray} When rewritten in the Householder representation, Eq.~(\ref{eq:true_per-site_ener_lattice}) gives [see Eqs.~(\ref{eq:from_HH_to_lattice_rep}), (\ref{eq:imp_disconnected_from_Henv}), and (\ref{eq:cluster_sector_Psi_representable})] \begin{eqnarray}\label{eq:ener_from_lattice_to_cluster} \begin{split} \mel{\Phi}{\hat{t}_{01}}{\Phi} &=-4t\sum_iP_{1i}\tilde{\gamma}_{i0} \\ &=-4t\sum_{0\leq i\leq 1}P_{1i}\tilde{\gamma}_{i0} \\ &=-4t\sum_{0\leq i\leq 1}P_{1i}\mel{\Phi^{\mathcal{C}}}{\hat{d}_{i\sigma}^\dagger\hat{d}_{0\sigma}}{\Phi^{\mathcal{C}}} \\ &=-4t\sum_{i}P_{1i}\mel{\Phi^{\mathcal{C}}}{\hat{d}_{i\sigma}^\dagger\hat{c}_{0\sigma}}{\Phi^{\mathcal{C}}}, \end{split} \end{eqnarray} where we used Eq.~(\ref{eq:imp_invariant_underHH_transf}) and the fact that $\hat{d}_{i\sigma}\ket{\Phi^{\mathcal{C}}}\overset{i>1}{=}0$, since $\Phi^{\mathcal{C}}$ is constructed within the cluster. We finally recover from Eq.~(\ref{eq:ener_from_lattice_to_cluster}) the following equality~\cite{sekaran2021}, \begin{eqnarray}\label{eq:per_site_kin_ener_from_cluster} \begin{split} \mel{\Phi}{\hat{t}_{01}}{\Phi}&=-4t\mel{\Phi^{\mathcal{C}}}{\hat{c}_{1\sigma}^\dagger\hat{c}_{0\sigma}}{\Phi^{\mathcal{C}}} \\ &=\mel{\Phi^{\mathcal{C}}}{\hat{t}_{01}}{\Phi^{\mathcal{C}}}, \end{split}\end{eqnarray} which drastically (and exactly) simplifies the evaluation of non-interacting energies for lattices. \subsubsection{Approximate interacting embedding}\label{sec:approx_int_embedding} The simplest (approximate) extension of Ht-DMFET to interacting electrons consists in introducing the on-impurity-site two-electron repulsion operator $\hat{U}_0$ into the non-interacting Householder cluster's Hamiltonian of Eq.~(\ref{eq:non-int_cluster_SE}), by analogy with DMET~\cite{knizia2012density,sekaran2021}. In such a (standard) scheme, the interaction is treated {\it on top} of the non-interacting embedding.
Unlike in the non-interacting case, it is necessary to introduce a chemical potential $\tilde{\mu}^{\rm imp}$ on the embedded impurity in order to ensure that it reproduces the correct lattice filling $N/L$~\cite{sekaran2021}, {\it{i.e.}}, \begin{eqnarray} \expval{\hat{n}_0}_{\Psi^{\mathcal{C}}}=N/L, \end{eqnarray} where the two-electron cluster's ground-state wave function $\Psi^{\mathcal{C}}$ fulfills the following interacting Schr\"{o}dinger equation: \begin{eqnarray}\label{eq:int_cluster_SE} \left(\hat{\mathcal{T}}^{\mathcal{C}}+\hat{U}_0-\tilde{\mu}^{\rm imp}\hat{n}_0\right)\ket{\Psi^{\mathcal{C}}}=\mathcal{E}^{\mathcal{C}}\ket{\Psi^{\mathcal{C}}}. \end{eqnarray} The physical per-site energy (from which we remove the chemical potential contribution) is then evaluated as follows: \begin{eqnarray}\label{eq:per_site_ener_HtDMFET} \left(E+\mu N\right)/{L}\underset{\rm{Ht-DMFET}}{\approx}\mel{\Psi^{\mathcal{C}}}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}}. \end{eqnarray} Let us stress that, in Ht-DMFET, the cluster is designed from a single determinantal (non-interacting in the present case) lattice wave function, like in regular DMET calculations~\cite{wouters2016practical}. In other words, the Householder transformation is constructed from an idempotent density matrix. Moreover, the interacting cluster is described as a {\it closed} (two-electron) subsystem. As shown for small Hubbard rings, the exact interacting cluster is in principle an open subsystem~\cite{sekaran2021}. It rigorously contains two electrons only at half filling, as a consequence of the hole-particle symmetry of the Hubbard lattice Hamiltonian~\cite{sekaran2021}.\\ Note finally that, if we Householder transform the two-electron repulsion operator $\hat{U}$ of the full lattice, one can in principle take into account its complete projection onto the cluster. It means that the interaction on the bath site could be added to the Hamiltonian in Eq.~(\ref{eq:int_cluster_SE}). For simplicity, we will focus in the following on the (so-called) non-interacting bath formulation of the theory, which is described by Eq.~(\ref{eq:int_cluster_SE}). Let us finally mention that, in the present single-impurity embedding, DMET, DET, and Ht-DMFET are equivalent~\cite{sekaran2021}. \subsection{Exact density-functional embedding}\label{subsec:exact_dfe_dft} We will show in the following that, once it has been merged with KS-DFT, Ht-DMFET can be made formally exact. For clarity, we start with reviewing briefly KS-DFT for lattice Hamiltonians in Sec.~\ref{subsubsec:KS-DFT_lattices}. A multi-determinantal extension of the theory based on the interacting Householder cluster's wave function is then proposed in Sec.~\ref{subsubsec:exact_DFE}. \subsubsection{KS-DFT for uniform lattices}\label{subsubsec:KS-DFT_lattices} According to the Hohenberg--Kohn (HK) variational principle~\cite{hktheo}, which is applied in this work to lattice Hamiltonians~\cite{DFT_ModelHamiltonians}, the ground-state energy of the full lattice can be determined as follows, \begin{eqnarray}\label{eq:HK_var_principle_full_lattice} E=\min_n\left\{F(n)+v_{\rm ext}nL\right\}, \end{eqnarray} where the HK density functional reads as \begin{eqnarray} F(n)=\mel{\Psi(n)}{\hat{T}+\hat{U}}{\Psi(n)}, \end{eqnarray} and $\ket{\Psi(n)}$ is the lattice ground state with uniform density profile $n\overset{0\leq i< L}{=}\mel{\Psi(n)}{\hat{n}_i}{\Psi(n)}$. 
Strictly speaking, $F(n)$ is a function of the site occupation $n$, hence the name {\it site occupation functional theory} often given to DFT for lattices~\cite{DFT_ModelHamiltonians,senjean2018site}. Note that the ground-state energy $E$ is in fact a (zero-temperature) grand canonical energy since a change in uniform density $n$ induces a change in the number $N=nL$ of electrons. In the thermodynamic $N\rightarrow+\infty$ and $L\rightarrow+\infty$ limit, with $N/L$ fixed to $n$, one can in principle describe {\it continuous} variations in $n$ with a pure-state wave function ${\Psi(n)}$. The derivations that follow will be based on this assumption. If we introduce the per-site analog of the HK functional, \begin{eqnarray}\label{eq:per_site_HK_func} f(n)=F(n)/L=\mel{\Psi(n)}{\hat{t}_{01}+\hat{U}_0}{\Psi(n)}, \end{eqnarray} and use the notation of Eq.~(\ref{eq:constant_ext_pot}), then Eq.~(\ref{eq:HK_var_principle_full_lattice}) becomes \begin{eqnarray} E/L\equiv E(\mu)/L=\min_n\left\{f(n)-\mu n\right\}, \end{eqnarray} and the minimizing density $n(\mu)$ fulfills the following stationarity condition: \begin{eqnarray}\label{eq:true_chem_pot_from_DFT} \mu=\left.\dfrac{\partial f(n)}{\partial n}\right|_{n=n(\mu)}. \end{eqnarray} In the conventional KS formulation of DFT, the per-site HK functional is decomposed as follows, \begin{eqnarray}\label{eq:KS_decomp} f(n)=t_{\rm s}(n)+e_{\rm Hxc}(n), \end{eqnarray} where \begin{eqnarray}\label{eq:per_site_ts} t_{\rm s}(n)=\mel{\Phi(n)}{\hat{t}_{01}}{\Phi(n)}=\dfrac{1}{L}\mel{\Phi(n)}{\hat{T}}{\Phi(n)} \end{eqnarray} is the (per-site) analog for lattices of the non-interacting kinetic energy functional, and the Hxc density functional reads as~\cite{DFT_ModelHamiltonians} \begin{eqnarray} e_{\rm Hxc}(n)=\dfrac{U}{4}n^2+e_{\rm c}(n), \end{eqnarray} where $e_{\rm c}(n)$ is the exact (per-site) correlation energy functional of the interacting lattice. The (normalized) density-functional lattice KS determinant ${\Phi(n)}$ fulfills the (non-interacting) KS equation \begin{eqnarray} \left(\hat{T}-\mu_{\rm s}(n)\hat{N}\right)\ket{\Phi(n)}=\mathcal{E}_{\rm s}(n)\ket{\Phi(n)}, \end{eqnarray} so that [see Eq.~(\ref{eq:per_site_ts})] \begin{eqnarray} \begin{split} \dfrac{\partial t_{\rm s}(n)}{\partial n}&=\dfrac{2}{L}\mel{\frac{\partial \Phi(n)}{\partial n}}{\hat{T}}{\Phi(n)} \\ &=\dfrac{2\mu_{\rm s}(n)}{L}\mel{\frac{\partial \Phi(n)}{\partial n}}{\hat{N}}{\Phi(n)} \\ &=\dfrac{\mu_{\rm s}(n)}{L}\dfrac{\partial (nL)}{\partial n} \\ &=\mu_{\rm s}(n), \end{split} \end{eqnarray} since $\mel{\Phi(n)}{\hat{N}}{\Phi(n)}=N=nL$. Thus we recover from Eqs.~(\ref{eq:true_chem_pot_from_DFT}) and (\ref{eq:KS_decomp}) the well-known relation between the physical and KS chemical potentials: \begin{eqnarray}\label{eq:KS_pot_decomp} \mu_{\rm s}(n(\mu))\equiv \mu_{\rm s}=\mu-v_{\rm Hxc}, \end{eqnarray} where the density-functional Hxc potential reads as $v_{\rm Hxc}=v_{\rm Hxc}(n(\mu))$ with \begin{eqnarray}\label{eq:Hxc_pot_from_eHxc} v_{\rm Hxc}(n)= \dfrac{\partial e_{\rm Hxc}(n)}{\partial n}. \end{eqnarray} Note that the exact non-interacting density-functional chemical potential can be expressed analytically as follows~\cite{lima2003density}: \begin{eqnarray}\label{eq:KS_dens_func_chemical_potential} \mu_{\rm s}(n) = -2t\cos\left(\frac{\pi}{2}n\right). 
\end{eqnarray} Capelle and coworkers~\cite{lima2003density,DFT_ModelHamiltonians} have designed a local density approximation (LDA) to $e_{\rm Hxc}(n)$ on the basis of exact Bethe Ansatz (BA) solutions~\cite{lieb_absence_1968} (the functional is usually referred to as BALDA).\\ Unlike in conventional {\it ab initio} DFT, the Hxc functional of lattice Hamiltonians is not truly universal in the sense that it is universal for a given choice of (hopping) one-electron and two-electron repulsion operators. In other words, the Hxc functional does not depend on the (possibly non-uniform) one-electron local potential operator $\sum_i v_{{\rm ext},i}\hat{n}_i$, which is the analog for lattices of the nuclear potential in molecules, but it is $t$- and $U$-dependent and, in the present case, it should be designed specifically for the 1D Hubbard model. Even though BALDA can be extended to higher dimensions~\cite{Vilela_2019}, there is no general strategy for constructing (localized) orbital-occupation functional approximations, thus preventing direct applications to quantum chemistry~\cite{fromager2015exact}, for example. Turning ultimately to a potential-functional theory, as proposed in Sec.~\ref{sec:LPFET}, is appealing in this respect. With this change of paradigm, which is the second key result of the paper, the Hxc energy and potential become implicit functionals of the density, and they can be evaluated from a (few-electron) correlated wave function through a quantum embedding procedure. \subsubsection{Density-functional interacting cluster}\label{subsubsec:exact_DFE} We propose in this section an alternative formulation of DFT based on the interacting Householder cluster introduced in Sec.~\ref{sec:approx_int_embedding}. For that purpose, we consider the following {\it exact} decomposition, \begin{eqnarray}\label{eq:decomp_f_from_cluster} f(n)=f^{\mathcal{C}}(n)+\overline{e}_{\rm c}(n), \end{eqnarray} where the Householder cluster HK functional \begin{eqnarray}\label{eq:f_cluster_func} f^{\mathcal{C}}(n)=\mel{\Psi^{\mathcal{C}}(n)}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}(n)} \end{eqnarray} is evaluated from the two-electron cluster density-functional wave function ${\Psi^{\mathcal{C}}(n)}$, and $\overline{e}_{\rm c}(n)$ is the complementary correlation density functional that describes the correlation effects of the Householder cluster's environment on the embedded impurity. Note that, according to Sec.~\ref{sec:approx_int_embedding}, $\ket{ \Psi^{\mathcal{C}}(n) }$ fulfills the following Schr\"{o}dinger-like equation, \begin{eqnarray}\label{eq:dens_func_int_cluster_SE} \hat{\mathcal{H}}^{\mathcal{C}}(n)\ket{\Psi^{\mathcal{C}}(n)}=\mathcal{E}^{\mathcal{C}}(n)\ket{\Psi^{\mathcal{C}}(n)}, \end{eqnarray} where (we use the same notations as in Sec.~\ref{sec:approx_int_embedding}) \begin{eqnarray}\label{eq:dens_func_cluster_hamilt} \hat{\mathcal{H}}^{\mathcal{C}}(n)\equiv \hat{\mathcal{T}}^{\mathcal{C}}(n)+\hat{U}_0-\tilde{\mu}^{\rm imp}(n)\,\hat{n}_0 \end{eqnarray} and \begin{eqnarray}\label{eq:Tcluster_op_dens_func} \hat{\mathcal{T}}^{\mathcal{C}}(n)\equiv\hat{t}_{01}+\hat{\tau}^{\mathcal{C}}(n). 
\end{eqnarray} The dependence in $n$ of the (projected-onto-the-cluster) Householder-transformed kinetic energy operator $\hat{\mathcal{T}}^{\mathcal{C}}(n)$ comes from the fact that the KS lattice density matrix ${\bm \gamma}(n)\equiv\mel{\Phi(n)}{\hat{c}_{i\sigma}^\dagger\hat{c}_{j\sigma}}{\Phi(n)}$ (on which the Householder transformation is based) is, like the KS determinant $\Phi(n)\equiv \Phi^{\mathcal{C}}(n)\Phi_{\rm core}(n)$ of the lattice, a functional of the uniform density $n$. On the other hand, for a given uniform lattice density $n$, the local potential $-\tilde{\mu}^{\rm imp}(n)$ is adjusted on the embedded impurity such that the interacting cluster reproduces $n$, {\it{i.e.}}, \begin{eqnarray}\label{eq:dens_constraint_int_cluster} \mel{\Psi^{\mathcal{C}}(n)}{\hat{n}_0}{\Psi^{\mathcal{C}}(n)}=n. \end{eqnarray} Interestingly, on the basis of the two decompositions in Eqs.~(\ref{eq:KS_decomp}) and (\ref{eq:decomp_f_from_cluster}), and Eq.~(\ref{eq:f_cluster_func}), we can relate the exact Hxc functional to the density-functional Householder cluster as follows, \begin{eqnarray} e_{\rm Hxc}(n)&=\mel{\Psi^{\mathcal{C}}(n)}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}(n)} -t_{\rm s}(n)+\overline{e}_{\rm c}(n), \end{eqnarray} where, as shown in Eq.~(\ref{eq:per_site_kin_ener_from_cluster}), the per-site non-interacting kinetic energy can be determined exactly from the two-electron cluster's part $\Phi^{\mathcal{C}}(n)$ of the KS lattice determinant $\Phi(n)$, {\it{i.e.}}, \begin{eqnarray} t_{\rm s}(n)=\mel{\Phi^{\mathcal{C}}(n)}{\hat{t}_{01}}{\Phi^{\mathcal{C}}(n)}, \end{eqnarray} thus leading to the final expression \begin{eqnarray}\label{eq:final_exp_eHxc_from_cluster} \begin{split} e_{\rm Hxc}(n)&=\mel{\Psi^{\mathcal{C}}(n)}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}(n)} -\mel{\Phi^{\mathcal{C}}(n)}{\hat{t}_{01}}{\Phi^{\mathcal{C}}(n)}+\overline{e}_{\rm c}(n). \end{split} \end{eqnarray} Note that, according to Eqs.~(\ref{eq:non-int_cluster_SE}) and (\ref{eq:hc_t01_plus_corr}), $\Phi^{\mathcal{C}}(n)$ fulfills the KS-like equation \begin{eqnarray}\label{eq:dens_func_cluster_KS_eq} \left(\hat{t}_{01}+\hat{\tau}^{\mathcal{C}}(n)\right)\ket{\Phi^{\mathcal{C}}(n)}=\mathcal{E}_{\rm s}^{\mathcal{C}}(n)\ket{\Phi^{\mathcal{C}}(n)}, \end{eqnarray} where the Householder transformation ensures that $\mel{\Phi^{\mathcal{C}}(n)}{\hat{n}_0}{\Phi^{\mathcal{C}}(n)}=n$ [see Eq.~(\ref{eq:occ_embedded_imp_lattice_equal})].\\ We will now establish a clearer connection between the KS lattice system and the Householder cluster {\it via} the evaluation of the Hxc density-functional potential in the lattice. 
According to Eqs.~(\ref{eq:Hxc_pot_from_eHxc}) and (\ref{eq:final_exp_eHxc_from_cluster}), the latter can be expressed as follows, \begin{eqnarray} \begin{split} v_{\rm Hxc}(n)&=2\mel{\frac{\partial\Psi^{\mathcal{C}}(n)}{\partial n}}{\hat{t}_{01}+\hat{U}_0}{\Psi^{\mathcal{C}}(n)} \\ &\quad -2\mel{\frac{\partial\Phi^{\mathcal{C}}(n)}{\partial n}}{\hat{t}_{01}}{\Phi^{\mathcal{C}}(n)}+\frac{\partial \overline{e}_{\rm c}(n)}{\partial n}, \end{split} \end{eqnarray} or, equivalently [see Eqs.~(\ref{eq:dens_func_int_cluster_SE}), (\ref{eq:dens_constraint_int_cluster}), and (\ref{eq:dens_func_cluster_KS_eq})], \begin{eqnarray}\label{eq:Hxc_dens_pot_final} \begin{split} v_{\rm Hxc}(n)&= \tilde{\mu}^{\rm imp}(n) -2\mel{\frac{\partial\Psi^{\mathcal{C}}(n)}{\partial n}}{\hat{\tau}^{\mathcal{C}}(n)}{\Psi^{\mathcal{C}}(n)} \\ &\quad +2\mel{\frac{\partial\Phi^{\mathcal{C}}(n)}{\partial n}}{\hat{\tau}^{\mathcal{C}}(n)}{\Phi^{\mathcal{C}}(n)}+\frac{\partial \overline{e}_{\rm c}(n)}{\partial n}. \end{split} \end{eqnarray} If we introduce the following bi-functional of the density, \begin{eqnarray}\label{eq:kinetic_corr_bi_func} \begin{split} \tau^{\mathcal{C}}_{\rm c}(n,\nu)&=\mel{\Psi^{\mathcal{C}}(\nu)}{\hat{\tau}^{\mathcal{C}}(n)}{\Psi^{\mathcal{C}}(\nu)} -\mel{\Phi^{\mathcal{C}}(\nu)}{\hat{\tau}^{\mathcal{C}}(n)}{\Phi^{\mathcal{C}}(\nu)}, \end{split} \end{eqnarray} which can be interpreted as a kinetic correlation energy induced within the density-functional cluster by the Householder transformation and the interaction on the impurity, we obtain the final {\it exact} expression \begin{eqnarray}\label{eq:final_vHxc_exp_from_muimp} v_{\rm Hxc}(n)= \tilde{\mu}^{\rm imp}(n)-\left.\frac{\partial \tau^{\mathcal{C}}_{\rm c}(n,\nu)}{\partial \nu}\right|_{\nu=n}+\frac{\partial \overline{e}_{\rm c}(n)}{\partial n}, \end{eqnarray} which is the first key result of this paper.\\ Before turning Eq.~(\ref{eq:final_vHxc_exp_from_muimp}) into a practical self-consistent embedding method (see Sec.~\ref{sec:LPFET}), let us briefly discuss its physical meaning and connection with Ht-DMFET. As pointed out in Sec.~\ref{subsubsec:non_int_embedding}, the (density-functional) operator $\hat{\tau}^{\mathcal{C}}(n)$ is an auxiliary correction to the true per-site kinetic energy operator $\hat{t}_{01}$ which originates from the Householder-transformation-based embedding of the impurity. It is not physical and its impact on the impurity chemical potential $\tilde{\mu}^{\rm imp}(n)$, which is determined in the presence of $\hat{\tau}^{\mathcal{C}}(n)$ in the cluster's Hamiltonian [see Eqs.~(\ref{eq:dens_func_int_cluster_SE})-(\ref{eq:Tcluster_op_dens_func})], should be removed when evaluating the Hxc potential of the true lattice, hence the minus sign in front of the second term on the right-hand side of Eq.~(\ref{eq:final_vHxc_exp_from_muimp}). Finally, the complementary correlation potential $\partial \overline{e}_{\rm c}(n)/\partial n$ is in charge of recovering the electron correlation effects that were lost when considering an interacting cluster that is disconnected from its environment~\cite{sekaran2021}. 
We should stress at this point that, in Ht-DMFET (which is equivalent to DMET or DET when a single impurity is embedded~\cite{sekaran2021}), the following density-functional approximation is made: \begin{eqnarray}\label{eq:DFA_in_HtDMFET} \overline{e}_{\rm c}(n)\underset{\rm Ht-DMFET}{\approx}0, \end{eqnarray} so that the physical density-functional chemical potential is evaluated as follows~\cite{sekaran2021}, \begin{eqnarray}\label{eq:true_chem_pot_Ht-DMFET} \mu(n)\underset{\rm Ht-DMFET}{\approx}\frac{\partial f^{\mathcal{C}}(n)}{\partial n}. \end{eqnarray} Interestingly, even though it is never computed explicitly in this context, the corresponding (approximate) Hxc potential simply reads as \begin{eqnarray} v_{\rm Hxc}(n)\underset{\rm{Ht-DMFET}}{\approx}\frac{\partial (f^{\mathcal{C}}(n)-t_{\rm s}(n))}{\partial n}, \end{eqnarray} or, equivalently [see Eqs.~(\ref{eq:final_vHxc_exp_from_muimp}) and (\ref{eq:DFA_in_HtDMFET})], \begin{eqnarray}\label{eq:approx_Hxc_pot_HtDMFET} v_{\rm Hxc}(n)\underset{\rm{Ht-DMFET}}{\approx} \tilde{\mu}^{\rm imp}(n)-\left.\frac{\partial \tau^{\mathcal{C}}_{\rm c}(n,\nu)}{\partial \nu}\right|_{\nu=n}. \end{eqnarray} Therefore, Ht-DMFET can be seen as an approximate formulation of KS-DFT where the Hxc potential is determined solely from the density-functional Householder cluster. \subsection{Local potential functional embedding theory}\label{sec:LPFET} Until now the Householder transformation has been described as a functional of the uniform density $n$ or, more precisely, as a functional of the KS density matrix, which is itself a functional of the density. If we opt for a potential-functional reformulation of the theory, as suggested in the following, the Householder transformation becomes a functional of the KS chemical potential $\mu_{\rm s}$ instead, and, consequently, the Householder correction to the per-site kinetic energy operator within the cluster [see Eq.~(\ref{eq:Tcluster_op_dens_func})] is also a functional of $\mu_{\rm s}$: \begin{eqnarray} \hat{\tau}^{\mathcal{C}}(n)\rightarrow \hat{\tau}^{\mathcal{C}}(\mu_{\rm s}). \end{eqnarray} Similarly, the interacting cluster's wave function becomes a bi-functional of the KS {\it and} interacting embedded impurity chemical potentials: \begin{eqnarray}\label{eq:bifunctional_cluster_wfn} \Psi^{\mathcal{C}}(n)\rightarrow \Psi^{\mathcal{C}}\left(\mu_{\rm s},\tilde{\mu}^{\rm imp}\right). 
\end{eqnarray} In the exact theory, for a given chemical potential value $\mu$ in the true interacting lattice, both the KS lattice and the embedded impurity reproduce the interacting lattice density $n(\mu)$, {\it{i.e.}}, \begin{eqnarray}\label{eq:exact_dens_mapping_KS_cluster} n(\mu)=n_{\rm lattice}^{\rm KS}\left(\mu-v_{\rm Hxc}\right)=n^{\mathcal{C}}\left(\mu-v_{\rm Hxc},\tilde{\mu}^{\rm imp}\right), \end{eqnarray} where \begin{eqnarray} n_{\rm lattice}^{\rm KS}(\mu_{\rm s})\equiv\expval{\hat{n}_0}_{\hat{T}-\mu_{\rm s}\hat{N}}, \end{eqnarray} and \begin{eqnarray} \begin{split} n^{\mathcal{C}}\left(\mu_{\rm s},\tilde{\mu}^{\rm imp}\right) &=\expval{\hat{n}_0}_{\Psi^{\mathcal{C}}\left(\mu_{\rm s},\tilde{\mu}^{\rm imp}\right)} \\ &\equiv \expval{\hat{n}_0}_{\hat{t}_{01}+\hat{\tau}^{\mathcal{C}}(\mu_{\rm s})+\hat{U}_0-\tilde{\mu}^{\rm imp}\hat{n}_0}, \end{split} \end{eqnarray} with, according to Eq.~(\ref{eq:final_vHxc_exp_from_muimp}), \begin{eqnarray}\label{eq:exact_muimp_expression} \begin{split} \tilde{\mu}^{\rm imp}&=\tilde{\mu}^{\rm imp}(n(\mu)) \\ &=v_{\rm Hxc}-\left[\frac{\partial \overline{e}_{\rm c}(\nu)}{\partial \nu} -\frac{\partial \tau^{\mathcal{C}}_{\rm c}(n(\mu),\nu)}{\partial \nu}\right]_{\nu=n(\mu)}. \end{split} \end{eqnarray} The density constraint of Eq.~(\ref{eq:exact_dens_mapping_KS_cluster}) combined with Eq.~(\ref{eq:exact_muimp_expression}) allows for an in-principle-exact evaluation of the Hxc potential $v_{\rm Hxc}$. Most importantly, these two equations can be used for designing an alternative (and self-consistent) embedding strategy on the basis of well-identified density-functional approximations. Indeed, in Ht-DMFET, the second term on the right-hand side of Eq.~(\ref{eq:exact_muimp_expression}) is simply dropped, for simplicity [see Eq.~(\ref{eq:DFA_in_HtDMFET})]. If, in addition, we neglect the Householder kinetic correlation density-bi-functional potential correction $\partial \tau^{\mathcal{C}}_{\rm c}(n,\nu)/\partial \nu$ [last term on the right-hand side of Eq.~(\ref{eq:exact_muimp_expression})], we obtain from Eq.~(\ref{eq:exact_dens_mapping_KS_cluster}) the following self-consistent equation, \begin{eqnarray}\label{eq:sc_LPFET_eq} n_{\rm lattice}^{\rm KS}\left(\mu-\tilde{v}_{\rm Hxc}\right)=n^{\mathcal{C}}\left(\mu-\tilde{v}_{\rm Hxc},\tilde{v}_{\rm Hxc}\right), \end{eqnarray} from which an approximation $\tilde{v}_{\rm Hxc}\equiv \tilde{v}_{\rm Hxc}(\mu)$ to the Hxc potential can be determined. Eq.~(\ref{eq:sc_LPFET_eq}) is the second main result of this paper. Since $\tilde{v}_{\rm Hxc}$ is now the to-be-optimized quantity on which the embedding fully relies, we refer to the approach as {\it local potential functional embedding theory} (LPFET), in which the key density-functional approximation that is made reads as \begin{eqnarray}\label{eq:LPFET_approx_to_vHxc} {v}_{\rm Hxc}(n)\underset{\rm LPFET}{\approx}\tilde{\mu}^{\rm imp}(n). \end{eqnarray} The approach is graphically summarized in Fig.~\ref{Fig:self-consistent-loop-scheme}. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.4]{Figure1.pdf} \caption{Graphical representation of the LPFET procedure. Note that the {\it same} Hxc potential $\tilde{v}_{\rm Hxc}$ is used in the KS lattice and the embedding Householder cluster. It is optimized self-consistently in order to fulfill the density constraint of Eq.~(\ref{eq:sc_LPFET_eq}). 
See text for further details.} \label{Fig:self-consistent-loop-scheme} \end{center} \end{figure} In order to verify that the first HK theorem~\cite{hktheo} still holds at the LPFET level of approximation, let us assume that two chemical potentials $\mu$ and $\mu+\Delta\mu$ lead to the same density. If so, the converged Hxc potentials should differ by $\tilde{v}_{\rm Hxc}\left(\mu+\Delta\mu\right)-\tilde{v}_{\rm Hxc}(\mu)=\Delta\mu$, so that both calculations give the same KS chemical potential value [see Eq.~(\ref{eq:KS_pot_decomp})]. According to Eqs.~(\ref{eq:sc_LPFET_eq}) and (\ref{eq:LPFET_approx_to_vHxc}), it would imply that two different values of the interacting embedded impurity chemical potential can give the same density, which is impossible~\cite{sekaran2021,senjean2017local}. Therefore, when convergence is reached in Eq.~(\ref{eq:sc_LPFET_eq}), we can generate an approximate map \begin{eqnarray} \mu\rightarrow n(\mu) \underset{\rm LPFET}{\approx} n_{\rm lattice}^{\rm KS}\left(\mu-\tilde{v}_{\rm Hxc}\right) = \expval{\hat{n}_0}_{\Psi^{\mathcal{C}}\left(\mu-\tilde{v}_{\rm Hxc},\tilde{v}_{\rm Hxc}\right)} , \end{eqnarray} and compute approximate per-site energies as follows, \begin{eqnarray}\label{eq:per_site_ener_LPFFET} \frac{E(\mu)}{L}+\mu n(\mu)\underset{\rm{LPFET}}{\approx}\expval{\hat{t}_{01}+\hat{U}_0}_{\Psi^{\mathcal{C}}\left(\mu-\tilde{v}_{\rm Hxc},\tilde{v}_{\rm Hxc}\right)}, \end{eqnarray} since the approximation in Eq.~(\ref{eq:DFA_in_HtDMFET}) is also used in LPFET, as discussed above.\\ Note that Ht-DMFET and LPFET use the same per-site energy expression [see Eq.~(\ref{eq:per_site_ener_HtDMFET})], which is a functional of the interacting cluster's wave function. In both approaches, the latter and the non-interacting lattice share the same density. Therefore, if the per-site energy is evaluated as a function of the lattice filling $n$, both methods will give exactly the same result. However, different energies will be obtained if they are evaluated as functions of the chemical potential value $\mu$ in the interacting lattice. The reason is that Ht-DMFET and LPFET will give different densities. Indeed, as shown in Sec.~\ref{subsubsec:exact_DFE}, Ht-DMFET can be viewed as an approximation to KS-DFT where the Hxc density-functional potential of Eq.~(\ref{eq:approx_Hxc_pot_HtDMFET}) is employed. As readily seen from Eq.~(\ref{eq:LPFET_approx_to_vHxc}), the LPFET and Ht-DMFET Hxc potentials differ by the Householder kinetic correlation potential (which is neglected in LPFET). If the corresponding KS densities were the same then the Hxc potential, the Householder transformation, and, therefore, the chemical potential on the interacting embedded impurity would be the same, which is impossible according to Eqs.~(\ref{eq:approx_Hxc_pot_HtDMFET}) and (\ref{eq:LPFET_approx_to_vHxc}).\\ \subsection{Comparison with SDE}\label{subsec:sde_comparison} At this point we should stress that LPFET is very similar to the SDE approach of Mordovina {\it et al.}~\cite{mordovina2019self}. The major difference between SDE and LPFET (in addition to the fact that LPFET has a clear connection with a formally exact density-functional embedding theory based on the Householder transformation) is that no KS construction is made within the cluster. Instead, the Hxc potential is directly updated in the KS lattice, on the basis of the correlated embedded impurity density. 
This becomes even more clear when rewriting Eq.~(\ref{eq:sc_LPFET_eq}) as follows, \begin{eqnarray}\label{eq:LPFET_Hxc_pot_from_dens_cluster} \tilde{v}_{\rm Hxc}=\mu-\left[n_{\rm lattice}^{\rm KS}\right]^{-1}\left(n^{\mathcal{C}}\left(\mu-\tilde{v}_{\rm Hxc},\tilde{v}_{\rm Hxc}\right)\right), \end{eqnarray} where $\left[n_{\rm lattice}^{\rm KS}\right]^{-1}:\,n\rightarrow \mu_{\rm s}(n)$ is the inverse of the non-interacting chemical-potential-density map. A practical advantage of such a procedure (which remains feasible since the full system is treated at the non-interacting KS level only) lies in the fact that the KS construction within the cluster is automatically (and exactly) generated by the Householder transformation, once the density has been updated in the KS lattice (see Eq.~(\ref{eq:occ_embedded_imp_lattice_equal}) and the comment that follows). Most importantly, the density in the KS lattice and the density of the non-interacting KS embedded impurity (which, unlike the embedded {\it interacting} impurity, is not used in the actual calculation) will match {\it at each iteration} of the Hxc potential optimization process, as it should when convergence is reached. If, at a given iteration, the KS construction were made directly within the cluster, there would always be a ``delay'' in density between the KS lattice and the KS cluster, which would only disappear at convergence. Note that, when the latter is reached, the (approximate) Hxc potential of the lattice should match the one extracted from the cluster, which is defined in SDE as the difference between the KS cluster Hamiltonian and the one-electron part of the interacting cluster's Hamiltonian~\cite{mordovina2019self}, both reproducing the density of the KS lattice. Therefore, according to Eqs.~(\ref{eq:dens_func_cluster_hamilt}), (\ref{eq:Tcluster_op_dens_func}) and (\ref{eq:dens_func_cluster_KS_eq}), the converged Hxc potential will simply correspond to the chemical potential on the interacting embedded impurity, exactly like in LPFET [see Eq.~(\ref{eq:LPFET_approx_to_vHxc})].\\ Note finally that the simplest implementation of LPFET, as suggested by Eq.~(\ref{eq:LPFET_Hxc_pot_from_dens_cluster}), can be formally summarized as follows: \begin{eqnarray}\label{eq:LPFET_algo} \begin{split} \tilde{v}^{(i+1)}_{\rm Hxc}&=\mu-\left[n_{\rm lattice}^{\rm KS}\right]^{-1}\left(n^{\mathcal{C}}\left(\mu-\tilde{v}^{(i)}_{\rm Hxc},\tilde{v}^{(i)}_{\rm Hxc}\right)\right), \\ \tilde{v}^{(i=0)}_{\rm Hxc}&=0. \end{split} \end{eqnarray} A complete description of the algorithm is given in the next section. \section{LPFET algorithm}\label{sec:lpfet_algo} The LPFET approach introduced in Sec.~\ref{sec:LPFET} aims at computing the interacting chemical-potential-density $\mu \rightarrow n(\mu)$ map through the self-consistent optimization of the uniform Hxc potential. A schematics of the algorithm is provided in Fig.~\ref{Fig:self-consistent-loop-convergence}. It can be summarized as follows. \newline \\1. We start by diagonalizing the one-electron Hamiltonian ({\it{i.e.}}, the hopping in the present case) matrix ${\bm t}\equiv t_{ij}$ [see Eq.~(\ref{eq:hopping_matrix})]. Thus we obtain the ``molecular'' spin-orbitals and their corresponding energies. We fix the chemical potential of the interacting lattice to some value $\mu$ and (arbitrarily) initialize the Hxc potential to $\tilde{v}_{\rm Hxc}=0$. Therefore, at the zeroth iteration, the KS chemical potential $\mu_{\rm s}$ equals $\mu$. \newline \\ 2. 
We occupy all the molecular spin-orbitals with energies below $\mu_{\rm s}=\mu-\tilde{v}_{\rm Hxc}$ and construct the corresponding density matrix (in the lattice representation). The latter provides the uniform KS density (denoted $n^{\rm KS}_{\rm lattice}$ in Fig.~\ref{Fig:self-consistent-loop-convergence}) and the embedding Householder cluster Hamiltonian [see Eq.~(\ref{eq:int_cluster_SE})] in which the impurity chemical potential is set to $\tilde{\mu}^{\rm imp}=\tilde{v}_{\rm Hxc}$ [see Eq.~(\ref{eq:LPFET_approx_to_vHxc})]. \newline \\ 3. We solve the interacting Schr\"{o}dinger equation for the two-electron Householder cluster and deduce the occupation of the embedded impurity (which is denoted $n^{\mathscr{C}}$ in Fig.~\ref{Fig:self-consistent-loop-convergence}). This can be done analytically since the Householder cluster is an asymmetric Hubbard dimer~\cite{sekaran2021}. \newline \\ 4. We verify that the density in the KS lattice $n^{\rm KS}_{\rm lattice}$ and the occupation of the interacting embedded impurity $n^{\mathscr{C}}$ match (a convergence threshold has been set to 10$^{-4}$). If this is the case, the calculation has converged and $n^{\mathscr{C}}$ is interpreted as (an approximation to) the density $n(\mu)$ in the true interacting lattice. If the two densities do not match, the Hxc potential $\tilde{v}_{\rm Hxc}$ is adjusted in the KS lattice such that the latter reproduces $n^{\mathscr{C}}$ [see Eq.~(\ref{eq:LPFET_algo})] or, equivalently, such that the KS lattice contains $Ln^{\mathscr{C}}$ electrons. We then return to step 2.\\ \color{black} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.27]{Figure2.pdf} \caption{Schematics of the LPFET algorithm. The (one-electron reduced) density matrix of the KS lattice is referred to as the 1RDM. See text for further details.} \label{Fig:self-consistent-loop-convergence} \end{center} \end{figure} \section{Results and discussion}\label{sec:results} In the following, LPFET is applied to a uniform Hubbard ring with a large $L$ = 1000 number of sites in order to approach the thermodynamic limit. Periodic boundary conditions have been used. The hopping parameter is set to $t$ = 1. As pointed out in Sec.~\ref{sec:LPFET}, plotting the Ht-DMFET (which is equivalent to DMET or DET for a single embedded impurity) and LPFET per-site energies as functions of the lattice filling $n$ would give exactly the same results (we refer the reader to Ref.~\cite{sekaran2021} for a detailed analysis of these results). However, the chemical-potential-density $\mu\rightarrow n(\mu)$ maps obtained with both methods are expected to differ since they rely on different density-functional approximations [see Eqs.~(\ref{eq:approx_Hxc_pot_HtDMFET}) and (\ref{eq:LPFET_approx_to_vHxc})]. We focus on the self-consistent evaluation of the LPFET map in the following. Comparison is made with Ht-DMFET and the exact BA results.\\ As illustrated by the strongly correlated results of Figs.~\ref{fig:convergence_density} and \ref{fig:convergence_vHxc}, the LPFET self-consistency loop converges smoothly in few iterations. The same observation is made in weaker correlation regimes (not shown). The deviation in density between the KS lattice and the embedded impurity is drastically reduced after the first iteration (see Fig.~\ref{fig:convergence_density}). This is also reflected in the large variation of the Hxc potential from the zeroth to the first iteration (see Fig.~\ref{fig:convergence_vHxc}). 
It originates from the fact that, at the zeroth iteration, the Hxc potential is set to zero in the lattice while, in the embedding Householder cluster, the interaction on the impurity site is ``turned on''. As shown in Fig.~\ref{fig:convergence_density}, the occupation of the interacting embedded impurity is already at the zeroth iteration a good estimate of the self-consistently converged density. A few additional iterations are needed to refine the result. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.6]{Figure3.pdf} \caption{Comparison of the KS lattice and embedded impurity densities at each iteration of the LPFET calculation. The interaction strength and chemical potential values are set to $U/t=8$ and $\mu/t = - 0.97$, respectively. As shown in the inset, convergence is reached after five iterations.} \label{fig:convergence_density} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.6]{Figure4.pdf} \caption{Convergence of the LPFET Hxc potential for $U/t=8$ and $\mu/t = - 0.97$.} \label{fig:convergence_vHxc} \end{center} \end{figure} The converged LPFET densities are plotted in Fig.~\ref{fig:Mott-Hubbard} as functions of the chemical potential $\mu$ in various correlation regimes. The non-interacting $U=0$ curve describes the KS lattice at the zeroth iteration of the LPFET calculation. Thus we can visualize, as $U$ deviates from zero, how much the KS lattice learns from the interacting two-electron Householder cluster. LPFET is actually quite accurate (even more than Ht-DMFET, probably because of error cancellations) in the low-density regime. Even though LPFET deviates from Ht-DMFET when electron correlation is strong, as expected, their chemical-potential-density maps are quite similar. This is an indication that neglecting the Householder kinetic correlation potential contribution to the Hxc potential, as done in LPFET, is not a crude approximation, even in the strongly correlated regime. As expected~\cite{knizia2012density,sekaran2021}, LPFET and Ht-DMFET poorly perform when approaching half filling. They are unable to describe the density-driven Mott--Hubbard transition ({\it{i.e.}}, the opening of the gap). As discussed in Ref.~\cite{sekaran2021}, this might be related to the fact that, in the exact theory, the Householder cluster is not disconnected from its environment and it contains a fractional number of electrons, away from half filling, unlike in the (approximate) Ht-DMFET and LPFET schemes. In the language of KS-DFT, modeling the gap opening is equivalent to modeling the derivative discontinuity in the density-functional correlation potential $v_{\rm c}(n)=\mu(n)-\mu_{\rm s}(n)-\frac{U}{2}n$ at half filling. As clearly shown in Fig.~\ref{fig:Hartree-exchange-correlation-potential}, Ht-DMFET and LPFET do not reproduce this feature. In the language of the exact density-functional embedding theory derived in Sec.~\ref{subsec:exact_dfe_dft}, both Ht-DMFET and LPFET approximations neglect the complementary density-functional correlation energy $\overline{e}_{\rm c}(n)$ that is induced by the environment of the (closed) density-functional Householder cluster. As readily seen from Eq.~(\ref{eq:final_vHxc_exp_from_muimp}), it should be possible to describe the density-driven Mott--Hubbard transition with a single statically embedded impurity, provided that we can model the derivative discontinuity in $\partial\overline{e}_{\rm c}(n)/\partial n$ at half filling. 
This is obviously a challenging task that is usually bypassed by embedding more impurities~\cite{knizia2012density,sekaran2021}. The implementation of a multiple-impurity LPFET as well as its generalization to higher-dimension lattice or quantum chemical Hamiltonians is left for future work. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.6]{Figure5.pdf} \caption{Converged LPFET densities (red solid lines) plotted as functions of the chemical potential $\mu$ in various correlation regimes. Comparison is made with the exact BA (black solid lines) and Ht-DMFET (blue dotted lines) results. In the latter case, the chemical potential is evaluated {\it via} the numerical differentiation of the density-functional Ht-DMFET per-site energy [see Eqs.~(\ref{eq:f_cluster_func}) and (\ref{eq:true_chem_pot_Ht-DMFET})]. The non-interacting ($U=0$) chemical-potential-density map [see Eq.~(\ref{eq:KS_dens_func_chemical_potential})] is shown for analysis purposes.} \label{fig:Mott-Hubbard} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.6]{Figure6.pdf} \caption{Correlation potential $v_{\rm c}(n)=\mu(n)-\mu_{\rm s}(n)-\frac{U}{2}n$ plotted as a function of the lattice filling $n$ at the Ht-DMFET (blue dashed line) and LPFET (red solid line) levels of approximation for $U/t=8$. Comparison is made with the exact BA correlation potential (black solid line).} \label{fig:Hartree-exchange-correlation-potential} \end{center} \end{figure} \section{Conclusion and perspectives}\label{sec:conclusion} An in-principle-exact density-functional reformulation of the recently proposed {\it Householder transformed density matrix functional embedding theory} (Ht-DMFET)~\cite{sekaran2021} has been derived for the uniform 1D Hubbard Hamiltonian with a single embedded impurity. On that basis, an approximate {\it local potential functional embedding theory} (LPFET) has been proposed and implemented. Ht-DMFET, which is equivalent to DMET or DET in the particular case of a single impurity, is reinterpreted in this context as an approximation to DFT where the complementary density-functional correlation energy $\overline{e}_{\rm c}(n)$ induced by the environment of the embedding ``impurity+bath'' cluster is neglected. LPFET neglects, in addition, the kinetic correlation effects induced by the Householder transformation on the impurity chemical potential. We have shown that combining the two approximations is equivalent to approximating the latter potential with the Hxc potential of the full lattice. Thus an approximate Hxc potential can be determined {\it self-consistently} for a given choice of external (chemical in the present case) potential in the true interacting lattice. The self-consistency loop, which does not exist in regular single-impurity DMET or DET~\cite{PRB21_Booth_effective_dynamics_static_embedding}, emerges naturally in LPFET from the exact density constraint, {\it{i.e.}}, by forcing the KS lattice and interacting embedded impurity densities to match. In this context, the energy becomes a functional of the Hxc potential. In this respect, LPFET can be seen as a flavor of KS-DFT where no density functional is used. LPFET is very similar to SDE~\cite{mordovina2019self}. The two approaches essentially differ in the optimization of the potential. In LPFET, no KS construction is made within the embedding cluster, unlike in SDE. Instead, the Hxc potential is directly updated in the lattice. 
As a result, the KS cluster (which is not used in the actual calculation) can be automatically generated with the correct density by applying the Householder transformation to the KS lattice Hamiltonian.\\ LPFET and Ht-DMFET chemical-potential-density maps have been computed for a 1000-site Hubbard ring. Noticeable differences appear in the strongly correlated regime. LPFET is more accurate than Ht-DMFET in the low-density regime, probably because of error cancellations. As expected from previous works~\cite{sekaran2021,knizia2012density}, their performance deteriorates as we approach half filling. It appears that, in the language of density-functional embedding theory, it should be possible to describe the density-driven Mott--Hubbard transition ({\it{i.e.}}, the opening of the gap), provided that the complementary correlation potential $\partial\overline{e}_{\rm c}(n)/\partial n$ exhibits a derivative discontinuity at half filling. Since the latter is neglected in both methods, the gap opening is not reproduced. The missing correlation effects might be recovered by applying a multi-reference G\"{o}rling--Levy-type perturbation theory on top of the correlated cluster calculation~\cite{sekaran2021}. Extending LPFET to multiple impurities by means of a block Householder transformation is another viable strategy~\cite{sekaran2021}. Note that, like DMET or SDE, LPFET is in principle applicable to quantum chemical Hamiltonians written in a localized molecular orbital basis. Work is currently in progress in these directions. \section*{Acknowledgments} The authors thank Saad Yalouz (for his comments on the manuscript and many fruitful discussions) and Martin Rafael Gulin (for stimulating discussions). The authors also thank LabEx CSC (ANR-10-LABX-0026-CSC) and ANR (ANR-19-CE29-0002 DESCARTES and ANR-19-CE07-0024-02 CoLab projects) for funding.
\section{Introduction} The concept of a ``point rotation reference frame'', i.e., a frame with an axis of rotation at every point, arises in optics. However, this concept is also applicable to other areas of physics. An example of such a frame is the optical indicatrix (index ellipsoid) \cite{Born}. Any rotating field, including spinor and gravitational fields, is an object of point rotation. The coordinates of the frame are the angle, the time and the axis of rotation. The radial coordinate is not used in manipulations with these frames, and centrifugal forces do not exist in them. Optically, a rotating half-wave plate is equivalent to a resting electrooptical crystal with a rotating indicatrix \cite{Bur}, but, physically, they are different because the plate has only one axis of rotation. The frames are not compatible with Cartesian frames. The main question for such a concept is: what is the transformation between point rotation reference frames? Is the transformation Galilean or not? From the viewpoint of contemporary physics, a non-Galilean transformation, with different times for frames rotating at different frequencies, is preferable to the Galilean one, where the time is the same. Moreover, such a transformation must contain a constant with the dimension of time, similarly to the Lorentz transformation and the speed of light. This constant should define the limits of applicability of basic physical laws. In contrast to mechanics, where the relativity principle is used to deduce the transformation for rectilinear motion, such a general principle does not exist for point rotation. Therefore, this transformation cannot be determined explicitly. It is known that an electric field, rotating perpendicular to the optical axis of a 3-fold electrooptic crystal, causes rotation of the optical indicatrix at a frequency equal to half the frequency of the field. It means that the optical indicatrix of such a crystal possesses some properties of a two-component spinor. The sense of rotation of a circularly polarized optical wave propagating through this crystal is reversed, and its frequency is shifted, if the amplitude of the applied electric field is equal to the half-wave value. The device performing such a shift is the electrooptical single-sideband modulator \cite{pat}. The use of the transformation makes the description of the light propagation in the electrooptical single-sideband modulator simpler and more comprehensible. For the description of the phenomenon, one transits to a rotating reference frame associated with the axes of the indicatrix. As a result of such a transition, the frequency of the wave is shifted by half the frequency of the modulating electric field. This shift is doubled at the modulator output due to the polarization reversal and the transition back to the initial reference frame \cite{pat}. In this paper we study the general form of the two-dimensional non-Galilean transformation and the possibility of its experimental verification. We emphasize that an experiment always involves both the direct and the reverse transformation, because an observer rotating at each point does not exist.
\section{The transformation} The general form of the normalized non-Galilean transformation may be written as follows,
\begin{equation}
\tilde{\varphi}=\varphi -\Omega t,\qquad \tilde{t}=-\tau \varphi +t,
\label{trn1}
\end{equation}
where the tilde corresponds to the rotating frame, $\tilde{\varphi},\varphi$ and $\tilde{t},t$ are the normalized angle and time, $\Omega$ is the frequency of the rotating frame (the modulating frequency is $2\Omega$), and $\tau(\Omega)$ is a parameter with the dimension of time. The reverse transformation follows from (\ref{trn1}),
\begin{equation}
(1-\Omega \tau )\varphi =\tilde{\varphi}+\Omega \tilde{t},\qquad (1-\Omega \tau )t=\tau \tilde{\varphi}+\tilde{t}.
\label{tr1}
\end{equation}
Consider a plane circularly polarized light wave propagating through the modulator and transit into the rotating frame. The optical frequency in this frame is
\begin{equation}
\tilde{\omega}=\frac{\omega -\Omega }{-\tau \omega +1},
\label{f1}
\end{equation}
where the frequencies in the resting and rotating frames are defined as $\omega =\varphi /t$ and $\tilde{\omega}=\tilde{\varphi}/\tilde{t}$, respectively. If the half-wave condition is fulfilled, the reversal of rotation occurs at the modulator output. For the circularly polarized wave, the negative sign of the frequency corresponds to the opposite sense of rotation. Making the transition back into the resting frame with the sign of $\tilde{\omega}$ changed, we obtain the output frequency as a function of $\omega$ and $\Omega$,
\begin{equation}
\omega ^{\prime }=\frac{-\omega (1+\tau \Omega )+2\Omega }{-2\tau \omega +\tau \Omega +1}.
\label{fr1}
\end{equation}
In fact, in this approach we treat the single-sideband modulator as a black box: it changes the sense of rotation of the circularly polarized light wave and shifts its frequency.
\section{Optics} For the evaluation of the parameter $\tau$ we use the results of the optical measurements of Ref.~\cite{jpc}. In that work the principle of single-sideband modulation was checked and the Galilean transformation was used for the theoretical description of the process. Circularly polarized light from a Helium-Neon laser was modulated by a Lithium Niobate single-sideband modulator at the frequency 110~MHz. The experiment showed an asymmetry of the frequency shift for the two opposite polarizations. The extra shift was of the order of a few MHz. Prof.\ W.~H. Steier, one of the authors of Ref.~\cite{jpc}, kindly answered my question about the origin of this asymmetry: ``Your are correct about the apparent asymmetry. We never noticed it earlier. I do not know if this is a property of the scanning mirror interferometer. It has been many many years since we did that work and all of the equipment has now been replaced. It would not be possible for us to redo any work or start the experiments again''. Possibly, the origin of this extra shift is a defect of the equipment. In any case this shift can be used for approximate estimates of the upper boundary of the parameter $\tau$. From this an important conclusion follows: the parameter $\tau$ is very small. For small $\tau$ and $|\Omega |\ll |\omega |$, the output frequency (\ref{fr1}) may be written as
\begin{equation}
\omega ^{\prime }\approx -\omega +2\Omega +2\tau \omega ^{2}.
\label{frs}
\end{equation}
The extra shift equals $2\tau \omega ^{2}$. The exact form of the dependence $\tau (\Omega )$ is unknown.
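The size of the extra shift in (\ref{frs}) is easy to illustrate numerically. The following minimal Python sketch evaluates the exact output frequency (\ref{fr1}) and the approximation (\ref{frs}); the numerical values are illustrative assumptions only (a HeNe carrier and a 110~MHz modulating field, with $\omega$ and $\Omega$ treated as ordinary frequencies in Hz), not fitted data.
\begin{verbatim}
# Illustrative sketch of Eqs. (fr1) and (frs); all values are assumptions.
omega = 4.74e14          # HeNe optical frequency [Hz]
Omega = 55e6             # half the 110 MHz modulating frequency [Hz]
tau   = 1e-23            # trial value of the parameter tau [s]

# Exact output frequency, Eq. (fr1)
omega_out = (-omega * (1 + tau * Omega) + 2 * Omega) \
            / (1 + tau * Omega - 2 * tau * omega)

# Approximation (frs) and the extra shift 2*tau*omega**2
omega_approx = -omega + 2 * Omega + 2 * tau * omega**2
extra_shift  = 2 * tau * omega**2   # ~4.5e6 Hz for these values
\end{verbatim}
For $\tau \sim 10^{-23}$~s the extra shift is of the order of a few MHz, consistent with the magnitude of the asymmetry quoted above.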
In the absence of an exact form, we assume that $\tau$ may be expanded in a power series in $\Omega$,
\begin{equation}
\tau =\tau _{0}+\tau _{1}^{2}\Omega +\tau _{2}^{3}\Omega ^{2}+\ldots
\label{tau}
\end{equation}
In such a form all the coefficients $\tau _{n}$ have the dimension of time. Since $\tau$ is very small and, usually, $\Omega \ll \omega$, we can restrict ourselves to the first non-zero term in the expansion (\ref{tau}).
\subsubsection{The case $\protect\tau _{0}\neq 0$}
This case is the most favorable one for optical measurements from the viewpoint of simplicity. The constant $\tau _{0}$ may be called the ``quantum of time''. The value of the extra shift defines the upper boundary of the quantum of time, $\sim 10^{-23}$~s. That corresponds to a distance of the order of the proton size. An experiment similar to \cite{jpc} provides an excellent opportunity to measure the parameter $\tau _{0}$. The accuracy of the measurement can be increased by several orders of magnitude by using modern technology. The advent of laser cooling has underpinned the development of cold Cs fountain clocks, which now achieve frequency uncertainties of approximately $5\cdot 10^{-16}$ and even smaller \cite{NPL}. That can be used in the measurement. The best accuracy may be achieved in a ring schematic, similarly to measurements of the anomalous magnetic moment of the electron.
\subsubsection{The case $\protect\tau _{0}=0$}
If $\tau _{0}$ is exactly equal to zero, the accuracy should be increased by a factor of $1/(\tau _{1}\Omega )$. According to the results of \cite{jpc}, the upper boundary of $\tau _{1}$ is $\sim 10^{-16}$~s. Using the optical range for the modulation is connected with the problem of phase matching \cite{pat}. Below we briefly summarize the results of the analysis and the possibilities of measurements in other areas of physics.
\section{General relativity}
Consider the case $\tau _{0}=0$ in application to general relativity. We now restrict ourselves to the second term of $\tau (\Omega )$ and consider (\ref{trn1}) as the Lorentz transformation. Usually this name relates to rectilinear motion in mechanics; here the role of the coordinate and the velocity is played by the angle and the frequency, respectively. After the normalization
\begin{equation}
(\tilde{\varphi},\tilde{t})\rightarrow \frac{(\tilde{\varphi},\tilde{t})}{\sqrt{1+\tau _{1}\Omega }},\qquad (\varphi ,t)\rightarrow (\varphi ,t)\,\sqrt{1-\tau _{1}\Omega },
\label{norm}
\end{equation}
we obtain
\begin{equation}
\tilde{\varphi}=\frac{\varphi -\Omega t}{\sqrt{1-\tau _{1}^{2}\Omega ^{2}}},\qquad \tilde{t}=\frac{-\tau _{1}\Omega \varphi +t}{\sqrt{1-\tau _{1}^{2}\Omega ^{2}}}.
\label{rL}
\end{equation}
Analogously to mechanics, $\tau _{1}$ can be regarded as the minimum possible time interval and $1/\tau _{1}$ as the maximum possible frequency. The form $(\tau _{1}^{2}\varphi ^{2}-t^{2})$ is invariant under the transformation (\ref{rL}). Despite the fact that Cartesian reference frames are not compatible with point rotation reference frames, there exists a solution of Einstein's equation invariant under the transformation (\ref{rL}). Consider an exact solution with cylindrical symmetry \cite{Mar},
\begin{equation}
ds^{2}=Ar^{a+b}dr^{2}+r^{2}d\varphi ^{2}+r^{b}dz^{2}+Cr^{a}dt^{2},
\label{mM}
\end{equation}
where $A,C,a,b$ are constants. This solution is invariant under the transformation (\ref{rL}) provided $a=2$.
Moreover, at $a=b=2$ and after a normalization of $r$ and $t$, the metric can be reduced to the form
\begin{equation}
ds^{2}=(1+\frac{1}{L}r)[dr^{2}+l^{2}d\varphi ^{2}+dz^{2}-c^{2}dt^{2}],
\label{mp}
\end{equation}
where $l\equiv c\tau _{1}$ and $L$ are constants with the dimension of length. At the ``center'', $r=0$, this metric looks like a ``Euclidean metric'' for the point rotations. Non-stationary solutions of Einstein's equation, invariant under the transformation (\ref{rL}), also exist. The existence of such metrics opens the way for applying the concept of point rotation reference frames to general relativity. In this sense, suitable solutions of Einstein's equation are possible, but searching for consequences of such solutions applicable to measurements of $\tau _{1}$ or $l$ is not a simple problem.
\section{Quantum mechanics}
Initially, quantum mechanics was considered as the most suitable area of physics for the measurement of the parameter $\tau$. However, the hope to find in quantum mechanics a consequence of the transformation (\ref{trn1}) applicable to measurements proved to be illusory. Quantum states in rotating magnetic or electromagnetic fields are not stationary. The problem becomes stationary upon transition to the rotating frame. The main role in this transition is played by the phase transformation of the spinor, which is defined by the first equation in (\ref{trn1}). The second equation, containing the parameter $\tau$, plays a minor role. The transition was used for finding a new class of exact localized solutions of the Dirac equation in a rotating electromagnetic field \cite{arx}. However, further study showed that the parameter $\tau$ vanishes from the final results due to the reverse transition into the resting frame. This also allows us to conclude that the non-Galilean transformation is not related to the problem of the anomalous magnetic moment, at least for the above exact localized solutions.
\section{Conclusion}
We have considered the concept of the point rotation reference frame and the non-Galilean transformation for such frames. The concept is applicable to optics, general relativity and quantum mechanics. The parameter $\tau$ with the dimension of time is the distinguishing feature of the non-Galilean transformation. This parameter is very small. Presently, optics can be considered as the main area of physics for measurements of the parameter $\tau$. The nonzero term $\tau _{0}$ in the expansion (\ref{tau}) is the most favorable case for optical measurements. However, in the case $\tau _{0}=0$ measurements are also possible. The experiment would be similar to \cite{jpc}, but on the basis of modern technology. The best accuracy may be achieved in a ring schematic, similarly to measurements of the anomalous magnetic moment of the electron. This schematic, regardless of the results of the experiments (positive or negative), can also be used for high-precision manipulation of the laser frequency in a variety of applications, in particular for standards of length and time. A fundamental constant with the dimension of time should be on the list of basic physical constants; however, such a constant is absent from it. The parameter $\tau$ contains this constant, and it should be a basic physical constant because it is determined by such a basic physical process as rotation. The investigation of this problem is very important since the constant defines the limits of applicability of the basic physical laws for very small intervals of time and length.
Moreover, this constant might determine the minimum possible values of such intervals as well as the minimum possible value of energy. The above opinion of Prof.\ W.~H. Steier about the origin of the asymmetry is an argument against funding the high-precision measurements. Nevertheless, the problem of ``to be or not to be'' (in the sense of Galilean or not) must be solved.
\section{Introduction} Many problems in financial mathematics, economics, and engineering may be cast in the form of a stochastic optimisation problem, and typically the agent's optimal control depends on the underlying (dynamic) model assumptions. Models, however, are approximations of the world, whether they are purely data driven (i.e., empirical) models, parametric models estimated from data, or models that are posited to reflect the given stochastic dynamics. As models are approximations, understanding how to protect one's decisions from the uncertainty inherent in a model is of paramount importance. Thus, here we consider robust stochastic optimisation problems where the agent chooses the action that is optimal under the worst case within an uncertainty set. In many contexts, and particularly so in financial modelling, it is important to account for risk. Using the expected utility of rewards is one approach for trading off risk and reward; however, there are many models of decision under uncertainty that go beyond it. Here, we take the rank dependent expected utility (RDEU) framework of \cite{Yaari1987Economitrica}, which allows agents to account not only for the concavity in their utility, but also to distort the probabilities of outcomes, giving a more fulsome reflection of their risk preferences. While only specific examples of robust stochastic optimisation problems admit (semi-)analytical solutions or are numerically tractable, a general framework for solving robust stochastic optimisation problems is still missing, and that is the focus of this paper. Hence, we develop a reinforcement learning (RL) approach for solving a general class of robust stochastic optimisation problems, where agents aim to minimise their risk -- measured by RDEU -- subject to model uncertainty, thus robustifying their actions. In our setting, an agent's action induces a univariate controlled random variable (rv) which is subject to distributional uncertainty, modelled via the Wasserstein distance. Notably, in our setting the uncertainty is placed on the controlled rv: the alternative distributions lie within a Wasserstein ball around it, but may also be subject to other structural constraints. The vast majority of the literature on RL, however, considers maximising the expected total reward. While risk-aware RL often focuses on expected utility, recently \cite{tamar2015policy} extended policy gradient methods to account for coherent measures of risk. Here, however, we are interested in RDEU measures of risk, which fall outside the class of coherent risk measures. A (risk-neutral) distributionally robust RL approach for Markov decision processes, where robustness is induced by looking at all transition probabilities (from a given state) that have relative entropy with respect to (wrt) a reference probability less than a given epsilon, is developed in \cite{smirnova2019distributionally}. \cite{abdullah2019wasserstein} develops a (risk-neutral) robust RL paradigm where policies are randomised with a distribution that depends on the current state, see \cite{wang2020reinforcement} for a continuous-time version of randomised policies with entropy regularisation and \cite{guo2020entropy,firoozi2020exploratory} for its generalisation to mean-field game settings.
In \cite{abdullah2019wasserstein}, the uncertainty is placed on the conditional transition probability from old state and action to new state, and the set of distributions consists of those that lie within an ``average'' $2$-Wasserstein ball around a benchmark model's distribution. As randomised policies are used, the constraint and policy decouple. In this work, there is no such decoupling and we work mostly with deterministic policies that map states to actions in a unique manner. As far as the authors are aware, this paper fills two gaps in the literature. The first is the incorporation of RDEU measures of risk into RL problems, and the second is the robustification of risk-aware RL. We fill these gaps by posing a generic robust risk-aware optimisation problem, developing policy gradient formulae for numerically solving it, and illustrating its tractability on three prototypical examples in financial mathematics. The remainder of this paper is structured as follows: Section \ref{sec:optimisation} introduces the robust stochastic optimisation problem. Section \ref{sec:gradients} provides the RL policy gradient formulae for the inner and outer problems. Section \ref{sec:example} illustrates the tractability of the RL framework on three examples: robust portfolio allocation, optimising a benchmark, and statistical arbitrage. \section{Robust Optimisation Problems}\label{sec:optimisation} We consider agents who measure the risk/reward of a rv using Yaari's dual theory \cite{Yaari1987Economitrica}. As Yaari argues, agents not only value outcomes according to a utility function, but also view probabilities of outcomes subjectively and thus distort them. This leads to the notion of rank dependent expected utility (RDEU) defined below. \begin{definition}[RDEU] The RDEU of a rv $Y$ may be defined via a Choquet integral as \begin{equation} \label{eqn: RDEU} {\mathcal{R}}^U_g[Y] := \int_{-\infty}^0 \left[1 - g\big(\P (U(Y) > y)\big)\right] dy - \int_0^{+\infty} g\big(\P (U(Y) > y)\big)\, dy \,, \end{equation} where $g \colon [0,1] \to [0,1]$ is an increasing function with $g(0) = 0$ and $g(1) = 1$, called a distortion function, and $U$ is a non-decreasing concave utility function. We assume that $U$ is differentiable almost everywhere. \end{definition} The above definition assumes that positive outcomes correspond to gains and negative ones to losses. The RDEU framework subsumes the class of distortion risk measures, for $U(x) = x$, which includes the well-known Conditional-Value-at-Risk (CVaR), see Section \ref{sec:example}. Moreover, it includes the expected utility framework when $g(x) = x$, in which case ${\mathcal{R}}^U_g[Y] = -{\mathbb{E}}[U(Y)]$. Throughout, we refer to the RDEU of a rv as the rv's risk. We consider the situation where an agent's action $\phi \in \varphi$ induces a rv $X^\phi$, and the agent aims to minimise the risk associated with $X^\phi$, i.e. ${\mathcal{R}}^U_g[X^\phi]$. However, due to the presence of model uncertainty -- distributional uncertainty on $X^\phi$ -- the agent, instead of choosing actions with minimal risk, chooses the action that minimises the worst-case risk of all alternative rvs $X^\theta$, where $\theta$ belongs to an uncertainty set $\vartheta_\phi$, which may depend on the agent's action $\phi$.
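For later reference, the RDEU in Eq.~(\ref{eqn: RDEU}) is straightforward to estimate from a sample: it equals minus a Choquet integral of $U(Y)$, so the order statistics of $U(Y)$ are weighted by increments of the distortion $g$. The following minimal Python sketch illustrates this; the utility and distortion chosen here are illustrative assumptions only and are not taken from the examples below.
\begin{verbatim}
import numpy as np

def rdeu(y, U, g):
    # Empirical RDEU: minus the Choquet integral of U(y) under the distortion g.
    z = np.sort(U(np.asarray(y)))            # order statistics of U(Y)
    N = len(z)
    tail = np.arange(N, 0, -1) / N           # survival levels (N-i+1)/N, i = 1..N
    w = g(tail) - g(tail - 1.0 / N)          # weight of the i-th smallest value
    return -np.sum(w * z)

# Illustrative (assumed) choices of utility and distortion
U = lambda x: 1.0 - np.exp(-0.5 * x)         # exponential utility
g = lambda u: u ** 0.7                       # concave power distortion

y = np.random.default_rng(0).normal(size=10_000)
print(rdeu(y, U, g))
print(rdeu(y, U, lambda u: u), -np.mean(U(y)))   # g(u) = u recovers -E[U(Y)]
\end{verbatim}
With $g(u)=u$ the estimator reduces to $-\tfrac1N\sum_i U(y^{(i)})$, consistent with the expected utility special case noted above.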
Specifically, the agent aims to solve the robust optimisation problem \begin{equation} \inf_{\phi\in\varphi}\; \sup_{\theta\in\vartheta_\phi}\;{\mathcal{R}}^U_{g}[X^\theta] \;, \qquad \text{where} \qquad \vartheta_\phi:=\left\{\theta \in \vartheta : d_p[X^\theta,X^\phi]\le \varepsilon\right\}\;, \tag{P} \label{eqn:P} \end{equation} where $\varphi\subseteq{\mathds{R}}^m$ is the admissible set of controls, $X^\phi$ is a controlled ${\mathds{R}}$-valued rv, $X^\theta$ is an ${\mathds{R}}$-valued rv parametrised by $\theta$, $\vartheta\subseteq {\mathds{R}}^n$ parametrises the robustness set, and $d_p[X,Y]$ denotes the $p$-Wasserstein distance between two rvs $X$ and $Y$, defined below. Problem \eqref{eqn:P} is only fully specified once the mappings $\phi\mapsto X^\phi$ and $\theta\mapsto X^\theta$ are given. The proposed RL approach allows for flexibility in these mappings, thus we make here, apart from the existence of a solution to \eqref{eqn:P}, no further assumption on them. We may interpret \eqref{eqn:P} as an adversarial attack, where the agent picks an action, and an adversary distorts $X^\phi$ to have as poor a performance as possible within a given Wasserstein ball. Below we provide several examples of problem \eqref{eqn:P} which we revisit in Section \ref{sec:example}. Recall that the Wasserstein distance of order $p \in [1, +\infty)$ between $X \stackrel{\P}{\sim} F_X$ and $Y \stackrel{\P}{\sim} F_Y$ is given explicitly by (see e.g., \cite{ambrosio2003lecture}, Chap. 1) \begin{equation} d_p[X\,,\, Y] := \inf_{\chi\in\Pi(F_{X},F_{Y})} \left(\textstyle\int_{{\mathds{R}}^2}|x-y|^p\,\chi(dx,dy)\right)^{\frac1p}, \end{equation}% where $\Pi(F_{X},F_{Y})$ is the set of all bivariate probability measures with marginals $F_{X}$ and $F_{Y}$. The $p$-Wasserstein distance defines a metric on the space of probability measures. The robust stochastic optimisation problem \eqref{eqn:P} is a generalisation of distributionally robust optimisation, where the uncertainty set is a subset of the space of distribution functions only, see e.g., \cite{esfahani2018data, Bernard2020robust}. Here, however, the uncertainty set $\vartheta_\phi$ possesses additional features in that (a) it may depend on the agent's action $\phi$, (b) the rv $X^\theta$ may have a structure induced by $\theta$, in which case not all rvs within a Wasserstein distance around $X^\phi$ are feasible rvs, and (c) the set of feasible parameters $\theta$ belongs to a set $\vartheta$, which may impose additional constraints on $X^\theta$. Problem \eqref{eqn:P} performs a robust optimisation (over $\phi$) of $X^\phi$ as follows. Given $X^\phi$ from the ``outer'' problem, the ``inner'' problem $\sup_{\theta\in\vartheta_\phi} {\mathcal{R}}_g^U[X^\theta]$ corresponds to a robust version of $X^\phi$'s risk. As $\varepsilon\downarrow0$, the inner problem reduces to the RDEU of $X^\phi$. When $\varepsilon>0$, however, the agent incorporates model uncertainty, and instead assesses the risk associated with $X^\phi$ by seeking over all alternate rvs, generated by $\theta\in\vartheta$, that lie within a Wasserstein ball around it. \begin{example}[Robust Portfolio Allocation.] \label{ex:Robust-Portfolio-Allocation} Suppose that $\varphi$ is the probability simplex in $d$-dimensions, for $\phi\in\varphi$ we write $X^\phi=\phi^\intercal X$, and $X=(X_1,\dots,X_d)$ represents the returns of $d$ traded assets. Further, let $\vartheta={\mathds{R}}^n$ and write $X^\theta=H_\theta(X^\phi)$, where $H_\theta(\cdot)$ is an artificial neural net (ANN) parameterised by $\theta$.
In this setup, the inner problem corresponds to seeking over all distribution functions that may be generated by the ANN and that lie within a Wasserstein ball around $X^\phi$; thus, the inner problem results in a robust estimate of the risk of $X^\phi$. The outer problem then seeks to find the best investment that is robust to model uncertainty. \cite{pflug2007ambiguity,esfahani2018data} investigate a similar class of problems; there, however, the uncertainty ball is on the inputs $X$ rather than on the output $X^\phi$, and coherent/convex risk measures are used as measures of risk instead of RDEU. \end{example} \begin{example}[Optimising Risk-Measures with a Benchmark.] \label{ex:benchmark} Suppose that $\varphi$ is a singleton, the components of $\phi\in\varphi$ denote the percentage of wealth to invest in various assets, and $X^\phi$ denotes the terminal value of such an investment. Then, $X^\phi$ may be interpreted as a benchmark strategy that the investor wishes to outperform in terms of RDEU. Let $\theta\in\vartheta$ parameterise a dynamic self-financing trading strategy (e.g., parameters in an ANN that map time and asset prices to trading positions) whose terminal value is $X^\theta$. If, in the inner problem in \eqref{eqn:P}, we replace the $\sup$ with $\inf$, the corresponding problem is to find a dynamic strategy that has the best risk of all portfolios within a Wasserstein ball around the benchmark. This example generalises \cite{pesenti2020portfolio} to the case of RDEU and also applies to incomplete markets. \end{example} \begin{example}[Robust Dynamic Trading Strategy.] \label{ex:dynamic} Consider the case where $X=(X_0,X_1,\dots,X_{T})$ denotes the price path of an asset at time points $0=t_0<t_1<\dots<t_{T}$, and $\varphi=[-a,a]^T$ denotes the shares bought/sold at the trading times $t_0,\dots,t_{T-1}$. For any $\phi\in\varphi$, the terminal wealth from the sequence of trades is \begin{align} X^\phi = -\textstyle\sum_{i=0}^{T-1} \phi_i \;X_i + q_T^\phi\,X_T = \textstyle\sum_{i=1}^T q_{i}^\phi\,(X_i-X_{i-1}), \end{align} where $q^\phi_i=\sum_{j=0}^{i-1} \phi_j$ is the total number of assets held at time $t_i$. Further, we set $X^\theta=H_\theta(X^\phi)$, where $H_\theta(\cdot)$ is an ANN parametrised by $\theta$. As in Example \ref{ex:Robust-Portfolio-Allocation}, this corresponds to an agent who aims to minimise over $\phi\in\varphi$ a robust measure of risk of $X^\phi$. A related work is \cite{cartea2017algorithmic}, which considers robust algorithmic trading problems using relative-entropy penalisations under a linear utility. \end{example} In the next section we derive the policy gradient formulae for the inner and outer problem. \section{Policy Gradients}\label{sec:gradients} Policy gradient methods provide a sequence of policies/actions that improve upon one another by taking steps in the direction of the value function's gradient, where the gradient is taken wrt the parameters of the policy. In this section, we derive policy gradient update rules for optimising \eqref{eqn:P} over both $\phi$ and $\theta$. In Section \ref{sub-sec-random-action}, we provide a policy gradient formula when the agent controls not the action itself, but rather its distribution. Such actions are also referred to as relaxed controls, see \cite{wang2020reinforcement,firoozi2020exploratory,guo2020entropy}. \subsection{The Inner Problem} First, we study the inner problem of \eqref{eqn:P}. To do so, we employ an augmented Lagrangian approach to incorporate the constraints.
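Before setting up the Lagrangian in detail, the following minimal Python sketch illustrates the generic optimisation loop employed here: for a fixed multiplier and penalty parameter the penalised objective is optimised (e.g., by SGD), after which the multiplier and the penalty are updated as specified in the next paragraph. The callables \texttt{objective\_grad\_step} and \texttt{constraint\_error} are hypothetical placeholders for the gradient step on $L[\theta,\phi]$ and the Wasserstein constraint error defined below; the one-dimensional $p$-Wasserstein distance itself can be estimated from sorted mini-batch samples, as noted at the end of this section.
\begin{verbatim}
import numpy as np

def wasserstein_p(x, y, p=2):
    # Empirical p-Wasserstein distance between two equal-size 1-D samples:
    # sorting both samples realises the comonotonic coupling.
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

def augmented_lagrangian(objective_grad_step, constraint_error, theta0,
                         lam=0.0, mu=1.0, a=2.0, n_outer=10, n_inner=200):
    # Generic augmented-Lagrangian loop (sketch):
    #   objective_grad_step(theta, lam, mu) -> theta after one SGD step on L
    #   constraint_error(theta)             -> c[X^theta, X^phi]
    theta = theta0
    for _ in range(n_outer):
        for _ in range(n_inner):              # optimise L for fixed (lam, mu)
            theta = objective_grad_step(theta, lam, mu)
        lam += mu * constraint_error(theta)   # multiplier update
        mu *= a                               # grow the penalty parameter
    return theta
\end{verbatim}
Sorting both samples in \texttt{wasserstein\_p} is exactly the comonotonic reordering used in the gradient formulae below.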
For this, we fix the rv $X^\phi$ and denote by $G_\phi$ and $G^{-1}_\phi$ its distribution and quantile function, respectively. We further denote the distribution and quantile function of $X^\theta$ by $F_\theta$ and $F^{-1}_\theta$, respectively. The augmented Lagrangian may then be written as \begin{align} \begin{split} L[\theta,\phi] =& {\mathcal{R}}_g^U[X^\theta] + \lambda\, c[X^\theta,X^\phi] + \tfrac{\mu}{2} \left(c[X^\theta,X^\phi]\right)^2\!\!, \end{split} \end{align} where $c[X^\theta,X^\phi]:=\left(\left(d_p[X^\theta,X^\phi]\right)^p - \varepsilon^p\right)_+$ is the $p$-Wasserstein constraint error, $(x)_+$ denotes the positive part of $x$, $\lambda$ is the Lagrange multiplier that enforces this constraint, and $\mu$ is the penalty parameter. The augmented Lagrangian approach fixes $\lambda$ and $\mu$, minimises/maximises $L[\theta,\phi]$, e.g., by using stochastic gradient descent (SGD), then updates $\lambda \leftarrow\lambda + \mu\,c[X^{\theta^*},X^\phi]$ and $\mu\leftarrow a\,\mu$ with some $a>1$. For an overview of the augmented Lagrangian approach see, e.g., \cite{birgin2014practical}, Chap. 4. While the augmented Lagrangian may be estimated from a mini-batch of simulations, optimising over the parameters $\theta$ requires gradients wrt $\theta$. Many widely used risk measures, such as CVaR, RVaR, and UTE, however, correspond to distortion functions $g$ whose derivative has discontinuities, and whenever this is the case, na\"ive back-propagation will incorrectly estimate the gradient. To overcome these potential discontinuities, we derive a gradient formula that can be estimated using mini-batch samples. \begin{proposition}[Inner Gradient Formula.] \label{prop:inner-gradient-formula} Let $X_c^\phi$ denote the version of $X^\phi$ that makes $(X^\theta,X_c^\phi)$ comonotonic -- i.e., reorder the realisations of $X_c^\phi$ according to the rank of $X^\theta$. If $g$ is left-differentiable, then \begin{align} \begin{split} \nabla_\theta L[\theta,\phi] =& {\mathbb{E}}\left[\left\{ U'\left(X^\theta\right)\gamma\left(F_\theta(X^\theta)\right) - p\,\Lambda\, |X^\theta - X_c^\phi|^{p-1} \sgn(X^\theta - X_c^\phi)\right\} \frac{\nabla_\theta F_\theta(x)|_{x=X^\theta}}{f_\theta(X^\theta)} \right], \label{eqn:L-gradient-inner} \end{split} \end{align} where $\gamma \colon (0,1) \to {\mathds{R}}_\ge$ is given by $\gamma(u):= \partial_{-} g(x)|_{x = 1 - u}$, $\partial_{-}$ denotes the derivative from the left, the constant $\Lambda:= \left( \lambda + \mu\,c[X^\theta,X^\phi]\right)\mathds{1}_{d_p[X^\theta,X^\phi]>\varepsilon}$, and $f_\theta(\cdot)$ is the density of $X^\theta$. \end{proposition} The gradient formula \eqref{eqn:L-gradient-inner} requires estimating the function $\nabla_\theta F_\theta(x)$. For this purpose, suppose we are given a mini-batch of data $\{(x_\phi^{(1)},x_\theta^{(1)}),\dots,(x_\phi^{(N)},x_\theta^{(N)})\}$ of $(X^\phi,X^\theta)$, which, e.g., may be the result of an accumulation of multiple sources of randomness (such as in dynamic trading). We then construct a kernel density estimator (KDE) $\hat{F}_\theta$ of $F_\theta$ given by \begin{equation} \hat{F}_\theta(x) = \tfrac{1}{N}\textstyle\sum_{i=1}^N \Phi\big(x- x^{(i)}_\theta\big), \end{equation} where $\Phi(\cdot)$ denotes the distribution function for an appropriate (zero-centred and standardised) kernel (e.g., Gaussian).
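To make these ingredients concrete, the following minimal sketch (in Python/NumPy, with hypothetical variable names, and with an explicit bandwidth added as an implementation choice not specified above) indicates one way the comonotonic reordering of Proposition \ref{prop:inner-gradient-formula} and the kernel estimate $\hat{F}_\theta$ might be computed from a mini-batch; it is an illustration only, not the implementation used in the experiments.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def comonotonic_reorder(x_theta, x_phi):
    # Reorder the realisations of X^phi so that (X^theta, X^phi_c) is
    # comonotonic: the i-th smallest value of x_phi is placed at the
    # position of the i-th smallest value of x_theta.
    ranks = np.argsort(np.argsort(x_theta))
    return np.sort(x_phi)[ranks]

def kde_cdf(x, samples, bandwidth=0.1):
    # Smoothed estimate of F_theta(x) built from a mini-batch using a
    # Gaussian kernel; the bandwidth is an assumed tuning parameter.
    return norm.cdf((x[:, None] - samples[None, :]) / bandwidth).mean(axis=1)

# Example usage on a synthetic mini-batch of size N = 256
rng = np.random.default_rng(0)
x_theta = rng.normal(size=256)
x_phi = rng.normal(loc=0.1, size=256)
x_phi_c = comonotonic_reorder(x_theta, x_phi)     # realisations of X^phi_c
F_hat = kde_cdf(x_theta, x_theta, bandwidth=0.1)  # estimate of F_theta at samples
\end{verbatim}
Differentiating such a smoothed estimator with respect to $\theta$ is what yields the expression that follows.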
Therefore, \begin{equation} \nabla_\theta \hat{F}_\theta(x) = -\tfrac{1}{N}\textstyle\sum_{i=1}^N \Phi^\prime\big(x-x^{(i)}_\theta\big) \,\nabla_\theta x^{(i)}_\theta, \end{equation} where $\Phi^\prime(\cdot)$ is the kernel's corresponding density. As the samples $x_\theta^{(i)}$ are viewed as outputs of an ANN, the gradients $\nabla_\theta x_\theta^{(i)}$ may be efficiently obtained using standard back-propagation techniques. Inserting the KDE into \eqref{eqn:L-gradient-inner}, we may estimate the gradient by \begin{multline} \nabla_\theta L[\theta,\phi] \approx - \frac{1}{N} \sum_{i=1}^N\left[ \Bigg( U'(x_\theta^{(i)})\gamma\left(\hat{F}_\theta(x_\theta^{(i)})\right) \right. \\ \left. - p\,\Lambda\, |x_\theta^{(i)} - x_{\phi,c}^{(i)}|^{p-1} \sgn(x_\theta^{(i)} - x_{\phi,c}^{(i)})\Bigg) \tfrac{ \sum_{j=1}^N \Phi^\prime(x_\theta^{(i)}-x_\theta^{(j)} ) \,\nabla_\theta x_\theta^{(j)} } { \sum_{k=1}^N \Phi^\prime(x_\theta^{(i)}-x_\theta^{(k)} ) } \right]\,, \label{eqn:inner-gradient-sim} \end{multline} where $x_{\phi,c}^{(1)}, \ldots x_{\phi,c}^{(N)}$ are the reordered realisations of $X^\phi$, such that they are comonotonic with $X^\theta$. The Wasserstein distance between $X^\theta$ and $X^\phi$ may be approximated using the same mini-batch as $\left(\frac1N \sum_{i = 1}^N |x_\theta^{(i)} - x_{\phi, c}^{(i)}|^p\right)^\frac1p$, see e.g., \cite{ambrosio2003lecture}[Chapter 1.]. \subsection{The Outer Problem} Similar to the inner problem, optimisation for the outer problem is carried out using the augmented Lagrangian, this time taking gradients wrt $\phi$. To calculate the derivatives, we must specify how $X^\theta$ is generated. Specifically, we assume $X^\theta=H_\theta(X^\phi,Y)$ where $Y$ is another (multi-dimensional) source of randomness. \begin{proposition}[Outer Gradient Formula.] \label{prop:outer-gradient-formula} Let $X_c^\phi$ denote the version of $X^\phi$ which makes $(X^\theta,X_c^\phi)$ comonotonic -- i.e., reorder the realisations of $X_c^\phi$ according to the rank of $X^\theta$ -- then the gradient becomes \begin{multline} \nabla_\phi L[\theta,\phi] ={\mathbb{E}}\left[ U^\prime(X^\theta)\,\gamma(F_\theta(X^\theta)) \frac{\nabla_\phi F_\theta(x)|_{x=X^\theta}}{f_\theta(X^\theta)} \right. \\ \left. -p\,\Lambda\, |X^\theta - X_c^\phi|^{p-1} \sgn(X^\theta - X_c^\phi) \left(\frac{\nabla_\phi F_\theta(x)|_{x=X^\theta}}{f_\theta(X^\theta)}+ \frac{\nabla_\phi G_\phi(x)|_{x=X^\phi}}{g_\phi(X^\phi)}\right) \right], \label{eqn:L-gradient-outer} \end{multline} where the constant $\Lambda:= (\lambda + \mu\,c[X^\theta,X^\phi]^p)\mathds{1}_{d_p[X^\theta,X^\phi]>\varepsilon}$ and $g_\phi(\cdot)$ is the density of $X^\phi$. \end{proposition} As in the previous section, given a mini-batch $\{(x_\phi^{(1)},x_\theta^{(1)},y^{(1)}),\dots,(x_\phi^{(N)},x_\theta^{(N)},y^{(N)})\}$ of \linebreak $(X^\phi,X^\theta,Y)$, we may estimate the gradient by \begin{multline} \nabla_\phi L[\theta,\phi] \approx -\frac{1}{N} \sum_{i=1}^N\left[ U^\prime(x_\theta^{(i)}) \gamma\left(\hat{F}_\theta(x_\theta^{(i)})\right) \tfrac{ \sum_{j=1}^N \Phi^\prime(x_\theta^{(i)}-x_\theta^{(j)} ) \,\nabla_\phi x_\theta^{(j)} } { \sum_{k=1}^N \Phi^\prime(x_\theta^{(i)}-x_\theta^{(k)} ) } \right. \\ \left. 
-p\,\Lambda |x_\theta^{(i)} - x_{\phi,c}^{(i)}|^{p-1}\sgn(x_\theta^{(i)} - x_{\phi,c}^{(i)})\, \left(\tfrac{ \sum_{j=1}^N \Phi^\prime(x_\theta^{(i)}-x_\theta^{(j)} ) \,\nabla_\phi x_\theta^{(j)} } { \sum_{k=1}^N \Phi^\prime(x_\theta^{(i)}-x_\theta^{(k)} ) } + \tfrac{ \sum_{j=1}^N \Phi^\prime(x_{\phi,c}^{(i)}-x_{\phi,c}^{(j)} ) \,\nabla_\phi x_{\phi,c}^{(j)} } { \sum_{k=1}^N \Phi^\prime(x_{\phi,c}^{(i)}-x_{\phi,c}^{(k)} ) }\right) \right]\,, \label{eqn:outer-gradient-sim} \end{multline} where we use for simplicity the same kernel for $F_\theta$ and $G_\phi$. The gradients $\nabla_\phi x_{\phi,c}^{(j)}$ and $\nabla_\phi x_\theta^{(j)}$ may be computed using the relationship $x_\theta^{(j)}=H_\theta(x^{(j)}_\phi,y^{(j)})$ and back-propagation. Algorithm \ref{alg:generic} provides an overview of the optimisation methodology. \subsection{Randomised Policies}\label{sub-sec-random-action} \begin{wrapfigure}{r}{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.65,every node/.style={transform shape}] \node[obs, minimum size=3em] (x0) at (-2,0) {$x_{0}$}; \node[action, minimum size=3em] (a0) at (-1,1.5) {$a_0$}; \node[obs, minimum size=3em] (x1) at (0,0) {$x_1$}; \node[action, minimum size=3em] (a1) at (1,1.5) {$a_1$}; \node[obs, minimum size=3em] (x2) at (2,0) {$x_2$}; \node (x3) at (3.5,0) {$\dots$}; \draw [->] (x2) to (x3); \node [obs, minimum size=3em] (x4) at (5,0) {$x_{n-1}$}; \node [action, minimum size=3em] (a4) at (6,1.5) {$a_{n-1}$}; \draw [->] (x3) to (x4); \node [obs, minimum size=3em] (x5) at (7,0) {$X^\phi$}; \draw [->] (x0) to (a0); \draw [->] (a0) to (x1); \draw [->] (x0) to (x1); \draw [->] (x1) to (a1); \draw [->] (a1) to (x2); \draw [->] (x1) to (x2); \draw [->] (x4) to (a4); \draw [->] (a4) to (x5); \draw [->] (x4) to (x5); \end{tikzpicture} \caption{Graphical model representation of randomised policies. \label{fig:random-policies}} \end{wrapfigure} There are many instances where optimising over randomised (also known as probabilistic) policies allows one to explore the state-action space better (and, hence, obtain better model estimates), or where randomised policies are the only viable option. As such, we briefly discuss how the results in Propositions \ref{prop:inner-gradient-formula} and \ref{prop:outer-gradient-formula} may be applied in the randomised policy case. For example, it is often the case that the terminal rv $X^\phi$ (of the outer problem) stems from a sequence of actions $a_{0:n-1}$ that are conditionally generated from the previous system states $x_{0:n}$, where $x_n = X^\phi$, as in the graphical model in Figure \ref{fig:random-policies}. Hence, the probability density function (pdf) over the sequence of state/action pairs admits the decomposition \begin{equation} g_\phi(x_{0:n},a_{0:n-1}) = h(x_0)\textstyle\prod_{t=0}^{n-1} \pi_\phi(a_t|x_t)\, h(x_{t+1}|x_t,a_t), \end{equation} where $h(x_{t+1}|x_t,a_t)$ specifies the conditional one-step transition densities, $h(x_0)$ is the prior on $x_0$, and $\pi_\phi(a_t|x_t)$ is the pdf of actions conditioned on states; a minimal sketch of one such policy and its accumulated score appears below.
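As an illustration of how such a randomised policy may be represented in practice, the following minimal sketch (Python/NumPy, with a hypothetical linear-Gaussian policy whose mean is an affine function of the state; in practice the mean and standard deviation would typically be parameterised by an ANN) shows how the per-step log-densities $\log\pi_\phi(a_t|x_t)$ and their accumulated score $\sum_t \nabla_\phi\log\pi_\phi(a_t|x_t)$ can be computed along a simulated path; this accumulated score is precisely the quantity that enters the gradient expression derived next.
\begin{verbatim}
import numpy as np

def gaussian_logpdf(a, mean, sigma):
    # log pi_phi(a|x) for a Gaussian action distribution N(mean, sigma^2)
    return -0.5 * np.log(2.0 * np.pi * sigma**2) \
           - 0.5 * ((a - mean) / sigma) ** 2

def path_score(states, actions, phi, sigma=0.1):
    # Accumulated score sum_t grad_phi log pi_phi(a_t|x_t) for a
    # hypothetical policy with mean phi[0] + phi[1] * x_t and fixed sigma.
    score = np.zeros(2)
    for x, a in zip(states, actions):
        mean = phi[0] + phi[1] * x
        score += ((a - mean) / sigma**2) * np.array([1.0, x])
    return score

# Example usage on a short simulated path
phi = np.array([0.0, 0.5])
states = np.array([1.0, 0.8, 1.1])
actions = phi[0] + phi[1] * states \
          + 0.1 * np.random.default_rng(1).normal(size=3)
print(path_score(states, actions, phi))
\end{verbatim}
For ANN-parameterised means and standard deviations, the same accumulated score is obtained by back-propagating through the network outputs along the sampled paths.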
To compute the gradient $\nabla_\phi G_\phi(x)$ we use the above decomposition and note that \begin{equation} G_{\phi}(x) = \textstyle\int_{{\mathds{R}}}\dots\int_{{\mathds{R}}}\int_{-\infty}^x h(x_0)\prod_{t=0}^{n-1} \pi_\phi(a_t|x_t)\, h(x_{t+1}|x_t,a_t)\;dx_0\dots dx_{n-1}\, dx_n, \end{equation} Therefore, its gradient becomes \begin{align*} & \nabla_\phi G_\phi(x) \nonumber \\ \;&= \int_{{\mathds{R}}}\dots\int_{{\mathds{R}}}\int_{-\infty}^x\sum_{t'=0}^{n-1} \nabla_\phi \pi_\phi(a_{t'}|x_{t'})\; h(x_0)\prod_{\substack{t=0\\t\ne t'}}^{n-1} \pi_\phi(a_t|x_t)\, h(x_{t+1}|x_t,a_t)\;dx_0\dots dx_{n-1}\, dx_n \\ \;&= \int_{{\mathds{R}}}\dots\int_{{\mathds{R}}}\int_{-\infty}^x \left(\sum_{t'=0}^{n-1} \nabla_\phi \log \pi_\phi(a_{t'}|x_{t'})\right)h(x_0)\prod_{t=0}^{n-1} \pi_\phi(a_t|x_t)\, h(x_{t+1}|x_t,a_t)\;dx_0\dots dx_{n-1}\, dx_n \\ \;&= {\mathbb{E}}\left[\sum_{t=0}^{n-1} \nabla_\phi \log \pi_\phi(a_{t}|x_{t})\;{\mathds{1}}_{\{X^\phi\le x\}}\right]. \end{align*} In the last line, $(a_t,x_t)_{t=0,\dots,n}$ should be understood as rvs corresponding to the outputs of all nodes in the graphical model in Figure \ref{fig:random-policies}. Thus, using a KDE approximation from samples of state-action sequences $\{(x^{(m)}_0,a_0^{(m)},\dots, x^{(m)}_{n-1}, a_{n-1}^{(m)},x^{(m)}_n)_{m=1,\dots N}\}$, we may estimate the gradient by \begin{equation} \nabla_\phi \hat{G}_\phi(x) \approx \tfrac{1}{N}\textstyle\sum_{m=1}^N \textstyle\sum_{t=0}^{n-1} \nabla_\phi \log \pi_\phi(a_{t}^{(m)}|x_{t}^{(m)})\, \Phi(x^{(m)}_n-x). \end{equation} To obtain a more explicit form, one must specify how actions are drawn using a given policy, e.g., they may be normally distributed with mean and standard deviation parameterised by an ANN. The remaining gradient $\nabla_\phi \log \pi_\phi(a|x)$ may then be computed using back-propagation along the sampled mini-batch of paths. Similar calculations can be performed to derive formulae for $\nabla_\phi F_\theta$, note that $\nabla_\theta F_\theta$ does not have a gradient wrt actions. \begin{algorithm}[t] \scriptsize \SetAlgoLined initialise networks $\theta,\phi$; initialise Lagrangian multipliers $\lambda = 1$, $\mu = 10$, $\alpha = 1.5$; \For{$i\leftarrow 1$ \KwTo $M_{outer}$} { Simulate mini-batch of $X^\phi$; \For{$j\leftarrow 1$ \KwTo $M_{inner}$} { Simulate mini-batch of $X^\theta$ using fixed $X^\phi$ in outer loop; Estimate inner gradient $\nabla_\theta L[\theta,\phi]$ using \eqref{eqn:inner-gradient-sim}; Update network $\theta$ using a ADAM step; \If{$ (j + 1) \% N_{Lagrange} = 0 $}{ Update multipliers: $\lambda \leftarrow\lambda + \mu\,c(\theta^*)$ and $\mu\leftarrow \alpha\,\mu $; } Repeat until $d_p[X^\theta,X^\phi]\le \varepsilon$, and ${\mathcal{R}}^U_{\gamma}[X^\theta]$ has not increased for the past 100 iterations; } Simulate mini-batch of $X^\theta$ from $X^\phi$ and trained $\theta$ network; Estimate outer gradient $\nabla_\phi L[\theta,\phi]$ using \eqref{eqn:outer-gradient-sim}; Update network $\phi$ using a ADAM step; Repeat until ${\mathcal{R}}^U_{\gamma}[X^\theta]$ has not decreased for the past 100 iterations; } \caption{Schematic of optimisation algorithm.\label{alg:generic}} \end{algorithm} \section{Examples}\label{sec:example} Here, we illustrate the three prototypical examples described earlier. 
For this, the investor's RDEU is a combination of a linear utility and an $\alpha$-$\beta$ distortion given by: \begin{equation}\label{eqn:gamma} \gamma(u) = \tfrac{1}{\eta}\left( p\,{\mathds{1}}_{\{u\le \alpha\}} + (1-p)\,{\mathds{1}}_{\{u>\beta\}}\right), \end{equation} with normalising constant $\eta=p\,\alpha+(1-p)\,(1-\beta)$, $0<\alpha\le\beta<1$, and $p\in[0,1]$. This parametric family is $U$-shaped (i.e., $S$-shaped RDEU), which is well-known to account for the investor's loss avoiding while simultaneously risk-seeking behaviour, and contains several notable risk measures as special cases. For $p = 1$, it reduces to the CVaR at level $\alpha$, for $p = 0$, to the upper tail expectation (UTE) at level $\beta$, and for $p>\frac{1}{2}$ ($p<\frac{1}{2}$), it emphasises losses (gains) relative to gains (losses). For all experiments, unless otherwise stated, we use $\alpha=0.1$, $\beta=0.9$, and $p=0.75$ to showcase how investors protect themselves from downside risk while still seeking gains. In the examples below, before computing the outer gradient, we ensure that constraints of the inner problem are binding, so that $\Lambda = 0$ in \eqref{eqn:L-gradient-outer}. Furthermore, while it is easy to incorporate transaction costs in all of these examples, we opted to exclude them for simplicity of the settings. Example code may be found at \url{https://github.com/sebjai/robust-risk-aware-rl}. \subsection{Robust Portfolio Allocation} In this subsection, we illustrate the results on a problem introduced in Example \ref{ex:Robust-Portfolio-Allocation}. We take the setup from \cite{esfahani2018data} where the market consists of $d$-assets whose returns are driven by a systematic factor $\zeta\sim\mathcal{N}(0, 0.02^2)$ and idiosyncratic factors $Z_i\sim\mathcal{N}(0.03\,i,0.025^2\,i^2)$, $i \in {\mathcal{D}}:=\{1,\dots,d\}$, where the factors $\zeta,Z_1,\dots,Z_d$ are mutually independent. The individual returns are $R_i=\zeta+Z_i$ and the total return is $X^\phi=\phi^\intercal R$. \begin{wrapfigure}{r}{0.6\textwidth} \centering \includegraphics[align=c,height=0.2\textheight]{Figures/portfolio_allocation/X_phi_distr_ab.pdf} \includegraphics[align=c,height=0.19\textheight]{Figures/portfolio_allocation/phi_weights_ab.pdf} \caption{Robust portfolio allocations as the size of the Wasserstein ball $\varepsilon$ varies. (Left) densities of terminal wealth, and (right) percentage of wealth held in each asset. Dashed vertical lines: $CVaR_\alpha$ and $UTE_\beta$.} \label{fig:RobustPortfolioAllocation} \end{wrapfigure} Such a model can easily be generalised to include several systemic factors. We model the outer strategy $\pi^\phi$ as an ANN that maps a zero tensor directly to the asset weights $\phi$ through a softmax activation function (to avoid short-selling: $\phi_i\ge0$, $i\in{\mathcal{D}}$, $\sum_{i\in{\mathcal{D}}}\phi_i=1$) of the learned bias. We use the Wasserstein distance of order $p=1$. Figure \ref{fig:RobustPortfolioAllocation} illustrates the optimal terminal wealth (left panel) and percentage of investment as a function of the size of the Wasserstein ball $\varepsilon$ (right panel), for $d=10$ assets. For larger $\varepsilon$, the investor seeks more robustness, which is illustrated in the left panel that shows that all $CVaR_\alpha$ ($UTE_\beta$) move to the left as $\varepsilon$ increases. Specifically, for $\varepsilon$ equal to $10^{-3}$,$10^{-2}$, and $10^0$, we have $CVaR_\alpha=$ $0.073$, $0.082$, and $0.08$ and $UTE_\beta=$ ($0.41$, $0.34$, $0.29$). 
These statistics indicate that the investor becomes more and more conservative with increasing Wasserstein distance. This is further emphasised in the right panel, where, for small $\varepsilon$, the investor puts most of their wealth in the riskier assets, and as $\varepsilon$ increases, moves closer to an equally weighted portfolio. For completeness, the $CVaR_\alpha$ ($UTE_\beta$) of the worst-case distribution around the optimal $X^\phi$, as $\varepsilon$ varies from $10^{-3}$,$10^{-2}$, $10^0$ are $0.071$, $-0.012$, and $-9.942$ ($0.401$, $0.341$, $0.285$), respectively. \subsection{Optimising Risk-Measures with a Benchmark} Next, we illustrate our RL approach on a portfolio allocation problem where an agent aims to improve upon a benchmark strategy, as described in Example \ref{ex:benchmark}. The outer problem has a singleton corresponding to a benchmark strategy $\phi$ -- which we take to be a constant proportion of wealth strategy (any other benchmark strategy would do) -- and the agent aims to seek over alternate strategies that minimise RDEU, i.e. replacing sup with inf in the inner problem. We optimise in discrete time $\mathcal{T}:=\{0,1,\dots,T\}$ and consider an investor who chooses from the set of admissible strategies ${\mathcal{A}}$ consisting of $({\mathcal{F}}_t)_{t\in\mathcal{T}}$-adapted Markov processes that are self-financing and in ${\mathbb{L}^p}(\Omega,[0,T])$. For an arbitrary $\pi_\theta\in{\mathcal{A}}$, where $\pi_\theta:=((\pi_{\theta,t}^{i})_{i\in{\mathcal{D}}})_{t\in\mathcal{T}}$, ${\mathcal{D}}:=\{1,\dots,d\}$, represents the percentage of wealth invested in each asset, the investor's wealth process $X^{\pi_\theta}:=(X^{\pi_\theta}_t)_{t\in\mathcal{T}}$ satisfies the usual self-financing equation. To illustrate the flexibility of our formulation, we use a stochastic interest rate model combined with a constant elasticity of variance (SIR-CEV) market model. The details of the market model dynamics and parameters may be found in \cite{pesenti2020portfolio}. Here, we use the Wasserstein distance of order $p=2$. We use a fully connected feed-forward neural network with 3 hidden layers of 50 neurons each with ReLU activation functions to map the state consisting of the features $\{t, S_t, X_t^{\phi}\}$ to policy $\pi_{\theta,t}$. The final output layer may be chosen to reflect constraints on the portfolio weights -- e.g., long only weights with no leveraging, in which case one would use a sigmoid activation function. For the results shown here we have no activation for the final layer and thus allow the agent to take long, short, and leveraged positions. \begin{wrapfigure}{r}{0.6\textwidth} \centering \includegraphics[height=0.2\textheight]{Figures/benchmark/Terminal_Distrib_new.pdf} \includegraphics[height=0.2\textheight]{Figures/benchmark/Terminal_Mapping_new.pdf} \vspace*{-1em} \caption{Illustrations of the terminal wealth rv of the benchmark's $X_T^\phi$ and of the optimal's $X_T^{\pi_\theta}$. Vertical lines indicate the $CVaR_{\alpha}$ and $UTE_\beta$.} \label{fig:benchmark} \end{wrapfigure} Figure \ref{fig:benchmark} illustrates the resulting optimal portfolio $\pi_\theta$ for a constant proportion benchmark strategy $\phi$. The left panel shows the density of the benchmark $X_T^\phi$ and optimal $X_T^{\pi_\theta}$ terminal wealth. The optimal pulls mass from the left tails into a spike near $VaR_\alpha$ and pushes mass into the right tail. This reflects the investor's risk preferences. 
The right scatter plot shows the state-by-state dependence between the optimal and the benchmark. The results are qualitatively similar to those derived in \cite{pesenti2020portfolio}, even though here the problem is posed in discrete time. While the analytical approach in \cite{pesenti2020portfolio} applies only for complete market models, the proposed RL approach is also applicable to incomplete markets. \subsection{Robust Statistical Arbitrage} In this subsection, we explore Example \ref{ex:dynamic}. In this case, the outer strategy $\pi^\phi=(\pi^\phi_t)_{t\in\mathcal{T}}$ denotes the position the trader holds at discrete times $\mathcal{T}=\{0,\Delta T,\dots,N\Delta T\}$. We assume the asset price process $S=(S_t)$ is an Ornstein-Uhlenbeck process satisfying $dS_t=\kappa(b-S_t)\,dt+\sigma\,dW_t$ with $\kappa=5$, $b=1$, $\sigma=0.8$, $N=252$, and $\Delta T=\frac{1}{252}$. The outer strategy $\pi^\phi$ is determined by a fully connected feed-forward neural network with 3 hidden layers of 50 neurons each with ReLU activation functions, and takes the state consisting of the features $\{t, S_t, q_{t-1}\}$ as input. The final output layer is chosen to reflect the constraints on the inventory. Here, we use a $5\tanh$ activation function to constrain the strategy such that inventory remains in the interval $[-5,5]$. We use the Wasserstein distance of order $p=2$. \begin{wrapfigure}{r}{0.6\textwidth} \centering \includegraphics[height=0.2\textheight]{Figures/stat-arb/Three_Comparison_fixed.pdf} \includegraphics[height=0.2\textheight]{Figures/stat-arb/Three_Worst_Comparison_fixed.pdf} \vspace*{-1em} \caption{ Optimal statistical arbitrage strategy's terminal densities (left) and corresponding worst-case densities (right). Vertical dashed lines show the corresponding $CVaR_{0.1}$.\label{fig:stat-arb-} } \end{wrapfigure} Figure \ref{fig:stat-arb-} show results for the $\alpha$-$\beta$ risk measure for $p=1$ ($CVaR_\alpha$), $p=0.9$, and $p=0.75$. The left panel shows the optimal densities as $p$ varies and the right panel the corresponding worst-case densities that result from solving the inner problem with the optimal as the reference distribution. For increasing $p$, the agent puts more and more weight on the upper tail and the optimal distribution becomes more profitable, but also more risky (the $CVaR_{0.1}$ moves to the left). In Figure \ref{fig:stat-arb-optimal-strategy}, we illustrate the optimal execution strategies at time $t = 0.75\,T$ through a heat map and as a function of current inventory and asset price. The colours indicate the optimal trade -- e.g., a location in deep red indicates short selling of $10$ units of the asset. As $p$ decreases the agent becomes more gain-seeking and starts taking more aggressive actions to take advantage of the mean-reversion of asset prices. \begin{figure}[h] \centering \begin{minipage}[t]{0.2\textheight} \centering \includegraphics[width=\textwidth]{Figures/stat-arb/p_1.pdf} \\[-0.5em] $p=1$ \end{minipage} \begin{minipage}[t]{0.2\textheight} \centering \includegraphics[width=\textwidth]{Figures/stat-arb/p_0.9.pdf} \\[-0.5em] $p=0.9$ \end{minipage} \begin{minipage}[t]{0.2\textheight} \centering \includegraphics[width=\textwidth]{Figures/stat-arb/p_0.75.pdf} \\[-0.5em] $p=0.75$ \end{minipage} \vspace*{-0.5em} \caption{Optimal statistical arbitrage strategy at $t=0.75\,T$ for various values of $p$ of the $\alpha$-$\beta$ risk measure. 
\vspace*{-1em}} \label{fig:stat-arb-optimal-strategy} \end{figure} \section{Conclusions and Future Work} We pose a generic robust risk-aware optimisation problem, develop a policy gradient approach for numerically solving it, and illustrate its tractability on three prototypical examples in financial mathematics. While the approach appears to work well on a collection of different problems, there are several avenues still open for investigation, such as under what conditions on the RDEU, the controlled rv $X^\phi$, and the uncertainty set $\vartheta_\theta$, is the problem well-posed, as well as establishing the convergence for the policy gradient method itself. We believe that the generality of our proposed RL framework opens doors to help solving a host of other problems, including, e.g., robust hedging of derivative contracts and robustifying optimal timing of irreversible investments. One further issue worth illuminating, is that, while the approach applies to dynamic decision making (such as in Examples 1 and 2), as RDEU is not a dynamically time consistent risk measure, the optimal strategies may not be time consistent. Hence, there is also need for developing an RL approach for robustifying time-consistent dynamic risk measures. \bibliographystyle{siamplain}
\section{Introduction} \label{intro} Optimization problems with constraints which require the solution of a partial differential equation (PDE) arise widely in many areas of the sciences and engineering, in particular in problems of design. The development, analysis and implementation of efficient and robust numerical techniques for PDE constrained optimization is of utmost importance for the optimal control of processes and the optimal design of structures and systems in modern technology. In recent years, a high level of sophistication has been reached for PDE constrained optimization. We refer to the contributions in \cite{HiPiUl,WaWa,BeItKu,PreconditioningforL1control} and many further references given therein. In this paper, we shall focus on the efficient numerical methods to solve the following elliptic PDE-constrained optimization problem with $L^1$-control cost \begin{equation}\label{eqn:orginal problems} \qquad \left\{ \begin{aligned} &\min \limits_{(y,u)\in Y\times U}^{}\ \ J(y,u)=\frac{1}{2}\|y-y_d\|_{L^2(\Omega)}^{2}+\frac{\alpha}{2}\|u\|_{L^2(\Omega)}^{2}+\beta\|u\|_{L^1(\Omega)} \\ &\qquad{\rm s.t.}\qquad Ly=u+y_r\ \ \mathrm{in}\ \Omega, \\ &\qquad \qquad \qquad y=0\qquad\quad \mathrm{on}\ \partial\Omega,\\ &\qquad \qquad\qquad u\in U_{ad}=\{v(x)|a\leq v(x)\leq b,\ {\rm a.e. }\ \mathrm{on}\ \Omega\}\subseteq U, \end{aligned} \right.\tag{$\mathrm{P}$} \end{equation} where $Y:=H_0^1(\Omega)$, $U:=L^2(\Omega)$, $\Omega\subseteq \mathbb{R}^n$ ($n=2$ or $3$) is a convex, open and bounded domain with $C^{1,1}$- or polygonal boundary $\Gamma$; the desired state $y_d\in L^2(\Omega)$ and the source term $y_r \in L^2(\Omega)$ are given; and $a\leq0\leq b$ and $\alpha$, $\beta>0$. Moreover, the operator $L$ is a second-order linear elliptic differential operator. It is well-known that {adding the $L^1$-norm penalty} can lead to a sparse optimal control, i.e., the optimal control with small support, which is desirable, for instance, in actuator placement problems \cite{Stadler}. In optimal control of distributed parameter systems, it may be difficult or undesirable to place control devices all over the control domain. Instead, we can decide to localize controllers in small and effective regions. Thus, solving the control problem with an $L^1$-norm {penalty on} the control will give us information about the optimal location {to place} the control devices. {Throughout this paper, the elliptic PDE is given in the following form} \begin{equation}\label{eqn:state equations} \begin{array}{ccccc} &Ly=u+y_r &\mathrm{in} && \Omega \\ &y=0 &\mathrm{on} & &\partial\Omega \end{array} \end{equation} which satisfies the following assumption. \begin{assumption}\label{equ:assumption:1} The linear second-order differential operator $L$ is defined by \begin{equation}\label{operator A} (Ly)(x):=-\sum \limits^{n}_{i,j=1}\partial_{x_{j}}(a_{ij}(x)y_{x_{i}})+c_0(x)y(x), \end{equation} where functions $a_{ij}(x), c_0(x)\in L^{\infty}(\Omega)$, $c_0\geq0$. Moreover, it is uniformly elliptic, i.e. $a_{ij}(x)=a_{ji}(x)$ and there is a constant $\theta>0$ such that \begin{equation}\label{equ:operator A coercivity} \sum \limits^{n}_{i,j=1}a_{ij}(x)\xi_i\xi_j\geq\theta\|\xi\|^2, \qquad \mathrm{for\ a.a.}\ x\in \Omega\ \mathrm{and}\ \forall \xi \in \mathbb{R}^n. 
\end{equation} {In the above, $y_{x_i}$ denotes the partial derivative of $y(\cdot)$ with respect to $x_i$.} \end{assumption} The weak formulation of (\ref{eqn:state equations}) is given by \begin{equation}\label{eqn:weak form} \mathrm{Find}\ y\in H_0^1(\Omega):\ a(y,v)={\langle u+y_r,v\rangle_{L^2(\Omega)}},\quad \forall v \in H_0^1(\Omega), \end{equation} with the bilinear form \begin{equation}\label{eqn:bilinear form} a(y,v)=\int_{\Omega}(\sum \limits^{n}_{i,j=1}a_{ji}y_{x_{i}}v_{x_{j}}+c_0yv)\mathrm{d}x, \end{equation} or in short $ Ay=B(u+y_r)$, where $A\in \mathcal{L}(Y,Y^*)$ is the operator induced by the bilinear form $a$, i.e., $Ay=a(y,\cdot)$ and $B\in \mathcal{L}(U,Y^*)$ is defined by $Bu={\langle u,\cdot\rangle_{L^2(\Omega)}}$. Since the bilinear form $a(\cdot,\cdot)$ is symmetric and $U,Y$ are Hilbert spaces, we have $A^*\in\mathcal{L}(Y,Y^*)=A$, and $B^*\in\mathcal{L}(Y,U)$ with $B^*v=v, \forall v\in Y$. \begin{remark}\label{more general case} Although we assume that the Dirichlet boundary condition $y=0$ holds, it should be noted that the assumption is not a restriction and our considerations can also carry over to the more general boundary conditions of Robin type: \begin{equation*} \frac{\partial y}{\partial \nu}+\gamma y=g \quad {\rm on}\ \partial\Omega, \end{equation*} where $g\in L^2(\partial\Omega)$ is given and $\gamma\in L^{\infty}(\partial\Omega)$ is a nonnegative coefficient. \end{remark} Let us mention some existing numerical methods for solving problem (\ref{eqn:orginal problems}). {For the nonsmooth problem (\ref{eqn:orginal problems}), semismooth Newton (SSN) methods are {the primary} choices in consideration of their locally superlinear convergence, see \cite{Ulbrich1,Ulbrich2,HiPiUl} for more details.} With no doubt, employing the SSN method can derive the solution with {a high accuracy}. However, it should be mentioned that the total error of numerically solving the PDE-constrained problem contains two parts: the discretization error and the iteration error resulted from an algorithm of solving the discretized problem. Obviously, the discretization error accounts for the main part of the total error due to the error order of $O(h)$. Thus, {with the precision of discretization error in mind, algorithms {for very accurately solving the discretized problem} may not reduce the order of the total error but may incur extra computations.} {As one may have observed, for finite dimensional large scale optimization problems, some efficient first-order algorithms, such as iterative soft thresholding algorithms (ISTA), accelerated proximal gradient (APG)-based method, alternating direction method of multipliers (ADMM), etc, have become very popular in situations when high accuracy is not sought, see \cite{Blumen,inexactAPG,Beck,Toh,Fazel,SunToh1,SunToh2} and {the} references therein.} Hence, employing fast and efficient first-order algorithms with the aim of solving problem (\ref{eqn:orginal problems}) to moderate accuracy is a wise choice. Motivated by the success of some first-order optimization algorithms for finite dimensional optimization problems, to solve problem (\ref{eqn:orginal problems}), the authors \cite{iwADMM} employ an inexact semi-proximal ADMM (isPADMM) algorithm designed in \cite{SunToh1}. Recently, an APG method was proposed to solve (\ref{eqn:orginal problems}) in \cite{FIP}, {which has the highly desirable iteration complexity of $O(1/k^2)$.} As far as we know, most of the aforementioned papers are devoted to solve the primal problem. 
However, when the primal problem (\ref{eqn:orginal problems}) is discretized by the piecewise linear finite elements and directly solved by some algorithms mentioned above, e.g., SSN, isPADMM and APG, the resulting discretized $L^1$-norm \begin{eqnarray*}\label{equ:discrete norm} \|u_h\|_{L^1(\Omega_h)}&=&\int_{\Omega_h}\big{|}\sum\limits_{i=1}^{N_h}u_i\phi_i(x)\big{|}\mathrm{d}x, \end{eqnarray*} does not have a decoupled form. To overcome this difficulty, one approach, used in \cite{WaWa,iwADMM}, employs an alternative discretization of the $L^1$-norm \begin{eqnarray*} \|u_h\|_{L^{1}_h(\Omega_h)}&:=&\sum_{i=1}^{N_h}|u_i|\int_{\Omega_h}\phi_i(x)\mathrm{d}x. \end{eqnarray*} For this approximate $L^1$-norm, the authors proved that the approximation technique does not change the order of the finite element error estimates. Another approach was introduced by Song, Chen and Yu in \cite{mABCDSOPT}, who proposed a duality-based approach for solving problem (\ref{eqn:orginal problems}). Taking advantage of the structure of the dual problem, the authors proposed an inexact symmetric Gauss-Seidel based majorized ABCD (sGS-imABCD) method to solve the discretized dual problem. It should be emphasized that the design of this method combines an inexact 2-block majorized accelerated block coordinate descent (mABCD) method proposed by Cui in \cite{CuiYing} and the recent advances in the inexact symmetric Gauss-Seidel (sGS) decomposition technique developed in \cite{SunToh2,SunToh3}. In this paper, we will continue to focus on the majorized ABCD algorithm. The majorized ABCD method was originally developed for finite dimensional problems. However, when the majorized ABCD algorithm is applied to optimization problems with PDE constraints, some new aspects become important. In particular, a key issue that should be considered is how various measures of the convergence behavior of the iteration sequence vary with the level of approximation. Such questions come under the category of mesh-independence results. It should be pointed out that mesh independence allows us to predict the convergence of the method when applied to the discretized problem after it has been analyzed for the infinite dimensional problem. Further, it can be used to improve the performance of the method. Specifically, we can use a prolongated solution on a coarse grid as a good initialization for a finer discretization, which leads to mesh-refinement strategies. Mesh independence is a theoretical justification for such mesh-refinement strategies. More importantly, in \cite{mABCDSOPT}, the numerical results in terms of the iteration numbers of the mABCD method show that the majorized ABCD method is robust with respect to the mesh size $h$. This phenomenon gives us strong motivation to establish the mesh independence of the majorized ABCD method, which is the main contribution of this paper. To achieve our goal, we first apply the majorized ABCD algorithm on the continuous level for solving the infinite dimensional dual problem of (\ref{eqn:orginal problems}). Specifically, we will first give a framework of the majorized ABCD algorithm in function space, to focus the presentation on structural aspects inherent in the majorized ABCD algorithm, and analyze its convergence property.
Then, for the purpose of numerical implementation, a finite element discretized version of the majorized ABCD algorithm is proposed. Finally, by comparing the convergence results of the majorized ABCD algorithm in function space with those of its discretized version, one type of mesh independence for the majorized ABCD method is established. The result shows that the iteration number $k$ after which the difference $\Phi_h(z^k)-\inf\Phi_h(z)$ has been reduced below $\epsilon$ is independent of the mesh size $h$. In other words, we will show that the ``discretized'' convergence factor $\tau_h$ defined in the convergence theorem can be bounded by the ``continuous'' convergence factor $\tau$. The remainder of the paper is organized as follows. In Section \ref{sec:2}, we give a majorized accelerated block coordinate descent (mABCD) method in Hilbert space. For the purpose of numerical implementation, in Section \ref{sec:3} the finite element approximation is introduced and the finite element discretization of the mABCD method is also given. In Section \ref{sec:4}, we show the mesh independence result of the mABCD method for the sparse PDE-constrained optimization problem (\ref{eqn:orginal problems}). Finally, we conclude our paper in Section \ref{sec:5}. \section{Duality-based approach} \label{sec:2} In this section, we will introduce the duality-based approach to solve problem (\ref{eqn:orginal problems}). First, we will give the dual problem of (\ref{eqn:orginal problems}). Then, to solve the dual problem, we will propose a framework of the majorized ABCD algorithm in function space and focus the presentation on the structural aspects inherent in the majorized ABCD algorithm. \subsection{\textbf{Dual of problem (\ref{eqn:orginal problems})}}\label{sec:2.1} With simple calculations, the dual of problem (\ref{eqn:orginal problems}) can be written, in its equivalent minimization form, as \begin{equation}\label{eqn:dual problem} \begin{aligned} \min\ \Phi(\lambda,p,\mu):=&\frac{1}{2}\|A^*p-y_d\|_{L^2(\Omega)}^2+ \frac{1}{2\alpha}\|p-\lambda-\mu\|_{L^2(\Omega)}^2+\langle p, y_r\rangle_{L^2(\Omega)}\\ &+\delta_{\beta B_{\infty}(0)}(\lambda)+\delta^*_{U_{ad}}(\mu)-\frac{1}{2}\|y_d\|_{L^2(\Omega)}^2,\end{aligned}\tag{$\mathrm{D}$} \end{equation} where $p\in H^1_0(\Omega)$, $\lambda,\mu\in L^2(\Omega)$, $B_{\infty}(0):=\{\lambda\in L^2(\Omega): \|\lambda\|_{L^\infty(\Omega)}\leq 1\}$, and for any given nonempty, closed convex subset $C$ of $L^2(\Omega)$, $\delta_{C}(\cdot)$ is the indicator function of $C$. Based on the $L^2$-inner product, we define the conjugate of $\delta_{C}(\cdot)$ as follows: \begin{equation*} \delta^*_{C}(s^*)=\sup\limits_{s\in C}{\langle s^*,s\rangle}_{L^2(\Omega)}.
\end{equation*} Obviously, by choosing $v=(\lambda, p)$, $w=\mu$ and taking \begin{eqnarray} f(v) &=&\delta_{\beta B_{\infty}(0)}(\lambda)+{\frac{1}{2}\|A^*p-y_d\|_{L^2(\Omega)}^2}+\langle p, y_r\rangle_{L^2(\Omega)}-\frac{1}{2}\|y_d\|_{L^2(\Omega)}^2\label{f function for D},\\ g(w) &=& \delta^*_{U_{ad}}(\mu)\label{g function for D}, \\ \phi(v, w) &=&\frac{1}{2\alpha}\|p-\lambda-\mu\|_{L^2(\Omega)}^2\label{phi function for D}, \end{eqnarray} it is quite clear that our dual problem (\ref{eqn:dual problem}) belongs to a general class of unconstrained, multi-block convex optimization problems with coupled objective function, that is \begin{equation}\label{eqn:model problem} \begin{aligned} \min_{v, w} \theta(v,w):= f(v)+ g(w)+ \phi(v, w), \end{aligned} \end{equation} where $f: \mathcal{V}\rightarrow (-\infty, +\infty ]$ and $g: \mathcal{W}\rightarrow (-\infty, +\infty ]$ are two convex functions (possibly nonsmooth), $\phi: \mathcal{V}\times \mathcal{W}\rightarrow (-\infty, +\infty ]$ is a smooth convex function, and $\mathcal{V}$, $\mathcal{W}$ are real Hilbert spaces. Thus taking advantage of the structure of the dual problem, we will aim to present an algorithm {to solve} problem (\ref{eqn:dual problem}) efficiently. \subsection{\bf A majorized ABCD algorithm for the general problem (\ref{eqn:model problem})}\label{sec:2.2} Thanks to the structure of (\ref{eqn:model problem}), Cui in \cite{CuiYing} proposed a majorized accelerate block coordinate descent (mABCD) method. {We give} a brief sketch of mABCD method below. To deal with the general model (\ref{eqn:model problem}), we need some more assumptions on $\phi$. \begin{assumption}\label{assumption on differentiable with Lipschitz continuous gradients} The convex function $\phi: \mathcal{V}\times \mathcal{W}\rightarrow (-\infty, +\infty ]$ is continuously differentiable with Lipschitz continuous gradients. \end{assumption} Let us denote $z:=(v, w)\in \mathcal{V}\times\mathcal{W}$. The authors \cite[Theorem 2.3]{Hiriart1984Generalized} provide a second order Mean-Value Theorem for $\phi$, which states that for any $z'$ and $z$ in $\mathcal{V}\times \mathcal{W}$, there exist $ z''\in [z',z]$ and a self-adjoint positive semidefinite operator $\mathcal{G}\in \partial^2\phi(z'')$ such that \begin{equation*} \phi(z)= \phi(z')+ \langle\nabla\phi(z'), z- z'\rangle+ \frac{1}{2}\|z'- z\|_{\mathcal{G}}^2, \end{equation*} where $\partial^2\phi(z'')$ denotes the Clarke's generalized Hessian at given $z''$ and $[z',z]$ denotes the the line segment connecting $z'$ and $z$. Under Assumption \ref{assumption on differentiable with Lipschitz continuous gradients}, it is obvious that there exist two self-adjoint positive semidefinite linear operators $\mathcal{Q}$ and $\widehat{\mathcal{Q}}: \mathcal{V}\times\mathcal{W}\rightarrow \mathcal{V}\times\mathcal{W}$ such that for any $z\in \mathcal{V}\times\mathcal{W}$, $\mathcal{Q}\preceq\mathcal{G}\preceq\widehat{\mathcal{Q}}$. Thus, for any $z, z'\in \mathcal{V}\times\mathcal{W}$, it holds that \begin{equation*} \phi(z)\geq \phi(z')+ \langle\nabla\phi(z'), z- z'\rangle+ \frac{1}{2}\|z'- z\|_{\mathcal{Q}}^2, \end{equation*} and \begin{equation*} \phi(z)\leq \hat{\phi}(z; z'):= \phi(z ')+ \langle\nabla\phi(z '), z- z '\rangle+ \frac{1}{2}\|z'- z\|_{\widehat{\mathcal{Q}}}^2. 
\end{equation*} Furthermore, we decompose the operators $\mathcal{Q}$ and $\widehat{\mathcal{Q}}$ into the following block structures: \begin{equation*} \mathcal{Q}z:=\left( \begin{array}{cc} \mathcal{Q}_{11} & \mathcal{Q}_{12}\\ \mathcal{Q}_{12}^* &\mathcal{Q}_{22} \end{array} \right) \left( \begin{array}{c} v\\ w \end{array} \right),\quad \widehat{\mathcal{Q}}z:=\left( \begin{array}{cc} \widehat{\mathcal{Q}}_{11} & \widehat{\mathcal{Q}}_{12}\\ \widehat{\mathcal{Q}}_{12}^* & \widehat{\mathcal{Q}}_{22} \end{array} \right) \left( \begin{array}{c} v\\ w \end{array} \right), \quad\forall z=(v, w)\in \mathcal{U}\times \mathcal{V}, \end{equation*} and assume $\mathcal{Q}$, $\widehat{\mathcal{Q}}$ satisfy the following assumption. \begin{assumption}{\rm\textbf{\cite[Assumption 3.1]{CuiYing}}}\label{assumption majorized} There exist two self-adjoint positive semidefinite linear operators $\mathcal{D}_1: \mathcal{U}\rightarrow \mathcal{U}$ and $\mathcal{D}_2: \mathcal{V}\rightarrow \mathcal{V}$ such that \begin{equation*} \widehat{\mathcal{Q}}:=\mathcal{Q}+ {\rm Diag}(\mathcal{D}_1,\mathcal{D}_2). \end{equation*} Furthermore, $\widehat{\mathcal{Q}}$ satisfies that $\widehat{\mathcal{Q}}_{11}\succ 0$ and $\widehat{\mathcal{Q}}_{22}\succ 0$. \end{assumption} \begin{remark}\label{choice of Hessian} It is important to note that Assumption {\rm\ref{assumption majorized}} is a realistic assumption in practice. For example, when $\phi$ is a quadratic function, we could choose $\mathcal{Q}=\mathcal{G}=\nabla^2\phi$. If we have $\mathcal{Q}_{11}\succ0$ and $\mathcal{Q}_{22}\succ0$, then Assumption {\rm\ref{assumption majorized}} holds automatically. We should point out that $\phi$ is a quadratic function for many problems in {practical applications}. Fortunately, it should be noted that the function $\phi$ defined in {\rm(\ref{phi function for D})} for our problem {\rm(\ref{eqn:dual problem})} is quadratic and thus we can choose $\mathcal{Q}=\nabla^2\phi$. \end{remark} We can now present the majorized ABCD algorithm for (\ref{eqn:model problem}) as follows. \begin{algorithm}[H]\caption{\textbf{(A majorized ABCD algorithm for (\ref{eqn:model problem}))}}\label{algo1:imabcd} \textbf{Input}:{$(\tilde{v}^1,\tilde{w}^1)=({v}^0, {w}^0)\in \textrm{dom} (f)\times \textrm{dom}(g)$. Set $t_1=1$, $k=1$.}\\ \textbf{Output}:{$ ({v}^k, {w}^k)$} \begin{description} \item[Step 1] Compute \begin{equation*} \left\{ \begin{aligned} &{v}^{k} =\arg\min_{v \in \mathcal{V}}\{f(v)+ \hat{\phi}(v,\tilde{w}^k; \tilde{z}^k)\},\\ &{w}^{k} =\arg\min_{w \in \mathcal{W}}\{g(w)+ \hat{\phi}(v^k,w; \tilde{z}^k)\}, \end{aligned} \right. \end{equation*} where $\tilde{z}^k=(\tilde{v}^k,\tilde{w}^k)$. \item[Step 2] Set $t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2}$ and $\beta_k=\frac{t_k-1}{t_{k+1}}$, compute \begin{equation*} \tilde{v}^{k+1}=v^{k}+ \beta_{k}(v^{k}-v^{k-1}), \quad \tilde{w}^{k+1}= w^{k}+ \beta_{k}(w^{k}-w^{k-1}). \end{equation*} \item[\bf Step 3] If a termination criterion is not met, set $k:=k+1$ and go to Step 1 \end{description} \end{algorithm} Here we state the convergence result. For the detailed proof, one could see \cite{CuiYing}. This theorem builds a solid foundation for our subsequent proposed algorithm. \begin{theorem}{\rm\textbf{\cite[Theorem 3.2]{CuiYing}}}\label{imABCD convergence} Suppose that Assumption {\rm\ref{assumption majorized}} holds and the solution set $\Omega$ of the problem {\rm(\ref{eqn:model problem})} is non-empty. Let $z^*=(v^*,w^*)\in \Omega$. 
Then the sequence $\{{z}^k\}:=\{({v}^k,{w}^k)\}$ generated by the Algorithm {\rm\ref{algo1:imabcd}} satisfies that \begin{equation*} \theta({z}^k)- \theta(z^*)\leq \frac{2\|{z}^0- z^*\|_{\mathcal{S}}^2}{(k+1)^2} \quad \forall k\geq 1, \end{equation*} where $\theta(\cdot)$ is the objective function of {\rm(\ref{eqn:model problem})} and $\mathcal{S}:={\rm{Diag}}(\mathcal{D}_1,\mathcal{D}_2+\mathcal{Q}_{22})$. \end{theorem} \subsection{\textbf{The sGS-majorized ABCD method in Hilbert Space for (\ref{eqn:dual problem})}}\label{sec:2.3} Now, we can apply Algorithm \ref{algo1:imabcd} to (\ref{eqn:dual problem}), where $(\lambda,p)$ is taken as one block, and $\mu$ is taken as the other one. Let us denote $z=(\lambda, p,\mu)$. Since $\phi$ defined in (\ref{phi function for D}) for (\ref{eqn:dual problem}) is quadratic, we can take \begin{equation*} \mathcal{Q}:= \frac{1}{\alpha}\left( \begin{array}{ccc} \mathcal{I} &\quad -\mathcal{I} &\quad \mathcal{I}\\ -\mathcal{I} & \quad \mathcal{I}&\quad -\mathcal{I}\\ \mathcal{I} & \quad-\mathcal{I}&\quad\mathcal{I} \end{array} \right), \end{equation*} where \begin{equation*} \mathcal{Q}_{11}:= \frac{1}{\alpha}\left( \begin{array}{cc} \mathcal{I}& \quad-\mathcal{I}\\ -\mathcal{I} & \quad \mathcal{I} \end{array} \right), \quad \mathcal{Q}_{22}:= \frac{1}{\alpha}\mathcal{I}. \end{equation*} Additionally, we assume that there exist two self-adjoint positive semidefinite operators $\mathcal{D}_1$ and $\mathcal{D}_2$, such that Assumption \ref{assumption majorized} holds. Thus, it implies that we should majorize $\phi(\lambda,p,\mu)$ at $z'=(\lambda', p',\mu')$ as \begin{equation}\label{majorized function} \begin{aligned} \phi(z) \leq \hat{\phi}(z;z') :=&\frac{1}{2\alpha}\|-p+\lambda+\mu\|_{L^2(\Omega)}^2+\frac{1}{2} \Big\langle \left(\begin{array}{c} \lambda-\lambda' \\ p-p' \end{array}\right), \mathcal{D}_1\left(\begin{array}{c} \lambda-\lambda' \\ p-p' \end{array}\right) \Big\rangle_{L^2(\Omega)}\\ &+\frac{1}{2}\big\langle\mu-\mu',\mathcal{D}_2(\mu-\mu')\big\rangle_{L^2(\Omega)}. \end{aligned} \end{equation} Thus, the framework of mABCD for (\ref{eqn:dual problem}) is given below: \begin{algorithm}[H] \caption{\textbf{(mABCD algorithm for (\ref{eqn:dual problem}))}}\label{algo1:ABCD algorithm for (D)} \textbf{Input}:{$(\tilde{\lambda}^1, \tilde{p}^1,\tilde{\mu}^1 )=({\lambda}^0, {p}^0,\mu^0)\in [-\beta,\beta]\times H^{1}_0(\Omega)\times L^2(\Omega)$. $\mathcal{T}\succeq0$. Set $k= 1, t_1= 1.$} \textbf{Output}:{$ ({\lambda}^k, {p}^k,{\mu}^k)$} \begin{description} \item[Step 1] Compute \begin{eqnarray*} ({\lambda}^{k},p^k) &=&\arg\min\delta_{[-\beta,\beta]}(\lambda)+\frac{1}{2}\|A^*p-y_{d}\|_{L^2(\Omega)}^2+\langle p, y_r\rangle_{L^2(\Omega)}+\frac{1}{2\alpha}\|-p+\lambda+\tilde{\mu}^k\|_{L^2(\Omega)}^2\\ &&\qquad\qquad+\frac{1}{2} \Big\langle \left(\begin{array}{c} \lambda-\tilde{\lambda}^k \\ p-\tilde{p}^k \end{array}\right), \mathcal{D}_1\left(\begin{array}{c} \lambda-\tilde{\lambda}^k \\ p-\tilde{p}^k \end{array}\right) \Big\rangle,\\ {\mu}^{k}&=&\arg\min\frac{1}{2\alpha}\|\mu-(p^k-\lambda^k)\|_{L^2(\Omega)}^2+\delta^*_{[a,b]}(\mu)+\frac{1}{2}\langle\mu-\tilde{\mu}^k,\mathcal{D}_2(\mu-\tilde{\mu}^k)\rangle. \end{eqnarray*} \item[Step 2] Set $t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2}$ and $\beta_k=\frac{t_k-1}{t_{k+1}}$, compute \begin{eqnarray*} \tilde{\lambda}^{k+1}= {\lambda}^{k}+ \beta_{k}({\lambda}^{k}-{\lambda}^{k-1}),\quad \tilde {p}^{k+1}={p}^{k}+ \beta_{k}({p}^{k}-{p}^{k-1}), \quad \tilde {\mu}^{k+1}={\mu}^{k}+ \beta_{k}({\mu}^{k}-{\mu}^{k-1}). 
\end{eqnarray*} \item[\bf Step 3] If a termination criterion is not met, set $k:=k+1$ and go to Step 1 \end{description} \end{algorithm} We now can discuss the issue on how to choose two operators $\mathcal{D}_1$ and $\mathcal{D}_2$ for Algorithm \ref{algo1:ABCD algorithm for (D)}. As we know, choosing {the operators $\mathcal{D}_1$ and $\mathcal{D}_2$ appropriately is important for numerical computation}. Note that for numerical efficiency, the general principle is that both $\mathcal{D}_1$ and $\mathcal{D}_2$ should be chosen as small as possible such that $({\lambda}^{k},{p}^{k})$ and ${\mu}^{k}$ could take larger step-lengths while the corresponding subproblems still can be solved relatively easily. Firstly, for the proximal term $\frac{1}{2}\|\mu-\tilde{\mu}^k\|^2_{\mathcal{D}_2}$, since $\mathcal{Q}_{22}=\frac{1}{\alpha}\mathcal{I}\succ0$, we can choose $\mathcal{D}_2=0$. Then, it is obvious that the optimal solution of the $\mu$-subproblem at $k$-th iteration is unique and also has a closed form {solution given by} \begin{equation}\label{closed form solution of mu} \mu^k=p^k-\lambda^k-\alpha{\rm\Pi}_{[a,b]}(\frac{1}{\alpha}(p^k-\lambda^k)). \end{equation} Next, we focus on how to choose $\mathcal{D}_1$. Ignoring the proximal term \begin{equation*} \frac{1}{2}\Big\langle\left(\begin{array}{c} \lambda-\tilde{\lambda}^k \\ p-\tilde{p}^k \end{array}\right), \mathcal{D}_1\left(\begin{array}{c} \lambda-\tilde{\lambda}^k \\ p-\tilde{p}^k \end{array}\right) \Big \rangle, \end{equation*} it is clear that the subproblem with respect to $(\lambda,p)$ at $k$-th iteration can be equivalently rewritten as: \begin{equation}\label{sGS subproblem for D} \min \delta_{[-\beta,\beta]}(\lambda)+\frac{1}{2}\Big\langle \left(\begin{array}{c} \lambda \\ p \end{array}\right) ,\mathcal{H}\left(\begin{array}{c} \lambda \\ p \end{array}\right)\Big\rangle- \Big\langle r,\left(\begin{array}{c} \lambda \\ p \end{array}\right) \Big\rangle, \end{equation} where $\mathcal{H}= \left( \begin{array}{cc} \frac{1}{\alpha}\mathcal{I} & \quad -\frac{1}{\alpha}\mathcal{I} \\ -\frac{1}{\alpha}\mathcal{I} & \quad AA^*+\frac{1}{\alpha}\mathcal{I} \\ \end{array} \right)$ and $r=\left(\begin{array}{c} -\frac{1}{\alpha}\tilde{\mu}^k \\ -y_r+Ay_d+\frac{1}{\alpha}\tilde{\mu}^k \end{array}\right)$, whose objective function of (\ref{sGS subproblem for D}) is the sum of a two-block quadratic function and a non-smooth function involving only the first block, thus the symmetric Gauss-Seidel (sGS) technique proposed recently by Li, Sun and Toh \cite{SunToh2,SunToh3}, could be used to solve it. For later discussions, we consider a splitting of any given self-adjoint positive semidefinite linear operator $\mathcal{Q}$ \begin{equation}\label{equ:splitting} \mathcal{Q}=\mathcal{D}+\mathcal{U}+\mathcal{U}^*, \end{equation} where $\mathcal{U}$ denotes the strict upper triangular part of $\mathcal{Q}$ and $\mathcal{D}$ is the diagonal of $\mathcal{Q}$. Moreover, we assume that $\mathcal{D}\succ0$ and define the following self-adjoint positive semidefinite linear operator \begin{equation}\label{SGSoperator} {\rm sGS}(\mathcal{Q}):=\mathcal{U}\mathcal{D}^{-1}\mathcal{U}^*. \end{equation} Thus, to achieve our goal, we choose \begin{eqnarray*} \mathcal{D}_1:&=&\mathrm{sGS}\left(\mathcal{H}\right) =\left( \begin{array}{cc} \frac{1}{\alpha}(\alpha A A^*+\mathcal{I})^{-1} & \quad 0 \\ 0 & \quad 0 \\ \end{array} \right). 
\end{eqnarray*} Then according to \cite[Theorem 2.1]{SunToh3}, solving the $(\lambda,p)$-subproblem \begin{equation*} \begin{aligned} {(\lambda^k,p^k) =} \mbox{argmin}_{\lambda,p}\ &\delta_{[-\beta,\beta]}(\lambda)+\frac{1}{2}\|A^*p-y_{d}\|_{L^2(\Omega)}^2+\langle p, y_r\rangle_{L^2(\Omega)}+\frac{1}{2\alpha}\|-p+\lambda+\tilde{\mu}^k\|_{L^2(\Omega)}^2\\ &+\frac{1}{2}\Big\langle \left(\begin{array}{c} \lambda-\tilde{\lambda}^k \\ p-\tilde{p}^k \end{array}\right), \mathcal{D}_1\left(\begin{array}{c} \lambda-\tilde{\lambda}^k \\ p-\tilde{p}^k \end{array}\right) \Big\rangle, \end{aligned} \end{equation*} is equivalent to computing {$(\lambda^k,p^k)$} via the following procedure: \begin{equation*} \left\{\begin{aligned} &\hat{p}^{k}=\arg\min\frac{1}{2}\|A^* p-y_{d}\|_{L^2(\Omega)}^2+ \frac{1}{2\alpha}\|p-\tilde{\lambda}^k-\tilde{\mu}^k\|_{L^2(\Omega)}^2+\langle p, y_r\rangle_{L^2(\Omega)},\\ &{\lambda}^{k} =\arg\min\frac{1}{2\alpha}\|\lambda-(\hat{p}^{k}-\tilde{\mu}^k)\|_{L^2(\Omega)}^2+\delta_{[-\beta,\beta]}(\lambda),\\ &{p}^{k}=\arg\min\frac{1}{2}\|A^*p-y_{d}\|_{L^2(\Omega)}^2+ \frac{1}{2\alpha}\|p-{\lambda}^k-\tilde{\mu}^k\|_{L^2(\Omega)}^2+\langle p, y_r\rangle_{L^2(\Omega)}. \end{aligned}\right. \end{equation*} \begin{remark}\label{solutions of subproblem} Specifically, for the $\lambda$-subproblem of Algorithm {\rm\ref{algo1:ABCD algorithm for (D)}} at {the} $k$-th iteration, it has a closed form solution which {is given} by \begin{equation*} \lambda^k={\rm\Pi}_{[-\beta,\beta]}(\hat{p}^{k}-\tilde{\mu}^k). \end{equation*} For the $\hat{p}$-subproblem, it is obvious that solving the subproblem is equivalent to solving the following system: \begin{equation*} A(A^*p-y_d)+\frac{1}{\alpha}(p-\tilde{\lambda}^k-\tilde{\mu}^k)+y_r=0. \end{equation*} Moreover, to solve the ${p^k}$-subproblem, we only need to replace $\tilde{\lambda}^k$ by ${\lambda}^k$ in the right-hand term. Thus, all the numerical techniques for the block $\hat{p}^k$ is also applicable for the block ${p^k}$. \end{remark} At last, combining a 2-block majorized ABCD and the recent advances in the symmetric Gauss-Seidel (sGS) decomposition technique, {a sGS} based majorized ABCD (sGS-mABCD) algorithm for (\ref{eqn:dual problem}) is presented as follows. \begin{algorithm}[H] \caption{\textbf{(sGS-mABCD algorithm for (\ref{eqn:dual problem}))}}\label{algo1:sGS-mABCD algorithm for (D)} \textbf{Input}:{$(\tilde{\lambda}^1, \tilde{p}^1,\tilde{\mu}^1)=( {\lambda}^0, {p}^0, \mu^0)\in [-\beta,\beta]\times H^{1}_0(\Omega)\times L^2(\Omega)$. Set $k= 1, t_1= 1.$} \textbf{Output}:{$ ({\lambda}^k, {p}^k, {\mu}^k)$} \begin{description} \item[Step 1] Compute \begin{eqnarray*} \hat{p}^{k}&=&\arg\min\frac{1}{2}\|A^* p-y_{d}\|_{L^2(\Omega)}^2+ \frac{1}{2\alpha}\|p-\tilde{\lambda}^k-\tilde{\mu}^k\|_{L^2(\Omega)}^2+\langle p, y_r\rangle_{L^2(\Omega)},\\ \\ {\lambda}^{k} &=&\arg\min\frac{1}{2\alpha}\|\lambda-(\hat{p}^{k}-\tilde{\mu}^k)\|_{L^2(\Omega)}^2+\delta_{[-\beta,\beta]}(\lambda),\\ \\ {p}^{k}&=&\arg\min\frac{1}{2}\|A^*p-y_{d}\|_{L^2(\Omega)}^2+ \frac{1}{2\alpha}\|p-{\lambda}^k-\tilde{\mu}^k\|_{L^2(\Omega)}^2+\langle p, y_r\rangle_{L^2(\Omega)},\\ {\mu}^{k}&=&\arg\min\frac{1}{2\alpha}\|\mu-({p}^k-{\lambda}^k)\|_{L^2(\Omega)}^2+\delta^*_{[a,b]}(\mu). 
\end{eqnarray*} \item[Step 2] Set $t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2}$ and $\beta_k=\frac{t_k-1}{t_{k+1}}$, compute \begin{eqnarray*} \tilde{\lambda}^{k+1}= {\lambda}^{k}+ \beta_{k}({\lambda}^{k}-{\lambda}^{k-1}),\quad \tilde {p}^{k+1}={p}^{k}+ \beta_{k}({p}^{k}-{p}^{k-1}), \quad \tilde {\mu}^{k+1}={\mu}^{k}+ \beta_{k}({\mu}^{k}-{\mu}^{k-1}). \end{eqnarray*} \item[\bf Step 3] If a termination criterion is not met, set $k:=k+1$ and go to Step 1 \end{description} \end{algorithm} Employing Theorem \ref{imABCD convergence}, we have the following convergence result for Algorithm \ref{algo1:sGS-mABCD algorithm for (D)}. \begin{theorem}\label{imABCD convergence for dual problem} Suppose that the solution set $\Theta$ of Problem {\rm(\ref{eqn:dual problem})} is non-empty. Let $(\lambda^*,p^*,\mu^*)\in \Theta$. Then the sequence $\{(\lambda^k,p^k,\mu^k)\}$ generated by Algorithm {\rm\ref{algo1:sGS-mABCD algorithm for (D)}} satisfies that \begin{equation}\label{iteration complexity of dual problem for D} {\Phi}(\lambda^k,p^k,\mu^k)- {\Phi}(\lambda^*,p^*,\mu^*)\leq \frac{4\tau}{(k+1)^2}, \end{equation} where $\Phi(\cdot)$ is the objective function of the dual problem {\rm(\ref{eqn:dual problem})} and \begin{eqnarray*} &&\tau=\frac{1}{2}\langle\left(\begin{array}{c} \lambda^*-\lambda^0 \\ p^*-p^0\\ \mu^*-\mu^0 \\ \end{array}\right), \mathcal{S}\left(\begin{array}{c} \lambda^*-\lambda^0 \\ p^*-p^0\\ \mu^*-\mu^0 \\ \end{array}\right) \rangle,\quad \mathcal{S}= \left( \begin{array}{ccc} \frac{1}{\alpha}(\alpha A^*A+\mathcal{I})^{-1} & 0 & \quad0 \\ 0 & 0 & \quad0 \\ 0 & 0 & \quad\frac{1}{\alpha}\mathcal{I} \\ \end{array} \right). \end{eqnarray*} \end{theorem} \section{Finite element discretization} \label{sec:3} \subsection{\textbf {Piecewise linear finite elements discretization}} \label{sec:3.1} To numerically solve problem (\ref{eqn:orginal problems}), we consider the finite element method, in which the state $y$ and the control $u$ are both discretized by the piecewise linear, globally continuous finite elements. To achieve this aim, let us fix the assumptions on the discretization by finite elements. We first consider a family of regular and quasi-uniform triangulations $\{\mathcal{T}_h\}_{h>0}$ of $\bar{\Omega}$. For each cell $T\in \mathcal{T}_h$, let us define the diameter of the set $T$ by $\rho_{T}:={\rm diam}\ T$ and define $\sigma_{T}$ to be the diameter of the largest ball contained in $T$. The mesh size of the grid is defined by $h=\max_{T\in \mathcal{T}_h}\rho_{T}$. We suppose that the following regularity assumption on the triangulation {is} satisfied, which {is} standard in the context of error estimates. \begin{assumption}[regular and quasi-uniform triangulations]\label{regular and quasi-uniform triangulations} There exist two positive constants $\kappa$ and $\tau$ such that \begin{equation*} \frac{\rho_{T}}{\sigma_{T}}\leq \kappa,\quad \frac{h}{\rho_{T}}\leq \tau, \end{equation*} hold for all $T\in \mathcal{T}_h$ and all $h>0$. Moreover, let us define $\bar{\Omega}_h=\bigcup_{T\in \mathcal{T}_h}T$, and let ${\Omega}_h \subset\Omega$ and $\Gamma_h$ denote its interior and its boundary, respectively. In the case that $\Omega$ is a convex polyhedral domain, we have $\Omega=\Omega_h$. 
In the case {that} $\Omega$ has a $C^{1,1}$- boundary $\Gamma$, we assume that $\bar{\Omega}_h$ is convex and all boundary vertices of $\bar{\Omega}_h$ are contained in $\Gamma$, such that \begin{equation*} |\Omega\backslash {\Omega}_h|\leq c h^2, \end{equation*} where $|\cdot|$ denotes the measure of the set and $c>0$ is a constant. \end{assumption} On account of the homogeneous boundary condition of the state equation, we use \begin{equation}\label{state discretized space} Y_h =\left\{y_h\in C(\bar{\Omega})~\big{|}~y_{h|T}\in \mathcal{P}_1~ {\rm{for\ all}}~ T\in \mathcal{T}_h~ \mathrm{and}~ y_h=0~ \mathrm{in } ~\bar{\Omega}\backslash {\Omega}_h\right\} \end{equation} as the discrete state space, where $\mathcal{P}_1$ denotes the space of polynomials of degree less than or equal to $1$. As mentioned above, we also use the same discrete space to discretize {the} control $u$, thus we define \begin{equation}\label{control discretized space} U_h =\left\{u_h\in C(\bar{\Omega})~\big{|}~u_{h|T}\in \mathcal{P}_1~ {\rm{for\ all}}~ T\in \mathcal{T}_h~ \mathrm{and}~ u_h=0~ \mathrm{in } ~\bar{\Omega}\backslash{\Omega}_h\right\}. \end{equation} For a given regular and quasi-uniform triangulation $\mathcal{T}_h$ with nodes $\{x_i\}_{i=1}^{N_h}$, let $\{\phi_i(x)\} _{i=1}^{N_h}$ be a set of nodal basis functions, which span $Y_h$ as well as $U_h$ and satisfy the following properties: \begin{eqnarray}\label{basic functions properties} &&\phi_i(x) \geq 0, \quad \|\phi_i\|_{\infty} = 1 \quad \forall i=1,2,...,N_h, \quad \sum\limits_{i=1}^{N_h}\phi_i(x)=1, \forall x\in \Omega_h. \end{eqnarray} The elements $u_h\in U_h$ and $y_h\in Y_h$ can be represented in the following forms respectively, \begin{equation*} u_h=\sum \limits_{i=1}^{N_h}u_i\phi_i(x),\quad y_h=\sum \limits_{i=1}^{N_h}y_i\phi_i(x), \end{equation*} where $u_h(x_i)=u_i$ and $y_h(x_i)=y_i$. Let $U_{ad,h}$ denote the discrete feasible set, which is defined by \begin{eqnarray*} U_{ad,h}:&=&U_h\cap U_{ad}\\ &=&\left\{z_h=\sum \limits_{i=1}^{N_h} z_i\phi_i(x)~\big{|}~a\leq z_i\leq b, \forall i=1,...,N_h\right\}\subset U_{ad}. \end{eqnarray*} From the perspective of numerical implementation, we introduce the following stiffness and mass matrices: \begin{equation*} K_h = \left(a(\phi_i, \phi_j)\right)_{i,j=1}^{N_h},\quad M_h=\left(\int_{\Omega_h}\phi_i(x)\phi_j(x){\mathrm{d}}x\right)_{i,j=1}^{N_h}, \end{equation*} and let $y_{r,h}$, $y_{d,h}$ be the projections of $y_r$ and $y_d$ onto $Y_h$, respectively, \begin{equation*} y_{r,h}=\sum\limits_{i=1}^{N_h}y_r^i\phi_i(x),\quad y_{d,h}=\sum\limits_{i=1}^{N_h}y_d^i\phi_i(x). \end{equation*} Moreover, for the requirement of the subsequent discretized algorithms, {next we} introduce the lumped mass matrix $W_h$ \begin{equation*} W_h:={\rm{diag}}\left(\int_{\Omega_h}\phi_i(x)\mathrm{d}x\right)_{i=1}^{N_h}, \end{equation*} which is a diagonal matrix, and define an alternative discretization of the $L^1$-norm: \begin{equation}\label{equ:approximal L1} \|u_h\|_{L^{1}_h(\Omega)}:=\sum_{i=1}^{N_h}|u_i|\int_{\Omega_h}\phi_i(x)\mathrm{d}x=\|W_h{\bf u}\|_1, \end{equation} which is a weighted $l^1$-norm of the coefficients of $u_h$. More importantly, the following results about the mass matrix $M_h$ and the lumped mass matrix $W_h$ hold. 
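As a practical aside before stating them, we note that $W_h$ and the discrete norm $\|\cdot\|_{L^{1}_h(\Omega)}$ from (\ref{equ:approximal L1}) are straightforward to assemble. The following Python-style sketch is purely illustrative (it is not the implementation used in our experiments); it relies on the partition-of-unity property in (\ref{basic functions properties}), by which the diagonal of $W_h$ coincides with the row sums of $M_h$.
\begin{verbatim}
import numpy as np

def lumped_mass_matrix(M):
    # Sketch: by sum_j phi_j = 1 on Omega_h, we have
    # int phi_i dx = sum_j int phi_i phi_j dx, i.e. the i-th row sum of M_h,
    # so diag(W_h) equals M_h applied to the all-ones vector.
    return np.diag(M.sum(axis=1))

def discrete_L1_norm(u, W):
    # ||u_h||_{L^1_h} = sum_i |u_i| int phi_i dx = ||W_h u||_1
    return np.abs(W @ u).sum()

# toy usage with a small (hypothetical) dense mass matrix
M_h = np.array([[2.0, 1.0, 0.0],
                [1.0, 4.0, 1.0],
                [0.0, 1.0, 2.0]]) / 6.0
W_h = lumped_mass_matrix(M_h)
u = np.array([1.0, -2.0, 0.5])
print(discrete_L1_norm(u, W_h))
\end{verbatim}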
\begin{proposition}{\rm{\textbf{\cite[Table 1]{Wathen}}}}\label{eqn:martix properties1} For all ${\bf z}=(z_1,z_2,...,z_{N_h})\in \mathbb{R}^{N_h}$, the following inequalities hold: \begin{eqnarray} \label{Winequality1}&&\|{\bf z}\|^2_{M_h}\leq\|{\bf z}\|^2_{W_h}\leq \gamma\|{\bf z}\|^2_{M_h} \quad {\rm where}\ \gamma= \left\{ \begin{aligned} &4 \quad {\rm if}\ n=2, \\ &5 \quad {\rm if}\ n=3, \end{aligned} \right. \\ \label{Winequality2}&&\int_{\Omega_h}\Big|\sum_{i=1}^{N_h}{z_i\phi_i(x)}\Big|~\mathrm{d}x\leq\|W_h{\bf z}\|_1. \end{eqnarray} \end{proposition} To analyze the error between $\|u_h\|_{L^{1}_h(\Omega)}$ and $\|u_h\|_{L^{1}(\Omega)}$, we first introduce the nodal interpolation operator $I_h$. For a given regular and quasi-uniform triangulation $\mathcal{T}_h$ of $\Omega$ with nodes $\{x_i\}_{i=1}^{N_h}$, we define \begin{equation}\label{nodal interpolation operator} (I_hw)(x)=\sum_{i=1}^{N_h}w(x_i)\phi_i(x) \ {\rm\ for\ any}\ w\in L^1(\Omega). \end{equation} Concerning the interpolation error estimate, we have the following result; see {\rm\cite[Theorem 3.1.6]{Ciarlet}} for more details. \begin{lemma}\label{interpolation error estimate} For all $w\in W^{k+1,p}(\Omega)$, $k\geq 0$, $p,q\in [1,+\infty)$, and $0\leq m\leq k+1$, we have \begin{equation}\label{interpolation error estimate inequilaty} \|w-I_hw\|_{W^{m,q}(\Omega)}\leq c_I h^{k+1-m}\|w\|_{W^{k+1,p}(\Omega)}. \end{equation} \end{lemma} Thus, according to Lemma \ref{interpolation error estimate}, we have the following error estimate. \begin{proposition}\label{eqn:martix properties} For all ${\bf z}=(z_1,z_2,...,z_{N_h})\in \mathbb{R}^{N_h}$, let $z_h=\sum\limits_{i=1}^{N_h}z_i\phi_i(x)$; then the following inequality holds: \begin{eqnarray} \label{Winequality3}&&0\leq\|z_h\|_{L^{1}_h(\Omega)}-\|z_h\|_{L^{1}(\Omega)}\leq C\,h\,\|z_h\|_{H^1(\Omega)}, \end{eqnarray} where $C$ is a constant. \begin{proof} Obviously, we have \begin{eqnarray*} \|z_h\|_{L^{1}_h(\Omega)}- \|z_h\|_{L^{1}(\Omega)}&=&\int_{\Omega_h}\sum_{i=1}^{N_h}|z_i|\phi_i(x)\mathrm{d}x-\int_{\Omega_h}\Big|\sum_{i=1}^{N_h}{z_i\phi_i(x)}\Big|~\mathrm{d}x\\ &=&\int_{\Omega_h} \big((I_h|z_h|)(x)-|z_h(x)|\big) \mathrm{d}x. \end{eqnarray*} Moreover, due to $z_h\in U_h$, we have $|z_h|\in H^1(\Omega)$. Thus, employing Lemma \ref{interpolation error estimate}, we have \begin{eqnarray*} \int_{\Omega_h}\big( (I_h|z_h|)(x)-|z_h(x)|\big) \mathrm{d}x&\leq& c_{\Omega}\|I_h|z_h|- |z_h|\|_{L^2(\Omega)} \leq C \,h\, \|z_h\|_{H^1(\Omega)}. \end{eqnarray*} Thus, the proof is completed. \end{proof} \end{proposition} \subsection{\textbf{A discretized form of sGS-majorized ABCD algorithm for (\ref{eqn:discretized matrix-vector dual problem})}}\label{sec:3.2} Although an efficient majorized ABCD algorithm in Hilbert space is presented in Section \ref{sec:2}, for the purpose of numerical implementation, we should give the finite element discretizations of the majorized ABCD method.
First, employing the piecewise linear, globally continuous finite elements to discretize all the dual variables, then a type of finite element discretization of (\ref{eqn:dual problem}) is given as follows \begin{equation}\label{eqn:discretized matrix-vector dual problem} \begin{aligned} \min\limits_{{\bm \lambda},{\bf p},{\bm \mu}\in \mathbb{R}^{N_h}} \Phi_h({\bm \lambda},{\bf p},{\bm \mu}):=& \frac{1}{2}\|K_h {\bf p}-{M_h}{\bf y_{d}}\|_{M_h^{-1}}^2+ \frac{1}{2\alpha}\|{\bm \lambda}+{\bm \mu}-{\bf p}\|_{M_h}^2+\langle M_h{\bf y_r}, {\bf p}\rangle\\ &+ \delta_{[-\beta,\beta]}({\bm\lambda})+ \delta^*_{[a,b]}({M_h}{\bm\mu})-\frac{1}{2}\|{\bf y_d}\|^2_{M_h}. \end{aligned}\tag{$\mathrm{D_h}$} \end{equation} Obviously, by choosing $v=(\bm \lambda, {\bf p})$, $w=\bm \mu$ and taking \begin{eqnarray} f_h(v) &=&\delta_{[-\beta,\beta]}(\bm\lambda)+\frac{1}{2}\|K_h {\bf p}-{M_h}{\bf y_{d}}\|_{M_h^{-1}}^2+\langle M_h {\bf y_r}, {\bf p}\rangle-\frac{1}{2}\|{\bf y_d}\|^2_{M_h} \label{g function for Dh}, \\ g_h(w) &=&\delta^*_{[a,b]}({M_h}\bm\mu)\label{f function for Dh},\\ \phi_h(v, w) &=&\frac{1}{2\alpha}\|\bm\lambda- {\bf p} + \bm\mu\|_{M_h}^2\label{phi function for Dh}, \end{eqnarray} (\ref{eqn:discretized matrix-vector dual problem}) also belongs to the problem of form (\ref{eqn:model problem}). Thus, Algorithm \ref{algo1:imabcd} also can be applied to (\ref{eqn:discretized matrix-vector dual problem}). Let us denote ${\bf z}=({\bm\lambda},{\bf p},{\bm\mu})$. As shown in Section \ref{sec:3.2}, we should first majorize the coupled function {$\phi_h$} defined in (\ref{phi function for Dh}) for (\ref{eqn:discretized matrix-vector dual problem}). Since $\phi_h$ is quadratic, we can take \begin{equation}\label{Discretized Hessian Matrix} \mathcal{Q}_h:= { \frac{1}{\alpha}\left( \begin{array}{ccc} M_h &\quad -M_h &\quad M_h\\ -M_h & \quad M_h&\quad -M_h\\ M_h & \quad-M_h&\quad M_h \end{array} \right), } \end{equation} where \begin{equation*} \mathcal{Q}_h^{11}:= \frac{1}{\alpha}\left( \begin{array}{cc} M_h& \quad-M_h\\ -M_h & \quad M_h \end{array} \right),\quad \mathcal{Q}_h^{22}:= \frac{1}{\alpha} M_h. \end{equation*} Moreover, we assume that there exist two self-adjoint positive semidefinite operators $D_{1h}$ and $D_{2h}$, which satisfy Assumption \ref{assumption majorized}. Then, we majorize ${\phi_h}({\bm \lambda},{\bf p},{\bm \mu})$ at $z'=({\bm \lambda'}, {\bf p'},{\bm\mu'})$ as \begin{equation}\label{majorized function2} \begin{aligned} {\phi_h({\bf z})} \leq {\hat{\phi}_h({\bf z};{\bf z'})} =& \frac{1}{2\alpha}\|{\bm\lambda}+ {\bm\mu}-{\bf p}\|_{M_h}^2+\frac{1}{2}\left\|\left(\begin{array}{c} {\bm\lambda} \\ {\bf p} \end{array}\right) -\left(\begin{array}{c} {\bm \lambda'} \\ {\bf p'} \end{array}\right)\right\|^2_{D_{1h}}+\frac{1}{2}\|{\bm\mu}-\bm \mu'\|^2_{D_{2h}}. \end{aligned} \end{equation} Thus, the framework of mABCD for (\ref{eqn:discretized matrix-vector dual problem}) is given as {follows.} \begin{algorithm}[H]\label{algo1:imABCD algorithm for (Dh)} \caption{\textbf{(mABCD algorithm for (\ref{eqn:discretized matrix-vector dual problem}))}} \textbf{Input}:{$(\tilde{{\bm\lambda}}^1, \tilde{{\bf p}}^1,\tilde{{\bm\mu}}^1)=({\bm \lambda}^0, {\bf p}^0,\bm\mu^0)\in {\rm dom} (\delta^*_{[a,b]})\times [-\beta,\beta]\times \mathbb{R}^{N_h}$. 
Set $k= 1, t_1= 1.$} \textbf{Output}:{$ ({\bm\lambda}^k, {\bf p}^k,{\bm\mu}^k)$} \begin{description} \item[Step 1] Compute \begin{eqnarray*} ({\bm\lambda}^{k},{\bf p}^{k})&=&\arg\min\delta_{[-\beta,\beta]}({\bm\lambda})+\frac{1}{2}\|K_h {\bf p}-{M_h}{\bf y_{d}}\|_{M_h^{-1}}^2+\langle M_h {\bf y_r}, {\bf p}\rangle+\frac{1}{2\alpha}\|{\bm \lambda}-{\bf p}+\tilde{{\bm\mu}}^k\|_{M_h}^2\\ &&\qquad\qquad+\frac{1}{2}\left\|\left(\begin{array}{c} {\bm\lambda} \\ {\bf p} \end{array}\right) -\left(\begin{array}{c} \tilde{{\bm\lambda}}^k \\ \tilde{{\bf p}}^k \end{array}\right)\right\|^2_{D_{1h}},\\ {\bm\mu}^{k}&=&\arg\min\delta^*_{[a,b]}(M_h{\bm\mu})+\frac{1}{2\alpha}\|{\bm\mu}-({\bf p}^k-{\bm\lambda}^k)\|_{M_h}^2+\frac{1}{2}\|{\bm\mu}-\tilde{{\bm\mu}}^k\|^2_{D_{2h}}. \end{eqnarray*} \item[Step 2] Set $t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2}$ and $\beta_k=\frac{t_k-1}{t_{k+1}}$, compute \begin{eqnarray*} \tilde{\bm\lambda}^{k+1}= {\bm\lambda}^{k}+ \beta_{k}({\bm\lambda}^{k}-{\bm\lambda}^{k-1}),\quad \tilde{\bf p}^{k+1}={\bf p}^{k}+ \beta_{k}({\bf p}^{k}-{\bf p}^{k-1}), \quad\tilde{\bm\mu}^{k+1}={\bm\mu}^{k}+ \beta_{k}({\bm\mu}^{k}-{\bm\mu}^{k-1}). \end{eqnarray*} \item[\bf Step 3] If a termination criterion is not met, set $k:=k+1$ and go to Step 1. \end{description} \end{algorithm} As we know, it is important to choose the two operators $D_{1h}$ and $D_{2h}$ appropriately for efficient numerical computation. Firstly, if we choose $D_{2h}=0$, which is similar to choosing $D_2=0$ for the continuous problem in the previous section, this is unfortunately not a good choice, since there does not exist a closed-form solution for the $\bm\mu$-subproblem because the mass matrix $M_h$ is not diagonal. In order for the $\bm\mu$-subproblem to have an analytical solution, we choose \begin{equation*} D_{2h}:=\frac{1}{\alpha}\gamma M_hW_h^{-1}M_h-\frac{1}{\alpha}M_h,\quad {\rm where} \ \gamma = \left\{ \begin{aligned} &4 \quad {\rm if} \ n=2, \\ &5 \quad {\rm if} \ n=3. \end{aligned} \right. \end{equation*} From Proposition \ref{eqn:martix properties1}, it is easy to see that $D_{2h}\succeq0$. Let us denote ${\bm\xi}=M_h{\bm\mu}$; then the subproblem for the variable $\bm\mu$ can be translated into the following subproblem: \begin{equation}\label{subproblem-z} \begin{aligned} {\bm\xi}^{k} &=\arg\min\frac{1}{2\alpha}\|{\bm\xi}-M_h({\bf p}^k-{\bm\lambda}^k)\|_{M_h^{-1}}^2+\delta^*_{[a,b]}(\bm\xi)+\frac{1}{2\alpha}\|{\bm\xi}-\tilde{\bm\xi}^k\|^2_{\gamma W_h^{-1}-M_h^{-1}}\\ &=\arg\min\frac{1}{2\alpha}\|{\bm\xi}-(\tilde{\bm\xi}^k+\frac{1}{\gamma}W_h({\bf p}^k-{\bm \lambda}^k-M_h^{-1}\tilde{{\bm\xi}}^k))\|_{\gamma W_h^{-1}}^2+\delta^*_{[a,b]}(\bm\xi)\\ &={\bf v}^k-\frac{\alpha}{\gamma}W_h{\rm\Pi}_{[a,b]}\Big(\frac{\gamma}{\alpha}W_h^{-1}{\bf v}^k\Big), \end{aligned} \end{equation} where \begin{equation*} {\bf v}^k= M_h\tilde{\bm\mu}^k+\frac{1}{\gamma}W_h({\bf p}^k-{\bm\lambda}^k-\tilde{\bm\mu}^k). \end{equation*} Then we can compute ${\bm\mu}^k$ by ${\bm\mu}^k= M_h^{-1}{\bm\xi}^{k}$. Before discussing how to choose the operator $D_{1h}$, we illustrate this $\bm\mu$-update below.
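The following Python-style sketch (dense arrays, hypothetical inputs, purely for illustration) evaluates ${\bf v}^k$, the projection, and ${\bm\mu}^k=M_h^{-1}{\bm\xi}^{k}$ exactly as in (\ref{subproblem-z}):
\begin{verbatim}
import numpy as np

def mu_update(M, W_diag, p, lam, mu_tilde, alpha, gamma, a, b):
    # W_diag holds the diagonal of the lumped mass matrix W_h;
    # the projection Pi_{[a,b]} acts componentwise.
    v = M @ mu_tilde + (W_diag / gamma) * (p - lam - mu_tilde)  # v^k
    proj = np.clip((gamma / alpha) * v / W_diag, a, b)          # Pi_{[a,b]}((gamma/alpha) W_h^{-1} v^k)
    xi = v - (alpha / gamma) * W_diag * proj                    # xi^k
    return np.linalg.solve(M, xi)                               # mu^k = M_h^{-1} xi^k
\end{verbatim}
In an actual implementation $M_h$ is sparse, so the final solve would typically reuse a prefactorized sparse factorization of $M_h$ rather than a dense solve.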
{Similar to \eqref{sGS subproblem for D}}, the $({\bm\lambda},{\bf p})$-subproblem can also be rewritten in the following form: \begin{equation}\label{sGS subproblem for Dh} \min \delta_{[-\beta,\beta]}(\bm\lambda)+\frac{1}{2}\Big\langle \left(\begin{array}{c} {\bm\lambda} \\ {\bf p} \end{array}\right) ,\mathcal{H}_h\left(\begin{array}{c} {\bm\lambda}\\ {\bf p} \end{array}\right)\Big\rangle-\Big\langle {\bf r},\left(\begin{array}{c} {\bm\lambda} \\ {\bf p} \end{array}\right)\Big\rangle, \end{equation} where $\mathcal{H}_h= \frac{1}{\alpha}\left( \begin{array}{cc} M_h& \quad-M_h\\ -M_h & \quad M_h+\alpha K_h M_h^{-1}K_h \end{array} \right)$ and ${\bf r}=\left(\begin{array}{c} -\frac{1}{\alpha}M_h\tilde{\bm\mu}^k \\ -M_h{\bf y_r}+K_h{\bf y_d}+\frac{1}{\alpha}M_h\tilde{\bm\mu}^k \end{array}\right)$. Based on the structure of the $({\bm\lambda},{\bf p})$-subproblem, we also use the block sGS decomposition technique to solve it. Thus, we choose \begin{equation*} \mathcal{\widetilde{D}}_{1h}=\mathrm{sGS}(\mathcal{H}_h)=\frac{1}{\alpha}\left( \begin{array}{cc} M_h(M_h+\alpha K_h M_h^{-1}K_h)^{-1}M_h & \quad0\\ 0 & \quad 0\\ \end{array} \right). \end{equation*} And once again, according to \cite[Theorem 2.1]{SunToh3}, we can solve the $({\bm\lambda},{\bf p})$-subproblem by the following steps: \begin{equation*} \left\{\begin{aligned} \hat{\bf p}^{k}&=\arg\min\frac{1}{2}\|K_h {\bf p}-{M_h}{\bf y_{d}}\|_{M_h^{-1}}^2+ \frac{1}{2\alpha}\|{\bf p}-\tilde{\bm\lambda}^k-\tilde{\bm\mu}^k\|_{M_h}^2+\langle M_h {\bf y_r, p}\rangle,\\ {\bm\lambda}^{k} &=\arg\min\frac{1}{2\alpha}\|{\bm\lambda}-(\hat{\bf p}^{k}-\tilde{\bm\mu}^k)\|_{M_h}^2+\delta_{[-\beta,\beta]}(\bm\lambda),\\ {\bf p}^{k}&=\arg\min\frac{1}{2}\|K_h {\bf p}-{M_h}{\bf y_{d}}\|_{M_h^{-1}}^2+ \frac{1}{2\alpha}\|{\bf p}-{\bm\lambda}^k-\tilde{\bm\mu}^k\|_{M_h}^2+\langle M_h {\bf y_r, p}\rangle. \end{aligned}\right. \end{equation*} However, it is easy to see that the $\bm\lambda$-subproblem is {not a simple projection problem} with respect to the variable $\bm\lambda$ since the mass matrix $M_h$ is not diagonal, thus there is no closed form solution for $\bm\lambda$. To overcome this difficulty, we can add a proximal term $\frac{1}{2\alpha}\|\bm\lambda- \tilde{\bm\lambda}^{k}\|_{W_h-M_h}^2$ to the $\bm\lambda$-subproblem. {Then} for the $\bm\lambda$-subproblem, we have \begin{equation*} {\bm\lambda}^{k}={\rm\Pi}_{[-\beta,\beta]}(\tilde{\bm\lambda}^{k}+W_h^{-1}M_h(\hat{\bf p}^k-\tilde{\bm\mu}^k-\tilde{\bm\lambda}^{k})). \end{equation*} Thus, we can choose $D_{1h}$ as follows \begin{equation*} \mathcal{D}_{1h}={\mathrm{sGS}\left(\mathcal{H}_h + \frac{1}{\alpha}\left[ \begin{array}{cc} W_h-M_h &\quad0 \\ 0 & \quad0 \\ \end{array} \right]\right) +\left( \frac{1}{\alpha}\left[ \begin{array}{cc} W_h-M_h &\quad0 \\ 0 & \quad0 \\ \end{array} \right]\right)} \end{equation*} Then, according to the {above choices} of $\mathcal{D}_{1h}$ and $\mathcal{D}_{2h}$, the detailed framework of our inexact sGS based majorized ABCD method for (\ref{eqn:discretized matrix-vector dual problem}) {is given} as follows. \begin{algorithm}[H] \caption{\textbf{(sGS-mABCD algorithm for (\ref{eqn:discretized matrix-vector dual problem}))}}\label{algo1:Full inexact ABCD algorithm for (Dh)} \textbf{Input}:{$(\tilde{\bm\lambda}^1, \tilde{\bf p}^1,\tilde{\bm\mu}^1)=({\bm\lambda}^0, {\bf p}^0,\bm\mu^0)\in {\rm dom} (\delta^*_{[a,b]})\times [-\beta,\beta]\times \mathbb{R}^{N_h}$. 
Set $k= 1, t_1= 1.$} \textbf{Output}:{$ ({\bm\lambda}^k, {\bf p}^k,{\bm\mu}^k)$} \begin{description} \item[Step 1] Compute \begin{eqnarray*} \hat{\bf p}^{k}&=&\arg\min\frac{1}{2}\|K_h {\bf p}-{M_h}{\bf y_{d}}\|_{M_h^{-1}}^2+ \frac{1}{2\alpha}\|{\bf p}-\tilde{\bm\lambda}^k-\tilde{\bm\mu}^k\|_{M_h}^2+\langle M_h {\bf y_r, p}\rangle,\\ {\bm\lambda}^{k} &=&\arg\min\delta_{[-\beta,\beta]}(\bm\lambda)+\frac{1}{2\alpha}\|\bm\lambda-(\hat{\bf p}^{k}-\tilde{\bm\mu}^k)\|_{M_h}^2+\frac{1}{2\alpha}\|\bm\lambda- \tilde{\bm\lambda}^{k}\|_{W_h-M_h}^2,\\ {\bf p}^{k}&=&\arg\min\frac{1}{2}\|K_h {\bf p}-{M_h}{\bf y_{d}}\|_{M_h^{-1}}^2+ \frac{1}{2\alpha}\|{\bf p}-{\bm\lambda}^k-\tilde{\bm\mu}^k\|_{M_h}^2+\langle M_h {\bf y_r, p}\rangle,\\ {\bm \mu}^{k}&=&\arg\min\delta^*_{[a,b]}(M_h\bm\mu)+\frac{1}{2\alpha}\|\bm\mu-({\bf p}^k-\bm\lambda^k)\|_{M_h}^2+\frac{1}{2\alpha}\|\bm\mu-\tilde{\bm\mu}^k\|^2_{\gamma M_hW_h^{-1}M_h-M_h}. \end{eqnarray*} \item[Step 2] Set $t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2}$ and $\beta_k=\frac{t_k-1}{t_{k+1}}$, compute \begin{eqnarray*} \tilde{\bm\lambda}^{k+1}= {\bm\lambda}^{k}+ \beta_{k}({\bm\lambda}^{k}-{\bm\lambda}^{k-1}),\quad \tilde{\bf p}^{k+1}={\bf p}^{k}+ \beta_{k}({\bf p}^{k}-{\bf p}^{k-1}), \quad\tilde{\bm\mu}^{k+1}={\bm\mu}^{k}+ \beta_{k}({\bm\mu}^{k}-{\bm\mu}^{k-1}). \end{eqnarray*} \item[\bf Step 3] If a termination criterion is not met, set $k:=k+1$ and go to Step 1. \end{description} \end{algorithm} Similarly, owing to Theorem \ref{imABCD convergence}, we can show that Algorithm \ref{algo1:Full inexact ABCD algorithm for (Dh)} also enjoys the following $O(1/k^2)$ iteration complexity. \begin{theorem}\label{sGS-imABCD convergence} Suppose that the solution set $\Theta_h$ of the problem {\rm(\ref{eqn:discretized matrix-vector dual problem})} is non-empty. Let ${\bf z}^*=(\bm\lambda^*,{\bf p}^*,{\bm\mu}^*)\in \Theta_h$. Let $\{{\bf z}^k\}:=\{({\bm\lambda}^k,{\bf p}^k,{\bm\mu}^k)\}$ be the sequence generated by Algorithm {\rm\ref{algo1:Full inexact ABCD algorithm for (Dh)}}. Then we have \begin{equation}\label{iteration complexity of discretized dual problem} \Phi_h({\bf z}^k)- \Phi_h({\bf z}^*)\leq \frac{4\tau_h}{(k+1)^2}, \; \forall k\geq 1, \end{equation} where $\Phi_h(\cdot)$ is the objective function of the dual problem {\rm(\ref{eqn:discretized matrix-vector dual problem})} and \begin{eqnarray} &&\tau_h=\frac{1}{2}\|{\bf z}^0- {\bf z}^*\|_{\mathcal{S}_h}^2, \label{tauh}\\ &&\mathcal{S}_h:=\frac{1}{\alpha}\left( \begin{array}{ccc} M_h(M_h+\alpha K_h M_h^{-1}K_h)^{-1}M_h+W_h-M_h & \ 0&\ 0\\ 0 & \ 0& \ 0\\ 0& \ 0& \ \gamma M_hW_h^{-1}M_h\\ \end{array} \right).\label{Sh} \end{eqnarray} Moreover, the sequence $\{({\bm\lambda}^k,{\bf p}^k,{\bm\mu}^k)\}$ generated by Algorithm {\rm\ref{algo1:Full inexact ABCD algorithm for (Dh)}} is bounded. \end{theorem} \section{Robustness with respect to $h$}\label{sec:4} In this section, we deal with the issue of how measures of the convergence behavior of the iteration sequence vary with the level of approximation. Such questions come under the category of mesh-independence results. We will establish the mesh independence of the majorized accelerated block coordinate descent (mABCD) method for optimal control problems. In what follows, we give one type of mesh-independence result for the mABCD method. {It states that the number of iterations $k$ after which the difference $\Phi_h({\bf z}^k)-\inf\Phi_h({\bf z})$ falls below a given tolerance $\epsilon$ is independent of the mesh size $h$.
} In order to show these results, let us first present some bounds on the Rayleigh quotients of $K_h$ and $M_h$; see \cite[Proposition 1.29 and Theorem 1.32]{spectralproperty} for more details. \begin{lemma}\label{spectral property} For the $\mathcal{P}_1$ approximation on a regular and quasi-uniform subdivision of $\mathbb{R}^n$ which satisfies Assumption {\rm\ref{regular and quasi-uniform triangulations}}, and for any ${\bf x}\in \mathbb{R}^{N_h}$, the mass matrix $M_h$ approximates the scaled identity matrix in the sense that \begin{equation*} c_1 h^2\leq \frac{{\bf x}^{T}M_h{\bf x}}{{\bf x}^{T}{\bf x}}\leq c_2 h^2 \quad {\rm if}\ n=2, \ {\rm and}\ c_1 h^3\leq \frac{{\bf x}^{T}M_h{\bf x}}{{\bf x}^{T}{\bf x}}\leq c_2 h^3 \quad {\rm if}\ n=3, \end{equation*} and the stiffness matrix $K_h$ satisfies \begin{equation*} d_1h^2\leq \frac{{\bf x}^{T}K_h{\bf x}}{{\bf x}^{T}{\bf x}}\leq d_2 \quad {\rm if}\ n=2, \ {\rm and }\ d_1h^3\leq \frac{{\bf x}^{T}K_h{\bf x}}{{\bf x}^{T}{\bf x}}\leq d_2 h \quad {\rm if}\ n=3, \end{equation*} where the constants $c_1$, $c_2$, $d_1$ and $d_2$ are independent of the mesh size $h$. \end{lemma} Based on Lemma \ref{spectral property}, we can easily obtain the following lemma. \begin{lemma}\label{spectral property of Gh} Let $G_h=M_h+\alpha K_h M_h^{-1}K_h$. Then there exist positive constants $u_1$, $u_2$, $l_1$, $l_2$ and $h_0$, such that for any $0<h<h_0$ and any ${\bf x}\in \mathbb{R}^{N_h}$, the matrix $G_h$ satisfies the following inequalities: \begin{equation}\label{spectral property of Gh12} \begin{aligned} &l_1 h^2\leq \frac{{\bf x}^{T}G_h{\bf x}}{{\bf x}^{T}{\bf x}}\leq u_1 \frac{1}{h^2} \quad {\rm if}\ n=2, \quad l_2 h^3\leq \frac{{\bf x}^{T}G_h{\bf x}}{{\bf x}^{T}{\bf x}}\leq u_2 \frac{1}{h} \quad\ {\rm if}\ n=3. \end{aligned} \end{equation} \end{lemma} Thus, based on Lemma \ref{spectral property} and Lemma \ref{spectral property of Gh}, it is easy to prove that there exists $h_0>0$ such that for any $0<h<h_0$, the matrix $M_hG_h^{-1}M_h$ satisfies the following properties: \begin{equation*}\label{property of MinvGM} \begin{aligned} &\lambda_{\max}(M_hG_h^{-1}M_h)=O(h^2) \quad {\rm for}\ n=2,\quad \lambda_{\max}(M_hG_h^{-1}M_h)=O(h^3) \quad {\rm for}\ n=3, \end{aligned} \end{equation*} where $ \lambda_{\max}(\cdot)$ denotes the largest eigenvalue of a given matrix. Furthermore, we have \begin{equation}\label{property of Sh} \begin{aligned} \lambda_{\max}(\mathcal{S}_h)&=\frac{1}{\alpha}\max\{ \lambda_{\max}(M_hG_h^{-1}M_h+W_h-M_h),\lambda_{\max}(\gamma M_hW_h^{-1}M_h)\}\\ &=\left\{\begin{aligned} & O(h^2) \quad {\rm for}\ n=2,\\ & O(h^3) \quad {\rm for}\ n=3, \end{aligned}\right. \end{aligned} \end{equation} where $\mathcal{S}_h$ is defined in (\ref{Sh}). In other words, the largest eigenvalue of the matrix $\mathcal{S}_h$ is uniformly bounded with respect to $h$, which implies that the ``discretized'' convergence factor $\tau_h$ can be bounded by a constant independent of the mesh size. Hence, this conclusion prompts us to analyse the mesh independence of the mABCD method. We present our first mesh-independence result for the mABCD method, in which we prove that the ``discretized'' convergence factor $\tau_h$ defined in Theorem \ref{sGS-imABCD convergence} approaches the ``continuous'' convergence factor $\tau$ defined in Theorem \ref{imABCD convergence for dual problem} in the limit $h\rightarrow 0$, and that the difference can be bounded in terms of the mesh size.
\begin{theorem}\label{mesh independent result1} Let Algorithm {\rm\ref{algo1:ABCD algorithm for (D)}} for the continuous problem {\rm(\ref{eqn:dual problem})} start from $z^0=({\lambda}^0, {p}^0,\mu^0)$ and Algorithm {\rm\ref{algo1:Full inexact ABCD algorithm for (Dh)}} for the discretized problem {\rm(\ref{eqn:discretized matrix-vector dual problem})} start from $z_h^0=({\lambda}_h^0, {p}_h^0,\mu_h^0)$, respectively. We take $z^*(x) \in(\partial\Phi)^{-1}(0)$ and $z_h^*(x)=\sum\limits_{i=1}^{N_h}z_i^*\phi_i(x)$, where the coefficients $(z_1^*, z_2^*,...,z_{N_h}^*)\in(\partial\Phi_h)^{-1}(0)$. Assume that $z_h^0=I_hz^0$, where $I_h$ is the nodal interpolation operator, and $\|z^*-z_h^*\|_{L^2(\Omega)}=O(h)$. Then there exist $h^*\in (0, \hat h]$ and a constant $C$, such that \begin{equation}\label{difference of tauh and tau} \tau_h\leq\tau+Ch \end{equation} for all $h \in(0, h^*]$. \begin{proof} From the definition of $\tau$ in Theorem \ref{imABCD convergence for dual problem}, we have \begin{equation}\label{defintion of tau} \begin{aligned} \tau&=\frac{1}{2\alpha}\|\mu^*-\mu^0\|^2_{L^2(\Omega)}+\frac{1}{2\alpha}\langle\lambda^*-\lambda^0, (\alpha A^*A+\mathcal{I})^{-1}(\lambda^*-\lambda^0)\rangle_{L^2(\Omega)}\\ &=\frac{1}{2\alpha}\|\mu^*-\mu^0\|^2_{L^2(\Omega)}+\frac{1}{2\alpha}\int_{\Omega}(\lambda^*-\lambda^0)q^1{~\rm dx}, \end{aligned} \end{equation} where $q^1$ is the weak solution of the following problem: \begin{eqnarray}\label{PDEs1} \nonumber&&{\rm Find}~(q^1,q^2)\in (H^1_0(\Omega))^2, {\rm such~ that}\\ &&\left\{\begin{aligned} &a(q^1, v) =\langle q^2, v\rangle_{L^2(\Omega)},\\ &\alpha a(q^2, v)+\langle q^1, v\rangle_{L^2(\Omega)}=\langle \lambda^*-\lambda^0, v\rangle_{L^2(\Omega)}, \quad\forall v\in H^1_0(\Omega), \end{aligned}\right. \end{eqnarray} where the bilinear form $a(\cdot,\cdot)$ is defined in (\ref{eqn:bilinear form}). Similarly, according to the definition of $\tau_h$ and Proposition \ref{eqn:martix properties}, we obtain \begin{equation}\label{defintion of tauh} \begin{aligned} \tau_h&=\frac{1}{2\alpha}\|\mu_h^*-\mu_h^0\|^2_{L^2(\Omega)}+\frac{1}{2\alpha}\int_{\Omega}(\lambda_h^*-\lambda_h^0)q^1_h{~\rm dx}+\frac{1}{2\alpha}\|I_h(\lambda_h^*-\lambda_h^0)^2-(\lambda_h^*-\lambda_h^0)^2\|_{L^1(\Omega)}, \end{aligned} \end{equation} where $q_h^1$ is the solution of the following problem, discretized by piecewise linear finite elements: \begin{eqnarray}\label{PDEs2} \nonumber&&{\rm Find}~(q_h^1,q_h^2), {\rm such~ that}\\ &&\left\{\begin{aligned} &a(q_h^1, v_h) =\langle q_h^2, v_h\rangle_{L^2(\Omega)},\\ &\alpha a(q_h^2, v_h)+\langle q_h^1, v_h\rangle_{L^2(\Omega)}=\langle \lambda_h^*-\lambda_h^0, v_h\rangle_{L^2(\Omega)}, \quad\forall v_h\in Y_h, \end{aligned}\right. \end{eqnarray} where $Y_h$ is defined in {\rm(\ref{state discretized space})}. In order to estimate the value of $\tau_h$, we define $\tilde{q}_h^1$ as the solution of the following discretized problem: \begin{eqnarray}\label{PDEs3} \nonumber&&{\rm Find}~(\tilde{q}_h^1,\tilde{q}_h^2), {\rm such~ that}\\ &&\left\{\begin{aligned} &a(\tilde{q}_h^1, v_h) =\langle \tilde{q}_h^2, v_h\rangle_{L^2(\Omega)},\\ &\alpha a(\tilde{q}_h^2, v_h)+\langle \tilde{q}_h^1, v_h\rangle_{L^2(\Omega)}=\langle \lambda^*-\lambda^0, v_h\rangle_{L^2(\Omega)}, \quad\forall v_h\in Y_h. \end{aligned}\right.
\end{eqnarray} Obviously, there exist $h^*\in (0, \hat h]$ and four constants $C_1$, $C_2$, $C_3$ and $C_4$, independent of $h$, such that for all $h \in(0, h^*]$, the following inequalities hold: \begin{equation}\label{triangle inequality} \begin{aligned} \|q^1-q_h^1\|_{L^2(\Omega)}&\leq\|q^1-\tilde{q}_h^1\|_{L^2(\Omega)}+\|q_h^1-\tilde{q}_h^1\|_{L^2(\Omega)}\\ &\leq C_1h^2\|\lambda^*-\lambda^0\|_{L^2(\Omega)}+C_2h^2\|\lambda_h^*-\lambda_h^0\|_{L^2(\Omega)}\\ &\leq C_1h^2\|\lambda^*-\lambda^0\|_{L^2(\Omega)}+C_2h^2(\|\lambda_h^*-\lambda^*\|_{L^2(\Omega)}+\|\lambda_h^0-\lambda^0\|_{L^2(\Omega)}+\|\lambda^*-\lambda^0\|_{L^2(\Omega)})\\ &\leq C_3h^2\|\lambda^*-\lambda^0\|_{L^2(\Omega)}+C_4h^3(\|\lambda^*\|_{L^2(\Omega)}+\|\lambda^0\|_{L^2(\Omega)}). \end{aligned} \end{equation} Thus, we can now estimate $\tau_h$ and get \begin{equation}\label{estimate tau}\small \begin{aligned} \tau_h=&\frac{1}{2\alpha}\|\mu_h^*-\mu^*+\mu^0-\mu_h^0+\mu^*-\mu^0\|^2_{L^2(\Omega)}+\frac{1}{2\alpha}\int_{\Omega}(\lambda^*-\lambda^0)q^1{~\rm dx}+\frac{1}{2\alpha}\int_{\Omega}(\lambda_h^*-\lambda^*)(q_h^1-q^1){~\rm dx}\\&+\frac{1}{2\alpha}\int_{\Omega}(\lambda^0-\lambda_h^0)(q_h^1-q^1){~\rm dx}+\frac{1}{2\alpha}\int_{\Omega}(\lambda^*-\lambda^0)(q_h^1-q^1){~\rm dx}+\frac{1}{2\alpha}\int_{\Omega}(\lambda^0-\lambda_h^0)q^1{~\rm dx}\\ &+\frac{1}{2\alpha}\int_{\Omega}(\lambda_h^*-\lambda^*)q^1{~\rm dx}+\frac{1}{2\alpha}\|I_h(\lambda_h^*-\lambda_h^0)^2-(\lambda_h^*-\lambda_h^0)^2\|_{L^1(\Omega)}\\ \leq&\frac{1}{2\alpha}\|\mu^*-\mu^0\|^2_{L^2(\Omega)}+\frac{1}{2\alpha}\int_{\Omega}(\lambda^*-\lambda^0)q^1{~\rm dx}+\frac{1}{2\alpha}\|\mu_h^*-\mu^*\|^2_{L^2(\Omega)}+\frac{1}{2\alpha}\|\mu^0-\mu_h^0\|^2_{L^2(\Omega)}\\ &+\frac{1}{2\alpha}\int_{\Omega}(\lambda_h^*-\lambda^*)(q_h^1-q^1){~\rm dx} +\frac{1}{2\alpha}\int_{\Omega}(\lambda^0-\lambda_h^0)(q_h^1-q^1){~\rm dx} +\frac{1}{2\alpha}\int_{\Omega}(\lambda^*-\lambda^0)(q_h^1-q^1){~\rm dx}\\ &+\frac{1}{2\alpha}\int_{\Omega}(\lambda^0-\lambda_h^0)q^1{~\rm dx} +\frac{1}{2\alpha}\int_{\Omega}(\lambda_h^*-\lambda^*)q^1{~\rm dx} +\frac{1}{2\alpha}\|I_h(\lambda_h^*-\lambda_h^0)^2-(\lambda_h^*-\lambda_h^0)^2\|_{L^1(\Omega)}\\ \leq&\tau+C_5h(\|\lambda^0\|_{L^2(\Omega)}+\|\lambda^*\|_{L^2(\Omega)})+C_6h^2(\|\mu^*\|_{L^2(\Omega)}+\|\mu^0\|_{L^2(\Omega)}+\|\lambda^0\|_{L^2(\Omega)}+\|\lambda^*\|_{L^2(\Omega)})+O(h^3)\\ \leq&\tau+Ch. \end{aligned} \end{equation} Thus, the proof is completed. \end{proof} \end{theorem} \section{Concluding remarks}\label{sec:5} In this paper, instead of solving the optimal control problem with $L^1$ control cost, we directly solve its dual, which is a multi-block unconstrained convex composite minimization problem. By taking advantage of the structure of the dual problem, and combining the majorized ABCD (mABCD) method with the recent advances in the inexact symmetric Gauss-Seidel (sGS) technique, we introduce the sGS-mABCD method to solve the dual problem. More importantly, one type of mesh-independence result for the mABCD method is proved, which asserts that asymptotically the infinite dimensional mABCD method and its finite dimensional discretization have the same convergence property, in the sense that the worst-case iteration complexity of the mABCD method remains nearly constant as the discretization is refined.
\section{Algorithms} \subsection{Algorithm A} \subsubsection{Complexity} The pseudo code of the algorithm would be too long to be displayed on a single page; thus, we divided it into two parts. The first part, corresponding to the preparation of the weighted network between the tags and the building of local hierarchies, is given in Algorithm \ref{algA_first}. \begin{algorithm}[!ht] \caption{Algorithm A, 1$^{\rm st}$ part: building local hierarchies.} \label{algA_first} \begin{algorithmic}[1] \ForAll{objects: object1} \ForAll{tags appearing on object1: tag1} \ForAll{tags appearing on object1: tag2} \State coappearances(tag1,tag2) += 1 \EndFor \EndFor \EndFor \ForAll{tags: tag1} \State max= maximal coappearances(tag1,tag2) \ForAll{tags: tag2} \If{coappearances(tag1, tag2) $>=$ $\omega$ * max} \State calc zscore(tag1,tag2) \State strongpartners(tag1, tag2) = zscore(tag1, tag2) \EndIf \EndFor \EndFor \ForAll{tags: tag1} \State parent = undef \ForAll{strongpartners of tag1 sorted to descending order: tag2} \If{parent = undef and not exists strongpartners(tag2, tag1)} \State parent = tag2 \EndIf \EndFor \EndFor \end{algorithmic} \end{algorithm} By assuming that the number of tags on one object is $\mathcal{O}(1)$, the number of operations needed for generating the weighted co-occurrence network between the tags can be given by the number of objects, $Q$, as $\mathcal{O}(Q)$. According to our experience, the resulting co-occurrence network between the tags is usually sparse; thus, the number of links in the network between the tags, $M$, and the number of tags, $N$, are similar in magnitude, $\mathcal{O}(M)=\mathcal{O}(N)$, and the average number of links of the tags is $\mathcal{O}(1)$. Accordingly, the individual thresholding of the network based on the strongest link on each tag also needs $\mathcal{O}(N \log N)$ operations. Similarly, the calculation of the $z$-score and choosing the in-neighbor with the highest value as a parent also need $\mathcal{O}(M \log M)$ operations. In the next phase, the smaller isolated subgraphs under the local roots have to be assembled into a single hierarchy, as shown in Algorithm \ref{algA_sec}; before turning to that phase, we give a compact illustrative sketch of the first part below.
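The following Python-style sketch (illustrative only, with the $z$-score left as an abstract callable since we do not repeat its definition here) condenses Algorithm \ref{algA_first} into runnable form:
\begin{verbatim}
from collections import defaultdict
from itertools import combinations

def build_local_hierarchy(objects, omega, zscore):
    # objects : iterable of tag lists (one list per tagged object)
    # omega   : local weight threshold in [0, 1]
    # zscore  : callable (tag1, tag2, cooc) -> float (hypothetical signature)
    cooc = defaultdict(lambda: defaultdict(int))
    for tags in objects:                        # weighted co-occurrence network
        for t1, t2 in combinations(set(tags), 2):
            cooc[t1][t2] += 1
            cooc[t2][t1] += 1

    strong = defaultdict(dict)                  # individually thresholded links
    for t1, partners in cooc.items():
        cutoff = omega * max(partners.values())
        for t2, w in partners.items():
            if w >= cutoff:
                strong[t1][t2] = zscore(t1, t2, cooc)

    parent = {}                                 # local parent selection
    for t1, partners in strong.items():
        for t2, _ in sorted(partners.items(), key=lambda kv: -kv[1]):
            if t1 not in strong.get(t2, {}):
                parent[t1] = t2
                break
    return parent
\end{verbatim}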
\begin{algorithm}[!ht] \caption{Algorithm A, 2$^{\rm nd}$ part: assembly into a global hierarchy.} \label{algA_sec} \begin{algorithmic}[1] \If{there are more components} \ForAll{roots: root} \State h(root) = entropy of root \ForAll{tags in the component of root: tag1} \State component(tag1) = root \EndFor \EndFor \State global$\_$root = root with highest entropy \ForAll{roots except the global: root} \State suggested$\_$parent(root) = undef \ForAll{coappearing tags sorted to descending coappearances: tag2} \If{suggested$\_$parent(root) = undef and component(tag2) is not root} \State suggested$\_$parent(root) = tag2 \EndIf \EndFor \EndFor \ForAll{roots appearing in suggested$\_$parent: root} \State tag1 = root \State empty visited \While{does not exists visited(tag1) and exists suggested$\_$parent(tag1)} \State tag1 = component(suggested$\_$parent(tag1)) \State visited(tag1) = 1 \EndWhile \If{exists visited(tag1)} \ForAll{roots in visited: root2} looped(root2) = 1 delete suggested$\_$parent(root2) \EndFor \EndIf \EndFor \ForAll{roots in looped, sorted to descending order of h: root} \ForAll{tags coappearing with root: tag1} \If{not exists suggested$\_$parent(root)} \State check whether tag1 is below root \If{tag1 is not below root} suggested$\_$parent(root) = tag1 \EndIf \EndIf \EndFor \If{not exists suggested$\_$parent(root)} suggested$\_$parent(root) = global$\_$root \EndIf \EndFor \EndIf \end{algorithmic} \end{algorithm} Choosing the global root of the hierarchy needs $\mathcal{O}(N)$ operations, and similarly, choosing the parent of a local root also needs at most $\mathcal{O}(N)$ operations. During this process we need to detect (and correct) possible newly created loops, requiring at most $\mathcal{O}(N)$ operations. Based on the above, the resulting overall complexity of algorithm A is $\mathcal{O}(Q) + \mathcal{O}(M \log M)= \mathcal{O}(Q)+\mathcal{O}(N \log N)$, where we assumed that the co-occurrence network between the tags is sparse, i.e., $\mathcal{O}(M)=\mathcal{O}(N)$. \subsubsection{Optimizing the parameter $\omega$} \label{sect:optim_omega} The parameter $\omega \in[0,1]$ in algorithm A is corresponding to the local weight threshold used for throwing away weak connections in the tag co-occurrence network. In order to find the optimal value for $\omega$, we measured the LMI between the reconstructed hierarchy and the exact hierarchy as a function of $\omega$ in case of the protein function data set. The results of this experiment are shown in Fig.\ref{fig:opt_omega}. Although $I_{\rm lin}$ is showing only minor changes over the whole range of possible $\omega$ values, a maximal plateau can still be observed between $\omega=0.3$ and $\omega=0.55$. Based on this, throughout the experiments detailed in our paper, we used algorithm A with $\omega=0.4$. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.8\textwidth]{alg_A_omega_valid} \end{center} \begin{flushleft} \caption{ {\bf Optimizing the parameter $\omega$.} We show the LMI as a function of $\omega$ for the protein function data set. 
\label{fig:opt_omega} } \end{flushleft} \end{figure} \subsection{Algorithm B} \subsubsection{Complexity} \begin{algorithm}[!ht] \caption{Algorithm B} \label{algB} \begin{algorithmic}[1] \ForAll{tags: tag1} \ForAll{tags: tag2} \State calc zscore(tag1, tag2) \EndFor \EndFor \ForAll{tags: tag1} \ForAll{tags: tag2} \If{zscore(tag1, tag2) $>$ threshold$\_$B or coappearances(tag1, tag2) $>=$ 0.5 * objects(tag1) or coappearances(tag1, tag2) $>=$ 0.5 * objects(tag2)} \State M(tag1, tag2) = coappearances(tag1, tag2) \State strength(tag1) += coappearances(tag1, tag2) \EndIf \EndFor \EndFor \ForAll{tags: tag1} \State centrality(tag1) = strength(tag1) \EndFor \For{i=1, i$<=$100} \State sum = 0 \ForAll{tags: tag1} \State temp$\_$centrality(tag1) = 0 \ForAll{tags: tag2} \State temp$\_$centrality(tag1) += M(tag1, tag2) * centrality(tag2) \EndFor \State sum += temp$\_$centrality(tag1) \EndFor \ForAll{tags: tag1} \State centrality(tag1) = temp$\_$centrality(tag1) / sum \EndFor \EndFor \For{tags sorted to ascending centralities: tag1} \State empty score; \For{coappearing partners of tag1: tag2} \State score(tag2) = zscore(tag1, tag2) \EndFor \For{descendants of tag1: desc} \For{coappearing partners of desc: tag2} \If{tag2 coappears with tag1 and centrality(tag2) > centrality(tag1) and (zscore(tag1, tag2) $>$ threshold$\_$B or coappearances(tag1, tag2) $>=$ 0.5 * objects(tag1)) and (zscore(desc, tag2) $>$ threshold$\_$B or coappearances(desc, tag2) $>=$ 0.5 * objects(desc))} \State score(tag2) += zscore(desc, tag2) \EndIf \EndFor \EndFor \If{score is not empty} \State parent(tag1) = highest scoring tag \EndIf \EndFor \end{algorithmic} \end{algorithm} The pseudo code for the algorithm is given in Algorithm \ref{algB}. The preparation of the tag co-occurrence network is the same as in case of algorithm A, with a complexity of $\mathcal{O}(Q)$, and similarly, the calculation of the $z$-score needs $\mathcal{O}(M)$ operations. To evaluate the eigenvector centrality, we simply use the power iteration method on the filtered co-appearance matrix (see the pseudo code), which needs $\mathcal{O}(N)$ operations for the typical case of a sparse matrix. The hierarchy is assembled bottom up, and the calculation of the scores for the possible parents of a given tag requires $\mathcal{O}(N\cdot\log N)$ operations, assuming that the structure of the complete DAG is similar to a tree with a constant branching number. (In case it is chain-like, this is modified to $\mathcal{O}(N^2)$, whereas for a star-like topology, it is only $\mathcal{O}(N)$.) The resulting overall complexity of the algorithm is $\mathcal{O}(Q) + \mathcal{O}(N\cdot\log N)$. \subsubsection{Optimizing the $z$-score threshold} The $z$-score threshold is an important parameter of algorithm B, which is used for pruning the network of co-occurrences between the tags by throwing away irrelevant connections. In order to optimize this parameter, we have run tests on the ``hard'' synthetic data set, introduced in Section ``Results on synthetic data'' in the main paper. The reason for this choice instead of, e.g., the protein function data set as in Sect.\ref{sect:optim_omega}, is that algorithm B showed the best performance on this data set. In Fig.\ref{fig:opt_z}.\ we show the LMI between the reconstructed hierarchy and the exact hierarchy as a function of the $z$-score threshold $z^*$.
Although the obtained curve is rather flat in most of the examined region, setting the threshold to $z^*=10$ in general seems as a good choice: below $z^*=5$ the quality drops down, whereas no significant increase can be observed in $I_{\rm lin}$ between $z^*=10$ and $z^*=20$. By choosing $z^*=10$, we ensure good quality, and also avoid throwing away too many connections. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.8\textwidth]{alg_B_z_thresh_valid_2} \end{center} \begin{flushleft} \caption{ {\bf Optimizing the $z$-score threshold.} We show the LMI as a function of the $z$-score threshold for the ``hard'' synthetic data set. \label{fig:opt_z} } \end{flushleft} \end{figure} \section{Normalized mutual information} \subsection{NMI by partitioning of the tags} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.8\textwidth]{NMI_illustr_h6} \end{center} \begin{flushleft} \caption{ {\bf Mapping from hierarchies to communities.} a) A simple intuitive mapping from the DAG to a communities of the tags in the DAG is given by nested sets, as shown here for $\mathcal{G}_1$ and $\mathcal{G}_2$, resulting in partitions $\alpha_1$ and $\alpha_2$. b) If we use instead communities given by the union of all descendants from non-leaf tags, (always excluding the given tag itself), the NMI given by (\ref{eq:NMI_com}) becomes equivalent to the NMI defined for hierarchies in Eq.(1) in the main paper. \label{fig:NMI_com} } \end{flushleft} \end{figure} As mentioned in the main paper, a very important application of the concept of the NMI is given in community detection, where this measure can be used to quantify the similarity between partitions of the same network into communities by two alternative methods \cite{Danon_mutinfo,Lancichinetti_mutinfo}. The formula providing the NMI between community partitions $\alpha$ and $\beta$ can be given as\begin{equation} I_{\alpha,\beta}=\frac{-2\sum\limits_{i=1}^{C_{\alpha}}\sum\limits_{j=1}^{C_{\beta}}N_{ij}\ln\left(\frac{N_{ij}N}{N_iN_j}\right)}{\sum\limits_{i=1}^{C_{\alpha}}N_i\ln \left(\frac{N_i}{N}\right)+\sum\limits_{j=1}^{C_{\beta}}N_j\ln \left(\frac{N_j}{N}\right)}, \label{eq:NMI_com} \end{equation} where $C_{\alpha}$ and $C_{\beta}$ denote the number of communities in the two partitions, $N_i$ and $N_j$ stand for the number of nodes in communities $i$ and $j$ respectively, with $N_{ij}$ giving the number of common nodes in $i$ and $j$, and finally, $N$ denoting the total number of nodes in the network. This measure can be used e.g., when judging the quality of a community finding method run on a benchmark for which the ground truth communities are known. Meanwhile, (\ref{eq:NMI_com}) is in complete analogy with our definition of the NMI for a pair of hierarchies, (Eq.(1) in the main paper): if we convert the hierarchies to be compared into community partitions in an appropriate way, the two measures become equivalent. Probably the most natural idea for a mapping from a DAG to communities of the tags in the DAG is turning the original ``order'' hierarchy represented by the DAG into a ``containment'' hierarchy of nested sets, as shown in Fig.\ref{fig:NMI_com}a., (with each set corresponding to the union of tags in a given branch of the DAG). However, by applying (\ref{eq:NMI_com}) to the partitions obtained in this way we obtain different results compared to Eq.(1) in the main paper, and the resulting similarity measure does not approach 0 even for independent random DAGs. 
The reason for this effect is that leafs in the DAG provide communities consisting of single nodes, and due to the relatively large number of leafs in a general DAG, we always obtain a non-vanishing portion of exactly matching communities. The mapping from a DAG to communities providing results equivalent to our NMI definition is obtained by associating with every tag in the DAG the union of its descendants, excluding the tag itself (see Fig.\ref{fig:NMI_com}b for illustration). This way the leafs appear only in the communities corresponding to their ancestors, thus, the emergence of a large number of communities with only a single member is avoided. \subsection{Gene Ontology DAG} In Fig.1. in the main paper we have examined the behavior of the NMI between a binary tree and its randomized counterpart as a function of the fraction of rewired links. Here we show similar results obtained for the exact hierarchy of our protein data set, obtained from the Gene Ontology \cite{GO}. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.7\textwidth]{NMI_GO_decay} \end{center} \begin{flushleft} \caption{ {\bf NMI decay for the exact hierarchy of protein tags and its randomized counterpart}. We plotted $I$ as given in Eq.(1) of the main paper as a function of the randomly rewired links, $f$. The three different curves correspond to rewiring the links in reverse order according to their position in the hierarchy (purple circles), rewiring in random order (blue squares) and rewiring in the order of the position in the hierarchy (green triangles). The red lines illustrate the calculation of the linearized mutual information for the reconstruction result obtained from algorithm A. \label{fig:GO_DAG_rand} } \end{flushleft} \end{figure} In Fig.\ref{fig:GO_DAG_rand}. the NMI defined in Eq.(1) of the main paper is shown for the exact hierarchy and its randomized counterpart as a function of the randomly rewired links, $f$. The three different curves correspond to three different orders in which the links were chosen for the rewiring: in case of the purple curve, we started the rewiring with links pointing to leafs and continued in reverse order according to the hierarchy; in case of the blue curve, the links were chosen in random order; while in case of the green curve, we started the rewiring at the top of the hierarchy and continued in the order according to the hierarchy. Similarly to Fig.1. in the main paper, all three curves decay to 0 as $f\rightarrow 1$, thus, the similarity becomes 0 when the compared DAGs become independent. However, the behavior in the small and medium $f$ regime is rather different: the green curve drops below $I_{\rm GO,rand}=0.5$ already at $f=0.05$, while the blue curve shows a moderate decrease and the purple curve decays even more mildly. Similarly to Fig.1. in the main paper, this justifies our statement that the NMI is sensitive also to the position of the links in the hierarchy: rewiring links high in the hierarchy has a larger effect on the similarity compared to rewiring links close to the leafs. Interestingly, in the medium $f$ regime a crossover can be observed between the green and the blue curve. The possible explanation for this effect lies in the non-trivial structure of the original DAG, which is neither random nor regular. The red lines in Fig.\ref{fig:GO_DAG_rand}.
demonstrate the calculation of the linearized mutual information for the results obtained from algorithm A: The obtained NMI value of $I_{\rm e,r}=0.37$ between the output of the algorithm and the exact DAG is projected to the $f$ axis using the blue curve, resulting in $f^*=0.22$. The linearized mutual information, $I_{\rm lin}$ is given by $1-f^*$, resulting $I_{\rm lin}=0.78$. \section{Further results on Flickr and IMDb} \subsection{Additional samples from the Flickr hierarchy} The exact hierarchy between the tags appearing in Flickr is not known, thus, the quality of the extracted hierarchy can be judged only by ``eye'', i.e., by looking at smaller subgraphs, whether they make sense or not. In Fig.3. of the main paper we have already shown a part of the branch under ``reptile'' in the hierarchy obtained from algorithm B. Here we show further examples in the same manner. In Fig.\ref{fig:winter}. we depict a part of the hierarchy under the tag ``winter'', with very reasonable descendants like ``snow'', ``ski'', ``cold'', ``ice'', etc. Similarly, in Fig.\ref{fig:rodent}. we show a part of the descendants of ``rodent'', displaying again a rather meaningful hierarchy. \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{winter2_onlydescvotes-z} \end{center} \begin{flushleft} \caption{ {\bf Partial subgraph of the descendants of ``winter'' in the hierarchy between Flickr tags obtained from algorithm B.} Stubs (in dashed line) signal further descendants not shown in the figure, and the size of the nodes indicate the total number of descendants. \label{fig:winter} } \end{flushleft} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.7\textwidth]{rodent2_onlydescvotes-z} \end{center} \begin{flushleft} \caption{ {\bf A part of the descendants of ``rodent'' in the hierarchy between Flickr tags.} Similarly to Fig.\ref{fig:winter}., the overall hierarchy behind the subgraph shown here was obtained from algorithm B. Stubs (in dashed line) signal further descendants not shown in the figure, and the size of the nodes indicate the total number of descendants. \label{fig:rodent} } \end{flushleft} \end{figure} \subsection{Samples from the hierarchies extracted with the other methods} For comparison with Figs.3-4.\ in the main paper, here in Figs.\ref{fig:reptile_alg_A}-\ref{fig:murder_condprob}.\ we show the corresponding parts from the hierarchies extracted with algorithm A, the method by P.~Heymann \& H.~Garcia-Molina and the algorithm by P.~Schmitz. Since the overall structure of the hierarchies is varying over the different algorithms, naturally, the set of tags appearing in these figures is somewhat different compared to Figs.3-4.\ in the main paper. I.e., tags in direct ancestor-descendant relation according to algorithm B can be classified into different branches by an other algorithm or siblings may become unrelated etc. in the output of another method. Therefore, our strategy when preparing Figs.\ref{fig:reptile_alg_A}-\ref{fig:murder_condprob}.\ was to choose the largest branch, containing the most common tags with Figs.3-4.\ in the main paper. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\textwidth]{reptile_algA} \end{center} \begin{flushleft} \caption{ {\bf A part of the descendants of ``reptile'' and ``lizard'' in the hierarchy between Flickr tags obtained with algorithm A.} Stubs (in dashed line) signal further descendants not shown in the figure, and the size of the nodes indicate the total number of descendants. 
\label{fig:reptile_alg_A} } \end{flushleft} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{reptile_centrality} \end{center} \begin{flushleft} \caption{ {\bf A part of the descendants of ``snake'' in the hierarchy between Flickr tags obtained with the method by P.~Heymann \& H.~Garcia-Molina.} Stubs (in dashed line) signal further descendants not shown in the figure, and the size of the nodes indicate the total number of descendants. \label{fig:reptile_centrality} } \end{flushleft} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{reptile_condprob} \end{center} \begin{flushleft} \caption{ {\bf Samples from the small hierarchies between Flickr tags obtained with the algorithm by P.~Schmitz.} Triangular shaped nodes represent local roots. These were chosen from tags appearing in Fig.3.\ in the main paper. \label{fig:reptile_condprob} } \end{flushleft} \end{figure} In Figs.\ref{fig:reptile_alg_A}-\ref{fig:reptile_condprob}.\ we show samples corresponding to Fig.3.\ in the main paper, obtained from the hierarchies extracted for the Flickr tags. Interestingly, in case of algorithm A, (Fig.\ref{fig:reptile_alg_A}.), the tag ``lizard'' and ``reptile'' are classified into different branches. Meanwhile, in the subgraph obtained from the algorithm by P.~Heymann \& H.~Garcia-Molina, (Fig.\ref{fig:reptile_centrality}), the tag ``snake'' has been chosen to be the direct ancestor of ``reptile''. Apart from that, the hierarchy of the tags is rather similar to that shown in Fig.3.\ in the main paper. In case of the algorithm by P.~Schmitz, the obtained result was actually composed of many distinct small hierarchies, with the tags given in Fig.3.\ in the main paper spreading over a large number of different components. Thus, we included a larger set of these small hierarchies in Fig.\ref{fig:reptile_condprob}.\ instead of a single larger subgraph as in Figs.\ref{fig:reptile_alg_A}-\ref{fig:reptile_centrality}. \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{murder_algA} \end{center} \begin{flushleft} \caption{ {\bf A part of the descendants of ``blood'' in the hierarchy between IMDb tags obtained with algorithm A.} Stubs (in dashed line) signal further descendants not shown in the figure, and the size of the nodes indicate the total number of descendants. \label{fig:murder_alg_A} } \end{flushleft} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.8\textwidth]{murder_centrality} \end{center} \begin{flushleft} \caption{ {\bf A part of the descendants of ``murder'' in the hierarchy between IMDb tags obtained with the method by P.~Heymann \& H.~Garcia-Molina.} Stubs (in dashed line) signal further descendants not shown in the figure, and the size of the nodes indicate the total number of descendants. \label{fig:murder_centrality} } \end{flushleft} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{murder_condprob} \end{center} \begin{flushleft} \caption{ {\bf Samples from the small hierarchies between IMDb tags obtained with the algorithm by P.~Schmitz} Triangular shaped nodes represent local roots. These were chosen from tags appearing in Fig.4.\ in the main paper. \label{fig:murder_condprob} } \end{flushleft} \end{figure} In Figs.\ref{fig:murder_alg_A}-\ref{fig:murder_condprob}.\ we show samples from the hierarchies obtained for the IMDb tags. 
In case of algorithm A, (Fig.\ref{fig:murder_alg_A}.), we display the branch under ``blood'', as most of its descendants appear also on Fig.4 in the main paper, while the tag ``murder'' is missing from the figure, since it was sorted into a different branch. The subgraph shown for the method by P.~Heymann \& H.~Garcia-Molina, (Fig.\ref{fig:murder_centrality}.), has similar features compared to Fig.4 in the main paper, however, the direct ancestor-descendant relation between ``murder'' and ``death'' has been reversed. Finally, the results for the algorithm by P.~Schmitz are again very dispersed, thus, we included more than one small subgraph in Fig.\ref{fig:murder_condprob}. \section{Synthetic benchmark} \subsection{Pseudo code} In Algorithm \ref{benchmark}. we briefly sketch the pseudo code of the preparation of the synthetic tagged data in our benchmark system. As explained in the main paper, the basic idea is to use a random walk process on the pre-defined hierarchy for ensuring the higher frequency of co-occurrences between more closely related tags. Beside the hierarchy between the tags, the following parameters are also assumed to be pre-defined: the number of virtual objects to be generated, the frequency distribution of the tags, the distribution of the number of tags on the objects and the distribution of the random walk lengths. \begin{algorithm}[!ht] \caption{Generating synthetic data based on random walk} \label{benchmark} \begin{algorithmic}[1] \ForAll{virtual objects} \State draw tag t$_1$ at random according to the tag frequency distribution \State assign t$_1$ to the virtual object \State draw number of tags n$_{\rm T}$ at random from the distribution of the number of tags on the objects \ForAll{i=2, i $<=$ n$_{\rm T}$} \If{random number r $< p_{\rm RW}$} \State draw random walk length $l_{\rm RW}$ at random from the random walk length distribution \State set tag t$_{\rm i}=$t$_1$ \ForAll{j=1, j $<= l_{\rm RW}$} \State random walk on the pre-defined hierarchy, ignoring the link directions: \State new tag t$_{\rm j}:=$ random neighbor of t$_{\rm i}$ \State set t$_{\rm i}=$t$_{\rm j}$ \EndFor \State assign t$_{\rm i}$ to the virtual object \Else \State draw tag t$_{\rm i}$ at random according to the tag frequency distribution \State assign t$_{\rm i}$ to the virtual object \EndIf \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsection{Further tests based on the ``easy'' parameter settings} \label{sect:test_easy} In the main paper we have shown that when the frequency of tags is decreasing linearly as a function of the depth in the hierarchy, the synthetic benchmark becomes ``easy'', and an almost perfect reconstruction becomes possible. As an illustration, in Fig.\ref{fig:easy_bench_result}a-b we show parts from the exact DAGs, (binary trees of 1023 tags), used for testing algorithm A and algorithm B, respectively. In Fig.\ref{fig:easy_bench_result}c we display the corresponding subgraph from the hierarchy obtained from algorithm A. The result is quite good, where the majority of the links are exactly matching, (colored green), while the rest are acceptable (shown in orange). However in case of algorithm B the chosen part of the reconstruction is perfect, as shown in Fig.\ref{fig:easy_bench_result}d, with only exactly matching links. 
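Recall that every virtual object in these tests is produced by the random-walk procedure of Algorithm \ref{benchmark}. Purely as an illustration (with hypothetical helper names and distributions, not the actual code used to generate the benchmark), creating a single object could be sketched as follows:
\begin{verbatim}
import random

def generate_object(tags, tag_weights, hierarchy_neighbors,
                    n_tags_dist, walk_len_dist, p_rw):
    # tags / tag_weights  : tag list and their frequency weights
    # hierarchy_neighbors : dict tag -> neighbors in the pre-defined hierarchy
    #                       (link directions ignored)
    # n_tags_dist, walk_len_dist : callables returning the number of tags on
    #                       the object and a random-walk length
    # p_rw                : probability of generating a tag via a random walk
    first = random.choices(tags, weights=tag_weights, k=1)[0]
    assigned = [first]
    for _ in range(n_tags_dist() - 1):
        if random.random() < p_rw:
            t = first
            for _ in range(walk_len_dist()):      # walk on the hierarchy
                t = random.choice(hierarchy_neighbors[t])
            assigned.append(t)
        else:
            assigned.append(random.choices(tags, weights=tag_weights, k=1)[0])
    return assigned
\end{verbatim}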
\begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{benchm_easy_together} \end{center} \begin{flushleft} \caption{ {\bf Comparison between the exact hierarchy and the reconstructed hierarchy in case of the ``easy'' computer generated benchmark}. a) A subgraph from the exact hierarchy for testing algorithm A. b) A subgraph from the exact hierarchy for testing algorithm B. c) The subgraph corresponding to a) in the result obtained from algorithm A. Exactly matching links are shown in green, acceptable links are colored orange. d) The subgraph corresponding to b) in the result obtained from algorithm B, showing a perfect match. \label{fig:easy_bench_result} } \end{flushleft} \end{figure} According to the results discussed in the main paper, when the tag frequencies are independent of the position in the hierarchy and have a power-law distribution, the benchmark becomes hard. Here we examine the effect of changes in the other parameters of the benchmark. First, our starting point is the ``easy'' parameter setting, while the results obtained for the ``hard'' parameter setting are discussed in Sect.\ref{sect:test_hard}. As mentioned in the main paper, the most important feature of the ``easy'' parameter settings is that the frequency of the tags is decreasing linearly as a function of the level depth in the hierarchy. The other parameters were set as follows: an average number of 3 co-occurring tags were generated on altogether 2,000,000 hypothetical objects, with random walk probability of $p_{\rm RW}=0.5$ and random walk lengths chosen from a uniform distribution between 1 and 3, (the results are shown in Table 2. in the main paper). First we study the effect of changing the length of the random walks. \begin{table}[!ht] \caption{ \bf{Quality measures of the reconstructed hierarchies with random walk length of 1 step.}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 68\% &95\% & 0\% & 5\%& 0\% & 47\% & 86\% \\ \hline algorithm B & 100\% & 100\% & 0\% & 0\% & 0\% & 100\% & 100\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 99\% & 99\% & 1\% & 0\%& 0\%& 92\% & 99\% \\ \hline P.~Schmitz & 0\% & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% \\ \hline \end{tabular} \begin{flushleft} The setting of the other parameters were exactly the same as in case of the ``easy'' synthetic data set discussed in the main paper. \end{flushleft} \label{table:rw_step_1} \end{table} In Table \ref{table:rw_step_1}. we show the results when we decrease the random walk length to only a single step: according to the listed measures, the quality of the reconstruction for Algorithm B, the method by P.~Heymann \& H.~Garcia-Molina and the algorithm by P.~Schmitz remain exactly or almost exactly the same. In case of Algorithm A the quality indicators are somewhat lower compared to Table 2. in the main text, however, solely $I_{\rm e,r}$ is changed significantly. 
\begin{table}[!ht]
\caption{ \bf{Quality measures of the reconstructed hierarchies with a maximum random walk length of 5 steps.}}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
 & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline
algorithm A & 95\% &100\% & 0\% & 0\%& 0\% & 99\% & 100\% \\ \hline
algorithm B & 100\% & 100\% & 0\% & 0\% & 0\% & 100\% & 100\% \\ \hline
P.~Heymann \& H.~Garcia-Molina & 99\% & 99\% & 0\% & 0\%& 0\%& 93\% & 99\% \\ \hline
P.~Schmitz & 0\% & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% \\ \hline
\end{tabular}
\begin{flushleft} The settings of the other parameters were exactly the same as in case of the ``easy'' synthetic data set discussed in the main paper.
\end{flushleft}
\label{table:rw_step_5}
\end{table}
In Table \ref{table:rw_step_5}.\ we show the results when the length of the random walks was chosen from a uniform distribution between 1 and 5, and the other parameters of the data set were left the same. Again, algorithm B, the method by P.~Heymann \& H.~Garcia-Molina and the algorithm by P.~Schmitz produce the same (or almost the same) results as presented in Table 2.\ of the main text. The results from algorithm A are now better compared to the original settings, reaching almost the same quality as algorithm B. In conclusion, the change in the length of the random walks has only a negligible effect for three out of the four methods studied here, and a mild effect on the results from the fourth one. Next, we examine the effect of reducing the number of generated virtual objects.
\begin{table}[!ht]
\caption{ \bf{Quality measures of the reconstructed hierarchies with 200,000 hypothetical objects}}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
 & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline
algorithm A & 67\% &97\% & 1\% & 2\%& 0\% & 86\% & 98\% \\ \hline
algorithm B & 98\% & 98\% & 2\% & 0\% & 0\% & 100\% & 100\% \\ \hline
P.~Heymann \& H.~Garcia-Molina & 98\% & 98\% & 2\% & 0\%& 0\%& 93\% &99\% \\ \hline
P.~Schmitz & 0\% & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% \\ \hline
\end{tabular}
\begin{flushleft} The settings of the other parameters were exactly the same as in case of the ``easy'' synthetic data set discussed in the main paper.
\end{flushleft}
\label{table:200_obj}
\end{table}
In Table \ref{table:200_obj}.\ we show the results obtained when we generated only 200,000 virtual objects instead of 2,000,000, (and otherwise used the same parameters as in case of the ``easy'' synthetic data set). Not surprisingly, the quality of the methods shows a slight decrease, as the hierarchy has to be reconstructed based on less information. However, the effect is only minor.
\begin{table}[!ht]
\caption{ \bf{Quality measures of the reconstructed hierarchies with 50,000 hypothetical objects}}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
 & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline
algorithm A & 57\% &89\% & 3\% & 8\%& 0\% & 75\% & 95\% \\ \hline
algorithm B & 70\% & 81\% & 13\% & 6\% & 0\% & 76\% & 95\% \\ \hline
P.~Heymann \& H.~Garcia-Molina & 87\% & 91\% & 5\% & 4\%& 0\%& 94\% &99\% \\ \hline
P.~Schmitz & 2\% & 3\% & 0\% & 0\% & 97\% & 1\% & 11\% \\ \hline
\end{tabular}
\begin{flushleft} The settings of the other parameters were exactly the same as in case of the ``easy'' synthetic data set discussed in the main paper.
\end{flushleft} \label{table:50_obj} \end{table} When reducing the number of objects further down to 50,000, the drop in the quality measures becomes more pronounced, as presented in Table \ref{table:50_obj}. Interestingly, the algorithm by P.~Schmitz shows a different behavior, with a slight increase in quality. As mentioned in the main paper, the study of the reasons for the outlying behavior of this algorithm on the synthetic data is out of the scope of present work. \begin{table}[!ht] \caption{ \bf{Quality measures of the reconstructed hierarchies with random walk probability $p_{\rm RW}=0.1$.}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 1\% &61\% & 0\% & 39\%& 0\% & 0\% & 1\% \\ \hline algorithm B & 89\% & 90\% & 8\% & 2\% & 0\% & 64\% & 92\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 88\% & 99\% & 1\% & 0\%& 0\%& 68\% &93\% \\ \hline P.~Schmitz & 0\% & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% \\ \hline \end{tabular} \begin{flushleft} The setting of the other parameters were exactly the same as in case of the ``easy'' synthetic data set discussed in the main paper. \end{flushleft} \label{table:rw_prob} \end{table} Finally, in Table \ref{table:rw_prob}. we show the quality measures obtained when the random walk probability was reduced from $p_{\rm RW}=0.5$ to $p_{\rm RW}=0.1$, (and the other parameters were the same as in case of the ``easy'' synthetic data set). Similarly to the case of reducing the number of objects, this provides a more difficult task for the tag hierarchy extracting algorithms, as most of the tags are chosen at random on the objects. Accordingly, the quality measures are decreased when compared to the results shown in Table 2. of the main text. This effect is quite significant in case of algorithm A, while its less pronounced for algorithm B and the method by P.~Heymann and H.~Garcia-Molina. \subsection{Further tests based on the ``hard'' parameter settings} \label{sect:test_hard} In similar fashion to Sect.\ref{sect:test_easy}, here we examine the effects of changing the parameters when we start from the ``hard'' parameter setting. As mentioned in the main paper, the main feature making this choice of parameters ``hard'' is that the frequency of tags is independent of the level depth in the hierarchy. Otherwise, the parameters of the data set discussed in Table 3.\ of the main text were the following: an average number of 3 co-occurring tags were generated on altogether 2,000,000 hypothetical objects, with random walk probability of $p_{\rm RW}=0.5$ and random walk lengths chosen from a uniform distribution between 1 and 3. Starting from this parameter setting, in Table \ref{table:rw_step_1_hard}.\ we show the results obtained when the random walk length is reduced to 1. For all 4 methods, we can observe a slight increase in the quality, however, no significant changes have occurred when comparing to Table 3.\ in the main text. 
\begin{table}[!ht] \caption{ \bf{Quality measures of the reconstructed hierarchies with random walk length of 1 step.}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 40\% & 40\% & 17\% & 43\%& 0\% & 21\% & 70\% \\ \hline algorithm B & 92\% & 93\% & 5\% & 2\% & 0\% & 84\% & 97\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 51\% & 55\% & 30\% & 15\%& 0\%& 28\% & 76\% \\ \hline P.~Schmitz & 4\% & 4\% & 0\% & 5\% & 91\% & 2\% & 18\% \\ \hline \end{tabular} \begin{flushleft} The setting of the other parameters were exactly the same as in case of the ``hard'' synthetic data discussed in the main paper. \end{flushleft} \label{table:rw_step_1_hard} \end{table} In Table \ref{table:rw_step_5_hard}.\ we show the results when the length of the random walks was chosen from a uniform distribution between 1 and 5, and the other parameters of the data set were left the same as in case of Table 3.\ in the main text. Interestingly, this time the quality measures have been lowered slightly, nevertheless, no significant change can be observed. In a similar fashion to Sect.\ref{sect:test_easy}, our conclusion is that the length of the random walk has no significant effect on the quality of the examined algorithms. \begin{table}[!ht] \caption{ \bf{ Quality measures of the reconstructed hierarchies with maximum random walk length of 5 step.}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 28\% &36\% & 29\% & 35\%& 0\% & 18\% & 66\% \\ \hline algorithm B & 85\% & 88\% & 10\% & 2\% & 0\% & 81\% & 96\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 46\% & 52\% & 34\% & 14\%& 0\%& 28\% & 76\% \\ \hline P.~Schmitz & 1\% & 1\% & 1\% & 4\% & 94\% & 1\% & 4\% \\ \hline \end{tabular} \begin{flushleft} The setting of the other parameters were exactly the same as in case of the ``hard'' synthetic data set discussed in the main paper. \end{flushleft} \label{table:rw_step_5_hard} \end{table} We continue our experiments by changing the number of virtual objects in the preparation of the data set. In Table \ref{table:200_obj_hard}.\ we show the results obtained when we generated only 200,000 virtual objects instead of 2,000,000, (and otherwise used the same parameters as in case of the ``hard'' synthetic data set). The quality measures for algorithm A, the method by P. Heymann \& H. Garcia-Molina and the algorithm by P. Schmitz remained almost the same, while the marks for algorithm B have been slightly reduced, (however, algorithm B is still far the best method on this data set). \begin{table}[!ht] \caption{ \bf{Quality measures of the reconstructed hierarchies with 200,000 hypothetical objects}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 31\% &36\% & 26\% & 38\%& 0\% & 18\% & 66\% \\ \hline algorithm B & 80\% & 86\% & 12\% & 2\% & 0\% & 76\% & 95\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 48\% & 54\% & 32\% & 14\%& 0\%& 29\% & 76\% \\ \hline P.~Schmitz & 1\% & 2\% & 0\% & 4\% & 94\% & 1\% & 5\% \\ \hline \end{tabular} \begin{flushleft} The setting of the other parameters were exactly the same as in case of the ``hard'' synthetic data set discussed in the main paper. 
\end{flushleft} \label{table:200_obj_hard} \end{table} In Table \ref{table:50_obj_hard}.\ we show the results obtained when the number of hypothetical objects were further reduced to 50,000. In this case the quality of algorithm A, the method by P. Heymann \& H. Garcia-Molina and the algorithm by P. Schmitz has slightly dropped, when compared to Table 3.\ in the main paper. The decrease in the quality is more pronounced in case of algorithm B, however, its results are still much better than that of the others. In conclusion, the lowering of the number of virtual objects affects most the result from algorithm B, nevertheless its quality was always significantly higher compared to the other methods. \begin{table}[!ht] \caption{ \bf{Quality measures of the reconstructed hierarchies with 50,000 hypothetical objects}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 29\% &36\% & 26\% & 39\%& 0\% & 17\% & 65\% \\ \hline algorithm B & 66\% & 74\% & 20\% & 6\% & 0\% & 55\% & 89\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 46\% & 53\% & 33\% & 14\%& 0\%& 28\% & 76\% \\ \hline P.~Schmitz & 1\% & 2\% & 0\% & 5\% & 93\% & 1\% & 6\% \\ \hline \end{tabular} \begin{flushleft} The setting of the other parameters were exactly the same as in case of the ``hard'' synthetic data set discussed in the main paper. \end{flushleft} \label{table:50_obj_hard} \end{table} Finally, in Table \ref{table:rw_prob_hard}.\ we examine the effects of lowering the random walk probability from $p_{\rm RW}=0.5$ to $p_{\rm RW}=0.1$,(while keeping the other parameters the same as in case of the ``hard'' synthetic data set). As mentioned in Sect.\ref{sect:test_easy}, this provides a more difficult task for the tag hierarchy extracting algorithms, as most of the tags are chosen at random on the objects. Accordingly, the quality measures are decreased when compared to the results shown in Table 3.\ of the main text. However, this effect is quite significant in case of algorithm A, while it is more mild for the method by P.\ Heymann \& H.\ Garcia-Molina, and is even less pronounced in case of algorithm B. \begin{table}[!ht] \caption{ \bf{Quality measures of the reconstructed hierarchies with random walk probability $p_{\rm RW}=0.1$.}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 10\% &12\% & 14\% & 74\%& 0\% & 5\% & 33\% \\ \hline algorithm B & 65\% & 71\% & 21\% & 8\% & 0\% & 61\% & 91\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 35\% & 36\% & 34\% & 30\%& 0\%& 18\% &66\% \\ \hline P.~Schmitz & 0\% & 0\% & 0\% & 7\% & 93\% & 0\% & 0\% \\ \hline \end{tabular} \begin{flushleft} The setting of the other parameters were exactly the same as in case of the ``hard'' synthetic data set discussed in the main paper. \end{flushleft} \label{table:rw_prob_hard} \end{table} Our general conclusion regarding the robustness of the examined algorithms is that no significant differences could be observed in our experiments on the synthetic data sets. Algorithm B showed more sensitivity to the number of virtual objects compared to algorithm A and the method by P.~Heymann \& H.~Garcia-Molina. In contrast, when reducing the random walk probability, algorithm B was found to be more robust compared to the other methods. 
\section*{Abstract}
Tagging items with descriptive annotations or keywords is a very natural way to compress and highlight information about the properties of the given entity. Over the years several methods have been proposed for extracting a hierarchy between the tags for systems with a ``flat'', egalitarian organization of the tags, which is very common when the tags correspond to free words given by numerous independent people. Here we present a complete framework for automated tag hierarchy extraction based on tag occurrence statistics. Along with proposing new algorithms, we are also introducing different quality measures enabling the detailed comparison of competing approaches from different aspects. Furthermore, we set up a synthetic, computer generated benchmark providing a versatile tool for testing, with a couple of tunable parameters capable of generating a wide range of test beds. Besides the computer generated input we also use real data in our studies, including a biological example with a pre-defined hierarchy between the tags. The encouraging similarity between the pre-defined and reconstructed hierarchies, as well as the seemingly meaningful hierarchies obtained for other real systems, indicates that tag hierarchy extraction is a very promising direction for further research with a great potential for practical applications.
Tags have become very prevalent nowadays in various online platforms ranging from blogs through scientific publications to protein databases. Furthermore, tagging systems dedicated to the voluntary tagging of photos, films, books, etc. with free words are also becoming popular. The emerging large collections of tags associated with different objects are often referred to as folksonomies, highlighting their collaborative origin and the ``flat'' organization of the tags as opposed to traditional hierarchical categorization. Adding a tag hierarchy corresponding to a given folksonomy can very effectively help in narrowing or broadening the scope of a search. Moreover, recommendation systems could also benefit from a tag hierarchy.
\section*{Introduction}
The appearance of tags in various online content has become very common, e.g., tags indicate the topic of news-portal feeds and blog posts, the genre of films or music records on file sharing portals, or the kind of goods offered in Web stores. By summarizing the most important properties of an entity in only a few words we ``compress'' information and provide a rough description of the given entity which can be processed very rapidly, (e.g., the user can decide whether the given post is of interest or not without actually reading it). The usage of tags, keywords, categories, etc., for helping the search and browsing amongst a large number of objects is a general idea that has been around for a long time in, e.g., scientific publications, library classification systems and biological classification. However, in these examples the tagging (categorization) of the involved entities is hierarchical, with a set of narrower or broader categories building up a tree-like structure composed of ``is a subcategory of'' type relations. In contrast, the nature of tags appearing in online systems is rather different: they can usually correspond to any free word relevant to the tagged item, and they are almost never organized into a pre-defined hierarchy of categories and sub-categories \cite{Mika_folk_and_ont,Spyns_folk_and_ont,Voss_cond_mat}.
Moreover, in some cases they originate from extensive collaboration as, e.g., in tagging systems like Flickr, CiteUlike or Delicious \cite{Cattuto_PNAS,Lambiotte_ct,Cattuto_PNAS2}, where unlimited number of users can tag photos, Web pages, etc., with free words. The arising set of free tags and associated objects are usually referred to as folksonomies, for emphasizing their collaborative nature. Since each tagging action is forming a new user-tag-object triple in these systems, their natural representation is given by tri-partite graphs, or in a more general framework by hypergraphs \cite{Lambiotte_ct,Newman_PRE,Caldarelli_PRE,Schoder_tags,Zhou_recommend_overview}, where the hyperedges connect more than two nodes together. One of the very interesting challenges related to systems with free tagging is extracting a hierarchy between the appearing tags. Although most tagging systems are intrinsically egalitarian, the way users think about objects presumably has some built in hierarchy, e.g., ``poodle'' is usually considered as a special case of ``dog''. By revealing this sort of hierarchy from, e.g., tag co-occurrence statistics, we can significantly help broadening or narrowing the scope of search in the system, give recommendation about yet unvisited objects to the user \cite{Kazienko_chapter,Kazienko_paper}, or help the categorization of newly appearing objects. Beside the high relevance for practical applications, this problem is interesting also from the theoretical point of view, as marked by several alternative approaches proposed in the recent years. P.~Heymann and H.~Garcia-Molina introduced a tag hierarchy extracting algorithm based on analyzing node centralities in a co-occurrence network between the tags \cite{Garcia-Molina}, where connections between tags indicate the appearance of the tags on the same objects simultaneously and link weights correspond to the frequency of co-occurrences. Another interesting approach was outlined by A.~Plangprasopchok and K.~Lerman \cite{Lerman_constr,Lerman_constr_2}, which can be applied to systems where users may define a shallow hierarchy for their own tags, and by agglomerating these shallow hierarchies we gain a global hierarchy between the tags. Further notable algorithms were given by P.~Schmitz \cite{Schmitz_constr}, using a probabilistic model and C.~Van~Damme et al. \cite{Van_Damme_constr}, integrating information from as many sources as possible. In this paper we introduce a detailed framework for tag hierarchy extraction. Our intended main contributions to this field here are represented by the development of a synthetic, computer generated benchmark system, and the introduction of quality measures for extracted hierarchies. The basic idea of the benchmark system is to simulate the tagging of virtual objects with tags based on a pre-defined input hierarchy between the tags. When applying a hierarchy extraction algorithm to the generated data, the obtained tag hierarchy can be compared to the original tag hierarchy used in the simulation. By changing the parameters of the simulations we can test various properties of the tag hierarchy extracting algorithm in a controlled way. The different quality measures we introduce can be used to evaluate the results of a tag hierarchy extracting algorithm when the exact hierarchy between the tags is also known, (as, e.g., in case of the synthetic benchmark). Furthermore, we also develop new hierarchy extraction methods, which are competitive with the state of the art current methods. 
These methods are tested on both the synthetic benchmark and on a couple of real systems as well. One of our data set contains proteins tagged with protein functions, where the extracted tag hierarchy can be compared to the protein function hierarchy of the Genome Ontology. The other real systems included in our study are given by tagged photos from the photo sharing platform Flickr and tagged movies from the Internet Movie Database (IMDb). In these cases, pre-defined ``exact'' tag hierarchies are not given, therefore, the outcomes of our hierarchy extraction algorithm can be evaluated only by visual inspection of smaller subgraphs in the obtained hierarchies. Luckily, as the tags correspond to English words in these systems, we can still get a good impression whether the obtained hierarchies are meaningful or not. Our tag hierarchy extraction methods are rooted in complex network theory. In the last 15 years the network approach has become an ubiquitous tool for analyzing complex systems \cite{Laci_revmod,Dorog_book}. Networks corresponding to realistic systems can be highly non-trivial, characterized by a low average distance combined with a high average clustering coefficient \cite{Watts-Strogatz}, anomalous degree distributions \cite{Faloutsos,Laci_science} and an intricate modular structure \cite{GN-pnas,CPM_nature,Fortunato_report}. The appearance of node tags is very common in e.g., biological networks,\cite{Mason_nets_in_bio,Zhu_nets_in_bio,Aittokallio_nets_in_bio,Finocchiaro_cancer,Jonsson_Bioinformatics,Jonsson_BMC}, where they usually refer to the biological function of the units represented by the nodes (proteins, genes, etc.). Node features are also fundamental ingredients in the so-called co-evolving network models, where the evolution of the network topology affects the node properties and vice versa \cite{Eguiluz_coevolv,Watts_science,Newman_coevolv,Vazquez_cond_mat,Kozma_coevolv,Castellano_coevolv}. Meanwhile, hierarchical organization is yet another very relevant concept in network theory \cite{Laci_hier_scale,Sneppen_hier_measures,Newman_hier,Pumain_book,Sole_chaos_hier,Enys_hierarchy,Sole_cond_mat}. As networks provide a sort of ``backbone'' description for systems in biology, physics, chemistry, sociology, etc., whenever the related system is hierarchical, naturally, the given network is likely to preserve this aspect to some degree. This is supported by several recent studies, focusing on the dominant-subordinate hierarchy among crayfish \cite{Huber_crayfish}, the leader-follower network of pigeon flocks \cite{Tamas_pigeons}, the rhesus macaque kingdoms \cite{McCowan_macaque}, the structure of the transcriptional regulatory network of Escherichia coli \cite{Zeng_Ecoli}, and on a wide range of social \cite{Guimera_hier_soc,our_pref_coms,Sole_hier_soc} and technological networks \cite{Pumain_book}. The two network based tag hierarchy extraction methods presented in this paper are both relying on the weighted network between the tags based on co-occurrence statistics. For the majority of the tags, the direct ancestor in the hierarchy is actually chosen from its neighbors in the network according to various delicate measures. \section*{Results} \subsection*{Algorithms} The reason for including both algorithm A and algorithm B in the paper is that algorithm A ``wins'' on the protein function data set, while algorithm B is better on the computer generated benchmarks and also seems to produce even more meaningful results in case of Flickr and IMDb. 
We made a free implementation of both methods available at http://hiertags.elte.hu.
\subsubsection*{Algorithm A}
The first stage corresponds to defining weighted links between the tags. Probably the most natural choice is given by the number of co-occurrences, (the number of objects tagged simultaneously by the given two tags). Since we are aiming at a directed network, (in which links are pointing from tags higher in the hierarchy towards descendants lower in the hierarchy), in this initial stage we actually assume two separate links pointing in opposite directions for every pair of co-occurring tags, (with both links having the same weight). In the next step we prune the network by throwing away a part of the links. Instead of applying a global threshold, for each tag $i$ we remove incoming links with a weight smaller than a fraction $\omega$ of the weight of the strongest incoming link on $i$. According to our tests on the protein function data set, the quality of the results was only slightly affected by changing $\omega$. (Our quality measures and the description of the data sets are given in forthcoming sections). Nevertheless, an optimal plateau was observed in the quality as a function of $\omega$ between $\omega=0.3$ and $\omega=0.55$, as discussed in detail in Sect.S1.1.2 in the Supporting Information. Thus, in the rest of the paper we show results obtained at $\omega=0.4$. After the complete link removal process has been finished, the direct ancestor of tag $i$ is chosen from the remaining in-neighbors as follows. We calculate the $z$-score for the co-occurrence with each in-neighbor individually, given by the difference between the number of observed co-occurrences and the number of expected co-occurrences at random, scaled by the standard deviation, (based on the tag frequencies, more details on the $z$-score are given in Methods). The in-neighbor $j$ with the highest $z$-score is usually identified as the direct ancestor, and all other incoming links are deleted on $i$. However, there is a very important exception to this rule: the case when the $i\rightarrow j$ link ``survived'' the thresholding of the incoming links on $j$. This means that $i$ happens to be also a candidate for the ancestor of $j$, and actually the two tags are more likely to be siblings. In this scenario we go down the list of remaining in-neighbors of $i$ in the order of the $z$-score, until we find a candidate $l$ for which the link $i\rightarrow l$ was already deleted, and identify $l$ as the ancestor of $i$. In case no such in-neighbor can be found, $i$ becomes a local root, with temporarily no incoming links. In the last phase of the algorithm we first choose a global root from the local ones according to the maximum entropy of their incoming link weight distribution: if the incoming link weights on $i$ are given by $w_{ij}$ with $\sum_jw_{ij}=W$, then the entropy can be written as $-\sum_j \frac{w_{ij}}{W}\ln \frac{w_{ij}}{W}$. The reasoning behind this choice is that a large entropy usually corresponds to a large number of direct descendants with a more or less uniform weight distribution. After the global root has been chosen, we go through the list of local roots in the order of their entropy, and link them under their partner with which they co-occur most frequently. (To avoid the formation of loops, we choose only from co-occurring partners located in another subtree).
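As an illustration of the entropy-based selection of the global root described above, a minimal Python sketch could look as follows. The dictionary-of-dictionaries storage of the (original, unpruned) incoming co-occurrence weights and the variable names are hypothetical; the sketch only reproduces the entropy formula and the maximization over the local roots.
\begin{verbatim}
from math import log

def weight_entropy(in_weights):
    """Entropy of the incoming link weight distribution {j: w_ij}."""
    total = sum(in_weights.values())
    return -sum((w / total) * log(w / total)
                for w in in_weights.values() if w > 0)

def choose_global_root(local_roots, incoming):
    """Pick the local root with maximal entropy of its incoming link weights.

    incoming -- dict: tag i -> dict {j: w_ij} of incoming co-occurrence weights
    """
    return max(local_roots, key=lambda i: weight_entropy(incoming[i]))
\end{verbatim}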
The result of the algorithm is a directed tree, since we assign one direct ancestor to every tag during the process, (except the global root), and we do not allow loops. The complexity of the algorithm can be estimated as $\mathcal{O}(Q)+\mathcal{O}(M \log M)$, where $Q$ denotes the number of objects, and $M$ stands for the number of links in the co-occurrence network between the tags. (The details and the pseudo code of the algorithm are given in Sect.S1.1.1 in the Supporting Information). \subsubsection*{Algorithm B} In case of algorithm B the weight of the links in the network between the tags is the same as in algorithm A, namely the number of objects the tags co-occurred on. However, instead of parallel directed links pointing in the opposite direction, here we consider only single undirected links. Similarly to algorithm A, in the second phase we remove a part of the links from the network. However, in this case we use the $z$-score between connected pairs as a threshold, i.e., if the $z$-score is below 10, the given link is thrown away. (The optimal value for the $z$-score threshold was set based on experiments on our synthetic benchmark, as detailed in Sect.S1.2.2 in the Supporting Information.) There is one exception to the above rule of thresholding: if a tag appears on more than half of the objects of the other tag, then the corresponding link is kept even if the $z$-score is low. Next, the eigenvector centrality is calculated for the tags based on the weighted undirected network remaining after the thresholding, and the tags are sorted according to their centrality value. The hierarchy is built from bottom up: starting from the tag with the lowest eigenvector centrality we choose the direct ancestor of the given tag from its remaining neighbors according to a couple of simple rules. First of all, the ancestor must have a higher centrality. The reasoning behind this is that the eigenvector centrality is analogous to PageRank. Thus, the centrality of a tag is high if it is connected to many other high centrality tags, and therefore, higher centrality values are likely to appear on more frequent and more general tags. In case the tag $i$ has more than one remaining neighbor with a higher centrality value, we choose the candidate which is the most related to $i$ and the set of tags already classified as a descendant of $i$. This is implemented by aggregating the $z$-score between the given candidate and the tags in the branch starting from $i$, (including $i$ as well), and selecting according to the highest aggregated $z$-score value. We note that this is a unique feature of the algorithm: by aggregating over the descendants of $i$ we are using more information compared to simple similarity measures, and hence, are more likely to choose the most related candidate as the parent of $i$. Since we iterate over the tags in reverse order according to their centrality value, and ancestors have always higher centralities compared to their descendants, no loops are formed during the procedure. The complexity of the method can be estimated as $\mathcal{O}(Q)+\mathcal{O}(N\cdot\ln N)$, where $Q$ stands for the number of objects and $N$ denotes the number of different tags. (The details and the pseudo code of the algorithm are given in Sect.S1.2.1 in the Supporting Information). 
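A strongly simplified sketch of the bottom-up construction in algorithm B is given below. It assumes that the thresholded undirected co-occurrence network, the eigenvector centralities and a pairwise $z$-score function are already available, and it omits the special cases discussed above (e.g., the treatment of tags appearing on more than half of the objects of another tag); it is meant as an illustration of the logic rather than the exact implementation.
\begin{verbatim}
def build_hierarchy_B(tags, neighbors, centrality, z_score):
    """Assign a direct ancestor to each tag, processing tags by increasing centrality.

    neighbors  -- dict: tag -> set of neighbors in the thresholded network
    centrality -- dict: tag -> eigenvector centrality
    z_score    -- function (tag_a, tag_b) -> z-score of their co-occurrence
    """
    parent = {}
    branch = {t: {t} for t in tags}   # each tag plus its descendants found so far
    for t in sorted(tags, key=lambda x: centrality[x]):
        candidates = [n for n in neighbors[t] if centrality[n] > centrality[t]]
        if not candidates:
            continue                  # t remains a (local) root
        # aggregate the z-score between each candidate and the whole branch of t
        best = max(candidates,
                   key=lambda c: sum(z_score(c, d) for d in branch[t]))
        parent[t] = best
        branch[best] |= branch[t]     # t and its descendants now belong to best
    return parent
\end{verbatim}
Since the tags are visited in increasing order of centrality and a parent always has a higher centrality than its descendants, the resulting ancestor assignment cannot contain loops, in line with the argument given above.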
\subsection*{Measuring the quality of the extracted tag hierarchy}
\subsubsection*{Simple quality measures}
Before actually discussing the results given by tag hierarchy extracting methods in different systems, we need to specify a couple of measures for quantifying the quality of the obtained hierarchies. The natural representation of a hierarchy is given by a directed acyclic graph (DAG), in which links are pointing from nodes at higher levels in the hierarchy towards other related nodes lower in the hierarchy. If the exact tag hierarchy is known, the problem is mapped onto measuring the similarity between the DAG obtained from the tag hierarchy extraction method, the ``reconstructed'' graph, $\mathcal{G}_{\rm r}$, and the exact DAG, $\mathcal{G}_{\rm e}$. A simple and natural idea is taking the ratio of exactly matching links in $\mathcal{G}_{\rm r}$, denoted by $r_{\rm E}$, as a primary indicator. In case $\mathcal{G}_{\rm r}$ has only a single connected component, $r_{\rm E}$ is simply given by the number of links also present in $\mathcal{G}_{\rm e}$, divided by the total number of links in $\mathcal{G}_{\rm r}$, denoted by $M_{\rm r}$. However, if $\mathcal{G}_{\rm r}$ contains only a few links with a vast number of isolated nodes, this sort of normalization can lead to an unrealistically high $r_{\rm E}$ value, in case the links happen to be exactly matching. Thus, in the general case we normalize the number of exactly matching links by $\max(N-1,M_{\rm r})$, where $N-1$ corresponds to the number of links needed for creating a tree between the $N$ tags. In a more tolerant approach we may also accept links between more distant ancestor-descendant pairs according to the exact hierarchy, (e.g., links pointing from ``grandparents'' to ``grandchildren''). Besides the ratio of acceptable links, $r_{\rm A}$, we can measure the ratio of links between unrelated tags, $r_{\rm U}$, as well, (these are pairs which are not connected by any directed path in $\mathcal{G}_{\rm e}$), and also the ratio of ``inverted'' links, $r_{\rm I}$, pointing in the opposite direction compared to $\mathcal{G}_{\rm e}$, or connecting more distant ancestor-descendant pairs in the wrong direction. Furthermore, when $M_{\rm r}<N-1$, the ratio of missing links from $\mathcal{G}_{\rm r}$, denoted by $r_{\rm M}$, is another important indicator of the effectiveness of the algorithm. (If $\mathcal{G}_{\rm r}$ is composed of only a single component, $r_{\rm M}$ is 0 by definition.) Similarly to $r_{\rm E}$, all quality indicators introduced so far are normalized by $\max(N-1,M_{\rm r})$. These measures are not completely independent of each other, i.e., the ratio of acceptable links is always larger than or equal to the ratio of exactly matching links, $r_{\rm A}\geq r_{\rm E}$, and also $r_{\rm A}+r_{\rm I}+r_{\rm U}+r_{\rm M}=1$.
\subsubsection*{Normalized mutual information between hierarchies}
A somewhat more elaborate approach to measuring the quality of the reconstructed hierarchy can be given by the normalized mutual information, (NMI), introduced originally in information theory for measuring the mutual dependence of two random variables \cite{Kuncheva_mutinfo,Fred_mutinfo}. (The definition of the NMI in general is given in Methods). A very important application of the NMI is related to the problem of comparing different partitionings of the same graph into communities \cite{Danon_mutinfo,Lancichinetti_mutinfo}.
The advantage of the NMI approach when comparing hierarchies is that the resulting similarity measure is sensitive not only to the number of non-matching links, but also to the position of these links in the hierarchies. In other words, the change in the similarity is different for rewiring a link pointing to a leaf and for rewiring a link higher in the hierarchy. When judging the similarity between two hierarchies, a natural idea is to compare the sets of descendants for each tag in the corresponding DAGs. E.g., if the set of descendants of tag $i$ is $D_{\rm e}(i)$ in the exact hierarchy and $D_{\rm r}(i)$ in the reconstructed one, then the number of tags in the intersection of these two sets is given by $\left|D_{\rm e}(i)\cap D_{\rm r}(i)\right|$. Roughly speaking, the higher the value of this quantity over all tags, the higher the similarity between the two hierarchies. To build a similarity measure from this concept in the spirit of the NMI, first we define $p_{\rm e}(i)=\left| D_{\rm e}(i)\right|/(N-1)$ as the probability for picking a tag from the descendants of $i$ at random in the exact hierarchy, where $N$ denotes the total number of tags in ${\mathcal G}_{\rm e}$. (Since the tag $i$ is not included in $D_{\rm e}(i)$, the possible maximum value for $\left|D_{\rm e}(i)\right|$ is $N-1$). Similarly, the probability for choosing a tag from the descendants of $i$ at random in ${\mathcal G}_{\rm r}$ is given by $p_{\rm r}(i)=\left| D_{\rm r}(i)\right|/(N-1)$, while the probability for picking a tag from the intersection between the descendants of $i$ in the two hierarchies can be written as $p_{\rm e,r}(i)=\left| D_{\rm e}(i)\cap D_{\rm r}(i)\right|/(N-1)$. Based on this, the NMI between the exact- and reconstructed hierarchies can be formulated as
\begin{equation}
I_{\rm e,r}=-\frac{2\sum\limits_{i=1}^Np_{\rm e,r}(i)\ln\left(\frac{p_{\rm e,r}(i)}{p_{\rm e}(i)p_{\rm r}(i)}\right)}{\sum\limits_{i=1}^Np_{\rm e}(i)\ln p_{\rm e}(i)+\sum\limits_{i=1}^Np_{\rm r}(i)\ln p_{\rm r}(i)}=-\frac{2\sum\limits_{i=1}^N \left|D_{\rm e}(i)\cap D_{\rm r}(i)\right|\ln \left(\frac{\left|D_{\rm e}(i)\cap D_{\rm r}(i)\right| (N-1)}{\left|D_{\rm e}(i)\right|\cdot \left| D_{\rm r}(i)\right|}\right)}{ \sum\limits_{i=1}^N\left|D_{\rm e}(i)\right| \ln \left(\frac{\left|D_{\rm e}(i)\right|}{N-1}\right)+\sum\limits_{i=1}^N \left|D_{\rm r}(i)\right|\ln \left(\frac{\left|D_{\rm r}(i)\right|}{N-1}\right)} .
\label{eq:NMI_DAG}
\end{equation}
This measure is 1 if and only if ${\mathcal G}_{\rm e}$ and ${\mathcal G}_{\rm r}$ are identical, and is 0 if the intersections between the corresponding branches in the two hierarchies are of the same magnitude as we would expect at random, or in other words, if ${\mathcal G}_{\rm e}$ and ${\mathcal G}_{\rm r}$ are independent. The similarity defined in the above way is very closely related to the NMI used in community detection \cite{Danon_mutinfo,Lancichinetti_mutinfo}; the analogy between the two quantities can be made explicit by an appropriate mapping from the hierarchy between the tags to a partitioning of the tags, (further details are given in Sect.S2.1 in the Supporting Information).
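To make the evaluation of (\ref{eq:NMI_DAG}) explicit, a minimal Python sketch computing $I_{\rm e,r}$ directly from the descendant sets of the two hierarchies could look as follows. The descendant sets are assumed to be pre-computed (e.g., by a traversal of the two DAGs) and stored in dictionaries keyed by the tags; this is only an illustration of the formula, not the implementation used for the results reported here.
\begin{verbatim}
from math import log

def nmi_hierarchies(desc_e, desc_r, n_tags):
    """NMI between two hierarchies given as dicts: tag -> set of descendants."""
    num, den = 0.0, 0.0
    for i in desc_e:
        d_e = desc_e[i]
        d_r = desc_r.get(i, set())
        inter = len(d_e & d_r)
        if inter > 0:
            num += inter * log(inter * (n_tags - 1) / (len(d_e) * len(d_r)))
        if len(d_e) > 0:
            den += len(d_e) * log(len(d_e) / (n_tags - 1))
        if len(d_r) > 0:
            den += len(d_r) * log(len(d_r) / (n_tags - 1))
    return -2.0 * num / den if den != 0 else 0.0
\end{verbatim}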
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.5\textwidth]{Fig_01} \end{center} \caption{ {\bf Using the normalized mutual information (NMI) for measuring the similarity between hierarchies.} We tested the behavior of the NMI by applying (\ref{eq:NMI_DAG}) to a binary tree of 1,023 nodes, ${\mathcal G}_{\rm b}$, and its randomized counter part, ${\mathcal G}_{\rm rand}$, obtained by rewiring the links at random, as shown in the illustration at the top. The decay of the obtained NMI is shown in the bottom panel as a function of the fraction of the rewired links, $f$. The three different curves correspond to rewiring the links in reverse order according to their position in the hierarchy (purple circles), rewiring in random order (blue squares) and rewiring in the order of the position in the hierarchy (green triangles). The concept of the linearized mutual information (LMI) for the general tag hierarchy reconstruction problem is illustrated in red: By projecting the measured $I_{\rm e,r}$ value onto the $f$ axis via the blue curve we obtain $f^*$, giving the fraction of rewired links in a randomization process with the same NMI value. The LMI is equal to $I_{\rm lin}=1-f^*$, corresponding to the fraction of unchanged links. \label{fig:NMI}} \end{figure} We examined the behavior of the NMI given in (\ref{eq:NMI_DAG}) by taking a binary tree of 1,023 nodes, $\mathcal{G}_{\rm b}$, and comparing it to its randomized counterpart, $\mathcal{G}_{\rm rand}$, obtained by rewiring a fraction of $f$ links to a random location. In Fig.\ref{fig:NMI}. we show the measured NMI as a function of $f$. If we start the rewiring with links pointing to leafs, and continue according to the reverse order in the hierarchy, the NMI shows a close to linear decay as a function of $f$ almost in the entire $[0,1]$ interval (purple circles). However, if links are chosen in random order, $I_{\rm b,rand}$ is decreasing much faster in the small $f$ region, with an overall non-linear $f$ dependency (blue squares). An even steeper decay can be observed when links are chosen in the order of their position in the hierarchy (green triangles). Nevertheless, $I_{\rm b,rand}\rightarrow 0$ when $f\rightarrow 1$ in all cases, thus, the similarity defined in this way is vanishing for a pair of independent DAGs. Meanwhile, the significant difference between the three curves displayed in Fig.\ref{fig:NMI}c shows that the NMI is sensitive also to the position of the rewired links in the hierarchy: rewiring the top levels of the hierarchy is accompanied by a drastic drop in the similarity, while changes at the bottom of the hierarchy cause only a minor decrease, which is linear in the fraction of rewired links. This non-trivial feature of the NMI allows the introduction of another interesting quality measure for a reconstructed hierarchy. Supposing a similar randomization procedure on ${\mathcal G}_{\rm e}$ as shown in Fig.\ref{fig:NMI}, we may ask what fraction of links has to be rewired on average for reaching the same NMI as ${\mathcal G}_{\rm r}$? The formal definition of this measure is given as follows. Let $I(f)$ denote the average NMI obtained for a fraction of $f$ randomly rewired links, where the links are chosen in random order, $I(f)\equiv \left< I_{\rm e,rand}\right>_{f}$. 
By projecting the NMI between the exact- and reconstructed hierarchies, $I_{\rm e,r}$, to the $f$ axis using this function as \begin{equation} f^*= I^{-1}(I_{\rm e,r}), \end{equation} we receive the fraction of randomly chosen links to be rewired in ${\mathcal G}_{\rm e}$ for obtaining a randomized hierarchy with the same NMI as ${\mathcal G}_{\rm r}$, (see Fig.\ref{fig:NMI} for illustration). Based on that we define the linearized mutual information, (LMI) as \begin{equation} I_{\rm lin}=1-f^*=1-I^{-1}(I_{\rm e,r}). \label{eq:LMI} \end{equation} This quality measure corresponds to the fraction of unchanged links in a random link rewiring process, resulting in a hierarchy with the same NMI as ${\mathcal G}_{\rm r}$. (The reason for calling it ``linearized'' is that (\ref{eq:LMI}) is actually projecting $I_{\rm e,r}$ to the linear $1-f$ curve). By comparing the LMI to the fraction of exactly matching links, $r_{\rm e}$, we gain further information on the nature of the reconstructed DAG: If $I_{\rm lin}$ is significantly larger than $r_{\rm e}$, the reconstructed DAG is presumably better for the links high in the hierarchy, whereas if $I_{\rm lin}$ is significantly lower than $r_{\rm e}$, the reconstructed DAG is more precise for links close to the leafs. \subsection*{Real tagging systems} \subsubsection*{Reconstructing the hierarchy of protein functions} Although the primary targets of tag hierarchy extraction methods are given by tagging systems with no pre-defined hierarchy between the tags, for testing the quality of the extracted hierarchy we need input data for which the exact hierarchy is also given. A very important real tag hierarchy is provided by protein functions as described in the Gene Ontology \cite{GO}, organizing function annotations into three separate DAGs corresponding to ``biological process'', ``molecular function'' and ``cellular component'' oriented description of proteins. The corresponding input data for a tag hierarchy extraction algorithm would be a collection of proteins, each tagged by its function annotations. Luckily, the Gene Ontology provides also a regularly updated large data set enlisting proteins and their known functions aggregated from a wide range of sources, (a more detailed description of the data set we used is given in Materials and Methods). \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\textwidth]{Fig_02} \end{center} \caption{ {\bf Comparison between the exact hierarchy and the reconstructed hierarchy obtained from algorithm A.} a) A subgraph in the hierarchy of protein functions, (describing molecular functions), according to the Gene Ontology, treated as the exact hierarchy, ${\mathcal G}_{\rm e}$. b) The hierarchy between the same tags obtained from running algorithm A on the tagged protein data set, (the reconstructed hierarchy, ${\mathcal G}_{\rm r}$). The exactly matching- and acceptable links are colored green and orange respectively, the unrelated links are shown in red, while the missing links are colored gray. c) The list of included protein functions in panels (a) and (b). \label{fig:GO_DAGs} } \end{figure} In Fig.\ref{fig:GO_DAGs}a we show a smaller subgraph from the hierarchy between molecular functions given in the Gene Ontology, ${\mathcal G}_{\rm e}$, together with the subgraph between the same tags in the result obtained by running our algorithm A on the tagged protein data set, ${\mathcal G}_{\rm r}$, displayed in Fig.\ref{fig:GO_DAGs}b. 
The matching between the two subgraphs is very good: the majority of the connections are either exactly the same (shown in green), or acceptable (shown in orange), bypassing levels in the hierarchy and, e.g., connecting ``grandchildren'' to ``grandparents''. The few unrelated and missing links that appear are colored red and gray, respectively. The quality measures obtained for the complete reconstructed hierarchy are given in Table \ref{table:results}. For comparison we also evaluated the same measures for algorithm B, the algorithm by P.~Heymann and H.~Garcia-Molina, and the algorithm by P.~Schmitz. According to the results all 4 methods perform rather well; however, algorithm A seems to achieve the best scores. Although the ratio of exactly matching links is $r_{\rm E}=21\%$, (which is not very high), the ratio of acceptable links reaches $r_{\rm A}=66\%$, which is very promising. The NMI given by (\ref{eq:NMI_DAG}) is $I_{\rm e,r}=35\%$, however, the LMI according to (\ref{eq:LMI}) is $I_{\rm lin}=78\%$. (The corresponding plot showing the decay of the NMI between the Gene Ontology hierarchy and its randomized counterpart is given in Sect.S2.2 in the Supporting Information). Thus, the similarity between our reconstructed hierarchy and the hierarchy from the Gene Ontology is so high that if we randomized the Gene Ontology, (by rewiring the links in random order), the same NMI value would already be reached after rewiring 22\% of the links. The large difference between $I_{\rm lin}$ and $r_{\rm E}$ in favor of $I_{\rm lin}$ indicates that our algorithm is better at predicting links higher in the hierarchy. E.g., in a randomization with random link rewiring order keeping only $r_{\rm E}=21\%$ of the links unchanged, the NMI would be around $2\%$ instead of the actually measured $I_{\rm e,r}=35\%$. The reason why $I_{\rm e,r}$ can stay relatively high for the reconstructed hierarchy is that the majority of the non-matching links are low in the hierarchy and therefore have a smaller effect on the NMI.
\begin{table}[!ht]
\caption{ \bf{Quality measures for the reconstructed hierarchies in case of the protein function data set}}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
 & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline
algorithm A & 21\% &66\% & 2\% & 32\%& 0\% & 35\% & 78\% \\ \hline
algorithm B & 20\% & 52\% & 3\% & 44\% & 1\% & 30\% & 75\% \\ \hline
P.~Heymann \& H.~Garcia-Molina & 19\% & 51\% & 3\% & 46\%& 0\%& 30\% & 75\% \\ \hline
P.~Schmitz & 18\% & 65\% & 2\% & 23\% & 10\% & 30\% & 75\% \\ \hline
\end{tabular}
\begin{flushleft} The quality of the tag hierarchy obtained for the tagged protein data set, ${\mathcal G}_{\rm r}$, was evaluated by comparing it to the hierarchy of protein functions in the Gene Ontology, ${\mathcal G}_{\rm e}$.
The quality measures presented in the different columns are the following: the ratio of exactly matching links in ${\mathcal G}_{\rm r}$,denoted by $r_{\rm E}$, the ratio of acceptable links, $r_{\rm A}$, (connecting more distant ancestor-descendant pairs), the ratio of inverted links, $r_{\rm I}$, (pointing in the opposite direction), the ratio of unrelated links, $r_{\rm U}$, (connecting tags on different branches in ${\mathcal G}_{\rm e}$), the ratio of missing links in ${\mathcal G}_{\rm e}$, denoted by $r_{\rm M}$, the normalized mutual information between the two hierarchies, $I_{\rm e,r}$, and the linearized mutual information, $I_{\rm lin}$, corresponding to the fraction of exactly matching links remaining after a random link rewiring process stopped at NMI value given by $I_{\rm e,r}$. The different rows correspond to results obtained from algorithm A (1$^{\rm st}$ row), algorithm B (2$^{\rm nd}$ row),the method by P.~Heymann \& H.~Garcia-Molina (3$^{\rm d}$ row), and the algorithm by P.~Schmitz (4$^{\rm th}$ row). \end{flushleft} \label{table:results} \end{table} \subsubsection*{Hierarchy of Flickr tags} One of the most widely known tagging systems is given by Flickr, an online photo management and sharing application, where users can tag the uploaded photos with free words. Since the tags are not organized into a global hierarchy, this system provides an essential example for the application field of tag hierarchy extracting algorithms. We have run our algorithm B on a relatively large, filtered sample of photos, (the details of the construction of our data set are given in Methods). Although the ``exact'' hierarchy between the tags is not known in this case, since the tags correspond to English words, we can still give a qualitative evaluation of the result just by looking at smaller subgraphs in the extracted hierarchy. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\textwidth]{Fig_03} \end{center} \begin{flushleft} \caption{ {\bf Subgraph from the hierarchy between Flickr tags.} By running our algorithm B on a filtered sample from Flickr, we obtained a hierarchy between the tags appearing on the photos in the sample. Since the total number of tags in our data reached 25,441, here we show only a smaller subgraph from the result, corresponding to a part of the tags categorized under ``reptile''. Stubs correspond to further direct descendants not shown in the figure, and the size of the nodes indicate the total number of descendants on a logarithmic scale, (e.g., ``prairie rattlesnake'' has none, while ``snake'' has altogether 110.). \label{fig:Flickr} } \end{flushleft} \end{figure} An example is given in Fig.\ref{fig:Flickr}., showing a few descendants of the tag ``reptile'' in our reconstruction. Most important direct descendants are ``snake'', ``lizard'', ``alligator'' and ``turtle''. The tags under these main categories seem to be correctly classified, e.g., ``alligator snapping turtle'' is under ``turtle'', (instead of the also related ``alligator''). Interestingly, Latin names (binomial names) from the taxonomy of ``reptilia'' form a further individual branch under ``reptile'', however, occasionally we can also see binomial names directly connected to the corresponding English name of the given species. More examples from our result on the Flickr data are given in Sect.S3.1 in the Supporting Information, which taken together with Fig.\ref{fig:Flickr} give an overall impression of a meaningful hierarchy, following the ``common sense'' by and large. 
(Furthermore, similar samples from the hierarchies extracted by the other methods are also given in Sect.S3.2 in the Supporting Information.) \subsubsection*{Hierarchy of IMDb tags} Another widely known online database is given by the IMDb, providing detailed information related to films, television programs and video games. One of the features relevant from the point of view of our research is that keywords related to the genre, content, subject, scenes, and basically any relevant feature of the movies are also available. These can be treated similarly to the Flickr tags, i.e., they are corresponding to English words, which are not organized into a hierarchy. In Fig.\ref{fig:IMDb}. we show results obtained by running Algorithm B on a relatively large, filtered sample of tagged movies. (The details of the construction of the data set are given in Methods). Similarly to the Flickr data, we display a smaller part of the branch under the tag ``murder'' in the extracted hierarchy. Most important direct descendants are corresponding to ``death'', ``prison'' and ``investigation'', with ``blood'', ``suspect'' and ``police detective'' appearing on lower levels of the hierarchy. Although the tags appearing in the different sub-branches are all related to their parents, the quality of the Flickr hierarchy seemed a bit better. This may be due to the fact that keywords can pertain to any part of the movies, and hence, the tags on a single movie can already be very diverse, providing a more difficult input data set for tag hierarchy extraction. Nevertheless, this result reassures our statement related to the Flickr data, namely that the hierarchies obtained from our algorithm have a meaningful overall impression. (Similar samples from the hierarchies obtained with the other methods are shown in Sect.S3.2 in the Supporting Information.) \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\textwidth]{Fig_04} \end{center} \caption{{\bf Subgraph from the hierarchy between IMDb tags.} The results were obtained by running Algorithm B on a filtered sample of films from IMDb, tagged by keywords describing the content of the movies. Here we show only a smaller subgraph between the descendants of ``murder'', where stubs correspond to further direct descendants not shown in the figure, and the size of the nodes indicate the total number of descendants on a logarithmic scale. \label{fig:IMDb} } \end{figure} \subsection*{Synthetic benchmark based on random walks} \subsubsection*{Defining the benchmark system} Providing adjustable benchmarks is very important when testing and comparing algorithms. The basic idea of a benchmark in general is given by a system, where the ground truth about the object of search is also known. However, for most real systems this sort of information is not available, therefore, synthetic benchmarks are constructed. E.g., community finding is one of the very intensively studied area of complex network research, with an enormous number of different community finding algorithms available \cite{Fortunato_report}. Since the ground truth communities are known only for a couple of small networks, the testing is usually carried out on the LFR benchmark \cite{LFR_benchmark}, which is a purely synthetic, computer generated benchmark: the communities are pre-defined, and the links building up the network are generated at random, with linking probabilities taking into account the community structure. 
The drawback of such synthetic test data is its artificial nature, however, the benefit on the other side is the freedom of the choice of the parameters, enabling the variance of the test conditions on a much larger scale compared to real systems. Here we propose a similar synthetic benchmark system for testing tag hierarchy extraction algorithms. The basic idea is to start from a given pre-defined hierarchy, (the ``exact'' hierarchy), and generate collections of tags at random, (corresponding to tagged objects in a real system), based on this hierarchy. The tag hierarchy extraction methods to be tested can be run on these sets of tags, and the obtained hierarchies, (the "reconstructed" hierarchies), can be compared to the exact hierarchy used when generating the synthetic data. When drawing an analogy between this system and the LFR benchmark, our pre-defined hierarchy is corresponding to the pre-defined community structure in the LFR benchmark, while the generated collections of tags are corresponding to the random networks generated according to the communities. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.5\textwidth]{Fig_05} \end{center} \caption{ {\bf Generating tags on virtual objects by random walks on the hierarchy.} The objects in this approach are represented simply by collections of tags. For a given collection, the first tag is picked at random, (illustrated in red), while the rest of the tags are obtained by implementing a short undirected random walk on the DAG, starting from the first tag, (illustrated in purple). \label{fig:random_walk} } \end{figure} To make the above idea of a synthetic tagging system work in practice, we have to specify the method for generating the random collections of tags based on the given pre-defined hierarchy. In general, the basic idea is that tags more closely related to each other according to the hierarchy should appear together with a larger probability compared to unrelated tags. To implement this, we have chosen a random walk approach as suggested in \cite{our_ontology}. The first tag in each collection is chosen at random. For the rest of the tags in the same collection, with probability $p_{\rm RW}$ we start a short undirected random walk on the hierarchy starting from the first tag, and choose the endpoint of the random walk, or with probability $1-p_{\rm RW}$ we again choose at random. An illustration of this process is given in Fig.\ref{fig:random_walk}, (a brief pseudo-code of the data generation algorithm is given in Algorithm S4. in the Supporting Information). The parameters of the benchmark are the following: the pre-defined hierarchy between the tags, the frequency of the tags when choosing at random, the probability $p_{\rm RW}$ for generating the second and further tags by random walk, the length of the random walks, the number of objects and finally, the distribution of the number of tags per object. Although this is a long list of parameters, the quality of the reconstructed hierarchy is not equally sensitive to all of them. E.g., according to our experiments change in the topology of the exact hierarchy, or in the length of the random walk have only a minor effect, while the distribution of the tag frequencies seems to play a very important role. \subsubsection*{Results on synthetic data} In Table \ref{table:comp_gen_1}. we show the tag hierarchy extraction results obtained on synthetic data generated by using our random walk based benchmark system. 
In the data generation process the exact hierarchy was set to a binary tree of 1,023 tags, with tag frequencies decreasing linearly as a function of the depth in the hierarchy. We generated an average number of 3 co-occurring tags on altogether 2,000,000 hypothetical objects, with random walk probability of $p_{\rm RW}=0.5$ and random walk lengths chosen from a uniform distribution between 1 and 3. We ran the same algorithms on the obtained data as in the case of the protein data set, and used the same measures for evaluating the quality of the results. According to Table \ref{table:comp_gen_1}., the majority of the algorithms perform very well, e.g., algorithm B and the algorithm by P.~Heymann \& H.~Garcia-Molina produce almost perfect reconstructions, thus, this example is an ``easy'' data set. Interestingly, the results of the algorithm by Schmitz were very poor on this input. Nevertheless, this method is still competitive with the others, e.g., it showed a quite good performance on the protein data set. However, studying why this algorithm behaves completely differently from the others on our benchmark is outside the scope of the present work. \begin{table}[!ht] \caption{ \bf{Quality measures of the reconstructed hierarchies for the ``easy'' synthetic data set.}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 67\% &100\% & 0\% & 0\%& 0\% & 91\% & 99\% \\ \hline algorithm B & 100\% & 100\% & 0\% & 0\% & 0\% & 100\% & 100\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 99\% & 100\% & 0\% & 0\%& 0\%& 93\% & 99\% \\ \hline P.~Schmitz & 0\% & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% \\ \hline \end{tabular} \begin{flushleft} When generating the data set, the frequency of the initial tags was decreasing linearly as a function of the level depth in the exact hierarchy. We show the same quality measures as in Table \ref{table:results}.: the ratio of exactly matching links, $r_{\rm E}$, the ratio of acceptable links, $r_{\rm A}$, the ratio of inverted links, $r_{\rm I}$, the ratio of unrelated links, $r_{\rm U}$, the ratio of missing links, $r_{\rm M}$, the normalized mutual information between the exact- and the reconstructed hierarchies, $I_{\rm e,r}$, and the linearized mutual information, $I_{\rm lin}$. The different rows correspond to results obtained from algorithm A, (1$^{\rm st}$ row), algorithm B, (2$^{\rm nd}$ row), the method by P.~Heymann \& H.~Garcia-Molina (3$^{\rm rd}$ row), and the algorithm by P.~Schmitz (4$^{\rm th}$ row). \end{flushleft} \label{table:comp_gen_1} \end{table} The ``easy'' synthetic data discussed above can be turned into a ``hard'' one by changing the frequency distribution of the tags. In Table \ref{table:comp_gen_2}. we show the results obtained when the tag frequencies were independent of the level depth in the hierarchy, and had a power-law distribution, with the other parameters of the benchmark left unchanged. According to the studied quality measures, the performance of the involved methods drops drastically compared to Table \ref{table:comp_gen_1}. However, algorithm B provides an exception in this case, achieving quite good results even for this ``hard'' test data. E.g., the NMI value is still $I_{\rm e,r}=0.83$ for our algorithm, while for e.g., the algorithm by P. Heymann \& H. Garcia-Molina it is reduced to $I_{\rm e,r}=0.29$. 
Moreover, the fraction of exactly matching links is almost 90\% for algorithm B, while it is below 50\% for the algorithm by P. Heymann \& H. Garcia-Molina. This shows that algorithm B can have a significantly better performance compared to other algorithms, as the quality of its output is less dependent on the correlation between tag frequencies and level depth in the hierarchy. Another interesting effect in Table \ref{table:comp_gen_2}. is that the results for the algorithm by Schmitz are slightly better compared to the ``easy'' data set. As we mentioned earlier, the study of the reasons for the outlying behavior of this algorithm on our benchmark compared to the other methods is left for future work. \begin{table}[!ht] \caption{ \bf{Quality measures of the reconstructed hierarchies for the ``hard'' synthetic data set.}} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & $r_{\rm E}$ & $r_{\rm A}$ & $r_{\rm I}$ & $r_{\rm U}$ & $r_{\rm M}$ & $I_{\rm e,r}$ & $I_{\rm lin}$ \\ \hline algorithm A & 31\% &35\% & 18\% & 47\%& 0\% & 18\% & 66\% \\ \hline algorithm B & 89\% & 91\% & 6\% & 3\% & 0\% & 83\% & 97\% \\ \hline P.~Heymann \& H.~Garcia-Molina & 48\% & 54\% & 29\% & 17\%& 0\%& 29\% & 76\% \\ \hline P.~Schmitz & 1\% & 2\% & 1\% & 3\% & 94\% & 1\% & 5\% \\ \hline \end{tabular} \begin{flushleft} In this case the frequency of the initial tags was independent of their position in the exact hierarchy during the benchmark generation, and the frequency distribution followed a power-law. This change compared to the data set used in Table \ref{table:comp_gen_1}. results in a significant decrease in the quality measures for most of the involved methods, as shown by the ratio of acceptable links, $r_{\rm A}$, the ratio of inverted links, $r_{\rm I}$, the ratio of unrelated links, $r_{\rm U}$, the ratio of missing links, $r_{\rm M}$, the normalized mutual information between the exact- and the reconstructed hierarchies, $I_{\rm e,r}$, and the linearized mutual information, $I_{\rm lin}$. The different rows correspond to results obtained from algorithm A, (1$^{\rm st}$ row), algorithm B, (2$^{\rm nd}$ row), the method by P.~Heymann \& H.~Garcia-Molina (3$^{\rm rd}$ row), and the algorithm by P.~Schmitz (4$^{\rm th}$ row). \end{flushleft} \label{table:comp_gen_2} \end{table} The effects of the modifications in the other parameters of the benchmark are discussed in Sect.S4.2-S4.3 in the Supporting Information. Nevertheless, these results already show that the provided framework can serve as a versatile test tool for tag hierarchy extraction methods. \section*{Methods} \subsection*{$z$-score} Both algorithms introduced in the paper depend on the $z$-score related to the number of co-occurrences between a pair of tags. If the tags are assigned to the objects completely at random, the distribution of the number of co-occurrences for a given pair of tags $i$ and $j$ follows the hypergeometric distribution: Assuming that tags $i$ and $j$ appear altogether on $Q_i$ and $Q_j$ objects, respectively, let us consider the random assignment of tag $i$ among a total number of $Q$ objects. This is equivalent to drawing $Q_i$ times from the objects without replacement, where the ``successful'' draws correspond to objects also having tag $j$, (and the total number of such objects is $Q_j$). 
Based on this, the probability for observing a given $Q_{ij}$ number of co-occurrences between $i$ and $j$ is \begin{equation} P(Q_{ij}=k)=\frac{\binom{Q_j}{k}\binom{Q-Q_j}{Q_i-k}}{\binom{Q}{Q_i}}, \end{equation} with the expected number of co-appearances given by \begin{equation} \left< Q_{ij}\right>=\frac{Q_{i}Q_{j}}{Q}, \label{eq:exp_rand} \end{equation} and the variance formulated as \begin{equation} \sigma^2(Q_{ij})=\frac{Q_iQ_j}{Q}\frac{Q-Q_i}{Q}\frac{Q-Q_j}{Q-1}. \label{eq:var} \end{equation} The $z$-score is defined as the difference between the observed number of co-occurrences in the data, $Q_{ij}$, and the expected number of co-occurrences at random as given in (\ref{eq:exp_rand}), scaled by the standard deviation according to (\ref{eq:var}), \begin{equation} z_{ij}=\frac{Q_{ij}-\left< Q_{ij}\right>}{\sigma(Q_{ij})}. \end{equation} \subsection*{Normalized mutual information} For discrete variables $x_i$ and $y_j$ with a joint probability distribution given by $p(x_i,y_j)$, the mutual information is defined as \begin{equation} I(x,y)\equiv \sum_{i}\sum_{j} p(x_i,y_j)\ln \left(\frac{p(x_i,y_j)}{p(x_i)p(y_j)}\right), \label{eq:mutinfo} \end{equation} where $p(x_i)$ and $p(y_j)$ denote the (marginal) probability distributions of $x_i$ and $y_j$ respectively. If the two variables are independent, $p(x_i,y_j)=p(x_i)p(y_j)$, thus, $I(x,y)$ becomes $0$. The above quantity is very closely related to the entropy of the random variables, \begin{equation} I(x,y)=H(x)+H(y)-H(x,y), \label{eq:mutinfo_entropy} \end{equation} where $H(x)=-\sum_ip(x_i)\ln p(x_i)$ and $H(y)=-\sum_j p(y_j)\ln p(y_j)$ correspond to the entropy of $x$ and $y$, while $H(x,y)=-\sum_{ij}p(x_i,y_j)\ln p(x_i,y_j)$ denotes the joint entropy. Based on (\ref{eq:mutinfo_entropy}), the NMI can be defined as \begin{equation} I_{\rm norm}(x,y)\equiv \frac{2 I(x,y)}{H(x)+H(y)}. \label{eq:NMI_alap} \end{equation} This way the NMI is 1 if and only if $x$ and $y$ are identical, and 0 if they are independent. \subsection*{Data} \subsubsection*{Protein data} Both the exact DAG describing the hierarchy between protein functions and the corresponding input data set given by proteins tagged with known function annotations were taken from the Gene Ontology \cite{GO}. The hierarchy of protein function is composed of three separate DAGs, corresponding to ``molecular function'', ``biological process'' and ``cellular component''. We concentrated on molecular functions, where the complete DAG has altogether 6,469 tags. However, a considerable part of these annotations is rather rare, thus, reconstructing the complete hierarchy would be a very hard task due to the lack of information. Therefore, we took a smaller subgraph, namely the branch starting from ``catalytic activity'', counting 4,181 tags, most of which are relatively frequent. For the data set of proteins, tagged with their known molecular function annotations, we took the monthly (quality controlled) release as of 2012.08.01. For simplicity, we neglected proteins lacking any tags appearing in the exact hierarchy, and deleted all annotations which are not descendants of ``catalytic activity''. The resulting smaller data set contained 5,913,610 proteins, each having on average 3.7 tags. This data set, (together with the corresponding exact DAG) is available at (http://hiertags.elte.hu). 
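As an illustrative aside, the two measures defined above, the $z$-score and the NMI, can be computed directly from the raw co-occurrence counts and from a joint distribution, respectively. The following Python sketch is our own and only indicative (the naming is hypothetical, and the construction of the joint distribution from a pair of hierarchies is not reproduced here):
\begin{verbatim}
from math import sqrt, log

def z_score(q_ij, q_i, q_j, q):
    # z-score of the observed co-occurrence count q_ij of tags i and j,
    # given their individual frequencies q_i, q_j and the total number of
    # objects q (hypergeometric null model)
    mean = q_i * q_j / q
    var = (q_i * q_j / q) * ((q - q_i) / q) * ((q - q_j) / (q - 1))
    return (q_ij - mean) / sqrt(var)

def normalized_mutual_information(p_xy):
    # NMI of two discrete variables given their joint distribution,
    # p_xy[i][j] = p(x_i, y_j); the marginals are the row and column sums
    p_x = [sum(row) for row in p_xy]
    p_y = [sum(col) for col in zip(*p_xy)]
    h_x = -sum(p * log(p) for p in p_x if p > 0)
    h_y = -sum(p * log(p) for p in p_y if p > 0)
    h_xy = -sum(p * log(p) for row in p_xy for p in row if p > 0)
    mutual = h_x + h_y - h_xy
    return 2.0 * mutual / (h_x + h_y)
\end{verbatim}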
\subsubsection*{Flickr data} Flickr provides the possibility for searching photos by tags, thus, as a first step we downloaded photos resulting from search queries over a list of 68,812 English nouns, yielding altogether 2,565,501 photos, (the same photo can appear multiple times as a result for the different queries). At this stage we stored all the tags of the photos and the anonymous user id of the photo owners as well. Next, the set of tags on the photos had to be cleaned: only English nouns were accepted, and in case parts of a compound word appeared beside the complete compound word on the same photo, the smaller parts were deleted, leaving only the complete compound word. Since our algorithms rely on the weighted network of co-appearances, we applied a further filtering: a link was accepted only if the corresponding tags co-appeared on photos belonging to at least 10 different users. The resulting tag co-appearance network had 25,441 nodes, encoding information originating from 1,519,030 photos. We made the list of weighted links between the tags available at (http://hiertags.elte.hu). \subsubsection*{IMDb data} We have downloaded the data from the IMDb Web site\cite{imdbwww}, and used the ``keywords.list.gz'' data file, listing the keywords associated with the different movies. The goal of the keywords is helping the users in searching amongst the movies, and keywords can pertain to any part, scene, subject, genre, etc. of the movie. Although keywords can be given only by registered users, there is no restriction whatsoever for registering, and the submitted information is processed by the ``Database Content Team'' of the IMDb site. The version of the original data we used here contained 487,356 movie titles and 136,204 different keywords. However, to improve the quality of the data set, we restricted our studies to keywords appearing on at least 100 different movies, leaving 336,223 movies and 6,358 different keywords in the data set. This cleaned version is available at (http://hiertags.elte.hu). \section*{Discussion} We introduced a detailed framework for tag hierarchy extraction in tagging systems. First, we have defined quality measures for hierarchy extraction methods based on comparing the obtained results to a pre-defined exact hierarchy. A part of these quantities was simply given by fractions of links fulfilling some criteria, (e.g., exactly matching, inverted, etc.). However, we also defined the NMI between the exact- and the reconstructed hierarchies, providing a quality measure which is sensitive also to the position of the non-matching links in the hierarchy. This was illustrated by our experiments comparing a hierarchy to its randomized counterpart, where the NMI showed a significantly faster decay when the rewiring was started at the top of the hierarchy, compared to the opposite case of starting from the leaves. Furthermore, we developed a synthetic, computer generated benchmark system for tag hierarchy extraction. This tool provides versatile possibilities for testing hierarchy extraction algorithms under controlled conditions. The basic idea of our benchmark is generating collections of tags associated to virtual objects based on a pre-defined hierarchy between the tags. By running a tag hierarchy extraction algorithm on the generated synthetic data, the obtained result can be compared to the pre-defined exact hierarchy used in the data generation process. 
According to our experiments on the benchmark, by changing the parameters during the generation of the synthetic data, we can enhance or decrease the difficulty of the tag hierarchy reconstruction. In addition, we developed two novel tag hierarchy extraction algorithms based on the network approach, and tested them both on real systems and computer generated benchmarks. In case of the tagged protein data the similarity between the obtained protein function hierarchy and the hierarchy given by the Gene Ontology was very encouraging, and the hierarchy between the English words obtained for the Flickr and IMDb data sets seemed also quite meaningful. The computer generated benchmark system we have set up provides further possibilities for testing tag hierarchy extraction algorithms in general. By changing the parameters during the input generation we can enhance or decrease the difficulty of the tag hierarchy reconstruction. Our methods were compared to current state of the art tag hierarchy extraction algorithms by P.~Heymann \& H.~Garcia-Molina and by P.~Schmitz. Interestingly, the rank of the algorithms according to the introduced quality measures was varying from system to system. In case of the protein data set algorithm A was slightly ahead of the others, while the rest of the methods achieved more or less the same quality. In turn, for the easy synthetic test data, algorithm B and the algorithm by P.~Heymann \& H.~Garcia-Molina reached almost perfect reconstruction, with algorithm A left slightly behind, and the algorithm by P. Schmitz achieving very poor marks. However, when changing to the hard synthetic test data, a large difference was observed between the quality of the obtained results, as algorithm B significantly outperformed all other methods. The different ranking of the algorithms for the included examples indicates that tag hierarchy extraction is a non-trivial problem where a system can be challenging for one given approach and easy for another method and vice versa. Nevertheless the results obtained indicate that tag hierarchy extraction is a very promising direction for further research with a great potential for practical applications. \input{Hiertags_arxiv1.bbl} \newpage \input{Hiertags_SI_resub_01} \end{document}
\section{\label{sec:introduction}Introduction} Optomechanical systems are widely used in precision measurement experiments~\cite{Aspelmeyer:2013lha}. In particular, optomechanical systems consisting of massive oscillators are suitable for testing macroscopic quantum mechanics~\cite{Chen:2013sgs}, investigating the Newtonian interaction of quantum objects~\cite{Miao:2019pxw, Kafri:2014zsa}, measuring the gravitational force of milligram masses~\cite{Schmole:2016mde}, and gravitational wave detection~\cite{LIGOScientific:2016aoc}. An optomechanical system consists of mechanical oscillators coupled with optical fields. The displacement of or the force acting on the oscillator can be measured precisely by using optical interferometry. Massive oscillators not only make the optomechanical systems resistant to noise sources~\cite{LIGOScientific:2016aoc}, but also allow for the exploration of unique physics~\cite{Chen:2013sgs,Miao:2019pxw,Kafri:2014zsa,Schmole:2016mde}. Ultimately, macroscopic quantum optomechanical systems are expected to elucidate the quantum nature of gravity~\cite{Marletto:2017kzi, Bose:2017nin, Belenchia:2018szb}. Suspended pendulums are often used as mechanical oscillators in optomechanical systems above the milligram scale~\cite{Pontin:2016nem, Matsumoto:2013sua, Matsumoto:2014fda, Komori:2019zlg, Sakata:2010zz, Matsumoto:2018via, Corbitt:2007spn, Neben:2012edt, LIGOScientific:2009mif, LIGOScientific:2020luc}, while membranes and cantilevers are used in many experiments at smaller mass scales~\cite{Chan:2011ivv, Teufel:2011smx, Peterson:2016ayo}. Suspended pendulums are advantageous in that they can be isolated from the environment. In other words, pendulums are resistant to seismic noise and thermal noise. Furthermore, a pendulum can be regarded as a free mass in the broad frequency range above the resonant frequency. In general, pendulums have a wide sensitive frequency range for this reason. Recently, detection schemes for ultralight dark matter using optomechanical oscillators were proposed~\cite{Graham:2015ifn, Pierce:2018xmy, Carney:2019cio}, and the wide searchable range is crucial for the dark matter search because the mass of the dark matter is scarcely known. \begin{figure} \centering \includegraphics[width=0.7\columnwidth,clip]{rotation.pdf} \caption{Schematic illustration of a suspended linear cavity. The rotational degrees of freedom we focus on are defined along the vertical axis. The input mirror is much heavier than the end mirror for our configuration.} \label{fig:rotation} \end{figure} However, the issue of suspended pendulums as mechanical oscillators is their instability in the rotational degree of freedom, the so-called Sidles-Sigg instability~\cite{Sidles:2006vzf}. In a linear cavity, the yaw rotational motion of a suspended mirror, as indicated in Fig.~\ref{fig:rotation}, changes the position of the beam spot of the laser beam. Due to the change in the position of the beam spot, the radiation pressure in the cavity can behave as an anti-restoring force and destabilize the cavity. In the case that the mechanical restoring torque is dominant in the rotational degree of freedom, the Sidles-Sigg instability does not matter because the radiation pressure torque would be too weak to make a pendulum unstable. Thus, several experiments used multiple wires to suspend a mirror to stiffen the pendulum in the rotational degree of freedom~\cite{Corbitt:2007spn, Neben:2012edt, Kelley:2015axa}. However, increasing the number of wires induces a stronger coupling to the thermal bath. 
As a result, the thermal noise due to the suspension gets larger. One other way to deal with Sidles-Sigg instability is introducing feedback control in the rotational motion~\cite{Barsotti:2010zz}. If an oscillator has actuators applying torque on it, the unstable rotational motion can be suppressed. Therefore, modern gravitational wave detectors use active feedback controls for the angular motion~\cite{Liu:2018dfs}. However, active feedback control systems were not always available, especially for macroscopic quantum experiments and quantum measurements. This was because milligram or gram scale oscillators were often too small to attach actuators to~\cite{Sakata:2010zz}. In such cases, a more complicated feedback system was required to actuate a tiny oscillator remotely~\cite{Enomoto:2016lee, Nagano:2016wcg}. As a different approach, some experiments used triangular cavities to avoid the Sidles-Sigg instability~\cite{Matsumoto:2013sua, Matsumoto:2014fda, Komori:2019zlg}. When the number of mirrors consisting in the cavity is odd, the radiation pressure behaves as a positive restoring torque. Thus, the Sidles-Sigg instability can be avoided. However, additional mirrors are noise sources~\cite{Komori:2019zlg}. For the best sensitivity, a cavity with only two mirrors (a linear cavity) is favorable. Therefore, a stable configuration of a linear cavity is desired. In this paper, we propose and experimentally validate a stable trapping configuration for a suspended mirror in a linear cavity. We utilize radiation pressure inside the cavity as the restoring torque. Thus, our system can trap a rotational motion of the suspended mirror without additional active feedback controls. Therefore, our trapping scheme is applicable to tiny systems that cannot have actuators attached to them, and it is free from the feedback-control noises. In our system, we operate the cavity in the negative-$g$ regime, and the input mirror is much heavier than the end test mass. The combined characteristics of the negative-$g$ regime and unbalanced masses produce a stable rotational trapping. Furthermore, we validate the configuration experimentally with an 8 mg mirror. In addition, we discuss the feasibility to observe the quantum radiation pressure fluctuation of a milligram scale optomechanical system for testing macroscopic quantum mechanics by using our trapping configuration. \section{\label{sec:theory}Theoretical description} We analyze the rotational motion of suspended mirrors in a linear cavity. The following calculation shows the suspended mirrors are trapped with the positive radiation pressure torque under the condition that the cavity is in the negative-$g$ regime and one mirror is much heavier than the other one. As shown in Fig.~\ref{fig:rotation}, we define the angles of two mirrors as $\alpha_i$ and the torques exerted to them as $T_i$ ($i = 1,~2)$. The equation of motion of the rotational modes of the two mirrors is given by \begin{align} (\bm{K}_\mathrm{opt} + \bm{K}_\mathrm{mech} - \bm{I}\omega^2) \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} T_1 \\ T_2 \end{pmatrix} , \label{eq:motion} \end{align} where \begin{align} \bm{K}_\mathrm{mech} = \begin{pmatrix} K_1 & 0 \\ 0 & K_2 \end{pmatrix} ,~~ \bm{I} = \begin{pmatrix} I_1 & 0 \\ 0 & I_2 \end{pmatrix} \end{align} are the matrices of the mechanical restoring torques and the moment of inertia of each mirror. 
The optical torsional stiffness matrix is represented as~\cite{Sidles:2006vzf} \begin{align} \bm{K}_\mathrm{opt} = \frac{2P}{c(R_1 + R_2 - L)} \begin{pmatrix} R_1(L - R_2) & R_1R_2 \\ R_1R_2 & R_2(L - R_1) \end{pmatrix}, \label{eq:stiffness} \end{align} where $P$, $c$, $R_1$, and $L$ are the intracavity power, the speed of light, the radii of curvature, and the cavity length, respectively. The optical torque is generated by the change of the beam spot on the mirror due to the movement of the cavity axis as the mirror rotates. Equation~(\ref{eq:motion}) can be rewritten as \begin{align} \begin{pmatrix} K_1 - \beta g_2 - I_1\omega^2 & \beta \\ \beta & K_2 - \beta g_1 -I_2\omega^2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} T_1 \\ T_2 \end{pmatrix}, \label{eq:rewritemotion} \end{align} by defining $\beta = 2PL/[c(1 - g_1g_2)]$, $g_i = 1 - L/R_i$. $g_i$ is determined by the geometry of the cavity and generally called the $g$ factor. Hereafter, we consider a case where the mirror 2 is much heavier than the mirror 1, and the mechanical restoring torque of the mirror 1 is much smaller than that of the mirror 2. This assumption is practical for actual experiments. This optomechanical system can be a sensitive force sensor by using the lighter mirror as a test mass. At the same time, we can control the cavity length to maintain the resonance by attaching actuators on the larger (heavier) mirror; $I_1 \ll I_2$, and $K_1 \ll K_2$. In this case, the diagonalization of Eq.~(\ref{eq:rewritemotion}) indicates the resonant frequency of the differential mode is \begin{align} \omega_\mathrm{diff} \simeq \sqrt{\frac{K_1 - \beta g_2}{I_1}}. \label{eq:resfreqdiff} \end{align} The diagonalization of Eq.~(\ref{eq:rewritemotion}) derives two decoupled modes. Here, the eigen mode where the two mirrors rotate in the same direction is named the differential mode. On the other hand, the eigen mode in which the mirrors rotate in the opposite directions is named the common mode. When the lighter mirror is flat ($g_1 = 1$), $g_2$ should be positive to satisfy the optical cavity condition of $0 < g_1g_2 <1$. In this case, the resonant frequency rapidly goes to zero with the increase of the intracavity laser power, which causes an angular instability. On the other hand, we can avoid the instability and even can stiffen the differential mode in the negative-$g$ regime. Equation~(\ref{eq:rewritemotion}) also indicates the resonant frequency of the common mode decreases as \begin{align} \omega_\mathrm{com} \simeq \sqrt{\frac{K_2 + \beta(1 - g^2)/g}{I_2}}. \end{align} Here, we assume the two curvatures of the mirrors are identical ($R_1 = R_2 = R$, $g_1 = g_2 = g$) for simplicity. The mechanical resonant frequency of the common mode can be high enough by using a heavy enough mirror for the mirror 2. Therefore, the decrease of the resonant frequency due to the radiation pressure torque can be ignored. In other words, the radiation pressure torque will not make the common mode unstable when one mirror is much heavier than the other. The tolerable intracavity power can be increased by increasing only the mass of one mirror. Thus, this configuration allows trapping of a small mirror. Figure~\ref{fig:theoretical}. shows the dependence of the resonant frequency on the intracavity power. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{theoretical.pdf} \caption{Dependence of the resonant frequencies of differential and common modes. 
The negative resonant frequency implies that the mode is unstable. For comparison, both the negative-$g$ regime ($g_1 = g_2 = -0.1$) and the positive-$g$ regime ($g_1 = g_2 = +0.1$) cases are plotted; other parameters are described in the main text. In the range between 0.72~W and 34~kW of the intracavity power, only the negative-$g$ cavity is stable.} \label{fig:theoretical} \end{figure} For this plot, we use parameters similar to those of our experimental setup, as follows: mirror masses of $m_1 = 10~\mathrm{mg}$ and $m_2 = 10~\mathrm{g}$, mirror radii of $r_1 = 1.5~\mathrm{mm}$ and $r_2 = 10~\mathrm{mm}$, (moments of inertia of $I_1 = 5.6\times 10^{-12}~\mathrm{kg~m^2}$ and $I_2 = 2.5\times 10^{-7}~\mathrm{kg~m^2}$,) mechanical resonant frequencies of $\omega_1/(2\pi) = 0.5~\mathrm{Hz}$ and $\omega_2/(2\pi) = 5~\mathrm{Hz}$, and the cavity length of $L = 1.1R = 11~\mathrm{cm}~(g = -0.1)$. For comparison, we also plot the resonant frequencies of the differential and common modes for the case that the $g$ factors of the cavity mirrors are positive ($g_1 =g_2 = +0.1$). The cavity in the negative-$g$ regime remains stable even above an intracavity power of 10~kW, while the cavity in the positive-$g$ regime becomes unstable just above 0.72~W. \begin{figure*} \centering \includegraphics[width=2\columnwidth,clip]{setupphoto.pdf} \caption{(a) Schematic drawing of the experimental setup. The test mass is an 8~mg mirror, which is suspended as a double pendulum. The test mass and the input mirror form the main cavity on the platform in the vacuum chamber. The laser source is an Nd:YAG laser, and the wavelength is 1064~nm. The cavity length is feedback controlled for the continuous resonance. The laser intensity fluctuation is also suppressed by the feedback control with an acousto-optic modulator. The beam spot on the test mass is monitored with a quadrant photo detector for the transfer function measurement. (b) Photographs of the main cavity on the platform. The platform is suspended with springs as a double pendulum to isolate the main cavity from seismic vibration. A close-up of the test mass is shown on the lower left side. A close-up of the input mirror and the coil-magnet actuator on it is shown on the lower right side.} \label{fig:setup} \end{figure*} \section{\label{sec:method}Experimental demonstration} We experimentally demonstrate that our trapping configuration works properly. As mentioned in the previous section, our configuration utilizes the radiation pressure of the laser light inside a linear cavity. Thus, the restoring torque due to the radiation pressure also increases as the intracavity power increases. We design our experiment to observe this increase of the restoring torque. The experimental setup is shown in Fig.~\ref{fig:setup}. The instability due to the radiation pressure of the laser light inside the cavity will be an issue when the radiation pressure torque is dominant. To realize the predominance of the radiation pressure, we build a linear cavity with a tiny mirror of 8~mg (0.5~mm thick with a diameter of 3~mm) as the test mass. This tiny mirror is suspended with a thin carbon fiber (6~$\mu$m in diameter and 2~cm long) to lower the mechanical restoring torque. The $Q$ value of a single pendulum with this carbon fiber is measured to be $Q \sim 8\times 10^4$ at the mechanical resonant frequency, 3~Hz. The test mass is suspended as a double pendulum via an intermediate mass for isolation from the seismic motion. The input mirror is much heavier. Its mass is 60~g. 
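With these masses the system is deep in the unbalanced regime assumed in Sec.~\ref{sec:theory} ($I_1 \ll I_2$). As a rough numerical cross-check of the stability analysis, the two mode frequencies can be evaluated directly from Eq.~(\ref{eq:resfreqdiff}) and the corresponding expression for the common mode. The short Python sketch below is only illustrative; it uses the parameters quoted for Fig.~\ref{fig:theoretical} rather than the exact values of this setup, and a negative value again marks an unstable mode.
\begin{verbatim}
import numpy as np

# illustrative parameters of Fig. 2 (not the exact experimental values)
c = 299792458.0                  # speed of light [m/s]
L = 0.11                         # cavity length [m]
g1 = g2 = -0.1                   # g factors (negative-g regime)
I1, I2 = 5.6e-12, 2.5e-7         # moments of inertia [kg m^2]
K1 = I1 * (2 * np.pi * 0.5)**2   # mechanical torsional stiffness, mirror 1 [N m/rad]
K2 = I2 * (2 * np.pi * 5.0)**2   # mechanical torsional stiffness, mirror 2 [N m/rad]

P = np.logspace(-2, 5, 2000)     # intracavity power [W]
beta = 2 * P * L / (c * (1 - g1 * g2))

# signed frequencies of the differential and common modes [Hz];
# a negative sign indicates a negative (unstable) total torsional stiffness
k_diff = K1 - beta * g2
k_com = K2 + beta * (1 - g1**2) / g1
f_diff = np.sign(k_diff) * np.sqrt(np.abs(k_diff) / I1) / (2 * np.pi)
f_com = np.sign(k_com) * np.sqrt(np.abs(k_com) / I2) / (2 * np.pi)

print("common mode turns unstable near P =", P[np.argmax(k_com < 0)], "W")
\end{verbatim}
For these numbers the common mode loses stability only at a few tens of kilowatts of intracavity power, while in the positive-$g$ case the differential mode already becomes unstable below 1~W, consistent with the range quoted in the caption of Fig.~\ref{fig:theoretical}.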
Two coil-magnet actuators are attached to the input mirror to apply a force and a torque. The radii of curvature of the mirrors are 10~cm, and the cavity length is $11.0~\pm 0.3$~cm. Thus, the cavity is in the negative-$g$ regime of $g = -0.1$. The finesse of the cavity is measured to be $(3.0 \pm 0.3)\times 10^3$. The cavity is built on a platform board; the platform is also suspended as a double pendulum in a vacuum chamber. The pressure is kept at about 1~Pa to suppress acoustic disturbances. The cavity length is feedback controlled to resonate continuously. We use side locking to keep the cavity at a detuned point near half of the resonant peak. The displacement of the mirror from the control point is sensed by the reflected-light power that is monitored by the photo detector. The error signal is obtained by comparing the output signal from the photo detector with a constant voltage. The error signal is filtered and sent to the coil-magnet actuator attached to the input mirror. The transmitted light from the cavity is monitored by a quadrant photo detector, as described in Fig.~\ref{fig:setup}. Half of the injected laser beam is also monitored for the laser intensity control. By sending the feedback signal to the acousto-optic modulator (AOM), we suppress the laser intensity fluctuation. To show that the radiation pressure works as a positive restoring torque, we evaluate the resonant frequency described in Eq.~(\ref{eq:resfreqdiff}). The resonant frequency is determined by the transfer function of the rotational motion of the mirrors. We apply torque to the input mirror by injecting differential sine-wave signals into the coil-magnet actuators on the input mirror (Fig.~\ref{fig:setup}). Then, the test mass is also swung via the radiation pressure inside the cavity. The rotation of the test mass results in a change of the beam spot position on the test mass because the cavity axis changes. We observe the transmitted light from the cavity by a quadrant photo detector to detect the change in the beam spot. A convex lens is placed halfway between the test mass and the quadrant photo detector to measure only the change in the beam spot without being affected by the change in the cavity axis. The distance between the lens and the test mass (the quadrant photo detector) is twice as long as the focal length of the lens. The transfer function from the injected signal to the quadrant photo detector signal includes the transfer function of the suspended mirrors~\cite{Nagano:2016wcg}. Thus, the transfer function has the characteristic form of a resonant peak. We fit the transfer function to estimate the resonant frequency. Note that the cavity length is feedback controlled to keep the resonance of the cavity during this transfer function measurement. \section{\label{sec:result}Result} The measured transfer functions from the excitation to the quadrant photo detector signal are plotted in Fig.~\ref{fig:transfunc}. We measure them at five different intracavity powers. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{YawtransPaper.pdf} \caption{The measured transfer functions for the five measurements at different intracavity powers. The points are the measured data, and the lines represent the fittings. The peaks and the phase flips indicate the resonant points. From the fitting, we estimate the resonant frequencies.} \label{fig:transfunc} \end{figure} The peaks and the phase flips due to the resonance of the test mass are observed in each measurement. 
The intracavity power in each measurement is estimated by the transmitted light power. We fit the gain of the transfer functions to determine the resonant frequency. The fitted parameters are resonant frequency, damping ratio of the resonance, and the overall gain factor. The fitted curves are also plotted in Fig.~\ref{fig:transfunc}. The uncertainty in the intracavity power is dominated by the fluctuation in the power of the transmitted light. The fluctuation in the transmitted light power is at the frequency of the excitation signal. Thus, the fluctuation would be due to the misalignment when the mirror is swung. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{ResFreq_m.pdf} \caption{The resonant frequencies of the differential mode in the rotational degree of freedom. The shaded region represents the theoretically predicted values with the width corresponding to the uncertainties in mirror reflectivities and the cavity length.} \label{fig:resfreq} \end{figure} We show the dependence of the resonant frequency on the intracavity power in Fig.~\ref{fig:resfreq}. The uncertainty of the resonant frequency is estimated from the fit of the transfer function measurement. We also show the predicted region from the theoretical calculation using Eq.~(\ref{eq:resfreqdiff}) with the parameters of the optics. The width of the region corresponds to the uncertainty of the design reflectivities of the mirrors and the uncertainty of the cavity length. The measured dependency is consistent with the theoretical prediction. We note that the longitudinal optical spring effect does not affect our results since the beam spot on the test mass mirror is precisely adjusted to the center. When the beam spot is off the center of the mirror, the longitudinal optical spring also acts as a restoring torque. However, the deviation of the beam spot from the center of the test mass must be smaller than 2~$\mu$m with our experimental parameters. Otherwise, the constant radiation pressure rotates the test mass mirror and breaks the cavity locking. When the deviation is smaller than 2~$\mu$m, the restoring torque from the longitudinal optical spring is smaller than several $10^{-11}$~Nm/rad. This is smaller than the original restoring torque of the pendulum's suspension wire. Furthermore, the trapping potential by the longitudinal optical spring is suppressed by the feedback controlling of the cavity length, and thus, its effect is further minimized. Therefore, we conclude that the effect of the coupling of the longitudinal optical spring is negligible. \section{\label{sec:discussion}Discussion} One of the essential applications of our trapping scheme is the observation of the quantum radiation pressure fluctuation for testing macroscopic quantum mechanics; the quantum radiation pressure fluctuation is a radiation pressure fluctuation due to the quantum fluctuation of the photon number of light. As mentioned in the Introduction, one attracting target of milligram scale optomechanical systems is testing macroscopic quantum mechanics. A proposed experimental test of macroscopic quantum mechanics requires to prepare a conditional state~\cite{Chen:2013sgs}. To set an optomechanical system in a conditional state, the quantum radiation pressure fluctuation needs to dominate force noises~\cite{Michimura:2020yvn}. For the observation of quantum radiation pressure fluctuation, the combination of a light mirror and a high power laser beam is preferable. 
The high power laser beam enhances the quantum radiation pressure fluctuation itself, and a light mirror enhances the displacement of the mirror that is sensed by the interference. Therefore, our trapping system is suitable because it overcomes the Sidles-Sigg instability, which limited the optimal sensitivities in the previous milligram and gram scale experiments~\cite{Corbitt:2007spn,Neben:2012edt,Kelley:2015axa,Sakata:2010zz,Matsumoto:2013sua,Matsumoto:2014fda,Komori:2019zlg}. We estimate the sensitivity of our experimental setup to discuss the feasibility of observing the quantum radiation pressure fluctuation. The calculation reveals that the quantum radiation pressure fluctuation will be dominant with an intracavity power of over 14~W (see Appendix~\ref{sec:appendix} for more details). Since our trapping scheme overcomes the limitation of the Sidles-Sigg instability, the cavity can accumulate 30~kW or more of power inside, according to our measurement result. Therefore, we conclude that our system can realize the observation of the quantum radiation pressure fluctuation, though technical classical noises should be well suppressed. \section{\label{sec:conclusion}Conclusion} We propose a configuration to trap the rotational motions of the suspended mirrors in a linear cavity. By operating a linear cavity in the negative-$g$ regime and using unbalanced-mass mirrors, the two rotational modes of the cavity mirrors remain stable under the radiation pressure inside the cavity. Furthermore, we demonstrate an experimental validation of the trapping by building a linear cavity with an 8~mg mirror. We observe that the rotational restoring torque on the mirror increases as the intracavity power increases. The behavior is consistent with the theoretical prediction. Therefore, we confirm that the 8~mg mirror obtains a positive restoring torque originating from the radiation pressure of the laser beam inside the cavity. We also discuss the possibility of observing the quantum radiation pressure fluctuation by using our trapping scheme. The calculation shows that the quantum radiation pressure fluctuation can be observed with realistic parameters that are identical or similar to those of our realized experiment. Together with the successful trapping result, we confirm that the cavity can accumulate a laser power large enough to enhance the quantum radiation pressure fluctuation so that it can be observed. Thus, this work is a crucial step towards testing macroscopic quantum mechanics, while the configuration is also applicable to a broad range of optomechanical systems. \begin{acknowledgments} We thank Yutaro Enomoto and Ooi Ching Pin for fruitful discussions. This work was supported by a Grant-in-Aid for Challenging Research Exploratory Grant No. 18K18763 from the Japan Society for the Promotion of Science (JSPS), by JST CREST Grant No. JPMJCR1873, and by MEXT Quantum LEAP Flagship Program (MEXT Q-LEAP) Grant No. JPMXS0118070351. T.K. is supported by KAKENHI Grant No. 19J21861 from the JSPS. \end{acknowledgments}
\section{Introduction} \noindent This paper is devoted to the study of perturbations of the Crapper waves through gravity and concentrated vorticity. The problem that we analyze is the free-boundary stationary Euler equation with vorticity \begin{subequations}\label{v-euler} \begin{align}[left=\empheqlbrace\,] &v\cdot\nabla v+\nabla p+ge_2=0\hspace{1cm}\textrm{in}\hspace{0.3cm}\Omega\label{v-euler1}\\[2mm] &\nabla\cdot v=0\hspace{3.35cm}\textrm{in}\hspace{0.3cm}\Omega\label{v-euler2}\\[2mm] &\nabla^{\perp}\cdot v=\omega \hspace{3.05cm}\textrm{in}\hspace{0.3cm}\Omega\label{v-euler3}\\[2mm] &v\cdot n =0 \hspace{3.45cm}\textrm{on}\hspace{0.3cm} \mathcal{S}\label{v-euler4}\\[2mm] &p=TK\hspace{3.5cm}\textrm{on}\hspace{0.3cm}\mathcal{S}\label{v-euler5} \end{align} \end{subequations} \medskip \noindent Here $v$ and $p$ are the velocity and the pressure, respectively; $g$ is the gravity, $e_2$ is the second vector of the Cartesian basis, $K$ is the curvature of the free boundary, $T$ the surface tension and $\omega$ the vorticity, that we will specify later. Moreover, since $\Omega$ is defined as a fluid region and $\mathbb{R}^2\setminus \Omega$ as a vacuum region, there exists an interface $\mathcal{S}$ that separates the two regions and $n$ is the normal vector at the interface. We parametrize the interface with $z(\alpha)=(z_1(\alpha),z_2(\alpha))$, for $\alpha\in[-\pi,\pi]$. Thus $\Omega$ is defined for $-\pi< x< \pi$ and $y$ below the interface $z(\alpha)$ with $z_2(\pm \pi)=1$.\\ \noindent The Crapper waves are exact solutions of the water waves problem with surface tension at infinite depth. In \cite{Crapper1957}, Crapper proves the existence of pure capillary waves with an overhanging profile. Its result has been extended in \cite{Kinnersley1976} by Kinnersley for the finite depth case and in \cite{AAW2013} by Akers-Ambrose-Wright by adding a small gravity. In \cite{ASW2014} Ambrose-Strauss-Wright analyze the global bifurcation problem for traveling waves, considering the presence of two fluids and in \cite{CEG2016} and \cite{CEG2019}, C\'ordoba-Enciso-Grubic add beyond the small gravity a small density in the vacuum region in order to prove the existence of self-intersecting Crapper solutions with two fluids.\\ \noindent In the present paper we will deal with rotational waves. The literature about these waves is very recent and the first important result is the one by Constantin and Strauss \cite{CS2004}. They study the rotational gravity water waves problem without surface tension at finite depth and they are able to prove the existence of large amplitude waves. Later, in \cite{CV2011}, Constantin and Varvaruca extend the Babenko equation for irrotational flow \cite{Babenko87} to the gravity water waves with constant vorticity at finite depth. They remark that the new formulation opens the possibility of using global bifurcation theory to show the existence of large amplitude and possibly overhanging profiles. Furthermore, in a recent paper \cite{CSV2016}, the same authors construct waves of large amplitude via global bifurcation. 
Such waves could have overhanging profiles but their explicit existence is still an open problem.\\ \noindent Furthermore, there are some new results by Hur and Vanden-Broeck \cite{Hur-Vanden2020} and by Hur and Wheeler \cite{Hur-Wheeler2020}, where the authors establish, first numerically and then analytically, the existence of a new exact solution for periodic traveling waves in constant vorticity flows of infinite depth, in the absence of gravity and surface tension. They show that the free surface is the same as that of Crapper's capillary waves in an irrotational flow.\\ \noindent Concerning the presence of surface tension in a rotational fluid, we recall the works by Wahl\'en: in \cite{Wahlen2006-1} the author proves the existence of symmetric regular capillary waves for arbitrary vorticity distributions, provided that the wavelength is small enough, and in \cite{Wahlen2006-2} he adds a gravity force acting at the interface and proves the existence of steady periodic capillary-gravity waves. As far as we know, there is no proof of the existence of overhanging waves in either the capillary or the gravity-capillary rotational setting with a fixed period. In \cite{deBoeck2014}, De Boeck shows that Crapper waves are a limiting configuration for both gravity-capillary water waves in infinite depth (see also \cite{AAW2013}) and gravity-capillary water waves with constant vorticity at finite depth. His formulation comes from the one introduced in \cite{CV2011} and the idea is based on taking a small period, which implies that Crapper's waves govern both gravity-capillary waves and gravity-capillary waves with constant vorticity at finite depth. \noindent Differently from his work, we will consider a fixed period and a small, concentrated vorticity, such as a point vortex or a vortex patch. \\ \noindent In \cite{SWZ2013}, Shatah, Walsh and Zheng study the capillary-gravity water waves with concentrated vorticity, and they extend their work in \cite{EWZ2019} by considering an exponentially localized vorticity; in both cases they perturb from the flat interface and do not consider overhanging profiles.\\ \noindent However, the technique we will use is completely different from the ones in the cited papers, since we would like to show the existence of a perturbation of Crapper's waves with both small concentrated vorticity and small gravity. \subsection{Outline of the paper} \noindent In section \ref{settings} we describe the setting in which we work and we introduce a new formulation for the problem \eqref{v-euler}, through the stream function and a proper change of coordinates to fix the domain. In section \ref{point-case} we describe the point vortex formulation and the principal operators that identify our problem. At the end of the section we prove the main result, Theorem \ref{point-existence}, which shows the existence of a perturbation of Crapper's waves with a small point vortex. In the last section we introduce the problem \eqref{v-euler} with a vortex patch, which we identify through three operators, and the implicit function theorem allows us to prove the existence of a perturbation of Crapper's waves also with a small vortex patch, Theorem \ref{patch-existence}. 
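\noindent As a purely illustrative aside, not needed for any of the proofs, the overhanging Crapper profiles that we perturb can be visualized numerically from the explicit solution recalled in section \ref{Crapper formulation}: at the free surface one has $\partial_{\alpha}z^c=-e^{-\tau_c+i\theta_c}=-e^{if_c}$ (cf. \eqref{Crapper-z-derivative}), with $f_c$ given by \eqref{Crapper-solutions} evaluated at $w=-\alpha$, so the profile follows from a single quadrature. The short sketch below is our own and the naming is hypothetical; for $|A|$ close to $A_0$ it reproduces an overhanging profile.
\begin{verbatim}
import numpy as np

A = 0.45       # |A| < 0.45467... gives a profile without self-intersections
alpha = np.linspace(-np.pi, np.pi, 2001)

# dz/dalpha = -exp(i f_c) at the surface (psi = 0, phi = -alpha), which for the
# Crapper solution reduces to -((1 - A e^{i alpha}) / (1 + A e^{i alpha}))^2
dz = -((1 - A * np.exp(1j * alpha)) / (1 + A * np.exp(1j * alpha)))**2

# trapezoidal quadrature gives the complex profile z = z_1 + i z_2
z = np.concatenate(([0.0], np.cumsum(0.5 * (dz[1:] + dz[:-1]) * np.diff(alpha))))
x, y = z.real - z.real.mean(), z.imag   # centered profile, ready to be plotted
\end{verbatim}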
\bigskip \section{Setting of the problem}\label{settings} \noindent The interface $\mathcal{S}=\partial\Omega$, between the fluid region with density $\rho=1$ and the vacuum region, has a parametrization $z(\alpha)$ which satisfies the periodicity conditions $$z_1(\alpha+2\pi)=z_1(\alpha) +2\pi, \hspace{1cm} z_2(\alpha+2\pi)=z_2(\alpha),$$ \medskip \noindent and it is symmetric with respect to the $y-$axis \begin{equation}\label{z-parity} z_1(\alpha)=-z_1(-\alpha), \hspace{1cm} z_2(\alpha)=z_2(-\alpha). \end{equation} \medskip \noindent The aim of this paper is to prove the existence of perturbations of the Crapper waves with vorticity through the techniques developed in \cite{OS2001}, in \cite{AAW2013} and \cite{CEG2016}. First of all we will rewrite the system \eqref{v-euler} in terms of the stream function and then we will perform some changes of variables in order to modify the fluid region and to analyse the problem in a more manageable domain. The key point is the use of the implicit function theorem to show that in a neighborhood of the Crapper solutions there exists a perturbation due to the presence of the gravity and the vorticity. \subsection{The stream formulation with vorticity} \noindent The fluid flow is governed by the incompressible stationary Euler equations \eqref{v-euler}. The incompressibility condition \eqref{v-euler2} implies the existence of a stream function $\psi:\Omega\rightarrow\mathbb{R}$, with $v=\nabla^{\perp}\psi$, and the kinematic boundary condition \eqref{v-euler4} implies $\psi=0$ on $\mathcal{S}$. In addition, by using the condition \eqref{v-euler5} and the fact that the vorticity we consider is concentrated inside the domain $\Omega$, we can rewrite the equation \eqref{v-euler1} at the interface and we end up with the Bernoulli equation \begin{equation} \frac{1}{2} |v|^2+TK+gy=\textrm{constant}. \end{equation} \medskip \noindent We can write the system \eqref{v-euler} in terms of the stream function as follows \begin{subequations}\label{psi-euler} \begin{align}[left=\empheqlbrace\,] &\Delta\psi=\omega\hspace{6cm}\textrm{in}\hspace{0.3cm}\Omega\label{psi-eul1}\\ &\psi=0\hspace{6.4cm}\textrm{on}\hspace{0.3cm}\mathcal{S}\label{psi-eul2}\\ &\frac{1}{2}|\nabla\psi|^2+gy+TK=\textrm{constant}\hspace{1.9cm}\textrm{on}\hspace{0.3cm}\mathcal{S}\label{psi-eul3}\\ &\frac{\partial\psi}{\partial x}=0 \hspace{6.1cm}\textrm{on}\hspace{0.3cm} x=\pm\pi\label{psi-eul4}\\ &\lim_{y\rightarrow 0}\left(\frac{\partial\psi}{\partial y},-\frac{\partial\psi}{\partial x}\right)=(c,0)\label{psi-eul5} \end{align} \end{subequations} \medskip \noindent where the condition \eqref{psi-eul4} comes from the periodicity and symmetry assumptions, the condition \eqref{psi-eul5} means that the flow becomes uniform at the infinite bottom, and $c\in\mathbb{R}$ is the wave speed. The main problem we have to face is the absence of a potential, which is due to the rotationality of the problem. We will treat the point vortex and the vortex patch in two different ways, since the nature of the singularity is different in the two cases, but before dealing with our problem we will focus on the general framework. \bigskip \subsection{The general vorticity case}\label{subsec-change-variables}\label{general-vorticity} \noindent The main difficulties of the problem \eqref{psi-euler} are the presence of a moving interface and the absence of a potential, since the fluid is not irrotational. 
We recall the Zeidler theory \cite{Zeidler} about pseudo-potential, so we introduce the function $\phi$, which satisfies the following equations \begin{equation}\label{pseudo-potential} \left\{\begin{array}{lll} \displaystyle\frac{\partial\phi}{\partial x}=W(x,y)\frac{\partial\psi}{\partial y}\\[2mm] \displaystyle\frac{\partial\phi}{\partial y}=-W(x,y)\frac{\partial\psi}{\partial x}, \end{array}\right. \end{equation} \medskip \noindent where $W(x,y)$ is exactly equal to $1$ when the fluid is irrotational and satisfies \begin{equation}\label{x-y-W} \frac{\partial W}{\partial x}\frac{\partial\psi}{\partial x}+\frac{\partial W}{\partial y}\frac{\partial\psi}{\partial y}+W\Delta\psi=0. \end{equation} \medskip \noindent We transform the problem from the $(x,y)$-plane into the $(\phi,\psi)$-plane, by taking the advantage of the fact that the stream function is zero at the interface, see fig. \ref{omega-to-disk} . Furthermore, we consider the case of symmetric waves, then it follows that \begin{equation}\label{symmetry} \left\{\begin{array}{lll} \phi(x,y)=-\phi(-x,y)\\[3mm] \psi(x,y)=\psi(-x,y), \end{array}\right. \end{equation} \medskip \noindent and they satisfy the following relations, coming from \eqref{pseudo-potential}. \begin{equation}\label{x-y-Jacobian} \begin{pmatrix} \displaystyle\frac{\partial x}{\partial\phi} & \displaystyle\frac{\partial x}{\partial\psi}\\ \displaystyle\frac{\partial y}{\partial\phi} & \displaystyle\frac{\partial y}{\partial\psi}\\ \end{pmatrix} =\frac{1}{W(v_1^2+v_2^2)} \begin{pmatrix} v_1 & -W v_2\\ v_2 & W v_1 \end{pmatrix}, \end{equation} \medskip \noindent where $v_1, v_2$ are the components of the velocity field. Moreover, we want to write the system in a non-dimensional setting, thus the new variables are $$(\phi,\psi)=\frac{1}{c}(\phi^*,\psi^*), \quad (v_1,v_2)=\frac{1}{c} (v_1^*,v_2^*), \quad \omega=\frac{1}{c}\omega^*,$$ \noindent where the variables with the star are the dimensional one and $c$ is the wave speed. The properties of our problem allow us to pass from $\Omega$ into $\tilde{\Omega}$, defined as follows \begin{equation}\label{Omega-tilde} \tilde{\Omega}:=\{(\phi,\psi): -\pi<\phi<\pi, -\infty<\psi<0\}. \end{equation} \medskip \noindent We have to transform the system \eqref{psi-euler} and the equation \eqref{x-y-W} in the new coordinates. So we take the derivative with respect to $\phi$ of the condition \eqref{psi-eul3} and we get \begin{equation}\label{phi-psi-DBernoulli} \frac{\partial}{\partial\phi}\left(\frac{v_1^2+v_2^2}{2}\right)+p\frac{v_2}{v_1^2+v_2^2}-q\frac{\partial}{\partial\phi}\left[\frac{W}{\sqrt{v_1^2+v_2^2}}\left(v_1\frac{\partial v_2}{\partial\phi}-v_2\frac{\partial v_1}{\partial\phi}\right)\right]=0, \end{equation} \medskip \noindent where $\displaystyle p=\frac{g}{c^2}$ and $\displaystyle q=\frac{T}{c^2}$ and \eqref{x-y-W} becomes \begin{equation}\label{phi-psi-W} (v_1^2+v_2^2)\frac{\partial W}{\partial\psi}=W\omega. \end{equation} \medskip \noindent The problem we study is periodic, so it is more natural to do the analysis in a circular domain. We introduce the independent variable $\zeta=e^{-i\phi+\psi}$, where $\phi+i\psi$ runs in $\tilde{\Omega}$ and $\zeta$ in the unit disk, so $\zeta=\rho e^{i\alpha}$. The relation between $(\phi,\psi)$ and the variable in the disk $(\alpha,\rho)$ is the following $(\phi,\psi)=(-\alpha,\log(\rho))$, where $-\pi<\alpha<\pi$ and $0 <\rho<1$. Thus, we pass from $\tilde{\Omega}$ into the unit disk, see fig. \ref{omega-to-disk}. 
\begin{figure}[htbp] \centering \includegraphics[scale=0.5]{omega-to-disk} \caption{The domains $\Omega$, $\tilde{\Omega}$ and the disk.}\label{omega-to-disk} \end{figure} \medskip \noindent Furthermore we define the dependent variables $\tau(\alpha,\rho)$ and $\theta(\alpha,\rho)$ as follows \begin{equation}\label{tau-theta} \tau=\frac{1}{2}\log(v_1^2+v_2^2),\quad \theta=\arctan\left(\frac{v_2}{v_1}\right) \end{equation} \medskip \noindent Thanks to \eqref{tau-theta}, the equation \eqref{phi-psi-W} for $W$ becomes \begin{equation*} e^{2\tau}\rho\frac{\partial W}{\partial\rho}=W\omega, \end{equation*} \medskip \noindent then we have \begin{equation}\label{alpha-rho-W} \displaystyle W(\alpha,\rho)=\exp\left(\int_{0}^{\rho}\omega\frac{e^{-2\tau(\alpha,\rho')}}{\rho'}\,d\rho'\right). \end{equation} \medskip \noindent The derivative of the Bernoulli equation \eqref{phi-psi-DBernoulli}, computed at the interface $z(\alpha)$ which corresponds to $\rho=1$, becomes \begin{equation}\label{alpha-rho-DBernoulli} \frac{\partial}{\partial\alpha}\left(\frac{1}{2}e^{2\tau(\alpha,1)}\right)-p\frac{e^{-\tau(\alpha,1)}\sin(\theta(\alpha,1))}{W(\alpha,1)}+q\frac{\partial}{\partial\alpha}\left(W(\alpha,1)e^{\tau(\alpha,1)}\frac{\partial\theta}{\partial\alpha}\right)=0. \end{equation} \bigskip \section{The point vortex case}\label{point-case} \subsection{The point vortex framework} We consider a point of constant vorticity, that does not touch the interface $z(\alpha)$, defined as $\omega=\omega_0 \delta((x,y)-(0,0))$, where $\delta((x,y)-(0,0))$ is a delta distribution taking value at the point $(0,0)$ and $\omega_0$ is a small constant. In addition, since we have a fluid with density $1$ inside the domain $\Omega$ and the vacuum in $\mathbb{R}^2\setminus\Omega$, then there is a discontinuity of the velocity field at the interface and a concentration of vorticity $\tilde{\omega}(\alpha)\delta((x,y)-(z_1(\alpha),z_2(\alpha)))$, where $\tilde{\omega}(\alpha)$ is the amplitude of the vorticity along the interface. This implies the stream function $\psi$ in $\Omega$ to be the sum of an harmonic part \begin{equation}\label{harmonic-point-stream} \psi_H(x,y)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\left|(x,y)-(z_1(\alpha'), z_2(\alpha'))\right|\tilde{\omega}(\alpha')\,d\alpha', \end{equation} \noindent which is continuous over the interface and another part related to the point vortex. The velocity can be obtained by taking the orthogonal gradient of the stream function and we have\\ \begin{equation}\label{velocity-point} v(x,y)=(\partial_y\psi_{H}(x,y), -\partial_x\psi_H(x,y))+\frac{\omega_0}{2\pi}\frac{(y,-x)}{x^2+y^2}. \end{equation} \medskip \noindent However, in order to describe the point vortex problem we have to adapt the kinematic boundary condition \eqref{v-euler4} and the Bernoulli equation \eqref{v-euler5}, equivalent to \eqref{psi-eul3}. 
First, let us compute the velocity at the interface by taking the limit in the normal direction; we get \begin{equation} \begin{split} v(z(\alpha))&=(\partial_{z_2}\psi_H,-\partial_{z_1}\psi_H)+\frac{1}{2}\frac{\tilde{\omega}(\alpha)}{|\partial_{\alpha}z|^2}\partial_{\alpha}z+\frac{\omega_0}{2\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}\\[3mm] &=BR(z(\alpha),\tilde{\omega}(\alpha))+\frac{1}{2}\frac{\tilde{\omega}(\alpha)}{|\partial_{\alpha}z|^2}\partial_{\alpha}z+\frac{\omega_0}{2\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}, \end{split} \end{equation} where $BR(z(\alpha),\tilde{\omega}(\alpha))$ is the Birkhoff-Rott integral \begin{equation*} BR(z(\alpha),\tilde{\omega}(\alpha))=\frac{1}{2\pi}\textrm{P.V.}\int_{-\pi}^{\pi}\frac{(z(\alpha)-z(\alpha'))^{\perp}}{|z(\alpha)-z(\alpha')|^2}\cdot\tilde{\omega}(\alpha')\,d\alpha'. \end{equation*} \medskip \noindent Thus the condition \eqref{v-euler4} becomes \begin{equation}\label{point-kbc} \begin{split} &v(z(\alpha))\cdot(\partial_{\alpha}z(\alpha))^{\perp}=\left(BR(z(\alpha),\tilde{\omega}(\alpha))+\frac{\omega_0}{2\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}\right)\cdot \partial_{\alpha}z(\alpha)^{\perp}=0. \end{split} \end{equation} \medskip \noindent To deal with the Bernoulli equation and to reach a manageable formulation, we have to use the change of variables described in section \ref{general-vorticity}. We pass from the domain $\Omega$ in the $(x,y)$ variables, fig. \ref{omega-to-disk}, into $\tilde{\Omega}$ in the $(\phi, \psi)$ variables and finally into the unit disk. In order to pass from $\Omega$ into $\tilde{\Omega}$ we use the pseudo-potential defined in \eqref{pseudo-potential} and \eqref{x-y-W}. Moreover, as one can see in fig. \ref{disk} (left), the interface $z(\alpha)$ is sent to the line $\psi=0$, thanks to condition \eqref{psi-eul2}, and the point vortex is still a point $(0,\psi_0)$ on the vertical axis due to the oddness of $\phi$. In order to pass from $\tilde{\Omega}$ into the unit disk, see fig. \ref{disk} (right), we use the function $e^{\psi-i\phi}=\rho e^{i\alpha}$ and the point vortex $(0,\psi_0)$ becomes a point $(0,\rho_0)$, which does not depend on the angle $\alpha$. \medskip \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{disk} \caption{The domains $\tilde{\Omega}$ and $D_1$.}\label{disk} \end{figure} \medskip \noindent After this change of variables, we rewrite the equation \eqref{alpha-rho-W} for $W(\alpha,\rho)$, by substituting $\omega=\omega_0\delta((\alpha,\rho)-(0,\rho_0))$, and we have \begin{equation}\label{W_0} W(\alpha,\rho)= \left\{ \begin{array}{rl} 1 & \alpha\neq 0\\[2mm] \displaystyle \exp\left(\frac{\omega_0 e^{-2\tau(\alpha,\rho_0)}}{\rho_0}\right) & \alpha=0,\hspace{0.3cm}\rho_0\in (0,\rho). \end{array}\right. 
\end{equation} \medskip \noindent We immediately point out that in this case the function $W(\alpha,\rho)=W_{\omega_0,\rho_0}=W_0\in\mathbb{R}$ is a constant, and this constant is exactly one when there is no vorticity.\\ \noindent The derivative of the Bernoulli equation \eqref{alpha-rho-DBernoulli} becomes \begin{equation}\label{point-DBernoulli} \frac{\partial}{\partial\alpha}\left(\frac{1}{2}e^{2\tau(\alpha,1)}\right)-p\frac{e^{-\tau(\alpha,1)}\sin(\theta(\alpha,1))}{W_0}+q W_0\frac{\partial}{\partial\alpha}\left(e^{\tau(\alpha,1)}\frac{\partial\theta}{\partial\alpha}\right)=0. \end{equation} \medskip \noindent By integrating with respect to $\alpha$, we get \begin{equation*} \frac{1}{2}e^{2\tau(\alpha,1)}-\frac{p}{W_0}\int_{-\pi}^{\alpha} e^{-\tau(\alpha',1)}\sin(\theta(\alpha',1))\,d\alpha'+q W_0 e^{\tau(\alpha,1)}\frac{\partial\theta}{\partial\alpha}(\alpha,1)=\tilde{\gamma}. \end{equation*} \medskip \noindent In the pure capillarity case the constant is exactly $\frac{1}{2}$. For this reason we take $\tilde{\gamma}=\frac{1}{2}+B$, where $B$ is a perturbation of the Crapper constant, see \cite{OS2001}. We multiply the equation by $e^{-\tau(\alpha,1)}$ and we get a new formulation for the Bernoulli equation. \begin{equation}\label{point-Bernoulli-tau-theta} \begin{split} &\sinh(\tau(\alpha,1))-\frac{p}{W_0} e^{-\tau(\alpha,1)}\left(\int_{-\pi}^{\alpha}e^{-\tau(\alpha',1)}\sin(\theta(\alpha',1))\,d\alpha'{-1}\right)\\[3mm] &+q W_0\frac{\partial\theta(\alpha,1)}{\partial\alpha}-Be^{-\tau(\alpha,1)}=0. \end{split} \end{equation} \medskip \noindent We can solve our problem by finding $2\pi$ periodic functions $\tau(\alpha)$ even and $\theta(\alpha)$ odd, and an even function $\tilde{\omega}(\alpha)$, that satisfy equations \eqref{point-kbc} and \eqref{point-Bernoulli-tau-theta}. In subsection \ref{subsec-change-variables}, we explain the necessary change of variables to fix the domain. We observe that at the interface $z(\alpha)$ we have $\psi(z(\alpha))=0$ and $\rho=1$. Thus, \begin{equation}\label{phi-parametrization} \phi(z(\alpha))=-\alpha\quad\Longrightarrow\quad \nabla\phi(z(\alpha))\cdot\partial_{\alpha}z(\alpha)= -1, \end{equation} \medskip \noindent and from \eqref{pseudo-potential} we get that \eqref{phi-parametrization} can be written as follows \begin{equation}\label{v-tangential} W_0 v(z(\alpha))\cdot\partial_{\alpha}z(\alpha)=-1. \end{equation} \medskip \noindent Since the equation \eqref{v-tangential} has been obtained by using the kinematic boundary condition \eqref{point-kbc}, to solve our problem we will use the Bernoulli equation \eqref{point-Bernoulli-tau-theta} and the equation \eqref{v-tangential}. \bigskip \subsection{Crapper formulation}\label{Crapper formulation} \noindent Our goal is to prove the existence of overhanging waves in the presence of concentrated vorticity, such as a point vortex or a vortex patch (Section \ref{patch}). It is well known that without vorticity ($\omega_0=0$, equivalently $W_0=1$) the authors of \cite{AAW2013} prove the existence of gravity-capillary overhanging waves. If we also remove gravity, there is the classical result of Crapper \cite{Crapper1957}, where the problem is to find a $2\pi$ periodic, analytic function $f_c=\theta_c+i\tau_c$ in the lower half plane which solves the Bernoulli equation \begin{equation}\label{Crapper-Bernoulli} \sinh(\tau_c)+q \frac{\partial\theta_c}{\partial\alpha}=0, \end{equation} \medskip \noindent where $q=\frac{T}{c^2}$.
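\medskip \noindent For later reference, the Bernoulli equation \eqref{point-Bernoulli-tau-theta}, of which \eqref{Crapper-Bernoulli} is the special case $W_0=1$, $p=B=0$ with $\tau=\tau_c$, can be evaluated numerically on trial profiles. The Python sketch below is only an illustration under the assumption that $\tau$ and $\theta$ are sampled on a uniform grid in $\alpha$; all variable names are hypothetical. It uses a spectral derivative for $\partial_\alpha\theta$ and a cumulative trapezoidal rule for the integral term.
\begin{verbatim}
import numpy as np

def bernoulli_residual(tau, theta, alpha, W0, p, q, B):
    """Residual of sinh(tau) - (p/W0) e^{-tau} (int_{-pi}^{alpha} e^{-tau} sin(theta) - 1)
       + q W0 dtheta/dalpha - B e^{-tau} on a uniform periodic grid."""
    n = alpha.size
    h = alpha[1] - alpha[0]
    k = np.fft.fftfreq(n, d=h / (2.0 * np.pi))          # integer wavenumbers
    theta_a = np.real(np.fft.ifft(1j * k * np.fft.fft(theta)))
    integrand = np.exp(-tau) * np.sin(theta)
    cumint = np.concatenate(([0.0], np.cumsum(0.5 * h * (integrand[1:] + integrand[:-1]))))
    return (np.sinh(tau)
            - (p / W0) * np.exp(-tau) * (cumint - 1.0)
            + q * W0 * theta_a
            - B * np.exp(-tau))

# hypothetical trial profiles on a uniform grid
alpha = np.linspace(-np.pi, np.pi, 256, endpoint=False)
theta = 0.1 * np.sin(alpha)      # odd trial profile
tau = 0.1 * np.cos(alpha)        # even trial profile
print(np.abs(bernoulli_residual(tau, theta, alpha, W0=1.0, p=0.0, q=1.0, B=0.0)).max())
\end{verbatim}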
Furthermore, the analyticity of the function $f_c$ implies that $\tau_c$ can be written as the Hilbert transform of $\theta_c$ at the boundary $\rho=1$, so the equation above reduces to an equation in the variable $\theta_c$, \begin{equation}\label{Crapper-Bernoulli-only-theta} \sinh(\mathcal{H}\theta_c)+q \frac{\partial\theta_c}{\partial\alpha}=0. \end{equation} \medskip \noindent This problem admits a family of exact solutions, \begin{equation}\label{Crapper-solutions} f_c(w)=2i\log\left(\frac{1+Ae^{-iw}}{1-Ae^{-iw}}\right), \end{equation} \medskip \noindent where $w=\phi+i\psi$ and in this case $(\phi,\psi)$ are harmonic conjugates. The parameter $A$ is defined in $(-1,1)$, and for $|A|<A_0=0.45467\ldots$ the interface does not have self-intersections. Moreover, by substituting \eqref{Crapper-solutions} into \eqref{Crapper-Bernoulli-only-theta} for $\rho=1$, we get $q=\frac{1+A^2}{1-A^2}$. This implies \begin{equation}\label{surface-tension} T=\frac{1+A^2}{1-A^2}c^2. \end{equation} \medskip \noindent By using \eqref{x-y-Jacobian} in the Crapper case, i.e. with $W=1$, coupled with $\phi=-\alpha$ and $\rho=1$, we get \begin{equation}\label{Crapper-z-derivative} \partial_{\alpha}z^c(\alpha)=- e^{-\tau_c(\alpha)+i\theta_c(\alpha)}. \end{equation} \medskip \noindent We focus on this kind of waves because, for some values of the parameter $A$, these waves are overhanging. \bigskip \subsection{Perturbation of Crapper waves with a point vortex} \noindent In our formulation, the main difference with respect to the Crapper \cite{Crapper1957} waves is in the function $f=\theta+i\tau$, which is not analytic because of the presence of vorticity. The idea is to prove that our solutions are perturbations of the Crapper waves. If we recall the Crapper solution with small gravity but without vorticity, $(\theta_A,\tau_A)$, we know that $f_A=\theta_A+i\tau_A$ is analytic and $(\theta_A,\tau_A)$ satisfy the following relations in both $(\phi,\psi)$ and $(\alpha,\rho)$ variables. \begin{equation} \label{tauCR-thetaCR} \begin{cases} \displaystyle\frac{\partial\theta_A}{\partial\phi}=\frac{\partial\tau_A}{\partial\psi} \\[4mm] \displaystyle\frac{\partial\theta_A}{\partial\psi}=-\frac{\partial\tau_A}{\partial\phi} \end{cases}\Longrightarrow \begin{cases} \displaystyle\frac{\partial\theta_A}{\partial\alpha}=-\rho\frac{\partial\tau_A}{\partial\rho} \\[4mm] \displaystyle\rho\frac{\partial\theta_A}{\partial\rho}=\frac{\partial\tau_A}{\partial\alpha}. \end{cases} \end{equation} \medskip \noindent Moreover, $\tau_A=\mathcal{H}\theta_A$ at the interface. The idea is to write our dependent variables $\tau$ and $\theta$ as the sum of a Crapper part and a small perturbation, due to the small vorticity. So we have \begin{equation}\label{point-tau-theta-perturbations} \tau=\tau_A+\omega_0\tilde{\tau},\quad \theta=\theta_A+\omega_0\tilde{\theta}. \end{equation} \medskip \noindent Then the Bernoulli equation \eqref{point-Bernoulli-tau-theta} reduces to \begin{equation}\label{point-Bernoulli} \begin{split} &\sinh(\mathcal{H}\theta_A+\omega_0\tilde{\tau})-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}}\left(\frac{1}{W_0}\int_{-\pi}^{\alpha}e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}}\sin(\theta_A+\omega_0\tilde{\theta})\,d\alpha'{-1}\right)\\[2mm] &+q\frac{\partial(\theta_A+\omega_0\tilde{\theta})}{\partial\alpha}W_0-Be^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}}=0\hspace{1cm}\textrm{at}\hspace{0.3cm}\rho=1.
\end{split} \end{equation} \medskip \noindent We will show that $\tilde{\tau}$ and $\tilde{\theta}$ are functions of $\theta_A$, so that \eqref{point-Bernoulli} will be an equation in the single variable $\theta_A$. In order to obtain this, we need to use some properties of our problem. We use the incompressibility and vorticity conditions and we get the following relations for $(\tau,\theta)$ \begin{equation}\label{point-theta-tau-quasi-Cauchy-Riemann-1} \left\{\begin{array}{lll} \displaystyle\frac{\partial\theta}{\partial\psi}=-W_0\frac{\partial\tau}{\partial\phi}\\[5mm] \displaystyle\frac{\partial\theta}{\partial\phi}=\frac{\omega_0 e^{-2\tau(0,\psi_0)}\delta((\phi,\psi)-(0,\psi_0))}{W_0}+\frac{1}{W_0}\frac{\partial\tau}{\partial\psi} \end{array}\right. \end{equation} \medskip \noindent By substituting \eqref{point-tau-theta-perturbations} in \eqref{point-theta-tau-quasi-Cauchy-Riemann-1}, we get \begin{equation}\label{point-tilde-tau-theta-quasi-Cauchy-Riemann-1} \left\{\begin{array}{lll} \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\psi}=-W_0\omega_0\frac{\partial\tilde{\tau}}{\partial\phi}-W_0\frac{\partial\tau_A}{\partial\phi}-\frac{\partial\theta_A}{\partial\psi}\\[5mm] \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\phi}=\frac{\omega_0 e^{-2\mathcal{H}\theta_A(0,\psi_0)-2\omega_0\tilde{\tau}(0,\psi_0)}\delta((\phi,\psi)-(0,\psi_0))}{W_0}+\frac{1}{W_0}\left(\omega_0\frac{\partial\tilde{\tau}}{\partial\psi}+\frac{\partial\tau_A}{\partial\psi}\right)-\frac{\partial\theta_A}{\partial\phi} \end{array}\right. \end{equation} \medskip \noindent Combining system \eqref{tauCR-thetaCR} with system \eqref{point-tilde-tau-theta-quasi-Cauchy-Riemann-1}, we obtain \begin{equation}\label{point-tilde-theta-phi-psi} \left\{\begin{array}{lll} \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\psi}=-W_0\omega_0\frac{\partial\tilde{\tau}}{\partial\phi}+(W_0-1)\frac{\partial\theta_A}{\partial\psi}\\[5mm] \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\phi}=\frac{\omega_0 e^{-2\mathcal{H}\theta_A(0,\psi_0)-2\omega_0\tilde{\tau}(0,\psi_0)}\delta((\phi,\psi)-(0,\psi_0))}{W_0}+\frac{\omega_0}{W_0}\frac{\partial\tilde{\tau}}{\partial\psi}+\left(\frac{1}{W_0}-1\right)\frac{\partial\theta_A}{\partial\phi} \end{array}\right.
\end{equation} \medskip \noindent By taking the derivative with respect to $\phi$ in the first equation, the derivative with respect to $\psi$ in the second equation, and then the difference, we get an elliptic equation \begin{equation}\label{point-elliptic-phi-psi} \displaystyle W_0\omega_0\frac{\partial^2\tilde{\tau}}{\partial\phi^2}+\frac{1}{W_0}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\psi^2}+\frac{\omega_0 e^{-2\tau(0,\psi_0)}}{W_0}\frac{\partial}{\partial\psi}\delta(\phi,\psi-\psi_0) +\left(\frac{1-W_0^2}{W_0}\right)\frac{\partial^2\theta_A}{\partial\phi\partial\psi}=0. \end{equation} \medskip \noindent We can repeat the computations \eqref{point-theta-tau-quasi-Cauchy-Riemann-1}, \eqref{point-tilde-tau-theta-quasi-Cauchy-Riemann-1} and \eqref{point-tilde-theta-phi-psi} also in the variables $(\alpha,\rho)$, and the elliptic equation is the following \begin{equation*}\label{point-elliptic-alpha-rho} \begin{split} &\frac{\rho}{W_0}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\rho^2}+\frac{1}{W_0}\omega_0\frac{\partial\tilde{\tau}}{\partial\rho}+\frac{W_0}{\rho}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\alpha^2}+\frac{\partial}{\partial\rho}\left(\frac{\omega_0 e^{-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau}}}{W_0}\right)+\left(W_0-\frac{1}{W_0}\right)\frac{\partial^2\theta_A}{\partial\alpha\partial\rho}=0. \end{split} \end{equation*} \medskip \noindent Once we solve the elliptic equation, we have a solution $\tilde{\tau}$ as a function of $\theta_A$ and, thanks to the relations \eqref{point-tilde-theta-phi-psi}, also $\tilde{\theta}$ is a function of $\theta_A$. \subsection{The elliptic problem} \noindent In this section we want to show how to solve the elliptic problem. For simplicity, we will study the problem in the $(\phi,\psi)$ coordinates; thus, from \eqref{point-elliptic-phi-psi}, the equation is \begin{equation*}\label{point-elliptic-phi-psi-system} \displaystyle W_0\omega_0\frac{\partial^2\tilde{\tau}}{\partial\phi^2}+\frac{1}{W_0}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\psi^2}+\frac{\omega_0 e^{-2\tau(0,\psi_0)}}{W_0}\frac{\partial}{\partial\psi}\delta(\phi,\psi-\psi_0) +\left(\frac{1-W_0^2}{W_0}\right)\frac{\partial^2\theta_A}{\partial\phi\partial\psi}=0. \end{equation*} \medskip \noindent The equation above is a linear elliptic equation with constant coefficients $\displaystyle W_0, \frac{1}{W_0}$. If we perform a change of variables we obtain a Poisson equation. Specifically, if we define $\phi=W_0\phi'$, then we have \begin{equation*} \begin{split} &\frac{\partial f}{\partial\phi'}(\phi',\psi)=\frac{\partial f}{\partial\phi}\frac{\partial\phi}{\partial\phi'}=W_0\frac{\partial f}{\partial\phi}\Longrightarrow\frac{\partial f}{\partial\phi}=\frac{1}{W_0}\frac{\partial f}{\partial\phi'}\\[3mm] &\frac{\partial^2 f}{\partial\phi^2}=\frac{1}{W_0^2}\frac{\partial^2 f}{\partial\phi'^2}. \end{split} \end{equation*} \medskip \noindent The domain becomes $\tilde{\Omega'}=\{(\phi',\psi):-\frac{\pi}{W_0}<\phi'<\frac{\pi}{W_0},\ -\infty<\psi<0\}$.
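\medskip \noindent Note that, under the rescaling $\phi=W_0\phi'$, the operator transforms as $W_0\partial^2_{\phi}+\frac{1}{W_0}\partial^2_{\psi}=\frac{1}{W_0}\left(\partial^2_{\phi'}+\partial^2_{\psi}\right)$, so that, up to the harmless factor $1/W_0$, the principal part becomes the Laplacian. The following Python sketch is a purely illustrative finite-difference check of this identity on a smooth test function (the test function and the value of $W_0$ are arbitrary choices, not data from the problem).
\begin{verbatim}
import numpy as np

W0 = 1.3
f = lambda phi, psi: np.sin(2.0 * phi) * np.exp(psi)    # smooth test function

phi0, psi0, h = 0.4, -1.0, 1e-4
# anisotropic operator in the original variable phi
f_pp = (f(phi0 + h, psi0) - 2 * f(phi0, psi0) + f(phi0 - h, psi0)) / h**2
f_ss = (f(phi0, psi0 + h) - 2 * f(phi0, psi0) + f(phi0, psi0 - h)) / h**2
lhs = W0 * f_pp + f_ss / W0

# Laplacian in the rescaled variable phi' = phi / W0, i.e. g(phi', psi) = f(W0 phi', psi)
g = lambda phip, psi: f(W0 * phip, psi)
phip0 = phi0 / W0
g_pp = (g(phip0 + h, psi0) - 2 * g(phip0, psi0) + g(phip0 - h, psi0)) / h**2
rhs = (g_pp + f_ss) / W0

print(lhs, rhs)   # the two values agree up to discretization error
\end{verbatim}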
By substituting in \eqref{point-elliptic-phi-psi}, we have \begin{equation}\label{point-elliptic-phi-psi-NEW} \displaystyle \omega_0\frac{\partial^2\tilde{\tau}}{\partial\phi'^2}+\omega_0\frac{\partial^2\tilde{\tau}}{\partial\psi^2}+\omega_0 e^{-2\tau(0,\psi_0)}\frac{\partial}{\partial\psi}\delta(\phi,\psi-\psi_0) +\left(\frac{1-W_0^2}{W_0}\right)\frac{\partial^2\theta_A}{\partial\phi'\partial\psi}=0. \end{equation} \medskip \noindent Since we are looking for $\tau\in H^2$, we also have $\tilde{\tau}\in H^2$, so that its Laplacian is in $L^2(\tilde{\Omega'})$; then, by elliptic theory, there exists a weak solution and so we can invert the Laplace operator, see \cite[Theorem 9.25]{Brezis}. We have \begin{equation}\label{point-tilde-tau} \omega_0\tilde{\tau}=\left(-\omega_0 e^{-2\tau(0,\psi_0)}\frac{\partial}{\partial\psi}\delta(\phi,\psi-\psi_0) -\left(\frac{1-W_0^2}{W_0}\right)\frac{\partial^2\theta_A}{\partial\phi'\partial\psi}\right)*G_2(\phi',\psi), \end{equation} \medskip \noindent where $G_2$ is the Green function of the Poisson equation in $\tilde{\Omega'}$. \subsection{Existence of gravity rotational perturbed Crapper waves}\label{point-existence-Crapper} \noindent The main theorem we want to prove is the following \begin{theorem}\label{point-existence} Let us consider the water waves problem \eqref{v-euler}, with a small point vortex and a small gravity $g$. Then, for some values of $|A|<A_0$, with $A$ defined in \eqref{Crapper-solutions}, there exist periodic solutions to \eqref{v-euler} with overhanging profile. \end{theorem} \noindent In order to prove the existence of perturbed rotational Crapper waves we will apply the implicit function theorem around the Crapper solutions. \begin{theorem}[Implicit function theorem]\label{IFT} Let $X, Y, Z$ be Banach spaces and let $\zeta:X\times Y\rightarrow Z$ be a $C^k$ map, with $k\geq 1$. If $\zeta(x_*, y_*)=0$ and $D_x\zeta(x_*, y_*)$ is a bijection from $X$ to $Z$, then there exists $\varepsilon>0$ and a unique $C^k$ map $\chi:Y\rightarrow X$ such that $\chi(y_*)=x_*$ and $\zeta(\chi(y), y)=0$ whenever $\|y-y_*\|_Y\leq\varepsilon$.
\end{theorem} \medskip \noindent The operators that identify the water waves problem with a point vortex are the following \begin{subequations} \label{point-water-waves} \begin{align} \begin{split} &\mathcal{F}_1(\theta_A,\tilde{\omega}; B,p,\omega_0):=\sinh(\mathcal{H}\theta_A+\omega_0\tilde{\tau}(\theta_A))\\[2mm] &\hspace{2cm}-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)}\left(\frac{1}{W_0}\int_{-\pi}^{\alpha}e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)}\sin(\theta_A+\omega_0\tilde{\theta}(\theta_A))\,d\alpha'{-1}\right)\\[2mm] &\hspace{2cm}+q\frac{\partial(\theta_A+\omega_0\tilde{\theta}(\theta_A))}{\partial\alpha}W_0-Be^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)} \end{split}\label{point-water-waves-1}\\[5mm] \begin{split} &\mathcal{F}_2(\theta_A,\tilde{\omega};B, p,\omega_0):= W_0\left(2BR(z(\alpha),\tilde{\omega}(\alpha))\cdot\partial_{\alpha}z(\alpha)+\tilde{\omega}(\alpha)\right.\\[2mm] &\hspace{2cm}\left.+\frac{\omega_0}{\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}\cdot\partial_{\alpha}z(\alpha)\right)+2 \end{split}\label{point-water-waves-2} \end{align} \end{subequations} \medskip \noindent We have that $$(\mathcal{F}_1,\mathcal{F}_2)(\theta_A,\tilde{\omega};B, p,\omega_0): H^{2}_{odd}\times H^{1}_{even}\times\mathbb{R}^{3}\rightarrow H^{1}_{even}\times H^{1}_{even}.$$ \medskip \subsubsection{Proof of Theorem \ref{point-existence}}\label{point-existence-proof} We have to analyze the two operators. First we have to show that the operators are zero when computed at the point $(\theta_c,\tilde{\omega}_c;0,0,0)$. \begin{equation}\label{F1} \mathcal{F}_1(\theta_c,\tilde{\omega}_c;0,0,0)=\sinh(\mathcal{H}\theta_c)+q\frac{\partial\theta_c}{\partial\alpha}=0, \end{equation} \noindent since this is exactly \eqref{Crapper-Bernoulli-only-theta}. \medskip \noindent The second operator, related to the kinematic boundary condition, satisfies \begin{equation}\label{F2} \mathcal{F}_2(\theta_c,\tilde{\omega}_c;0,0,0)= 2BR(z^c(\alpha),\tilde{\omega}_c(\alpha))\cdot\partial_{\alpha}z^c(\alpha)+\tilde{\omega}_c(\alpha)+2=0, \end{equation} \medskip \noindent where $z^c(\alpha)$ is the parametrization of the Crapper interface, and this expression vanishes by construction, see \eqref{phi-parametrization}. \medskip \noindent Now, we compute all the Fr\'echet derivatives. We will take the derivatives with respect to $\theta_A$ and $\tilde{\omega}$, then we will compute them at the point $(\theta_c,\tilde{\omega}_c;0,0,0)$ and we will show their invertibility. For the operator $\mathcal{F}_1$ we observe that $$D_{\tilde{\omega}}\mathcal{F}_1(\theta_c,\tilde{\omega}_c;0,0,0)=0.$$ \medskip \noindent It remains to compute the derivative with respect to $\theta_A$.
\begin{equation*} \begin{split} &D_{\theta_A}\mathcal{F}_1=\left[\frac{d}{d\mu}\mathcal{F}_1(\theta_A+\mu\theta_1,\tilde{\omega};B,p,\omega_0)\right]_{|\mu=0}=\left[\frac{d}{d\mu}\left[\sinh(\mathcal{H}\theta_A+\mu\mathcal{H}\theta_1+\omega_0\tilde{\tau}(\theta_A+\mu\theta_1))\right.\right.\\[3mm] &-p e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\left(\frac{1}{W_0}\int_{-\pi}^{\alpha}e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\sin(\theta_A+\mu\theta_1+\omega_0 \tilde{\theta})\,d\alpha'{-1}\right)\\[2mm] &\left.\left.+q\frac{\partial(\theta_A+\mu\theta_1+\omega_0\tilde{\theta})}{\partial\alpha}W_0-Be^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\right]\right]_{|\mu=0}=\\[3mm] &=\cosh(\mathcal{H}\theta_A+\omega_0\tilde{\tau}(\theta_A))\left(\mathcal{H}\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\tau}(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\\[3mm] &-p\left[\frac{d}{d\mu}\left[e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\cdot\right.\right.\\[3mm] &\hspace{2cm}\left.\left.\cdot\left(\frac{1}{W_0}\int_{-\pi}^{\alpha}e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\sin(\theta_A+\mu\theta_1+\omega_0 \tilde{\theta})\,d\alpha'{-1}\right)\right]\right]_{|\mu=0}\\[3mm] &+qW_0\frac{\partial\theta_1}{\partial\alpha}-B\left[\frac{d}{d\mu}\left[e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}(\theta_A+\mu\theta_1)}\right]\right]_{|\mu=0} \end{split} \end{equation*} \medskip \noindent In order to compute $\frac{d}{d\mu}\tilde{\tau}$ we refer to \eqref{point-tilde-tau}: since it is multiplied by $\omega_0$, which we will take equal to zero, it will disappear, as well as the terms multiplied by $p$ and $B$. Thus the Fr\'echet derivative computed at $(\theta_c,\tilde{\omega}_c;0,0,0)$ is \begin{equation}\label{D-theta-F1} D_{\theta_A}\mathcal{F}_1(\theta_c,\tilde{\omega}_c;0,0,0)=\cosh(\mathcal{H}\theta_c)\mathcal{H}\theta_1+q\frac{\partial\theta_1}{\partial\alpha}. \end{equation} \bigskip \noindent The Fr\'echet derivative of $\mathcal{F}_2$ with respect to $\theta_A$ can be obtained by substituting the definition of the interface $z(\alpha)$ into the operator. Indeed, from the equations \eqref{x-y-Jacobian}, we get \begin{equation}\label{interface-derivative} \left\{\begin{array}{lll} \displaystyle\frac{\partial z_1}{\partial\alpha}=-\frac{e^{-\tau(\alpha,1)}\cos(\theta(\alpha,1))}{W(\alpha,1)}\\[4mm] \displaystyle\frac{\partial z_2}{\partial\alpha}=-\frac{e^{-\tau(\alpha,1)}\sin(\theta(\alpha,1))}{W(\alpha,1)}. \end{array}\right. \end{equation} \medskip \noindent By substituting the value of $W(\alpha,1)$ for the point vortex and by rewriting $\tau, \theta$ as the sum of a Crapper part and a perturbation, we have \begin{equation}\label{point-interface} \left\{\begin{array}{lll} \displaystyle z_1(\alpha)=-\frac{1}{W_0}\int_{-\pi}^{\alpha} e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)}\cos(\theta_A+\omega_0 \tilde{\theta}(\theta_A))\,d\alpha'\\[4mm] \displaystyle z_2(\alpha)=-\frac{1}{W_0}\int_{-\pi}^{\alpha} e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)}\sin(\theta_A+\omega_0 \tilde{\theta}(\theta_A))\,d\alpha'-1 \end{array}\right.
\end{equation} \medskip \noindent In a compact way, the interface $z(\alpha)$ is \begin{equation}\label{point-z} \displaystyle z(\alpha)=-\frac{1}{W_0}\int_{-\pi}^{\alpha} e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}(\theta_A)+i(\theta_A+\omega_0\tilde{\theta}(\theta_A))}\,d\alpha' -e_2. \end{equation} \medskip \noindent The main Fr\'echet derivative for the operator $\mathcal{F}_2$ is with respect to $\tilde{\omega}$. \begin{equation*} \begin{split} &D_{\tilde{\omega}}\mathcal{F}_2=\left[\frac{d}{d\mu}\mathcal{F}_2(\theta_A,\tilde{\omega}+\mu\omega_1;B,p,\omega_0)\right]_{|\mu=0}=\left[\frac{d}{d\mu}\left[2 W_0 BR(z(\alpha),\tilde{\omega}(\alpha)+\mu\omega_1)\cdot\partial_{\alpha}z(\alpha)\right.\right.\\[3mm] &\left.\left.+W_0 (\tilde{\omega}(\alpha)+\mu\omega_1(\alpha))+W_0\frac{\omega_0}{\pi}\frac{(z_2(\alpha),-z_1(\alpha))}{|z(\alpha)|^2}\cdot\partial_{\alpha}z(\alpha)+2\right]\right]_{|\mu=0}\\[3mm] &=2W_0 \textrm{P.V.}\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(z(\alpha)-z(\alpha'))^{\perp}}{|z(\alpha)-z(\alpha')|^2}\cdot \omega_1(\alpha')\,d\alpha'\cdot \partial_{\alpha}z(\alpha)+W_0\omega_1(\alpha). \end{split} \end{equation*} \medskip \noindent When we compute this derivative at the point $(\theta_c,\tilde{\omega}_c;0,0,0)$ we get \begin{equation}\label{D-omega-F2} D_{\tilde{\omega}}\mathcal{F}_2(\theta_c,\tilde{\omega}_c;0,0,0)=2BR(z^c(\alpha),\omega_1(\alpha))\cdot \partial_{\alpha}z^c(\alpha)+\omega_1(\alpha), \end{equation} \medskip \noindent where $z^c(\alpha)$ is the parametrization of the Crapper interface coming from \eqref{Crapper-z-derivative}. \bigskip \noindent The final step of this proof is to show the invertibility of the matrix of Fr\'echet derivatives, which acts as follows \begin{equation}\label{point-frechet-derivative} D\mathcal{F}(\theta_c,\tilde{\omega}_c;0,0,0) \begin{pmatrix} \theta_1\\ \omega_1 \end{pmatrix}= \begin{pmatrix} D_{\theta_A}\mathcal{F}_1 & 0 \\ D_{\theta_A}\mathcal{F}_2 & D_{\tilde{\omega}}\mathcal{F}_2 \end{pmatrix} \begin{pmatrix} \theta_1\\ \omega_1 \end{pmatrix}= \begin{pmatrix} \Gamma & 0 \\ D_{\theta_A}\mathcal{F}_2 & \mathcal{A}(z^c(\alpha))+\mathcal{I} \end{pmatrix} \begin{pmatrix} \theta_1\\ \omega_1 \end{pmatrix} \end{equation} \medskip \noindent where \begin{align*} &\Gamma\theta_1=\cosh(\mathcal{H}\theta_c)\mathcal{H}\theta_1+q\frac{d}{d\alpha}\theta_1\\[3mm] &(\mathcal{A}(z^c(\alpha))+\mathcal{I})\omega_1=2BR(z^c(\alpha),\omega_1)\cdot\partial_{\alpha}z^c(\alpha)+\omega_1. \end{align*} \medskip \noindent The invertibility of \eqref{point-frechet-derivative} is related to the invertibility of the diagonal entries, since the matrix is triangular. Hence we have to analyze the invertibility of the operators $\Gamma$ and $\mathcal{A}+\mathcal{I}$, where $\mathcal{I}$ stands for the identity operator. Below, we summarize the properties of the operator $\Gamma$; for details, see \cite{AAW2013} and \cite{CEG2016}. \begin{lemma} The operator $$D_{\theta_A}\mathcal{F}_1(\theta_c, \tilde{\omega}_c;0,0,0)=\cosh(\mathcal{H}\theta_c)\mathcal{H}\theta_1+q\frac{d}{d\alpha}\theta_1=\Gamma\theta_1,$$ defined as $\Gamma: H^{1}_{odd}\rightarrow L^{2}_{even}$, is injective. \end{lemma} \begin{proof} The injectivity follows from the fact that $\Gamma\theta_1=0$ if and only if $\theta_1$ is a multiple of $\displaystyle\frac{d\theta_c}{d\alpha}$, see \cite[Lemma 2.1]{OS2001}. Moreover, since $\theta_c$ is an odd function, $\displaystyle\frac{d\theta_c}{d\alpha}$ is even, while $\theta_1\in H^{1}_{odd}$; hence the only solution of $\Gamma\theta_1=0$ in $H^{1}_{odd}$ is the trivial one. \end{proof} \noindent The problem concerning the invertibility of this operator is related to its surjectivity.
\begin{lemma}\label{DF1-invertible} Let $f\in L^{2}_{even}$. Then there exists $\theta_1\in H^{1}_{odd}$ with $\Gamma\theta_1=f$ if and only if $$(f,\cos\theta_c)=\int_{-\pi}^{\pi} f(\alpha)\cos\theta_c(\alpha)\,d\alpha=0.$$ \end{lemma} \begin{proof} The complete proof can be found in \cite[Proposition 3.3]{AAW2013}. Here we show that $\cos\theta_c$ is orthogonal to the range of $\Gamma$.\\ \noindent If we consider the operator $\mathcal{F}_1$ with $(p,\omega_0,B)=(0,0,0)$, we have \begin{align*} &\int_{-\pi}^{\pi}\mathcal{F}_1\cos\theta\,d\alpha=\int_{-\pi}^{\pi}\left(\sinh\mathcal{H}\theta + q\frac{d\theta}{d\alpha}\right)\cos\theta\,d\alpha=0, \end{align*} \noindent because the second term is $q$ times $\sin\theta$ evaluated at the endpoints of the interval $(-\pi,\pi)$, which vanishes by periodicity, and the first term is $0$ because of the Cauchy integral theorem. In particular, if we take the derivative with respect to $\theta$ in the direction $\theta_1$ and we compute it at $\theta_c$, we get \begin{align*} &\int_{-\pi}^{\pi}\Gamma\theta_1\cos\theta_c\,d\alpha -\int_{-\pi}^{\pi} \mathcal{F}_1\,\theta_1\sin\theta_c\,d\alpha=\int_{-\pi}^{\pi}\Gamma\theta_1\cos\theta_c\,d\alpha-\int_{-\pi}^{\pi}\left(q\frac{d\theta_c}{d\alpha}+\sinh\mathcal{H}\theta_c\right)\theta_1\sin\theta_c\,d\alpha=0, \end{align*} \noindent and since the quantity in the brackets is $0$ by \eqref{Crapper-Bernoulli}, it follows that \begin{equation}\label{Gamma-coseno} \int_{-\pi}^{\pi}\Gamma\theta_1\cos\theta_c\,d\alpha=0. \end{equation} \end{proof} \bigskip \noindent For the operator $\mathcal{A}+\mathcal{I}$ we have the following result, proved in \cite{CCG2011}. \begin{lemma}\label{DF2-invertible} Let $z\in H^{3}$ be a curve without self-intersections. Then $$\mathcal{A}(z)\omega=2 BR(z,\omega)\cdot\partial_{\alpha}z$$ \medskip \noindent defines a compact linear operator $$\mathcal{A}(z): H^{1}\rightarrow H^{1}$$ \medskip \noindent whose eigenvalues are strictly smaller than $1$ in absolute value. In particular, the operator $\mathcal{A}+\mathcal{I}$ is invertible. \end{lemma} \medskip \noindent In conclusion, the system \begin{equation} \begin{pmatrix} \Gamma & 0 \\ D_{\theta_A}\mathcal{F}_2 & D_{\tilde{\omega}}\mathcal{F}_2 \end{pmatrix}\cdot \begin{pmatrix} \theta_1\\ \omega_1 \end{pmatrix}= \begin{pmatrix} f\\ g \end{pmatrix}, \end{equation} \medskip \noindent computed at the point $(\theta_c,\tilde{\omega}_c;0,0,0)$, has a solution if and only if $|A|<A_0$ and $(f,\cos\theta_c)=0$. \bigskip \noindent To prove Theorem \ref{point-existence}, we cannot directly use the implicit function theorem \ref{IFT} since the Fr\'echet derivative $D\mathcal{F}$ is not surjective. Following \cite{AAW2013} and also \cite{CEG2016}, we use an adaptation of the Lyapunov-Schmidt reduction argument. Define \begin{equation*} \Pi\theta_1:=(\cos\theta_c,\theta_1)\frac{\cos\theta_c}{\|\cos\theta_c\|^2_{L^2}}, \end{equation*} \medskip \noindent where $\Pi$ is the $L^2$ projector onto the linear span of $\cos\theta_c$ and from \eqref{Gamma-coseno} we have $\Pi\Gamma=0$. Thus, we define the projector onto $\Gamma(H^{2}_{odd})$ as $\mathcal{I}-\Pi$ and \begin{equation}\label{tilde-mathcal-F} \tilde{\mathcal{F}}=((\mathcal {I}-\Pi)\mathcal{F}_1,\mathcal{F}_2):H^2_{odd}\times H^1_{even}\times\mathbb{R}^3\rightarrow \Gamma(H^2_{odd})\times L^2, \end{equation} \medskip \noindent where $\mathcal{F}=(\mathcal{F}_1,\mathcal{F}_2)$ is defined in \eqref{point-water-waves}.
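\medskip \noindent The projector $\Pi$ and its complement are straightforward to realize on a grid. The Python sketch below is only an illustration with hypothetical discrete data (a synthetic odd profile standing in for $\theta_c$); the $L^2$ inner products are computed with the trapezoidal rule, and the last line checks that $(\mathcal{I}-\Pi)\theta_1$ is orthogonal to $\cos\theta_c$.
\begin{verbatim}
import numpy as np

def l2_inner(f, g, alpha):
    return np.trapz(f * g, alpha)

def project(theta1, cos_thetac, alpha):
    """L^2 projection of theta1 onto the span of cos(theta_c)."""
    coeff = l2_inner(cos_thetac, theta1, alpha) / l2_inner(cos_thetac, cos_thetac, alpha)
    return coeff * cos_thetac

alpha = np.linspace(-np.pi, np.pi, 513)        # include both endpoints for trapz
theta_c = 0.3 * np.sin(alpha)                  # hypothetical odd Crapper-like profile
cos_thetac = np.cos(theta_c)
theta1 = np.sin(2 * alpha) + 0.5 * np.cos(theta_c)

Pi_theta1 = project(theta1, cos_thetac, alpha)
residual = theta1 - Pi_theta1                  # (I - Pi) theta1
print(l2_inner(residual, cos_thetac, alpha))   # ~ 0: orthogonal to cos(theta_c)
\end{verbatim}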
The Fr\'echet derivative of \eqref{tilde-mathcal-F} in $(\theta_A,\tilde{\omega})$ at the Crapper point $(\theta_c,\tilde{\omega}_c;0,0,0)$ is now invertible. So we can apply the implicit function theorem to $\tilde{\mathcal{F}}$: there exists a smooth function $\Theta_c:U_{B,p,\omega_0}\rightarrow H^2_{odd}\times H^1_{even}$, where $U_{B,p,\omega_0}$ is a small neighborhood of $(0,0,0)$, such that $\Theta_c(0,0,0)=(\theta_c,\tilde{\omega}_c)$ and, for all $(B,p,\omega_0)\in U_{B,p,\omega_0}$, $$\tilde{\mathcal{F}}(\Theta_c(B,p,\omega_0);B,p,\omega_0)=0.$$ \medskip \noindent But now, if we consider $\mathcal{F}(\Theta_c(B,p,\omega_0);B,p,\omega_0)$, defined in \eqref{point-water-waves}, then it need not be $0$. So we introduce a differentiable function on $U_{B,p,\omega_0}$: $$f(B;p,\omega_0)=(\cos\theta_c,\mathcal{F}_1(\Theta_c(B,p,\omega_0);B,p,\omega_0)).$$ \medskip \noindent We have that $\Pi\mathcal{F}_1=f(B;p,\omega_0)\frac{\cos\theta_c}{\|\cos\theta_c\|^2_{L^2}}$ and, if we find a point $(B^*;p^*,\omega_0^*)$ such that $f(B^*;p^*,\omega_0^*)=0$, then $\mathcal{F}_1(\Theta_c(B^*,p^*,\omega_0^*);B^*,p^*,\omega_0^*)=0$ and so our problem is solved.\\ \noindent We note that, choosing $(B^*;p^*,\omega_0^*)=(0,0,0)$, we have $f(0;0,0)=0$. Its derivative with respect to $B$ is \begin{equation*} D_B f(0;0,0)=\left(\cos\theta_c,\Gamma\partial_B\Theta_c-e^{-\mathcal{H}\theta_c}\right)=-\left(\cos\theta_c,e^{-\mathcal{H}\theta_c}\right)=-2\pi, \end{equation*} \medskip \noindent where we have used \eqref{Gamma-coseno} and the Cauchy integral theorem. Hence, we can apply the implicit function theorem \ref{IFT} to the function $f$ and there exists a smooth function $B^*(p,\omega_0)$ that satisfies $f(B^*(p,\omega_0);p,\omega_0)=0$, for $(p,\omega_0)$ in $U_{p,\omega_0}$, a small neighborhood of $(0,0)$. \\ \noindent We can summarize these results in the following theorem. \begin{theorem} Let $|A|<A_0$. There exist neighborhoods $U_{p,\omega_0}$ of $(0,0)$ and $U_B$ of $0$, a unique smooth function $B^*:U_{p,\omega_0}\rightarrow U_B$ such that $B^*(0,0)=0$, and a unique smooth function $$\Theta_c: U_{B,p,\omega_0}\rightarrow H^2_{odd}\times H^1_{even},$$ such that $\Theta_c(0,0,0)=(\theta_c,\tilde{\omega}_c)$ and $$\mathcal{F}(\Theta_c(B^*(p,\omega_0), p,\omega_0);B^*(p,\omega_0),p,\omega_0)=0.$$ \end{theorem} \medskip \noindent The main Theorem \ref{point-existence} is a direct consequence of the theorem above. \bigskip \section{The vortex patch case}\label{patch} \subsection{Framework} \noindent We consider a patch of vorticity $\omega(x,y)=\omega_0\chi_{D}(x,y)$, where $\omega_0\in\mathbb{R}$ and $\chi_{D}$ is the indicator function of the vortex domain $D$ near the origin, symmetric with respect to the $y$-axis, satisfying \begin{equation}\label{small-distance-cond} \textrm{dist}((x,y),(0,0))\ll 1, \quad\quad \forall (x,y)\in \partial D, \end{equation} \noindent which amounts to considering a small vortex patch. In this case, as in the previous one, the fluid is incompressible, so we introduce a stream function, which is the sum of a harmonic part $\psi_H$, defined in \eqref{harmonic-point-stream}, and a part related to the vortex patch \begin{equation}\label{patch-stream-function} \begin{split} \psi_{VP}(x,y)&=\frac{\omega_0}{2\pi}\int_{D}\log|(x,y)-(x',y')|\,dx'\,dy'. \end{split} \end{equation} \medskip \noindent We introduce the parametrization of the boundary $\partial D=\{\gamma(\alpha),\alpha\in [-\pi,\pi]\}$, which for now is a generic parametrization satisfying the condition \eqref{small-distance-cond}.
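\medskip \noindent For a concrete picture, the stream function \eqref{patch-stream-function} generated by a small patch can be approximated by a two-dimensional quadrature. In the Python sketch below, a small disk of radius $r$ centred at the origin is used as a hypothetical stand-in for $D$ (the actual patch of the construction is different); the integral is computed with a midpoint rule in polar coordinates.
\begin{verbatim}
import numpy as np

def psi_patch(x, y, omega0, r, nr=60, nt=120):
    """Midpoint-rule approximation of (omega0/2pi) int_D log|(x,y)-(x',y')| dx'dy'
    over the disk D of radius r centred at the origin."""
    s = (np.arange(nr) + 0.5) * r / nr              # radial midpoints
    t = (np.arange(nt) + 0.5) * 2.0 * np.pi / nt    # angular midpoints
    S, T = np.meshgrid(s, t, indexing="ij")
    xp, yp = S * np.cos(T), S * np.sin(T)
    dA = S * (r / nr) * (2.0 * np.pi / nt)          # polar area element
    dist = np.hypot(x - xp, y - yp)
    return omega0 / (2.0 * np.pi) * np.sum(np.log(dist) * dA)

# evaluation point well outside the (small) patch
print(psi_patch(1.0, 0.5, omega0=0.2, r=0.05))
\end{verbatim}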
The representation of $D$ in fig. \ref{new-annulus} is just an example, since it depends on the choice of the parametrization. Additionally, we obtain the velocity by taking the orthogonal gradient of the stream function; the velocity associated to \eqref{patch-stream-function} is \begin{equation}\label{patch-velocity} \begin{split} v_{VP}(x,y)&=\frac{\omega_0}{2\pi}\int_{\partial D}\log|(x,y)-(\gamma_1(\alpha'),\gamma_2(\alpha'))|\partial_{\alpha}\gamma(\alpha')\,d\alpha'. \end{split} \end{equation} \medskip \noindent The velocity is the sum of \eqref{patch-velocity} and the orthogonal gradient of the harmonic stream function \eqref{harmonic-point-stream} \begin{equation}\label{velocity-patch} v(x,y)=(-\partial_y\psi_{H}(x,y), \partial_x\psi_H(x,y))+v_{VP}(x,y). \end{equation} \medskip \noindent As in the case of the point vortex, we have to adapt the problem \eqref{v-euler}. One of the conditions is the kinematic boundary condition, so we need the velocity at the interface, \begin{equation} \begin{split} v(z(\alpha))=&BR(z(\alpha),\tilde{\omega}(\alpha))+\frac{1}{2}\frac{\tilde{\omega}(\alpha)}{|\partial_{\alpha}z|^2}\partial_{\alpha}z+\frac{\omega_0}{2\pi}\int_{-\pi}^{\pi}\log|z(\alpha)-\gamma(\alpha')|\,\partial_{\alpha}\gamma(\alpha')\,d\alpha', \end{split} \end{equation} \medskip \noindent where $z(\alpha)$ is the parametrization of the interface $\partial\Omega$, and we can write the kinematic boundary condition as follows \begin{equation}\label{patch-kbc} \begin{split} v(z(\alpha))\cdot(\partial_{\alpha}z(\alpha))^{\perp}&=BR(z(\alpha),\tilde{\omega}(\alpha))\cdot\partial_{\alpha}z(\alpha)^{\perp}\\[3mm] &+\frac{\omega_0}{2\pi}\int_{-\pi}^{\pi}\log|z(\alpha)-\gamma(\alpha')|\,\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot (\partial_{\alpha}z(\alpha))^{\perp}=0. \end{split} \end{equation} \medskip \noindent In the analysis of this case we observe that the stream function associated to the patch satisfies an elliptic equation and that the patch has a moving boundary. We impose the patch to be fixed by the following condition $$v(\gamma(\alpha))\cdot (\partial_{\alpha}\gamma(\alpha))^{\perp}=0.$$ \noindent This is equivalent to requiring \begin{equation}\label{fix-patch} \begin{split} v(\gamma(\alpha))\cdot (\partial_{\alpha}\gamma(\alpha))^{\perp}&=\frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{(\gamma(\alpha)-z(\alpha'))^{\perp}}{|\gamma(\alpha)-z(\alpha')|^2}\cdot\tilde{\omega}(\alpha')\,d\alpha'\cdot \partial_{\alpha}\gamma(\alpha)^{\perp}\\[3mm] &+\frac{\omega_0}{2\pi} P.V. \int_{\partial D}\log|\gamma(\alpha)-\gamma(\alpha')|\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot \partial_{\alpha}\gamma(\alpha)^{\perp}=0. \end{split} \end{equation} \medskip \noindent Furthermore, we need another condition to completely identify our problem. This condition is related to the Bernoulli equation \eqref{v-euler5}, equivalent to \eqref{psi-eul3}. The most important issue is to fix the interface $\partial\Omega$, but for the vortex patch case we will slightly change the idea presented in subsection \ref{general-vorticity} and used for the point vortex case. To pass from the domain $\Omega(x,y)$ into $\tilde{\Omega}(\phi,\psi)$ we will consider an approximate stream function $\tilde{\psi}$ such that \begin{equation}\label{approx-stream} \left\{\begin{array}{lll} \displaystyle\frac{\partial\tilde{\psi}}{\partial x}=W(x,y)\frac{\partial\psi}{\partial x}\\[2mm] \displaystyle\frac{\partial\tilde{\psi}}{\partial y}=W(x,y)\frac{\partial\psi}{\partial y}, \end{array}\right.
\end{equation} \medskip \noindent hence, in this way $(\phi,\tilde{\psi})$ are related through the Cauchy-Riemann equations \begin{equation}\label{potential-approx-stream} \left\{\begin{array}{lll} \displaystyle\frac{\partial\phi}{\partial x}=\frac{\partial\tilde{\psi}}{\partial y}=W v_1\\[2mm] \displaystyle\frac{\partial\phi}{\partial y}=-\frac{\partial\tilde{\psi}}{\partial x}=W v_2, \end{array}\right. \end{equation} \medskip \noindent where $W(x,y)$ has to satisfy \eqref{x-y-W}; it is equal to $1$ in the case of an irrotational fluid, and $\Delta\tilde{\psi}=0$. Moreover, we point out that the new domain $\tilde{\Omega}(\phi,\tilde{\psi})$ is also the lower half plane, see fig. \ref{new-annulus}, because the approximate stream function satisfies $\tilde{\psi}(z(\alpha))=0$, due to the kinematic boundary condition \eqref{v-euler4} and the positivity of $W$.\\ \noindent In view of the fact that we will use the new coordinate system $(\phi,\tilde{\psi})$, we have to rewrite \eqref{x-y-Jacobian} and \eqref{phi-psi-DBernoulli}. Let us start by writing the relation between $(x,y)$ and $(\phi,\tilde{\psi})$, so the system \eqref{x-y-Jacobian} becomes \begin{equation}\label{x-y-phi-tilde_psi} \begin{pmatrix} \displaystyle\frac{\partial x}{\partial\phi} & \displaystyle\frac{\partial x}{\partial\tilde{\psi}}\\ \displaystyle\frac{\partial y}{\partial\phi} & \displaystyle\frac{\partial y}{\partial\tilde{\psi}}\\ \end{pmatrix} =\frac{1}{W(v_1^2+v_2^2)} \begin{pmatrix} v_1 & - v_2\\ v_2 & v_1 \end{pmatrix}. \end{equation} \medskip \noindent Now, we have to rewrite the Bernoulli equation \eqref{psi-eul3} in the new coordinates. We differentiate \eqref{psi-eul3} with respect to $\phi$, so that the constant on the RHS disappears. Thus we have \begin{equation}\label{DBernoulli-patch2} \frac{1}{2}\frac{\partial}{\partial\phi}\left(v_1^2+v_2^2\right)+p\frac{v_2}{W(v_1^2+v_2^2)}-q\frac{\partial}{\partial\phi}\left(\frac{W}{\sqrt{v_1^2+v_2^2}}\left(v_1\frac{\partial v_2}{\partial\phi}-v_2\frac{\partial v_1}{\partial\phi}\right)\right)=0. \end{equation} \medskip \noindent Finally, it is natural to bring the equations into a disk, because of the periodicity of the problem. As we did for the point vortex, we define \begin{equation}\label{patch-annulus} \left\{\begin{array}{lll} \phi=-\alpha\\[2mm] \tilde{\psi}=\log\rho, \end{array}\right. \end{equation} \medskip \noindent and in this way we pass from $\tilde{\Omega}(\phi,\tilde{\psi})$ into the unit disk, see fig. \ref{new-annulus}. We observe that the patch has been chosen symmetric with respect to the vertical axis, so in the coordinates $(\phi,\tilde{\psi})$ it remains symmetric with respect to the $\tilde{\psi}-$axis (due to the symmetry of the functions \eqref{symmetry}).
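\medskip \noindent The change of variables \eqref{patch-annulus} is easy to implement: a point $(\phi,\tilde{\psi})$ of the half-plane corresponds to the point $\rho e^{i\alpha}$ of the unit disk with $\rho=e^{\tilde{\psi}}$ and $\alpha=-\phi$. A minimal Python sketch (illustrative only; the curve used as an example of a patch boundary is hypothetical) mapping a curve from $(\phi,\tilde{\psi})$ coordinates into the disk reads as follows.
\begin{verbatim}
import numpy as np

def strip_to_disk(phi, psi_tilde):
    """Map (phi, psi_tilde) in (-pi, pi) x (-inf, 0) to the unit disk via
    rho = exp(psi_tilde), alpha = -phi; returns Cartesian coordinates."""
    rho = np.exp(psi_tilde)
    alpha = -phi
    return rho * np.cos(alpha), rho * np.sin(alpha)

# hypothetical small patch boundary in (phi, psi_tilde): a circle around (0, -1)
t = np.linspace(0.0, 2.0 * np.pi, 200)
phi, psi_tilde = 0.05 * np.cos(t), -1.0 + 0.05 * np.sin(t)
xd, yd = strip_to_disk(phi, psi_tilde)
print(xd[:3], yd[:3])   # points clustered near radius exp(-1), symmetric in alpha
\end{verbatim}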
In the coordinates $(\alpha,\rho)$, by using \eqref{patch-annulus}, it will be symmetric with respect to the horizontal axis and contained in a circular sector, where $\pm\alpha_1$ are defined through $\pm\phi_1$, in such a way that $$\textrm{dist}((\pm\phi_1,\tilde{\psi}_1),(0,\tilde{\psi}))>\textrm{dist}((\phi,\tilde{\psi}),(0,\tilde{\psi})), \quad \forall (\phi,\tilde{\psi})\in \partial{\tilde{D}}.$$ \medskip \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{patch-domains} \caption{The transformation of the patch $D(x,y)$, $\tilde{D}(\phi,\tilde{\psi})$ and $\mathcal{A}$.}\label{new-annulus} \end{figure} \medskip \noindent In addition, by using the dependent variables $(\tau,\theta)$, defined in \eqref{tau-theta}, we write the equation $\Delta\tilde{\psi}=0$ in the new coordinates $(\phi,\tilde{\psi})$, by using the relations \eqref{approx-stream} and \eqref{potential-approx-stream}, and we get an equation for $W(\phi,\tilde{\psi})$ $$\frac{\partial W}{\partial\tilde{\psi}}(v_1^2+v_2^2)-\omega_0\chi_{\tilde{D}}=0\quad\Rightarrow\quad W(\phi,\tilde{\psi})-W\left(\phi,-\infty\right) = \omega_0\int_{-\infty}^{\tilde{\psi}} e^{-2\tau(\phi,\tilde{\psi'})}\chi_{\tilde{D}}(\phi,\tilde{\psi'})\,d\tilde{\psi'}. $$ \medskip \noindent Since the value of $W$ at infinity is $1$, then in the variables $(\phi,\tilde{\psi})$, we have \begin{equation}\label{W-phi-tilde-psi} W(\phi,\tilde{\psi})=1+\omega_0\int_{-\infty}^{\tilde{\psi}} e^{-2\tau(\phi,\tilde{\psi'})}\chi_{\tilde{D}}(\phi,\tilde{\psi'})\,d\tilde{\psi'}. \end{equation} \medskip \noindent By using the change of variables \eqref{patch-annulus}, we have \begin{equation}\label{patch-W-alpha-rho} W(\alpha,\rho)=1+\omega_0\int_{0}^{\rho} \frac{e^{-2\tau(\alpha,\rho')}}{\rho'}\chi_{\mathcal{A}}(\alpha,\rho')\,d\rho'. \end{equation} \medskip \noindent Concerning the derivative of the Bernoulli equation \eqref{DBernoulli-patch2}, we get \begin{equation}\label{DBernoulli-patch3} \frac{\partial}{\partial\alpha}\left( \frac{1}{2} e^{2\tau}\right)-p\frac{e^{-\tau}\sin\theta}{W}+q\frac{\partial}{\partial\alpha}\left(W e^{\tau}\frac{\partial\theta}{\partial\alpha}\right)=0. \end{equation} \medskip \noindent By integrating with respect to $\alpha$ and taking the constant that appears on the RHS as $\frac{1}{2}+B$, as explained for the point vortex case, we get \begin{equation*} \frac{1}{2} e^{2\tau}-p\left(\int_{-\pi}^{\alpha}\frac{e^{-\tau(\alpha',1)}\sin\theta(\alpha',1)}{W(\alpha',1)}\,d\alpha'-1\right)+q W(\alpha,1) e^{\tau}\frac{\partial\theta}{\partial\alpha}=\frac{1}{2}+B. \end{equation*} \medskip \noindent We multiply by $e^{-\tau}$ and we obtain the equation \begin{equation}\label{patch-Bernoulli-tau-theta} \sinh(\tau(\alpha,1))-p e^{-\tau(\alpha,1)}\left(\int_{-\pi}^{\alpha}\frac{e^{-\tau(\alpha',1)}\sin\theta(\alpha',1)}{W(\alpha',1)}\,d\alpha'-1\right)+q W(\alpha,1) \frac{\partial\theta(\alpha,1)}{\partial\alpha}-Be^{-\tau(\alpha,1)}=0. \end{equation} \medskip \noindent We can solve our problem by finding $2\pi$ periodic functions $\tau(\alpha)$ even and $\theta(\alpha)$ odd, an even function $\tilde{\omega}(\alpha)$ and a curve $\gamma(\alpha)$, the parametrization of the vortex patch, that satisfy \eqref{patch-kbc}, \eqref{fix-patch} and \eqref{patch-Bernoulli-tau-theta}.
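\medskip \noindent The coefficient $W(\alpha,1)$ appearing in \eqref{patch-Bernoulli-tau-theta} only involves the radial integral \eqref{patch-W-alpha-rho} across the patch. A short Python sketch (illustrative only; $\tau$ and the indicator of $\mathcal{A}$ are hypothetical inputs sampled on an $(\alpha,\rho)$ grid) reads:
\begin{verbatim}
import numpy as np

def W_at_interface(alpha, rho, tau, chi_A, omega0):
    """Approximate W(alpha, 1) = 1 + omega0 int_0^1 exp(-2 tau)/rho' chi_A drho'
    by the trapezoidal rule in the radial variable, for each fixed alpha."""
    integrand = np.exp(-2.0 * tau) / rho * chi_A       # shape (n_alpha, n_rho)
    return 1.0 + omega0 * np.trapz(integrand, rho, axis=1)

# hypothetical data: patch occupying 0.3 < rho < 0.4, |alpha| < 0.2, with tau = 0
alpha = np.linspace(-np.pi, np.pi, 128, endpoint=False)
rho = np.linspace(0.05, 1.0, 400)      # the patch is away from rho = 0
A, R = np.meshgrid(alpha, rho, indexing="ij")
chi_A = ((np.abs(A) < 0.2) & (R > 0.3) & (R < 0.4)).astype(float)
tau = np.zeros_like(A)
print(W_at_interface(alpha, rho, tau, chi_A, omega0=0.1)[:5])
\end{verbatim}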
As we did for the point vortex, the kinematic boundary condition \eqref{patch-kbc} can be replaced by \begin{equation}\label{patch-v-tangential} W(\alpha,1) v(z(\alpha))\cdot\partial_{\alpha}z(\alpha)=-1, \end{equation} \medskip \noindent so our problem reduces to analyzing the equations \eqref{fix-patch}, \eqref{patch-Bernoulli-tau-theta} and \eqref{patch-v-tangential}. \bigskip \subsection{Perturbation of the Crapper formulation with a vortex patch} In this section, we want to write our variables as a perturbation of the Crapper variables. First of all, we get a relation between $(\tau,\theta)$ in both the $(\phi,\tilde{\psi})$ and the $(\alpha,\rho)$ variables, by using the vorticity and the divergence-free conditions, \begin{equation}\label{tau-theta-patch} \begin{cases} \displaystyle\frac{\partial\theta}{\partial\phi}=\frac{\omega e^{-2\tau}}{W}+\frac{\partial\tau}{\partial\tilde{\psi}}\\[4mm] \displaystyle\frac{\partial\theta}{\partial\tilde{\psi}}=-\frac{\partial\tau}{\partial\phi} \end{cases}\Longrightarrow \begin{cases} \displaystyle -\frac{\partial\theta}{\partial\alpha}=\frac{\omega e^{-2\tau}}{W}+\rho\frac{\partial\tau}{\partial\rho}\\[4mm] \displaystyle\rho\frac{\partial\theta}{\partial\rho}=\frac{\partial\tau}{\partial\alpha}. \end{cases} \end{equation} \medskip \noindent Once we find the values of $(\tau, \theta)$, we can use the relations \eqref{x-y-phi-tilde_psi} and \eqref{patch-annulus} to obtain the parametrization of the interface \begin{equation}\label{patch-interface-derivative} \left\{\begin{array}{lll} \displaystyle\frac{\partial z_1}{\partial\alpha}=-\frac{e^{-\tau(\alpha,1)}\cos(\theta(\alpha,1))}{W(\alpha,1)}\\[4mm] \displaystyle\frac{\partial z_2}{\partial\alpha}=-\frac{e^{-\tau(\alpha,1)}\sin(\theta(\alpha,1))}{W(\alpha,1)}, \end{array}\right. \end{equation} \medskip \noindent where $W(\alpha,1)$ is defined in \eqref{patch-W-alpha-rho}.\\ \noindent In the case of rotational waves $(\tau,\theta)$ do not satisfy the Cauchy-Riemann equations. For this reason we define $\tau=\tau_A+\omega_0\tilde{\tau}$ and $\theta=\theta_A+\omega_0\tilde{\theta}$, where $(\tau_A,\theta_A)$ is the Crapper solution with small gravity but without vorticity; thus it is incompressible and irrotational and satisfies the Cauchy-Riemann equations in the variables $(\phi,\psi)$, as explained in \eqref{tauCR-thetaCR} \begin{equation*} \left\{\begin{array}{lll} \displaystyle\frac{\partial\theta_A}{\partial\phi}=\frac{\partial\tau_A}{\partial{\psi}}\\[4mm] \displaystyle\frac{\partial\theta_A}{\partial{\psi}}=-\frac{\partial\tau_A}{\partial\phi}. \end{array}\right. \end{equation*} \medskip \noindent This implies that on the interface $\mathcal{S}$, i.e. $\psi=0$, one variable can be written as the Hilbert transform of the other, $\tau_A=\mathcal{H}\theta_A$. Hence, in the $(\phi,\tilde{\psi})$ variables, we have \begin{equation}\label{Crapper-C-R-patch} \begin{cases} \displaystyle \frac{\partial\theta_A}{\partial\phi}=W\frac{\partial\mathcal{H}\theta_A}{\partial\tilde{\psi}}\\[4mm] \displaystyle\frac{\partial\theta_A}{\partial\tilde{\psi}}=-\frac{1}{W }\frac{\partial\mathcal{H}\theta_A}{\partial\phi} \end{cases}\Longrightarrow \begin{cases} \displaystyle -\frac{\partial\theta_A}{\partial\alpha}=W\rho\frac{\partial\mathcal{H}\theta_A}{\partial\rho}\\[4mm] \displaystyle\rho\frac{\partial\theta_A}{\partial\rho}=\frac{1}{W}\frac{\partial\mathcal{H}\theta_A}{\partial\alpha}.
\end{cases} \end{equation} \medskip \noindent By substituting \eqref{Crapper-C-R-patch} in \eqref{tau-theta-patch}, we have \begin{equation}\label{patch-tilde-tau-theta-quasi-Cauchy-Riemann-1} \left\{\begin{array}{lll} \displaystyle\omega_0\frac{\partial\tilde{\theta}}{\partial\phi}=\left(\frac{1}{W}-1\right)\frac{\partial\theta_A}{\partial\phi}+\frac{\omega_0\chi_{\tilde{D}}e^{-2\tau}}{W}+\omega_0\frac{\partial\tilde{\tau}}{\partial\tilde{\psi}}\\[5mm] \displaystyle \omega_0\frac{\partial\tilde{\theta}}{\partial\tilde{\psi}}=-\omega_0\frac{\partial\tilde{\tau}}{\partial\phi}+(W-1)\frac{\partial\theta_A}{\partial\tilde{\psi}} \end{array}\right. \end{equation} \medskip \begin{center} $\Downarrow$ \end{center} \medskip \begin{equation}\label{patch-tilde-tau-theta-quasi-Cauchy-Riemann-2} \left\{\begin{array}{lll} \displaystyle-\omega_0\frac{\partial\tilde{\theta}}{\partial\alpha}=\left(1-\frac{1}{W}\right)\frac{\partial\theta_A}{\partial\alpha}+\frac{\omega_0\chi_{\tilde{D}}e^{-2\tau}}{W}+\rho\omega_0\frac{\partial\tilde{\tau}}{\partial\rho}\\[5mm] \displaystyle \omega_0\frac{\partial\tilde{\theta}}{\partial\rho}=\frac{1}{\rho}\omega_0\frac{\partial\tilde{\tau}}{\partial\alpha}+\left(W-1\right)\frac{\partial\theta_A}{\partial\rho} \end{array}\right. \end{equation} \medskip \noindent By differentiating each equation with respect to the other variable and taking the difference, we have the following elliptic equation in $(\alpha,\rho)$ \begin{equation}\label{elliptic-alpha-rho-patch} \begin{split} \omega_0\frac{\partial^2\tilde{\tau}}{\partial\rho^2}+\frac{1}{\rho^2}\omega_0\frac{\partial^2\tilde{\tau}}{\partial\alpha^2}+\frac{1}{\rho}\omega_0\frac{\partial\tilde{\tau}}{\partial\rho}=&-\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\frac{\omega_0\chi_{\mathcal{A}}(\alpha,\rho)e^{-2\tau}}{W}\right)+\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\frac{1}{W}\right)\frac{\partial\theta_A}{\partial\alpha}\\[3mm] &-\frac{1}{\rho}\frac{\partial W}{\partial\alpha}\frac{\partial\theta_A}{\partial\rho}+\frac{1}{\rho}\frac{\partial^2\theta_A}{\partial\alpha\partial\rho}\left(\frac{1-W^2}{W}\right). \end{split} \end{equation} \medskip \noindent But we are interested in the elliptic equation in $(\phi,\tilde{\psi})-$coordinates, since it will be easier to study, and we have \begin{equation}\label{elliptic-phi-tilde_psi-patch} \begin{split} \omega_0\Delta\tilde{\tau}=-\frac{\partial}{\partial\tilde{\psi}}\left(\frac{\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi}) e ^{-2\tau}}{W}\right)+\frac{1}{W^2}\frac{\partial W}{\partial\tilde{\psi}}\frac{\partial\theta_A}{\partial\phi}+\frac{\partial W}{\partial\phi}\frac{\partial\theta_A}{\partial\tilde{\psi}}+\left(\frac{W^2-1}{W}\right)\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}. \end{split} \end{equation} \bigskip \noindent We want to find a solution $\tilde{\tau}$ of the elliptic problem \eqref{elliptic-phi-tilde_psi-patch}. First of all, let us rewrite the equation with all the terms made explicit.
\begin{equation*} \begin{split} \Delta\tilde{\tau}(\phi,\tilde{\psi})=&-\frac{\partial}{\partial\tilde{\psi}}\left(\frac{\chi_{\tilde{D}}(\phi,\tilde{\psi}) e ^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\right)\\[4mm] &+\frac{\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}}{\left(1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2}\frac{\partial\theta_A}{\partial\phi}\\[4mm] &+\frac{\partial}{\partial\phi}\left(\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)\frac{\partial\theta_A}{\partial\tilde{\psi}}\\[4mm] &+\frac{2\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}\\[4mm] &+\frac{\omega_0\left(\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}=f((\phi,\tilde{\psi}),\tilde{\tau}). \end{split} \end{equation*} \bigskip \noindent where we use that $\displaystyle\frac{\partial W}{\partial\tilde{\psi}}=\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}.$ Now, we define a solution in the following way \begin{equation}\label{tau-tilde-solution} \tilde{\tau}(\phi,\tilde{\psi})=f((\phi,\tilde{\psi}),\tilde{\tau})*G_2(\phi,\tilde{\psi}), \end{equation} \medskip \noindent where $G_2(\phi,\tilde{\psi})$ is the Green function in the domain $\tilde{\Omega}$. 
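\medskip \noindent In practice, the convolution with $G_2$ amounts to inverting the Laplacian with the appropriate boundary conditions. As a rough numerical stand-in (a sketch only: we truncate the domain to a rectangle and impose homogeneous Dirichlet data, which is an assumption made here for illustration and not the boundary condition analysed above), one can solve $\Delta\tilde{\tau}=f$ with a standard five-point scheme.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def solve_poisson(f, h):
    """Solve Laplacian(u) = f on a rectangle with zero Dirichlet boundary data,
    five-point finite differences on an (n x m) grid of interior nodes."""
    n, m = f.shape
    Tn = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    Tm = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m)) / h**2
    L = kron(Tn, identity(m)) + kron(identity(n), Tm)   # discrete Laplacian
    return spsolve(L.tocsc(), f.ravel()).reshape(n, m)

# hypothetical right-hand side concentrated near the (truncated) patch location
h = 0.02
x = np.arange(1, 100) * h
y = -np.arange(1, 100) * h
X, Y = np.meshgrid(x, y, indexing="ij")
f = np.exp(-((X - 1.0)**2 + (Y + 1.0)**2) / 0.01)
u = solve_poisson(f, h)
print(u.shape, float(np.abs(u).max()))
\end{verbatim}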
We will show that \eqref{tau-tilde-solution} solves the elliptic equation, thanks to the smallness of the parameters involved.\\ \noindent If we use the properties of commutativity and differentiation of the convolution; the integration by parts with the fact that $W(\pm\pi,\tilde{\psi}')=1$, then we are able to eliminate the derivative of $\tilde{\tau}$ and we have \begin{equation}\label{patch-tilde-tau-omega0} \begin{split} \tilde{\tau}(\phi,\tilde{\psi})=&-\frac{\chi(\phi,\tilde{\psi}) e ^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}*\frac{\partial}{\partial\tilde{\psi}} G_2(\phi,\tilde{\psi})\\[4mm] &+\frac{\chi(\phi,\tilde{\psi}) e ^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi})}}{\left(1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2}\cdot\frac{\partial\theta_A}{\partial\phi}*G_2\\[4mm] &-\frac{\partial G_2}{\partial\phi}*\frac{\partial\theta_A}{\partial\tilde{\psi}}\cdot\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi'})e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi'})}\,d\tilde{\psi}'\\[4mm] &-G_2*\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}\cdot\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi'})e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi'})}\,d\tilde{\psi}'\\[4mm] &+\frac{2\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\cdot \frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2\\[4mm] &+\frac{\omega_0\left(\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2}{1+\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{(-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau})(\phi,\tilde{\psi}')}\,d\tilde{\psi}'}\cdot \frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2 \end{split} \end{equation} \medskip \noindent Since we are looking for a solution with small $\omega_0$, we rewrite \eqref{patch-tilde-tau-omega0} around $\omega_0=0$, we write just the first order \begin{equation}\label{approxim-tilde-tau} \begin{split} \tilde{\tau}&=-\chi(\phi,\tilde{\psi}) e ^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}*\frac{\partial G_2}{\partial\tilde{\psi}}+\chi(\phi,\tilde{\psi}) e ^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\cdot\frac{\partial\theta_A}{\partial\phi}*G_2\\[4mm] &-\frac{\partial G_2}{\partial\phi}*\frac{\partial\theta_A}{\partial\tilde{\psi}}\int_{-\infty}^{\tilde{\psi}}\chi(\phi,\tilde{\psi}') e ^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}' \end{split} \end{equation} \begin{equation*} \begin{split} &-G_2*\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}\int_{-\infty}^{\tilde{\psi}}\chi(\phi,\tilde{\psi}') e ^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'+2\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\cdot 
\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2\\[4mm] &+2\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\tilde{\tau}(\phi,\tilde{\psi})*\frac{\partial G_2}{\partial\tilde{\psi}}\\[4mm] &+\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'*\frac{\partial G_2}{\partial\tilde{\psi}}\\[4mm] &-2\omega_0 \chi_{\tilde{D}}(\phi,\tilde{\psi})e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\tilde{\tau}(\phi,\tilde{\psi})\cdot \frac{\partial\theta_A}{\partial\phi}*G_2\\[4mm] &-2\omega_0\chi_{\tilde{D}}(\phi,\tilde{\psi})e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi})}\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\cdot \frac{\partial\theta_A}{\partial\phi}*G_2\\[4mm] &+\omega_0\frac{\partial G_2}{\partial\phi}*\frac{\partial\theta_A}{\partial\tilde{\psi}}\cdot 2\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\tilde{\tau}(\phi,\tilde{\psi}')\,d\tilde{\psi}'\\[4mm] &+\omega_0 G_2*\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}\cdot 2 \int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\tilde{\tau}(\phi,\tilde{\psi}')\,d\tilde{\psi}'\\[4mm] &-4\omega_0\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\tilde{\tau}(\phi,\tilde{\psi}')\,d\tilde{\psi}'\cdot\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2\\[4mm] &-\omega_0\left(\int_{-\infty}^{\tilde{\psi}}\chi_{\tilde{D}}(\phi,\tilde{\psi}')e^{-2\mathcal{H}\theta_A(\phi,\tilde{\psi}')}\,d\tilde{\psi}'\right)^2\cdot\frac{\partial^2\theta_A}{\partial\phi\partial\tilde{\psi}}*G_2+o(\omega_0^2)\\[4mm] &\equiv\omega_0\mathcal{A}_1(\tilde{\tau},\theta_A)+\omega_0\mathcal{A}_2(\theta_A)+b(\theta_A)+o(\omega_0^2), \end{split} \end{equation*} \medskip \noindent We define the operator \begin{equation}\label{G-operator} \mathcal{G}(\tilde{\tau}; \omega_0,\theta_A)=\tilde{\tau}- \omega_0\mathcal{A}_1(\tilde{\tau},\theta_A)-\omega_0\mathcal{A}_2(\theta_A)-b(\theta_A)+o(\omega_0^2), \end{equation} \medskip \noindent where $\mathcal{G}(\tilde{\tau}; \omega_0,\theta_A):H^2_{even}\times\mathbb{R}\times H^2_{odd}\rightarrow H^2$ and, to invert this operator in a neighborhood of $\omega_0=0$, we will use the implicit function theorem \ref{IFT}. We observe that \begin{equation}\label{operator-tilde-tau} \left\{\begin{array}{lll} \mathcal{G}(\tilde{\tau};0,\theta_A) =0\\[4mm] D_{\tilde{\tau}}\mathcal{G}(\tilde{\tau};0,\theta_A)[\tau_1]=\tau_1. \end{array}\right. \end{equation} \medskip \noindent The equations \eqref{operator-tilde-tau} guarantee that in a neighborhood of $(\omega_0=0,\theta_A)$, there exists a smooth function $\tilde{\tau}^*(\omega_0,\theta_A)$, such that $\tilde{\tau}^*(0,\theta_A)=\tilde{\tau}$. \bigskip \subsection{Existence of Crapper waves in the presence of a small vortex patch} In this section we prove the existence of a perturbation of the Crapper waves, with small gravity and small vorticity. We will prove the existence theorem (Theorem \ref{patch-existence}) by means of the implicit function theorem.
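\medskip \noindent Before doing so, we remark that the inversion of the operator $\mathcal{G}$ above, for $\omega_0$ small, can also be illustrated by a simple fixed-point iteration: writing the equation as $\tilde{\tau}=\omega_0\mathcal{A}_1(\tilde{\tau},\theta_A)+\omega_0\mathcal{A}_2(\theta_A)+b(\theta_A)$, the right-hand side is a contraction for $\omega_0$ small. The Python sketch below is a toy illustration with hypothetical linear operators of unit norm, not the actual operators of the construction.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 50
A1 = rng.standard_normal((n, n))
A1 /= np.linalg.norm(A1, 2)            # normalize so that ||A1|| = 1
a2 = rng.standard_normal(n)
b = rng.standard_normal(n)
omega0 = 0.05                          # small vorticity parameter

tau = np.zeros(n)
for k in range(100):
    tau_new = omega0 * (A1 @ tau) + omega0 * a2 + b
    if np.linalg.norm(tau_new - tau) < 1e-14:
        break
    tau = tau_new

exact = np.linalg.solve(np.eye(n) - omega0 * A1, omega0 * a2 + b)
print(k, np.linalg.norm(tau - exact))  # few iterations, error near machine precision
\end{verbatim}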
However, to prove it we need an explicit parametrization for $\gamma(\alpha)$, in such a way that the operator related to \eqref{fix-patch} fulfils the hypotheses of the implicit function theorem. We define $\gamma(\alpha)$ as follows \begin{equation}\label{gamma} \gamma(\alpha)= \left\{ \begin{array}{lll} \displaystyle r\left(-\frac{\alpha+\pi}{\sin\alpha}\cos\alpha,-\alpha-\pi\right)\hspace{1.5cm}-\pi\leq \alpha<-\frac{\pi}{2}\\[5mm] \displaystyle r\left(\frac{\alpha}{\sin\alpha}\cos\alpha,\alpha\right)\hspace{3.3cm}-\frac{\pi}{2}\leq \alpha\leq\frac{\pi}{2}\\[5mm] \displaystyle r\left(-\frac{\alpha-\pi}{\sin\alpha}\cos\alpha,-\alpha+\pi\right)\hspace{2cm}\frac{\pi}{2}<\alpha\leq\pi, \end{array}\right. \end{equation} \medskip \noindent where $r\in \mathbb{R}$ is the small radius, so that its derivative is \begin{equation} \partial_{\alpha}\gamma(\alpha)= \left\{ \begin{array}{lll} \displaystyle r\left(\frac{(\alpha+\pi)-\cos\alpha\sin\alpha}{\sin^2\alpha},-1\right)\hspace{1.5cm}-\pi\leq \alpha<-\frac{\pi}{2}\\[5mm] \displaystyle r\left(\frac{\cos\alpha\sin\alpha-\alpha}{\sin^2\alpha},1\right)\hspace{3.3cm}-\frac{\pi}{2}\leq \alpha\leq\frac{\pi}{2}\\[5mm] \displaystyle r\left(\frac{(\alpha-\pi)-\cos\alpha\sin\alpha}{\sin^2\alpha},-1\right)\hspace{2cm}\frac{\pi}{2}<\alpha\leq\pi \end{array}\right. \end{equation} \medskip \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{My-patch} \caption{The choice of $\gamma(\alpha)$.}\label{my-patch} \end{figure} \medskip \noindent Since we will work in a neighborhood of $\omega_0=0$, we substitute $\tau=\mathcal{H}\theta_A+\omega_0 \tilde{\tau}^*$. The operators that identify the water waves problem with a vortex patch are the following \begin{subequations} \label{patch-water-waves} \begin{align} \begin{split} &\mathcal{F}_1(\theta_A, \tilde{\omega}, r; B,p,\omega_0):=\sinh(\mathcal{H}\theta_A+\omega_0\tilde{\tau}^*)\\[3mm] &\hspace{2cm}-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*}\left(\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*}\sin\theta(\alpha',1)}{W(\alpha',1)}\,d\alpha'-1\right)\\[3mm] &\hspace{2cm}+q W(\alpha,1) \frac{\partial(\theta_A+\omega_0\tilde{\theta}^*)}{\partial\alpha}-Be^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*}.
\end{split}\label{patch-water-waves-1}\\[5mm]
\begin{split}
&\mathcal{F}_2(\theta_A,\tilde{\omega}, r; B,p,\omega_0):=W(\alpha,1)\left(2BR(z(\alpha),\tilde{\omega}(\alpha))\cdot\partial_{\alpha}z(\alpha)+\tilde{\omega}(\alpha)\right.\\[3mm]
&\hspace{2cm}+\left.\frac{\omega_0}{2\pi}\int_{-\pi}^{\pi}\log|z(\alpha)-\gamma(\alpha')|\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot\partial_{\alpha}z(\alpha)\right)+2
\end{split}\label{patch-water-waves-2}\\[5mm]
\begin{split}
&\mathcal{F}_3(\theta_A,\tilde{\omega}, r; B, p,\omega_0):=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(\gamma(\alpha)-z(\alpha'))^{\perp}}{|\gamma(\alpha)-z(\alpha')|^2}\cdot\tilde{\omega}(\alpha')\,d\alpha'\cdot \partial_{\alpha}\gamma(\alpha)^{\perp}\\[3mm]
&\hspace{2cm}+\frac{\omega_0}{2\pi}P.V.\int_{-\pi}^{\pi}\log|\gamma(\alpha)-\gamma(\alpha')|\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot\partial_{\alpha}\gamma(\alpha)^{\perp}
\end{split}\label{patch-water-waves-3}
\end{align}
\end{subequations}
\bigskip
\noindent We have that
$$(\mathcal{F}_1,\mathcal{F}_2,\mathcal{F}_3)(\theta_A, \tilde{\omega}, r; B,p,\omega_0):H^{2}_{odd}\times H^{1}_{even}\times\mathbb{R}^{4}\rightarrow H^{1}_{even}\times H^{1}_{even}\times H^{1}.$$
\medskip
\noindent The main theorem we want to prove is the following.
\begin{theorem}\label{patch-existence}
Let us consider the water waves problem \eqref{v-euler}, with a small vortex patch and a small gravity $g$. Then, for some values of $A<A_0$, defined in \eqref{Crapper-solutions}, there exist periodic solutions to \eqref{v-euler} with overhanging profile.
\end{theorem}
\bigskip
\subsubsection{Proof of Theorem \ref{patch-existence}}
We will analyse the three operators \eqref{patch-water-waves} that define our problem and show that they satisfy the hypotheses of the implicit function theorem. First of all, we have to show that
\begin{equation}\label{operators-zero}
(\mathcal{F}_1,\mathcal{F}_2,\mathcal{F}_3)(\theta_c,\tilde{\omega}_c,0;0,0,0)=(0,0,0).
\end{equation}
\medskip
\noindent For $\mathcal{F}_1$, we use \eqref{Crapper-Bernoulli}:
\begin{equation*}
\begin{split}
\mathcal{F}_1(\theta_c,\tilde{\omega}_c,0;0,0,0)&=\sinh(\mathcal{H}\theta_c)+q\frac{\partial\theta_c}{\partial\alpha}=0
\end{split}
\end{equation*}
\medskip
\noindent For $\mathcal{F}_2$, it holds by construction, see \eqref{patch-v-tangential}. For $\mathcal{F}_3$, we write $\gamma(\alpha)$ explicitly as in \eqref{gamma} and take the radius $r$ equal to $0$. Thus $\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)$ satisfies \eqref{operators-zero}.\\
\noindent The most delicate part is to prove the invertibility of the derivatives. We observe that $D_{\tilde{\omega}}\mathcal{F}_1=D_{r}\mathcal{F}_1=0$, so it remains to compute $D_{\theta_A}\mathcal{F}_1$.
\begin{equation*} \begin{split} &D_{\theta_A}\mathcal{F}_1=\frac{d}{d\mu}\left[\sinh(\mathcal{H}\theta_A+\mu\mathcal{H}\theta_1+\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1))+qW_{\mu}\frac{\partial(\theta_A+\mu\theta_1+\omega_0\tilde{\theta}^*(\theta_A+\mu\theta_1))}{\partial\alpha}\right.\\[3mm] &\left.-pe^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1+\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}\sin(\theta_A+\mu\theta_1+\omega_0\tilde{\theta}^*(\theta_A+\mu\theta_1))}{W_\mu} \,d\alpha'\right.\\[3mm] &\left.-Be^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}\right]_{|\mu=0}\\[5mm] &=\cosh(\mathcal{H}\theta_A+\omega_0\tilde{\tau}^*(\theta_A))\cdot\left(\mathcal{H}\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\\[3mm] &+ q\left[\frac{d}{d\mu}W_{\mu}\right]_{|\mu=0}\frac{\partial(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))}{\partial\alpha}+q W\frac{\partial}{\partial\alpha}\left(\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\theta}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\\[3mm] &+p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\left(\mathcal{H}\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\cdot\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\sin(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))}{W} \,d\alpha'\\[5mm] &-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}}{W}\left(-\mathcal{H}\theta_1-\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\sin(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))\,d\alpha'\\[5mm] &-p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}}{W}\cos(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))\left(\theta_1+\omega_0\left[\frac{d}{d\mu}\tilde{\theta}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right)\,d\alpha'\\[5mm] &+p e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\int_{-\pi}^{\alpha}\frac{e^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\sin(\theta_A+\omega_0\tilde{\theta}^*(\theta_A))}{W^2}\left[\frac{d}{d\mu}W_{\mu}\right]_{|\mu=0}\,d\alpha'\\[3mm] &-Be^{-\mathcal{H}\theta_A-\omega_0\tilde{\tau}^*(\theta_A)}\left(-\mathcal{H}\theta_1-\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}\right). 
\end{split} \end{equation*}
\medskip
\begin{remark}\label{tilde-tau-frechet-derivative}
\noindent The equation for $W_{\mu}(\alpha,1)$ is the following:
\begin{equation}\label{patch-W-mu}
W_{\mu}(\alpha,1)=1+\int_{0}^{1} \omega_0\chi_{\tilde{D}_{\mathcal{A}}}(\alpha,\rho') \frac{e^{-2\mathcal{H}\theta_A-2\mu\mathcal{H}\theta_1-2\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}}{\rho'}\,d\rho'
\end{equation}
\bigskip
\noindent Now we have to compute $\displaystyle\frac{d W_{\mu}}{d\mu}$, that is
\begin{equation}\label{patch-DW-mu}
\begin{split}
&\left[\frac{d W_{\mu}}{d\mu}\right]_{|\mu=0}=\left[\frac{d}{d\mu}\left(1+\int_{0}^{1} \omega_0\chi_{\tilde{D}_{\mathcal{A}}}(\alpha,\rho') \frac{e^{-2\mathcal{H}\theta_A-2\mu\mathcal{H}\theta_1-2\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}}{\rho'}\,d\rho'\right)\right]_{|\mu=0}\\[5mm]
&=\left[\int_{0}^{1}\omega_0\chi_{\tilde{D}_{\mathcal{A}}}(\alpha,\rho') \frac{e^{-2\mathcal{H}\theta_A-2\mu\mathcal{H}\theta_1-2\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)}}{\rho'}\cdot\right.\\[4mm]
&\hspace{3cm}\left.\cdot\left(-2\mathcal{H}\theta_1-2\omega_0\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right)\,d\rho'\right]_{|\mu=0}\\[5mm]
&=\omega_0\int_{0}^{1}\chi_{\tilde{D}_{\mathcal{A}}}(\alpha,\rho') \frac{e^{-2\mathcal{H}\theta_A-2\omega_0\tilde{\tau}^*(\theta_A)}}{\rho'}\left(-2\mathcal{H}\theta_1-2\omega_0\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{\mu=0}\right)\,d\rho'
\end{split}
\end{equation}
\medskip
\noindent It remains to observe that, for our purposes, it is sufficient to have the existence of $\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)$, which comes from the elliptic equation \eqref{elliptic-alpha-rho-patch}. Indeed, we must compute the Fr\'echet derivative at the point $(\theta_c,\tilde{\omega}_c,0;0,0,0)$ and, as we can see in $D_{\theta_A}\mathcal{F}_1$, the term $\displaystyle\left[\frac{d}{d\mu}\tilde{\tau}^*(\theta_A+\mu\theta_1)\right]_{|\mu=0}$ is always multiplied by $\omega_0$, which is taken equal to zero. Hence $\displaystyle \left[\frac{d W_{\mu}}{d\mu}\right]_{|\mu=0}$ is also zero at this point.
\end{remark}
\medskip
\noindent Remark \ref{tilde-tau-frechet-derivative} implies that
\begin{equation*}
D_{\theta_A}\mathcal{F}_1(\theta_c,\tilde{\omega}_c,0;0,0,0)= \cosh(\mathcal{H}\theta_c)\cdot\mathcal{H}\theta_1+q\frac{\partial\theta_1}{\partial\alpha}.
\end{equation*}
\medskip
\noindent For the second operator, we observe that in order to compute $D_{\theta_A}\mathcal{F}_2$ we need the equation for $z(\alpha)$, which can be obtained by integrating \eqref{patch-interface-derivative}; accordingly, we define
$$z_\mu(\alpha)=-\int_{-\pi}^{\alpha} \frac{e^{-\mathcal{H}\theta_A-\mu\mathcal{H}\theta_1-\omega_0\tilde{\tau}^*(\theta_A+\mu\theta_1)+i(\theta_A+\mu\theta_1+\omega_0 \tilde{\theta}^*(\theta_A+\mu\theta_1))}}{W_{\mu}(\alpha',1)}\,d\alpha'-e_2$$
\medskip
\noindent where $W_{\mu}(\alpha',1)$ is defined in \eqref{patch-W-mu}. \\
\noindent In the same way as we did for $D_{\theta_A}\mathcal{F}_1$, we can compute $D_{\theta_A}\mathcal{F}_2$ and then evaluate it at the point $(\theta_c,\tilde{\omega}_c,0;0,0,0)$ to get $D_{\theta_A}\mathcal{F}_2(\theta_c,\tilde{\omega}_c,0;0,0,0)$.\\
\noindent It is more important to compute $D_{\tilde{\omega}}\mathcal{F}_2$ explicitly.
\begin{equation*} \begin{split} &D_{\tilde{\omega}}\mathcal{F}_2=\left[\frac{d}{d\mu}\mathcal{F}_2(\theta_A,\tilde{\omega}+\mu\omega_1,r;p,\varepsilon,\omega_0,B)\right]_{|\mu=0}=\left[\frac{d}{d\mu}\left[2 W(\alpha,1) BR(z(\alpha),\tilde{\omega}(\alpha)+\mu\omega_1)\cdot\partial_{\alpha}z(\alpha)\right.\right.\\[3mm] &\left.\left.+W(\alpha,1) (\tilde{\omega}(\alpha)+\mu\omega_1(\alpha))+W(\alpha,1)\frac{\omega_0}{2\pi}\int_{-\pi}^{\pi}\log|z(\alpha)-\gamma(\alpha')|\partial_{\alpha}\gamma(\alpha')\,d\alpha'\cdot\partial_{\alpha}z(\alpha)+2\right]\right]_{|\mu=0}\\[3mm] &=2W(\alpha,1)\textrm{P.V.}\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(z(\alpha)-z(\alpha'))^{\perp}}{|z(\alpha)-z(\alpha')|^2}\cdot \omega_1(\alpha')\,d\alpha'\cdot \partial_{\alpha}z(\alpha)+W(\alpha,1)\omega_1(\alpha). \end{split} \end{equation*} \medskip \noindent At the Crapper point we have $$D_{\tilde{\omega}}\mathcal{F}_2(\theta_c,\tilde{\omega}_c,0;0,0,0,0)=2\textrm{P.V.}\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{(z(\alpha)-z(\alpha'))^{\perp}}{|z(\alpha)-z(\alpha')|^2}\cdot \omega_1(\alpha')\,d\alpha'\cdot \partial_{\alpha}z(\alpha)+\omega_1(\alpha).$$ \bigskip \noindent It remains to compute the last derivate \begin{equation*} \begin{split} &D_{r}\mathcal{F}_2=\frac{d}{d\mu}\left[\mathcal{F}_2(\theta_A,\tilde{\omega},r+\mu r_1;B,p,\omega_0)\right]_{|\mu=0}\\[3mm] &=\left[\frac{d}{d\mu}\left[2 W(\alpha,1) BR(z(\alpha),\tilde{\omega}(\alpha))\cdot\partial_{\alpha}z(\alpha)+W(\alpha,1)\tilde{\omega}(\alpha)\right.\right.\\[3mm] &+W(\alpha,1) \frac{\omega_0}{2\pi}\int_{-\pi}^{-\frac{\pi}{2}}\log\sqrt{\left(z_1(\alpha)+(r+\mu r_1)\frac{\alpha'+\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+(r+\mu r_1)(\alpha'+\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot (r+\mu r_1)\left(\frac{(\alpha'+\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\log\sqrt{\left(z_1(\alpha)-(r+\mu r_1)\frac{\alpha'}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)-(r+\mu r_1)(\alpha')\right)^2}\\[3mm] &\hspace{3cm}\cdot (r+\mu r_1)\left(\frac{\cos\alpha'\sin\alpha'-\alpha'}{\sin^2\alpha'},1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[4mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\int_{\frac{\pi}{2}}^{\pi}\log\sqrt{\left(z_1(\alpha)+(r+\mu r_1)\frac{\alpha'-\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+(r+\mu r_1)(\alpha'-\pi)\right)^2}\\[3mm] &\hspace{3cm}\left.\cdot (r+\mu r_1)\left(\frac{(\alpha'-\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\right]_{|\mu=0}\\[4mm] \end{split} \end{equation*} \begin{equation*} \begin{split} &=W(\alpha,1)\frac{\omega_0}{2\pi}r\cdot r_1\int_{-\pi}^{-\frac{\pi}{2}}\frac{z_1(\alpha)+r\frac{\alpha'+\pi}{\sin\alpha'}\cos\alpha'+z_2(\alpha)+r(\alpha'+\pi)}{\left(z_1(\alpha)+r\frac{\alpha'+\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+r(\alpha'+\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{(\alpha'+\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\cdot r_1\int_{-\pi}^{-\frac{\pi}{2}}\log\sqrt{\left(z_1(\alpha)+r\frac{\alpha'+\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+r(\alpha'+\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{(\alpha'+\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}r\cdot 
r_1\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{z_1(\alpha)-r\frac{\alpha'}{\sin\alpha'}\cos\alpha'+z_2(\alpha)-r(\alpha')}{\left(z_1(\alpha)-r\frac{\alpha'}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)-r\alpha'\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{\cos\alpha'\sin\alpha'-\alpha'}{\sin^2\alpha'},1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\cdot r_1\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\log\sqrt{\left(z_1(\alpha)-r\frac{\alpha'}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)-r\alpha'\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{\cos\alpha'\sin\alpha'-\alpha'}{\sin^2\alpha'},1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha) \end{split} \end{equation*} \begin{equation*} \begin{split} &+W(\alpha,1)\frac{\omega_0}{2\pi}r\cdot r_1\int_{\frac{\pi}{2}}^{\pi}\frac{z_1(\alpha)+r\frac{\alpha'-\pi}{\sin\alpha'}\cos\alpha'+z_2(\alpha)+r(\alpha'-\pi)}{\left(z_1(\alpha)+r\frac{\alpha'-\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+r(\alpha'-\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{(\alpha'-\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha)\\[3mm] &+W(\alpha,1)\frac{\omega_0}{2\pi}\cdot r_1\int_{\frac{\pi}{2}}^{\pi}\log\sqrt{\left(z_1(\alpha)+r\frac{\alpha'-\pi}{\sin\alpha'}\cos\alpha'\right)^2+\left(z_2(\alpha)+r(\alpha'-\pi)\right)^2}\\[3mm] &\hspace{3cm}\cdot\left(\frac{(\alpha'-\pi)-\cos\alpha'\sin\alpha'}{\sin^2\alpha'},-1\right)\,d\alpha' \cdot\partial_{\alpha}z(\alpha) \end{split} \end{equation*} \medskip \noindent When we evaluate this derivative at $(\theta_c,\tilde{\omega}_c,0;0,0,0)$, we get $$D_{r}\mathcal{F}_2(\theta_c,\tilde{\omega}_c,0;0,0,0)=0.$$ \medskip \noindent For the last operator $\mathcal{F}_3$ we have to compute the derivates, but for $D_{\theta_A}\mathcal{F}_3$ and $D_{\tilde{\omega}}\mathcal{F}_3$, we have just to substitute $\theta_A\mapsto \theta_A+\mu\theta_1$ and $\tilde{\omega}\mapsto\tilde{\omega}+\mu\omega_1$, respectively and compute the derivatives as we did for the previous operators. Then we will compute them at the Crapper point, so that we get $D_{\theta_A}\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)$ and $D_{\tilde{\omega}}\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)$. In order to apply the implicit function theorem the relevant derivative for the third operator is the one with respect to $r$. The presence of $r$ is in the definition of $\gamma(\alpha)$ in \eqref{gamma}, so we rewrite $\mathcal{F}_3$ in a convenient way. \begin{equation*} \begin{split} &\mathcal{F}_3(\theta_A,\tilde{\omega},r;B,p,\omega_0)=\frac{-\partial_{\alpha}\gamma_2(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{-\gamma_2(\alpha)+z_2(\alpha')}{(\gamma_1(\alpha)-z_1(\alpha'))^2+(\gamma_2(\alpha)-z_2(\alpha'))^2}\tilde{\omega}(\alpha)\,d\alpha'\\[3mm] &\hspace{1cm}+\frac{\partial_{\alpha}\gamma_1(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{\gamma_1(\alpha)-z_1(\alpha')}{(\gamma_1(\alpha)-z_1(\alpha'))^2+(\gamma_2(\alpha)-z_2(\alpha'))^2}\tilde{\omega}(\alpha)\,d\alpha'\\[3mm] &\hspace{1cm}-\frac{\omega_0}{2\pi}\partial_{\alpha}\gamma_2(\alpha)\textrm{P.V.}\int_{-\pi}^{\pi}\log\sqrt{(\gamma_1(\alpha)-\gamma_1(\alpha'))^2+(\gamma_2(\alpha)-\gamma_2(\alpha'))^2}\hspace{0.2cm}\partial_{\alpha}\gamma_1(\alpha')\,d\alpha'\\[3mm] &\hspace{1cm}+\frac{\omega_0}{2\pi}\partial_{\alpha}\gamma_1(\alpha)\textrm{P.V.}\int_{-\pi}^{\pi}\log\sqrt{(\gamma_1(\alpha)-\gamma_1(\alpha'))^2+(\gamma_2(\alpha)-\gamma_2(\alpha'))^2}\hspace{0.2cm}\partial_{\alpha}\gamma_2(\alpha')\,d\alpha'. 
\end{split} \end{equation*} \bigskip \noindent In order to simplify the computation we will define $\gamma(\alpha)=r\hspace{0.1cm}(\tilde{\gamma}_1(\alpha),\tilde{\gamma}_2(\alpha))$ and $\partial_{\alpha}\gamma(\alpha)=r\hspace{0.1cm}(\partial_{\alpha}\tilde{\gamma}_1(\alpha),\partial_{\alpha}\tilde{\gamma}_2(\alpha))$. \medskip \begin{equation*} \begin{split} &D_r\mathcal{F}_3=\frac{d}{d\mu}\left[\mathcal{F}_3(\theta_A,\tilde{\omega},r+\mu r_1;B,p,\omega_0)\right]_{|\mu=0}=\\[3mm] &=-\frac{r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{-r\tilde{\gamma}_2(\alpha)+z_2(\alpha')+1}{(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2}\tilde{\omega}(\alpha')\,d\alpha'\\[3mm] &-\frac{r\partial_{\alpha}\tilde{\gamma}_2(\alpha)}{2\pi}\left\{\int_{-\pi}^{\pi}\frac{-r_1\tilde{\gamma}_2(\alpha)}{(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2}\tilde{\omega}(\alpha')\,d\alpha'\right.\\[3mm] &\hspace{2cm}-\int_{-\pi}^{\pi}\frac{-r\tilde{\gamma}_2(\alpha)+z_2(\alpha')+1}{\left[(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2\right]^2}\tilde{\omega}(\alpha')\\[3mm] &\hspace{3cm}\left.\cdot\left[2(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))r_1\tilde{\gamma}_1(\alpha)+2(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)r_1\tilde{\gamma}_2(\alpha)\right]\,d\alpha'\right\}\\[4mm] &+\frac{r_1\partial_{\alpha}\tilde{\gamma}_1(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{r\tilde{\gamma}_1(\alpha)-z_1(\alpha')}{(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2}\tilde{\omega}(\alpha')\,d\alpha'\\[3mm] &+\frac{r\partial_{\alpha}\tilde{\gamma}_1(\alpha)}{2\pi}\left\{\int_{-\pi}^{\pi}\frac{r_1\tilde{\gamma}_1(\alpha)}{(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2}\tilde{\omega}(\alpha')\,d\alpha'\right.\\[3mm] &\hspace{2cm}-\int_{-\pi}^{\pi}\frac{r\tilde{\gamma}_1(\alpha)-z_1(\alpha')}{\left[(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))^2+(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)^2\right]^2}\tilde{\omega}(\alpha')\\[3mm] &\hspace{3cm}\left.\cdot\left[2(r\tilde{\gamma}_1(\alpha)-z_1(\alpha'))r_1\tilde{\gamma}_1(\alpha)+2(r\tilde{\gamma}_2(\alpha)-z_2(\alpha')-1)r_1\tilde{\gamma}_2(\alpha)\right]\,d\alpha'\right\} \end{split} \end{equation*} \begin{equation*} \begin{split} &-\frac{\omega_0}{2\pi}2 r r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)\textrm{P.V.}\int_{-\pi}^{\pi}\log\left(r\sqrt{(\tilde{\gamma}_1(\alpha)-\tilde{\gamma}_1(\alpha'))^2+(\tilde{\gamma}_2(\alpha)-\tilde{\gamma}_2(\alpha'))^2}\right)\partial_{\alpha}\tilde{\gamma}_1(\alpha')\,d\alpha'\\[3mm] &-\frac{\omega_0}{2\pi}r r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)\int_{-\pi}^{\pi}\partial_{\alpha}\tilde{\gamma}_1(\alpha')\,d\alpha'\\[3mm] &+\frac{\omega_0}{2\pi}2rr_1\partial_{\alpha}\tilde{\gamma}_1(\alpha)\textrm{P.V.}\int_{-\pi}^{\pi}\log\left(r\sqrt{(\tilde{\gamma}_1(\alpha)-\tilde{\gamma}_1(\alpha'))^2+(\tilde{\gamma}_2(\alpha)-\tilde{\gamma}_2(\alpha'))^2}\right)\partial_{\alpha}\tilde{\gamma}_2(\alpha')\,d\alpha'\\[3mm] &+\frac{\omega_0}{2\pi}r r_1\partial_{\alpha}\tilde{\gamma}_1(\alpha)\int_{-\pi}^{\pi}\partial_{\alpha}\tilde{\gamma}_2(\alpha')\,d\alpha' \end{split} \end{equation*} \bigskip \begin{remark} We notice that all the terms above for $r=0$ disappear except for the first one and the third one. 
Moreover, by computing them at the Crapper point $(\theta_c, \tilde{\omega}_c)$ it follows that the third one will also be zero, because of the parity of the Crapper curve $z^c(\alpha)$ (see \eqref{z-parity}) and of $\tilde{\omega}(\alpha)$, which is even. Hence, in order to have the Fr\'echet derivative different from zero for every $\alpha\in[-\pi,\pi]$, we choose $\gamma(\alpha)$ as in \eqref{gamma}, so that the first term is always different from zero.
\end{remark}
\bigskip
\noindent Then we end up with
\begin{equation*}
\begin{split}
D_r\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)=&-\frac{r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{z^c_2(\alpha')+1}{(z^c_1(\alpha'))^2+(z^c_2(\alpha')+1)^2}\omega(\alpha')\,d\alpha'.
\end{split}
\end{equation*}
\medskip
\noindent It remains to prove the invertibility of the derivatives. In particular, the matrix of derivatives is the following
\begin{equation}\label{frechet-derivative}
D\mathcal{F}(\theta_c,\tilde{\omega}_c, 0;0,0,0)=
\begin{pmatrix}
D_{\theta_A}\mathcal{F}_1 & 0 & 0 \\
D_{\theta_A}\mathcal{F}_2 & D_{\tilde{\omega}}\mathcal{F}_2 & 0\\
D_{\theta_A}\mathcal{F}_3 & D_{\tilde{\omega}}\mathcal{F}_3 & D_r \mathcal{F}_3
\end{pmatrix}=
\begin{pmatrix}
\Gamma & 0 & 0\\
D_{\theta_A}\mathcal{F}_2 & \mathcal{A}(z^c(\alpha))+\mathcal{I}& 0\\
D_{\theta_A}\mathcal{F}_3 & D_{\tilde{\omega}}\mathcal{F}_3 & D_r\mathcal{F}_3
\end{pmatrix}\cdot
\begin{pmatrix}
\theta_1\\
\omega_1\\
r_1
\end{pmatrix}
\end{equation}
\medskip
\noindent where
\begin{align*}
&\Gamma\theta_1=\cosh(\mathcal{H}\theta_c)\mathcal{H}\theta_1+q\frac{d}{d\alpha}\theta_1\\[3mm]
&(\mathcal{A}(z^c(\alpha))+\mathcal{I})\omega_1=2BR(z^c(\alpha),\omega_1)\cdot\partial_{\alpha}z^c(\alpha)+\omega_1.\\[3mm]
&D_r\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)=-\frac{r_1\partial_{\alpha}\tilde{\gamma}_2(\alpha)}{2\pi}\int_{-\pi}^{\pi}\frac{z^c_2(\alpha')+1}{(z^c_1(\alpha'))^2+(z^c_2(\alpha')+1)^2}\omega(\alpha')\,d\alpha'.
\end{align*}
\medskip
\noindent We single out only these three operators because the matrix is lower triangular, so it is invertible provided the diagonal entries are invertible.\\
\noindent We observe immediately that the choice of the curve $\gamma(\alpha)$ is crucial, since the second component of $\partial_{\alpha}\tilde{\gamma}(\alpha)$ is nonzero for every $\alpha\in [-\pi,\pi]$. So we can invert $D_r\mathcal{F}_3(\theta_c,\tilde{\omega}_c,0;0,0,0)$, as required. For the other two operators we have to use Lemma \ref{DF1-invertible} and Lemma \ref{DF2-invertible} to overcome the problem of the non-invertibility of $\Gamma$, see Section \ref{point-existence-proof}. Hence, we state the following result.
\begin{theorem}
Let $|A|<A_0$. Then
\medskip
\begin{enumerate}
\item there exist a neighborhood $U_{\omega_0, \theta_A}$ of $(0,\theta_A)$ and a unique smooth function $\tilde{\tau}^*:U_{\omega_0, \theta_A}\rightarrow H^2_{even}$ such that $\tilde{\tau}^*(0,\theta_A)=\tilde{\tau}$ (see \eqref{G-operator}),\\[3mm]
\item there exist a neighborhood $U_{p,\omega_0}$ of $(0,0)$ and a unique smooth function $B^*:U_{p,\omega_0}\rightarrow U_B$ such that $B^*(0,0)=0$,\\[3mm]
\item there exist a neighborhood $U_{B,p,\omega_0}$ of $(0,0,0)$ and a unique smooth function $\Theta_c: U_{B,p,\omega_0}\rightarrow H^2_{odd}\times H^1_{even}\times\mathbb{R}$ such that $\Theta_c(0,0,0)=(\theta_c,\tilde{\omega}_c,0)$,
\end{enumerate}
\medskip
and these functions satisfy
$$\mathcal{F}(\Theta_c(B^*(p,\omega_0), p,\omega_0);B^*(p,\omega_0),p,\omega_0)=0.$$
\end{theorem}
\medskip
\noindent The proof of Theorem \ref{patch-existence} follows directly from this theorem.
\bigskip \subsection*{Acknowledgements.} This work is supported in part by the Spanish Ministry of Science and Innovation, through the “Severo Ochoa Programme for Centres of Excellence in R\&D (CEX2019-000904-S)” and MTM2017-89976-P. DC and EDI were partially supported by the ERC Advanced Grant 788250. \printbibliography \end{document}
\section{The bi-criteria objective} \label{sec-bicriteria} In this section we state our results with respect to the bi-criteria objective, for both deterministic and randomized benchmarks. Recall that our bi-criteria objective focuses on the worst-case error rates. We only consider the case of uniform costs. Let $k\geq 2$ be the number of crowds. \xhdr{Worst-case error rates.} Let $R_0$ be a single-crowd stopping rule. Let $\error(R_0)$ be the worst-case error rate of $R_0$, taken over all single-crowd instances (i.e., all values of the gap). Let $R$ be the composite stopping rule based on $R_0$. Let $(\A,R_0)$ denote the \Algorithm in which a crowd-selection algorithm $\A$ is used together with the stopping rule $R$. Let $\error(\A|R_0)$ be the worst-case error rate of $(\A,R_0)$, over all problem instances. Then \begin{align}\label{eq:A-error} \error(\A|R_0) \leq (k+1)\, \error(R_0). \end{align} Note that the worst-case error rate of the benchmark is simply $\error(R_0)$. (It is achieved on a problem instance in which every crowd has the gap that maximizes the error rate of $R_0$.) Thus, using the same $R_0$ roughly equalizes the worst-case error rate between $\A$ and the benchmarks. \xhdr{Absolute benchmarks.} We consider benchmarks in which both the best crowd (resp., the best distribution over crowds) and the stopping rule are chosen by the benchmark. Thus, the benchmark cost is not relative to any particular single-crowd stopping rule. We call such benchmarks \emph{absolute}. Let $T(\rho)$ be the smallest time horizon $T$ for which the single-crowd stopping rule in \eqref{eq:stopping-rule-total-body} achieves $\error(R_0)\leq \rho$. Fix error rate $\rho>0$ and time horizon $T\geq T(\rho)$. We focus on symmetric, gap-decreasing single-crowd stopping rules $R_0$ such that $\error(R_0)\leq \rho$ and $R_0$ stops after at most $T$ rounds; let $\mathcal{R}(\rho,T)$ be the family of all such stopping rules. Fix a problem instance. Let $i^*$ be the crowd with the largest bias, and let $\mu^*$ be the distribution over crowds with the largest induced bias. The \emph{absolute deterministic benchmark} (with error rate $\rho$ and time horizon $T\geq T(\rho)$) is defined as $$ \bench(i^*,\rho,T) = \min_{R_0\in \mathcal{R}(\rho,T)} \cost(i^*|R_0). $$ Likewise, the \emph{absolute randomized benchmark} is defined as $$ \bench(\mu^*,\rho,T) = \min_{R_0\in \mathcal{R}(\rho,T)} \cost(\mu^*|R_0). $$ \begin{theorem}[bi-criteria results] \label{thm:UCB-bi} Consider the \problem with $k$ crowds and uniform costs. Fix error rate $\rho>0$ and time horizon $T\geq T(\rho)$. Then: \begin{itemize} \item[(a)] {\em Deterministic benchmark.} There exists a \Algorithm $(\A,R_0)$ such that \begin{align*} \cost(\A|R_0) &\leq \bench(i^*,\rho,T) + O(\Lambda \log T), \text{ where } \Lambda = \textstyle \sum_{i\neq i^*} \left(\bias_{i^*}-\bias_i \right)^{-2} ,\\ \error(\A|R_0) &\leq (k+1)\,\rho. \end{align*} \item[(b)] {\em Randomized benchmark.} There exists a \Algorithm $(\A,R_0)$ such that \begin{align*} \cost(\A|R_0) &\leq O(\log T \log \tfrac{1}{\rho}) \;(\bench(\mu^*,\rho,T))^{1+k/2} \\ \error(\A|R_0) &\leq (k+1)\,\rho. \end{align*} \end{itemize} \end{theorem} \begin{myproof}[Sketch] For part (a), we use the version of $\AlgUCB$ as in Theorem~\ref{thm:UCB}, with the single-crowd stopping rule $R_0$ from the absolute deterministic benchmark. The upper bound on $\cost(\A|R_0)$ follows from Theorem~\ref{thm:UCB}. The upper bound on $\error(\A|R_0)$ follows from \eqref{eq:A-error}.
For part (b), we use the algorithm from Theorem~\ref{thm:AlgUnif}, together with the stopping rule given by \eqref{eq:stopping-rule-total}. The stopping rule has time horizon $T$; the quality parameter $\errorC$ is tuned so that the worst-case error rate matches that in the absolute randomized benchmark. The upper bound on $\cost(\A|R_0)$ follows from Theorem~\ref{thm:AlgUnif}, and the upper and lower bounds in Section~\ref{sec:single-crowd}. The upper bound on $\error(\A|R_0)$ follows from \eqref{eq:A-error}. \end{myproof} \xhdr{A lower bound on the error rate.} Fix a single-crowd stopping rule $R_0$ with $\rho = \error(R_0)$, and a crowd-selection algorithm $\A$. To complement \eqref{eq:A-error}, we conjecture that $\error(\A|R_0) \geq \rho$. We prove a slightly weaker result: essentially, if the composite stopping rule does not use the total crowd, then $\error(\A|R_0) \geq \rho\,(1-2k\rho)$. We will need a mild assumption on $\A$: essentially, that it never commits to stop using any given crowd. Formally, $\A$ is called \emph{non-committing} if for every problem instance, each time $t$, and every crowd $i$, it will choose crowd $i$ at some time after $t$ with probability one. (Here we consider a run of $\A$ that continues indefinitely, without being stopped by the stopping rule.) \begin{lemma}\label{lm:error-rate-LB} Let $R_0$ be a symmetric single-crowd stopping rule with worst-case error rate $\rho$. Let $\A$ be a non-committing crowd-selection algorithm, and let $R$ be the composite stopping rule based on $R_0$ which does not use the total crowd. If $\A$ is used in conjunction with $R$, the worst-case error rate is at least $\rho\,(1-2k\rho)$, where $k$ is the number of crowds. \end{lemma} \begin{myproof} Suppose $R_0$ attains the worst-case error rate for a crowd with gap $\gap$. Consider the problem instance in which one crowd (say, crowd $1$) has gap $\gap$ and all other crowds have gap $0$. Let $R_{(i)}$ be the instance of $R_0$ that takes inputs from crowd $i$, for each $i$. Let $E$ be the event that each $R_{(i)}$, $i>1$ does not ever stop. Let $E'$ be the event that $R_{(1)}$ stops and makes a mistake. These two events are independent, so the error rate of $R$ is at least $\Pr[E]\, \Pr[E']$. By the choice of the problem instance, $\Pr[E']=\rho$. And by Lemma~\ref{lm:LB-infty}, $\Pr[E] \geq 1-2k\rho$. It follows that the error rate of $R$ is at least $\rho\, (1-2k\rho)$. \end{myproof} \section{Open questions} \label{sec:questions} \OMIT{ Research in human computation algorithms is a new and exciting area that has a lot of potential. In this paper, we presented the \problem and introduced a number of algorithms. Our approach was inspired by real world problems from a commercial search engine such as relevance assessment and training set construction.} \xhdr{The \problem.} The main open questions concern crowd-selection algorithms for the randomized benchmark. First, we do not know how to handle non-uniform costs. Second, we conjecture that our algorithm for uniform costs can be significantly improved. Moreover, it is desirable to combine guarantees against the randomized benchmark with (better) guarantees against the deterministic benchmark. Our results prompt several other open questions. First, while we obtain strong provable guarantees for $\AlgUCB$, it is desirable to extend these or similar guarantees to $\AlgThompson$, since this algorithm performs best in the experiments. Second, is it possible to significantly improve over the composite stopping rules? 
Third, is it advantageous to forego our ``independent design'' approach and design the crowd-selection algorithms jointly with the stopping rules? \xhdr{Extended models.} It is tempting to extend our model in several directions listed below. First, while in our model the gap of each crowd does not change over time, it is natural to study settings with bounded or ``adversarial'' change; one could hope to take advantage of the tools developed for the corresponding versions of MAB. Second, as discussed in the introduction, an alternative model worth studying is to assign a monetary penalty to a mistake, and optimize the overall cost (i.e., cost of labor minus penalty). Third, one can combine the \problem with learning across multiple related microtasks. \xhdr{Acknowledgements.} We thank Ashwinkumar Badanidiyuru, Sebastien Bubeck, Chien-Ju Ho, Robert Kleinberg and Jennifer Wortman Vaughan for stimulating discussions on our problem and related research. Also, we thank Rajesh Patel, Steven Shelford and Hai Wu from Microsoft Bing for insights into the practical aspects of crowdsourcing. Finally, we are indebted to the anonymous referees for sharp comments which have substantially improved the presentation. In particular, we thank anonymous reviewers for pointing out that our index-based algorithm can be interpreted via virtual rewards. \begin{small} \bibliographystyle{alpha} \section{A missing proof from Section~\ref{sec:randomized-benchmark}} In the proof of Lemma~\ref{lm:benchmarks-2options}, we have used the following general vector inequality: \begin{claim}\label{cl:standard-inequality} $(\vec{x}\cdot \vec{\alpha})(\vec{x}\cdot \vec{\beta}) \geq \min_i\alpha_i \beta_i$ for any vectors $\vec{\alpha},\vec{\beta}\in \R^k_+$ and any $k$-dimensional distribution $\vec{x}$. \end{claim} This inequality appears standard, although we have not been able to find a reference. We supply a self-contained proof below. \begin{proof} W.l.o.g. assume $\alpha_1 \beta_1 \leq \alpha_2 \beta_2 \leq \ldots \leq \alpha_k \beta_k$. Let us use induction on $k$, as follows. Let \[ f(\vec{x}) \triangleq (\vec{x}\cdot \vec{\alpha})(\vec{x}\cdot \vec{\beta}) = (x_1 \alpha_1 + A)(x_1\beta_1 +B) \] where \[ \begin{cases} A &= \sum_{i>1} x_i \alpha_i \\ B &= \sum_{i>1} x_i \beta_i \end{cases}.\] Denoting $p = x_1$, we can write the above expression as \begin{align}\label{eq:standard-inequality-1} f(\vec{x}) = p^2 \alpha_1 \beta_1 + p(\alpha_1 B + \beta_1 A) + AB. \end{align} First, let us invoke the inductive hypothesis to handle the $AB$ term in~\eqref{eq:standard-inequality-1}. Let $y_i = \tfrac{x_i}{1-p}$ and note that $\{y_i\}_{i>1}$ is a distribution. It follows that $\tfrac{A}{1-p} \tfrac{B}{1-p} \geq \alpha_2 \beta_2$. In particular, $AB\geq (1-p)^2 \alpha_1\beta_1$. Next, let us handle the second summand in~\eqref{eq:standard-inequality-1}. Let us re-write it to make things clearer: \begin{align} \alpha_1\, B + \beta_1\, A &= (1-p)\; \sum_{i>1}\;\left( \alpha_1\, y_i\, \beta_i + \beta_1\, y_i\, \alpha_i \right) \nonumber \\ &= (1-p)\,\alpha_1 \beta_1 \sum_{i>1}\; y_i \left(\frac{\alpha_i}{\alpha_1} + \frac{\beta_i}{\beta_1} \right). \label{eq:standard-inequality-2} \end{align} We handle the term in the big brackets using the assumption that $\alpha_1 \beta_1 \leq \alpha_i \beta_i$. By this assumption it follows that $\tfrac{\alpha_i}{\alpha_1} \geq \tfrac{\beta_1}{\beta_i}$ and therefore $\tfrac{\alpha_i}{\alpha_1} + \tfrac{\beta_i}{\beta_1} \geq \tfrac{\beta_1}{\beta_i} + \tfrac{\beta_i}{\beta_1} \geq 2$.
Plugging this into~\eqref{eq:standard-inequality-2}, we obtain \[ \alpha_1 B + \beta_1 A \geq 2(1-p)\, \alpha_1 \beta_1.\] Finally, going back to~\eqref{eq:standard-inequality-1} we obtain \begin{align*} f(\vec{x}) &\geq p^2 \,\alpha_1\beta_1 + 2p(p-1) \,\alpha_1\beta_1 + (1-p)^2 \,\alpha_1\beta_1 \\ &= \alpha_1\beta_1. \qedhere \end{align*} \end{proof} \OMIT \xhdr{Comparison of worst-case error rates.} Fix a single-crowd stopping rule $R_0$. We would like to argue that the worst-case error rate of an arbitrary crowd-selection rule $\A$, when used with $R_0$, is not much smaller than the worst-case error rate of the benchmarks, i.e. that of $R_0$. We will need a mild assumption on $\A$: essentially, that it never commits to stop using any given crowd. Formally, a crowd-selection algorithm $\A$ is called \emph{non-committing} if for every problem instance, each time $t$, and every crowd $i$, it will choose crowd $i$ at some time after $t$ with probability one. (Here we consider a run of $\A$ that continues indefinitely, without being stopped by the stopping rule.) \begin{lemma}\label{lm:error-rate-LB} Let $R_0$ be a symmetric single-crowd stopping rule with worst-case error rate $\rho$. Let $\A$ be a non-committing crowd-selection algorithm, and let $R$ be the composite stopping rule based on $R_0$ which does not use the total crowd. If $\A$ is used in conjunction with $R$, the worst-case error rate is at least $\rho\,(1-2k\rho)$, where $k$ is the number of crowds. \end{lemma} \begin{proof} Suppose $R_0$ attains the worst-case error rate for a crowd with gap $\gap$. Consider the problem instance in which one crowd (say, crowd $1$) has gap $\gap$ and all other crowds have gap $0$. Let $R_{(i)}$ be the instance of $R_0$ that takes inputs from crowd $i$, for each $i$. Let $E$ be the event that each $R_{(i)}$, $i>1$ does not ever stop. Let $E'$ be the event that $R_{(1)}$ stops and makes a mistake. These two events are independent, so the error rate of $R$ is at least $\Pr[E]\, \Pr[E']$. By the choice of the problem instance, $\Pr[E']=\rho$. And by Lemma~\ref{lm:LB-infty}, $\Pr[E] \geq 1-2k\rho$. It follows that the error rate of $R$ is at least $\rho\, (1-2k\rho)$. \end{proof} } \section{The randomized benchmark} \label{sec:randomized-benchmark} In this section we further discuss the randomized benchmark for crowd-selection algorithms. Informally, it is the best \emph{randomized} time-invariant policy given the latent information (response distributions of the crowds). Formally this benchmark is defined as $\min \cost(\mu|R_0)$, where the minimum is over all distributions $\mu$ over crowds, and $R_0$ is a fixed single-parameter stopping rule. Recall that in the definition of $\cost(\mu|R_0)$, the total crowd is treated as a single data source to which $R_0$ is applied. The total crowd under a given $\mu$ behaves as a single crowd whose response distribution $\D_\mu$ is given by $\D_\mu(x) = \E_{i\sim \mu}[\D_i(x)]$ for all options $x$. The gap of $\D_\mu$ will henceforth be called the \emph{induced gap} of $\mu$, and denoted $f(\mu) = \gap(\D_\mu)$. If the costs are uniform then $\cost(\mu|R_0)$ is simply the expected stopping time of $R_0$ on $\D_\mu$, which we denote $\tau(\D_\mu)$. Informally, $\tau(\D_\mu)$ is driven by the induced gap of $\mu$. \OMIT{ \footnote{To make this a formal statement, we need to assume that the expected stopping time of $R_0$ for a gap-$\gap$ crowd is $\gap^{-2}$ up to a constant factor. 
(The factor may depend on $R_0$ but not on the response distribution of the crowd). Then $\cost(\mu|R_0) = \Theta(\gap^{-2})$, where $\gap = \gap(\D_\mu)$.} } We show that the induced gap can be much larger than the gap of any crowd. \begin{lemma}\label{lm:induced-gap} Let $\mu$ be the uniform distribution over crowds. For any $\gap>0$ there exists a problem instance such that the gap of each crowd is $\gap$, and the induced gap of $\mu$ is at least $\tfrac{1}{10}$. \end{lemma} \begin{proof} The problem instance is quite simple: there are two crowds and three options, and the response distributions are $(\tfrac25+\eps, \tfrac25,\tfrac15-\eps )$ and $(\tfrac25+\eps, \tfrac15-\eps, \tfrac25)$. Then $\D_\mu = (\tfrac25 + \eps, \tfrac{3}{10}-\tfrac{\eps}{2}, \tfrac{3}{10}-\tfrac{\eps}{2})$. \end{proof} We conclude that the randomized benchmark does not reduce to the deterministic benchmark: in fact, it can be much stronger. Formally, this follows from Lemma~\ref{lm:induced-gap} under a very mild assumption on $R_0$: that for any response distribution $\D$ with gap $\tfrac{1}{10}$ or more, and any response distribution $\D'$ whose gap is sufficiently small, it holds that $\tau(\D)\gg \tau(\D')$. The implication for the design of crowd-selection algorithms is that algorithms that zoom in on the best crowd may be drastically suboptimal. Instead, for some problem instances the right goal is to optimize over distributions over crowds. However, the randomized benchmark coincides with the deterministic benchmark for some important special cases. First, the two benchmarks coincide if the costs are uniform and all crowds agree on the top \emph{two} options (and $R_0$ is gap-decreasing). Second, the two benchmarks may coincide if there are only two options ($|\options|=2$), see Lemma~\ref{lm:benchmarks-2options} below. To prove this lemma for non-uniform costs, one needs to explicitly consider $\cost(\mu|R_0)$ rather than just argue about the induced gaps. Our proof assumes that the expected stopping time of $R_0$ is a concave function of the gap; it is not clear whether this assumption is necessary. \begin{lemma}\label{lm:benchmarks-2options} Consider the \problem with two options ($|\options|=2$). Consider a symmetric single-crowd stopping rule $R_0$. Assume that the expected stopping time of $R_0$ on response distribution $\D$ is a concave function of $\gap(\D)$. Then the randomized benchmark coincides with the deterministic benchmark. That is, $\cost(\mu|R_0) \geq \min_i \cost(i|R_0)$ for any distribution $\mu$ over crowds. \end{lemma} \begin{proof} Let $\mu$ be an arbitrary distribution over crowds. Recall that $f(\mu)$ denotes the induced gap of $\mu$. Note that $f(\mu) = \mu\cdot \vec{\gap}$. To see this, let $\options = \{x,y\}$, where $x$ is the correct answer, and write \begin{align*} \gap(\D_\mu) &= \D_\mu(x) - \D_\mu(y) = \mu\cdot \vec{D}(x) - \mu\cdot \vec{D}(y) = \mu \cdot \left( \vec{D}(x) - \vec{D}(y) \right) = \mu\cdot \vec{\gap}. \end{align*} Let $\A$ be the non-adaptive crowd-selection algorithm that corresponds to $\mu$. For each round $t$, let $i_t$ be the crowd chosen by $\A$ in this round, i.e. an independent sample from $\mu$. Let $N$ be the realized stopping time of $\A$. Let $\tau(\gap)$ be the expected stopping time of $R_0$ on response distribution with gap $\gap$. Note that $\E[N] = \tau( f(\mu) )$. 
Therefore: \begin{align*} \cost(\mu|R_0) &= \textstyle \E\left[ \sum_{t=1}^N c_{i_t} \right] = \E[c_{i_t}] \; \E[N] & \text{by Wald's identity}\\ &= (\vec{c}\cdot \mu)\;\; \tau(\vec{\gap}\cdot\mu) \geq (\vec{c}\cdot \mu)\, \textstyle \sum_i\; \mu_i\, \tau(\gap_i) & \text{by concavity of $\tau(\cdot)$} \\ &\geq \min_i\; c_i\, \tau(\gap_i) & \text{by Claim~\ref{cl:standard-inequality}}\\ &= \min_i \cost(i | R_0 ). \end{align*} We have used a general fact that $(\vec{x}\cdot \vec{\alpha})(\vec{x}\cdot \vec{\beta}) \geq \min_i\alpha_i \beta_i$ for any vectors $\vec{\alpha},\vec{\beta}\in \R^k_+$ and any $k$-dimensional distribution $\vec{x}$. A self-contained proof of this fact can be found in the Appendix (Claim~\ref{cl:standard-inequality}). \end{proof} \section{Crowd selection against the randomized benchmark} \label{sec:randomized-benchmark-algs} We design a crowd-selection algorithm with guarantees against the randomized benchmark. We focus on uniform costs, and (a version of) the single-crowd stopping rule from Section~\ref{sec:single-crowd}. Our single-crowd stopping rule $R_0$ is as follows. Let $\empir{\gap}{*}{t}$ be the empirical gap of the total crowd. Then $R_0$ stops upon reaching round $t$ if and only if \begin{align}\label{eq:stopping-rule-total} \empir{\gap}{*}{t} > \errorC/\sqrt{t} \quad \text{or} \quad t=T. \end{align} Here $\errorC$ is the ``quality parameter'' and $T$ is a given time horizon. Throughout this section, let $\mathcal{M}$ be the set of all distributions over crowds, and let $f^* = \max_{\mu\in \mathcal{M}} f(\mu)$ be the maximal induced gap. The benchmark cost is then at least $\Omega((f^*)^{-2})$. We design an algorithm $\A$ such that $\cost(\A|R_0)$ is upper-bounded by (essentially) a function of $f^*$, namely $O\left( (f^*)^{-(k+2)} \right)$. We interpret this guarantee as follows: we match the benchmark cost for a distribution over crowds whose induced gap is $(f^*)^{2/(k+2)}$. By Lemma~\ref{lm:induced-gap}, the gap of the best crowd may be much smaller, so this can be a significant improvement over the deterministic benchmark. \begin{theorem}\label{thm:AlgUnif} Consider the \problem with uniform costs. Let $R_0$ be the single-crowd stopping rule given by~\refeq{eq:stopping-rule-total}. There exists a crowd-selection algorithm $\A$ such that $$ \cost(\A|R_0) \leq O\left( (f^*)^{-(k+2)}\; \sqrt{\log T} \right). $$ \end{theorem} The proof of Theorem~\ref{thm:AlgUnif} relies on some properties of the induced gap: concavity and Lipschitz-continuity. Concavity is needed for the reduction lemma (Lemma~\ref{lm:inducedMAB-reduction}), and Lipschitz-continuity is used to solve the MAB problem that we reduce to. \begin{claim}\label{lm:inducedGap-props} Consider the induced gap $f(\mu)$ as a function on $\mathcal{M}\subset \Re^k_+$. First, $f(\mu)$ is a concave function. Second, $ |f(\mu) - f(\mu')| \leq n\,\|\mu-\mu'\|_1$ for any two distributions $\mu,\mu'\in \mathcal{M}$. \end{claim} \begin{proof} Let $\mu$ be a distribution over crowds. Then \begin{align} f(\mu) = \D_\mu(x^*) - \max_{x\in \options \setminus \{x^*\}} \D_\mu(x) = \min_{x\in \options \setminus \{x^*\}} \mu\cdot \left( \vec{D}(x^*) - \vec{D}(x)\right). \label{eq:min-of-linear-fns} \end{align} Thus, $f(\mu)$ is concave as a minimum of concave functions. The second claim follows because $$ (\mu-\mu') \cdot \left( \vec{D}(x^*) - \vec{D}(x)\right) \leq n\,\|\mu-\mu'\|_1 \quad \text{for each option $x$}.
\qedhere $$ \end{proof} \subsection{Proof of Theorem~\ref{thm:AlgUnif}} \xhdr{Virtual rewards.} Consider the MAB problem with virtual rewards, where arms correspond to distributions $\mu$ over crowds, and the virtual reward is equal to the induced gap $f(\mu)$; call it the \emph{induced MAB problem}. The standard definition of regret is with respect to the best fixed arm, i.e. with respect to $f^*$. We interpret an algorithm $\A$ for the induced MAB problem as a crowd-selection algorithm: in each round $t$, the crowd is sampled independently at random from the distribution $\mu_t\in \mathcal{M}$ chosen by $\A$. \begin{lemma}\label{lm:inducedMAB-reduction} Consider the \problem with uniform costs. Let $R_0$ be the single-crowd stopping rule given by~\refeq{eq:stopping-rule-total}. Let $\A$ be an MAB algorithm for the induced MAB instance. Suppose $\A$ has regret $O(t^{1-\gamma} \log T)$ with probability at least $1-\tfrac{1}{T}$, where $\gamma\in (0,\tfrac12]$. Then $$ \cost(\A|R_0) \leq O\left( (f^*)^{-1/\gamma}\; \sqrt{\log T} \right). $$ \end{lemma} \begin{proof} Let $\mu_t\in \mathcal{M}$ be the distribution chosen by $\A$ in round $t$. Then the total crowd returns each option $x$ with probability $\mu_t\cdot\vec{D}(x)$, and this event is conditionally independent of the previous rounds given $\mu_t$. Fix round $t$. Let $N_t(x)$ be the number of times option $x$ is returned up to time $t$ by the total crowd, and let $ \empirQty{\D}_t(x) = \tfrac{1}{t}\, N_t(x)$ be the corresponding empirical frequency. Note that $$ \E\left[ \empirQty{\D}_t(x) \right] = \bar{\mu}_t\cdot \vec{D}(x), \quad\text{where } \bar{\mu}_t \triangleq \frac{1}{t}\sum_{s=0}^t \mu_s. $$ The time-averaged distribution over crowds $\bar{\mu}_t$ is a crucial object that we will focus on from here onwards. By the Azuma-Hoeffding inequality, for each $C>0$ and each option $x\in\options$ we have \begin{align}\label{eq:totalCrowd-Azuma} \Pr\left[ \left|\empirQty{\D}_t(x) - \bar{\mu}_t\cdot \vec{D}(x) \right| < \frac{C}{\sqrt{t}} \right] > 1-e^{-\Omega(C^2)}. \end{align} Let $\empirQty{\gap}_t = \gap(\empirQty{\D}_t)$ be the empirical gap of the total crowd. Taking the union bound in \eqref{eq:totalCrowd-Azuma} over all options $x\in \options$, we conclude that $\empirQty{\gap}_t$ is close to the induced gap of $\bar{\mu}_t$: $$ \Pr\left[ \left| \empirQty{\gap}_t - f(\bar{\mu}_t) \right| < \frac{C}{\sqrt{t}} \right] > 1-n\,e^{-\Omega(C^2)},\quad \text{for each $C>0$.} $$ In particular, $R_0$ stops at round $t$ with probability at least $1-\frac{1}{T}$ as long as \begin{align}\label{eq:inducedMAB-stopping} f(\bar{\mu}_t) > t^{-1/2}\;( \errorC+ O(\sqrt{\log T})). \end{align} By concavity of $f$, we have $ f(\bar{\mu}_t) \geq \bar{f}_t $, where $\bar{f}_t \triangleq \frac{1}{t}\sum_{s=0}^t f(\mu_s)$ is the time-averaged virtual reward. Now, $ t\bar{f}_t$ is simply the total virtual reward by time $t$, which is close to $t f^*$ with high probability. Specifically, the regret of $\A$ by time $t$ is $ R(t) = t(f^* - \bar{f}_t) $, and we are given a high-probability upper bound on $R(t)$. Putting this all together, $ f(\bar{\mu}_t) \geq \bar{f}_t \geq f^* - R(t)/t $. An easy computation shows that $f(\bar{\mu}_t)$ becomes sufficiently large to trigger the stopping condition~\refeq{eq:inducedMAB-stopping} for $t = O\left( (f^*)^{-1/\gamma}\; \sqrt{\log T} \right)$.
\end{proof} \xhdr{Solving the induced MAB problem.} We derive a (possibly inefficient) algorithm for the induced MAB instance. We treat $\mathcal{M}$ as a subset of $\Re^k$, endowed with a metric $d(\mu,\mu') = n\,\|\mu-\mu'\|_1$. By Lemma~\ref{lm:inducedGap-props}, the induced gap $f(\mu)$ is Lipschitz-continuous with respect to this metric. Thus, in the induced MAB problem the arms form a metric space $(\mathcal{M},d)$ such that the (expected) rewards are Lipschitz-continuous for this metric space. MAB problems with this property are called \emph{Lipschitz MAB}~\cite{LipschitzMAB-stoc08}. We need an algorithm for Lipschitz MAB that works with virtual rewards. We use the following simple algorithm from \cite{Bobby-nips04,LipschitzMAB-stoc08}. We treat $\mathcal{M}$ as a subset of $\Re^k$, and apply this algorithm to $\Re^k$. The algorithm runs in phases $j=1,2,3,\ldots$ of duration $2^j$. Each phase $j$ is as follows. For some fixed parameter $\delta_j>0$, discretize $\Re^k$ uniformly with granularity $\delta_j$. Let $S_j$ be the resulting set of arms. Run bandit algorithm $\UCB$~\cite{bandits-ucb1} on the arms in $S_j$. (For each arm in $S_j\setminus \mathcal{M}$, assume that the reward is always $0$.) This completes the specification of the algorithm. Crucially, we can implement $\UCB$ (and therefore the entire uniform algorithm) with virtual rewards, by using $\empirQty{\gap}_t$ as an estimate for $f(\mu)$. Call the resulting crowd-selection algorithm \AlgUnif. Optimizing the $\delta_j$ using a simple argument from \cite{Bobby-nips04}, we obtain regret $O(t^{1-1/(k+2)}\, \log T)$ with probability at least $(1-\tfrac{1}{T})$. Therefore, by Lemma~\ref{lm:inducedMAB-reduction} (applied with $\gamma = 1/(k+2)$), the resulting bound on $\cost(\AlgUnif|R_0)$ suffices to prove Theorem~\ref{thm:AlgUnif}. We can also use a more sophisticated \emph{zooming algorithm} from~\cite{LipschitzMAB-stoc08}, which obtains the same regret bound in the worst case, but achieves better regret for ``nice'' problem instances. This algorithm can also be implemented with virtual rewards (in a similar way). However, it is not clear how to translate the improved regret bound for the zooming algorithm into a better cost bound for the \problem. \OMIT{ \begin{lemma} Let $R_0$ be the single-crowd stopping rule given by~\refeq{eq:stopping-rule-total}. Let $\D$ be a response distribution with gap $\gap>0$. Let $\tau(\D)$ be the expected stopping time of $R_0$ on $\D$. Then $\tfrac{1}{c}\, \errorC\, \gap^{-2} \leq \tau(\D) \leq c\, \errorC\, \gap^{-2}$ as long as $\errorC > c \sqrt{\log(T+\tfrac{n}{\gap} \log \tfrac{n}{\gap})}$, for a sufficiently large absolute constant $c$. \end{lemma} } \section{Experimental results: crowd-selection algorithms} \label{sec:expts-multi-crowds} We study the experimental performance of the various crowd-selection algorithms discussed in Section~\ref{sec:multi-crowd}. Specifically, we consider algorithms $\AlgUCB$ and $\AlgThompson$, and compare them to our straw-man solutions: $\AlgEER$ and $\AlgRR$.% \footnote{In the plots, we use shorter names for the algorithms: respectively, $\AlgUCBshort$, $\AlgThompsonshort$, $\AlgEERshort$, and $\AlgRRshort$.} Our goal is both to compare the different algorithms and to show that the associated costs are practical. We find that \AlgEER consistently outperforms \AlgRR for very small error rates, \AlgUCB significantly outperforms both across all error rates, and \AlgThompson significantly outperforms all three.
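\medskip
\noindent Before describing the setup in detail, the following minimal simulation sketch (Python/NumPy) illustrates the kind of loop being measured: a crowd is selected adaptively, its answer is recorded, and the run stops once a per-crowd copy of the single-crowd rule fires. The UCB-style index, the per-crowd stopping threshold, the accuracies and all parameter values below are simplified, hypothetical stand-ins; the exact \AlgUCB index and the composite rule used in our experiments (which may also consult the total crowd) differ in details.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def run_one(acc, C=3.0, T=2000, ucb_c=1.0):
    # One microtask with two options; crowd i answers correctly w.p. acc[i].
    k = len(acc)
    pulls = np.zeros(k, dtype=int)
    correct = np.zeros(k, dtype=int)
    for t in range(1, T + 1):
        if t <= k:                      # initialization: query each crowd once
            i = t - 1
        else:                           # UCB-style index on the empirical gap
            emp = np.abs(2.0 * correct / pulls - 1.0)
            i = int(np.argmax(emp + ucb_c * np.sqrt(np.log(t) / pulls)))
        pulls[i] += 1
        correct[i] += int(rng.random() < acc[i])
        # per-crowd copy of the single-crowd rule: stop once the empirical gap
        # of crowd i exceeds C / sqrt(number of samples taken from crowd i)
        if abs(2.0 * correct[i] / pulls[i] - 1.0) > C / np.sqrt(pulls[i]):
            return t, bool(2 * correct[i] < pulls[i])   # (cost, mistake?)
    return T, bool(2 * correct.sum() < pulls.sum())     # forced stop at horizon

costs, errors = zip(*[run_one([0.8, 0.6, 0.6]) for _ in range(2000)])
print("average cost:", np.mean(costs), " error rate:", np.mean(errors))
\end{verbatim}
\noindent Varying the threshold parameter \texttt{C} in such a loop traces out cost-versus-error-rate curves analogous to those reported below.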
We use all crowd-selection algorithms in conjunction with the composite stopping rule based on the single-crowd stopping rule proposed in Section~\ref{sec:single-crowd}. Recall that the stopping rule has a ``quality parameter'' $\errorC$ which implicitly controls the tradeoff between the error rate and the expected stopping time. We use three simulated workloads. All three workloads consist of microtasks with two options, three crowds, and unit costs. In the first workload, which we call the \emph{easy workload}, the crowds have gaps $(0.3,0,0)$. That is, one crowd has gap $0.3$ (so it returns the correct answer with probability $0.8$), and the remaining two crowds have gap $0$ (so they provide no useful information). This is a relatively easy workload for our crowd-selection algorithms because the best crowd has a much larger gap than the other crowds, which makes the best crowd easier to identify. In the second workload, called the \emph{medium workload}, crowds have gaps $(0.3,0.1,0.1)$, and in the third workload, called the \emph{hard workload}, the crowds have gaps $(0.3,0.2,0.2)$. The third workload is hard(er) for the crowd-selection algorithms in the sense that the best crowd is hard(er) to identify, because its gap is not much larger than the gap of the other crowds. The order in which the crowds are presented to the algorithms is randomized for each instance, but is kept the same across the different algorithms. The quality of an algorithm is measured by the tradeoff between its average total cost and its error rate. To study this tradeoff, we vary the quality parameter $\errorC$ to obtain (essentially) any desired error rate. We compare the different algorithms by reporting the average total cost of each algorithm (over 20,000 runs with the same quality parameter) for a range of error rates. Specifically, for each error rate we report the average cost of each algorithm normalized to the average cost of the naive algorithm \AlgRR (for the same error rate). See Figure~\ref{fig:multiCrowd-main} for the main plot: the average cost vs. error rate plots for all three workloads. Additional results, reported in Figure~\ref{fig:multiCrowd-details} (see page~\pageref{fig:multiCrowd-details}), show the raw average total costs and error rates for the range of values of the quality parameter $\errorC$. \begin{figure}[h] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \input{chart_bias0_cost_v_error.tex} \caption{Easy: gaps $(.3,0,0)$.} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \centering \input{chart_bias1_cost_v_error.tex} \caption{Medium: gaps $(.3,.1,.1)$.} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \centering \input{chart_bias2_cost_v_error.tex} \caption{Hard: gaps $(.3,.2,.2)$.} \end{subfigure} \caption{Crowd-selection algorithms: error rate vs. average total cost (relative to $\AlgRR$).} \label{fig:multiCrowd-main} \end{figure} For \AlgUCB we tested different values of the parameter $C$, which balances exploration and exploitation. We obtained the best results across a range of workloads with $C=1$, and this is the value we use in all the experiments. For \AlgThompson we start with a uniform prior on each crowd. \begin{figure}[p] \centering {\bf \Large Additional plots for crowd-selection algorithms} \vspace{5mm} \begin{subfigure}[b]{0.5\textwidth} \centering \input{chart_bias0_cost_v_C.tex} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \input{chart_bias0_error_v_C.tex} \end{subfigure}% The easy workload: gaps $(.3,0,0)$.
\vspace{2mm} \centering \begin{subfigure}[b]{0.5\textwidth} \centering \input{chart_bias1_cost_v_C.tex} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \input{chart_bias1_error_v_C.tex} \end{subfigure}% The medium workload: gaps $(.3,.1,.1)$. \vspace{2mm} \centering \begin{subfigure}[b]{0.5\textwidth} \centering \input{chart_bias2_cost_v_C.tex} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \input{chart_bias2_error_v_C.tex} \end{subfigure}% The hard workload: gaps $(.3,.2,.2)$. \vspace{2mm} \caption{Crowd-selection algorithms: Average cost and error rate vs. $\errorC$.} \label{fig:multiCrowd-details} \end{figure} \xhdr{Results and discussion.} For the easy workload, the cost of \AlgUCB is about $60\%$ to $70\%$ of the cost of \AlgRR. \AlgThompson is significantly better, with a cost of about $40\%$ of the cost of \AlgRR. For the medium workload, the cost of \AlgUCB is about $80\%$ to $90\%$ of the cost of \AlgRR. \AlgThompson is significantly better, with a cost of about $70\%$ of the cost of \AlgRR. For the hard workload, the cost of \AlgUCB is about $90\%$ to $100\%$ of the cost of \AlgRR. \AlgThompson is better, with a cost of about $80\%$ to $90\%$ of the cost of \AlgRR. While our analysis predicts that \AlgEER should be (somewhat) better than \AlgRR, our experiments do not confirm this for every error rate. As the gap of the other crowds approaches that of the best crowd, choosing the best crowd becomes less important, and so the advantage of the adaptive algorithms over \AlgRR diminishes. In the extreme case where all crowds have the same gap, all the algorithms would perform the same, with an error rate that depends on the stopping rule. We conclude that \AlgUCB provides an advantage, and \AlgThompson provides a significant advantage, over the naive scheme of \AlgRR. \section{Experimental results: single crowd} \label{sec:expts-single-crowd} We conduct two experiments. First, we analyze real-life workloads to find which gaps are typical for response distributions that arise in practice. Second, we study the performance of the single-crowd stopping rule suggested in Section~\ref{sec:single-crowd} using a large-scale simulation with a realistic distribution of gaps. We are mainly interested in the tradeoff between the error rate and the expected stopping time. We find that this tradeoff is acceptable in practice. \xhdr{Typical gaps in real-life workloads.} We analyze several batches of microtasks extracted from a commercial crowdsourcing platform (approx. 3000 microtasks total). Each batch consists of microtasks of the same type, with the same instructions for the workers. Most microtasks are related to relevance assessments for a web search engine. Each microtask was given to at least 50 judges coming from the same ``crowd''. In every batch, the empirical gaps of the microtasks are very close to being \emph{uniformly distributed} over the range. A practical take-away is that assuming a Bayesian prior on the gap would not be very helpful, which justifies and motivates our modeling choice not to assume Bayesian priors. In Figure~\ref{fig:CDF-gap}, we provide CDF plots for two of the batches; the plots for the other batches are similar.
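\medskip
\noindent For concreteness, the empirical gap of a single microtask can be computed from the raw judgments roughly as in the following sketch (Python; the exact aggregation behind our plots may differ): form the empirical frequencies of the options and take the difference between the top two.
\begin{verbatim}
from collections import Counter

def empirical_gap(judgments):
    # Difference between the empirical frequencies of the two most
    # popular options among the judgments collected for one microtask.
    counts = [c for _, c in Counter(judgments).most_common()]
    second = counts[1] if len(counts) > 1 else 0
    return (counts[0] - second) / len(judgments)

# e.g. 50 judgments over three options
print(empirical_gap(["A"] * 28 + ["B"] * 15 + ["C"] * 7))   # 0.26
\end{verbatim}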
\begin{figure}[h] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \input{chart_gap_distribution.tex} \caption{Batch 1: 128 microtasks, 2 options each} \label{fig:CDF-gap-a} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \input{chart_aggregate_gap_distribution.tex} \caption{Batch 2: 604 microtasks, variable \#options} \label{fig:CDF-gap-b} \end{subfigure} \caption{CDF for the empirical gap in real-life workloads.}\label{fig:CDF-gap} \end{figure} \xhdr{Our single-crowd stopping rule on simulated workloads.} We study the performance of the single-crowd stopping rule suggested in Section~\ref{sec:single-crowd}. Our simulated workload consists of 10,000 microtasks with two options each. For each microtask, the gap is chosen independently and uniformly at random in the range $[0.05, 1]$. This distribution of gaps is realistic according to the previous experiment. (Since there are only two options the gap fully describes the response distribution.) We vary the parameter $\errorC$ and for each $\errorC$ we measure the average total cost (i.e., the stopping time averaged over all microtasks) and the error rate. The results are reported in Figure~\ref{fig:synthetic-singleCrowd}. In particular, for this workload, an error rate of $< 5\%$ can be obtained with an average of $<8$ workers per microtask. \begin{figure}[h] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \input{chart_uniform_bias_cost_v_error.tex} \caption{Average cost vs. error rate} \end{subfigure}% \begin{subfigure}[b]{0.3\textwidth} \centering \input{chart_uniform_bias_cost_v_C.tex} \caption{Average cost vs. $\errorC$} \end{subfigure}% \begin{subfigure}[b]{0.3\textwidth} \centering \input{chart_uniform_bias_error_v_C.tex} \caption{Average error rate vs. $\errorC$} \end{subfigure} \caption{Our single-crowd stopping rule on the synthetic workload.}\label{fig:synthetic-singleCrowd} \end{figure} Our stopping rule adapts to the gap of the microtask: it uses only a few workers for easy microtasks (ones with a large gap), and more workers for harder microtasks (those with a small gap). In particular, we find that our stopping rule requires a significantly smaller number of workers than a non-adaptive stopping rule: one that always uses the same number of workers while ensuring a desired error rate. \section{Introduction} \label{sec:intro} In recent years there has been a surge of interest in automated methods for \emph{crowdsourcing}: a distributed model for problem-solving and experimentation that involves broadcasting the problem or parts thereof to multiple independent, relatively inexpensive workers and aggregating their solutions. Automation and optimization of this process at a large scale makes it possible to significantly reduce the costs associated with setting up, running, and analyzing the experiments. Crowdsourcing is finding applications across a wide range of domains, including information retrieval, natural language processing, and machine learning. A typical crowdsourcing workload is partitioned into \emph{microtasks} (also called Human Intelligence Tasks), where each microtask has a specific, simple structure and involves only a small amount of work. Each worker is presented with multiple microtasks of the same type, to save time on training. The rigidity and simplicity of the microtasks' structure ensures consistency across multiple microtasks and across multiple workers. An important industrial application of crowdsourcing concerns web search.
One specific goal in this domain is \emph{relevance assessment}: assessing the relevance of search results. One popular task design involves presenting a microtask in the form of a query along with the results from the search engine. The worker has to answer one question about the relevance of the query to the result set. Experiments such as these are used to evaluate the performance of a search engine, construct training sets, and discover queries which require more attention and potential algorithmic tuning. \xhdr{Stopping / selection issues.} The most basic experimental design issue for crowdsourcing is the \emph{stopping issue}: determining how many workers the platform should use for a given microtask before it stops and outputs the aggregate answer. The workers in a crowdsourcing environment are not very reliable, so multiple workers are usually needed to ensure a sufficient confidence level. There is an obvious tradeoff here: using more workers naturally increases the confidence of the aggregate result, but it also increases the cost and time associated with the experiment. One fairly common heuristic is to use fewer workers if the microtasks seem easy, and more workers if the microtasks seem hard. However, finding a sweet spot may be challenging, especially if different microtasks have different degrees of difficulty. Whenever one can distinguish between workers, we have a more nuanced \emph{selection issue}: which workers to choose for a given microtask? The workers typically come from a large, loosely managed population. Accordingly, the skill levels vary over the population, and are often hard to predict in advance. Further, the relative skill levels among workers may depend significantly on a particular microtask or type of microtasks. Despite this uncertainty, it is essential to choose workers that are suitable or cost-efficient for the microtask at hand, to the degree of granularity allowed by the crowdsourcing platform. For example, while targeting individual workers may be infeasible, one may be able to select some of the workers' attributes such as age range, gender, country, or education level. Also, the crowdsourcing platform may give access to multiple third-party providers of workers, and allow one to select among those. \xhdr{Our focus.} This paper is concerned with a combination of the stopping / selection issues discussed above. We seek a clean setting so as to understand these issues at a more fundamental level. \OMIT{As an initial paper which studies these issues in the context of crowdsourcing,} We focus on the scenario where several different populations of workers are available and can be targeted by the algorithm. As explained above, these populations may correspond to different selections of workers' attributes, or to multiple available third-party providers. We will refer to such populations as \emph{crowds}. We assume that the quality of each crowd depends on a particular microtask, and is not known in advance. Each microtask is processed by an online algorithm which can adaptively decide which crowd to ask next. Informally, the goal is to target the crowds that are most suitable for this microtask. Eventually the algorithm must stop and output the aggregate answer. This paper focuses on processing a single microtask. This allows us to simplify the setting: we do not need to model how the latent quantities are correlated across different microtasks, and how the decisions and feedbacks for different microtasks are interleaved over time.
Further, we separate the issue of learning the latent quality of a crowd for a given microtask from the issue of learning the (different but correlated) quality parameters of this crowd across multiple microtasks. \xhdr{Our model: the \problem.} We consider microtasks that are multiple-choice questions: one is given a set $\options$ of possible answers, henceforth called \emph{options}. \asedit{We allow more than two options. (In fact, we find this case to be much more difficult than the case of only two options.)} Informally, the microtask has a unique correct answer $x^* \in \options$, and the high-level goal of the algorithm is to find it. The algorithm has access to several crowds: populations of workers. Each crowd $i$ is represented by a distribution $\D_i$ over $\options$, called the \emph{response distribution} for $i$. We assume that all crowds agree on the correct answer:\footnote{Otherwise the algorithm's high-level goal is less clear. We chose to avoid this complication in the current version.} some option $x^*\in \options$ is the unique most probable option for each $\D_i$. In each round $t$, the algorithm picks some crowd $i=i_t$ and receives an independent sample from the corresponding response distribution $\D_i$. Eventually the algorithm must stop and output its guess for $x^*$. Each crowd $i$ has a known per-round cost $c_i$. The algorithm has two objectives to minimize: the total cost $\sum_t c_{i_t}$ and the \emph{error rate}: the probability that it makes a mistake, i.e., outputs an option other than $x^*$. \asedit{There are several ways to trade off these two objectives; we discuss this issue in more detail later in this section.} The independent sample in the above model abstracts the following interaction between the algorithm and the platform: the platform supplies a worker from the chosen crowd, the algorithm presents the microtask to this worker, and the worker picks some option. {\em Alternative interpretation.} The crowds can correspond not to different populations of workers but to different ways of presenting the same microtask. For example, one could vary the instructions, the order in which the options are presented, the fonts and the styles, and the accompanying images. {\em The name of the game.} Our model is similar to the extensively studied \emph{multi-armed bandit problem} (henceforth, \emph{MAB}) in that in each round an algorithm selects one alternative from a fixed and known set of available alternatives, and the feedback depends on the chosen alternative. However, while an MAB algorithm collects rewards, an algorithm in our model collects a \emph{survey} of workers' opinions. Hence we name our model the {\bf \problem}. \xhdr{Discussion of the model.} The \problem belongs to a broad class of online decision problems with an explore-exploit tradeoff: that is, the algorithm faces a tradeoff between collecting information (\emph{exploration}) and taking advantage of the information gathered so far (\emph{exploitation}). The paradigmatic problem in this class is MAB: in each round an algorithm picks one alternative (\emph{arm}) from a given set of arms, and receives a randomized, time-dependent reward associated with this arm; the goal is to maximize the total reward over time. Most papers on the explore-exploit tradeoff concern MAB and its variants. The \problem is different from MAB in several key respects.
First, the feedback is different: the feedback in MAB is the reward for the chosen alternative, whereas in our setting the feedback is the opinion of a worker from the chosen crowd. While the information received by a \Algorithm can be interpreted as a ``reward'', the value of such a reward is not revealed to the algorithm and, moreover, is not explicitly defined. Second, the algorithm's goal is different: the goal in MAB is to maximize the total reward over time, whereas the goal in our setting is to output the correct answer. Third, in our setting there are two types of ``alternatives'': crowds and options in the microtask. Apart from repeatedly selecting between the crowds, a \Algorithm needs to output one option: the aggregate answer for the microtask. An interesting feature of the \problem is that an algorithm for this problem consists of two components: a \emph{crowd-selection algorithm} -- an online algorithm that decides which crowd to ask next -- and a \emph{stopping rule}, which decides whether to stop in a given round and which option to output as the aggregate answer. These two components are, to a large extent, independent of one another: as long as they do not explicitly communicate with one another (or otherwise share a common communication protocol), any crowd-selection algorithm can be used in conjunction with any stopping rule.\footnote{The no-communication choice is quite reasonable: in fact, it can be complicated to design a reasonable \Algorithm that requires explicit communication between the crowd-selection algorithm and a stopping rule.} \asedit{The conceptual separation of a \Algorithm into the two components is akin to the one in Mechanism Design, where it is very useful to separate a ``mechanism'' into an ``allocation algorithm'' and a ``payment rule'', even though the two components are not entirely independent of one another.} \xhdr{\asedit{Trading off the total cost and the error rate.}} \asedit{In the \problem, an algorithm needs to trade off the two objectives: the total cost and the error rate.} In a typical application, the customer is willing to tolerate a certain error rate, and wishes to minimize the total cost as long as the error rate is below this threshold. However, as the error rate depends on the problem instance, there are several ways to make this formal. Indeed, one could consider the worst-case error rate (the maximum over all problem instances), a typical error rate (the expectation over a given ``typical'' distribution over problem instances), or a more nuanced notion such as the maximum over a given family of ``typical'' distributions. Note that the ``worst-case'' guarantees may be overly pessimistic, whereas considering ``typical'' distributions makes sense only if one knows what these distributions are. \asedit{For our theoretical guarantees, we focus on the worst-case error rate, and use the \emph{bi-criteria objective}, a standard approach from the theoretical computer science literature: we allow some slack on one objective, and compare on another. In our case, we allow slack on the worst-case error rate, and compare on the expected total cost.
More precisely: we consider a benchmark with some worst-case error rate $\delta>0$ and optimal total cost given this $\delta$, allow our algorithm to have a worst-case error rate which is (slightly) larger than $\delta$, and compare its expected total cost to that of the benchmark.} \asedit{Moreover, we obtain provable guarantees in terms of a different, problem-specific objective: use the same stopping rule, compare on the expected total cost. We believe that such results are well-motivated by the structure of the problem, and provide a more informative way to compare crowd-selection algorithms.} In our experiments, we fix the per-instance error rate, and compare on the expected total cost. An alternative objective is to assign a monetary penalty to a mistake, and optimize the overall cost, i.e., the cost of labor plus the penalty incurred in case of a mistake. However, it may be exceedingly difficult for a customer to assign such a monetary penalty,\footnote{In particular, this was the case in the authors' collaboration with a commercial crowdsourcing platform.} whereas it is typically feasible to specify tolerable error rates. \asedit{While we think this alternative is worth studying, we chose not to follow it in this paper.} \xhdr{Our approach: independent design.} Our approach is to design crowd-selection algorithms and stopping rules independently from one another. We make this design choice in order to make the overall algorithm design task more tractable. While this is not the only possible design choice, we find it productive, as it leads to a solid theoretical framework and algorithms that are practical and theoretically founded. Given this ``independent design'' approach, one needs to define the design goals for each of the two components. These goals are not immediately obvious. Indeed, two stopping rules may compare differently depending on the problem instance and the crowd-selection algorithms they are used with. Likewise, two crowd-selection algorithms may compare differently depending on the problem instance and the stopping rules they are used with. Therefore the notions of an optimal stopping rule and an optimal crowd-selection algorithm are not immediately well-defined. We resolve this conundrum as follows. We design crowd-selection algorithms that work well across a wide range of stopping rules. For a fair comparison between crowd-selection algorithms, we use them with the \emph{same} stopping rule (see Section~\ref{sec:benchmarks} for details), and argue that such a comparison is consistent across different stopping rules. \xhdr{Our contributions.} We introduce the \problem and present initial results in several directions: benchmarks, algorithms, theoretical analysis, and experiments. We are mainly concerned with the design of crowd-selection algorithms. Our crowd-selection algorithms work with arbitrary stopping rules. While we provide a specific (and quite reasonable) family of stopping rules for concreteness, third-party stopping rules can be easily plugged in. For the theoretical analysis of crowd-selection algorithms, we use a standard benchmark: the best time-invariant policy given all the latent information. The literature on online decision problems typically studies a deterministic version of this benchmark: the best fixed alternative (in our case, the best fixed crowd). We call it the \emph{deterministic benchmark}. We also consider a randomized version, whereby an alternative (crowd) is selected independently from the same distribution in each round; we call it the \emph{randomized benchmark}.
The technical definition of the benchmarks, as discussed in Section~\ref{sec:benchmarks}, roughly corresponds to equalizing the worst-case error rates and comparing costs. The specific contributions are as follows. \fakeItem[(1)] We largely solve the \problem as far as the deterministic benchmark is concerned. We design two crowd-selection algorithms, obtain strong provable guarantees, and show that they perform well in experiments. \asedit{Our provable guarantees are as follows. If our crowd-selection algorithm uses the same stopping rule as the benchmark, we match the expected total cost of the deterministic benchmark up to a small additive factor, assuming that all crowds have the same per-round costs. This result holds, essentially, for an arbitrary stopping rule. We obtain a similar, but slightly weaker result if crowds can have different per-round costs. Moreover, we can restate this as a bi-criteria result, in which we incur a small additive increase in the expected total cost and a $(1+k)$ multiplicative increase in the worst-case error rate, where $k$ is the number of crowds. The contribution in these results is mostly conceptual rather than technical: it involves ``independent design'' as discussed above, and a ``virtual rewards'' technique which allows us to take advantage of the MAB machinery.} For comparison, we consider a naive crowd-selection algorithm that tries each crowd in a round-robin fashion. We prove that this algorithm, and more generally any crowd-selection algorithm that does not adapt to the observed workers' responses, performs very badly against the deterministic benchmark. While one expects this on an intuitive level, the corresponding mathematical statement is not easy to prove. In experiments, our proposed crowd-selection algorithms perform much better than the naive approach. \fakeItem[(2)] We observe that the randomized benchmark dramatically outperforms the deterministic benchmark on some problem instances. This is a very unusual property for an online decision problem.\footnote{We are aware of only one published example of an online decision problem with this property, in a very different context of dynamic pricing~\cite{DynPricing-ec12}. However, the results in~\cite{DynPricing-ec12} focus on a special case where the two benchmarks essentially coincide.} (However, the two benchmarks coincide when there are only two possible answers.) We design an algorithm which significantly improves over the expected total cost of the deterministic benchmark on some problem instances (while not quite reaching the randomized benchmark), \asedit{when both our algorithm and the benchmarks are run with the same stopping rule}. This appears to be the first published result in the literature on online decision problems where an algorithm provably improves over the deterministic benchmark. \asedit{We can also restate this result in terms of the bi-criteria objective. Then we suffer a $(1+k)$ multiplicative increase in the worst-case error rate.} \fakeItem[(3)] We provide a specific stopping rule for concreteness; this stopping rule is simple, tunable, has nearly optimal theoretical guarantees (in a certain formal sense), and works well in experiments. \OMIT{This paper is mainly concerned with the design of crowd-selection algorithms. In particular, we do not attempt to fully optimize the stopping rules.
We provide a specific stopping rule for concreteness; this stopping rule is simple, tunable, has nearly optimal theoretical guarantees (in a certain formal sense), and works well in experiments. However, a third-party stopping rule can be easily plugged in.} \OMIT{ We view crowdsourcing as a tool that a human computation system can use to distribute tasks (work). Similar to the terminology presented in \cite{Law11}, we define human computation as a computation (or task) that is performed by a human.} \section{Crowd selection against the deterministic benchmark} \label{sec:multi-crowd} In this section we design crowd-selection algorithms that compete with the deterministic benchmark. Throughout the section, let $R_0$ be a fixed single-parameter stopping rule. Recall that the deterministic benchmark is defined as $\min \cost(i|R_0)$, where the minimum is over all crowds $i$. We consider arbitrary composite stopping rules based on $R_0$, under the mild assumption that $R_0$ does not favor one option over another. Formally, we assume that the probability that $R_0$ stops at any given round, conditional on any fixed history (the sequence of observations that $R_0$ inputs before this round), does not change if the options are permuted. Then $R_0$ and the corresponding composite stopping rule are called \emph{symmetric}. For the case of two options (when the expected stopping time of $R_0$ depends only on the gap of the crowd that $R_0$ interacts with), we sometimes make another mild assumption: that the expected stopping time decreases in the gap; we call such $R_0$ \emph{gap-decreasing}. \subsection{Crowd-selection algorithms} \xhdr{Virtual reward heuristic.} Our crowd-selection algorithms are based on the following idea, which we call the virtual reward heuristic.% \footnote{We thank anonymous reviewers for pointing out that our index-based algorithm can be interpreted via virtual rewards. } Given an instance of the \problem, consider an MAB instance where crowds correspond to arms, and selecting each crowd $i$ results in reward $f_i = f(c_i/\gap_i^2)$, for some fixed decreasing function $f$. (Given the discussion in Section~\ref{sec:single-crowd}, we use $c_i/\gap_i^2$ as an approximation for $\cost(i|R_0)$; we can also plug in a better approximation when and if one is available.) Call $f_i$ the \emph{virtual reward}; note that it is not directly observed by a \Algorithm, since it depends on the gap $\gap_i$. However, various off-the-shelf bandit algorithms can be restated in terms of estimated rewards, rather than the actual observed rewards. The idea is to use such bandit algorithms and plug in our own estimates for the rewards. A bandit algorithm thus applied would implicitly minimize the number of times suboptimal crowds are chosen. This is a desirable by-product of the design goal in MAB, which is to maximize the total (virtual) reward. We are not directly interested in this design goal, but we take advantage of the by-product. \xhdr{Algorithm 1: \UCB with virtual rewards.} Our first crowd-selection algorithm is based on $\UCB$~\cite{bandits-ucb1}, a standard MAB algorithm. We use virtual rewards $f_i = \gap_i/\sqrt{c_i}$. We observe that \UCB has the property that at each time $t$, it only requires an estimate of $f_i$ and a confidence term for this estimate. Motivated by~\eqref{eq:conf-rad-gap}, we use $\empir{\gap}{i}{t}/\sqrt{c_i}$ as the estimate for $f_i$, and $C/\sqrt{c_i\, \NSamples{i}{t}}$ as the confidence term.
The resulting crowd-selection algorithm, which we call $\AlgUCB$, proceeds as follows. In each round $t$ it chooses the crowd $i$ which maximizes the \emph{index} $I_{i,t}$, defined as \begin{align}\label{eq:UCB-index} I_{i,t} = c_i^{-1/2}\left(\, \empir{\gap}{i}{t} + C/\sqrt{\NSamples{i}{t}} \,\right). \end{align} \noindent For the analysis, we use~\refeq{eq:UCB-index} with $C = \sqrt{8\log t}$. In our experiments, $C=1$ appears to perform best. \xhdr{Algorithm 2: Thompson heuristic.} Our second crowd-selection algorithm, called $\AlgThompson$, is an adaptation of the \emph{Thompson heuristic}~\cite{Thompson-1933} for MAB to the virtual rewards $f_i = \gap_i/\sqrt{c_i}$. The algorithm proceeds as follows. For each round $t$ and each crowd $i$, let $\mathcal{P}_{i,t}$ be the Bayesian posterior distribution for the gap $\gap_i$ given the observations from crowd $i$ up to round $t$ (starting from the uniform prior). Sample $\zeta_i$ independently from $\mathcal{P}_{i,t}$. Pick the crowd with the largest \emph{index} $\zeta_i/\sqrt{c_i}$. As in \UCB, the index of crowd $i$ is chosen from the confidence interval for the (virtual) reward of this crowd, but here it is a random sample from this interval, whereas in \UCB it is the upper bound. It appears difficult to compute the posteriors $\mathcal{P}_{i,t}$ exactly, so in practice an approximation can be used. In our simulations we focus on the case of two options, which we call $x$ and $y$. For each crowd $i$ and round $t$, we approximate $\mathcal{P}_{i,t}$ by the Beta distribution with shape parameters $\alpha = 1+\NSamples{i}{t}(x)$ and $\beta = 1+\NSamples{i}{t}(y)$, where $\NSamples{i}{t}(x)\geq \NSamples{i}{t}(y)$. (Essentially, we ignore the possibility that $x$ is not the right answer.) It is not clear how the posterior $\mathcal{P}_{i,t}$ in our problem corresponds to the one in the original MAB problem, so we cannot directly invoke the analyses of the Thompson heuristic for MAB~\cite{Thompson-nips11,Thompson-analysis-arxiv11}. \newcommand{\TP}[1]{T_{\mathtt{Ph#1}}} \xhdr{Straw-man approaches.} We compare the two algorithms presented above to an obvious naive approach: iterate through each crowd in a round-robin fashion. More precisely, we consider a slightly more refined version where in each round the crowd is sampled from a fixed distribution $\mu$ over crowds. We will call such algorithms \emph{non-adaptive}. The most reasonable version, called \AlgRR (short for ``randomized round-robin''), is to sample each crowd $i$ with probability $\mu_i\sim 1/c_i$.% \footnote{For uniform costs it is natural to use a uniform distribution for $\mu$. For non-uniform costs our choice is motivated by Theorem~\ref{thm:LB}, where it (approximately) minimizes the competitive ratio.} In the literature on MAB, more sophisticated algorithms are often compared to the basic approach: first explore, then exploit. In our context this means to first \emph{explore} until we can identify the best crowd, then pick this crowd and \emph{exploit}. So for the sake of comparison we also develop a crowd-selection algorithm that is directly based on this approach. (This algorithm is not based on the virtual rewards.) In our experiments we find it vastly inferior to $\AlgUCB$ and $\AlgThompson$.
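For concreteness, here is a minimal sketch (our own illustration in Python, not the code used in our experiments; all identifiers are hypothetical) of how the $\AlgUCB$ and $\AlgThompson$ indices described above can be computed, using the two-option Beta approximation for the latter. In both cases the crowd with the largest index is queried next; only the way the index is formed from the empirical counts differs.
\begin{verbatim}
# Sketch of the two index computations. counts[i] maps options to the number
# of times crowd i returned them; costs[i] is the per-round cost c_i.
import math
import random

def ucb_index(counts_i, cost_i, C=1.0):
    n = sum(counts_i.values())
    if n == 0:
        return float("inf")  # force one initial sample from every crowd
    freqs = sorted((c / n for c in counts_i.values()), reverse=True)
    emp_gap = freqs[0] - (freqs[1] if len(freqs) > 1 else 0.0)
    return (emp_gap + C / math.sqrt(n)) / math.sqrt(cost_i)

def thompson_index(counts_i, cost_i):
    # Beta approximation from the text: shape parameters 1 + #majority votes
    # and 1 + #minority votes; the sample plays the role of zeta_i.
    votes = sorted(counts_i.values(), reverse=True) + [0, 0]
    zeta = random.betavariate(1 + votes[0], 1 + votes[1])
    return zeta / math.sqrt(cost_i)

def pick_crowd(counts, costs, index_fn=ucb_index):
    return max(range(len(costs)), key=lambda i: index_fn(counts[i], costs[i]))
\end{verbatim}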
The ``explore, then exploit'' design does not quite work as is: selecting the best crowd with high probability seems to require a high-probability guarantee that this crowd can produce the correct answer with the current data, in which case there is no need for a further exploitation phase (and so we are essentially back to $\AlgRR$). Instead, our algorithm explores until it can identify the best crowd with \emph{low} confidence, then it exploits with this crowd until it sufficiently boosts the confidence or until it realizes that it has selected a wrong crowd to exploit. The latter possibility necessitates a third phase, called \emph{rollback}, in which the algorithm explores until it finds the right answer with high confidence. The algorithm assumes that the single-crowd stopping rule $R_0$ has a quality parameter $\errorC$ which controls the trade-off between the error rate and the expected running time (as in Section~\ref{sec:single-crowd}). In the exploration phase, we also use a \emph{low-confidence} version of $R_0$ that is parameterized with a lower value $\errorC'<\errorC$; we run one low-confidence instance of $R_0$ for each crowd. The algorithm, called $\AlgEER$, proceeds in three phases (and stops whenever the composite stopping rule so decides). In the exploration phase, it runs \AlgRR until the low-confidence version of $R_0$ stops for some crowd $i^*$. In the exploitation phase, it always chooses crowd $i^*$. This phase lasts $\alpha$ times as long as the exploration phase, where the parameter $\alpha$ is chosen so that crowd $i^*$ produces a high-confidence answer w.h.p. if it is indeed the best crowd.% \footnote{We conjecture that for $R_0$ from Section~\ref{sec:single-crowd} one can take $\alpha = \Theta(\errorC/\errorC')$.} Finally, in the rollback phase it runs \AlgRR. \subsection{Analysis: upper bounds} We start with a lemma that captures the intuition behind the virtual reward heuristic, explaining how it helps to minimize the selection of suboptimal crowds. Then we derive an upper bound for $\AlgUCB$. \begin{lemma}\label{lm:Ni} Let $i^* = \argmin_i c_i/\eps^2_i$ be the approximate best crowd. Let $R_0$ be a symmetric single-crowd stopping rule. Then for any crowd-selection algorithm $\A$, letting $N_i$ be the number of times crowd $i$ is chosen, we have $$ \cost(\A|R_0) \leq \cost(i^*|R_0) + \textstyle \sum_{i\neq i^*} c_i\, \E[N_i]. $$ \end{lemma} This is a non-trivial statement because $\cost(i^*|R_0)$ refers not to the execution of $\A$, but to a different execution in which crowd $i^*$ is always chosen. The proof uses a ``coupling argument''. \begin{proof} Let $\A^*$ be the crowd-selection algorithm which corresponds to always choosing crowd $i^*$. To compare $\cost(\A|R_0)$ and $\cost(\A^*|R_0)$, let us assume w.l.o.g. that the two algorithms are run on correlated sources of randomness. Specifically, assume that both algorithms are run on the same realization of answers for crowd $i^*$: the $\ell$-th time they ask this crowd, both algorithms get the same answer. Moreover, assume that the instance of $R_0$ that works with crowd $i^*$ uses the same random seed for both algorithms. Let $N$ be the realized stopping time for $\A^*$. Then $\A$ must stop no later than the round in which it chooses crowd $i^*$ for the $N$-th time. It follows that the difference in the realized total costs between $\A$ and $\A^*$ is at most $\sum_{i\neq i^*}\; c_i N_i$. The claim follows by taking expectation over the randomness in the crowds and in the stopping rule.
\end{proof} \begin{theorem}[\AlgUCB]\label{thm:UCB} Let $i^* = \argmin_i c_i/\gap^2_i$ be the approximate best crowd. Let $R_0$ be a symmetric single-crowd stopping rule. Assume $R_0$ must stop after at most $T$ rounds. Define \AlgUCB by~\refeq{eq:UCB-index} with $C = \sqrt{8\log t}$, for each round $t$. Let $\Lambda_i = ( c_i (f_{i^*} - f_i))^{-2}$ and $\Lambda = \sum_{i\neq i^*} \Lambda_i$. Then $$ \cost(\AlgUCB|R_0) \leq \cost(i^*|R_0) + O(\Lambda \log T). $$ \end{theorem} \begin{proof}[Proof Sketch] Plugging $C = \sqrt{8\log t}$ into~\eqref{eq:conf-rad-gap} and dividing by $\sqrt{c_i}$, we obtain the confidence bound for $|f_i - \empir{\gap}{i}{t}/\sqrt{c_i}|$ that is needed in the original analysis of \UCB in~\cite{bandits-ucb1}. Then, as per that analysis, it follows that for each crowd $i\neq i^*$ and each round $t$ we have $ \E[\NSamples{i}{t}] \leq \Lambda_i \log t$. (This is also not difficult to derive directly.) To complete the proof, note that $t\leq T$ and invoke Lemma~\ref{lm:Ni}. \end{proof} Note that the approximate best crowd $i^*$ may be different from the (actual) best crowd, so the guarantee in Theorem~\ref{thm:UCB} is only as good as the difference $ \cost(i^*|R_0) - \min_i \cost(i|R_0)$. Note that $i^*$ is in fact the best crowd for the basic special case of uniform costs and two options (assuming that $R_0$ is gap-decreasing). It is not clear whether the constants $\Lambda_i$ can be significantly improved. For uniform costs we have $\Lambda_i = (\eps_{i^*}- \eps_i)^{-2} $, which is essentially the best one could hope for. This is because one needs to try each crowd $i\neq i^*$ at least $\Omega(\Lambda_i)$ times to tell it apart from crowd $i^*$. \footnote{This can be proved using an easy reduction from an instance of the MAB problem where each arm $i$ brings reward $1$ with probability $(1+\gap_i)/2$, and reward $0$ otherwise. Treat this as an instance of the \problem, where arms correspond to crowds, and options to rewards. An algorithm that finds the crowd with a larger gap in fewer than $\Omega(\Lambda_i)$ steps would also find an arm with a larger expected reward, which would violate the corresponding lower bound for the MAB problem (see~\cite{bandits-exp3}).} \subsection{Analysis: lower bound for non-adaptive crowd selection} The purpose of this section is to argue that non-adaptive crowd-selection algorithms perform badly compared to $\AlgUCB$. We prove that the competitive ratio of any non-adaptive crowd-selection algorithm is bounded from below by (essentially) the number of crowds. We contrast this with an upper bound on the competitive ratio of $\AlgUCB$, which we derive from Theorem~\ref{thm:UCB}. Here the competitive ratio of algorithm $\A$ (with respect to the deterministic benchmark) is defined as $ \max \frac{\cost(\A|R_0)}{\min_i \cost(i|R_0)}$, where the outer $\max$ is over all problem instances in a given family of problem instances. We focus on a very simple family: problem instances with two options and uniform costs, in which one crowd has gap $\gap>0$ and all other crowds have gap $0$; we call such instances \emph{$\gap$-simple}. Our result holds for a version of the composite stopping rule that does not use the total crowd. Note that considering the total crowd does not, intuitively, make sense for the $\gap$-simple problem instances, and we did not use it in the proof of Theorem~\ref{thm:UCB}, either. \begin{theorem}[$\AlgRR$]\label{thm:LB} Let $R_0$ be a symmetric single-crowd stopping rule with worst-case error rate $\rho$.
Assume that the composite stopping rule does not use the total crowd. Consider a non-adaptive crowd-selection algorithm $\A$ whose distribution over crowds is $\mu$. Then for each $\gap>0$, the competitive ratio over the $\gap$-simple problem instances is at least $ \frac{\sum_i c_i\,\mu_i}{\min_i c_i\,\mu_i}\; (1-2k\rho)$, where $k$ is the number of crowds. \end{theorem} Note that $\min \, \frac{\sum_i c_i\,\mu_i}{\min_i c_i\,\mu_i} = k$, where the $\min$ is taken over all distributions $\mu$. The minimizing $\mu$ satisfies $\mu_i \sim 1/c_i$ for each crowd $i$, i.e., it is the distribution used by \AlgRR. The $(1-2k\rho)$ factor could be an artifact of our somewhat crude method to bound the ``contribution'' of the gap-$0$ crowds. We conjecture that this factor is unnecessary (perhaps under some minor assumptions on $R_0$). To prove Theorem~\ref{thm:LB}, we essentially need to compare the stopping time of the composite stopping rule $R$ with the stopping time of the instance of $R_0$ that works with the gap-$\gap$ crowd. The main technical difficulty is to show that the other crowds are not likely to force $R$ to stop before this $R_0$ instance does. To this end, we use a lemma stating that $R_0$ is not likely to stop in finite time when applied to a gap-$0$ crowd. \begin{lemma}\label{lm:LB-infty} Consider a symmetric single-crowd stopping rule $R_0$ with worst-case error rate $\rho$. Suppose $R_0$ is applied to a crowd with gap $0$. Then $\Pr[\text{$R_0$ stops in finite time}] \leq 2\rho$. \end{lemma} \begin{proof} Intuitively, if $R_0$ stops early when the gap is $0$, then it is likely to make a mistake when the gap is very small but positive. However, connecting the probability in question with the error rate of $R_0$ requires some work. Suppose $R_0$ is applied to a crowd with gap $\gap$. Let $q(\gap,t,x)$ be the probability that $R_0$ stops at round $t$ and ``outputs'' option $x$ (in the sense that by the time $R_0$ stops, $x$ is the majority vote). We claim that for all rounds $t$ and each option $x$ we have \begin{align}\label{eq:lm-LB-infty} \lim_{\gap\to 0}\; q(\gap,t,x) = q(0,t,x). \end{align} Indeed, suppose not. Then for some $\delta>0$ there exist arbitrarily small gaps $\gap>0$ such that $|q(\gap,t,x)-q(0,t,x)| >\delta$. Thus it is possible to tell apart a crowd with gap $0$ from a crowd with gap $\gap$ by observing $\Theta(\delta^{-2})$ independent runs of $R_0$, where each run continues for $t$ steps. In other words, it is possible to tell apart a fair coin from a gap-$\gap$ coin using $\Theta(t\,\delta^{-2})$ ``coin tosses'', for fixed $t$ and $\delta>0$ and an arbitrarily small $\gap$. Contradiction. Claim proved. Let $x$ and $y$ be the two options, and let $x$ be the correct answer. Let $q(\gap,t)$ be the probability that $R_0$ stops at round $t$. Let $\alpha(\gap|t) = q(\gap,t,y) / q(\gap,t) $ be the conditional probability that $R_0$ outputs a wrong answer given that it stops at round $t$. Note that by~\eqref{eq:lm-LB-infty} for each round $t$ it holds that $q(\gap,t)\to q(0,t)$ and $\alpha(\gap|t)\to \alpha(0|t)$ as $\gap\to 0$. Therefore for each round $t_0\in\N$ we have: \begin{align*} \rho \geq \textstyle \sum_{t\in\N}\; \alpha(\gap|t)\; q(\gap,t) \geq \textstyle \sum_{t\leq t_0 }\; \alpha(\gap|t) \; q(\gap,t) \to_{\gap\to 0} \textstyle \sum_{t\leq t_0 }\; \alpha(0|t)\; q(0,t). \end{align*} Note that $\alpha(0|t) = \tfrac12$ by symmetry. It follows that $\sum_{t\leq t_0 }\; q(0,t) \leq 2\rho $ for each $t_0\in \N$.
Therefore the probability that $R_0$ stops in finite time is $\sum_{t=1}^\infty\; q(0,t) \leq 2\rho $. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:LB}] Suppose algorithm $\A$ is applied to an $\gap$-simple instance of the \problem. To simplify the notation, assume that crowd $1$ is the crowd with gap $\gap$ (and all other crowds have gap $0$). Let $R_{(i)}$ be the instance of $R_0$ that corresponds to a given crowd $i$. Denote the composite stopping rule by $R$. Let $\sigma_R$ be the stopping time of $R$: the round in which $R$ stops. For the following two definitions, let us consider an execution of algorithm $\A$ that runs forever (i.e., it keeps running even after $R$ decides to stop). First, let $\tau_i$ be the ``local'' stopping time of $R_{(i)}$: the number of samples from crowd $i$ that $R_{(i)}$ inputs before it decides to stop. Second, let $\sigma_i$ be the ``global'' stopping time of $R_{(i)}$: the round when $R_{(i)}$ decides to stop. Note that $\sigma_R = \min_i\, \sigma_i$. Let us use Lemma~\ref{lm:LB-infty} to show that $R$ stops essentially when $R_{(1)}$ tells it to stop. Namely: \begin{align}\label{eq:pf-LB-sigma} \E[\sigma_1]\; (1-2k\rho) \leq \E[\sigma_R]. \end{align} To prove~\eqref{eq:pf-LB-sigma}, consider the event $E \triangleq \{\min_{i>1} \tau_i =\infty\}$, and let $1_E$ be the indicator variable of this event. Note that $\sigma_R \geq \sigma_1\, 1_E$ and that the random variables $\sigma_1$ and $1_E$ are independent. It follows that $ \E[\sigma_R] \geq \Pr[E]\,\E[\sigma_1]$. Finally, Lemma~\ref{lm:LB-infty} implies that $\Pr[E] \geq 1-2k\rho$. Claim proved. Let $i_t$ be the crowd chosen by $\A$ in round $t$. Then by Wald's identity we have \begin{align*} \E[\tau_1] &= \E\left[ \sum_{t=1}^{\sigma_1} 1_{\{i_t=1\}} \right] = \E[1_{\{i_t=1\}}] \; \E[ \sigma_1] = \mu_1\; \E[\sigma_1] \\ \E[\cost(\A|R_0)] &= \E\left[\sum_{t=1}^{\sigma_R} c_{i_t} \right] = \E[c_{i_t}]\, \E[\sigma_R] = (\textstyle \sum_i c_i\,\mu_i)\; \E[\sigma_R] . \end{align*} Therefore, plugging in~\eqref{eq:pf-LB-sigma}, we obtain \begin{align*} \frac{\E[\cost(\A|R_0)]}{c_1\,\E[\tau_1]} \geq \frac{\textstyle \sum_i c_i\,\mu_i}{c_1\,\mu_1}\; (1-2k\rho). \end{align*} It remains to observe that $c_1\,\E[\tau_1]$ is precisely the expected total cost of the deterministic benchmark. \end{proof} \xhdr{Competitive ratio of $\AlgUCB$.} Consider the case of two options and uniform costs. Then (assuming $R_0$ is gap-decreasing) the approximate best crowd $i^*$ in Theorem~\ref{thm:UCB} is the best crowd. The competitive ratio of $\AlgUCB$ is, in the notation of Theorem~\ref{thm:UCB}, at most $1 + \frac{O(\Lambda \log T)}{\cost(i^*|R_0)}$. This ratio is close to $1$ when $R_0$ is tuned so as to decrease the error rate at the expense of increasing the expected running time. \section{Related work} \label{sec:related-work} For general background on crowdsourcing and human computation, refer to \citet{Law11}. Most work on crowdsourcing is done using platforms such as \emph{Amazon Mechanical Turk} or \emph{CrowdFlower}. Results using those platforms have shown that majority voting is a good approach to achieving quality~\cite{Snow08}. Get Another Label~\cite{Sheng08} explores adaptive schemes for the single-crowd case under Bayesian assumptions (while our focus is on multiple crowds and regret under non-Bayesian uncertainty). A study on machine translation quality uses preference voting for combining ranked judgments~\cite{Callison-Burch09}.
Vox Populi~\cite{Dekel09} suggests pruning low-quality workers; however, their approach is not adaptive and their analysis does not provide regret bounds (while our focus is on adaptively choosing which crowds to exploit and obtaining regret bounds against an optimal algorithm that knows the quality of each crowd). Budget-Optimal Task Allocation~\cite{KOS11} focuses on a non-adaptive solution to the task allocation problem given a prior distribution on both tasks and judges (while we focus on adaptive solutions and do not assume priors on judges or tasks). From a methodology perspective, CrowdSynth focuses on addressing consensus tasks by leveraging supervised learning~\cite{Kamar12}. Adding a crowdsourcing layer as part of a computation engine is a very recent line of research. An example is CrowdDB, a system for crowdsourcing which includes human computation for processing queries~\cite{Franklin11}. CrowdDB offers basic quality control features, but we expect adoption of more advanced techniques as those systems become more available within the community. Multi-armed bandits (MAB) have a rich literature in Statistics, Operations Research, Computer Science and Economics. A proper discussion of this literature is beyond our scope; see \cite{CesaBL-book} for background. Most relevant to our setting is the work on prior-free MAB with stochastic rewards: \cite{Lai-Robbins-85,bandits-ucb1} and the follow-up work, and the Thompson heuristic~\cite{Thompson-1933}. Recent work on the Thompson heuristic includes \cite{Thompson-Bing-icml10,Thompson-Scott10,Thompson-nips11,Thompson-analysis-arxiv11}. \asedit{ Our setting is superficially similar to \emph{budgeted MAB}, a version of MAB where the goal is to find the best arm after a fixed period of exploration (e.g., \cite{Tsitsiklis-bandits-04,Bubeck-alt09}). Likewise, there is some similarity with the work on \emph{budgeted active learning} (e.g. \cite{Lizotte-uai03,Madani-uai04,Kaplan-stoc05}), where an algorithm repeatedly chooses instances and receives correct labels for these instances, with a goal to eventually output the correct hypothesis. The difference is that in the \problem, an algorithm repeatedly chooses among \emph{crowds}, whereas in the end the goal is to pick the correct \emph{option}; moreover, the true ``reward'' or ``label'' for each chosen crowd is not revealed to the algorithm and is not even well-defined.} Settings similar to stopping rules for a single crowd (but with somewhat different technical objectives) were considered in prior work, e.g. \cite{Bechhofer59,Ramey79,Bechhofer85,Dagum-sicomp00,Mnih-icml08}. \asedit{ In very recent concurrent and independent work, \cite{Jenn-aaai12,Jenn-icml13,Chen-icml13,Tran-aamas13} studied related, but technically incomparable settings. The first three papers consider adaptive task assignment with multiple tasks and a budget constraint on the total number or total cost of the workers. In \cite{Jenn-aaai12,Jenn-icml13} workers arrive over time, and the algorithm selects which tasks to assign. In \cite{Chen-icml13}, in each round the algorithm chooses a worker and a task, and Bayesian priors are available for the difficulty of each task and the skill level of each worker (whereas our setting is prior-independent). Finally, \citet{Tran-aamas13} studies a \emph{non-adaptive} task assignment problem where the algorithm needs to distribute a given budget across multiple tasks with known per-worker costs.
} \section{A warm-up: single-crowd stopping rules} \label{sec:single-crowd} Consider a special case with only one crowd to choose from. It is clear that whenever a \Algorithm decides to stop, it should output the most frequent option in the sample. Therefore the algorithm reduces to what we call a \emph{single-crowd stopping rule}: an online algorithm which in every round inputs an option $x\in\options$ and decides whether to stop. When multiple crowds are available, a single-crowd stopping rule can be applied to each crowd separately. This discussion of the single-crowd stopping rules, together with the notation and tools that we introduce along the way, forms a foundation for the rest of the paper. A single-crowd stopping rule is characterized by two quantities that are to be minimized: the expected stopping time and the \emph{error rate}: the probability that once the rule decides to stop, the most frequent option in the sample is not $x^*$. Note that both quantities depend on the problem instance; therefore we leave the bi-criteria objective somewhat informal at this point. \xhdr{A simple single-crowd stopping rule.} We suggest the following single-crowd stopping rule: \begin{align}\label{eq:stopping-rule} \text{Stop if}\; \empir{\gap}{i}{t}\, \NSamples{i}{t} > \errorC\,\sqrt{\NSamples{i}{t}}. \end{align} Here $i$ is the crowd the stopping rule is applied to, and $\errorC$ is the \emph{quality parameter} which indirectly controls the tradeoff between the error rate and the expected stopping time. Specifically, increasing $\errorC$ decreases the error rate and increases the expected stopping time. If there are only two options, call them $x$ and $y$, then the left-hand side of the stopping rule is simply $|\NSamples{i}{t}(x) - \NSamples{i}{t}(y)|$. The right-hand side of the stopping rule is a confidence term, which should be large enough to guarantee the desired confidence level. The $\sqrt{\NSamples{i}{t}}$ is there because the standard deviation of the Binomial distribution with $N$ samples is proportional to $\sqrt{N}$. In our experiments, we use a ``smooth" version of this stopping rule: we randomly round the confidence term to one of the two nearest integers. In particular, the smooth version is meaningful even with $\errorC <1$ (whereas the deterministic version with $\errorC <1$ always stops after one round). \xhdr{Analysis.} We argue that the proposed single-crowd stopping rule is quite reasonable. To this end, we obtain a provable guarantee on the tradeoff between the expected stopping time and the worst-case error rate. Further, we prove that this guarantee is nearly optimal across all single-crowd stopping rules. Both results above are in terms of the gap of the crowd that the stopping rule interacts with. We conclude that the gap is a crucial parameter for the \problem. \begin{theorem}\label{thm:single-crowd} Consider the stopping rule~\refeq{eq:stopping-rule} with $\errorC = \sqrt{\log (\tfrac{n}{\delta}\, \NSamples{i}{t}^2 )}$, for some $\delta>0$. The error rate of this stopping rule is at most $O(\delta)$, and the expected stopping time is at most $O\left( \gap_i^{-2}\, \log \tfrac{n}{\delta \gap_i} \right)$. \end{theorem} The proof of Theorem~\ref{thm:single-crowd}, and several other proofs in the paper, rely on the Azuma-Hoeffding inequality. 
More specifically, we use the following corollary: for each $C>0$, each round $t$, and each option $x\in \options$ \begin{align}\label{eq:conf-rad} \Pr\left[\; |\D_i(x)-\empir{\D}{i}{t}(x)| \leq C/\sqrt{\NSamples{i}{t}} \;\right] \geq 1-e^{-\Omega(C^2)}. \end{align} In particular, taking the Union Bound over all options $x\in\options$, we obtain: \begin{align}\label{eq:conf-rad-gap} \Pr\left[\; |\empir{\gap}{i}{t} - \gap_i| \leq C/\sqrt{\NSamples{i}{t}} \;\right] \geq 1- n\,e^{-\Omega(C^2)}. \end{align} \begin{proof}[Proof of Theorem~\ref{thm:single-crowd}] Fix $a\geq 1$ and let $C_t = \sqrt{\log (a\, \tfrac{n}{\delta}\, \NSamples{i}{t}^2 )}$. Let $\mathcal{E}_{x,t}$ be the event in~\eqref{eq:conf-rad} with $C = C_t$. Consider the event that $\mathcal{E}_{x,t}$ holds for all options $x\in \options$ and all rounds $t$; call it the \emph{clean event}. Taking the Union Bound, we see that the clean event holds with probability at least $1-O(\delta/a)$. First, assuming the clean event we have $|\gap_i - \empir{\gap}{i}{t}| \leq 2\, C_t/\sqrt{\NSamples{i}{t}} $ for all rounds $t$. Then the stopping rule~\refeq{eq:stopping-rule} stops as soon as $\gap_i \geq 3\, C_t/\sqrt{\NSamples{i}{t}}$, which happens as soon as $\NSamples{i}{t} = O\left( \gap_i^{-2}\, \log \tfrac{an}{\delta \gap_i} \right)$. Integrating this over all $a\geq 1$, we derive that the expected stopping time is as claimed. Second, take $a=1$ and assume the clean event. Suppose the stopping rule stops at some round $t$. Let $x$ be the most frequent option after this round. Then $\empir{\D}{i}{t}(x)-\empir{\D}{i}{t}(y) \geq C_t/\sqrt{\NSamples{i}{t}}$ for all options $y\neq x$. It follows that $\D_i(x) > \D_i(y)$ for all options $y\neq x$, i.e., $x$ is the correct answer. \end{proof} The following lower bound easily follows from classical results on coin-tossing. Essentially, one needs at least $\Omega(\gap^{-2})$ samples from a crowd with gap $\gap>0$ to obtain the correct answer. \begin{theorem}\label{thm:single-crowd-LB} Let $R_0$ be any single-crowd stopping rule with worst-case error rate less than $\delta$. When applied to a crowd with gap $\gap>0$, the expected stopping time of $R_0$ is at least $\Omega(\gap^{-2} \log \tfrac{1}{\delta})$. \end{theorem} While the upper bound in Theorem~\ref{thm:single-crowd} is close to the lower bound in Theorem~\ref{thm:single-crowd-LB}, it is possible that one can obtain a more efficient version of Theorem~\ref{thm:single-crowd} using more sophisticated versions of the Azuma-Hoeffding inequality, such as the empirical Bernstein inequality. \xhdr{Stopping rules for multiple crowds.} For multiple crowds, we consider stopping rules that are composed of multiple instances of a given single-crowd stopping rule $R_0$; we call them \emph{composite} stopping rules. Specifically, we have one instance of $R_0$ for each crowd (which only inputs answers from this crowd), and an additional instance of $R_0$ for the \emph{total crowd} -- the entire population of workers. The composite stopping rule stops as soon as some $R_0$ instance stops, and outputs the majority option for this instance.% \footnote{If $R_0$ is randomized, then each instance of $R_0$ uses an independent random seed.
If multiple instances of $R_0$ stop at the same time, the aggregate answer is chosen uniformly at random among the majority options for the stopped instances.} Given a crowd-selection algorithm $\A$, let $\cost(\A|R_0)$ denote its expected total cost (for a given problem instance) when run together with the composite stopping rule based on $R_0$. \section{Omniscient benchmarks for crowd selection} \label{sec:benchmarks} We consider two ``omniscient'' benchmarks for crowd-selection algorithms: informally, the best fixed crowd $i^*$ and the best fixed distribution $\mu^*$ over crowds, where $i^*$ and $\mu^*$ are chosen given the latent information: the response distributions of the crowds. Both benchmarks treat all their inputs as a single data source, and are used in conjunction with a given single-crowd stopping rule $R_0$ (and hence depend on $R_0$). \xhdr{Deterministic benchmark.} Let $\cost(i|R_0)$ be the expected total cost of always choosing crowd $i$, with $R_0$ as the stopping rule. We define the \emph{deterministic benchmark} as the crowd $i^*$ that minimizes $\cost(i|R_0)$ for a given problem instance. In view of the analysis in Section~\ref{sec:single-crowd}, our intuition is that $\cost(i|R_0)$ is approximated by $c_i/\gap_i^2$ up to a constant factor (where the factor may depend on $R_0$ but not on the response distribution of the crowd). The exact identity of the best crowd may depend on $R_0$. For the basic special case of uniform costs and two options (assuming that the expected stopping time of $R_0$ is non-increasing in the gap), the best crowd is the crowd with the largest gap. In general, we approximate the best crowd by $\argmin_i c_i/\gap_i^2$. \xhdr{Randomized benchmark.} Given a distribution $\mu$ over crowds, let $\cost(\mu|R_0)$ be the expected total cost of a crowd-selection algorithm that in each round chooses a crowd independently from $\mu$, treats all inputs as a single data source -- essentially, a single crowd -- and uses $R_0$ as a stopping rule on this data source. The \emph{randomized benchmark} is defined as the $\mu$ that minimizes $\cost(\mu|R_0)$ for a given problem instance. This benchmark is further discussed in Section~\ref{sec:randomized-benchmark}. \xhdr{Comparison against the benchmarks.} In the analysis, we compare a given crowd-selection algorithm $\A$ against these benchmarks as follows: we use $\A$ in conjunction with the composite stopping rule based on $R_0$, and compare the expected total cost $\cost(\A|R_0)$ against those of the benchmarks.% \footnote{Using the same $R_0$ roughly equalizes the worst-case error rate between $\A$ and the benchmarks; see Section~\ref{sec-bicriteria} for details.} Moreover, we derive corollaries with respect to the bi-criteria objective, where the benchmarks choose both the best crowd (resp., the best distribution over crowds) and the stopping rule. These corollaries are further discussed in Section~\ref{sec-bicriteria}. \OMIT{Let us argue that using the same $R_0$ roughly equalizes the worst-case error rate between $\A$ and the benchmarks. Let $\rho$ be the worst-case error rate of $R_0$, and assume it is achieved for gap $\gap$. Then the worst-case error rate for both benchmarks is $\rho$; it is achieved on a problem instance in which all crowds have gap $\gap$. It is easy to see that the worst-case error rate of $\A$ is at most $(k+1)\rho$, where $k$ is the number of crowds. We also conjecture that the worst-case error rate for $\A$ is at least $\rho$.
In the Appendix (Lemma~\ref{lm:error-rate-LB}), we prove a slightly weaker result: essentially, if the composite stopping rule does not use the total crowd, then the worst-case error rate for $\A$ is at least $\rho\,(1-2k\rho)$. }
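For concreteness, the following minimal sketch (our own illustration in Python, not the authors' code; it implements the deterministic variant of the rule, not the smoothed one used in the experiments, and all identifiers are hypothetical) shows the single-crowd stopping rule of Section~\ref{sec:single-crowd} and the composite rule built from one instance per crowd plus one instance for the total crowd.
\begin{verbatim}
# Sketch: single-crowd stopping rule and the composite rule built from it.
import math
from collections import Counter

class SingleCrowdRule:
    """Stop once (empirical gap) * N exceeds errorC * sqrt(N)."""
    def __init__(self, errorC):
        self.errorC = errorC
        self.counts = Counter()

    def observe(self, option):
        """Feed one answer; return True as soon as the rule decides to stop."""
        self.counts[option] += 1
        n = sum(self.counts.values())
        top = self.counts.most_common(2)
        lead = top[0][1] - (top[1][1] if len(top) > 1 else 0)  # = emp_gap * N
        return lead > self.errorC * math.sqrt(n)

    def majority(self):
        return self.counts.most_common(1)[0][0]

class CompositeRule:
    """One instance per crowd, plus one instance watching the total crowd."""
    def __init__(self, num_crowds, errorC):
        self.rules = [SingleCrowdRule(errorC) for _ in range(num_crowds + 1)]

    def observe(self, crowd, option):
        """Return the aggregate answer once some instance stops, else None."""
        stop_crowd = self.rules[crowd].observe(option)
        stop_total = self.rules[-1].observe(option)
        if stop_crowd:
            return self.rules[crowd].majority()
        if stop_total:
            return self.rules[-1].majority()
        return None
\end{verbatim}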
\section{Introduction} Leptophilic gauge bosons are popular candidates for physics beyond the Standard Model (SM). As opposed to new gauge bosons interacting with quarks, which are strongly constrained by LHC searches~\cite{ATLAS:2017eqx,CMS:2018mgb,ATLAS:2019erb,CMS:2021ctt}, those coupling solely to leptons are only subject to LEP constraints~\cite{ALEPH:2013dgf} and can therefore exist below the TeV scale. As a consequence, these kinds of $Z'$ bosons can have important phenomenological consequences for a plethora of particle physics experiments and observations~\cite{Bauer:2018onh}. In view of recent experimental developments, $Z'$ bosons interacting with muons are of particular interest. Most importantly, they can economically solve the long-standing discrepancy between the SM prediction~\cite{Aoyama:2020ynm} and the observed values~\cite{Muong-2:2006rrc,Muong-2:2021vma} of the muon anomalous magnetic moment, as has been shown in~\cite{Foot:1994vd,Gninenko:2001hx,Baek:2001kca,Murakami:2001cs,Fayet:2007ua,Pospelov:2008zw,Ma:2001md,Heeck:2011wj,Carone:2013uh,Harigaya:2013twa,Davoudiasl:2014kua,Altmannshofer:2014cfa,Tomar:2014rya,Lee:2014tba,Allanach:2015gkd,Heeck:2016xkh,Patra:2016shz,Altmannshofer:2016brv,Iguro:2020rby,Holst:2021lzm,Hapitas:2021ilr}. Slight deviations from SM expectations in lepton flavor universality (LFU) ratios~\cite{HFLAV:2019otj} and an observed deficit of unitarity in the first row of the CKM matrix~\cite{Belfatto:2019swo,Grossman:2019bzp,Shiells:2020fqp,Seng:2020wjq} may also be explained by a new muonic force. Finally, accumulating evidence for lepton-universality violation in $b\rightarrow s\ell^+\ell^-$ transitions (see e.g.~\cite{Geng:2021nhg} and references therein) serves as a further motivation to study this kind of new physics (NP), although in this case couplings to quarks also need to be invoked. A global analysis of leptophilic gauge bosons was recently performed in~\cite{Buras:2021btx}. In this context, the most popular phenomenological model is that of a gauge boson coupling to the $L_\mu - L_\tau$ lepton flavor combination~\cite{He:1990pn,Foot:1990mn,He:1991qd}. Gauged $U(1)_{L_\mu-L_\tau}$ models are distinguished by being free from gauge anomalies (as are other differences in baryon and/or lepton flavor numbers), and from the stringent experimental constraints on $Z'$ couplings to electrons. One difficulty is that gauging $L_\mu-L_\tau$ symmetry prevents the usual interactions that generate the masses and mixings of neutrinos through the dimension-5 Weinberg operator. This means that a fully consistent model requires additional new physics to reproduce the observed values~\cite{Esteban:2020cvm} of the PMNS~\cite{Maki:1962mu,Pontecorvo:1967fh} matrix. Some proposals in this direction include extra Higgs doublets~\cite{Ma:2001md}, soft-breaking terms~\cite{Bell:2000vh,Choubey:2004hn}, and right-handed neutrinos~\cite{Binetruy:1996cs,Heeck:2011wj,Asai:2017ryy,Araki:2019rmw}. The common element of all these models is the presence of (often multiple) U(1)$_{L_\mu-L_\tau}$ symmetry-breaking scalars that can significantly complicate the minimal proposal. From an aesthetic point of view, it also seems rather arbitrary that only the $L_\mu-L_\tau$ difference should be gauged, considering that the SM treats all generations on an equal footing from a structural perspective. To address these shortcomings, in this paper we propose that SU(3)$_\ell$ of lepton flavor is gauged in a vectorial fashion.
In this setup, the $L_\mu-L_\tau$ gauge boson need only be one of eight new $Z'$ states, all of which are leptophilic. Previous studies have gauged lepton and quark flavors together using horizontal SU(3) family symmetry, along with additional gauge or discrete symmetries~\cite{Berezhiani:1983hm,King:2001uz,King:2003rf,Alonso:2017bff}, or flavor symmetries for left- and right-handed leptons separately~\cite{Alonso:2016onw}. We are not aware of previous literature treating the possibility of a single vectorial SU(3)$_\ell$ for lepton flavor. In the following, we identify the additional particle content needed to spontaneously break SU(3)$_\ell$ to engender the observed lepton and neutrino masses, which would otherwise be forced to be flavor universal. Further guided by experimental observations, we focus on scenarios where the $L_\mu-L_\tau$ gauge boson is lighter than the other seven and can thus address the $(g-2)_\mu$ tension. For these purposes, we find it sufficient to add a triplet of vectorlike heavy leptons $E_i$ and three scalars in the $3$, $6$ and $8$ representations, which we denote by $\Phi_3$, $\Phi_6$, and $\Phi_8$, respectively. The usual triplet of right-handed neutrinos $N_i$ is also present. The vacuum expectation values (VEVs) of the scalars give rise to the required lepton and gauge boson masses. This particle content is summarized in Table \ref{tab1}. A modest hierarchy in the scalar VEVs, $\langle\Phi_3\rangle \sim\langle\Phi_8\rangle \gg \langle\Phi_6\rangle$ is needed to get the desired spectrum of leptophilic gauge boson masses. The gauge boson coupling to the $L_\mu-L_\tau$ current is lighter than the rest, which allows it to solve the muon $(g-2)$ anomaly in a way consistent with all other constraints~\cite{Holst:2021lzm,Hapitas:2021ilr}. At the same time, the Cabibbo angle anomaly (CAA), a 3-5\,$\sigma$ tension in the unitarity of the top row of the CKM matrix~\cite{Belfatto:2019swo,Grossman:2019bzp,Shiells:2020fqp,Seng:2020wjq}, can be resolved by new contributions to $\mu\to e\nu\bar\nu$ decay from two of the heavier new gauge bosons. A mild 2$\sigma$ hint of nonuniversality in $\tau$ lepton decays~\cite{HFLAV:2019otj} can be similarly ameliorated. The $\Phi_6$ VEVs further determine the mass matrix of the right-handed neutrinos, and thereby the pattern of light neutrino masses and mixings via a type I seesaw mechanism~\cite{Minkowski:1977sc,Gell-Mann:1979vob,Yanagida:1979as,Glashow:1979nm,Mohapatra:1979ia,Weinberg:1979sa,Witten:1979nr}. A striking prediction of our scenario is that the lightest active neutrino mass must exceed $\sim 0.1\,\mathrm{meV}$. We find the right-handed neutrino masses to lie in the GeV to TeV range and to be inversely proportional to the light neutrino masses. They are therefore heavy neutral leptons (HNLs), whose mixings with the active states exactly match the flavor composition of the light neutrino mass eigenstates and are therefore completely determined by the PMNS matrix. We show that the mixings of the HNLs with active neutrinos can be in a range that is relevant for affecting the primordial abundances of light elements. The paper is structured as follows. In Section \ref{sect:model} we introduce the particle content of the model and the resulting spectra of charged leptons, heavy vectorlike leptons, light neutrinos, and new gauge bosons. 
Constraints on the gauge bosons from flavor-sensitive processes are discussed in Section \ref{sect:pheno}, including the muon $(g-2)$, LEP di-lepton searches, the Cabibbo angle anomaly, and lepton flavor universality limits. The detailed properties of the heavy neutral leptons, and their phenomenological implications, are discussed in Section~\ref{sec:HNL}. Conclusions are given in Section~\ref{sect:conc}, and a brief discussion of the challenges associated with the construction of a potential in the scalar sector that can give rise to the desired pattern of VEVs is presented in Appendix~\ref{app:VEVs}. \section{Model and mass spectra} \label{sect:model} Our starting point is the gauging of the leptonic SU(3) flavor symmetry, under which both left- and right-handed lepton fields transform simultaneously. Clearly, our world does not exactly respect such a symmetry, requiring it to be spontaneously broken at some scale. For this purpose, we introduce three real scalar fields $\Phi_3$, $\Phi_6$, and $\Phi_8$ in the fundamental, symmetric two-index, and adjoint representations, respectively. Although $\Phi_6$ by itself would be sufficient to fully break SU(3)$_\ell$, this is the minimal scalar sector that we have identified as being phenomenologically viable,\footnote{Other possibilities may exist; it is not our purpose to exhaust them but rather to construct one instance of a working model.} as we will explain. Clear observational evidence for breaking of SU(3)$_\ell$ is provided by the hierarchy of lepton masses. Indeed, with only the SM particle content, the gauge-invariant Higgs coupling $\bar L^i H e_i$ (where $i$ is the SU(3)$_\ell$ index) only allows for a universal charged lepton mass. To obtain the observed mass splittings through spontaneous breaking of the gauge symmetry in a renormalizable way, we add a triplet of vectorlike charged lepton partners, $E_i$. Additionally, to generate neutrino masses, a right-handed neutrino triplet $N_i$ is included, which completes the particle content of the model. The list of particles and gauge charges are shown in Table~\ref{tab1}. \begin{table}[t]\centering \setlength\tabcolsep{4pt} \def1.5{1.2} \begin{tabular}{|c|c|c|c|}\hline & SU(3)$_\ell$ & SU(2)$_L$ & U(1)$_y$\\ \hline $L_i$ & 3 & 2 & $-1/2$ \\ $e^{c,i}$ & $\bar 3$ & 1 & $+1$ \\ \hline $E_{\L,i}$ & 3 & 1 & $-1$ \\ $E_R^{c,i}$ & $\bar 3$ & 1 & $+1$ \\ $N^{c,i}$ & $\bar 3$ & 1 & 0 \\ \hline $\Phi_{3,i}$ & 3 & 1 & 0 \\ $\Phi_{6,ij}$ & 6 & 1 & 0 \\ $\Phi^i_{8,j}$ & 8 & 1 & 0\\ \hline \end{tabular} \caption{Field content of the gauged SU(3)$_\ell$ model along with the corresponding gauge charges. The fermionic fields are all listed as left-handed Weyl states. $L_i$ and $e_i$ denote the SM lepton doublets and charged singlet leptons, respectively.} \label{tab1} \end{table} The addition of the right-handed neutrino triplet $N_i$ makes SU(3)$_\ell$ anomaly free, since all leptons now come in chiral pairs. Mixed anomalies involving non-abelian factors likewise cancel because of the tracelessness of the generators, and the SU(3)$_\ell^2\times$U(1)$_y$ one vanishes due to the hypercharges of the SU(3)$_\ell$ triplets adding up to zero. 
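The cancellation of the mixed SU(3)$_\ell^2\times$U(1)$_y$ anomaly amounts to the hypercharges of the SU(3)$_\ell$-charged Weyl fermions summing to zero, which can be checked at a glance; the following minimal sketch (a purely illustrative Python check, not part of the analysis) performs the sum over the field content of Table~\ref{tab1}:
\begin{verbatim}
# Mixed SU(3)_ell^2 x U(1)_y anomaly: sum of hypercharges of the SU(3)_ell
# (anti)triplet Weyl fermions, weighted by their electroweak multiplicity.
fields = {"L": (2, -0.5), "e^c": (1, +1.0), "E_L": (1, -1.0),
          "E_R^c": (1, +1.0), "N^c": (1, 0.0)}
print(sum(mult * y for mult, y in fields.values()))   # -> 0.0
\end{verbatim}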
With the field content listed in Table~\ref{tab1}, the most general renormalizable Lagrangian that can be constructed (omitting kinetic terms) is \begin{eqnarray}\label{eq:Lagrangian} {\cal L} = &-& \bar E_\L^i(\mu_{{\sss E}\E}\delta^j_i + y_{{\sss E}\E}\Phi^j_{8,i}) E_{{\sss R},j}\nonumber\\ &-& \bar E_\L^i(\mu_{{\sss E} e}\delta_i^j + y_{{\sss E} e}\Phi_{8,i}^j)e_j \nonumber\\ &-&\bar L^i H(y_{\L e} e_i + y_{\L{\sss E}} E_{{\sss R},i}) - y_{\L{\sss N}} \bar L^i \tilde H N_i\nonumber\\ &-& {y_{\sss N}\over 2} \bar N^i\Phi_{6,ij}N^{c,j} + {\rm h.c.} \nonumber \\ &-& V(H,\, \Phi_3,\, \Phi_6,\, \Phi_8), \end{eqnarray} where $V$ denotes the scalar potential, $\mu_i$ are constants with dimensions of mass, and $y_i$ are dimensionless couplings. In the following sections we describe the consequences of nonvanishing vacuum expectation values for the scalar fields. The required values of the VEVs are constructed in a bottom-up fashion, making use of experimental measurements and constraints as guiding principles. Obtaining the desired symmetry breaking pattern from a specific scalar potential is likely to be a challenging task, beyond the scope of the present work. We however take some preliminary steps in this direction in Appendix~\ref{app:VEVs}. In the following, it will be convenient to order the gauge indices associated with the lepton flavors as $(1,2,3) = (\tau,\mu,e)$. This allows the $\mu-\tau$ gauge boson to be associated with the third generator $T_3$ of SU(3)$_\ell$ (using the standard form of the Gell-Mann matrices) instead of a linear combination of $T_3$ and $T_8$. \subsection{Charged lepton masses} In the absence of spontaneous breaking of SU(3)$_\ell$, charged leptons get a universal mass from electroweak symmetry breaking. We will use the $\Phi_8$ scalar field VEV, in conjunction with the vectorlike lepton partners $E_i$, to split masses between the different lepton generations.\footnote{Nonzero $\mu_{{\sss E}\E}$ or $\mu_{{\sss E} e}$ values in Eq.~\eqref{eq:Lagrangian} are also essential, since in the $\mu_{{\sss E}\E},\,\mu_{{\sss E} e}\rightarrow 0$ limit the charged lepton masses become universal again.}\ \ The Lagrangian~\eqref{eq:Lagrangian} gives rise to mass mixing between the SM leptons and the heavy $E_i$ partners. For simplicity, we take $\langle\Phi_8\rangle$ to be diagonal, in which case the $6\times 6$ Dirac mass matrix for the charged leptons becomes block diagonal (no flavor mixing), with three blocks of the form \begin{equation} \label{eq:Dirac_mass_matrix} (\bar e_\L^i,\, \bar E_\L^i)\left({y_{\L e}\bar v\atop \tilde \mu^i_{{\sss E} e}}\,{ y_{\L{\sss E}} \bar v\atop \tilde \mu^i_{{\sss E}\E}} \right)\left({e_{{\sss R},i}\atop E_{{\sss R},i}}\right), \end{equation} where $\bar v \cong 174\,$GeV is the complex Higgs VEV and we define \begin{eqnarray} \tilde \mu^i_{{\sss E} e} &=& \mu_{{\sss E} e} + y_{{\sss E} e}\langle\Phi_8\rangle^i_i\,,\nonumber\\ \tilde \mu^i_{{\sss E}\E} &=& \mu_{{\sss E}\E} + y_{{\sss E}\E}\langle\Phi_8\rangle^i_i\,. 
\label{tildedef} \end{eqnarray} Diagonalizing each of the $2\times2$ blocks separately under the assumption $y_{\L e}\bar{v},\,y_{\L{\sss E}}\bar{v}\ll \tilde\mu^i_{{\sss E}\E}$ or $\tilde\mu^i_{{\sss E} e}$ leads to three heavy and three light mass eigenstates with masses \begin{eqnarray} (m_{\sss E}^i)^2 &=& (\tilde\mu^i_{{\sss E} e})^2 + (\tilde\mu^i_{{\sss E}\E})^2,\nonumber\\ (m_e^i)^2 &=& \bar{v}^2\,\frac{(y_{\L e}\tilde\mu_{ {\sss E}\E}^i - y_{\L{\sss E}}\tilde\mu_{ {\sss E} e}^i)^2}{(\tilde\mu^i_{{\sss E} e})^2 + (\tilde\mu^i_{{\sss E}\E})^2}.\label{eq:charged_lepton_masses} \end{eqnarray} The contributions to the light charged lepton masses in the perturbative limit are depicted in Fig.\ \ref{fig:diag}. In the limit of large $\mu_{{\sss E}\E}$ or $\mu_{{\sss E} e}$, the heavy lepton effects do not decouple; lepton masses generally continue to be split by the heavy-$E_i$ effects even for arbitrarily heavy $E_i$. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figures/diag.pdf} \caption{Extra contribution to the charged lepton masses from integrating out the heavy vectorlike lepton, that splits them relative to the flavor-independent mass $y_{\L e}\bar v$.} \label{fig:diag} \end{figure} Since we assume that $\langle\Phi_8\rangle$ is diagonal, mixing with the heavy states does not cause any flavor-violating effects in the charged lepton sector. However, the mixing of the light states with the heavy ones is constrained by electroweak precision data (EWPD). Diagonalizing the mass matrix in Eq.~\eqref{eq:Dirac_mass_matrix} yields mass eigenstates $(e^i_{\L,{\sss R}},\,E^i_{\L,{\sss R}})'$, given by \begin{equation} \left({e^i_{\L,{\sss R}}\atop E^i_{\L,{\sss R}}}\right) = \left({c_{\theta^i_{\L,{\sss R}}}\atop -s_{\theta^i_{\L,{\sss R}}}}\,{s_{\theta^i_{\L,{\sss R}}}\atop \phantom{-}c_{\theta^i_{\L,{\sss R}}}}\right)\left( e_{\L,{\sss R}}^i \atop E_{\L,{\sss R}}^i \right)'\,. \end{equation} To second order in $\bar{v}/\tilde\mu$, the mixing angles are given by \begin{eqnarray} \tan 2\theta^i_\L &=& \frac{2\bar v\,(y_{\L e}\tilde \mu^i_{{\sss E} e} + y_{\L{\sss E}}\tilde \mu^i_{{\sss E}\E})}{(\tilde \mu^i_{{\sss E} e})^2 + (\tilde \mu^i_{{\sss E}\E})^2}\,,\nonumber\\ \tan 2\theta^i_{\sss R} &=& 2 \frac{\tilde\mu^i_{{\sss E}\E} \tilde\mu^i_{{\sss E} e} + \bar{v}^2 y_{\L{\sss E}}y_{\L e}}{(\tilde\mu^i_{{\sss E}\E})^2 - (\tilde\mu^i_{{\sss E} e})^2} \,.\label{eq:charged_lepton_mixings} \end{eqnarray} In the limit where $\tilde \mu^i_{{\sss E}\E}$ or $\tilde\mu_{{\sss E} e}^i\gg \bar v$, the angle $\theta^i_\L$ is suppressed, but $\theta^i_{\sss R}$ can be relatively large. EWPD constraints require that $\theta_{{\sss R}}(\tau,\mu,e) \lesssim (0.03,\,0.02,\,0.02)$~\cite{delAguila:2008pw}.\footnote{The precise upper limits are model-dependent; we have chosen the strongest limits corresponding to models that give dominant right-handed mixing.}\ \ To sufficiently suppress $\theta_{\sss R}^i$, there must be a hierarchy $\tilde \mu^i_{{\sss E} e} \ll \tilde \mu^i_{{\sss E}\E}$ or $\tilde \mu^i_{{\sss E} {\sss E}} \ll \tilde \mu^i_{{\sss E} e}$ for each value of $i$. To give a concrete example, consider the case $\tilde \mu^i_{{\sss E} e} \lesssim 10^{-2}\, \tilde \mu^i_{{\sss E}\E}$; the derivation for the opposite possibility is completely analogous. 
In this limit, the masses of the heavy charged leptons are simply \begin{equation} m^i_{{\sss E}} \simeq \tilde \mu^i_{{\sss E}\E}\,, \end{equation} while the mass of the SM charged leptons becomes \begin{equation} m^i_e \cong \bar{v} \left( y_{\L e} - y_{\L{\sss E}} \frac{\tilde \mu^i_{{\sss E} e}}{\tilde \mu^i_{{\sss E}\E}} \right). \end{equation} Taking for simplicity $y_{\L{\sss E}}=1$ and $y_{\L e} = 0$ ($y_{\L e}\lesssim 10^{-6} y_{\L{\sss E}}$ is the sufficient condition), the ratios \begin{equation}\label{eq:mu_tilde_ratios} \frac{\tilde \mu^i_{{\sss E} e}}{\tilde \mu^i_{{\sss E}\E}} \simeq - \frac{m_e^i}{\bar{v}} \sim (10^{-2},\, 10^{-3},\, 10^{-6}) \end{equation} are determined by the measured charged lepton masses. Using Eq.~\eqref{tildedef} and taking into account the fact that $\Phi_8$ is traceless, one sees that achieving the hierarchies in Eq.~\eqref{eq:mu_tilde_ratios} requires some cancellations between terms in the numerator or denominator. For example, take $\langle\Phi_8\rangle = \mathrm{diag}(-\mu_{{\sss E}\E},\,\mu_{{\sss E}\E},\,0)$. Then Eq.~\eqref{eq:mu_tilde_ratios} is realized by choosing \begin{align} \mu_{{\sss E} e} &= \frac{m_e}{\bar{v}} \, \mu_{{\sss E}\E} \simeq 3\times 10^{-6}\, \mu_{{\sss E}\E} \,,\nonumber\\ y_{{\sss E}\E} &= \frac{m_\tau+m_\mu}{m_\tau-m_\mu} \simeq 1.1\,,\nonumber\\ y_{{\sss E} e} &= \frac{2}{\bar{v}} \frac{m_\tau m_\mu}{m_\tau-m_\mu}\simeq 1.3\times 10^{-3}\,.\label{eq:BM_parameters} \end{align} The required order-of-magnitude enhancement of $m_\tau$ relative to $m_\mu$ is achieved by a $\sim 10\%$ cancellation in the denominator of Eq.~\eqref{eq:mu_tilde_ratios} for $i=\tau$, while the smallness of $m_e$ is due to $\mu_{{\sss E} e}\ll \mu_{{\sss E}\E}$ and $\langle\Phi_8\rangle_e^e = 0$. Having fixed the charged lepton masses, one is still free to choose $\mu_{{\sss E}\E}$ and therefore the absolute mass scale of the charged lepton partners. If they are sufficiently light, they may be directly accessible at the Large Hadron Collider (LHC). They can be pair-produced by the Drell-Yan process, and their main decays are $E_i\to L_i + h$ via the $y_{\L{\sss E}}$ interaction. Ref.\ \cite{Kumar:2015tna} showed that electroweak-singlet vectorlike leptons (VLL's) are very difficult to observe at the LHC; indeed, existing constraints \cite{ATLAS:2015qoy,CMS:2019hsm} have focused on the search for doublet VLL's. VLL models similar to ours were studied in Ref.\ \cite{Bell:2019mbn}, where current LHC limits were found to be as low as $m_{\sss E}^i \gtrsim 300\,$GeV. This leaves a large range of allowed VLL masses for future discovery, consistent with our requirement $\langle\Phi_8\rangle\gtrsim 1$~TeV, that will be derived in Section~\ref{sect:pheno}. In the following sections, we adopt the parameter values described above as our principal benchmark model (BM1). However, other possibilities for fixing the constants in the Lagrangian~\eqref{eq:Lagrangian} exist. There are eight free parameters: two dimensionful scales $\mu_{{\sss E}\E}$ and $\mu_{{\sss E} e}$, four Yukawa couplings $y_{{\sss E}\E}$, $y_{{\sss E} e}$, $y_{\L{\sss E}}$, and $y_{\L e}$, and two independent entries of $\langle\Phi_8\rangle$. Reproducing the observed charged lepton masses imposes three constraints through Eq.~\eqref{eq:charged_lepton_masses}. In addition, the mixings in Eq.~\eqref{eq:charged_lepton_mixings} must be sufficiently small. In order to explore the parameter space and find other viable models, we employed a Markov Chain Monte Carlo method. 
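As a cross-check, the following numerical sketch (illustrative only; the overall scale $\mu_{{\sss E}\E}$ is an assumed value, chosen large enough for the $\bar v/\tilde\mu$ expansion to be accurate) inserts the BM1 relations of Eq.~\eqref{eq:BM_parameters} into the exact singular-value decomposition of the $2\times2$ blocks of Eq.~\eqref{eq:Dirac_mass_matrix}:
\begin{verbatim}
import numpy as np

m_e, m_mu, m_tau = 0.000511, 0.1057, 1.777    # observed masses (GeV)
vbar = 174.0

mu_EE = 3.0e4                                 # assumed overall scale (GeV)
mu_Ee = (m_e / vbar) * mu_EE                  # BM1 relations, Eq. (eq:BM_parameters)
y_EE  = (m_tau + m_mu) / (m_tau - m_mu)
y_Ee  = (2.0 / vbar) * m_tau * m_mu / (m_tau - m_mu)
y_LE, y_Le = 1.0, 0.0

for name, p8 in zip(("tau", "mu", "e"), (-mu_EE, mu_EE, 0.0)):  # <Phi_8>=diag(-1,1,0)*mu_EE
    mt_EE = mu_EE + y_EE * p8
    mt_Ee = mu_Ee + y_Ee * p8
    M = np.array([[y_Le * vbar, y_LE * vbar],   # 2x2 block of Eq. (eq:Dirac_mass_matrix)
                  [mt_Ee,       mt_EE      ]])
    light, heavy = sorted(np.linalg.svd(M, compute_uv=False))
    print(f"{name:3s}: m_light = {light:.6f} GeV,  m_E = {heavy:9.1f} GeV")
# prints m_light ~ (1.771, 0.106, 0.000511) GeV, i.e. the observed charged lepton
# masses up to small O(vbar^2 / mu_tilde^2) corrections
\end{verbatim}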
Two further examples of parameters that satisfy all the phenomenological requirements but differ qualitatively from BM1 are given in Table \ref{tab3}. In the second example (BM2), also expressed in terms of the unconstrained parameter $\mu_{{\sss E}\E}$, the heavy lepton masses are all approximately $m_{\sss E}^i\cong \mu_{{\sss E}\E}$. Satisfying the EWPD bounds on mixing leads to the largest Yukawa coupling, $y_{\L{\sss E}} = 3.6$, being close to unitarity limits. In this example, since $\mu_{{\sss E}\E}\gg\langle\Phi_8\rangle$, the heavy leptons are far above the TeV scale and therefore not directly accessible in current collider experiments. In the third example (BM3), similarly to BM1, the VLL masses are of order $\langle\Phi_8\rangle$, making them potentially accessible at the LHC. In this case (as in BM1), all Yukawa couplings are well below perturbative unitarity bounds. A qualitatively different feature of BM3 is that $\tilde\mu_{{\sss E}\E}^\tau \ll \tilde\mu_{{\sss E} e}^\tau$, while in the others $\tilde\mu_{{\sss E}\E}^i \gg \tilde\mu_{{\sss E} e}^i$ for all flavors. Moreover, the VLL-charged lepton mixing angles fall well below the experimental limits in this model. \begin{table}[t]\centering \setlength\tabcolsep{6pt} \def\arraystretch{1.4} \begin{tabular}{|c| c c c|}\cline{2-4} \multicolumn{1}{c|}{} & BM1 & BM2 & \multicolumn{1}{c|}{BM3} \\ \cline{1-4} \multicolumn{4}{|c|}{Input parameters} \\ \hline $\phi_\mu$ & $\mu_{{\sss E}\E}$ & $0.014\,\mu_{{\sss E}\E}$ & $-3.0\,\mu_{{\sss E}\E}$ \\ \hline $\phi_e$ & $0$ & $0.016\,\mu_{{\sss E}\E}$ & $-3.2\,\mu_{{\sss E}\E}$ \\ \hline $y_{\L{\sss E}}$ & $1$ & $3.6$ & $-0.011$ \\ \hline \multicolumn{4}{|c|}{Derived parameters} \\ \hline $\mu_{{\sss E} e}$ & $3\times 10^{-6}\,\mu_{{\sss E}\E}$ & $0.023\,\mu_{{\sss E}\E}$ & $1.55\,\mu_{{\sss E}\E}$ \\ \hline $y_{{\sss E}\E}$ & $1.1$ & $0.32$ & $0.16$ \\ \hline $y_{{\sss E} e}$ & $1.3\times 10^{-3}$ & $-0.055$ & $-0.48$ \\ \hline $y_{\L e}$ & $\lesssim 10^{-6}$ & $0.56$ & $-8.5\times 10^{-4}$ \\ \hline \multicolumn{4}{|c|}{Phenomenological quantities} \\ \hline $m^i_E$ & $(1,\,1,\,1)\,\mu_{{\sss E}\E}$ & $(1,\,1,\,1)\,\mu_{{\sss E}\E}$ & $(4.5,\,1.5,\,1.5)\,\mu_{{\sss E}\E}$ \\ \hline $\theta_{{\sss R}}^{\tau}$ & $1\times 10^{-2}$ & $0.021$ & $-1\times 10^{-3}$ \\ \hline $\theta_{{\sss R}}^{\mu}$ & $6\times 10^{-4}$ & $5.6\times 10^{-4}$ & $6\times 10^{-3}$ \\ \hline $\theta_{{\sss R}}^e$ & $3\times 10^{-6}$ & $0.024$ & $6\times 10^{-3}$ \\ \hline \end{tabular} \caption{Input parameters, derived parameters and quantities relevant for phenomenology for three benchmark models that can reproduce the observed charged lepton properties. The mixing angles $\theta_{\sss R}^i$ between right-handed charged leptons and heavy vectorlike leptons of mass $m_E^i$ are consistent with EWPD constraints~\cite{delAguila:2008pw}. The traceless octet VEV is parametrized as $\langle\Phi_8\rangle = \mathrm{diag}(-\phi_\mu-\phi_e,\,\phi_\mu,\,\phi_e)$. The dimensionful parameter $\mu_{{\sss E}\E}$, which controls the scale of the octet VEV and the VLL masses, is left as a freely adjustable parameter. } \label{tab3} \end{table} \subsection{Neutrino masses} The right-handed neutrinos $N_i$ get a Majorana mass matrix from the sextet VEVs, $M_N = y_{\sss N}\langle\Phi_6\rangle$, while the Dirac neutrino mass matrix is proportional to the unit matrix, with coefficient $m_D = y_{\L{\sss N}}\bar v$. The light neutrino mass matrix is therefore $m_\nu = -m_D^2 M_N^{-1}$ from the seesaw mechanism, and we can solve for the sextet VEVs in terms of the PMNS mixing matrix $U_{\rm\scriptscriptstyle PMNS}$ and the light neutrino mass eigenvalues $D = {\rm diag}(m_1,m_2,m_3)$: \begin{equation}\label{eq:sextet_VEV} \langle\Phi_6\rangle = -{m_D^2\over y_{\sss N}}\,U_{\rm\scriptscriptstyle PMNS}\, D^{-1}\, U_{\rm\scriptscriptstyle PMNS}^{\sss T}\,. \end{equation} Recall that we have interchanged the first and third rows of $U_{\rm\scriptscriptstyle PMNS}$ in our ordering convention for the lepton flavors. This allows us to refer to the $L_{\mu-\tau}$ gauge boson as the one associated with the Gell-Mann matrix $\lambda_3$. A quantitative analysis of the right-handed neutrino mass and mixing spectrum is deferred to Sec.~\ref{sec:HNL}.
\subsection{Gauge boson masses} The mass-squared matrix of gauge bosons is given by \begin{eqnarray} \label{gbmasses} {M^2_{ab}\over g'^2} &=& \sfrac14\Phi_3^\dagger \{\lambda^a,\lambda^b\} \Phi_3 - \sfrac12{\rm tr}\left([\Phi_8,\lambda^a][\Phi_8,\lambda^b]\right)\nonumber\\ &+& \sfrac12{\rm tr}\left([\Phi_6^\dagger\lambda^a + \lambda^{a*}\Phi_6^\dagger][\Phi_6\lambda^{b*} + \lambda^b\Phi_6]\right)\,, \end{eqnarray} where the VEVs of the scalar fields are understood. In the approximation that $\langle\Phi_6\rangle=0$, while $\langle\Phi_3\rangle\sim \langle\Phi_8\rangle$, all gauge bosons get masses except for $A_3$, which corresponds to the $L_{\mu-\tau}$ gauge boson in our numbering scheme. Thus $A_3$ gets its mass solely from $\langle\Phi_6\rangle$. The overall magnitude of $\langle\Phi_6\rangle$ is determined by the smallest light neutrino mass $m_{\nu_1}$; hence one prediction of the model is that $m_{\nu_1}$ cannot be arbitrarily small. The various contributions of the scalar VEVs to the mass eigenvalues $M_i^2/g'^2$ are summarized in Table~\ref{tab:gauge_boson_masses}. Before turning on the $\Phi_6$ VEV, the gauge boson mass matrix is diagonal, and the eigenvalues can be labelled with the number of the corresponding generator. We first consider the contribution of the $\Phi_8$ VEV. Following the discussion in the previous section, in benchmark scenario BM1 it takes the form \begin{equation} \label{eq:Phi8_VEV} \langle\Phi_8\rangle \equiv \phi_8\,{\rm diag}(-1,\,1,\,0)\,, \end{equation} with $\phi_8 = \mu_{{\sss E}\E}$.\footnote{More generally, $\phi_8/\mu_{{\sss E}\E}$ could differ from 1, and the relations (\ref{eq:BM_parameters}) would be modified by factors of $\phi_8/\mu_{{\sss E}\E}$. We allow for this more general form in the following.}\ \ The resulting contributions to the different gauge boson masses are shown in the third column of Table~\ref{tab:gauge_boson_masses}. Purely diagonal $\langle\Phi_8\rangle$ VEVs leave the $A_3$ and $A_8$ gauge bosons massless. To generate mass for $A_{8}$, we introduce the VEV \begin{equation} \label{eq:Phi3_VEV} \langle\Phi_3\rangle = \phi_3\,(0,0,1)^T , \end{equation} which further lifts the $A_{4,5,6,7}$ masses, leaving only $A_3$ massless. The numerical values of the contributions to $M_i^2/g'^2$ are shown in the fourth column of Table~\ref{tab:gauge_boson_masses}. As we will see in the next section, phenomenological constraints on the leptophilic gauge bosons lead to the requirements $\phi_8\gtrsim 1$~TeV and $\phi_3\gtrsim 10$~TeV. The $\Phi_6$ VEV contributes to all of the masses and causes mixing between the gauge bosons. We assume that it is much smaller than the other two VEVs so that the mixing, as well as the shifts to the heavy masses, can be neglected, or computed perturbatively. Therefore the $\langle\Phi_6\rangle_{3,3}$ element directly determines $M_3$, and thereby the new physics contribution to $(g-2)_\mu$. 
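As a numerical illustration (not a new result), Eq.~\eqref{gbmasses} can be evaluated directly with the VEVs of Eqs.~\eqref{eq:Phi8_VEV} and \eqref{eq:Phi3_VEV} and with $\langle\Phi_6\rangle$ set to zero; the short sketch below, using arbitrary illustrative values of $\phi_3$ and $\phi_8$, reproduces the entries collected in Table~\ref{tab:gauge_boson_masses}:
\begin{verbatim}
import numpy as np

# Gell-Mann matrices, tr(l_a l_b) = 2 delta_ab
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1], l[1][1, 0] = -1j, 1j
l[2][0, 0], l[2][1, 1] = 1, -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2], l[4][2, 0] = -1j, 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2], l[6][2, 1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

phi8, phi3 = 1.0, 3.0                       # illustrative values
Phi3 = phi3 * np.array([0.0, 0.0, 1.0])     # Eq. (eq:Phi3_VEV)
Phi8 = phi8 * np.diag([-1.0, 1.0, 0.0])     # Eq. (eq:Phi8_VEV)

M2 = np.zeros((8, 8))
for a in range(8):
    for b in range(8):
        t1 = 0.25 * (Phi3 @ (l[a] @ l[b] + l[b] @ l[a]) @ Phi3).real
        ca = Phi8 @ l[a] - l[a] @ Phi8
        cb = Phi8 @ l[b] - l[b] @ Phi8
        M2[a, b] = t1 - 0.5 * np.trace(ca @ cb).real   # <Phi_6> neglected

print(np.round(np.diag(M2), 2))
# -> [4., 4., 0., 5.5, 5.5, 5.5, 5.5, 6.] for phi8=1, phi3=3, i.e. the entries
#    4 phi8^2, 0, phi8^2 + phi3^2/2 and 2 phi3^2/3 listed in the table
\end{verbatim}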
\begin{table}[t]\centering \setlength\tabcolsep{5pt} \def1.5{1.5} \begin{tabular}{|c|c|c|c|c|}\hline \begin{tabular}[c]{@{}c@{}}Gauge\\boson\end{tabular} & \begin{tabular}[c]{@{}c@{}}Flavor\\structure\end{tabular} & $\Phi_8$ & $\Phi_3$ & $\Phi_6$\\ \hline $A_{1,2}$ &$\phantom{1\over\sqrt{3}}$\scalebox{0.6}{ $\begin{pmatrix} 0 & * & 0\\ * & \phantom{-}0\phantom{-} & 0\\ 0 & 0 & 0\end{pmatrix}\begin{array}{c} \tau\\ \mu\\ e\end{array}$} & $4\,\phi_8^2$ & \ding{55} & \ding{51} \\ \hline $A_{3}$ & $\phantom{1\over\sqrt{3}}$ \scalebox{0.6}{$ \begin{pmatrix} 1 & 0 & 0\\ 0 & -1\phantom{-} & 0\\ 0 & 0 & 0\end{pmatrix}\quad\;\;\;$} & \ding{55} & \ding{55} & \ding{51}\\ \hline $A_{4,5}$ & $\phantom{1\over\sqrt{3}}$\scalebox{0.6}{$\begin{pmatrix} 0 & 0 & *\\ 0 & \phantom{-}0\phantom{-} & 0\\ * & 0 & 0 \end{pmatrix}$\quad\;} & $\phi_8^2$ & $\sfrac12\, \phi_3^2$ & \ding{51}\\ \hline $A_{6,7}$ & $\phantom{1\over\sqrt{3}}$\scalebox{0.6}{$\begin{pmatrix} 0 & 0 & 0\\ 0 & \phantom{-}0\phantom{-} & *\\ 0 & * & 0 \end{pmatrix}$\quad\;} & $\phi_8^2$ & $\sfrac12\, \phi_3^2$ &\ding{51}\\ \hline $A_{8}$ & ${1\over\sqrt{3}}$\scalebox{0.6}{ $\begin{pmatrix} 1 & \;\;0\; & 0\\ 0 & \;\;1\; & 0\\ 0 & \;\;0\; & -2\end{pmatrix}\quad\;\;$} & \ding{55} & $\sfrac23\, \phi_3^2$ & \ding{51}\\ \hline \end{tabular} \caption{Flavor structure of the couplings of the 8 leptophilic gauge bosons in the SU(3)$_\ell$ model, together with the contributions to $M_{i}^2/g'^2$, their squared masses divided by $g^2$, arising from the various scalar VEVs (see Eq.~\ref{eq:Phi8_VEV}). The contributions from $\Phi_6$ are small compared to those of $\Phi_3$ and $\Phi_8$ by construction, and they lead to mixing between the tabulated Lagrangian states.} \label{tab:gauge_boson_masses} \end{table} Using Eq.~(\ref{eq:sextet_VEV}), the light gauge boson mass can be written as \begin{equation} M_3 \sim g' {m_D^2\over y_{\sss N}\, m_{\nu_1}} \equiv g' \phi_6\,, \end{equation} where the exact proportionality factor depends on the absolute scale of neutrino masses and varies between $\sim 0.5$ and $1.5$ within the range of interest. It will be shown in the next section that for values of $g'$ and $M_3$ that can explain the muon anomalous magnetic moment, $\phi_6 = \mathcal{O}(20-200)\,$GeV. If $y_{\sss N}\sim 1$, this is also the scale of the sterile neutrino masses, whose mass matrix is $m_{\sss N} = y_{\sss N}\langle\Phi_6\rangle$. In the following sections, we discuss the phenomenological consequences of the new particles predicted by the model. \section{Phenomenology of the leptophilic gauge bosons} \label{sect:pheno} The phenomenology of gauge bosons associated with lepton number gauge symmetries has been widely studied; see e.g.~\cite{Foldenauer:2016rpi,Buras:2021btx} and references therein. In the SU(3)$_\ell$ model presented here, neglecting the small mixing between gauge bosons, there are six purely flavor-violating currents, mediated by $A_{1,2}$ for $\mu\leftrightarrow\tau$, $A_{4,5}$ for $e\leftrightarrow\tau$, and $A_{6,7}$ for $e\leftrightarrow\mu$. The flavor-conserving $A_3$ couples to $\mu-\tau$, and $A_8$ to the $2e-\mu-\tau$ current. These leptophilic gauge bosons have implications for a number of different observables. \subsection{Muon anomalous magnetic moment} \label{amusect} By construction, $A_3$ is much lighter than the rest of the vector bosons, due to the symmetry-breaking pattern of the model. 
This enables it to explain the current $4.2\sigma$ discrepancy between experimental measurements~\cite{Muong-2:2006rrc,Muong-2:2021vma} and SM predictions~\cite{Aoyama:2020ynm} of the anomalous magnetic moment of the muon.\footnote{A recent lattice evaluation of $(g-2)_{\rm SM}$~\cite{Borsanyi:2020mff} may help alleviate this tension, but it is in disagreement with the other SM predictions~\cite{Aoyama:2020ynm}.}\ \ A gauge boson $A'$ coupling to the muonic vector current with strength $g'$ contributes to $a_\mu = (g-2)_\mu/2$ at the one-loop level~\cite{Baek:2001kca} with \begin{equation} \Delta a_\mu = \frac{g'^2}{4\pi^2}\int_0^1 \mathop{\mathrm{{d}} x} \frac{m_\mu^2\, x\, (1-x)^2}{m_\mu^2 (1-x)^2 + m_{A'}^2 \,x}. \end{equation} This has the right sign to address the experimental anomaly and is phenomenologically viable as long as $A'$ couples negligibly to electrons, which is the case for $A_3$. Such a $L_\mu-L_\tau$ gauge boson was searched for in the $e^+e^- \rightarrow 4\mu$ channel at BaBar~\cite{BaBar:2016sci} and through neutrino trident production at CCFR~\cite{Altmannshofer:2014pba}. Taking the ensuing constraints into account, the $A_3$ gauge boson can explain the $(g-2)_\mu$ discrepancy as long as its mass lies in the $10\lesssim m_{A_3}\lesssim 200$~MeV range and the gauge coupling is $3\times 10^{-4}\lesssim g' \lesssim 10^{-3}$ \cite{Holst:2021lzm,Hapitas:2021ilr} as shown in Fig.~\ref{fig:gminus2}. \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{figures/SU3_gminus2_parameter_space.pdf} \caption{Region of the gauge coupling versus mass parameter space for which a massive dark photon coupled to the $L_\mu-L_\tau$ current can explain the observed value of the muon $g-2$ factor. The blue-dotted region shows the $2\sigma$ preferred values obtained from the recently reported measurement~\cite{Muong-2:2021vma} and SM calculations~\cite{Aoyama:2020ynm}. The other shaded areas correspond to constraints from Babar~\cite{BaBar:2016sci} (orange), CCFR~\cite{Altmannshofer:2014pba} (green), and cosmology~\cite{Escudero:2019gzq} (purple).} \label{fig:gminus2} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{figures/gauge_bosons_combined.pdf} \caption{$95\%$ CL contours on the model parameters $\phi_8$ and $\phi_3$ from LEP dilepton analyses and lepton flavor universality constraints as labelled in the figure. The blue-dotted region shows the $2\,\sigma$ preferred values to address the CKM unitarity deficit, also known as the Cabibbo Angle Anomaly (CAA). The dotted orange line shows the edge of the preferred region for the slight $\sim 2\,\sigma$ excess in the LFU ratio~\eqref{eq:luv_2}.} \label{fig:lepton_universality} \end{figure} \subsection{Di-lepton searches at LEP} The LEP experiments searched for $e^+e^-\to \ell^+\ell^-$ events beyond the Standard Model \cite{ALEPH:2013dgf}, parametrizing the new physics (NP) contributions by contact interactions of the form \begin{equation} {4\pi\over\Lambda^2} J_{e,\alpha} \left[\sfrac12 J_e^\alpha + J_\mu^\alpha + J_\tau^\alpha \right], \end{equation} where $J_i^\mu$ is the vector current for charged leptons of flavor $i$. Different limits on $\Lambda$ were derived depending on the combinations of final states observed. However, the derived limits do not directly apply to states such as $A_8$ that couple with different strengths to different flavors. 
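Returning to the $(g-2)_\mu$ formula quoted above, it is straightforward to integrate numerically; the following sketch (with an assumed, illustrative parameter point inside the band of Fig.~\ref{fig:gminus2}, not a fit) gives a contribution of the right size:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m_mu = 0.105658  # GeV

def delta_a_mu(gp, mA):
    """One-loop Z' contribution to a_mu from the formula above (masses in GeV)."""
    f = lambda x: m_mu**2 * x * (1 - x)**2 / (m_mu**2 * (1 - x)**2 + mA**2 * x)
    return gp**2 / (4 * np.pi**2) * quad(f, 0.0, 1.0)[0]

# Assumed illustrative point: g' = 9e-4, m_A3 = 100 MeV
print(delta_a_mu(gp=9e-4, mA=0.100))   # ~2.3e-9, the size of the (g-2)_mu discrepancy
\end{verbatim}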
However, Ref.\ \cite{ALEPH:2013dgf} provides the observed differential distributions for the $e^+e^-\rightarrow \ell\ell$ processes studied by LEP, enabling us to derive limits on nonabelian leptophilic gauge bosons such as those in the present model. For dimuon and ditau final states we use the data for the total cross-section $\sigma_{\rm T}$ and forward-backward asymmetry $A_{\rm fb}$ as a function of $\sqrt{s}$, while for the $e^+e^-$ final state we use the full averaged differential cross section given in~\cite{ALEPH:2013dgf}. Employing a $\chi^2$ test statistic on the SM versus SM + NP models leads to the following bounds on the individual gauge boson masses at $95\%$ CL: \begin{eqnarray} {M_{4,5}\over g'} &>& 4.6\,{\rm TeV},\nonumber\\ {M_{6,7}\over g'} &>& 5.2\, {\rm TeV},\nonumber\\ {M_8\over g'} &>& 8.9\,{\rm TeV}. \label{LEPbounds2} \end{eqnarray} We further construct the combined $\chi^2$ for all the aforementioned channels, including the effects of all relevant gauge bosons parametrized by the two model parameters $\phi_3$ and $\phi_8$. Doing so leads to the $95\%$ CL constraints on the VEVs corresponding to the yellow contours in Fig.~\ref{fig:lepton_universality}. As shown in Eq.~(\ref{LEPbounds2}), the strongest limits are on $A_8$, it being the only state coupling diagonally to the electron current. Since the mass of $A_8$ comes solely from $\langle\Phi_3\rangle$, this constrains $\phi_3$ independently of the value of $\phi_8$, which explains the shape of the exclusion contour in Fig.~\ref{fig:lepton_universality}. \subsection{Lepton flavor universality violation} Vector bosons with flavor off-diagonal couplings can induce lepton decays of the form $\ell_i\rightarrow\ell_j \bar{\nu}\nu$, which interfere with the SM amplitudes and spoil the flavor-universal nature of these weak decays. Following the prescription of~\cite{Buras:2021btx}, these effects can be measured using the amplitude ratios \begin{equation} R(\ell_i\rightarrow\ell_j) = \frac{\mathcal{A}(\ell_i\rightarrow\ell_j \bar{\nu}\nu)}{\mathcal{A}(\ell_i\rightarrow\ell_j \bar{\nu}\nu)_{\rm SM}} = 1 + 2 \frac{g'^2}{g_2^2} \frac{m_W^2}{m_{A'}^2}\,, \end{equation} where $g_2$ denotes the SU(2)$_L$ gauge coupling and we have expanded to first order in $g'^2$, {\it i.e.,} keeping only SM-NP interference terms. These can be confronted with experimental measurements of the ratios of partial widths in purely leptonic decays~\cite{HFLAV:2019otj},\footnote{The $\tau-\mu$ universality ratio can also be measured in semi-hadronic decays, but with a larger uncertainty that does not significantly strengthen the constraints.} \begin{align}\label{eq:luv_1} \frac{\mathcal{A}(\tau\rightarrow\mu \bar{\nu}\nu)}{\mathcal{A}(\tau\rightarrow e \bar{\nu}\nu)} &= 1.0018\pm 0.0014 , \\ \label{eq:luv_2} \frac{\mathcal{A}(\tau\rightarrow\mu \bar{\nu}\nu)}{\mathcal{A}(\mu\rightarrow e \bar{\nu}\nu)} &= 1.0029\pm 0.0014 , \\ \label{eq:luv_3} \frac{\mathcal{A}(\tau\rightarrow e \bar{\nu}\nu)}{\mathcal{A}(\mu\rightarrow e \bar{\nu}\nu)} &= 1.0010\pm 0.0014 , \end{align} with the correlations given in Ref.~\cite{HFLAV:2019otj}. In our model, the $R$-ratios are completely determined by the $\Phi_3$ and $\Phi_8$ VEVs, which can be parametrized using $\phi_3$ and $\phi_8$ as defined in Eqs.~\eqref{eq:Phi8_VEV} and~\eqref{eq:Phi3_VEV}. Applying the experimental constraints in Eqs.~\eqref{eq:luv_1}-\eqref{eq:luv_3} at $95\%$ CL leads to the exclusion limits shown in Fig.~\ref{fig:lepton_universality}. 
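For orientation, the size of the shift implied by the formula above can be estimated for an assumed value of $m_{A'}/g'$; the naive single-boson sketch below does not include the flavor structure or normalization factors of the full analysis, and is meant only to show the order of magnitude involved:
\begin{verbatim}
m_W, g2 = 80.4, 0.65        # GeV; approximate SU(2)_L coupling

M_over_gp = 5.0e3           # assumed M_{A'}/g' in GeV (only this ratio enters)
R_minus_1 = 2 * m_W**2 / (g2**2 * M_over_gp**2)
print(R_minus_1)            # ~1.2e-3: same order as the per-mille shifts in the
                            # measured ratios of Eqs. (luv_1)-(luv_3)
\end{verbatim}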
\subsection{Cabibbo angle (or CKM) anomaly} In addition to LFU violation, a modification of the $\mu\rightarrow e \bar{\nu}\nu$ decay rate changes the inferred value of the Fermi constant, which affects the determination of $V_{ud}$ from $\beta$ decays. The latter measurements currently exhibit a $\sim 3 \sigma$~\cite{Belfatto:2019swo,Grossman:2019bzp,Shiells:2020fqp,Seng:2020wjq} tension with the SM predictions, which could be eased by an NP contribution at the level of~\cite{Buras:2021btx} \begin{equation} R(\mu\rightarrow e) = 1.00075\pm 0.00025\,. \end{equation} This experimental anomaly was originally formulated as a breakdown of unitarity in the first row of the CKM matrix, and is thus commonly referred to as the Cabibbo angle anomaly (CAA). Since the sum $\sum_i|V_{ui}|^2$ appears to be less than unity, one interpretation is that weak decays of nuclei are suppressed relative to those of leptons. The additional contributions of $A_{6,7}$ gauge interactions to muon decays can explain the anomaly if the effective Fermi constant $G_\mu$ inferred from muon decays is increased by a factor of $(1+\delta_\mu)$, with $\delta_\mu = 7\times 10^{-4}$~\cite{Belfatto:2019swo}. This can be achieved in the present model with \begin{equation} {M_{6,7}\over g'}\cong 13\,{\rm TeV}\, \label{CAAeq} \end{equation} due to the additional neutral current contribution to $\mu\to e\nu_\mu\bar\nu_e$. This is consistent with the LEP and LFU bounds, as can be seen in Fig.~\ref{fig:lepton_universality}, where the $2\sigma$ preferred region to explain the CAA corresponds to the blue-dotted contour. If an NP origin of the anomaly were to be confirmed, it could be explained by the nonabelian model with VEVs $\phi_3\sim10-20$~TeV and $\phi_8\sim1-15$~TeV.\footnote{The LFU ratio~\eqref{eq:luv_2} also deviates by $2\sigma$ from the SM prediction, which would single out an even more restricted range $\phi_8\sim1-5$~TeV for the $\Phi_8$ VEV, as indicated by the orange dotted line in Fig.~\ref{fig:lepton_universality}.} \subsection{Gauge boson mixing} The $\Phi_6$ VEV generates mixing between all the gauge boson Lagrangian states in the model, with mixing angles of order $\sin\xi_{ij}\sim(\phi_6/\phi_{3,8})^2\lesssim 10^{-4}$ for the VEVs of interest here. To see that these are phenomenologically harmless, we consider the most sensitive observable probing transitions enabled by them, $\mu\rightarrow e \gamma$, whose branching ratio is constrained to be $\mathrm{Br}(\mu\rightarrow e\gamma)\leq 4.2\times 10^{-13}$ \cite{SINDRUMII:2006dvw,BaBar:2009hkt,MEG:2016leq}. This process is generated at one loop via $A_{1,2}-A_{4,5}$ mixing (internal $\tau$ lepton) or $A_{6,7}-A_{8}$ mixing (internal electron and muon). Following Ref.\ \cite{Buras:2021btx}, we derive the bounds \begin{align} \frac{M_{i,j}}{g'} &\geq 3.7\,\mathrm{TeV}\,\sqrt{ \frac{\sin\xi_{ij}}{10^{-4}} }\,,\nonumber\\ \frac{M_{k,8}}{g'} &\geq 48\,\mathrm{GeV}\,\sqrt{ \frac{\sin\xi_{k8}}{10^{-4}} }\,, \end{align} where $i=1,2$, $j=4,5$, and $k=6,7$. The first limit is stronger than the second one by a factor of $\sqrt{m_\tau/m_e}\sim 60$ due to a cancellation of the $\mu$-exchange contribution in the latter. In any case, neither of the two competes with the previously discussed LEP or LFU bounds. \section{Phenomenology of the heavy neutral leptons}\label{sec:HNL} The spectrum of heavy neutral leptons $N_i$, and their mixing with the light neutrinos $\nu_i$, is largely dictated by the measured $\nu_i$ masses and mixings, once the lightest mass $m_{\nu, l}$ is specified.
In particular, there is a strict relation between pairs of $m_{\nu_i}$ and $M_{N_i}$ eigenvalues, \begin{equation} m_{\nu_i}M_{N_i} = m_D^2, \quad {i=1,2,3}\,, \label{mMrel} \end{equation} where $m_D$ is the universal neutrino Dirac mass. Because of SU(3)$_\ell$ symmetry, $m_D$ is just a number, not a matrix. This implies that the logarithmic range of $N_i$ masses is the same as that of the light neutrinos, as is represented schematically in Fig.~\ref{fig:mass_hierarchy}. The mixing angles between $N_i$ and the $\nu_i$ are diagonal in the basis of mass eigenstates, and are approximately \begin{equation} U_i = {m_D\over M_{N_i}} = \sqrt{\frac{m_{\nu_i}}{M_{N_i}}}\,. \end{equation} Thus, heavier HNLs are more weakly mixed and the flavor mixing pattern of each one exactly matches that of the corresponding active neutrino, as is illustrated in Fig.~\ref{fig:mass_hierarchy}. We now proceed to quantify this picture more precisely. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figures/Mass_hierarchy.pdf} \caption{Schematic representation of the mass and mixing hierarchies of the active neutrinos and the heavy neutral leptons in the gauged SU(3)$_\ell$ model, for both normal and inverted orderings. The masses of the HNLs are predicted to be inversely proportional to the light neutrino ones, so that they are distributed following the reflected pattern represented in the figure. The mixings of each HNL mass eigenstate with the light neutrino flavors, represented by colors, exactly match the flavor admixture of the corresponding active neutrino, which is dictated by the PMNS matrix.} \label{fig:mass_hierarchy} \end{figure} \subsection{HNL masses} There are three parameters that determine the HNL mass and mixing spectrum, together with the $\Phi_6$ contribution to gauge boson masses: the Yukawa couplings $y_N$ and $y_{\L{\sss N}}$ and the mass of the lightest active neutrino $m_{\nu,l}$. This state is labeled as $\nu_1$ in the normal mass hierarchy (NO), and $\nu_3$ in the inverted one (IO), as shown in Fig.\ \ref{fig:mass_hierarchy}. The choice of hierarchy as well as the relative signs of active neutrino masses affects the HNL mixings. \begin{figure}[t] \centering \centerline{ \includegraphics[width=0.49\textwidth]{figures/Ratio_HNL_masses.pdf}} \caption{Ratios of the absolute value of the lighter two HNL masses to the heaviest one, as a function of the mass of the lightest active neutrino. For normal ordering, there is a single HNL that is substantially lighter than the other two, while for inverted ordering (IO) there are two nearly degenerate HNLs that are lighter than the third one. For IO, the curves terminate when the cosmological limit on $\sum_i |m_{\nu,i}|$ becomes saturated. } \label{fig:fs} \end{figure} From Eq.~\eqref{eq:sextet_VEV}, the sextet VEV can be written as \begin{equation} \langle \Phi_6 \rangle = - \frac{y_{\L{\sss N}}^2}{y_N} \, \frac{\bar{v}^2}{m_{\nu,l}} \, U_{\rm\scriptscriptstyle PMNS}\, \hat{D}^{-1}\, U_{\rm\scriptscriptstyle PMNS}^T\,, \end{equation} where $\hat{D}$ is the diagonal active neutrino mass matrix rescaled by $1/m_{\nu,l}$ and $U_{\rm\scriptscriptstyle PMNS}$ is the PMNS matrix with columns arranged according to our flavor ordering scheme $(\tau,\,\mu,\,e)$. The $\Phi_6$ VEV determines the ratio $M_{3}/g'$ for the $\mu-\tau$ gauge boson via Eq.~\eqref{gbmasses}, and thus its ability to explain the $(g-2)_\mu$ anomaly. 
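A short numerical sketch (with an assumed value of $y_{\L{\sss N}}$, within the range discussed below) illustrates the scales implied by these relations:
\begin{verbatim}
vbar = 174.0                     # GeV
y_LN = 1e-7                      # assumed Dirac Yukawa
m_D = y_LN * vbar                # universal Dirac mass ~ 1.7e-5 GeV

for m_nu_eV in (0.05, 0.009, 0.001):         # sample light-neutrino masses
    m_nu = m_nu_eV * 1e-9                     # GeV
    M_N = m_D**2 / m_nu                       # Eq. (mMrel)
    U = m_D / M_N                             # = sqrt(m_nu / M_N)
    print(f"m_nu = {m_nu_eV} eV -> M_N = {M_N:6.1f} GeV, U ~ {U:.1e}")
# -> M_N ~ (6, 34, 300) GeV with mixings U ~ 3e-6 down to 6e-8:
#    heavier HNLs are more weakly mixed
\end{verbatim}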
As was noted in the previous section, \begin{equation} \frac{M_3}{g'}\in (18\,\mathrm{GeV},\,200\,\mathrm{GeV}) \end{equation} is the preferred range at the $2\sigma$ level, corresponding to $g'\in(3.7,\,13)\times10^{-4}$. The precise value of $M_3$ predicted in this way has a weak dependence on the relative sign of the active neutrino masses, which amounts to a factor of $\sim 2$ for $m_{\nu,l}=0.03$~eV and disappears for $m_{\nu,l}\ll 0.03$\,eV. \begin{figure*}[t] \centering \centerline{ \includegraphics[width=0.49\textwidth]{figures/HNL_mixing_NO.pdf}\hfil \includegraphics[width=0.49\textwidth]{figures/HNL_mixing_IO.pdf}} \caption{Dark blue: predictions from the gauged SU(3)$_\ell$ lepton flavor model for the mass and mixing of light HNLs. The mixings are normalized to the muon \emph{(left)} or electron \emph{(right)} one as indicated in the insets. We show bounds (grey) from terrestrial searches as compiled in~\cite{Alekhin:2015byh} as well as the BBN limit (green) computed following~\cite{Boyarsky:2020dzc,Bondarenko:2021cpc}. For the normal mass hierarchy \emph{(left)}, there is a single light HNL with suppressed $e$ mixing and we therefore show terrestrial searches based on the $\mu$ mixing. For inverted hierarchy \emph{(right)}, there are two quasi-degenerate light HNLs with fairly universal mixings and the leading terrestrial constraints therefore come from $e$ mixing.} \label{fig:HNL_mixing_BBN} \end{figure*} The mass matrix for the HNLs is in turn determined by the sextet VEV as \begin{eqnarray}\label{eq:HNL_mass} M_N &=& y_N\langle\Phi_6\rangle\nonumber\\ &=& - y_{\L{\sss N}}^2 \frac{\bar{v}^2}{m_{\nu,l}} \, U_{\rm\scriptscriptstyle PMNS}\, \hat{D}^{-1}\, U_{\rm\scriptscriptstyle PMNS}^T\,. \end{eqnarray} Using the top line in Eq.~\eqref{eq:HNL_mass}, the $(g-2)_\mu$-preferred values for $\langle\Phi_6\rangle$ result in the prediction that the scale of HNL masses lies in the $\mathcal{O}(1-100)\,\mathrm{GeV}$ range for $\mathcal{O}(1)$ values of $y_N$. Eq.~\eqref{eq:HNL_mass} implies that the PMNS matrix also represents the unitary transformation that diagonalizes $M_N$. As a consequence, each HNL mass eigenstate is strictly inversely proportional to the corresponding active neutrino mass, \begin{equation} M_{N_i} = - y_{\L{\sss N}}^2 \frac{\bar{v}^2}{m_{\nu_i}}\,. \end{equation} This means that in the normal ordering case, there is one light and two heavy HNLs, while for inverted ordering there are two light HNLs and one heavy one. This is schematically represented in Fig.~\ref{fig:mass_hierarchy}, while the quantitative mass ratios are shown in Fig.~\ref{fig:fs} as a function of $m_{\nu,l}$. The largest considered value of $m_{\nu, l}=0.03\, (0.015)$~eV in the NO (IO) case roughly saturates the cosmological constraint on the sum of the absolute value of neutrino masses\footnote{A recent evaluation~\cite{DiValentino:2021hoh} finds a tighter limit $\sum |m_{\nu_i}|\leq0.09$~eV, which would compromise the viability of the IO scenario but would not significantly affect the NO one.} $\sum |m_{\nu_i}|\leq0.12$~eV~\cite{Planck:2018vyg}.
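To make the pattern explicit, the following sketch builds $M_N$ from Eq.~\eqref{eq:HNL_mass} using representative oscillation parameters (normal ordering, CP phase neglected, values inserted by hand) and an assumed value of $y_{\L{\sss N}}$; its eigenvalues indeed scale as $y_{\L{\sss N}}^2\bar v^2/m_{\nu_i}$:
\begin{verbatim}
import numpy as np

# Representative (assumed) oscillation parameters: NO, CP phase neglected.
s12, s13, s23 = np.sqrt([0.31, 0.022, 0.56])
c12, c13, c23 = np.sqrt(1.0 - np.array([s12, s13, s23]) ** 2)
U_pmns = (np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]]) @
          np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]]) @
          np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]]))
U = U_pmns[::-1, :]                    # reorder rows to the (tau, mu, e) convention

m_l = 3e-3                             # assumed lightest neutrino mass in eV
m_nu = np.sqrt(m_l**2 + np.array([0.0, 7.4e-5, 2.5e-3])) * 1e-9   # GeV

vbar, y_LN = 174.0, 1e-7               # y_LN is an assumed value
M_N = -y_LN**2 * vbar**2 * U @ np.diag(1.0 / m_nu) @ U.T          # Eq. (eq:HNL_mass)

print(np.sort(np.abs(np.linalg.eigvalsh(M_N))))   # HNL masses in GeV
print(np.sort(y_LN**2 * vbar**2 / m_nu))          # both prints agree: ~(6, 33, 101) GeV
\end{verbatim}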
\subsection{HNL mixings} \begin{figure*}[t] \centering \centerline{ \includegraphics[width=0.49\textwidth]{figures/HNL_yN_yLN_no.pdf}\hfil \includegraphics[width=0.49\textwidth]{figures/HNL_yN_yLN_io.pdf}} \caption{Regions of the $y_N$-$y_{\L{\sss N}}$ parameter space for which the gauged SU(3)$_\ell$ model can address the $(g-2)_\mu$ anomaly while satisfying all other phenomenological constraints, for various values of the lightest neutrino mass as labeled in the figure. The blue-dotted regions show where the $A_3$ gauge boson mass and coupling $g'$ are consistent with the $2\sigma$ preferred values shown in Fig~\ref{fig:gminus2}. The green region is the BBN exclusion from Fig.~\ref{fig:HNL_mixing_BBN} and the orange one is forbidden by perturbative unitarity of the HNL Yukawa coupling, $y_N\leq\sqrt{4\pi}$. Left (right) panel corresponds to the NO (IO) active neutrino mass hierarchy, respectively. } \label{fig:HNL_yN_yLN} \end{figure*} Transforming to the HNL mass eigenbasis (denoted by primes), it is straightforward to show that the mixing between the light neutrino flavors and the HNLs is given by \begin{eqnarray} \nu_\alpha &\cong&\phantom{-} \nu'_\alpha + U_{\alpha i}\, N_i'\nonumber\\ N_i &\cong& -U_{\alpha i}\,\nu'_\alpha + N'_i \,, \end{eqnarray} where \begin{eqnarray} U_{\alpha i} = U^{\mathrm{\scriptscriptstyle PMNS}}_{\alpha i} \frac{m_{\nu_i}}{y_{\L{\sss N}}\bar{v}}\,. \end{eqnarray} The mixing structure is completely determined by the PMNS matrix: the mixing pattern of the HNL mass eigenstate $i$ is proportional to the flavor composition of the corresponding active neutrino mass eigenstate (see Fig.~\ref{fig:mass_hierarchy} for a graphical representation). The approximate scale of the mixing is $U = (m_\nu/M_N)^{1/2}\sim 10^{-6}-10^{-5}$ for HNLs in the $1-100$~GeV mass range. While the HNL masses and mixings predicted by the SU(3)$_\ell$ model are out of reach for present and upcoming terrestrial experiments, they have significant implications for cosmology. In particular, HNLs decaying too close to the time of Big Bang Nucleosynthesis (BBN) can affect the predictions for light element yields~\cite{Dolgov:2000pj,Dolgov:2000jw}. The strongest current constraints come from the hadronic decays of the HNLs, which inject light mesons into the plasma that can act to imbalance the proton-to-neutron ratio at BBN~\cite{Boyarsky:2020dzc}. Following the prescription of~ Ref.\ \cite{Bondarenko:2021cpc} together with the width calculations of Ref.~\cite{Bondarenko:2018ptm}, we obtain the constraints shown in Fig.~\ref{fig:HNL_mixing_BBN} for normal and inverted mass hierarchies. For NO, there is a single light HNL whose mixings match those of $\nu_3$ (see Fig.~\ref{fig:mass_hierarchy}). As a consequence, its $e$ mixing is suppressed by a factor of $\sim 5$ compared to the $\mu$ and $\tau$ mixings. Then the BBN bound gives $M_{N_3}\gtrsim2.4$~GeV or $U^2_{\mu3}\lesssim 1\times 10^{-11}$, which translates into $y_{\L{\sss N}}\gtrsim 6.4\times10^{-8}$. IO has two light HNL with quasi-degenerate masses and mixings dictated by the flavor composition of $\nu_1$ and $\nu_2$ (see Fig.~\ref{fig:mass_hierarchy}). The BBN bound arising from the combined effect of both HNLs leads to $M_{N_1}\sim M_{N_2}\gtrsim0.95$~GeV and $U^2_{e1}\lesssim 4\times 10^{-11}$, implying that $y_{\L{\sss N}}\gtrsim 4.0\times10^{-8}$. 
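The translation of these BBN bounds into lower limits on $y_{\L{\sss N}}$ follows directly from $M_{N} = y_{\L{\sss N}}^2\bar v^2/m_\nu$; a short numerical check (taking the heaviest light neutrino mass to be $\simeq 0.05$~eV, an assumption consistent with the measured splittings) reproduces the quoted numbers:
\begin{verbatim}
import numpy as np

vbar = 174.0                   # GeV
m_nu3 = 0.05e-9                # GeV; assumed heaviest light neutrino mass ~ 0.05 eV

# BBN lower bounds on the lightest HNL mass quoted above, translated into y_LN
for label, M_N_min in (("NO", 2.4), ("IO", 0.95)):          # GeV
    print(label, np.sqrt(M_N_min * m_nu3) / vbar)           # y_LN lower bound
# NO -> ~6.3e-8 and IO -> ~4.0e-8, in agreement with the values quoted above
\end{verbatim}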
The BBN constraint on $y_{\L{\sss N}}$, together with the bound $y_N\leq\sqrt{4\pi}$ arising from perturbative unitarity in $N\bar{N}\rightarrow N\bar{N}$ scattering, rule out regions of the $y_{\L{\sss N}}$ versus $y_{\sss N}$ parameter space, including some relevant for explaining the $(g-2)_\mu$ anomaly. They are shown in Fig.~\ref{fig:HNL_yN_yLN} for several choices of the lightest active neutrino mass and for the two possible neutrino mass orderings. Here we assume that $m_{\nu_3}$ has opposite sign to $m_{\nu_1}$ and $m_{\nu_2}$, which gives the model the greatest latitude for consistency. Other choices of relative signs differ by factors $\lesssim 2$ for the extent of $(g-2)_\mu$ preferred regions. Several conclusions can be drawn from Fig.~\ref{fig:HNL_yN_yLN}. The minimum value for $m_{\nu,l}$ that is compatible with the experimental measurement of $(g-2)_\mu$ is $m_{\nu,l}\simeq 10^{-4}$~eV. Consequently, the maximum possible mass for the heaviest HNL is $M_{N_1}\simeq 1.2$~TeV for NO and $M_{N_3}\simeq 1$~TeV for IO. At the other extreme, values of $y_{\L{\sss N}}$ as large as $10^{-6}$ are possible if the mass of the lightest neutrino saturates the cosmological upper bound. This reveals a novel connection between the scale of neutrino masses and the anomalous magnetic moment of the muon within the SU(3)$_\ell$ paradigm. It constitutes a unique prediction of the proposal, which could exclude it definitively in light of more precise experimental data. \section{Conclusions} \label{sect:conc} In this work, we have proposed a framework for gauging the vectorial SU(3) family symmetry of lepton flavor in a minimal way. The construction can consistently reproduce the observed patterns of charged lepton masses and neutrino masses and mixings. The new particle content of the model, in addition to the eight leptophilic gauge bosons, consists of a right-handed neutrino triplet and a triplet of vectorlike isosinglet charged lepton partners. In addition, three scalar multiplets in the fundamental, symmetric two-index, and adjoint representations of SU(3)$_\ell$ provide the symmetry breaking required to reproduce the observed lepton masses and mixings, and allowed gauge boson masses. The charges and interactions are collected in Table~\ref{tab1} and Eq.~\eqref{eq:Lagrangian}, respectively. One of the new leptophilic gauge bosons, the one associated with the $L_\mu-L_\tau$ current, is taken to lie in the $10-200$~MeV range, being parametrically lighter than the others. This allows it to address the current discrepancy between experimental measurements and SM predictions of the anomalous magnetic moment of the muon. Moreover, the Cabibbo angle anomaly can be explained by the pair of vector bosons that couple off-diagonally to $e\mu$, with masses in the $1-10$~GeV range, while complying with all other LEP and lepton-flavor universality constraints. Interestingly, the relative lightness of the $L_\mu-L_\tau$ gauge boson is shown to be linked to a low ($\sim 1$-$100\,$GeV) mass scale for the right-handed neutrinos that participate in the seesaw mechanism. As shown schematically in Fig.~\ref{fig:mass_hierarchy}, the masses of the HNL eigenstates are inversely proportional to the light neutrino ones, and their mixings with the active flavors match the flavor composition of the corresponding light neutrino mass eigenstates. These two connections lead us to predict that the lightest active neutrino cannot be arbitrarily light, but must rather have a mass larger than $\sim 0.1\,$~meV. 
Once fit to the observed lepton and neutrino properties (including $(g-2)_\mu$), the gauged SU(3)$_\ell$ scenario is highly predictive and makes potentially testable forecasts for existing and upcoming astrophysical and particle physics experiments. Focusing on the HNL sector, there are three main avenues that complement each other in testing the parameter space in Fig.~\ref{fig:HNL_yN_yLN}: (\emph{i}) more precise experimental measurements and SM predictions for the muon anomalous magnetic moment; (\emph{ii}) refined measurements and theoretical calculations of the light-element yields and the number of relativistic degrees of freedom at Big Bang Nucleosynthesis, and (\emph{iii}) improved determinations of the absolute scale of neutrino masses from cosmology or terrestrial experiments like KATRIN~\cite{Aker:2021gma}. In the charged lepton sector, electroweak precision constraints on lepton mixing with heavy vectorlike partners, needed for realistic lepton mass generation, constitute a powerful test of the model. These lepton partners might be discoverable at the High-Luminosity LHC or other future colliders. In contrast, although they are energetically accessible, the new scalar fields in the model are weakly coupled to the SM states, making their possible detection difficult. Although it is not mandatory for the model, it is possible to introduce an additional fermionic vectorlike triplet $\Psi_i$ as a dark matter candidate. As in the $U(1)_{\L_\mu-\L_\tau}$ model of Refs.~\cite{Foldenauer:2018zrz, Holst:2021lzm}, the observed relic density could be achieved through resonantly enhanced $\Psi_{\mu/\tau}\bar\Psi_{\mu/\tau}\to \mu^+\mu^- + \nu_{\mu,\tau}\bar\nu_{\mu,\tau}$ annihilations. The obstacle in the SU(3)$_\ell$ model is that the $\Psi_e$ component would not be sufficiently suppressed unless its mass also happens to allow for resonantly enhanced (co)annihilations through the exchange of the heavier gauge bosons. This would require additional model building, which we do not pursue here. A remaining challenge for future study is to construct a suitable scalar potential leading to the required $\Phi_3$, $\Phi_6$, and $\Phi_8$ VEVs. Although the preliminary study in Appendix~\ref{app:VEVs} makes it plausible that the desired symmetry breaking pattern can be achieved, further investigation is required to prove it. It is in principle possible to construct a similar model for quark masses and mixing based on a spontaneously broken SU(3)$_B$ symmetry. This extension, combined with the present proposal including mixing between the SU(3)$_\ell$ and SU(3)$_B$ gauge bosons, might explain the accumulating evidence for lepton-universality violation in $b\rightarrow s\ell^+\ell^-$ decays~\cite{Geng:2021nhg}. Work in this direction is underway. \newpage {\bf Acknowledgments.} We thank B.\ Grinstein, A.\ Crivellin, W.\ Altmannshofer, and F. Kahlhoefer for helpful correspondence. This work was supported by NSERC (Natural Sciences and Engineering Research Council, Canada). G.A. is supported by the McGill Space Institute through a McGill Trottier Chair Astrophysics Postdoctoral Fellowship. \bibliographystyle{utphys}
\section{Introduction} Generative Adversarial Networks (GANs)\cite{Goodfellow:2014:GAN:2969033.2969125} have been found to produce images of very high quality on some datasets \cite{DBLP:conf/iclr/KarrasALL18, DBLP:journals/corr/abs-1812-04948}. However, their results on other datasets, while impressive, still lag behind \cite{DBLP:journals/corr/abs-1809-11096}. This raises the question of whether GANs are indeed the right choice to model some distributions. This paper aims to test the distribution learning ability of GANs by evaluating them on synthetic datasets. \subsection{Related Works and Contributions} It has been proposed in recent work that \say{a high number of classes is what makes ImageNet \cite{imagenet_cvpr09} synthesis difficult for GANs} \cite{pmlr-v70-odena17a}. Indeed, GANs have been able to produce very high quality images on CelebA \cite{DBLP:journals/corr/abs-1812-04948, DBLP:conf/iclr/KarrasALL18}, while results on ImageNet are not so impressive \cite{DBLP:journals/corr/abs-1809-11096}. Because distributions of natural images are complex, in this work we focus our attention on synthetically generated datasets. We study the learnability of commonly encountered distributions in low dimensional space and the impact of discontinuity. Additionally, we evaluate a specific aspect of learning high dimensional image distributions: counting similar objects in a scene. This constitutes an important part of learning latent space representations of images, since for an image to be semantically well-formed, certain objects must obey certain numerical constraints (for example, the number of heads on an animal). Our evaluation is performed on synthetic point and image datasets. To our knowledge, the only instance of synthetic image datasets used for GAN evaluation has been to learn manifolds of convex polygons (specifically triangles) \cite{Lucic:2018:GCE:3326943.3327008}. Although we also use polygons as a testbed for our experiments, we focus on learning a manifold of images containing multiple polygons whose number is fixed. Our contributions are as follows: \begin{enumerate} \item We show via experiments on synthetic datasets that commonly found distributions are learnable by GANs. We also highlight that distributions with gaps in support may be difficult to learn without using a mixture of generators. \item We empirically evaluate whether GANs can learn semantic constraints on objects in images in high dimensional space. Specifically, we test a GAN's ability to count an object that is repeated in an image. \item We underline a possible tension between the generalization ability of GANs and their learning capabilities. \end{enumerate} \section{Experimental Setup} In this section, we describe the setup of our experiments, which includes details about the datasets generated, the architectures used, and the reasoning behind them. \subsection{Datasets} We generate two kinds of datasets (each with 5000 examples) for our evaluation: point datasets, where each sample is a point in $R^n$, and image datasets, where each image contains a fixed number of polygons. We use 4 point datasets in our evaluation: Mixtures of Gaussians, Concentric Circles, S-shaped curves, and Swiss rolls. The first two are 2D while the latter two are in 3D space. This choice was made to enable us to visualize the learned distribution. For each of these four settings, we experimented with three variants, each containing a different amount of noise.
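The exact generation parameters are relegated to the Appendix; the following scikit-learn sketch (with illustrative noise levels and seeds, not necessarily those used in our experiments) shows one way such point datasets can be produced:
\begin{verbatim}
import numpy as np
from sklearn import datasets

n, seed = 5000, 0
blobs, _      = datasets.make_blobs(n_samples=n, centers=3, cluster_std=0.5,
                                    random_state=seed)          # mixture of Gaussians (2D)
circles, _    = datasets.make_circles(n_samples=n, factor=0.5, noise=0.05,
                                      random_state=seed)        # concentric circles (2D)
s_curve, _    = datasets.make_s_curve(n_samples=n, noise=0.05,
                                      random_state=seed)        # S-shaped curve (3D)
swiss_roll, _ = datasets.make_swiss_roll(n_samples=n, noise=0.05,
                                         random_state=seed)     # Swiss roll (3D)
print(blobs.shape, circles.shape, s_curve.shape, swiss_roll.shape)
\end{verbatim}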
\begin{figure} \centering \begin{minipage}{.45\linewidth} \includegraphics[width=\linewidth]{Unknown-4.png} \caption{S-Curve Distribution} \label{img4} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.45\linewidth} \includegraphics[width=\linewidth]{Unknown-5.png} \caption{Swiss Roll Distribution} \label{img5} \end{minipage} \end{figure} To evaluate high-dimensional learning, we generate image datasets which are mixtures of polygons. We created three datasets, whose images contain, respectively: 1 square of size 4x4 (called Squares 1-4), 3 squares of size 4x4 (called Squares 3-4), and a mixture of two triangles and two circles (called CT2). All the datasets contain images of size 28x28, with the third one containing 3 channels. Additionally, for the first two datasets, all squares are non-overlapping and have edges which are axis-aligned. CT2, on the other hand, contains overlapping polygons. In each dataset, the number of objects is fixed and the only varying quantity is their position (which varies with a Gaussian distribution). For the square datasets, even the rotation and shape of the objects are held constant. Hence, the only source of variation is their position. Some examples are shown in Figures \ref{img1}, \ref{img2}, \ref{img3}. \begin{figure} \centering \begin{minipage}{.25\linewidth} \includegraphics[width=\linewidth]{Unknown-7.png} \caption{Image from the Squares 1-4 dataset} \label{img1} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.25\linewidth} \includegraphics[width=\linewidth]{Unknown-8.png} \caption{Image from the Squares 3-4 dataset} \label{img2} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.25\linewidth} \includegraphics[width=\linewidth]{Unknown-9.png} \caption{Image from the CT2 dataset} \label{img3} \end{minipage} \end{figure} The main objective behind creating images with a fixed number of objects was to test whether GANs can learn to count the number of objects in a scene. More specifically, since GANs have shown impressive image generation capabilities on centered image datasets \cite{DBLP:conf/iclr/KarrasALL18}, we want to measure whether that performance can transfer to datasets with objects occurring at varying locations in the scene. Since the only varying quantity in the square datasets is the position, we would expect GANs with true distribution learning abilities to be able to produce images with the exact number of squares at different positions in the image. Additional details about our data generation process can be found in the Appendix (a minimal illustrative sketch of the square-image generation is also given below). \subsection{Architectures} We use two sets of architectures to train our models. For point datasets, we use a Vanilla GAN with a 3-layer MLP for both the generator and discriminator, and another model with the same architecture but a Wasserstein loss (enforced via gradient penalty) \cite{Gulrajani:2017:ITW:3295222.3295327}. For image datasets, we use one model with a DCGAN-inspired \cite{DBLP:journals/corr/RadfordMC15} architecture and another model with the same architecture but a Wasserstein loss (enforced via gradient penalty). We do not evaluate Vanilla GANs on our image datasets because they do not seem to be competitive with the other models in our experiments. Further architectural details (including the choice of hyperparameters) are described in the Appendix. \subsection{Experimental Details} We used the Google Colaboratory environment (12GB RAM, Nvidia Tesla K80) for all our experiments in this paper.
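As referenced above, the following is a minimal sketch of how images in the Squares 3-4 dataset can be generated; the mean and width of the Gaussian position distribution and the rejection scheme for non-overlap are our own assumptions, the authors' exact procedure being given in their Appendix:
\begin{verbatim}
# Illustrative generation of a Squares 3-4 image: three axis-aligned,
# non-overlapping 4x4 squares on a 28x28 canvas, positions drawn from a
# Gaussian around the image centre (an assumption) and clipped to the canvas.
import numpy as np

def squares_image(n_squares=3, size=4, img=28, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    canvas = np.zeros((img, img), dtype=np.float32)
    placed = []
    while len(placed) < n_squares:
        r, c = rng.normal(loc=img / 2, scale=img / 4, size=2)
        r, c = int(np.clip(r, 0, img - size)), int(np.clip(c, 0, img - size))
        # keep only non-overlapping placements
        if all(abs(r - r0) >= size or abs(c - c0) >= size for r0, c0 in placed):
            canvas[r:r + size, c:c + size] = 1.0
            placed.append((r, c))
    return canvas

dataset = np.stack([squares_image() for _ in range(5000)])   # 5000 examples
\end{verbatim}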
All models with Wasserstein loss are trained with RMSProp \cite{Tieleman2012} with a learning rate of 0.00005. All others are trained with the ADAM \cite{DBLP:journals/corr/KingmaB14} optimizer with a learning rate of 0.0002. We train models for up to 150k training steps and stop earlier if the model converges. Increasing the number of training steps beyond 150k was not found to significantly improve sample quality. For WGAN-GP-inspired architectures, we update the discriminator 5 times for each generator update. Additional details about our experiments and architectures can be found in the Appendix. \section{Results and Analysis} \subsection{Point Data} \begin{figure} \centering \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{Unknown-10.png} \caption{Original distribution of Concentric circles} \label{img6} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{conc-circ.png} \caption{Learned Distribution after 150k steps. \textbf{Note:} this distribution improves with more iterations, but we show the one after 150k steps for consistency} \label{img7} \end{minipage} \end{figure} \begin{figure} \centering \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{Unknown-12.png} \caption{Original distribution of 3 blobs} \label{img8} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{Unknown-11.png} \caption{Learned Distribution after 150k steps} \label{img9} \end{minipage} \end{figure} On mixtures of Gaussians and concentric circles, we find that both architectures seem to perform equally well. Both appear to approach approximate convergence, as their discriminators' accuracies oscillate around 50\%. Since both datasets contain disconnected components, we find that neither model is able to capture this discontinuity, and as a result both still produce samples which lie in between clusters of data. This may explain the oscillatory behaviour observed. Examples of 1000 samples from the real and fake distributions are shown in Figures \ref{img7} and \ref{img9}. \begin{figure} \centering \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{Unknown-4.png} \caption{Original distribution of the shape S (minimal noise)} \label{img10} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{Unknown-14.png} \caption{Learned Distribution after 140k steps (minimal noise)} \label{img11} \end{minipage} \end{figure} \begin{figure} \centering \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{Unknown-5.png} \caption{Original distribution of the Swiss roll (minimal noise)} \label{img12} \end{minipage} \hspace{.05\linewidth} \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{Unknown-13.png} \caption{Learned Distribution after 150k steps (minimal noise)} \label{img13} \end{minipage} \end{figure} The inability to model discontinuity is understandable, since the latent space is continuous and the neural network can be considered a continuous function approximator, so the output has to be continuous as well. This virtually guarantees that some samples from the model will necessarily be ``bad''. Next, we evaluate the S-curve and Swiss roll distributions; mixtures of Gaussians have traditionally been the toy distribution of choice for GAN evaluation, whereas these two are less commonly used. We find that both distributions are learned fairly faithfully, but increasing noise can cause separate surfaces in the distribution to be merged.
For example, with increased noise, the S shape in the S-curve can become an 8, or the Swiss roll may look like a circle (samples in the Appendix). This phenomenon, in our experiments, seems to be worse (in terms of losing shape) for Swiss rolls than for S-curves. We hypothesize this may be due to overlapping surfaces being closer together in the Swiss roll distribution, making it more sensitive to noise. This dependence of sensitivity to noise on the underlying distribution suggests that GANs may not be suitable for modelling certain distributions in noisy environments, and alternative generative models may need to be explored. \subsection{Image Datasets} During training, the first model (without Wasserstein loss) seems to converge after about 74k steps, while the other one does not seem to converge. In spite of this difference, we do not find a major difference in image quality between the two models after 150k steps. We find that both models fail to learn to count on the Squares 3-4 dataset. Our models can potentially produce anything between 0 and 5 squares. Some random samples can be seen in Figures \ref{img14} and \ref{img15} (one simple way to quantify such counts is sketched below). \begin{figure} \centering \includegraphics[width=\linewidth]{Unknown-15.png} \vspace*{-1.2cm} \caption{Samples after 150k steps from DCGAN (converged)} \label{img14} \vspace*{0.5cm} \centering \includegraphics[width=\linewidth]{Unknown-16.png} \vspace*{-1.2cm} \caption{Samples after 150k steps from WGAN-GP} \label{img15} \end{figure} This raises an important question about the learning capability of GANs. Is what we are seeing just poor learning, or should it instead be viewed as generalization? For example, in natural image datasets, say faces, we might see GANs produce images with completely new hair placement. When these images look ``natural'' to the observer, they might see it as generalization. However, when that image does not look plausible, we would call it poor learning. As it stands, there seems to be no clear demarcation between poor learning outcomes and generalization. Since the only source of variation in our image datasets is location, we would like to see the GAN learn that there are 3 squares in each image and then generalize their position. However, currently, we have no way of enforcing this constraint on the number or shape of objects. Quantifying this apparent tradeoff between learning and generalization is an interesting avenue for further work. In fact, there might not even be a tradeoff, and GANs may fundamentally be unable to learn distributions of such a nature. We leave the evaluation of these hypotheses as future work. We do not focus our discussion excessively on shape because, for more natural datasets, shape is more variable and not rigid, unlike in our case. For example, we know that cats have 4 legs, but the legs may not look the same in every image. A natural objection would be that a similar reasoning would be applicable to the count of objects, i.e., some of these legs may be occluded in the image and, therefore, it may appear to a GAN that a cat could have 3 legs. Thus, GANs producing cats with fewer than 4 legs could be justified. That is precisely why we have chosen non-overlapping squares for our experiment. Since there are no occlusions, the GAN should learn that in every image there are supposed to be exactly 3 squares. Even in real-world datasets, it may not be possible to collect a dataset in which all cats' legs have the same shape. However, it is possible to collect a dataset with cats having four legs in all images (no occlusions).
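As one possible way to make the counting evaluation quantitative (this is our own illustrative metric; the paper does not specify an automated counting procedure), the number of distinct squares in a generated sample can be estimated by counting connected components after thresholding:
\begin{verbatim}
# Count connected components in a binarised 28x28 sample; for well-separated,
# axis-aligned squares this equals the number of squares in the image.
import numpy as np
from scipy.ndimage import label

def count_objects(image, threshold=0.5):
    binary = np.asarray(image) > threshold
    _, n_components = label(binary)
    return n_components

# e.g. histogram of counts over a batch of generator outputs `samples`:
# counts = [count_objects(s) for s in samples]
\end{verbatim}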
Interestingly, both models seem to learn that there is only one square in the Squares 1-4 dataset, i.e., when trained on the Squares 1-4 dataset, most samples seem to include just one square whose size looks visually close to 4x4. Admittedly, this visual test of similarity may not be enough to ascertain whether the shape and size of the square correspond exactly to a 4x4 square. However, during training with images of 1 square, we observed that if we increase the size of the square, i.e., use 16x16 instead of 4x4 (Squares 1-16), the likelihood of multiple squares appearing in a generated image decreases. We also observe that all generated squares seem to be axis-aligned. Therefore, GANs seem to have no problem learning orientation but cannot seem to enforce counting constraints. We also investigated whether a GAN can leverage the fact that it has learnt what a square looks like on Squares 1-4. In this experiment, we transferred the weights from that trained model and fine-tuned it on the Squares 3-4 dataset. We did not observe a noticeable improvement in image quality. If anything, image quality tends to get worse. Samples from models trained on CT2 and additional samples from our models are included in the Appendix. We observed that the GANs did not sufficiently reproduce circles and triangles when trained on CT2. Since CT2 contains overlaps and multiple types of polygons, we consider it to be a more challenging dataset to model. \section{Conclusion} In this paper, we present the phenomenon of GANs being unable to count. We support this hypothesis with experiments on synthetic datasets where the count of similar objects in a scene is kept constant while their location is varied. We find that, in their current form, GANs are unable to learn such semantic constraints, even in the absence of the noise introduced by natural image datasets. We also emphasize the fine line between generalization and good learning outcomes in GANs. Additionally, we conduct experiments on non-image data, from which we conclude that GANs tend to have difficulty learning discontinuous distributions, which might necessitate the use of mixtures of generators. A thorough evaluation of such an approach is left as future work.
\section{Introduction} \label{sec:intro} Cosmological inflation \cite{Starobinsky:1980te, Sato:1980yn, Guth:1980zm, Linde:1981mu, Albrecht:1982wi, Linde:1983gd} is a phase of accelerated expansion that took place at high energy in the primordial universe. It explains the homogeneity and isotropy of the universe at large scales. Furthermore, during inflation, quantum vacuum fluctuations are amplified and swept up by the expansion to seed the observed large-scale structure of our universe~\cite{Mukhanov:1981xt, Mukhanov:1982nu, Starobinsky:1982ee, Guth:1982ec, Hawking:1982cz, Bardeen:1983qw}. Those fluctuations can be described with the curvature perturbation, $\zeta$, and at scales accessible to Cosmic Microwave Background (CMB) experiments~\cite{Ade:2015xua, Ade:2015lrj}, they are constrained to be small ($\zeta\sim 10^{-5}$ until they re-enter the Hubble radius). At smaller scales, however, $\zeta$ may become much larger, such that, upon re-entry into the Hubble radius, it can overcome the pressure gradients and collapse into primordial black holes (PBHs)~\cite{Hawking:1971ei,Carr:1974nx,Carr:1975qj}. This is why the renewed interest in PBHs triggered by the LIGO--Virgo detections~\cite{LIGOScientific:2018mvr}, see e.g.~\Refs{Clesse:2020ghq, Abbott:2020mjq}, has led to a need to understand inflationary fluctuations produced in non-perturbative regimes. Observational constraints on the abundance of PBHs are usually stated in terms of their mass fraction $\beta_M$, defined such that $\beta_M\mathrm{d}\ln M$ corresponds to the fraction of the mass density of the universe contained in PBHs with masses in a logarithmic interval $\mathrm{d}\ln M$ around $M$, at the time PBHs form. Light PBHs ($10^9\mathrm{g}< M < 10^{16} \mathrm{g}$) are mostly constrained by the effects of Hawking evaporation on big bang nucleosynthesis and the extragalactic photon background, and constraints typically range from $\beta_M<10^{-24}$ to $\beta_M<10^{-17}$. Meanwhile, heavier PBHs ($10^{16} \mathrm{g} < M < 10^{50} \mathrm{g}$) are constrained by dynamical or gravitational effects (such as the microlensing of quasars), with constraints ranging from $\beta_M<10^{-11}$ to $\beta_M<10^{-5}$ (see \Refa{Carr:2020gox} for a recent review of constraints). A simple calculation of the abundance of PBHs proceeds as follows. In the perturbative regime, curvature fluctuations produced during inflation have a Gaussian distribution function $P(\zeta)$, where the width of the Gaussian is simply related to the power spectrum of $\zeta$. The mass fraction then follows from the probability that the mean value of $\zeta$ inside a Hubble patch exceeds a certain formation threshold $\zeta_\mathrm{c}$, \begin{equation}\begin{aligned} \label{eq:def:beta} \beta_M \sim 2\int_{\zeta_\mathrm{c}}^\infty P\left(\zeta\right) \mathrm{d} \zeta\, . \end{aligned}\end{equation} This formula follows from the Press-Schechter formalism~\cite{1974ApJ...187..425P}, while more refined estimates can be obtained in the excursion-set~\cite{Peacock:1990zz, Bower:1991kf, Bond:1990iw} or peak-theory~\cite{Bardeen:1985tr} approaches. In practice, $\zeta_\mathrm{c}$ depends on the details of the formation process but is typically of order one~\cite{Zaballa:2006kh, Harada:2013epa},\footnote{The density contrast can also provide an alternative criterion for the formation of PBHs, see \Refa{Young:2014ana}.} and $M$ is some fraction of the mass contained in a Hubble patch at the time of formation~\cite{Choptuik:1992jv, Niemeyer:1997mt, Kuhnel:2015vtw}.
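For concreteness, in the Gaussian case the integral in \Eq{eq:def:beta} reduces to a complementary error function; the following minimal numerical sketch, assuming $\zeta_\mathrm{c}=1$ and a purely illustrative value for the dispersion $\sigma$ of the Hubble-patch average of $\zeta$, shows the orders of magnitude involved:
\begin{verbatim}
# beta_M = 2 * int_{zeta_c}^inf Gaussian(0, sigma^2) d zeta
#        = erfc(zeta_c / (sqrt(2) * sigma)); sigma = 0.1 is an assumption,
# zeta_c = 1 as quoted in the text.
import numpy as np
from scipy.special import erfc

def beta_gaussian(sigma, zeta_c=1.0):
    return erfc(zeta_c / (np.sqrt(2.0) * sigma))

print(beta_gaussian(sigma=0.1))  # ~1.5e-23, within the light-PBH constraint range
\end{verbatim}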
If $P(\zeta)$ is Gaussian, $\beta_M$ is thus directly related to the power spectrum of $\zeta$, and the only remaining task is to compute this power spectrum in different inflationary models. However, it has recently been pointed out~\cite{Pattison:2017mbe, Biagetti:2018pjj, Ezquiaga:2018gbw} that since PBHs require large fluctuations to form, a perturbative description may not be sufficient. Indeed, primordial curvature perturbations are expected to be well-described by a quasi-Gaussian distribution only when they are small and close enough to the maximum of their probability distribution. Large curvature perturbations on the other hand, which may be far from the peak of the distribution, are affected by the presence of the non-Gaussian tails associated with the unavoidable quantum diffusion of the field(s) driving inflation~\cite{Pattison:2017mbe, Ezquiaga:2019ftu}. In order to describe those tails, one thus requires a non-perturbative approach, such as the stochastic-$\delta N$ formalism~\cite{Enqvist:2008kt, Fujita:2013cna, Vennin:2015hra} that we will employ in this work. Stochastic inflation~\cite{Starobinsky:1982ee, Starobinsky:1986fx} is an effective infrared (low energy, large scale) theory that treats the ultraviolet (high energy, small scale) modes as a source term that provides quantum kicks to the classical motion of the fields on super-Hubble scales. Combined with the $\delta N$ formalism~\cite{Starobinsky:1982ee, Starobinsky:1986fxa, Sasaki:1995aw, Wands:2000dp, Lyth:2004gb}, it provides a framework in which curvature perturbations are related to fluctuations in the integrated expansion and can be characterised from the stochastic dynamics of the fields driving inflation. The reason why quantum diffusion plays an important role in shaping the mass fraction of PBHs is that, for large cosmological perturbations to be produced, the potential of the fields driving inflation needs to be sufficiently flat, hence the velocity of the fields induced by the potential gradient is small and easily taken over by quantum diffusion. Another consequence of having a small gradient-induced velocity is that it may also be taken over by the velocity inherited from previous stages during inflation where the potential is steeper. This is typically the case in potentials featuring inflection points~\cite{Garcia-Bellido:2017mdw, Germani:2017bcs}. When this happens, inflation proceeds in the so-called ultra-slow roll (USR) (or ``friction dominated'') regime~\cite{Inoue:2001zt, Kinney:2005vj}, which can be stable under certain conditions~\cite{Pattison:2018bct}, and our goal is to generalise and make use of the stochastic-$\delta N$ formalism in such a phase. In practice, we consider the simplest realisation of inflation, where the acceleration of the universe is driven by a single scalar field called the inflaton, $\phi$, whose classical motion is given by the Klein-Gordon equation in a Friedmann-Lemaitre-Robertson-Walker spacetime \begin{equation}\begin{aligned} \label{eq:kleingordon} \ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0 \, , \end{aligned}\end{equation} where $H$ is the Hubble rate, $V(\phi)$ is the potential energy of the field, and a prime denotes a derivative with respect to the field $\phi$ while a dot denotes a derivative with respect to cosmic time $t$. In the usual slow-roll approximation, the field acceleration can be neglected and \Eq{eq:kleingordon} reduces to $3H\dot{\phi} \simeq -V'(\phi)$. 
In this limit, the stochastic formalism has been shown to have excellent agreement with usual quantum field theoretic (QFT) techniques in regimes where the two approaches can be compared~\cite{Starobinsky:1994bd, Tsamis:2005hd, Finelli:2008zg, Garbrecht:2013coa, Vennin:2015hra, Onemli:2015pma, Burgess:2015ajz, Vennin:2016wnk, Hardwick:2017fjo, Tokuda:2017fdh}. However the stochastic approach can go beyond perturbative QFT using the full nonlinear equations of general relativity to describe the non-perturbative evolution of the coarse-grained field. This can be used to reconstruct the primordial density perturbation on super-Hubble scales using the stochastic-$\delta N$ approach mentioned above. In particular this approach has recently been used to reconstruct the full probability density function (PDF) for the primordial density field~\cite{Pattison:2017mbe,Ezquiaga:2019ftu}, finding large deviations from Gaussian statistics in the nonlinear tail of the distribution. More precisely, these tails were found to be exponential rather than Gaussian, which cannot be properly described by the usual, perturbative parametrisations of non-Gaussian statistics (such as those based on computing the few first moments of the distribution and the non-linearity parameters $f_{\mathrm{NL}}$, $g_{\mathrm{NL}}$, etc.), which can only account for polynomial modulations of Gaussian tails~\cite{Byrnes:2012yx, Passaglia:2018ixg, Atal:2018neu}. When the inflaton crosses a flat region of the potential, such that $|V'|\ll 3H|\dot\phi|$, the slow-roll approximation breaks down. In the limit where the potential gradient can be neglected, {i.e.~} the USR limit, one has $\ddot{\phi} + 3H\dot{\phi} \simeq 0$, which leads to $\dot{\phi} = \dot{\phi}_{\mathrm{in}}e^{-3N}$ where $N=\int H\, \mathrm{d} t$ is the number of $e$-folds. If in addition we consider the de-Sitter limit where $H$ is constant, we obtain \begin{equation}\begin{aligned} \label{eq:usr:dotphisol} \dot{\phi}=\dot{\phi}_\mathrm{in}+3H(\phi_\mathrm{in}-\phi) \,, \end{aligned}\end{equation} so the field's classical velocity is simply a linear function of the field value. Notice that the USR evolution is thus dependent on the initial values of both the inflaton field and its velocity, unlike in slow roll where the slow-roll attractor trajectory is independent of the initial velocity. Let us note that stochastic effects in USR have been already studied by means of the stochastic-$\delta N$ formalism in \Refs{Firouzjahi:2018vet, Ballesteros:2020sre} (see also \Refs{Cruces:2018cvq, Prokopec:2019srf} for an alternative approach), which investigated leading corrections to the first statistical moments of the curvature perturbation. Our work generalises these results by addressing the full distribution function of curvature perturbations, including in regimes where quantum diffusion dominates, and by drawing conclusions for the production of PBHs from a (stochastic) USR period of inflation. Let us also mention that a numerical analysis has recently been performed in \Refa{1836208}, in the context of a modified Higgs potential, where the non-Markovian effect of quantum diffusion on the amplitude of the noise itself was also taken into account. Our results complement this work, by providing analytical insight from simple toy models. This paper is organised as follows. In \Sec{sec:StochasticInflation} we review the stochastic inflation formalism and apply it to the USR setup. 
We explain how the full PDF of the integrated expansion, $N$, given the initial field value and its velocity, can be computed from first-passage time techniques. In practice, it obeys a second-order partial differential equation that we can solve analytically in two limits, that we study in \Secs{sec:classicallimit} and~\ref{sec:stochasticlimit}. In \Sec{sec:classicallimit}, we construct a formal solution for the characteristic function of the PDF in the regime where quantum diffusion plays a subdominant role compared with the classical drift ({i.e.~} the inherited velocity). We recover the results of \Refs{Firouzjahi:2018vet} in this case. Then, in \Sec{sec:stochasticlimit}, we consider the opposite limit where quantum diffusion is the main driver of the inflaton's dynamics. We recover the results of \Refa{Pattison:2017mbe} at leading order where the classical drift is neglected, and we derive the corrections induced by the finite classical drift at next-to-leading order. In both regimes, we compare our analytic results against numerical realisations of the Langevin equations driving the stochastic evolution. Since such a numerical method is computationally expensive (given that it relies on solving a very large number of realisations, in order to properly sample the tails of the PDF), in \Sec{sec:volterra} we outline a new method, that makes use of Volterra integral equations, and which, in some regimes of interest, is far more efficient than the Langevin approach. Finally, in \Sec{sec:applications} we apply our findings to two prototypical toy-models featuring a USR phase, and we present our conclusions in \Sec{sec:conclusions}. The paper ends with two appendices where various technical aspects of the calculation are deferred. \section{The stochastic-$\delta N$ formalism for ultra-slow roll inflation} \label{sec:StochasticInflation} As mentioned in \Sec{sec:intro}, hereafter we consider the framework of single-field inflation, but our results can be straightforwardly extended to multiple-field setups~\cite{Assadullahi:2016gkk}. \subsection{Stochastic inflation beyond slow roll} Away from the slow-roll attractor, the full phase space dynamics needs to be described, since the homogeneous background field $\phi$ and its conjugate momentum $\pi\equiv\dot\phi/H$ are two independent dynamical variables. Following the techniques developed in \Refa{Grain:2017dqa}, in the Hamiltonian picture, \Eq{eq:kleingordon} can be written as two coupled first-order differential equations \begin{align} \label{eq:conjmomentum} \frac{\mathrm{d} {\phi}}{\mathrm{d} N} &= {\pi} \, ,\\ \label{eq:KG:efolds} \frac{\mathrm{d}{\pi}}{\mathrm{d} N} &= - \left(3-\epsilon_{1}\right){\pi} - \frac{V'({\phi})}{H^2} \, , \end{align} where $\epsilon_{1}\equiv - \dot{H}/H^2$ is the first slow-roll parameter. 
The Friedmann equation \begin{equation}\begin{aligned} \label{eq:friedmann} H^2 = \dfrac{V+\frac{1}{2}\dot{\phi}^2}{3M_\usssPl^2}\, , \end{aligned}\end{equation} where $M_\usssPl$ is the reduced Planck mass, provides a relationship between the Hubble rate, $H$, and the phase-space variables $\phi$ and $\pi$.\footnote{In terms of $\phi$ and $\pi$, one has $H^2=V(\phi)(3M_\usssPl^2-\pi^2/2)^{-1}$ and $\epsilon_1=\pi^2/(2M_\usssPl^2)$.\label{footnote:H_eps1:phi_pi}} The stochastic formalism is an effective theory for the long-wavelength parts of the quantum fields during inflation, so we split the phase-space fields into a long-wavelength part and an inhomogeneous component that accounts for linear fluctuations on small-scales, \begin{align} \hat{\phi} &= \hat{\bar{\phi}} + \hat{\phi}_{\mathrm{s}} \label{eq:decomp:phi}\, , \\ \hat{\pi} &= \hat{\bar{\pi}} + \hat{\pi}_{\mathrm{s}} \label{eq:decomp:pi}\, , \end{align} where $\hat{\phi}$ and $\hat{\pi}$ are now quantum operators, which we make explicit with the hats. The short-wavelength parts of the fields, $\hat{\phi}_{\mathrm{s}}$ and $\hat{\pi}_{\mathrm{s}}$, can be written as \begin{align} \hat{\phi}_{\mathrm{s}} &= \int_{\mathbb{R}^3}\frac{\mathrm{d} \bm{k}}{\left(2\pi\right)^{\frac{3}{2}}}W\left( \frac{k}{\sigma aH}\right) \left[ e^{-i\bm{k}\cdot\bm{x}}\phi_{k}(N)\hat{a}_{\bm{k}} + e^{i\bm{k}\cdot\bm{x}}\phi_{k}^{*}(N)\hat{a}^{\dagger}_{\bm{k}} \right] \, ,\\ \hat{\pi}_{\mathrm{s}} &= \int_{\mathbb{R}^3}\frac{\mathrm{d} \bm{k}}{\left(2\pi\right)^{\frac{3}{2}}}W\left( \frac{k}{\sigma aH}\right) \left[ e^{-i\bm{k}\cdot\bm{x}}\pi_{k}(N)\hat{a}_{\bm{k}} + e^{i\bm{k}\cdot\bm{x}}\pi_{k}^{*}(N)\hat{a}^{\dagger}_{\bm{k}} \right] \, . \end{align} In these expressions $\hat{a}^{\dagger}_{\bm{k}}$ and $\hat{a}_{\bm{k}}$ are creation and annihilation operators, and $W$ denotes a window function that selects out modes with $k> k_\sigma$ where $k_\sigma = \sigma a H$ is the comoving coarse-graining scale and $\sigma\ll 1$ is the coarse-graining parameter. The coarse-grained fields $\bar{\phi}$ and $\bar{\pi}$ thus contain all wavelengths that are much larger than the Hubble radius, $k< k_\sigma$. These long-wavelength components are the local background values of the fields, which are continuously perturbed by the small wavelength modes as they cross the coarse-graining radius. 
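For reference, before the stochastic source terms are introduced, the classical phase-space system \eqref{eq:conjmomentum}--\eqref{eq:KG:efolds}, closed with \Eq{eq:friedmann} (or its phase-space form given in footnote~\ref{footnote:H_eps1:phi_pi}), can be integrated directly; a minimal numerical sketch, assuming a toy plateau potential and arbitrary illustrative initial conditions, reads:
\begin{verbatim}
# Classical (drift-only) background dynamics in e-folds:
#   dphi/dN = pi,   dpi/dN = -(3 - eps1) pi - V'(phi)/H^2,
# with H^2 = V/(3 Mpl^2 - pi^2/2) and eps1 = pi^2/(2 Mpl^2).
# The constant plateau V = V0 and the initial conditions are assumptions.
import numpy as np

MPL, V0 = 1.0, 1e-10
V, dV = (lambda phi: V0), (lambda phi: 0.0)

def rhs(phi, pi):
    eps1 = pi**2 / (2 * MPL**2)
    H2 = V(phi) / (3 * MPL**2 - pi**2 / 2)
    return pi, -(3 - eps1) * pi - dV(phi) / H2

phi, pi, dN = 1.0, -0.3, 1e-3
for _ in range(int(5 / dN)):              # integrate over 5 e-folds
    dphi, dpi = rhs(phi, pi)
    phi, pi = phi + dN * dphi, pi + dN * dpi
# on the plateau, pi ~ pi_in * exp(-3N), i.e. the ultra-slow-roll decay
# of the velocity discussed in the introduction
\end{verbatim}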
By inserting the decompositions \eqref{eq:decomp:phi} and \eqref{eq:decomp:pi} into the classical equations of motion \eqref{eq:conjmomentum} and \eqref{eq:KG:efolds}, to linear order in the short-wavelength parts of the fields, the equations for the long-wavelength parts are found to be \begin{align} \frac{\partial \hat{\bar{\phi}}}{\partial N} &= \hat{\bar{\pi}} + \hat{\xi}_{\phi}(N) \label{eq:conjmomentum:langevin:quantum} \, ,\\ \frac{\partial \hat{\bar{\pi}}}{\partial N} &= -\left(3-\epsilon_{1}\right)\hat{\bar{\pi}} - \frac{V'(\hat{\bar{\phi}})}{H^2} +\hat{\xi}_{\pi}(N) \label{eq:KG:efolds:langevin:quantum} \, , \end{align} where the source functions $\hat{\xi}_{\phi}$ and $\hat{\xi}_{\pi}$ are given by \begin{align} \hat{\xi}_{\phi} &= -\int_{\mathbb{R}^3}\frac{\mathrm{d} \bm{k}}{\left(2\pi\right)^{\frac{3}{2}}}\frac{\mathrm{d} W}{\mathrm{d} N}\left( \frac{k}{\sigma aH}\right) \left[ e^{-i\bm{k}\cdot\bm{x}}\phi_{k}(N)\hat{a}_{\bm{k}} + e^{i\bm{k}\cdot\bm{x}}\phi_{k}^{*}(N)\hat{a}^{\dagger}_{\bm{k}} \right]\, , \\ \hat{\xi}_{\pi} &= -\int_{\mathbb{R}^3}\frac{\mathrm{d} \bm{k}}{\left(2\pi\right)^{\frac{3}{2}}}\frac{\mathrm{d} W}{\mathrm{d} N}\left( \frac{k}{\sigma aH}\right) \left[ e^{-i\bm{k}\cdot\bm{x}}\pi_{k}(N)\hat{a}_{\bm{k}} + e^{i\bm{k}\cdot\bm{x}}\pi_{k}^{*}(N)\hat{a}^{\dagger}_{\bm{k}} \right] \, . \end{align} If the window function $W$ is taken as a Heaviside function, the two-point correlation functions of the sources are given by \begin{equation}\begin{aligned} \langle 0| \hat{\xi}_{\phi}(N_1)\hat{\xi}_{\phi}(N_2)|0\rangle & \kern-0.1em = \kern-0.1em \frac{1}{6\pi^2}\frac{\mathrm{d} k_\sigma^3(N)}{\mathrm{d} N}\bigg|_{N_1}\left\vert\phi_{k_\sigma}(N_1) \right\vert^2 \delta\left(N_1 - N_2\right) \kern-0.1em ,\\ \langle 0| \hat{\xi}_{\pi}(N_1) \hat{\xi}_{\pi}(N_2)|0\rangle & \kern-0.1em = \kern-0.1em \frac{1}{6\pi^2}\frac{\mathrm{d} k_\sigma^3(N)}{\mathrm{d} N}\bigg|_{N_1}\left\vert \pi_{k_\sigma}(N_1) \right\vert^2 \delta\left(N_1 - N_2\right) \kern-0.1em, \\ \langle 0| \hat{\xi}_{\phi}(N_1)\hat{\xi}_{\pi}(N_2)|0\rangle & \kern-0.1em = \kern-0.1em \langle 0| \hat{\xi}_{\pi}(N_1) \hat{\xi}_{\phi}(N_2)|0\rangle^* \kern-0.1em = \kern-0.1em \frac{1}{6\pi^2}\frac{\mathrm{d} k_\sigma^3(N)}{\mathrm{d} N}\bigg|_{N_1}\phi_{k_\sigma}(N_1) \pi^*_{k_\sigma}(N_1) \delta\left(N_1 \kern-0.1em - \kern-0.1em N_2\right) \kern-0.1em . \label{eq:noise:correlators} \end{aligned}\end{equation} In particular, for a massless field in a de-Sitter background, in the long-wavelength limit, $\sigma\ll 1$, one obtains~\cite{Grain:2017dqa} \begin{equation}\begin{aligned} \langle 0| \hat{\xi}_{\phi}(N_1)\hat{\xi}_{\phi}(N_2)|0\rangle &\simeq \left( \frac{H}{2\pi} \right)^2 \delta\left(N_1 - N_2\right) \, ,\\ \langle 0| \hat{\xi}_{\pi}(N_1) \hat{\xi}_{\pi}(N_2)|0\rangle & \simeq 0 \, ,\\ \langle 0| \hat{\xi}_{\phi}(N_1)\hat{\xi}_{\pi}(N_2)|0\rangle & = \kern-0.1em \langle 0| \hat{\xi}_{\pi}(N_1) \hat{\xi}_{\phi}(N_2)|0\rangle^* \simeq 0 \, . \label{eq:dS:correlators} \end{aligned}\end{equation} The next step is to realise that on super-Hubble scales, once the decaying mode becomes negligible, commutators can be dropped and the quantum field dynamics can be cast in terms of a stochastic system~\cite{Lesgourgues:1996jc, Martin:2015qta, Grain:2017dqa, Vennin:2020kng}. In this limit, the source functions can be interpreted as random Gaussian noises rather than quantum operators, which are correlated according to \Eqs{eq:noise:correlators}. 
The dynamical equations~\eqref{eq:conjmomentum:langevin:quantum} and~\eqref{eq:KG:efolds:langevin:quantum} can thus be seen as stochastic Langevin equations for the random field variables $\bar{\phi}$ and $\bar{\pi}$, \begin{align} \frac{\partial {\bar{\phi}}}{\partial N} &= {\bar{\pi}} + {\xi}_{\phi}(N) \label{eq:conjmomentum:langevin} \, ,\\ \frac{\partial {\bar{\pi}}}{\partial N} &= -\left(3-\epsilon_{1}\right){\bar{\pi}} - \frac{V'({\bar{\phi}})}{H^2(\bar{\phi},\bar{\pi})} +{\xi}_{\pi}(N) \label{eq:KG:efolds:langevin} \, , \end{align} where we have removed the hats to stress that we now work with classical stochastic quantities rather than quantum operators. Note that since the time coordinate is not perturbed in \Eqs{eq:conjmomentum:langevin}-\eqref{eq:KG:efolds:langevin}, the Langevin equations are implicitly derived in the uniform-$N$ gauge, so the field fluctuations must be computed in that gauge when evaluating \Eq{eq:noise:correlators}~\cite{Pattison:2019hef}. The stochastic formalism also relies on the separate universe approach, which allows us to use the homogeneous equations of motion to describe the inhomogeneous field at leading order in a gradient expansion, and which has been shown to be valid even beyond the slow-roll attractor in \Refa{Pattison:2019hef}. \subsection{Stochastic ultra-slow-roll inflation} \label{sec:Stochastic:USR} In the absence of a potential gradient ($V'\simeq 0$) and in the quasi de-Sitter limit ($\epsilon_1\simeq 0$), plugging \Eqs{eq:dS:correlators} into \Eqs{eq:conjmomentum:langevin} and~\eqref{eq:KG:efolds:langevin} lead to the Langevin equations \begin{align} \label{eq:eom:phi:stochastic} \frac{\mathrm{d} \bar{\phi}}{\mathrm{d} N} &= \bar{\pi}+ \frac{H}{2\pi}\xi(N) \, , \\ \frac{\mathrm{d} \bar{\pi}}{\mathrm{d} N} &= -3\bar{\pi} \label{eq:eom:v:stochastic} \, , \end{align} where $\xi(N)$ is Gaussian white noise with unit variance, such that $\langle\xi(N)\rangle = 0$ and $\langle \xi(N)\xi(N')\rangle=\delta(N-N')$. In practice, in a given inflationary potential, ultra slow roll takes place only across a finite field range, which we denote $[\phi_0,\phi_0+ \Delta\phi_\mathrm{well}]$ and which we refer to as the ``USR well''. Outside this range, we assume that the potential gradient is sufficiently large to drive a phase of slow-roll inflation. A sketch of this setup is displayed in \Fig{fig:sketch}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figs/usr_sketch.pdf} \caption{Sketch of the setup considered in this work. The inflationary potential features a flat region, between $\phi_0$ and $\phi_0+\Delta\phi_{\mathrm{well}}$, where the potential gradient can be neglected and inflation proceeds in the ultra slow roll regime, {i.e.~} $\phi$ decreases because of the velocity $\bar{\pi}$ it has acquired in previous stages. Outside this range, the field is driven by the potential gradient, $V'$, in the slow-roll regime. Stochastic effects also make the inflaton fluctuate inside the USR well, with a noise amplitude given by $H/(2\pi)$. The dynamics inside the USR well follow the Langevin equations~\eqref{eq:eom:phi:stochastic}-\eqref{eq:eom:v:stochastic}. \label{fig:sketch}} \end{figure} It is convenient to rewrite \Eqs{eq:eom:phi:stochastic} and~\eqref{eq:eom:v:stochastic} in terms of rescaled, dimensionless variables. A relevant value of the field velocity is the one for which the classical drift is enough to cross the USR well. 
Making use of \Eq{eq:usr:dotphisol}, it is given by the value $\dot{\phi}_\mathrm{in}$ such that $\phi$ reaches $\phi_0$ as $\dot{\phi}$ reaches zero, hence \begin{equation}\begin{aligned} \label{eq:critical:velocity} \bar{\pi}_{\mathrm{cri}}=-3\Delta\phi_{\mathrm{well}}\, . \end{aligned}\end{equation} Since the typical field range of the problem is $\Delta\phi_{\mathrm{well}}$, we thus introduce \begin{equation}\begin{aligned} \label{eq:xy:variabletransforms} x = \frac{\bar{\phi} - \phi_0}{\Delta \phi_\mathrm{well}} \, , \quad y = \frac{\bar{\pi}}{\bar{\pi}_\mathrm{crit}} \, . \end{aligned}\end{equation} Upon crossing the USR well, $x$ varies from $1$ to $0$, and $y$, which is positive, decays from its initial value $y_\mathrm{in}$. If $y_\mathrm{in}>1$, the field would cross the USR well by means of the classical velocity only, while if $y_\mathrm{in}<1$, in the absence of quantum diffusion, it would come to a stop at $x=1-y_\mathrm{in}$. We therefore expect stochastic effects to play a crucial role when $y_\mathrm{in}<1$. In terms of $x$ and $y$, the Langevin equations~\eqref{eq:eom:phi:stochastic}-\eqref{eq:eom:v:stochastic} become \begin{equation}\begin{aligned} \label{eq:langevin:xy} \frac{\mathrm{d} x}{\mathrm{d} N} &= -3 y + \frac{\mathpalette\DHLhksqrt{2}}{\mu} \xi(N)\, , \\ \frac{\mathrm{d} y}{\mathrm{d} N} &= -3y \, , \end{aligned}\end{equation} where the dimensionless parameter \begin{equation}\begin{aligned} \label{eq:def:mu} \mu \equiv 2\mathpalette\DHLhksqrt{2}\pi \frac{\Delta \phi_\mathrm{well}}{H} \end{aligned}\end{equation} has been introduced, following the notations of \Refs{Pattison:2017mbe,Ezquiaga:2019ftu} and in order to make easy the comparison with those works. The two physical parameters of the problem are therefore $y_\mathrm{in}$, which depends on the slope of the potential at the entry point of the USR well, and $\mu$, which describes the width of the quantum well relative to the Hubble rate, and which controls the amplitude of stochastic effects. Note that, in the absence of quantum diffusion, according to \Eq{eq:langevin:xy}, $x-y$ is a constant. Let us also stress that since $\epsilon_1=\bar{\pi}^2/2$, see footnote~\ref{footnote:H_eps1:phi_pi}, in order for inflation not to come to an end before entering the USR well, one must impose $\epsilon_1<1$, which yields an upper bound on $y_\mathrm{in}$, namely \begin{equation}\begin{aligned} \label{eq:ymax:generic} y_\mathrm{in} < \frac{\mathpalette\DHLhksqrt{2}M_\usssPl}{3\Delta\phi_\mathrm{well}} \, . \end{aligned}\end{equation} In particular, if one considers the case of a super-Planckian well, $\Delta\phi_\mathrm{well} \gg M_\usssPl$, then the above condition imposes that the initial velocity is close to zero, and one recovers the limit studied in \Refa{Pattison:2017mbe} where the effect of the inherited velocity $\pi$ was neglected. Finally, the Langevin equations~\eqref{eq:langevin:xy} give rise to a Fokker--Planck equation that drives the probability density in phase space at time $N$, $P(x,y; N)$, which evolves according to~\cite{risken1989fpe} \begin{equation}\begin{aligned} \label{eq:FP:USR} \frac{\partial P(x,y;N)}{\partial N} &= \left[ 3 + 3y\left(\frac{\partial}{\partial x} + \frac{\partial}{\partial y}\right) + \frac{1}{\mu^2}\frac{\partial^2}{\partial x^2}\right] P (x,y;N) &\equiv \mathcal{L}_{\mathrm{FP}} \cdot P(x,y;N) \, . \end{aligned}\end{equation} This equation defines the Fokker--Planck operator $\mathcal{L}_{\mathrm{FP}}$, which is a differential operator in phase space. 
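For illustration, the first-passage statistics of the system \eqref{eq:langevin:xy} can be estimated by a direct Monte Carlo integration of the Langevin equations; the sketch below uses a simple Euler--Maruyama scheme with an assumed step size and number of realisations (the production runs referred to later in the paper use between $10^6$ and $10^8$ realisations):
\begin{verbatim}
# Euler-Maruyama realisations of dx/dN = -3y + (sqrt(2)/mu) xi, dy/dN = -3y,
# starting at x = 1, y = y_in, with an absorbing wall at x = 0 and a
# reflective wall at x = 1; returns the first-passage times N.
import numpy as np

def first_passage_times(mu, y_in, dN=1e-3, n_real=10_000, seed=0):
    rng = np.random.default_rng(seed)
    amp = np.sqrt(2.0) / mu * np.sqrt(dN)
    Ns = np.empty(n_real)
    for i in range(n_real):
        x, y, N = 1.0, y_in, 0.0
        while x > 0.0:
            x += -3.0 * y * dN + amp * rng.standard_normal()
            y *= np.exp(-3.0 * dN)        # exact decay of the rescaled velocity
            if x > 1.0:                   # reflect off the upper boundary
                x = 2.0 - x
            N += dN
        Ns[i] = N
    return Ns

Ns = first_passage_times(mu=1.0, y_in=0.1)
print(Ns.mean(), Ns.var())               # to be compared with <N> and <dN^2>
\end{verbatim}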
\subsection{First-passage time problem} \label{sec:} In the $\delta N$ formalism~\cite{Starobinsky:1982ee, Starobinsky:1986fxa, Sasaki:1995aw, Wands:2000dp, Lyth:2004gb}, the curvature perturbation on large scales is given by the fluctuation in the integrated expansion $N$, between an initial flat hypersurface and a final hypersurface of uniform energy density. Our goal is therefore to compute the number of $e$-folds~spent in the toy model depicted in \Fig{fig:sketch}, which is a random variable that we denote $\mathcal{N}$, and the statistics of $\delta\mathcal{N}=\mathcal{N}-\langle \mathcal{N} \rangle$ will give access to the statistics of the curvature perturbation $\zeta$ (hereafter, ``$\langle\cdot\rangle$'' denotes ensemble average over realisations of the Langevin equations). Note that if stochastic effects are subdominant in the slow-roll parts of the potential (between which the USR well is sandwiched, see \Fig{fig:sketch}), then the contribution of the slow-roll parts to $\mathcal{N}$ is only to add a constant, hence $\delta\mathcal{N}$ is given by the fluctuation in the number of $e$-folds~spent in the USR well only, which is why we focus on the USR regime below. The PDF for the random variable $\mathcal{N}$, starting from a given $x$ and $y$ in phase space, is denoted $P(\mathcal{N};x,y)$ [not to be confused with $P(x,y;N)$ introduced in \Eq{eq:FP:USR}]. In \Refs{Vennin:2015hra, Pattison:2017mbe, Ezquiaga:2019ftu}, it is shown to obey the adjoint Fokker-Planck equation, \begin{equation}\begin{aligned} \label{eq:adjoint:FP:equation:PDF} \frac{\partial P(\mathcal{N};x,y)}{\partial \mathcal{N}} = \mathcal{L}_\mathrm{FP}^\dagger \cdot P(\mathcal{N};x,y) =\left[ \frac{1}{\mu^2}\frac{\partial^2}{\partial x^2} - 3y\left(\frac{\partial}{\partial x} + \frac{\partial}{\partial y}\right) \right]P(\mathcal{N};x,y) . \end{aligned}\end{equation} In this expression, $\mathcal{L}_\mathrm{FP}^\dagger$ is the adjoint Fokker-Planck operator, related to the Fokker-Planck operator via $\int \mathrm{d} x\mathrm{d} y F_1(x,y) \mathcal{L}_{\mathrm{FP}}\cdot F_2(x,y) = \int \mathrm{d} x \mathrm{d} y F_2(x,y) \mathcal{L}_{\mathrm{FP}}^\dagger\cdot F_1(x,y) $. The adjoint Fokker-Planck equation is a partial differential equation in 3 dimensions ($x,y$ and $\mathcal{N}$), which can be cast in terms of a partial differential equation in 2 dimensions by Fourier transforming the $\mathcal{N}$ coordinate. This can be done by introducing the characteristic function~\cite{Pattison:2017mbe} \begin{equation}\begin{aligned} \label{eq:def:charfunction} \chi_{\mathcal{N}}(t; x,y) &= \left\langle e^{it\mathcal{N}(x,y)}\right\rangle =\int_{-\infty}^{\infty}e^{it\mathcal{N}} P(\mathcal{N};x,y) \mathrm{d} \mathcal{N}, \end{aligned}\end{equation} where $t$ is a dummy parameter. From the second equality, one can see that the characteristic function is nothing but the Fourier transform of the PDF, hence the PDF can be obtained by inverse Fourier transforming the characteristic function, \begin{equation}\begin{aligned} \label{eq:PDF:chi} P\left(\mathcal{N}; x, y\right) = \frac{1}{2\pi} \int^{\infty}_{-\infty} e^{-it\mathcal{N}} \chi_{\mathcal{N}}\left(t; x, y \right)\mathrm{d} t\, . \end{aligned}\end{equation} Let us also note that by Taylor expanding the exponential function in \Eq{eq:def:charfunction}, the moments of the PDF are given by \begin{equation}\begin{aligned} \label{eq:meanefolds:charfunction} \left\langle\mathcal{N}^n\right\rangle = i^{-n}\frac{\partial^n}{\partial t^n}\left. 
\chi_{\mathcal{N}}(t; x,y) \right\vert_{t=0}\, . \end{aligned}\end{equation} By plugging \Eq{eq:PDF:chi} into \Eq{eq:adjoint:FP:equation:PDF}, the characteristic function is found to obey \begin{equation}\begin{aligned} \label{eq:diff:chi} \left(\mathcal{L}_{\mathrm{FP}}^\dagger +it\right)\chi_\mathcal{N}\left(t;x,y\right)= 0\, , \end{aligned}\end{equation} which is indeed a partial differential equation in two dimensions ($x$ and $y$).\footnote{Since \Eq{eq:diff:chi} is linear in $x$, one can further reduce it down to a partial differential equation in one dimension, {i.e.~} an ordinary differential equation, by Fourier transforming $x$. This however makes the boundary conditions~\eqref{eq:char:initialconditions} difficult to enforce and does not bring much analytical insight.} It needs to be solved in the presence of an absorbing boundary at $x=0$ (if quantum diffusion is negligible in the lower slow-roll region, once the field has exited the well, it follows the gradient of the potential and cannot return to the USR well) and a reflective boundary at $x=1$ (if quantum diffusion is negligible in the upper slow-roll region, the potential gradient prevents the field from visiting that region once in the USR well), {i.e.~} \begin{equation}\begin{aligned} \label{eq:char:initialconditions} \chi_\mathcal{N}(t; 0, y) = 1 \, , \quad \frac{\partial \chi_\mathcal{N}}{\partial x}(t; 1, y) = 0 \, . \end{aligned}\end{equation} In the following, we solve this equation in the regime where the initial velocity $y_\mathrm{in}$ is large and quantum diffusion plays a negligible role, see \Sec{sec:classicallimit}, and in the opposite limit where the initial velocity is small and the field is mostly driven by quantum diffusion, see \Sec{sec:stochasticlimit}. \section{Drift-dominated regime} \label{sec:classicallimit} We first consider the regime where quantum diffusion provides a small contribution to the duration of (the majority of) the realisations of the Langevin equations. As explained in \Sec{sec:Stochastic:USR}, for the field to cross the USR well without the help of quantum diffusion, one must have $y_\mathrm{in}>1$. A necessary condition is therefore that $y_\mathrm{in}\gg 1$. As we will see below, one must also impose $\mu y_\mathrm{in} \gg 1$, which is a stricter requirement in the case where $\mu<1$. \subsection{Leading order} At leading order, the term proportional to $\partial^2/\partial x^2$ can be neglected in \Eq{eq:adjoint:FP:equation:PDF}, since it corresponds to quantum diffusion in the $x$ direction, and \Eq{eq:diff:chi} reduces to \begin{equation}\begin{aligned} \label{eq:char:classicallimit} \left[ - 3y\left(\frac{\partial}{\partial x} + \frac{\partial}{\partial y}\right) + it \right] \left. \chi_\mathcal{N}\right\vert _{\sss{\mathrm{LO}}} (t;x,y)= 0 \, . \end{aligned}\end{equation} This partial differential equation being first order, it can be solved with the method of characteristics, see \App{app:classicalLO:charfunction} where we provide a detailed derivation of the solution. Together with the boundary conditions~\eqref{eq:char:initialconditions}, one obtains\footnote{Note that the solution~\eqref{eq:char:classicalLO:solution} does not respect the reflective boundary condition at $x=1$. This is because, since \Eq{eq:char:classicallimit} is first order, only one boundary condition can be imposed. 
Indeed, at leading order in the drift-dominated limit, the system follows the classical trajectory in phase space and never bounces against the reflective boundary, which explains why the reflective boundary is irrelevant.} \begin{equation}\begin{aligned} \label{eq:char:classicalLO:solution} \left. \chi_\mathcal{N}\right\vert _\sss{\mathrm{LO}}(t;x,y) = \left( 1 - \frac{x}{y}\right)^{-\frac{it}{3}} \, . \end{aligned}\end{equation} Making use of \Eq{eq:meanefolds:charfunction}, the mean number of $e$-folds~at that order is given by \begin{equation}\begin{aligned} \label{eq:meanefolds:classical:LO} \left.\langle \mathcal{N}\rangle \right\vert_\sss{\mathrm{LO}}(x,y) &= -\frac{1}{3}\ln\left( 1 - \frac{x}{y}\right) \, , \end{aligned}\end{equation} which is nothing but the classical, deterministic result. Furthermore, by Fourier transforming \Eq{eq:char:classicalLO:solution} according to \Eq{eq:PDF:chi}, one obtains for the PDF \begin{equation}\begin{aligned} \label{eq:pdf:cl:lo} P(\mathcal{N};x,y) = \delta\left[ \mathcal{N} - \left.\langle \mathcal{N}\rangle \right\vert_\sss{\mathrm{LO}}(x,y) \right] \, , \end{aligned}\end{equation} where $\delta$ denotes the Dirac distribution, and which confirms that at that order, all realisations follow the same, classical trajectory. \subsection{Next-to-leading order} At next-to-leading order, the term proportional to $\partial^2/\partial x^2$ in \Eq{eq:diff:chi} can be evaluated with the leading-order solution~\eqref{eq:char:classicalLO:solution}, which leads to \begin{equation}\begin{aligned} \label{eq:Drift_dominated:nlo:rec} \left[ - 3y\left(\frac{\partial}{\partial x} + \frac{\partial}{\partial y}\right) + it \right]\left.\chi_\mathcal{N}\right\vert_\sss{\mathrm{NLO}}(t;x,y) = - \frac{1}{\mu^2}\frac{\partial^2}{\partial x^2} \left.\chi_{\mathcal{N}}\right\vert_\sss{\mathrm{LO}} (t;x,y)\, . \end{aligned}\end{equation} This partial differential equation is still of first order and can again be solved with the methods of characteristics, and in \App{app:classicalLO:charfunction} one obtains \begin{equation}\begin{aligned} \label{eq:characteristic:NLO} \chi_\mathcal{N}\bigg|_\mathrm{NLO} &= \left( 1 - \frac{x}{y}\right)^{-\frac{it}{3}}\left[ 1 - \frac{it}{9}\left(1+\frac{it}{3}\right)\frac{\ln\left( 1-\frac{x}{y}\right)}{\mu^2(y-x)^2} \right] \, . \end{aligned}\end{equation} It is interesting to note that contrary to the slow-roll case where the characteristic function is the one of a Gaussian distribution at next-to-leading order~\cite{Pattison:2017mbe} (at leading order, it is simply a Dirac distribution following the classical path) and starts deviating from Gaussian statistics only at next-to-next-to leading order, in USR the PDF differs from a Gaussian distribution already at next-to-leading order. The mean number of $e$-folds~can again be calculated using \Eq{eq:meanefolds:charfunction}, which leads to\footnote{The NLO correction found in \Refa{Firouzjahi:2018vet} [see equation $(4.15)$ in that reference] is given by $({\kappa^2}/{6}) \left< \mathcal{N} \right> \big|_\sss{\mathrm{LO}}$, where, in the notation of our paper, $\kappa = -({\mathpalette\DHLhksqrt{2}}/{\mu})(y-x)^{-1}$. We therefore recover the same result, despite using different techniques. } \begin{equation}\begin{aligned} \label{eq:meaefolds:classical:NLO} \langle \mathcal{N}\rangle(x,y)\Big|_{\mathrm{NLO}} &= \left< \mathcal{N} \right> \big|_\sss{\mathrm{LO}} \left[ 1 + \frac{1}{3\mu^2\left(y-x\right)^2} \right] \, . 
\end{aligned}\end{equation} The relative correction to the leading-order result is controlled by $(\mu y)^{-2}$ (recalling that $y\gg 1>x$), so as announced above, the present expansion also requires that $\mu y\gg 1$. One can also calculate the power spectrum~\cite{Fujita:2013cna, Ando:2020fjm} \begin{equation}\begin{aligned} \label{eq:Pzetageneral} P_{\zeta} = \frac{\mathrm{d} \langle \delta \mathcal{N}^2 \rangle}{\mathrm{d} \langle \mathcal{N}\rangle} \, , \end{aligned}\end{equation} where $\langle \delta \mathcal{N}^2 \rangle= \langle\mathcal{N}^2\rangle - \langle\mathcal{N}\rangle^2$, which relies on computing the second moment of $\mathcal{N}$. At leading order, combining \Eqs{eq:meanefolds:charfunction} and~\eqref{eq:char:classicalLO:solution}, one simply has $\langle \mathcal{N}^2\rangle\vert_\sss{\mathrm{LO}} = \langle \mathcal{N} \rangle\vert_\sss{\mathrm{LO}}^2$, so one has to go to next-to-leading order where \Eqs{eq:meanefolds:charfunction} and~\eqref{eq:characteristic:NLO} give rise to \begin{equation}\begin{aligned} \langle\mathcal{N}^2\rangle_\sss{\mathrm{NLO}} &= \left\langle \mathcal{N} \right\rangle \big|_\sss{\mathrm{NLO}}^2 + \frac{2\left\langle \mathcal{N} \right\rangle \big|_\sss{\mathrm{LO}}}{9\mu^2(y-x)^2} \, . \end{aligned}\end{equation} The leading contribution to $ \langle \delta \mathcal{N}^2 \rangle$ is thus given by \begin{equation}\begin{aligned} \label{eq:deltaN2:cl:nlo} \langle \delta \mathcal{N}^2 \rangle &= \frac{2}{9\mu^2(y-x)^2}\left< \mathcal{N} \right> \big|_\sss{\mathrm{LO}} \, . \\ \end{aligned}\end{equation} Note that, as mentioned above, $y-x=$ is constant along the classical trajectory. Therefore, in the classical limit, \Eq{eq:Pzetageneral} yields\footnote{At higher order, we expect the power spectrum to be affected by stochastic modifications of the trajectories along which the derivatives in \Eq{eq:Pzetageneral} are evaluated~\cite{Ando:2020fjm}.} \begin{equation}\begin{aligned} P_{\zeta}(x,y) \simeq \frac{2}{9\mu^2\left(y-x\right)^2} \, , \end{aligned}\end{equation} which coincides with the usual perturbative calculation of the power spectrum in ultra-slow roll \cite{Firouzjahi:2018vet}. One can also note that the condition to be in the classical regime, $\mu y\gg 1$, implies that the power spectrum remains small. \subsection{Series expansion for large velocity} The procedure outlined above can be extended to a generic expansion in inverse powers of $\mu^2 y^2$, \begin{equation}\begin{aligned} \label{eq:chi:large:muy:expansion} \chi_{\mathcal{N}} (t;x,y) = \sum_{n=0}^\infty \left( \mu^2 y^2 \right)^{-n} C_n \left( t;\frac{x}{y} \right) \ . \end{aligned}\end{equation} The boundary condition $\chi_\mathcal{N}(t;x,y)= 1$ at $x=0$ is zeroth-order in $\mu y$ and thus requires $C_0(t;0)=1$ and $C_n(t;0)=0$ for all $n\geq1$. The lowest-order solution $C_0(t;x/y)$ obeys the classical limit of the equation for the characteristic function, \Eq{eq:char:classicallimit} and the solution is given by \Eq{eq:char:classicalLO:solution}. The higher-order functions $C_n(t;u)$ for $n\geq1$ obey the recurrence relation \begin{equation}\begin{aligned} \label{eq:classicalrecurrence} (1-u)C_{n+1}^\prime - \left[ \frac{it}{3}+2(n+1) \right] C_{n+1} = \frac13 C_{n}^{\prime\prime} \,, \end{aligned}\end{equation} where a prime denotes derivatives with respect to the argument $u$, and which was obtained by plugging \Eq{eq:chi:large:muy:expansion} into \Eq{eq:diff:chi}. 
Thus we have an integrating-factor type first-order differential equation for each successive function, $C_{n+1}(t;u)$, determined by the second derivative of the preceding function, $C_n(t;u)$. The solutions are of the form\footnote{Note that we require $y\geq x$ and hence $u=x/y\leq 1$ for the classical trajectory to cross the potential well and reach $x=0$.} \begin{equation}\begin{aligned} \label{eq:Cn:cnm} C_n(t;u) = \sum_{m=0}^n c_{n,m}(t) \left( 1-u \right)^{-\frac{it}{3}-2n} \left[ \ln \left( 1-u \right) \right]^m \, , \end{aligned}\end{equation} where the coefficients $c_{n,m}(t)$ are polynomial functions of order up to $2n$, given by the recurrence relations (for $1\leq m \leq n+1$) \begin{equation}\begin{aligned} \label{eq:cn:iterative} c_{n+1,m}(t) &= - \frac{1}{3m} \left[ \left( \frac{it}{3} + 2n \right) \left( \frac{it}{3} +2n + 1 \right) c_{n,m-1}(t) \right. \\ & \qquad \qquad \left. -2m \left( \frac{it}{3} + 2n +\frac{1}{2}\right) c_{n,m}(t) + m(m+1) c_{n,m+1}(t) \right] \,. \end{aligned}\end{equation} Here, $c_{0,0}=1$ and $c_{n,0}=0$ for all $n\geq1$, and we set $c_{n,m}=0$ for all $m>n$. This allows one to compute the $c_{n,m}$ coefficients iteratively, and the first coefficients are given by \begin{equation}\begin{aligned} \label{eq:cnm:first} & c_{0,0}(t) = 1 \,, \\ & c_{1,0}(t) = 0 \,, \quad c_{1,1}(t) = - \frac{it}{9} \left( 1+ \frac{it}{3} \right) \,, \\ & c_{2,0}(t) = 0 \,, \quad c_{2,1}(t) = \frac{c_{1,1}(t)}{3} \left( \frac{2it}{3} +5 \right) \,, \quad c_{2,2}(t) = - \frac{c_{1,1}(t)}{6} \left( \frac{it}{3} + 2\right) \left( \frac{it}{3} +3 \right) \,. \end{aligned}\end{equation} It is straightforward to implement \Eq{eq:cn:iterative} numerically and to compute the characteristic function, hence the PDF, to arbitrary order in that expansion. For instance, the higher-order corrections to the mean number of $e$-folds~are found to be \begin{equation}\begin{aligned} \langle \mathcal{N}\rangle & = \langle\mathcal{N}\rangle_\sss{\mathrm{LO}} \left[1+\frac{1}{3 \mu ^2 (x-y)^2}+\frac{9 \langle \mathcal{N}\rangle_\sss{\mathrm{LO}}+5}{9 \mu ^4 (x-y)^4}+\frac{60 \langle \mathcal{N}\rangle_\sss{\mathrm{LO}}^2+77 \langle \mathcal{N}\rangle_\sss{\mathrm{LO}}+17}{9 \mu ^6 (x-y)^6} \right. \\ & \left. \qquad \qquad \qquad +\frac{5670 \langle \mathcal{N}\rangle_\sss{\mathrm{LO}}^3+12042 \langle \mathcal{N}\rangle_\sss{\mathrm{LO}}^2+6396 \langle \mathcal{N}\rangle_\sss{\mathrm{LO}}+817}{81 \mu ^8 (x-y)^8}+\cdots\right] . \end{aligned}\end{equation} One can check that this expression again coincides with the one given up to order $\mu^{-6}(x-y)^{-6}$ in \Refa{Firouzjahi:2018vet}. Here we obtain it as the result of a systematic expansion~\eqref{eq:chi:large:muy:expansion} that can be performed to arbitrary order without further calculations: \begin{equation}\begin{aligned} \langle \mathcal{N}\rangle & = \sum_{n=0}^\infty \sum_{m=0}^n \left[ -ic_{n,m}'(0) - \frac{c_{n,m}(0)}{3} \ln \left(1-\frac{x}{y}\right) \right] \frac{\left[ \ln \left( 1-\frac{x}{y} \right) \right]^m}{\mu^{2n}(x-y)^{2n}} \,. \end{aligned}\end{equation} The non-Gaussian nature of the PDF close to its maximum can be characterised by the local non-linearity parameter \begin{equation}\begin{aligned} \label{eq:fnl} f_{\mathrm{NL}}=\frac{5}{36 \mathcal{P}_\zeta^2}\frac{\mathrm{d}^2 \left\langle \delta\mathcal{N}^3\right\rangle}{\mathrm{d} \left\langle \mathcal{N}\right\rangle ^2}\, . 
\end{aligned}\end{equation} In this expression, the third moment can be obtained by combining \Eqs{eq:Cn:cnm} and~\eqref{eq:cnm:first}, keeping terms up to $n=2$ in \Eq{eq:chi:large:muy:expansion}. Then \Eq{eq:meanefolds:charfunction} for the third moment gives \begin{equation}\begin{aligned} \left\langle \delta \mathcal{N}^3 \right\rangle = \left\langle \left( \mathcal{N} - \left\langle \mathcal{N} \right\rangle \right)^3\right\rangle = \frac{4 \left[\log \left(1-\frac{x}{y}\right)-1\right] \log \left(1-\frac{x}{y}\right)}{81 \mu ^4 (x-y)^4}\, . \end{aligned}\end{equation} This yields \begin{equation}\begin{aligned} f_{\mathrm{NL}} = \frac{5}{2}+\order{\mu^{-2}y^{-2}}\, , \end{aligned}\end{equation} which is of order one, while it is suppressed by the slow-roll parameters in the slow-roll regime~\cite{Maldacena:2002vr}. This expression also matches the standard USR result of \Refa{Namjoo:2012aa}. \section{Diffusion-dominated regime} \label{sec:stochasticlimit} In this section, we consider the opposite limit where the initial velocity, inherited from the phase preceding the USR epoch, is small, and the field is mostly driven by the stochastic noise. In the same way that the drift-dominated regime was studied through a $1/y$-expansion, the diffusion-dominated regime can be approached via a $y$-expansion. Such a systematic expansion is performed in \App{app:separable:charfunction}. Here, we only derive the result at leading and next-to-leading orders, before discussing implications for primordial black hole production. Since the classical drift decays exponentially with time, $y = y_\mathrm{in}e^{-3N}$, it always becomes negligible at late time. It is thus important to stress that the diffusion-dominated regime becomes effective at late time, so the results derived in this section are relevant for the upper tail of the PDF of $\mathcal{N}$, even if the initial velocity is substantial. Since PBHs precisely form in that tail, this limit is therefore important to study the abundance of these objects. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figs/Nmean_log_mu_eq_1.pdf} \includegraphics[width=0.49\textwidth]{figs/dNmean_log_mu_eq_1.pdf} \caption{Mean number of $e$-folds~(left panel) across the USR well, and its standard deviation (right panel), as a function of the rescaled initial velocity $y_\mathrm{in}$, for $\mu=1$. The blue bars are reconstructed from a large number (between $10^6$ and $10^8$, depending on the value of $y_\mathrm{in}$) of realisations of the Langevin equations \eqref{eq:langevin:xy}. The size of the bars correspond to a $2\sigma$-estimate for the statistical error, which is obtained using the jackknife resampling method (see footnote~\ref{footnote:jackknife}). The dashed black curve corresponds to the drift-dominated, leading-order result derived in \Sec{sec:classicallimit} while the solid black lines display the diffusion-dominated, next-to-leading-order result obtained in \Sec{sec:stochasticlimit}. \label{fig:meanefolds:smally}} \end{figure} \subsection{Leading order} \label{sec:latetimelimit} At leading order, one can simply set $y=0$ in the characteristic equation~\eqref{eq:diff:chi}, given explicitly in \Eq{eq:pde:chi:xy}, and one has \begin{equation}\begin{aligned} \label{eq:char:latetime} \frac{1}{\mu^2}\frac{\partial^2}{\partial x^2}\chi_{\mathcal{N}}(t; x, 0) = -it\chi_{\mathcal{N}}(t;x,0) \, . \end{aligned}\end{equation} This describes free diffusion on a flat potential where the inflaton has no classical velocity. 
In this zero-velocity limit, we recover the \textit{slow-roll} Langevin equations for a flat potential, a case that was previously studied as a limit of slow-roll inflation in \Refa{Pattison:2017mbe}. Classically this limit is not well-defined, since the inflaton remains at rest and inflation never ends if there is no classical velocity, but when one accounts for quantum diffusion the problem is ``regularised'', giving a finite duration of inflation for a finite field range. The solution to \Eq{eq:char:latetime} that obeys the boundary conditions~\eqref{eq:char:initialconditions} is given by~\cite{Pattison:2017mbe,Ezquiaga:2019ftu} \begin{equation}\begin{aligned} \label{eq:chi:usr:latetime} \chi_{\mathcal{N}}(t;x,0) = \frac{\cos\left[ \omega_0\left(1-x\right)\right]}{\cos\omega_0} \, , \end{aligned}\end{equation} where we used the notation introduced in \App{app:separable:charfunction}, \begin{equation}\begin{aligned} \label{eq:omega0} \omega_0^2 = i t \mu^2 \, . \end{aligned}\end{equation} This allows one to compute the mean number of $e$-folds~from \eqref{eq:meanefolds:charfunction} for instance, and one finds \begin{equation}\begin{aligned} \label{eq:usr:efolds:analytic} \left. \left< \mathcal{N}(x)\right> \right|_{y=0}= \mu^2x\left(1-\frac{x}{2}\right) \, , \end{aligned}\end{equation} or the second moment, given by \begin{equation}\begin{aligned} \label{eq:usr:efolds:2ndmoment:analytic} \left. \left< \delta\mathcal{N}^2(x)\right> \right|_{y=0}=\frac{2+x(x-2)}{3}\mu^2 \left. \left< \mathcal{N}(x)\right> \right|_{y=0}\, . \end{aligned}\end{equation} By comparing this expression with \Eq{eq:deltaN2:cl:nlo}, one can see that the way it scales with $\mu$ is different than in the drift-dominated regime: in the drift-dominated regime, the typical size of the fluctuation decreases with $\mu$, while it increases with $\mu$ in the diffusion-dominated regime. When $\mu\ll 1$ the transition between the two regimes occurs in the range $1\ll y\ll 1/\mu$ where neither approximation applies, and where the size of the typical fluctuation thus smoothly increases with $y$. If $\mu\gg 1$, the transition occurs at $y\sim 1$ and is therefore more abrupt. The full PDF of the number of $e$-folds~is given by the inverse Fourier transform of the characteristic function, see \eqref{eq:PDF:chi}, which leads to \begin{equation}\begin{aligned} \label{eq:pdf:usr:analyitic} P\left(\mathcal{N}; x, 0 \right) = & \frac{2 \pi}{\mu^2} \sum_{n=0}^\infty \left( n + \frac{1}{2} \right) \sin\left[\left(n+\frac{1}{2}\right) \pi x \right] \exp\left[ -\frac{\pi^2}{\mu^2} \left(n+\frac{1}{2}\right)^2 \mathcal{N}\right] \\ = & -\frac{\pi}{2 \mu^2} \vartheta_{2}' \left( \frac{\pi}{2}x, e^{-\frac{\pi^2}{\mu^2} \mathcal{N}} \right) \, , \end{aligned}\end{equation} where $\vartheta_{2}$ is the second elliptic theta function~\cite{Abramovitz:1970aa:theta} \begin{equation}\begin{aligned} \vartheta_2\left(z,q\right) &= 2 \sum_{n=0}^\infty q^{\left(n+\frac{1}{2}\right)^2}\cos\left[\left(2n+1\right)z\right]\, , \end{aligned}\end{equation} and $\vartheta_2^\prime$ is its derivative with respect to the first argument. One can check that this distribution is properly normalised and that its first moment is given by \Eq{eq:usr:efolds:analytic}. When $\mathcal{N}$ is large, the mode $n=0$ dominates in the sum of \Eq{eq:pdf:usr:analyitic}, which thus features an exponential, heavy tail. Those exponential tails have important consequences for PBHs, which precisely form from them~\cite{Pattison:2017mbe, Ezquiaga:2019ftu}.
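Before moving on, let us note that these leading-order results are straightforward to check numerically. The following minimal sketch (in Python; the values of $\mu$ and $x$, the truncation order and the integration grid are illustrative choices, and this is not the code used to produce the figures of this paper) sums the mode expansion \eqref{eq:pdf:usr:analyitic} and verifies that the resulting distribution is normalised with mean $\mu^2x(1-x/2)$, in agreement with \Eq{eq:usr:efolds:analytic}.
\begin{verbatim}
import numpy as np

# Evaluate the y = 0 PDF from its mode expansion and check its normalisation
# and first moment against the analytic leading-order results.
mu, x = 1.0, 1.0                       # rescaled well width and initial position
n_max = 200                            # truncation order of the series
N = np.linspace(1e-4, 40.0 * mu**2, 20001)

def pdf_y0(N, x, mu, n_max):
    P = np.zeros_like(N)
    for n in range(n_max + 1):
        k = (n + 0.5) * np.pi
        P += 2.0 * np.pi / mu**2 * (n + 0.5) * np.sin(k * x) \
             * np.exp(-(k / mu)**2 * N)
    return P

P = pdf_y0(N, x, mu, n_max)
print(np.trapz(P, N))                       # normalisation, should be close to 1
print(np.trapz(N * P, N), mu**2 * x * (1.0 - x / 2.0))  # mean vs mu^2 x (1 - x/2)
\end{verbatim}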
As explained above, since the small-velocity limit is a late-time limit, we expect the decay rate of the leading exponential, $e^{-\Lambda_0 \mathcal{N}}$ with $\Lambda_0=\pi^2/(4\mu^2)$, to remain the same even if the initial velocity does not vanish, since the realisations of the Langevin process that contribute to the tail of the PDF escape the USR well at a time when the velocity has decayed away. \subsection{Next-to-leading order} \label{sec:stochlimit:expansion} At next-to-leading order, the term proportional to $y$ in the characteristic equation~\eqref{eq:diff:chi} [given explicitly in \Eq{eq:pde:chi:xy}] can be evaluated with the leading-order solution. Formally, by expanding the characteristic function as \begin{equation}\begin{aligned} \label{eq:linear:chi} \chi_\mathcal{N}(t;x,y) \approx \chi_\mathcal{N}(t;x,0) + yf(t;x) \, , \end{aligned}\end{equation} where $\chi_\mathcal{N}(t;x,0)$ is given by \Eq{eq:chi:usr:latetime}, the characteristic equation reads \begin{equation}\begin{aligned} \left[ \frac{1}{\mu^2}\frac{\partial^2}{\partial x^2} + \left( it - 3 \right) \right]f(t;x) &= 3\frac{\partial}{\partial x}\chi_\mathcal{N}(t;x,0) = 3\omega_0 \frac{\sin\left[\omega_0\left(1-x\right)\right]}{\cos\omega_0} \, , \end{aligned}\end{equation} where only the terms linear in $y$ have been kept. This equation needs to be solved with the boundary conditions~\eqref{eq:char:initialconditions}, which require \begin{equation}\begin{aligned} \label{eq:boundaryconditions:smallylimit} & f(t;0) = 0 \, , & \frac{\partial f}{\partial x}(t;1) = 0 \, . \end{aligned}\end{equation} One obtains \begin{equation}\begin{aligned} \label{eq:solution:ftx} f(t;x) &= A_1\cos\left[\omega_1\left(1-x\right)\right] + B_1\sin\left[\omega_1\left(1-x\right)\right] - \frac{\omega_0\sin\left[\omega_0\left(1-x\right)\right]}{\cos\omega_0} \, , \end{aligned}\end{equation} where \begin{equation}\begin{aligned} \omega_1^2 = \left( it - 3 \right) \mu^2 \, , \end{aligned}\end{equation} and $A_1$ and $B_1$ are determined by the boundary conditions \eqref{eq:boundaryconditions:smallylimit}, \begin{equation}\begin{aligned} A_1 &= \frac{\omega_0}{\omega_1} \left( \frac{\omega_1\sin\omega_0 - \omega_0\sin\omega_1}{\cos\omega_0\cos\omega_1} \right) \,, \hspace{1cm} B_1 = \frac{\omega_0^2}{\omega_1\cos\omega_0} \,. \end{aligned}\end{equation} As before, we can compute the mean number of $e$-folds~by using \Eq{eq:meanefolds:charfunction}, and one obtains \begin{equation}\begin{aligned} \label{eq:meanefolds:smally} \left< \mathcal{N} \right>(x,y) &\simeq \mu^2x\left(1-\frac{x}{2}\right) + \mu^2 y \left\{ x-1 + \frac{\cosh\left[ \mathpalette\DHLhksqrt{3}\mu\left(1-x\right) \right]}{\cosh\left( \mathpalette\DHLhksqrt{3}\mu \right)} - \frac{\sinh\left( \mathpalette\DHLhksqrt{3}\mu x \right) }{\mathpalette\DHLhksqrt{3}\mu\cosh\left( \mathpalette\DHLhksqrt{3}\mu \right)} \right\} \, , \end{aligned}\end{equation} which reduces to \Eq{eq:usr:efolds:analytic} when $y=0$. This expression~\eqref{eq:meanefolds:smally} is displayed as a function of the initial velocity $y_\mathrm{in}$ in the left panel of \Fig{fig:meanefolds:smally}, and compared with the mean number of $e$-folds~computed over a large number of numerical simulations of the Langevin equations \eqref{eq:eom:phi:stochastic}-\eqref{eq:eom:v:stochastic} for $\mu=1$ and displayed with the blue bars. The number of numerical realisations that are generated varies between $10^6$ and $10^8$, depending on the value of the initial velocity. 
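For concreteness, a direct simulation of this kind can be sketched as follows. The snippet below is a minimal illustration (not the code used for the figures): it assumes that the rescaled Langevin system takes the form $\mathrm{d} x/\mathrm{d} N=-3y+(\mathpalette\DHLhksqrt{2}/\mu)\,\xi(N)$ and $\mathrm{d} y/\mathrm{d} N=-3y$, which is consistent with $y=y_\mathrm{in}e^{-3N}$ and with the free diffusion of $z=\mu\left(x-y\right)/\mathpalette\DHLhksqrt{2}$ used in \Sec{sec:volterra}, together with an absorbing boundary at $x=0$ and a reflective one at $x=1$. It also includes a subsample-based estimate of the statistical error of the type described in footnote~\ref{footnote:jackknife}.
\begin{verbatim}
import numpy as np

# Minimal Langevin estimate of <N>.  Assumed dynamics (see the text above):
#   dx = -3 y dN + sqrt(2)/mu dW ,   dy = -3 y dN ,
# with absorption at x = 0 and reflection at x = 1.  Parameters are illustrative.
mu, y_in, x_in = 1.0, 1.0, 1.0
dN, n_real, n_sub = 1e-3, 100_000, 100
rng = np.random.default_rng(1)

x = np.full(n_real, x_in)
N = np.zeros(n_real)
y, alive = y_in, np.ones(n_real, dtype=bool)
while alive.any():
    xa = x[alive]
    xa += -3.0 * y * dN + np.sqrt(2.0 * dN) / mu * rng.standard_normal(xa.size)
    xa = np.where(xa > 1.0, 2.0 - xa, xa)        # reflection at x = 1
    x[alive] = xa
    N[alive] += dN
    y *= 1.0 - 3.0 * dN                          # y = y_in exp(-3N), to O(dN)
    alive &= x > 0.0                             # absorption at x = 0

print("<N> =", N.mean())
# Error estimate: standard deviation of the means of n_sub subsamples,
# divided by sqrt(n_sub) (resampling procedure of the jackknife footnote).
sub = N[: n_real - n_real % n_sub].reshape(n_sub, -1).mean(axis=1)
print("sigma =", sub.std() / np.sqrt(n_sub))
\end{verbatim}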
The size of the bars corresponds to an estimate of the $2\sigma$ statistical error (due to using a finite number of realisations), as obtained from the jackknife resampling method.\footnote{For a sample of $n$ trajectories, the central limit theorem states that the statistical error on this sample scales as $1/\mathpalette\DHLhksqrt{n}$, so the standard deviation can be written as $\sigma=\lambda/\mathpalette\DHLhksqrt{n}$ for large $n$. The parameter $\lambda$ can be estimated by dividing the set of realisations into $n_\mathrm{sub}$ subsamples, each of size $n/n_\mathrm{sub}$. In each subsample, one can compute the mean number of $e$-folds~$\left<\mathcal{N}\right>_{i}$ where $i=1\cdots n_\mathrm{sub}$, and then compute the standard deviation $\sigma_{n/n_\mathrm{sub}}$ across the set of values of $\left<\mathcal{N}\right>_{i}$. This allows one to evaluate $\lambda=\mathpalette\DHLhksqrt{n/n_\mathrm{sub}}\sigma_{n/n_\mathrm{sub}}$, and hence $\sigma_n=\sigma_{n/n_\mathrm{sub}}/\mathpalette\DHLhksqrt{n_\mathrm{sub}}$. In practice, we take $n_\mathrm{sub}=100$.\label{footnote:jackknife}} One can check that when $y\ll 1$, \Eq{eq:meanefolds:smally} provides a good fit to the numerical result, while when $y\gg 1$, the large-velocity formula, \Eq{eq:meanefolds:classical:LO}, gives a correct description of the numerical result. This is also confirmed in the right panel of \Fig{fig:meanefolds:smally}, where the standard deviation of the number of $e$-folds, $\mathpalette\DHLhksqrt{\langle \delta \mathcal{N}^2 \rangle}$, is shown. When $y\ll 1$, the diffusion-dominated result, obtained by plugging \Eq{eq:linear:chi} into \Eq{eq:meanefolds:charfunction} (we do not reproduce the formula here since it is slightly cumbersome and not particularly illuminating) and displayed with the solid black line, provides a good fit to the numerical result, while when $y\gg 1$, the classical result~\eqref{eq:deltaN2:cl:nlo} gives a good approximation. The above considerations can be generalised into a systematic expansion in $y$ that we set up in \App{app:separable:charfunction}, and which allows one to compute higher-order corrections up to any desired order. \subsection{Implications for primordial black hole abundance} When going from the characteristic function to the PDF via the inverse Fourier transform of \Eq{eq:PDF:chi}, it is convenient to first identify the poles of the characteristic function, located at $it = \Lambda_n^{(m)}$, so that we can expand $\chi_\mathcal{N}$ as~\cite{Ezquiaga:2019ftu} \begin{equation}\begin{aligned} \label{eq:pole:expansion} \chi_\mathcal{N}(t;x,y) = \sum_{m,n} \frac{a_n^{(m)}(x,y)}{\Lambda_n^{(m)}-it} + g(t;x,y) \, , \end{aligned}\end{equation} where $g(t;x,y)$ is a regular function, and $a_n^{(m)}(x,y)$ is the residue associated with $\Lambda_n^{(m)}$, which can be found via \begin{equation}\begin{aligned} \label{eq:residue:chi} a_n^{(m)}(x,y) = -i\left[\frac{\partial}{\partial t}\chi_\mathcal{N}^{-1}\left(t=-i\Lambda_n^{(m)}; x, y\right)\right]^{-1} \, . \end{aligned}\end{equation} Here, the poles are labeled by two integer numbers $n$ and $m$, for future convenience. By inverse Fourier transforming \Eq{eq:pole:expansion}, one obtains \begin{equation}\begin{aligned} \label{eq:PDF:pole:expansion} P\left(\mathcal{N};x,y \right) = \sum_{m,n} a_n^{(m)}(x,y)e^{-\Lambda_n^{(m)}\mathcal{N}} \, , \end{aligned}\end{equation} so the locations of the poles determine the exponential decay rates and the residues set the (not necessarily positive) amplitudes associated with each decaying exponential.
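As a concrete illustration of \Eq{eq:residue:chi}, one can extract the residues of the leading-order characteristic function \eqref{eq:chi:usr:latetime} numerically and compare them with the coefficients of the mode expansion \eqref{eq:pdf:usr:analyitic}, which are nothing but the corresponding residues at $y=0$. A minimal sketch (the finite-difference step and parameter values are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

# Numerical residues of chi(t; x, 0) = cos[omega0 (1-x)] / cos(omega0),
# omega0^2 = i t mu^2, at its poles it = Lambda_n = pi^2 (n+1/2)^2 / mu^2,
# using the residue formula with a central finite difference for d(1/chi)/dt.
mu, x, h = 1.0, 0.7, 1e-6

def chi(t):
    om0 = np.sqrt(1j * t) * mu
    return np.cos(om0 * (1.0 - x)) / np.cos(om0)

for n in range(3):
    Lam = np.pi**2 * (n + 0.5)**2 / mu**2
    t0 = -1j * Lam                                  # pole location in the t plane
    dinvchi = (1.0 / chi(t0 + h) - 1.0 / chi(t0 - h)) / (2.0 * h)
    residue = -1j / dinvchi
    exact = (2 * n + 1) * np.pi * np.sin((n + 0.5) * np.pi * x) / mu**2
    print(n, residue.real, exact)                   # the two columns should agree
\end{verbatim}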
At next-to-leading order in the diffusion-dominated limit, the characteristic function is given by \Eq{eq:linear:chi}, where both $\chi_\mathcal{N}(t;x,0)$ and $f(t;x)$ have $\cos\omega_0$ in their denominator, so there is a first series of simple poles when $\omega_0=[n+(1/2)]\pi$, where $n$ is a non-negative integer. Making use of \Eq{eq:omega0}, these poles are associated with the decay rates \begin{equation}\begin{aligned} \Lambda_n^{(0)} = \frac{\pi^2}{\mu^2}\left(n+\frac{1}{2}\right)^2 \, . \end{aligned}\end{equation} The corresponding residues can be obtained by plugging \Eq{eq:linear:chi} into \Eq{eq:residue:chi}, and one obtains \begin{equation}\begin{aligned} a_n^{(0)} &= \frac{(2n+1)\pi}{\mu^2} \left( \sin\left[\left(n+\frac12\right)\pi x\right] - y \left(n+\frac12\right)\pi \cos \left[\left(n+\frac12\right)\pi x\right] \right. \\ & \hspace{1cm} \left. + y \frac{\left(n+\frac12\right)\pi}{\omega_{1,n}^{(0)}\cos\omega_{1,n}^{(0)}} \left\lbrace \omega_{1,n}^{(0)} \cos \left[ \omega_{1,n}^{(0)}\left(1-x\right)\right] - (-1)^n \left(n+\frac12\right)\pi \sin\left( \omega_{1,n}^{(0)} x \right) \right\rbrace \right) \end{aligned}\end{equation} where \begin{equation}\begin{aligned} \omega_{1,n}^{(0)} \equiv \mathpalette\DHLhksqrt{\left(n+\frac12\right)^2\pi^2-3\mu^2} \,. \end{aligned}\end{equation} The function $f(t;x)$, which multiplies $y$ in the characteristic function \eqref{eq:linear:chi}, also features the term $\cos\omega_1$ in its denominator, which has simple poles at $it=\Lambda_n^{(1)}$ where \begin{equation}\begin{aligned} \Lambda_n^{(1)} = 3+ \frac{\pi^2}{\mu^2}\left(n+\frac{1}{2}\right)^2 \, , \end{aligned}\end{equation} with associated residues \begin{equation}\begin{aligned} a_n^{(1)} &= \frac{2(-1)^n y}{\mu^2}\frac{\sin\left[ \left(n+\frac{1}{2}\right)\pi x\right]}{\cos\left[ \mathpalette\DHLhksqrt{3\mu^2 + \pi^2\left(n+\frac{1}{2}\right)^2} \right]} \left\{ -3\mu^2 - \pi^2\left(n+\frac{1}{2}\right)^2 \right. \\ & \hspace{1cm} \left. +\pi(-1)^n \left(n+\frac{1}{2}\right)\mathpalette\DHLhksqrt{3\mu^2 + \pi^2\left( n+\frac{1}{2} \right)^2}\sin\left[ \mathpalette\DHLhksqrt{3\mu^2 + \pi^2\left( n+\frac{1}{2} \right)^2} \right] \right\} \, . \end{aligned}\end{equation} The PDF is then obtained from \Eq{eq:PDF:pole:expansion} and reads \begin{equation}\begin{aligned} \label{eq:PDF:smallylimit} P(\mathcal{N}; x, y) &= \sum_{m=0,1}\sum_{n=0}^{\infty}a_n^{(m)}(x,y)e^{-\Lambda_n^{(m)}\mathcal{N}} \\ &= \sum_{n=0}^{\infty}\left[ a_n^{(0)} + a_n^{(1)}e^{-3\mathcal{N}} \right] e^{-\Lambda_n^{(0)}\mathcal{N}} \, , \end{aligned}\end{equation} where all quantities appearing in this expression are given above. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figs/NefPDF_linear.pdf} \includegraphics[width=0.49\textwidth]{figs/NefPDF_log.pdf} \caption{The probability distribution $P$ of the number of $e$-folds~$\mathcal{N}$ realised through the USR well, for $\mu=1$ and with several values of the initial velocity $y_\mathrm{in}$. In the left panel we use a linear scale for the vertical axis, and in the right panel we use a logarithmic scale (the value $y_\mathrm{in}=0.1$ is not shown in the right panel for display convenience). The black curves correspond to the small-$y$ expression \eqref{eq:PDF:smallylimit} (where in practice, the sum is truncated at $n=1000$), while the error bars are reconstructed from a large number of realisations of the Langevin equations \eqref{eq:langevin:xy}, with a Gaussian kernel density of width $\mathrm{d} N=0.005$.
The size of the bars is the $2\sigma$-estimate for the statistical error obtained from the jackknife resampling procedure, see footnote~\ref{footnote:jackknife}. \label{fig:pdf:smally}} \end{figure} Note that the presence of the classical velocity does not change the location of the first set of poles, $it=\Lambda_n^{(0)}$, which are already present at leading order in this calculation, but rather adds a second set of poles at $it=\Lambda_n^{(1)}=\Lambda_n^{(0)}+3$. Since the asymptotic behaviour on the tail is given by the lowest pole, {i.e.~} $\Lambda_0^{(0)}$, it does not depend on whether or not the classical drift is included, as announced above.\footnote{This is also a consequence of the more generic result shown in \Refa{Ezquiaga:2019ftu} that the poles, hence the decay rates, do not depend on the field-space coordinates (hence on the values of $x$ and $y$ here).} However, the overall amplitude of the leading exponential term, given by $a_0^{(0)}$, does depend on $y$. Let us also stress that by extending those considerations to the systematic $y$-expansion set up in \App{app:separable:charfunction}, one can show that a new set of poles arises at each order, given by \Eq{eq:pole:expansion:nm}, {i.e.~} $\Lambda_n^{(m)}=\Lambda_n^{(0)}+3m$. Each set of poles is therefore more and more suppressed on the tail, but the sets are well separated only when $\mu\gg 1$. For instance, if $\mu < \mathpalette\DHLhksqrt{2/3}\pi$, one has $\Lambda_0^{(1)}<\Lambda_1^{(0)}$, so the second most important term in \Eq{eq:PDF:pole:expansion} comes from the second set of poles, not the first one. The formula~\eqref{eq:PDF:smallylimit} is compared to numerical simulations of the Langevin equation in \Fig{fig:pdf:smally}, where we can see that it provides a very good fit to the PDF, even for substantial values of $y_\mathrm{in}$. On the tail of the distributions, the error bars become larger, because realisations that experience a large number of $e$-folds~are rare and hence provide sparser statistics. They are however sufficient to clearly see, especially in the right panel, that the tails are indeed exponential, with a decay rate that does not depend on $y_\mathrm{in}$. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figs/beta_lin.pdf} \includegraphics[width=0.49\textwidth]{figs/beta_log.pdf} \caption{Fraction of the Langevin realisations that undergo a number of $e$-folds~$\mathcal{N}>\langle \mathcal{N} \rangle + \zeta_\mathrm{c}$, with $\mu=1$ and $\zeta_\mathrm{c}=1$. The black curves correspond to the small-$y$ approximation~\eqref{eq:beta:smally}, while the blue bars are reconstructed from a large number of numerical simulations of the Langevin equations \eqref{eq:langevin:xy} as in previous figures. \label{fig:beta:smally}} \end{figure} One can also calculate the mass fraction $\beta_M$ of primordial black holes from the estimate given in \Eq{eq:def:beta}, which leads to \begin{equation}\begin{aligned} \label{eq:beta:smally} \beta_M &= \sum_{n=0}^{\infty} \left\lbrace \frac{a_n^{(0)}(x,y)}{\Lambda_n^{(0)}} + \frac{a_n^{(1)}(x,y)}{\Lambda_n^{(0)}+3}e^{-3\left[\left<\mathcal{N}\right>(x,y)+\zeta_\mathrm{c}\right]} \right\rbrace e^{-\Lambda_n^{(0)}\left[\left<\mathcal{N}\right>(x,y)+\zeta_\mathrm{c}\right]} \, .
\end{aligned}\end{equation} This expression is compared to our numerical results in \Fig{fig:beta:smally}, for $\zeta_\mathrm{c}=1$. When $y_\mathrm{in}$ increases, $\beta_M$ decreases, and we must simulate a very large number of realisations in order to avoid being dominated by statistical noise (in practice one needs to produce many more realisations than $1/\beta_M$ in order to accurately compute $\beta_M$), which becomes numerically expensive. In practice, for $y_\mathrm{in} \geq 6$, we produced $10^8$ realisations and found that none of these experienced more than $\left< \mathcal{N} \right> + 1$ $e$-folds~of inflation, so we can only place an upper bound of $\beta_M< 10^{-8}$ for these values of the initial velocity. When $y_\mathrm{in}$ is small, one can see that \Eq{eq:beta:smally} provides a good fit to the asymptotic behaviour of $\beta_M$ but is less accurate when $y_\mathrm{in}$ approaches one (at least, slightly less accurate than the small-velocity approximation appears to be in \Figs{fig:meanefolds:smally} and~\ref{fig:pdf:smally}). This is because, as mentioned above, for values of $\mu$ of order one or below, the sets of eigenvalues highly overlap and the mass fraction picks up contributions from higher-order terms. The approximation improves when larger values of $\mu$ are employed. Let us also mention that the large-velocity approximation developed in \Sec{sec:classicallimit} vastly underestimates the mass fraction, since this expansion applies to the neighbourhood of the maximum of the PDF~\cite{Pattison:2018bct, Ezquiaga:2019ftu}. For this reason we do not display it in \Fig{fig:beta:smally}: it lies many orders of magnitude below the actual result. \section{Volterra equation approach} \label{sec:volterra} The above considerations made clear that resolving the tail of the PDF in the presence of a substantial initial velocity is both analytically and numerically challenging. This is why, in this section, we propose a new method that makes use of Volterra integral equations to solve the first-passage-time problem for the stochastic system depicted in \Fig{fig:sketch}. As we will see, it is much more efficient at reconstructing the tail of the PDF than simulating a large number of Langevin realisations. The first step of this approach is to make the dynamics trivial by introducing the phase-space variable \begin{equation}\begin{aligned} z\equiv \frac{\mu}{\mathpalette\DHLhksqrt{2}}\left(x-y\right) , \end{aligned}\end{equation} in terms of which \Eq{eq:langevin:xy} can be rewritten as \begin{equation}\begin{aligned} \label{eq:Langevin:z} \frac{\mathrm{d} z}{\mathrm{d} N} = \xi\left(N\right) . \end{aligned}\end{equation} In the absence of any boundary conditions, starting from $z_\mathrm{in}$ at time $N_\mathrm{in}$, the distribution function of the $z$ variable at time $N$ is of the Gaussian form \begin{equation}\begin{aligned} \label{eq:sol:FP:noBound} f\left(z,N\vert z_\mathrm{in},N_\mathrm{in}\right) = \frac{e^{-\frac{\left(z-z_\mathrm{in}\right)^2}{2\left(N-N_\mathrm{in}\right)}}}{\mathpalette\DHLhksqrt{2\pi\left(N-N_\mathrm{in}\right)}}\, .
\end{aligned}\end{equation} The non-trivial features of the problem are now contained in the boundary conditions, since the absorbing boundary at $\phi=\phi_\mathrm{end}$ ($x=0$) and the reflective boundary at $\phi=\phi_\mathrm{end}+\Delta\phi_\mathrm{well}$ ($x=1$) give rise to the time-dependent absorbing and reflective boundaries, at respective locations \begin{equation}\begin{aligned} \label{eq:boundaries:z} z_-(N) &= - \frac{\mu}{\mathpalette\DHLhksqrt{2}} y_\mathrm{in} e^{-3\left(N-N_\mathrm{in}\right)} \, ,\\ z_+(N) &= \frac{\mu}{\mathpalette\DHLhksqrt{2}}\left[1 - y_\mathrm{in} e^{-3\left(N-N_\mathrm{in}\right)}\right] . \end{aligned}\end{equation} In terms of the $z$ variable, these equations describe the motion of a free particle with no potential gradient and constant noise amplitude, within a well of fixed width but with moving boundaries, one being absorbing and the other one reflecting. This problem has no known analytical solution but one can consider a related, solvable problem, where the two boundaries are absorbing. Let us first describe how that problem can be solved with Volterra equations, before explaining how the solutions to the original problem can be expressed in terms of solutions of the related one. \subsection{Two absorbing boundaries} \label{sec:two:absorbing:boundaries} If the two boundaries are absorbing, following \Refa{Buonocore:1990vol}, one can introduce $\mathcal{P}^{(0)}_-(N\vert N_\mathrm{in}, z_\mathrm{in})$ [respectively $\mathcal{P}^{(0)}_+(N\vert N_\mathrm{in}, z_\mathrm{in})$], the probability densities that $z$ crosses for the first time the boundary $z_-$ (respectively $z_+$) at time $N$ without having crossed $z_+$ (respectively $z_-$) before, starting from $z_\mathrm{in}$ at time $N_\mathrm{in}$, as well as the two auxiliary functions \begin{equation}\begin{aligned} \Psi_{\pm}\left(N\vert z_\mathrm{in},N_\mathrm{in}\right)\equiv \left[z_\pm'(N)-\frac{z_\pm(N)-z_\mathrm{in}}{N-N_\mathrm{in}}\right] f\left[z_\pm(N),N\vert z_\mathrm{in},N_\mathrm{in}\right] . \end{aligned}\end{equation} One can then show that $\mathcal{P}^{(0)}_-(N\vert N_\mathrm{in}, z_\mathrm{in})$ and $\mathcal{P}^{(0)}_+(N\vert N_\mathrm{in}, z_\mathrm{in})$ satisfy the two coupled integral equations~\cite{Buonocore:1990vol} \begin{equation}\begin{aligned} \label{eq:Volterra} \mathcal{P}^{(0)}_-(N\vert z_\mathrm{in},N_\mathrm{in}) = & \Psi_-\left(N\vert z_\mathrm{in},N_\mathrm{in}\right)-\int_{N_\mathrm{in}}^N \mathrm{d} \tilde{N} \Big\lbrace \mathcal{P}^{(0)}_-(\tilde{N}\vert z_\mathrm{in}, N_\mathrm{in}) \Psi_-\left[N\vert z_-(\tilde{N}),\tilde{N}\right] \\ & +\mathcal{P}^{(0)}_+(\tilde{N}\vert z_\mathrm{in}, N_\mathrm{in}) \Psi_-\left[N\vert z_+(\tilde{N}),\tilde{N}\right]\Big\rbrace\, ,\\ \mathcal{P}^{(0)}_+(N\vert z_\mathrm{in}, N_\mathrm{in}) = & -\Psi_+\left(N\vert z_\mathrm{in},N_\mathrm{in}\right)+\int_{N_\mathrm{in}}^N \mathrm{d} \tilde{N} \Big\lbrace \mathcal{P}^{(0)}_-(\tilde{N}\vert z_\mathrm{in}, N_\mathrm{in}) \Psi_+\left[N\vert z_-(\tilde{N}),\tilde{N}\right] \\ & +\mathcal{P}^{(0)}_+(\tilde{N}\vert z_\mathrm{in}, N_\mathrm{in}) \Psi_+\left[N\vert z_+(\tilde{N}),\tilde{N}\right]\Big\rbrace\, . \end{aligned}\end{equation} These formulas are obtained from first principles in \Refa{Buonocore:1990vol} and we do not reproduce their derivation here. 
It is nonetheless worth pointing out that they can be generalised to any drift function and noise amplitude in the Langevin equation, although, for practical use, one needs to compute the solution $f$ of the Fokker-Planck equation in the absence of boundaries [as was done in \Eq{eq:sol:FP:noBound} here], which is not always analytically possible. These equations can be readily solved numerically by discretising the integrals. Starting from $\mathcal{P}^{(0)}_-(N_\mathrm{in}\vert N_\mathrm{in}, z_\mathrm{in})=\mathcal{P}^{(0)}_+(N_\mathrm{in}\vert N_\mathrm{in}, z_\mathrm{in})=0$ (if $z_\mathrm{in} \in [z_-(N_\mathrm{in}),z_+(N_\mathrm{in})]$), one can then compute the value of the distribution functions at each time step by plugging into the right-hand sides of \Eqs{eq:Volterra} their values at previous time steps. \subsection{One absorbing and one reflective boundary} Let us now consider again our original problem, where the boundary at $z_-(N)$ is absorbing and the boundary at $z_+(N)$ is reflective, and where our goal is to compute the probability density $\mathcal{P}_-(N\vert z_\mathrm{in}, N_\mathrm{in})$ that the system crosses the absorbing boundary at time $N$ starting from $z_\mathrm{in}$ at time $N_\mathrm{in}$. This can be done by considering the probability density that the system crosses the absorbing boundary at time $N$ after having bounced exactly $n$ times at the reflective boundary, which we denote $\mathcal{P}^{(n)}_-(N\vert z_\mathrm{in},N_\mathrm{in})$. One can decompose \begin{equation}\begin{aligned} \label{eq:PDF:sum:reflectionNumber} \mathcal{P}_-(N\vert z_\mathrm{in},N_\mathrm{in}) = \sum_{n=0}^\infty \mathcal{P}^{(n)}_-(N\vert z_\mathrm{in}, N_\mathrm{in})\, . \end{aligned}\end{equation} We also introduce $\mathcal{P}^{(n)}_+(N\vert z_\mathrm{in}, N_\mathrm{in})$, which is the distribution function associated with the $(n+1)^{\mathrm{th}}$ bouncing time. Obviously, neither $\mathcal{P}^{(n)}_-$ nor $\mathcal{P}^{(n)}_+$ is normalised to unity, since all realisations of the stochastic process escape through the absorbing boundary after a finite number of bouncing events. More precisely, their norms decay with $n$. In the regimes where this makes the sum~\eqref{eq:PDF:sum:reflectionNumber} converge, $\mathcal{P}_-$ can be computed using the following iterative procedure. For $n=0$, it is clear that $\mathcal{P}^{(0)}_\pm$ match the quantities denoted in the same way in \Sec{sec:two:absorbing:boundaries} (hence the notation), since $\mathcal{P}^{(0)}_-$ corresponds to the probability that the system crosses the absorbing boundary without having ever bounced at the reflective boundary, and $\mathcal{P}^{(0)}_+$ is associated with the first encounter with the reflective boundary. For $n\geq 1$, let us consider realisations that bounce $n$ times before exiting the well, and let us call $N_\mathrm{b}$ their last bouncing time ({i.e.~} their $n^\mathrm{th}$ bouncing time). The distribution function associated with $N_\mathrm{b}$ is simply $\mathcal{P}^{(n-1)}_+(N\vert z_\mathrm{in}, N_\mathrm{in})$. Then, starting from the reflective boundary at time $N_\mathrm{b}$, one has to determine the probability that the system crosses the absorbing boundary at time $N$ without bouncing again against the reflective boundary. From the above definitions, this probability is nothing but $\mathcal{P}_-^{(0)}[N\vert z_+(N_{\mathrm{b}}),N_{\mathrm{b}}]$.
This gives rise to \begin{equation}\begin{aligned} \mathcal{P}^{(n)}_-(N\vert z_\mathrm{in},N_\mathrm{in}) &= \int_{N_\mathrm{in}}^N \mathrm{d} N_{\mathrm{b}} \mathcal{P}^{(n-1)}_+(N_\mathrm{b}\vert z_\mathrm{in}, N_\mathrm{in}) \mathcal{P}_{-}^{(0)}\left[N\vert z_+(N_{\mathrm{b}}),N_{\mathrm{b}}\right]. \end{aligned}\end{equation} Similarly, once at the reflective boundary for the $n^{\mathrm{th}}$ time at time $N_\mathrm{b}$, the probability of hitting the reflective boundary for the $(n+1)^{\mathrm{th}}$ time at time $N$, without exiting through the absorbing boundary before then, is given by $\mathcal{P}_+^{(0)}[N\vert z_+(N_\mathrm{b}),N_\mathrm{b}]$, which leads to \begin{equation}\begin{aligned} \label{eq:Pplus_n_Volterra} \mathcal{P}^{(n)}_+(N\vert z_\mathrm{in}, N_\mathrm{in}) &= \int_{N_\mathrm{in}}^N \mathrm{d} N_{\mathrm{b}} \mathcal{P}^{(n-1)}_+(N_\mathrm{b}\vert z_\mathrm{in}, N_\mathrm{in}) \mathcal{P}_{+}^{(0)}\left[ N\vert z_+(N_{\mathrm{b}}),N_{\mathrm{b}}\right] . \end{aligned}\end{equation} These allow one to compute the distribution functions $ \mathcal{P}^{(n)}_\pm$ iteratively, hence to obtain the full first-exit-time distribution function from \Eq{eq:PDF:sum:reflectionNumber}. \begin{figure} \centering \includegraphics[width=0.69\textwidth]{figs/pdf_iterative_yin_eq_5.pdf} \caption{Distribution function of the first passage time $\mathcal{N}$ for $\mu=1$ and $y_\mathrm{in}=5$, computed from the Volterra integral equations~\eqref{eq:Volterra}. The solid lines correspond to \Eq{eq:PDF:sum:reflectionNumber} truncated at various orders $n$, labeled with different colours. One can check that, as $n$ increases, this approaches the distribution function reconstructed from direct simulations of the Langevin equations and displayed in green. The dashed lines stand for $\mathcal{P}_+^{(n)}$ given in \Eq{eq:Pplus_n_Volterra}, which correspond to the distribution function of the $(n+1)^\mathrm{th}$ bouncing time. \label{fig:Volterra}} \end{figure} For illustrative purposes, this is done in \Fig{fig:Volterra} for $\mu=1$ and $y_\mathrm{in}=5$. As $n$ increases, one can check that the distribution functions of the $(n+1)^\mathrm{th}$ bouncing time, $\mathcal{P}_+^{(n)}$, decay (dotted lines). As a consequence, when truncating \Eq{eq:PDF:sum:reflectionNumber} at increasing orders (solid lines), one quickly approaches the distribution reconstructed from Langevin simulations (green bars). Close to the maximum of the distribution, the results from the first iterations already provide an excellent approximation, while on the tail, one has to iterate a few times before reaching a good agreement. There, the statistical error associated with the Langevin reconstruction is large since the statistics are sparse, and the Volterra technique is particularly useful. In practice, one has to truncate the sum \eqref{eq:PDF:sum:reflectionNumber} at a fixed order $n_\mathrm{max}$. As already argued, the convergence of the sum is improved in situations where $ \mathcal{P}^{(n)}_\pm$ quickly decay with $n$, {i.e.~} when the system is unlikely to undergo a large number of bouncing events. This implies that this computational scheme is well suited to the ``drift-dominated'' (or large initial velocity) regime of \Sec{sec:classicallimit}, and we indeed find small values of $y_\mathrm{in}$ to be numerically more challenging.
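For completeness, the discretisation described above can be sketched as follows. The snippet below implements the two coupled equations \eqref{eq:Volterra} with a simple rectangle rule (the kernels vanish at coincident times), for illustrative parameter values and starting from the reflective boundary, $x_\mathrm{in}=1$; it returns $\mathcal{P}^{(0)}_\pm$, whose integrals give the probabilities of exiting without bouncing and of bouncing at least once. The higher-order terms of \Eq{eq:PDF:sum:reflectionNumber} then follow by re-running the same solver from $[z_+(N_\mathrm{b}),N_\mathrm{b}]$ and convolving as in \Eq{eq:Pplus_n_Volterra}. This is a sketch, not the implementation used for \Fig{fig:Volterra}.
\begin{verbatim}
import numpy as np

# Rectangle-rule discretisation of the two coupled Volterra equations for the
# two-absorbing-boundary problem, with the moving boundaries z_-(N), z_+(N).
mu, y_in, N_in, dN, K = 1.0, 2.0, 0.0, 0.01, 1200
Ngrid = N_in + dN * np.arange(K + 1)

z_m = lambda N: -mu / np.sqrt(2.0) * y_in * np.exp(-3.0 * (N - N_in))          # absorbing
z_p = lambda N: mu / np.sqrt(2.0) * (1.0 - y_in * np.exp(-3.0 * (N - N_in)))   # reflective
dz = lambda N: 3.0 * mu / np.sqrt(2.0) * y_in * np.exp(-3.0 * (N - N_in))      # z_-' = z_+'

def Psi(zb, dzb, N, z0, N0):
    """Auxiliary kernel Psi_{+/-} built from the free Gaussian propagator."""
    dt = N - N0
    f = np.exp(-(zb - z0) ** 2 / (2.0 * dt)) / np.sqrt(2.0 * np.pi * dt)
    return (dzb - (zb - z0) / dt) * f

def solve_P0(z0, i0=0):
    """P0_minus and P0_plus on the grid, starting from z0 at Ngrid[i0]."""
    Pm, Pp = np.zeros(K + 1), np.zeros(K + 1)
    for k in range(i0 + 1, K + 1):
        N, zmN, zpN, dzN = Ngrid[k], z_m(Ngrid[k]), z_p(Ngrid[k]), dz(Ngrid[k])
        Pm[k] = Psi(zmN, dzN, N, z0, Ngrid[i0])
        Pp[k] = -Psi(zpN, dzN, N, z0, Ngrid[i0])
        j = np.arange(i0 + 1, k)                    # coincident-time kernel -> 0
        if j.size:
            Nj = Ngrid[j]
            Pm[k] -= dN * np.sum(Pm[j] * Psi(zmN, dzN, N, z_m(Nj), Nj)
                                 + Pp[j] * Psi(zmN, dzN, N, z_p(Nj), Nj))
            Pp[k] += dN * np.sum(Pm[j] * Psi(zpN, dzN, N, z_m(Nj), Nj)
                                 + Pp[j] * Psi(zpN, dzN, N, z_p(Nj), Nj))
    return Pm, Pp

Pm0, Pp0 = solve_P0(z_p(N_in))            # start at x_in = 1, i.e. z_in = z_+(N_in)
print("no-bounce exit probability:", np.trapz(Pm0, Ngrid))
print("probability of >= 1 bounce:", np.trapz(Pp0, Ngrid))   # the two should sum to ~1
\end{verbatim}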
\subsection{Revisiting the drift-dominated limit} The Volterra equations~\eqref{eq:Volterra} also enable to expand the first-passage-time distribution functions around any known limiting case, by iteratively plugging the approximated distributions on the right-hand sides of \Eqs{eq:Volterra} and reading their improved version on the left-hand sides. One such regime is the drift-dominated limit, or large-velocity limit, already studied in \Sec{sec:classicallimit}. In this limit, at leading order, each realisation of the stochastic process follows the classical path, hence \begin{equation}\begin{aligned} \label{eq:pdf:cl:lo:volterra} \mathcal{P}_-^{\mathrm{cl,LO}}\left(N\vert z_\mathrm{in},N_\mathrm{in}\right) = \delta\left[N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})\right], \end{aligned}\end{equation} see \Eq{eq:pdf:cl:lo}, where the classical number of $e$-folds~is simply obtained by requiring that $z_-(N_\mathrm{cl})=z_\mathrm{in}$ (since $z$ remains still in the absence of quantum diffusion), which from \Eq{eq:boundaries:z} gives rise to \begin{equation}\begin{aligned} N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in}) = N_\mathrm{in} - \frac{1}{3}\ln\left(-\frac{z_\mathrm{in}}{y_\mathrm{in}}\frac{\mathpalette\DHLhksqrt{2}}{\mu}\right), \end{aligned}\end{equation} in agreement with \Eq{eq:meanefolds:classical:LO}. Along the classical path, the reflective boundary plays no role, hence whether it is absorbing or reflective is irrelevant and \Eq{eq:pdf:cl:lo:volterra} can also serve as the leading-order expression for $\mathcal{P}_-^{(0)}$. At that order, one simply has $\mathcal{P}_+^{(0)}(N\vert z_\mathrm{in}, N_\mathrm{in})=0$. Plugging these formulas into the right-hand sides of \Eqs{eq:Volterra}, one obtains, at next-to-leading order, \begin{equation}\begin{aligned} \label{eq:P(0):cl:NLO} \mathcal{P}^{(0),\mathrm{cl,NLO}}_-(N\vert N_\mathrm{in}, z_\mathrm{in}) =& \left[z_-'(N)-\frac{z_-(N)-z_\mathrm{in}}{N-N_\mathrm{in}}\right]\frac{e^{-\frac{\left[z_-(N)-z_\mathrm{in}\right]^2}{2(N-N_\mathrm{in})}}}{\mathpalette\DHLhksqrt{2\pi(N-N_\mathrm{in})}} \\ & \kern-8em -\left[ z_-'(N)-\frac{z_-(N)- z_\mathrm{in}}{N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})}\right] \frac{e^{-\frac{\left[ z_-(N)-z_\mathrm{in}\right] ^2}{2 \left[N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})\right]}}}{\mathpalette\DHLhksqrt{2\pi\left[N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})\right]}}\theta\left[N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})\right]\\ \mathcal{P}^{(0),\mathrm{cl,NLO}}_+(N\vert N_\mathrm{in}, z_\mathrm{in}) =& -\left[z_+'(N)-\frac{z_+(N)-z_\mathrm{in}}{N-N_\mathrm{in}}\right]\frac{e^{-\frac{\left[z_+(N)-z_\mathrm{in}\right]^2}{2(N-N_\mathrm{in})}}}{\mathpalette\DHLhksqrt{2\pi(N-N_\mathrm{in})}} \\ & \kern-8em +\left[ z_+'(N)-\frac{z_+(N)- z_\mathrm{in}}{N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})}\right] \frac{e^{-\frac{\left[ z_+(N)-z_\mathrm{in}\right] ^2}{2 \left[N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})\right]}}}{\mathpalette\DHLhksqrt{2\pi\left[N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})\right]}}\theta\left[N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})\right]. \end{aligned}\end{equation} In order to gain further analytical insight, one can expand $\mathcal{P}^{(0)}_-$ close to $N=N_\mathrm{cl}$, where it is expected to peak. 
One obtains \begin{equation}\begin{aligned} \mathcal{P}^{(0),\mathrm{cl,NLO}}_-(N\vert N_\mathrm{in}, z_\mathrm{in}) \simeq & \frac{-3z_\mathrm{in}}{\mathpalette\DHLhksqrt{2\pi \left[N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})-N_\mathrm{in}\right]}}e^{-\frac{9 z_\mathrm{in}^2}{2}\frac{\left[N-N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})\right]^2}{N_\mathrm{cl}(z_\mathrm{in},N_\mathrm{in})}}\, , \end{aligned}\end{equation} which is nothing but a Gaussian distribution centred on $N=N_\mathrm{cl}$ and with variance $\langle \delta N^2\rangle = (N_\mathrm{cl}-N_\mathrm{in})/(9 z_\mathrm{in}^2)$, in agreement with \Eq{eq:deltaN2:cl:nlo} [however, here, we are able to reconstruct the Gaussian PDF, something that was not directly possible with the expansion of \Sec{sec:classicallimit}, see the discussion around \Eq{eq:fnl}]. From \Eq{eq:P(0):cl:NLO}, one can see that the exponential terms in $\mathcal{P}^{(0)}_+$ rather peak at the time $N_+$ such that $z_+(N_+)=z_\mathrm{in}$, namely $ N_+ = N_\mathrm{in}-\frac{1}{3}\ln\left(\frac{1}{y_\mathrm{in}}-\frac{z_\mathrm{in}}{y_\mathrm{in}}\frac{\mathpalette\DHLhksqrt{2}}{\mu}\right)= N_\mathrm{in}-\frac{1}{3}\ln\left(1+\frac{1-x_\mathrm{in}}{y_\mathrm{in}}\right). $ If one starts at the location of the reflective boundary, $x_\mathrm{in}=1$, then one simply has $N_+=N_\mathrm{in}$ as one may have expected, but otherwise $N_+<N_\mathrm{in}$. This means that $\mathcal{P}^{(0)}_+$ is maximal near the origin $N=N_\mathrm{in}$, and can be expanded around there, \begin{equation}\begin{aligned} \mathcal{P}^{(0),\mathrm{cl,NLO}}_+(N\vert N_\mathrm{in}, z_\mathrm{in}) \simeq & \left[\frac{\mu\left(1-x_\mathrm{in}\right)}{2\mathpalette\DHLhksqrt{\pi}\left( N-N_\mathrm{in}\right)^{3/2}} +\frac{9 y_\mathrm{in} \mu}{4}\mathpalette\DHLhksqrt{\frac{N-N_\mathrm{in}}{\pi}} \right] \\ & \kern-5em \exp \left[ -\frac{\left(1-x_\mathrm{in}\right)^2 \mu^2}{4\left(N-N_\mathrm{in}\right)}+ \frac{3}{2}\mu^2 y_\mathrm{in}\left(x_\mathrm{in}-1\right)-\frac{9}{4}\mu^2 y_\mathrm{in} \left(x_\mathrm{in} + y_\mathrm{in} -1 \right)\left(N-N_\mathrm{in}\right)\right], \end{aligned}\end{equation} where the expansion has been performed at next-to-leading order in $(N-N_\mathrm{in})$ such as to also describe the case $x_\mathrm{in}=1$ for which the leading order result vanishes. One can use this approximation to integrate $\mathcal{P}_+^{(0)}$ and estimate the probability $p_+$ that the system bounces at least once on the reflective boundary, \begin{equation}\begin{aligned} p_+\simeq \begin{cases} e^{-\frac{3}{2}y_\mathrm{in} \mu^2 \left(1-x_\mathrm{in}\right)}\quad\mathrm{if}\quad x_\mathrm{in}<1\\ \frac{1}{3\mu^2 y_\mathrm{in}^2}\quad\mathrm{if}\quad x_\mathrm{in}=1 \end{cases} . \end{aligned}\end{equation} One can check that, for large initial velocities, the suppression is much less pronounced in the case where $x_\mathrm{in}=1$: indeed, if one starts precisely at the location of the reflective boundary, the probability to ``bounce'' against it (although this becomes a somewhat singular quantity) is larger than if one starts away from it. One can check that, when $\mu y_\mathrm{in} \gg 1$, this probability is small, in agreement with the results of \Sec{sec:classicallimit} where it was shown that $(\mu y_\mathrm{in})^{-1} $ is indeed the relevant parameter to perform the drift-dominated expansion with. It is also interesting to notice that already at next-to-leading order, the distributions~\eqref{eq:P(0):cl:NLO} are heavy-tailed. 
Indeed, when $N\gg N_\mathrm{cl}(z_\mathrm{in}, N_\mathrm{in})$, one has $z_- \simeq z_-' \simeq z_+'\simeq 0$ and $z_+\simeq \mu/\mathpalette\DHLhksqrt{2}$, which gives rise to \begin{equation}\begin{aligned} \mathcal{P}^{(0),\mathrm{cl,NLO}}_- &\simeq 3 z_\mathrm{in} \frac{N_\mathrm{in}-N_\mathrm{cl}}{2 \mathpalette\DHLhksqrt{2\pi}} N^{-5/2}\\ \mathcal{P}^{(0),\mathrm{cl,NLO}}_+ &\simeq \mu \frac{N^{-3/2}}{2\mathpalette\DHLhksqrt{\pi}} \, . \end{aligned}\end{equation} These power-law behaviours, which imply that not all moments of the distributions are defined, typically arise in situations where a single absorbing boundary condition is imposed and the inflating field domain is unbounded, a typical example being the L\'evy distribution. At this order, indeed, $\mathcal{P}^{(0),\mathrm{cl,NLO}}_-$ still carries no information about the reflective boundary [since $z_+$ does not appear explicitly in the first of \Eqs{eq:P(0):cl:NLO}]. This is no longer the case at next-to-next-to-leading order, although unfortunately, there, the integrals appearing in \Eq{eq:Volterra} can no longer be performed analytically. \section{Applications} \label{sec:applications} So far we have studied stochastic effects in USR inflation by focusing on the toy model depicted in \Fig{fig:sketch}, where the inflationary potential contains a region that is exactly flat. In this section, we want to consider more generic potentials, and see how the toy model we have investigated helps to describe more realistic setups. \subsection{The Starobinsky model} \label{sec:StarobinskyModel} Let us start by considering the Starobinsky potential for inflation, where the potential has two linear segments with different gradients, namely \begin{equation}\begin{aligned} \label{eq:Staro:potential} V(\phi) = \begin{cases} \displaystyle V_{0}\left(1+\alpha\frac{\phi-\phi_*}{M_\usssPl}\right) &\quad \mathrm{for}\quad \phi< \phi_* \\ & \\ \displaystyle V_{0}\left(1+\beta\frac{\phi-\phi_*}{M_\usssPl}\right) &\quad \mathrm{for}\quad \phi> \phi_* \end{cases} \, , \end{aligned}\end{equation} where $\beta>\alpha>0$, see \Fig{fig:staro:potential}. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figs/staro_potential_2021_2.pdf} \caption{A sketch of the piece-wise linear potential~\eqref{eq:Staro:potential}, known as the Starobinsky model. When reaching the second branch of the potential, where the potential slope suddenly drops, the inflaton experiences a phase of USR inflation before progressively relaxing back to slow roll. \label{fig:staro:potential}} \end{figure} For $\beta\ll 1$, we may take $H$ to be approximately constant throughout, and this model begins with a first phase of slow-roll inflation when $\phi>\phi_*$. This is followed by a phase of USR inflation for $\phi<\phi_*$, caused by the discontinuous jump in the gradient of the potential, which kicks the inflaton off the slow-roll trajectory at $\phi=\phi_*$. Finally, since USR is unstable in this model (note that it can be stable in other setups~\cite{Pattison:2018bct}), the inflaton relaxes back to the slow-roll regime in a second phase of slow-roll inflation. The velocity at the beginning of the USR phase is given by the slow-roll attractor before the discontinuity, namely \begin{equation}\begin{aligned} \label{eq:initalvelocity:staro} \dot{\phi}_* \simeq -M_\usssPl H \beta \, . \end{aligned}\end{equation} In the absence of quantum diffusion, the width of the USR well can be obtained by finding the point at which the inflaton relaxes back to slow roll.
Taking $H$ to be constant, the Klein-Gordon equation~\eqref{eq:kleingordon} in the second branch of the potential can be solved to give \begin{equation}\begin{aligned} \label{eq:Staro:model:phi(t)} \frac{\phi(t)-\phi_* }{M_\usssPl}= - \alpha H\left(t-t_*\right)+\frac{\beta-\alpha}{3}\left[e^{-3H(t-t_*)}-1\right], \end{aligned}\end{equation} where we have made use of the initial condition~\eqref{eq:initalvelocity:staro} at $\phi=\phi_*$. The velocity of the inflaton, $\dot{\phi}$, is then \begin{equation}\begin{aligned} \label{eq:Staro:momentum} \frac{\dot{\phi}(t)}{M_\usssPl H } = -\alpha - \left(\beta-\alpha\right) e^{-3H(t-t_*)}. \end{aligned}\end{equation} The first term, $-\alpha$, represents the slow-roll attractor towards which the system asymptotes at late time, while the second term is the USR velocity. It provides the main contribution to the total velocity at the onset of the second branch of the potential if $\beta\gg\alpha$, and that condition ensures that the inflaton experiences a genuine USR phase. The two contributions are equal at the time $t_{\mathrm{USR}\to\mathrm{SR}} -t_*=\ln(\beta/\alpha-1)/(3H)$, and by evaluating \Eq{eq:Staro:model:phi(t)} at that time, one finds~\cite{Pattison:2019hef} \begin{equation}\begin{aligned} \frac{\phi_{\mathrm{USR}\to\mathrm{SR}}-\phi_*}{M_\usssPl} = -\frac{\alpha}{3}\ln\left(\frac{\beta-\alpha}{\alpha}\right)+\frac{2\alpha-\beta}{3}\, . \end{aligned}\end{equation} This allows one to evaluate the width of the USR well, $\Delta\phi_\mathrm{well}$, and the critical velocity with \Eq{eq:critical:velocity}. Since the initial velocity corresponds to \Eq{eq:initalvelocity:staro}, one obtains for the rescaled initial velocity \begin{equation}\begin{aligned} y_\mathrm{in} = \frac{1}{1+\frac{\alpha}{\beta} \left[\ln\left(\frac{\beta-\alpha}{\alpha}\right)-2\right]} \simeq 1 \quad\text{if} \quad\beta\gg \alpha\, . \end{aligned}\end{equation} The fact that $y_\mathrm{in} \simeq 1$ in this model should not come as a surprise, since the width of the USR well was defined as being where the classical trajectory relaxes to slow-roll, so by definition, the initial velocity of the inflaton is precisely the one such that the field reaches that point in finite time. The Starobinsky model therefore lies at the boundary between the drift-dominated and the diffusion-dominated limits studied in \Secs{sec:classicallimit} and~\ref{sec:stochasticlimit} respectively. The value of the parameter $\mu$ can be read off from \Eq{eq:def:mu} and is given by \begin{equation}\begin{aligned} \label{eq:mu:Staro:TM} \mu\simeq \frac{2\mathpalette\DHLhksqrt{2}}{3}\pi \beta \frac{M_\usssPl}{H}\, . \end{aligned}\end{equation} Recall that in order for the potential to support a phase of slow-roll inflation in the first branch, we have taken $\beta\ll 1$. However, for inflation to proceed at sub-Planckian energies, $H/M_\usssPl\ll 1$, and the current upper bound on the tensor-to-scalar ratio~\cite{Akrami:2018odb} imposes $H/M_\usssPl\lesssim 10^{-5}$ in single-field slow-roll models. Therefore, unless $\beta<10^{-5}$, one has $\mu\gg 1$, and as argued below \Eq{eq:usr:efolds:2ndmoment:analytic}, $y\sim 1$ corresponds to where an abrupt transition between the diffusion-dominated and the drift-dominated regime takes place. Hence neither of the approximations developed in \Secs{sec:classicallimit} and~\ref{sec:stochasticlimit} really applies, and it seems difficult to predict the details of the PBH abundance without performing numerical explorations of the model.
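As an illustration of these orders of magnitude, one can evaluate the two expressions above for a few parameter values (the numbers below are arbitrary examples, not fits):
\begin{verbatim}
import numpy as np

# Illustrative evaluation of y_in and mu for the Starobinsky model,
# using the two formulas above (parameter values are arbitrary examples).
y_in = lambda a, b: 1.0 / (1.0 + a / b * (np.log((b - a) / a) - 2.0))
mu = lambda beta, H_over_MPl: 2.0 * np.sqrt(2.0) / 3.0 * np.pi * beta / H_over_MPl

print(y_in(1e-3, 0.1))     # ~ 0.97 : close to 1 when beta >> alpha
print(mu(0.1, 1e-5))       # ~ 3e4  : mu >> 1 unless beta is extremely small
print(mu(1e-6, 1e-5))      # ~ 0.3  : mu < 1 once beta drops below H / M_Pl
\end{verbatim}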
If $\beta\ll H/M_\usssPl$, then $\mu\ll 1$ and $y\sim 1$ lies on the edge of the diffusion-dominated regime where the abundance of PBHs is highly suppressed in that case. Let us note however that the above description is not fully accurate since we have used the classical solution~\eqref{eq:Staro:model:phi(t)} to estimate when and where the transition between USR and the last slow-roll phase takes place. While this may be justified in the drift-dominated regime, we have found that $y_\mathrm{in} \simeq 1$ so this approximation is not fully satisfactory in the presence of quantum diffusion. In practice, different realisations may exit the USR phase at different values of $\phi_{\mathrm{USR}\to\mathrm{SR}}$, and we may not be able to map this problem to the toy model discussed in \Secs{sec:classicallimit} and~\ref{sec:stochasticlimit}. In fact, since quantum diffusion does not affect the dynamics of the momentum in USR, see \Eq{eq:eom:v:stochastic}, \Eq{eq:Staro:momentum} remains correct even in the presence of quantum diffusion, {i.e.~} $\bar{\pi}/M_\usssPl = -\alpha -(\beta-\alpha)e^{-3(N-N_*)} $, where we still assume that $H$ is almost constant. As a consequence, the transition towards the slow-roll regime occurs at the same time (rather than at the same field value) for all realisations, namely~\cite{Pattison:2019hef} \begin{equation}\begin{aligned} \mathcal{N}_{\mathrm{USR}\to\mathrm{SR}} = \frac{1}{3}\ln\left(\frac{\beta}{\alpha}-1\right)\, , \end{aligned}\end{equation} which is obtained by equating the two terms in the above expression for $\bar{\pi}/M_\usssPl$. \begin{figure} \centering \includegraphics[width=0.515\textwidth]{figs/pdf_beta_eq_0p1} \caption{Distribution function of the number of $e$-folds~realised in the Starobinsky model~\eqref{eq:Staro:potential}, between $\phi_*$ and $\phi_\mathrm{end}$, obtained from the iterative Volterra procedure detailed in \Sec{sec:volterra} and extended in \Eqs{eq:z+:z_:Starobinsky}. We have used $H/M_\usssPl=10^{-5}$ and $\beta=0.1$ while letting $\alpha$ vary, and we have set $\phi_\mathrm{end}$ such that the classical trajectory~\eqref{eq:Staro:model:phi(t)} realises $7$ $e$-folds~between $\phi_*$ and $\phi_\mathrm{end}$. The curves for $\alpha=10^{-2}$, $\alpha=10^{-3}$ and $\alpha=10^{-4}$ are so close to each other that they cannot be distinguished by eye. \label{fig:Starobinsky:potential:P(N)}} \end{figure} The problem should therefore be reconsidered, and conveniently, the methods developed in \Sec{sec:volterra} can be extended to the case of a linear potential. Indeed, one can introduce the new field variable \begin{equation}\begin{aligned} z &= \frac{\bar{\phi}+\frac{\bar{\pi}(N)}{3}+M_\usssPl\alpha(N-N_*)}{H/(2\pi)}\\ &=2\pi\frac{M_\usssPl}{H}\left[\frac{\bar{\phi}}{M_\usssPl}+\alpha\left(N-N_*-\frac{1}{3}\right)+\frac{\alpha-\beta}{3}e^{-3(N-N_*)}\right] \end{aligned}\end{equation} and, from \Eqs{eq:eom:phi:stochastic} and~\eqref{eq:eom:v:stochastic}, show that it undergoes free diffusion and obeys \Eq{eq:Langevin:z}. 
Assuming, without loss of generality, that inflation ends at $\bar{\phi}=0$, the location of the absorbing boundary, $z_-(N)$, and of the reflective boundary, $z_+(N)$, respectively correspond to $\bar{\phi}_-=0$ and $\bar{\phi}_+=\phi_*$, and are thus given by \begin{equation}\begin{aligned} \label{eq:z+:z_:Starobinsky} z_-(N) & = 2\pi\frac{M_\usssPl}{H}\left[\alpha\left(N-N_*-\frac{1}{3}\right)+\frac{\alpha-\beta}{3}e^{-3(N-N_*)}\right]\\ z_+(N) & = 2\pi\frac{M_\usssPl}{H}\left[\frac{\phi_*}{M_\usssPl}+\alpha\left(N-N_*-\frac{1}{3}\right)+\frac{\alpha-\beta}{3}e^{-3(N-N_*)}\right]\, . \end{aligned}\end{equation} By plugging these formulas into the iterative Volterra procedure outlined in \Sec{sec:volterra}, one can then extract the PDF of the number of $e$-folds~without performing any approximation. The result is displayed in \Fig{fig:Starobinsky:potential:P(N)} for $H/M_\usssPl=10^{-5}$ and $\beta=0.1$, and for a few values of $\alpha$. At leading order in quantum diffusion, one can use the classical $\delta N$ formalism to estimate $\delta N \simeq \delta \phi / (\partial\phi/\partial N) = H/(2\pi)/ (\partial\phi/\partial N)$, where $\partial\phi/\partial N$ needs to be evaluated with the classical trajectory~\eqref{eq:Staro:model:phi(t)} (and given that the noise in the momentum direction is negligible both in the USR and slow-roll phases, see the discussion in \Sec{sec:StochasticInflation}). The integrated variance in the number of $e$-folds, $\langle \delta \mathcal{N}^2\rangle = \int (\delta N)^2 \mathrm{d} N_\mathrm{cl} $, can then be worked out, and in the limit where $\beta\gg \alpha$ and $N_\mathrm{cl}\gg \mathcal{N}_{\mathrm{USR}\to\mathrm{SR}} $, one obtains \begin{equation}\begin{aligned} \label{eq:StarobinskyModel:Gaussian:appr} \mathpalette\DHLhksqrt{\langle \delta\mathcal{N}^2 \rangle_\mathrm{cl}}\simeq \frac{H}{2\pi} \frac{\mathpalette\DHLhksqrt{N_\mathrm{cl}}}{\alpha M_\usssPl}. \end{aligned}\end{equation} In this regime, one therefore recovers the same result as in slow-roll inflation, and the USR phase only introduces corrections that are suppressed by $\alpha/\beta$ and by $e^{3(\mathcal{N}_{\mathrm{USR}\to\mathrm{SR}}-N_\mathrm{cl})}$ (the full expression can readily be obtained but we do not reproduce it here since it is not particularly illuminating). The fact that $\delta \mathcal{N}$ scales as $1/\alpha$ in this limit is the reason why the distribution of $\alpha \delta \mathcal{N}$ is displayed in \Fig{fig:Starobinsky:potential:P(N)}, where the black dashed line stands for a Gaussian distribution with standard deviation given by \Eq{eq:StarobinskyModel:Gaussian:appr}. One can see that when $\alpha\geq 10^{-4}$, it provides an excellent fit to the full result, which therefore simply follows the classical Gaussian profile. In these cases, the PBH mass fraction $\beta_M$, given by \Eq{eq:def:beta}, is such that $\beta_M<10^{-100}$, where this upper bound simply corresponds to the numerical accuracy of our code (and where we use $\zeta_\mathrm{c}=1$). When $\alpha=10^{-5}$ (blue curve), the distribution deviates from the classical Gaussian profile away from the immediate neighbourhood of its maximum: it gets skewed and acquires a heavier tail. In that case we find $\beta_M\simeq 0.0204$ while the classical Gaussian profile would give $\beta_M\simeq 0.0176$, hence it would slightly underestimate the amount of PBHs. When $\alpha=10^{-6}$ (red curve), the distribution is highly non-Gaussian, and clearly features an exponential tail.
We find $\beta_M\simeq 0.231$ in that case, while the classical Gaussian profile would give $\beta_M\simeq 0.81$, hence it would overestimate the amount of PBHs. This is because, although the PDF has an exponential, heavier tail, it is also more peaked around its maximum. In those last two cases, PBHs are therefore overproduced (unless they correspond to masses light enough to Hawking evaporate before big-bang nucleosynthesis, in which case they cannot be directly constrained, see \Refa{Papanikolaou:2020qtd}). Notice that the value of $\alpha$ at which the result starts deviating from the classical Gaussian profile corresponds to $\alpha \sim H/M_\usssPl$, which is what is expected for slow-roll inflation in a potential that is linearly tilted between $\phi_{\mathrm{USR}\to\mathrm{SR}}$ and $\phi_\mathrm{end}$, see \Refa{Ezquiaga:2019ftu} [this is also consistent with \Eq{eq:StarobinskyModel:Gaussian:appr}]. As a consequence, the heavy tails obtained for small values of $\alpha$ are provided by the slow-roll phase that follows USR, and not so much by the USR phase itself. We conclude that the existence of a phase of USR inflation does not lead, per se, to substantial PBH production in the Starobinsky model. \subsection{Potential with an inflection point} \label{sec:InflectionPoint:Plateau} \begin{figure} \centering \includegraphics[width=0.515\textwidth]{figs/potential.pdf} \includegraphics[width=0.47\textwidth]{figs/SRparam.pdf} \caption{Left panel: potential function~\eqref{eq:potential:garciabellido} considered in \Sec{sec:InflectionPoint:Plateau}, which features an inflection point (denoted with the vertical grey dashed line) at small field values and a plateau at large field values. The parameters are set to $\lambda=1$, $\xi=2.3$, $\alpha=6 \lambda \phi_\mathrm{c}/(3 + \xi^2 \phi_\mathrm{c}^4) - 4.3\times 10^{-5}$ and $m^2=\lambda \phi_\mathrm{c}^2 (3 + \xi \phi_\mathrm{c}^2)/(3 + \xi^2 \phi_\mathrm{c}^4)$, where $\phi_\mathrm{c}=0.64M_\usssPl$ is the location of the inflection point. Right panel: first and second slow-roll parameters as a function of the field value. Close to the inflection point, $\epsilon_2$ drops below $-3$ and reaches values close to $-6$, which signals the onset of a USR phase (see main text). \label{fig:inflectionpoint:potential}} \end{figure} Let us now investigate a model with a smoother inflection point, such as the one proposed in \Refs{Garcia-Bellido:2017mdw,Ezquiaga:2018gbw}, where the potential function is given by \begin{equation}\begin{aligned} \label{eq:potential:garciabellido} V(\phi) = \frac{1}{12}\frac{6m^2\phi^2 - 4\alpha\phi^3+3\lambda\phi^4}{\left( 1+\xi\phi^2\right)^2} \, . \end{aligned}\end{equation} It features an inflection point at small field values, and approaches a constant at large field values, such that the scales observed in the CMB are generated while the potential has a plateau shape, in agreement with CMB observations~\cite{Akrami:2018odb}. The potential function is displayed in the left panel of \Fig{fig:inflectionpoint:potential}. In the right panel, we display the first and second slow-roll parameters obtained from solving the Klein-Gordon equation~\eqref{eq:kleingordon} numerically. Let us recall that the first slow-roll parameter was introduced below \Eq{eq:KG:efolds} and is defined as $\epsilon_1=-\dot{H}/H^2$, while the second slow-roll parameter is defined as $\epsilon_2=\mathrm{d}\ln\epsilon_1/\mathrm{d} N$.
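The background quantities shown in the right panel can be reproduced, at least qualitatively, with a short numerical integration. The sketch below (not the code used for the figures) assumes the standard e-fold form of the Klein-Gordon equation, $\phi''+(3-\epsilon_1)\left[\phi'+M_\usssPl^2 V'/V\right]=0$ with $\epsilon_1=\phi'^2/(2M_\usssPl^2)$, the potential \eqref{eq:potential:garciabellido} with the parameters quoted in the caption of \Fig{fig:inflectionpoint:potential}, and an arbitrary initial field value on the plateau:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Background evolution for the inflection-point potential, in units M_Pl = 1.
lam, xi, phic = 1.0, 2.3, 0.64
alpha = 6.0 * lam * phic / (3.0 + xi**2 * phic**4) - 4.3e-5
m2 = lam * phic**2 * (3.0 + xi * phic**2) / (3.0 + xi**2 * phic**4)

def V(phi):
    return (6.0*m2*phi**2 - 4.0*alpha*phi**3 + 3.0*lam*phi**4) \
           / (12.0 * (1.0 + xi*phi**2)**2)

def dlnV(phi, h=1e-6):                        # d ln V / d phi (finite differences)
    return (np.log(V(phi + h)) - np.log(V(phi - h))) / (2.0 * h)

def rhs(N, u):                                # u = (phi, dphi/dN)
    phi, dphi = u
    eps1 = 0.5 * dphi**2
    return [dphi, -(3.0 - eps1) * (dphi + dlnV(phi))]

end = lambda N, u: 0.5 * u[1]**2 - 1.0        # inflation ends when eps1 = 1
end.terminal, end.direction = True, 1

phi0 = 3.0                                    # illustrative starting point on the plateau
sol = solve_ivp(rhs, (0.0, 200.0), [phi0, -dlnV(phi0)], events=end,
                max_step=0.01, rtol=1e-8)

phi, dphi = sol.y
eps1 = 0.5 * dphi**2
eps2 = np.gradient(np.log(eps1), sol.t)       # eps2 = d ln(eps1) / dN
print("total number of e-folds:", sol.t[-1])
print("min eps2 = %.1f at phi = %.2f (inflection point at phi_c = %.2f)"
      % (eps2.min(), phi[np.argmin(eps2)], phic))
\end{verbatim}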
At large field values, the system first undergoes a phase of attractor slow-roll inflation, where both $\epsilon_1$ and $\epsilon_2$ are small, which guarantees that the result does not depend on our choice of initial conditions (if taken sufficiently high in the potential). When approaching the inflection point, the slow-roll parameters become sizeable, which signals that slow roll breaks down (but inflation does not stop since $\epsilon_1$ remains below one). This triggers a USR phase, where $\epsilon_1$ rapidly decays and $\epsilon_2$ reaches values close to $-6$. Past the inflection point, $\epsilon_1$ increases again and grows larger than one, at which point inflation stops. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figs/y_effective.pdf} \includegraphics[width=0.49\textwidth]{figs/mu_effective.pdf} \caption{Effective $y$ and $\mu$ parameters for the inflection-point potential~\eqref{eq:potential:garciabellido}. The orange line, labeled ``criterion 1'', corresponds to defining USR as being when $\epsilon_2<\epsilon_{2,\mathrm{c}}$. For the brown line, labeled ``criterion 2'', USR starts when $\epsilon_2<\epsilon_{2,\mathrm{c}}$ and ends when $\epsilon_1$ reaches a minimum (hence $\epsilon_2$ becomes positive). \label{fig:inflectionpoint:ymu}} \end{figure} As explained in \Sec{sec:intro}, USR corresponds to when the acceleration term in the Klein-Gordon equation~\eqref{eq:KG:efolds}, $\ddot{\phi}$, dominates over the potential gradient term, $V'$. Using the definitions of the two first slow-roll parameters, one has~\cite{Pattison:2018bct} \begin{equation}\begin{aligned} \frac{V'}{\ddot{\phi}} = \frac{6}{2\epsilon_1-\epsilon_2}-1\, . \end{aligned}\end{equation} As a consequence, USR, {i.e.~} $\vert \ddot{\phi} \vert > \vert V'\vert$, corresponds to $\epsilon_2<2\epsilon_1-3$, and maximal USR corresponds to when $\epsilon_2=2\epsilon_1-6$. This can be used to define the boundaries of the USR well, hence the effective parameters $y$ and $\mu$, which are displayed in \Fig{fig:inflectionpoint:ymu}. More precisely, one can define USR as being the regime where $\epsilon_2<\epsilon_{2,\mathrm{c}}$, where $\epsilon_{2,\mathrm{c}}$ is some critical value. This defines a certain region in the potential, the width and entry velocity of which can be computed, and \Eqs{eq:xy:variabletransforms} and~\eqref{eq:def:mu} give rise to the orange curves in \Fig{fig:inflectionpoint:ymu}. Strictly speaking, the definition of USR corresponds to taking $\epsilon_{2,\mathrm{c}}=-3$ (if one neglects $\epsilon_1$). We however study the dependence of the result on the precise value of $\epsilon_{2,\mathrm{c}}$, as a way to assess the robustness of the procedure which consists in investigating the inflection-point potential in terms of the toy model studied in \Secs{sec:StochasticInflation}-\ref{sec:volterra}. This procedure indeed implies a sharp transition towards USR, hence little dependence on the precise choice of the critical value $\epsilon_{2,\mathrm{c}}$. For that same reason, in \Fig{fig:inflectionpoint:ymu}, we also display the result obtained by defining the USR phase in a slightly different way, namely USR starts when $\epsilon_2$ drops below $\epsilon_{2,\mathrm{c}}$ and ends when $\epsilon_1$ reaches a minimum (hence when $\epsilon_2$ becomes positive). This is displayed with the brown line. 
By definition, the two criteria match for $\epsilon_{2,\mathrm{c}}=0$, but one can see that they give very similar results even for smaller values of $\epsilon_{2,\mathrm{c}}$, and except if $\epsilon_{2,\mathrm{c}}$ is very close to $-6$. This is a consequence of the abruptness with which USR ends. One can also see that the value of $y$ is always of order one, regardless of the value of $\epsilon_{2,\mathrm{c}}$ and of the criterion being used (except if $\epsilon_{2,\mathrm{c}}$ is very close to $-6$ and if one uses the first criterion). The reason is similar to the one advocated for the Starobinsky model in \Sec{sec:StarobinskyModel}: since the width of the USR well is defined as being where the classical trajectory relaxes to slow roll, by definition, the initial velocity of the inflaton is precisely the one such that the field reaches that point in finite time. The value of the $\mu$ parameter is always larger than one and depends more substantially on $\epsilon_{2,\mathrm{c}}$, although it is never larger than $\mu\sim 50$. One may therefore expect the model to lie on the edge of the drift-dominated regime studied in \Sec{sec:classicallimit}, where the bulk of the PDF is quasi-Gaussian. However, given that the result produced by the toy model is very sensitive to the precise value of $\mu$, and that this value depends on the choice for $\epsilon_{2,\mathrm{c}}$, one may question the applicability of the toy model in this case. Moreover, as explained in \Sec{sec:StarobinskyModel}, in a stochastic picture, USR does not end at a fixed field value, but $\epsilon_2(\bar{\phi},\bar{\pi})=\epsilon_{2,\mathrm{c}}$ rather selects a certain hypersurface in phase space. The fact that we found USR ends abruptly in this model suggests that this may not be so problematic (namely that this hypersurface may have almost constant $\bar{\phi}$), but it is clear that our toy-model description can only provide qualitative results. A full numerical analysis seems therefore to be required to draw definite conclusions about this model. \section{Conclusions} \label{sec:conclusions} Single-field models of inflation able to enhance small-scale perturbations (e.g., leading to primordial black holes) usually feature deviations from slow roll towards the end of inflation, when stochastic effects may also play an important role. This is because, for large fluctuations to be produced, the potential needs to become very flat, such that the potential gradient may become smaller than the velocity inherited by the inflaton from the preceding slow-roll evolution, and possibly smaller than the quantum diffusion that vacuum fluctuations source as they cross out the Hubble radius. In order to consistently describe the last stages of inflation in such models, one thus needs to formulate stochastic inflation, which allows one to describe the backreaction of quantum vacuum fluctuations on the large-scale dynamics, without assuming slow roll. More specifically, in the limit where the inherited velocity is much larger than the drift induced by the potential gradient, one enters the regime of ultra-slow roll. This is why in this work, we have studied the stochastic formalism of inflation in the ultra-slow-roll limit. As a specific example, we have considered a toy model where the potential is exactly flat over a finite region (which we dub the USR well), between two domains where the dynamics of the inflaton is assumed to be dominated by the classical slow-roll equations. 
In this context, we have studied the distribution function of the first-exit time out of the well, which is identified with the primordial curvature perturbation via the $\delta N$ formalism. Contrary to the more usual, slow-roll case, the characteristic function of that distribution obeys a partial (instead of ordinary) differential equation, rendering the analysis technically more challenging. At the numerical level, we have presented results from simulations over a very large number of realisations of the Langevin equation. Although this method is direct, it is numerically expensive, and leaves large statistical errors on the tails of the distribution, where the statistics are sparse. This is why we have also shown how the problem can be addressed by solving Volterra integral equations, and designed an iterative scheme that provides a satisfactory level of numerical convergence as long as the inherited velocity of the field is not too small. At the analytical level, we have studied the two limits where the inherited velocity is respectively much smaller and much larger than the required velocity to cross the well without quantum diffusion. In the small-velocity limit, at leading order we have recovered the slow-roll results obtained in \Refa{Pattison:2017mbe}. At higher orders, we have found that while the exponential decay rate of the first exit time distribution is not altered by the inherited velocity, its overall amplitude is diminished, such that the predicted abundance of primordial black holes is suppressed. In the large-velocity limit, the classical result is recovered at leading order, and we have presented a systematic expansion to compute corrections to the moments of the distribution function at higher order, that lead to results that are consistent with those previously found in \Refa{Firouzjahi:2018vet}. The tails of the distributions remain difficult to characterise analytically in this regime, as is the case for the classical limit in slow roll. However, the new approach we have proposed based on solving Volterra equations provides an efficient way to probe those tails numerically, which is otherwise very difficult to do with the numerical techniques commonly employed. We have then argued that this toy model can be used to approximate more realistic models featuring phases of ultra-slow roll. The idea is to identify the region in the potential where ultra-slow roll occurs, and approximate this region as being exactly flat ({i.e.~} neglecting the potential gradient). In practice, if one measures (i) the field width of that region, (ii) its potential height and (iii) the velocity of the inflaton when entering the flat region, one can use the results obtained with the toy model, that only depend on those three parameters, to get approximate predictions. This is similar to the approach employed (and tested) in \Refs{Pattison:2017mbe, Ezquiaga:2019ftu} for slow-roll potentials featuring a region dominated by quantum diffusion, where only the two first parameters are relevant. Let us however highlight a fundamental difference with respect to the slow-roll case. In slow-roll potentials, it was found in \Refs{Vennin:2015hra, Pattison:2017mbe} that regions of the potential where stochastic effects dominate can be identified with the criterion $ v^2 \vert v'' \vert /(v')^2 \gg 1$ where $v=V/(24\pi^2M_\usssPl^4)$. For a given potential function $v(\phi)$, this leads to well-defined regions in the potential, with well-defined field widths. 
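This classicality criterion is straightforward to evaluate numerically for any candidate potential. The sketch below (Python, working in reduced Planck units; the potential samples are placeholders) simply computes $v^2\vert v''\vert/(v')^2$ with $v=V/(24\pi^2M_\usssPl^4)$ and finite-difference derivatives; values much larger than one flag the diffusion-dominated regions.
\begin{verbatim}
import numpy as np

M_PL = 1.0  # reduced Planck mass; we work in Planck units (assumption)

def diffusion_domination(phi, V):
    """Return v^2 |v''| / v'^2 with v = V/(24 pi^2 M_PL^4).

    phi, V : arrays sampling the potential; derivatives are taken
    numerically, so the grid should resolve the feature of interest.
    """
    v = V / (24.0 * np.pi**2 * M_PL**4)
    vp = np.gradient(v, phi)
    vpp = np.gradient(vp, phi)
    return v**2 * np.abs(vpp) / np.maximum(vp**2, 1e-300)
\end{verbatim}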
In the ultra-slow roll case, the use of the toy model does not require that quantum diffusion dominates, but rather that the classical drift be much smaller than the inherited velocity. Since the velocity decays in time at a fixed rate, the transition from ultra-slow roll back to slow roll happens at a field value that a priori depends on time. Namely, at early time, the field velocity is still large and one needs a larger potential gradient to overtake it, while at larger times, once the field velocity has decayed, a smaller potential gradient is enough. In models where there is a sudden increase in the potential slope at a given field value, the transition always occurs close to that field value, but otherwise one may have to refine the present approach, which we plan to address in future work. \acknowledgments We thank Jose Mar\'ia Ezquiaga, Hassan Firouzjahi, Juan Garc\'ia-Bellido, Amin Nassiri-Rad, Mahdiyar Noorbala, Ed Copeland, Cristiano Germani, and Diego Cruces for helpful discussions. HA and DW acknowledge support from the UK Science and Technology Facilities Council grants ST/S000550/1. Numerical computations were done on the Sciama High Performance Compute (HPC) cluster which is supported by the ICG, SEPNet and the University of Portsmouth.
\section{Introduction} Emerging advanced accelerator concepts require precise control over the longitudinal phase space (LPS) of charged particle beams. Efficient beam-driven acceleration, for example, relies on longitudinally-tailored electron bunch profiles which can be produced with an appropriate energy modulation and dispersive section \cite{englandPRL, piotPRL, lemeryPRAB, andonianPRAB}. Phase-space linearization for bunch compression is especially important to optimize the performance of multistage linacs and X-ray free electron lasers (XFELs) \cite{paoloLinear, emmaLinear, antipovLinear}. There are several ways to control the LPS. Energy modulation approaches via self-wakes in e.g. dielectric or corrugated structures provide attractive and simple methods to produce microbunch trains and large peak currents \cite{antipovTHz, lemeryPRAB, lemeryPRL}. Laser-based energy modulation techniques using magnetic chicanes are particularly useful for FEL seeding \cite{hghg, eehg1, eehg2, eehgDemo1, eehgDemo2} and for beam acceleration \cite{sudar1}. Arbitrary laser-based phase space control was discussed in \cite{Hemsing:2013vv}, illustrating the potential for producing different current profiles for various applications. Unfortunately however, although the scheme works well in simulation, the approach is complex to implement, requiring several undulators and magnetic chicanes in addition to the modulating laser. In this paper we explore arbitrary waveform synthesis using self-wakes produced in dielectric-lined waveguides (DLW). By using segmented waveguides with varying cross sections, the excited wakefields carry different spectral contents throughout the structure, enabling control over the energy modulation across the bunch. The dependence of the modal content on the DLW geometry allows for enough degrees of freedom to optimize such a segmented structure according to the desired output LPS. Due to the nature of the physical process, the scheme is completely passive, removing the need for synchronization with e.g. a modulating laser beam. In the following, the device is referred to as a longitudinal phase space shaper (LPSS). The paper is structured as follows: Section~\ref{sec:DLW_Wakefields} provides an overview on 1D wakefield theory, Section~\ref{sec:Fourier} discusses Fourier synthesis for single-mode structures, Section~\ref{sec:Multimode_Example} provides examples for multimode structure optimizations using computational optimization, Section~\ref{sec:RealisticStructures} discusses the impact of manufacturing limitations of modern 3D printers by investigating the effect of printing resolution and segment transitions on the excited wakefields. Finally, Section~\ref{sec:robustness} discusses the effect of slight variations in the shape of the input LPS on a figure of merit of an output LPS, based on an example optimization case. \section{Wakefield generation in a DLW} \label{sec:DLW_Wakefields} The theory of Cherenkov wakefield generation in cylindrically-symmetric DLWs is well described in \cite{vossWeiland, ROSING:1990gj, ng}. Here we follow \cite{ROSING:1990gj}, for a structure with inner radius $r=a$, outer radius $r=b$ and dielectric permittivity $\epsilon_r$. The outer surface is assumed to be coated with a perfect conductor. See Fig.~\ref{fig:03_DLW_Schematic} for a schematic. A more rigorous theoretical investigation could include conductive and dielectric losses in DLWs \cite{dlwMikhail, klausDLW}. 
\begin{figure}[hbp] \centering \includegraphics[width=0.45\columnwidth]{FIG_1.pdf} \caption{Schematic of a cylindrical dielectric-lined waveguide. The lining with dielectric constant $\epsilon_r$ has an inner radius $a$ and an outer radius $b$. It is coated with a thin metallic layer on the outside.} \label{fig:03_DLW_Schematic} \end{figure} In the ultrarelativistic limit, a point charge travelling on-axis ($r=0$) will excite a wakefield with a corresponding Green's function with $M$ modes \cite{Chao:1993pc, Stupakov:2000sp}, \begin{equation} G(t)=\sum_{m=1}^M \kappa _m \cdot \cos(2 \pi f_m t), \end{equation} where $\kappa_m$ and $f_m$ are the loss factor and frequency of the $m$th mode, respectively, and are calculated numerically \cite{ROSING:1990gj, piotGitWake}. This Green's function is often also referred to as the single particle wake potential $W_z(\tau)\,[V/C]$, where $\tau$ denotes the time difference between the point charge and a trailing witness charge. Note that it is defined by the boundary conditions and hence - in our case - the geometry of the DLW. By varying e.g. the inner radius $a$ of a DLW, it can be seen that both wavelength and amplitude depend strongly on the geometry of the structure (see Fig.~\ref{fig:DLW_Modes_Geometry}). Considering that the amplitude of the longitudinal wakefield scales as $1/a^2$ \cite{ROSING:1990gj}, it becomes apparent that potentially very high field strengths can be reached in small aperture DLWs. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{FIG_2.pdf} \caption{Plot of the numerically calculated wavelength and amplitude of a wakefield excited by an on-axis electron bunch in a single-mode DLW. The different colors correspond to different thicknesses of the dielectric lining.} \label{fig:DLW_Modes_Geometry} \end{figure} The overall wake potential $V(t)$ produced by a bunch can be calculated by convolving its current profile $I(t)$ with $W_z(\tau)$. Therefore \begin{equation} V(t) = -\int_{-\infty}^t I(\tau) W_z(t-\tau)d\tau. \label{eq:03_half_convo} \end{equation} The field excitation can also be described in terms of the frequency dependent bunch form factor $F$. Then \begin{equation} V(t)=q \cdot \sum_{m=1}^M F_m\kappa _m \cdot \cos(2 \pi f_m t), \end{equation} where $q$ is the total charge of the bunch. A strong mode excitation therefore requires a bunch with an appropriate spectral content, i.e. a relatively short bunch, or one with a relatively short rise time in, e.g., a flat-top distribution \cite{pitzLaser,lemeryPRL}. In a cascaded arrangement of multiple DLWs, outside of experimental constraints due to e.g. limitations in beam transport, the energy modulations via wakefields from different structures can be concatenated. The following section illustrates the broad potential for a set of cascaded structures, or a single segmented structure, to produce a versatile range of energy modulations. We note that the use of segmented structures, and the resulting effects of transient wakes, is discussed in Section~\ref{sec:RealisticStructures}. \section{LPS Shaping in Single-Mode Structures}\label{sec:Fourier} Fourier synthesis provides a simple way to produce a large variety of waveforms which have various applications in conventional electronics. Here we explore how Fourier synthesis can be applied to charged particle beams using self-wakes imparted in high-impedance media, e.g. DLWs. We are specifically interested in the Fourier series for odd functions, since the wakefield at the head of the bunch must be zero.
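For reference, the convolution in Eq.~\ref{eq:03_half_convo} with a multi-mode single-particle wake is easy to evaluate on a grid. The following minimal sketch (Python; the loss factors, mode frequency and Gaussian current profile are placeholder numbers, not values computed for a specific structure) illustrates the procedure.
\begin{verbatim}
import numpy as np

def wake_potential(t, current, kappas, freqs):
    """V(t) = -int_{-inf}^{t} I(tau) W_z(t - tau) dtau with
    W_z(tau) = sum_m kappa_m cos(2 pi f_m tau); t must be equally spaced."""
    dt = t[1] - t[0]
    tau = t - t[0]
    Wz = sum(k * np.cos(2.0 * np.pi * f * tau) for k, f in zip(kappas, freqs))
    # keeping only the first len(t) points of the full convolution
    # enforces causality (the wake trails the drive charge)
    return -np.convolve(current, Wz)[: len(t)] * dt

# illustrative single-mode example with a Gaussian current profile
t = np.linspace(0.0, 10e-12, 4000)                  # time grid [s]
I = np.exp(-0.5 * ((t - 3e-12) / 0.5e-12) ** 2)     # arbitrary units
V = wake_potential(t, I, kappas=[1.0], freqs=[0.3e12])
\end{verbatim}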
In the simplest case, each of the individual segments of an LPSS is a single-mode structure with a specific fundamental mode frequency and amplitude. As discussed above, the wake function $W_z(\tau)$ for such a structure is simply given by \begin{equation} W_{z,m}(\tau)=\kappa _m \cdot \cos(2\pi f_m \tau). \end{equation} Using this and Eq.~\ref{eq:03_half_convo}, the energy modulation imparted by a single DLW segment $n$ can be estimated as (cf. \cite{Lemery:2014prstab}) \begin{equation} \begin{split} \Delta E_{n}(t) = &- l_n\cdot \kappa _{m(n)}\\&\cdot \int_{-\infty}^t I(\tau) \cos(2\pi f_{m(n)} (t-\tau))d\tau, \end{split} \end{equation} where $l_n$ is the length of the $n$th DLW segment. The total energy modulation imparted by an $N$-segment structure can hence be estimated as \begin{equation} \Delta E_\text{tot}(t) = \sum_{n=0}^N \Delta E_{n}(t) \end{equation} (see Section~\ref{sec:RealisticStructures} for a discussion on the effects of sharp segment transitions on the resulting wakefields). Assuming an idealized flat-top current profile $I(\tau)$, the total energy modulation reduces to \begin{equation} \Delta E_\text{tot}(t) = \sum_{n=0}^N A _n\cdot \sin(2\pi f_{m(n)} t), \label{eq:idealdeltaEtot} \end{equation} where $A_n$ is the amplitude factor of the $n$th segment. Considering the scaling laws shown in Fig.~\ref{fig:DLW_Modes_Geometry}, arbitrary LPS shapes can be constructed via Fourier composition. The amplitude $A_n$ of each frequency component can be adjusted by choosing an appropriate $l_n$. It should be noted that the harmonic content of the input current profile must be sufficient to excite the desired modes. Eq.~\ref{eq:idealdeltaEtot} essentially corresponds to an ordinary Fourier sine series. A saw-tooth wave, for example, can be constructed by summing up only even harmonics with proper normalization. Hence, the Fourier series for a given fundamental frequency $f_0$ is given by \begin{equation} F_\text{saw}(t) = A\cdot \sum_{n=0}^\infty \frac{1}{2n + 2} \sin(\pi (2n+2)f_0 \cdot t), \label{eq:sawtoothdefinition} \end{equation} where $A$ is an amplitude scaling factor. Another simple example is a square wave. Its Fourier series only contains odd harmonics. Thus \begin{equation} F_\text{squ}(t) = A\cdot \sum_{n=0}^\infty \frac{1}{2n + 1} \sin(2\pi (2n+1)f_0 \cdot t). \label{eq:squarewavedefinition} \end{equation} Fig.~\ref{fig:modulationtypes} visualizes the two modulation types for different values of $N$. \begin{figure}[htbp] \centering \includegraphics[width=0.9\columnwidth]{FIG_3.pdf} \caption{Plot of amplitude vs. long. coordinate for an arbitrary saw-tooth modulation (Eq.~\ref{eq:sawtoothdefinition}) and an arbitrary square wave modulation (Eq.~\ref{eq:squarewavedefinition}) for $N=1$, $N=3$ and $N=10$.} \label{fig:modulationtypes} \end{figure} In order to explore possible use cases of such energy modulations, we investigated the effect of applying linear longitudinal dispersion ($R_\text{56}$) to the phase space. In this work we adopt the convention that the head of the bunch is at $z<0$ and $R_{56} < 0$. Fig.~\ref{fig:singleModeGallery} shows contour plots of both the beam current within a single fundamental modulation wavelength $\lambda _0 = \SI{1}{\milli\meter}$ and the harmonic content of the bunch vs. the longitudinal dispersion $R_\text{56}$ for different values of $N$. The idealized input current is assumed to be flat-top. We also assume a cold beam in order to be able to explore the full mathematical limits of the scheme.
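A minimal numerical sketch of this procedure is given below (Python). It builds the truncated series of Eqs.~\ref{eq:sawtoothdefinition} and~\ref{eq:squarewavedefinition} as functions of the longitudinal coordinate, imprints the modulation on a cold flat-top beam, and applies the usual linear map $z\rightarrow z+R_{56}\,\delta$ with $\delta=\Delta E/E_0$; the chosen values of $R_{56}$ and of the modulation depth are purely illustrative.
\begin{verbatim}
import numpy as np

def sawtooth_mod(z, f0, N, A=1.0):
    # truncated saw-tooth series: even harmonics only
    return A * sum(np.sin(np.pi * (2*n + 2) * f0 * z) / (2*n + 2)
                   for n in range(N))

def square_mod(z, f0, N, A=1.0):
    # truncated square-wave series: odd harmonics only
    return A * sum(np.sin(2*np.pi * (2*n + 1) * f0 * z) / (2*n + 1)
                   for n in range(N))

# cold flat-top beam, one fundamental wavelength long (illustrative numbers)
lam0, E0, dE = 1e-3, 100e6, 500e3               # [m], [eV], [eV]
z = np.random.uniform(-lam0/2, lam0/2, 200_000)
delta = dE / E0 * square_mod(z, 1.0/lam0, N=11)

R56 = -32e-3                                     # [m]; sign convention of the text
z_out = z + R56 * delta                          # linear longitudinal dispersion
current, edges = np.histogram(z_out, bins=400)   # compressed current profile
\end{verbatim}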
The investigation is carried out for both a saw-tooth modulation (cf. Eq.~\ref{eq:sawtoothdefinition}), as well as for a square wave modulation (cf. Eq.~\ref{eq:squarewavedefinition}). It can be seen that, as longitudinal dispersion is applied, interesting features emerge. In the case of the saw-tooth modulation, the higher-frequency modulation on the rising part of the saw-tooth (see Fig.~\ref{fig:modulationtypes}) is compressed first. Then, as $R_\text{56}$ increases, the minimum and maximum of the saw-tooth converge, which results in a current spike. The current spike becomes more defined as $N$ increases, which can be attributed to a less pronounced Gibbs ringing at the sharp edges of the saw-tooth, as well as an overall flattening for higher values of $N$. This behaviour is also represented by the ellipsoidal shape visible in the contour plots of the beam current vs. $\text{d}z$ and $R_{56}$, which becomes narrower as $N$ increases (see Fig.~\ref{fig:singleModeGallery}). It is interesting to note that as the amplitude of the high frequency modulation along the rising part of the saw-tooth varies, different parts of the rising edge require different values of $R_\text{56}$ for optimal compression. This is clearly visible in the contour plots. For symmetry reasons, two sub-microbunches always emerge. By adjusting $R_\text{56}$, a specific pair of microbunches with a defined relative distance can be selected. It has to be noted, however, that - depending on the modulation depth - these sub-structures require very low slice energy spread to be significant vs. the background. If the respective bunching factor $b_n$ should not be reduced by more than roughly a factor of 2, then $\delta_\text{mod}/\delta_\text{sl} \leq n$ has to be satisfied, where $n$ is the harmonic number of $f_0$ and $\delta_\text{mod}$ and $\delta_\text{sl}$ are the relative modulation depth and slice energy spread respectively; cf. \cite{Hemsing:2014ju}. In the case of the square wave modulation, the plots show a different behaviour. As $R_{56}$ increases, first a single current spike is formed, which corresponds to the sharp edge of the energy modulation. As the edge becomes sharper (higher $N$), optimal bunching occurs for smaller values of $R_{56}$. Increasing $R_{56}$ beyond optimal bunching reveals a very particular rhombus pattern in the contour plot, which is explained by the fact that the negative and positive plateaus of the square wave are shifted on top of each other. The higher the value of $N$, the more intricate the rhombus pattern becomes. It is interesting to note that - by applying appropriately high $R_{56}$ - the two plateaus of the square wave modulation will form two sub-microbunch trains at their own distinct energy levels ($E_0 \pm \Delta E$). \begin{figure*}[htbp] \centering \includegraphics[width=0.94\textwidth]{FIG_4.png} \caption{Contour plots of both the beam current within a single fundamental modulation wavelength $\lambda _0 = \SI{1}{\milli\meter}$, as well as the harmonic content of the bunch vs. the longitudinal dispersion $R_\text{56}$. The scan was performed for $N \in [1,3,5,7,9,11]$. Both a saw-tooth modulation according to Eq.~\ref{eq:sawtoothdefinition}, as well as a square wave modulation according to Eq.~\ref{eq:squarewavedefinition} are shown. The idealized input current is assumed to be of flat-top shape and the initial energy $E_0 = \SI{100}{\mega e\volt}$ is constant along the bunch. It has a total bunch length of \SI{1}{\milli\meter} and $Q = \SI{42}{\pico\coulomb}$.
The assumed maximum modulation depth of the lowest frequency component is \SI{500}{\kilo e\volt}. Note that a high slice energy spread would lead to blurring out the small features in the respective phase spaces. Here we assume a cold beam in order to explore the full mathematical potential of the scheme.} \label{fig:singleModeGallery} \end{figure*} The saw-tooth and square wave modulation are only two examples of possible Fourier series based LPS modulations. Many other interesting waveforms might exist, which are not discussed here. In order to show how drastic even small changes to a particular Fourier series definition can be, one can consider squaring the normalization factor in Eq.~\ref{eq:squarewavedefinition}. This yields, instead of a sharp square wave, a smooth rounded wave. The definition now reads \begin{equation} F_\text{rnd}(t) = A\cdot \sum_{n=0}^\infty \frac{1}{(2n + 1)^2} \sin(2\pi (2n+1)f_0 \cdot t). \label{eq:roundwavedefinition} \end{equation} Fig.~\ref{fig:singleModeRndWaveExample} shows both the shape of an $N = 11$ round wave modulation, as well as contour plots analogous to Fig.~\ref{fig:singleModeGallery}. It can be seen that applying $R_{56}$ to this kind of modulation at first glance leads to a dependence similar to a simple sine wave modulation. The main difference, however, is that the beam current of the sub-microbunches, which occur after over-bunching, shows multiple additional maxima of similar magnitude compared to the initial single microbunch. In the case of a simple sine modulation the peak current would decrease rapidly. As the number of additional maxima increases with $N$, this means that using a high-$N$ round wave modulation, one can obtain high-quality sub-microbunches with semi-continuously adjustable relative spacing. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{FIG_5.png} \caption{\textbf{Top:} Plot of an $N=11$ round wave modulation according to Eq.~\ref{eq:roundwavedefinition}. \textbf{Bottom:} Contour plots of both the beam current within a single fundamental modulation wavelength $\lambda _0 = \SI{1}{\milli\meter}$, as well as the harmonic content of the bunch vs. the longitudinal dispersion $R_\text{56}$. The scan was performed for $N = 11$. The idealized input current and modulation depth is assumed to be the same as described in Fig.~\ref{fig:singleModeGallery}.} \label{fig:singleModeRndWaveExample} \end{figure} \section{Arbitrary Multimode Optimization} \label{sec:Multimode_Example} So far we have investigated Fourier shaping of an idealized input LPS. In order to work with arbitrary input distributions, a more sophisticated optimization routine must be used. This is especially true if multi-mode DLW segments are to be included, as the number of degrees of freedom gets too large for manual optimization. Hence a routine based for example on the particle swarm algorithm (PSO) must be employed \cite{Kennedy:2004wp}. The algorithm varies all geometric parameters of the individual segments at the same time in order to find a global minimum of a given merit function. This merit function is given in the LPSS case by the similarity of the resulting LPS to the desired LPS shape. Since segment radius, length and wall thickness can be varied, the resulting number of independent variables is $3N$, where $N$ is the number of segments. For the LPSS study presented here, the PSO was implemented using PyOpt \footnote{\url{http://www.pyopt.org}, last access: 23rd of April 2019.}. 
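The essence of such a routine can be sketched in a few lines (Python). In the sketch below, \texttt{dlw\_modes} is a placeholder for the numerical mode solver that provides the loss factors and frequencies of a segment (the crude single-mode scaling inside it is purely illustrative), SciPy's differential evolution stands in for the PSO/PyOpt setup used here, and the merit function is one minus the Pearson $R$ of the output LPS inside the region of interest.
\begin{verbatim}
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import pearsonr

def dlw_modes(a, b):
    """Placeholder mode solver: returns ([kappa_m], [f_m]) for a segment
    with inner radius a and outer radius b.  The 1/a^2 amplitude scaling
    follows the text; the frequency scaling is purely illustrative."""
    return [1e14 / a**2], [3e11 * (1e-3 / a)]

def total_modulation(params, t, current):
    """Semi-analytical Delta E of a segmented LPSS,
    params = [a_1, d_1, l_1, ..., a_N, d_N, l_N]."""
    dt, tau = t[1] - t[0], t - t[0]
    dE = np.zeros_like(t)
    for a, d, l in np.reshape(params, (-1, 3)):
        kappas, freqs = dlw_modes(a, a + d)
        Wz = sum(k * np.cos(2*np.pi*f*tau) for k, f in zip(kappas, freqs))
        dE -= l * np.convolve(current, Wz)[: len(t)] * dt
    return dE

def merit(params, t, E, current, roi):
    # 1 - |Pearson R| of the output LPS inside the region of interest
    E_out = E + total_modulation(params, t, current)
    r, _ = pearsonr(t[roi], E_out[roi])
    return 1.0 - abs(r)

# per-segment bounds follow the optimization variable ranges given below
bounds = [(0.1e-3, 2.5e-3), (50e-6, 1000e-6), (1e-3, 100e-3)] * 10
# result = differential_evolution(merit, bounds, args=(t, E, current, roi))
\end{verbatim}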
At each iteration step either a simulation using a specifically generated input file for a numerical simulation code, or a semi-analytical calculation based on Eq.~\ref{eq:03_half_convo} is carried out. If space-charge effects are neglected, the difference between the numerical simulation using ASTRA \cite{ASTRAASpaceChar:LTSRiAsm} and the semi-analytical approach was found to be negligible. Hence, the much faster semi-analytical calculation was used for the simulations shown in the following discussion. \begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{FIG_6.pdf} \caption{Schematic of the layout of the ARES linac at DESY (not to scale). The LPSS interaction is simulated to take place in the experimental chamber of \emph{Experimental Area 1} at $z=\SI{16.8}{\meter}$.} \label{fig:ARESLinac} \end{figure*} As an example optimization goal the linearization of an incoming LPS obtained from close to on-crest acceleration was chosen. This scenario is interesting, because the resulting LPS shows a clear signature of the sinusoidal RF field of the linac structures, which would limit the achievable bunch length in subsequent compression. In order to keep the number of free parameters manageable, the number of LPSS segments was limited to 10. The optimizer was configured to bring the Pearson's $R$ value of centered $\tilde n\sigma_z$ regions within the final distribution as close to 1 as possible. Here $\tilde n \in \mathbb{N}$. Table~\ref{tab:LPSS_Opt_Settings} summarizes the possible ranges of values for the geometry parameters of the 10 individual segments. \begin{table}[!htb] \centering \caption{LPSS optimization variable ranges for each of the 10 segments.} \begin{ruledtabular} \begin{tabular}{lc} \textbf{Parameter} & \textbf{Value} \\ \hline Inner radius & $[0.1, 2.5]\,\SI{}{\milli\meter}$\\ Dielectric thickness & $[50, 1000]\,\SI{}{\micro\meter}$\\ Segment length & $[1, 100]\,\SI{}{\milli\meter}$\\ \end{tabular} \end{ruledtabular} \label{tab:LPSS_Opt_Settings} \end{table} For the input we consider three different electron bunch distributions with \SI{10}{\pico\coulomb}, \SI{100}{\pico\coulomb} and \SI{200}{\pico\coulomb} of total charge and a mean energy of $\sim \SI{100}{\mega e\volt}$, based on numerical simulations of the ARES linac at DESY \cite{Marchetti:2020EAAC}. This is done in order to provide a realistic example, which could be used as the basis for future experimental verification of the scheme. Fig.~\ref{fig:ARESLinac} shows a schematic of the ARES lattice. If both linac structures are driven at their respective maximum gradients of $\sim\SI{25}{\mega\volt /\meter}$, a final mean energy up to $\sim \SI{150}{\mega e\volt}$ is possible. The decision to limit the example working points to $\sim \SI{100}{\mega e\volt}$ is a practical one, as the overall LPSS structure length generally increases with the required LPS modulation strength and the experimental chamber at ARES imposes strict space limitations. The three working points were optimized to minimize transverse emittance at the interaction point ($z=\SI{16.8}{\meter}$) for three different charges using ASTRA, including space charge effects. Table~\ref{tab:ARES_WPs} summarizes the respective beam parameters. \begin{table}[!htbp] \centering \caption{Beam parameters of the three ARES linac working points (WP) at the interaction point ($z=\SI{16.8}{\meter}$), obtained from ASTRA simulations. Initial spatial and temporal profile: Gaussian. 
TWS: Travelling Wave Structure.} \begin{ruledtabular} \begin{tabular}{lccc} \textbf{Parameter} & \textbf{WP1} & \textbf{WP2} & \textbf{WP3} \\ \hline Charge & \SI{10}{\pico\coulomb} & \SI{100}{\pico\coulomb} & \SI{200}{\pico\coulomb}\\ TWS injection phase & \SI{-3}{\degree} & \SI{-5}{\degree} & \SI{-8}{\degree}\\ $E_0$ & \SI{108}{\mega e\volt} & \SI{109}{\mega e\volt} & \SI{108}{\mega e\volt}\\ $\sigma _E / E_0$ & $2.8 \cdot 10^{-4}$ & $4.1 \cdot 10^{-3}$ & $5.3 \cdot 10^{-3}$\\ $\sigma _t$ & \SI{673}{\femto\second} & \SI{1.95}{\pico\second} & \SI{2.65}{\pico\second}\\ $\varepsilon _{\text{n},x,y}$ & \SI{146}{\nano\meter} & \SI{370}{\nano\meter} & \SI{465}{\nano\meter}\\ \end{tabular} \end{ruledtabular} \label{tab:ARES_WPs} \end{table} \newpage In order to first investigate the effect of limiting the optimization goal to specific $\tilde n\sigma_z$ regions within the LPS on the resulting LPSS geometry, four different optimization runs were performed. As input, WP3 was chosen (see Table~\ref{tab:ARES_WPs}). Fig.~\ref{fig:LPSS_Sigma_ROI_Comp} shows the results. Starting from an overall linearity of the input LPS of $R=0.9568$, it can be seen that in all cases the use of the LPSS improved the linearity significantly. The smaller the region of interest (ROI) within the LPS, the better the results, reaching up to $R=0.999998$ in the case of $\tilde n = 1$. It is apparent that if the whole LPS is taken into account (i.e. a $6\,\sigma$ ROI), the results are noticeably worse than for a restricted ROI. This can be attributed to the fact that, due to the Gaussian time profile of the input LPS, the beam current in the head region of the bunch is low and hence the strength of the excited wakefield is weak. Thus, it is difficult for the optimizer to find configurations where this region is linearized sufficiently well, subsequently spoiling the overall linearity of the LPS. Excluding this head region of the LPS from the optimization, on the other hand, improves the performance significantly. In an experiment at ARES for example, the region outside of the ROI could be cut using the slit collimator implemented in the magnetic compressor (see below). \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{FIG_7.pdf} \caption{\textbf{Left}: Comparison of LPSS linearization results, depending on the size of a defined region of interest (ROI) within the bunch. The solid part of the lines corresponds to the respective ROI. \textbf{Right:} Total LPSS structure length, minimal segment aperture radius within the structure and linearity of the output LPS within the respective ROI, depending on the ROI size.} \label{fig:LPSS_Sigma_ROI_Comp} \end{figure} In addition to the degree of linearity in the respective $\tilde n\sigma_z$ region, Fig.~\ref{fig:LPSS_Sigma_ROI_Comp} also shows that two important geometry parameters depend on the ROI as well. First, the overall structure length decreases with the ROI. This is of practical importance, not only in terms of beam transport through the structure, but also in terms of manufacturing. Second, the minimum aperture radius of the structure increases with a decreasing ROI, which is important from a beam transport point of view and in accordance with the time profile of the bunch and the dependence of the wake field strength per charge as shown in Fig.~\ref{fig:DLW_Modes_Geometry}. 
Taking these results into account, it is clearly worth considering trading - in case of a Gaussian time profile - less than \SI{5}{\percent} of the total bunch charge for the much better linearization performance of a $4\,\sigma$ ROI. Based on the results discussed above, optimization runs were performed for all of the three ARES working points, considering both a full $6\,\sigma$ and a smaller $2\,\sigma$ linearization ROI. Fig.~\ref{fig:semiAna_Results_ARES_WPs} shows the detailed results. It can be seen that in all cases a significant improvement of $R$ can be achieved within the ROI. Better results are obtained in the case of the limited ROI, as expected. Furthermore, the geometries of the resulting LPSS structures are shown. The shorter in time the input LPS, the shorter the resulting LPSS. This is partly due to the smaller required modulation depth, but also due to how the wakefield amplitude scales with the required inner radii of the segments ($E_z \propto 1/a^2$; see Fig.~\ref{fig:DLW_Modes_Geometry}). In order to accommodate a typical focused beam envelope, the individual segments of the LPSS structures are sorted such that the tightest segment is placed at the center of the structure, which then has increasing inner radii towards both entrance and exit. The results show that a similar degree of linearization can be achieved, regardless of the different bunch lengths across the different working points. The shape of the respective resulting structure does vary significantly however, due to the required modal content. \begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{FIG_8.png} \caption{LPSS optimization results for input LPSs based on the ARES working points shown in Table~\ref{tab:ARES_WPs}. The optimization goal was to achieve $R=1$ across the full $6\,\sigma$ ROI (\textbf{top row}), as well as a centered $2\,\sigma$ ROI (\textbf{bottom row}). From left to right: WP1, WP2, WP3. Each plot shows the LPS before and after the LPSS interaction. The color and thickness visualize the current profile. The gray shaded areas correspond to the 2,4 and 6\,$\sigma$ regions respectively. The head of the bunch is on the left (negative $z$ values). Below the main plot, the geometry of the final segmented DLW is visualized, with the orange line corresponding to the inner radius and the blue line to the outer radius.} \label{fig:semiAna_Results_ARES_WPs} \end{figure*} \subsection{Other Optimization Goals} As already discussed above, not only the linearization within a defined ROI can be set as an optimization goal. Another interesting case could be the removal of any correlated energy spread, aiming for a completely flat LPS. Fig.~\ref{fig:semiAna_Results_ARES_WP1_Flat} shows the result of such an optimization, based on the \SI{10}{\pico\coulomb} WP1 as shown in Table~\ref{tab:ARES_WPs}. It can be seen that the phase space is significantly flattened within the $4\,\sigma$ ROI. Note that this kind of structure could be used to prepare an LPS for further modulation as shown, for example, in section \ref{sec:Fourier}. \begin{figure}[htbp] \centering \includegraphics[width=0.85\columnwidth]{FIG_9.png} \caption{LPSS optimization results based on the ARES working point WP1 shown in Table~\ref{tab:ARES_WPs}. The optimization goal was to completely remove any correlated energy spread within a $4\,\sigma$ ROI. The color and thickness visualize the current profile. The gray shaded areas correspond to the 2,4 and 6\,$\sigma$ regions respectively. 
The head of the bunch is on the left (negative $z$ values). Below the main plot, the geometry of the final segmented DLW is visualized, with the orange line corresponding to the inner radius and the blue line to the outer radius.} \label{fig:semiAna_Results_ARES_WP1_Flat} \end{figure} \subsection{Example Case: Bunch Compression} It was shown in simulation that at ARES, based on magnetic compression and a slit-collimator, sub-fs bunch lengths can be achieved \cite{Zhu:2017phd, Zhu:2016ju}. Starting from an initial bunch charge of \SI{20}{\pico\coulomb} a final rms bunch length of \SI{0.51}{\femto\second} was achieved, \SI{1.75}{\meter} downstream of the chicane exit (cf. Fig.~\ref{fig:ARESLinac}). The remaining charge after the slit is \SI{0.79}{\pico\coulomb}, which corresponds to a $\sim \SI{4}{\percent}$ transmission. The full set of beam parameters is summarized in the first column of Table~\ref{tab:ExampleWPs}. \begin{table}[!htbp] \centering \caption{Beam parameters of different ARES working points (WP) \SI{1.75}{\meter} downstream of the chicane exit ($z=\SI{30.5}{\meter}$). WP,Zhu taken from \cite{Zhu:2017phd}, WP4 obtained from ASTRA and IMPACT-T simulations, as well as the LPSS optimization routine. The $\tilde n\sigma$ subscript refers to the LPSS optimization ROI. TWS: Travelling Wave Structure.} \begin{ruledtabular} \begin{tabular}{lccc} \textbf{Parameter} & \textbf{WP,Zhu} & \textbf{WP4,$0\sigma$} & \textbf{WP4,$4\sigma$} \\ \hline Initial charge & \SI{20}{\pico\coulomb} & \SI{10}{\pico\coulomb} & \SI{10}{\pico\coulomb}\\ Final charge & \SI{0.79}{\pico\coulomb} & \SI{2.2}{\pico\coulomb} & \SI{2.18}{\pico\coulomb}\\ TWS injection phase & \SI{-53}{\degree} & \SI{-38}{\degree} & \SI{-38}{\degree}\\ Chicane $R_{56}$ & \SI{-12.4}{\milli\meter} & \SI{-22.2}{\milli\meter} & \SI{-22.2}{\milli\meter}\\ Chicane slit width & \SI{0.4}{\milli\meter} & \SI{0.6}{\milli\meter} & \SI{0.6}{\milli\meter}\\ $E_0$ & \SI{100.5}{\mega e\volt} & \SI{126.0}{\mega e\volt} & \SI{126.5}{\mega e\volt}\\ $\sigma _E / E_0$ & $1.7 \cdot 10^{-3}$ & $2.5 \cdot 10^{-3}$ & $2.5 \cdot 10^{-3}$\\ $\sigma _t$ & \SI{0.51}{\femto\second} & \SI{0.84}{\femto\second} & \SI{0.73}{\femto\second}\\ $\varepsilon _{\text{n},x}$ & \SI{0.11}{\micro\meter} & \SI{0.35}{\micro\meter} & \SI{0.35}{\micro\meter}\\ $\varepsilon _{\text{n},y}$ & \SI{0.1}{\micro\meter} & \SI{0.17}{\micro\meter} & \SI{0.13}{\micro\meter}\\ $I _\text{p}$ & \SI{0.62}{\kilo\ampere} & \SI{1.32}{\kilo\ampere} & \SI{2.18}{\kilo\ampere}\\ \end{tabular} \end{ruledtabular} \label{tab:ExampleWPs} \end{table} Here we aim to show that based on using an LPSS before magnetic bunch compression, we can achieve similar beam parameters, but at higher mean energy and higher final peak current. To this end WP4, which is a modified version of WP1 (cf. Table~\ref{tab:ARES_WPs}), where the TWS structures are driven at \SI{-38}{\degree} is used in a start-to-end simulation using ASTRA, the LPSS optimization routine and IMPACT-T \cite{IMPACT-T:2006prab}. Up to the LPSS structure the simulation includes space charge forces via ASTRA and after that both space charge and CSR via IMPACT-T. Full linearization in a $4\,\sigma$ optimization ROI was considered as the LPSS optimization goal. The resulting beam parameters \SI{1.75}{\meter} downstream of the chicane exit are summarized in Table~\ref{tab:ExampleWPs}, where WP4,$0\sigma$ refers to our working point without LPSS linearization and WP4,$4\sigma$ to the case employing the optimized LPSS structure. 
The final longitudinal phase spaces are shown in Fig.~\ref{fig:example_final_phase_spaces}. It can be seen that using a passive LPSS structure upstream of the magnetic bunch compressor in $\tilde n\sigma$ linearization mode yields bunches with similar beam quality, but at \SI{26}{\percent} higher mean energy. At the same time, even though the initial charge is \SI{50}{\percent} less, the final charge is higher, due to the larger slit width. This is possible due to the high degree of linearization in the LPSS ROI. The peak current is noticeably higher in both WP4 cases ($\sim 2\times$ w.o. the LPSS and $\sim 3.5\times$ using the LPSS). We note that the transverse phase space of WP4 was not fully optimized as part of this study, which means that the transverse properties of the beam could be improved in future iterations of this particular working point. \begin{figure}[htbp] \centering \includegraphics[width=0.86\columnwidth]{FIG_10.png} \caption{Numerical simulation of the longitudinal phase space and current profile of the ARES working point WP4 shown in Table~\ref{tab:ExampleWPs}, \SI{1.75}{\meter} downstream of the chicane exit ($z=\SI{30.5}{\meter}$). \textbf{Top:} Bunch compression without applying the LPSS optimization, i.e. no structure. \textbf{Bottom:} Bunch compression after applying a $4\,\sigma$ linearization with an optimized LPSS structure.} \label{fig:example_final_phase_spaces} \end{figure} Finally it should be noted, that at higher overall charges significant energy modulation due to CSR can spoil the linearity of the LPS during bunch compression. This, however, could be included into future versions of the LPSS optimization routine as the virtual last element of the LPSS structure. \section{Realistic Structures} \label{sec:RealisticStructures} \subsection{Segment Transitions} Our previous discussion has treated the LPSS as a series of individual successive DLW segments. In order to calculate the resulting energy modulation, the individual wakefields of the segments were summed up and applied to the input LPS. Although this is a good first approximation, in reality there are two issues with this approach. First, the sharp transitions between the segments will disturb the wakefield slightly. Second and most importantly, this kind of segmented structure cannot be produced, because in some cases it turns out that $a_{i+1} > b_i$, which would mean that the ($i+1$)th segment could not actually be attached to the $i$th segment. It is hence necessary to include transition elements between the individual segments. These elements could for example be short linearly tapered sections. Although adding such a transition would enable production of the structure, it also alters the resulting wakefield. In order to investigate this effect, ECHO2D \cite{ECHO2D:33} simulations were performed. The longitudinal monopole wakefield, excited by a Gaussian current with an arbitrarily chosen $\sigma _t = \SI{500}{\femto\second}$, was compared for three different cases: \begin{enumerate} \item The sum of the resulting wakefield of two individually simulated DLW segments of length $l_1$ and $l_2$, \item The two segments directly behind one another. (sharp, unrealistic transition), \item The two segments connected with a linearly tapered transition region of length $l_t$. \end{enumerate} Note that the overall length $L$ of the structure is the same for both case 2 and 3. This means that for case 3 the individual segments are shortened by $0.5\cdot l_t$ each. 
Hence, case 2 is essentially case 3 with $l_t = 0$. See Fig.~\ref{fig:ECHO_SIM_1} for an illustration of the three different cases. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{FIG_11.pdf} \caption{Illustration of the DLW geometry used in the ECHO2D simulations. All cases include a (lossless) metal coating of \SI{100}{\micro\meter} thickness. The blue lines correspond to the outline of the metal coating and the orange lines to the outline of the dielectric. \textbf{1.a}: Single segment of length $l_1 = \SI{10}{\milli\meter}$. \textbf{1.b}: Single segment of length $l_2 = \SI{10}{\milli\meter}$. \textbf{2}: Segments right next to each other (sharp, unrealistic transition). \textbf{3}: Two segments connected with a linearly tapered transition region of length $l_t = \SI{1}{\milli\meter}$.} \label{fig:ECHO_SIM_1} \end{figure} Fig.~\ref{fig:ECHO_SIM_2} shows the integrated residual difference between the wakefield obtained from the case 1 and case 3 geometries using a drive bunch with $\sigma _t = \SI{500}{\femto\second}$ vs. different values of $l_t$. The exemplary dimensions of the DLW segments are $a_1 = \SI{0.2}{\milli\meter}$, $b_1 = \SI{0.4}{\milli\meter}$, $l_1 = \SI{10}{\milli\meter}$, $a_2 = \SI{0.6}{\milli\meter}$, $b_2 = \SI{0.7}{\milli\meter}$, $l_2 = \SI{10}{\milli\meter}$. The dielectric is defined by $\epsilon _r = 4.41, \mu _r = 1$ and the metal coating, which is assumed to be a perfect conductor, has a thickness of \SI{0.1}{\milli\meter}. The simulation results show that an optimal $l_t$ can be found depending on the area of interest around the peak of the drive current. It has to be noted that although this minimum does not depend strongly on the longitudinal dimensions of the segments, it does depend on the transverse dimensions $a_i$ and $b_i$ (and on $\sigma _t$, as the whole composition of the structure depends on it). It is hence implied that each transition has to be uniquely optimized. This, however, can be directly factored into the optimization routine discussed above (extending the number of degrees of freedom from $3N$ to $4N-1$). \begin{figure}[hbp] \centering \includegraphics[width=\columnwidth]{FIG_12.pdf} \caption{Normalized integrated residual difference between the wakefield obtained from the sum of two individual DLW segments and a combined device with a linearly tapered transition region of length $l_t$, as shown in Fig.~\ref{fig:ECHO_SIM_1}. The different curves correspond to the $6\sigma, 4 \sigma$ and $2 \sigma$ parts of the drive bunch, as well as the complete simulation box (total).} \label{fig:ECHO_SIM_2} \end{figure} It was shown that the integrated difference between a case 1 and 3 geometry can be minimized by adjusting $l_t$. Fig.~\ref{fig:ECHO_SIM_3} shows the longitudinal wake for all three geometry cases based on a simulation using the exemplary parameters from above and an optimized $l_t$ of \SI{953}{\micro\meter}. In addition to the wakefields, the absolute and relative difference compared to case 1 are plotted for both the case 2 and case 3 geometries, respectively. It can be seen that, depending on the area of interest along the drive bunch, the error can be very small and is generally smaller than \SI{10}{\percent}. The error can be large, however, towards the tail of the bunch. The significance of this effect depends strongly on the specific input electron distribution and the particular use case. Assuming a Gaussian longitudinal current profile, $< \SI{16}{\percent}$ of the charge is affected.
Recalling Fig.~\ref{fig:ECHO_SIM_2}, the goal should in general be to minimize the effect of the transition in the region of highest charge density. In summary, it can be concluded that it is possible to find transition regions, which minimize the difference of the produced wakefield compared to the summed up wakefield of individual segments, as used in the optimization routine discussed above. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{FIG_13.pdf} \caption{Comparison of the wakefield obtained using the geometries illustrated in Fig.~\ref{fig:ECHO_SIM_1}. $l_t = \SI{953}{\micro\meter}$, which is the value determined by the optimization scan shown in Fig.~\ref{fig:ECHO_SIM_2}. The shaded areas correspond to the $6\sigma, 4 \sigma$ and $2 \sigma$ parts of the drive bunch.} \label{fig:ECHO_SIM_3} \end{figure} \subsection{Manufacturing} The optimization shown above does not include any assumption about possible inaccuracies due to the manufacturing process. In reality, the exact shape of the individual segments is determined by the tolerances during production. Assuming a 3D-printed structure, the parameters $a_i$, $b_i$ and $l_i$ are determined by the transverse and longitudinal printing resolution and on how the structure is printed (flat or standing). We consider the ASIGA MAX X27 3D printer \cite{ASIGA:MAXX27} and its printing resolution as an example. This particular printer has a longitudinal resolution $\rho_{z}$ of \SI{10}{\micro\meter} (minimum layer thickness) and a lateral resolution $\rho_{xy}$ of \SI{27}{\micro\meter} (DLP pixel size). Fig.~\ref{fig:semiAna_Result_Gauss_ASIGA} shows the comparison between the linearization using an ideal LPSS and an LPSS, which was optimized taking the aforementioned printing resolution into account. Here we model the effect such that $\tilde a_i = \lfloor 2a_i/\rho_{xy} \rfloor \cdot \rho_{xy}/2$, $\tilde b_i = \lceil 2b_i/\rho_{xy} \rceil \cdot \rho_{xy}/2$ and $\tilde l_i = \lceil l_i/\rho_{z} \rceil \cdot \rho_{z}$, where the tilde denotes the radii and length of the segments after applying the printer resolution. The results show that the limited printing resolution only has a small impact on the final linearization. It has to be noted, that the chirp across the ROI is different, but only because it was not part of the particular optimization goal. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{FIG_14.pdf} \caption{Comparison of LPSS optimization results for a Gaussian input current profile. The optimization goal was to achieve $R=1$ in a 4\,$\sigma$ region of interest. The input beam parameters correspond to WP3 (see Table~\ref{tab:ARES_WPs}). \textbf{Blue:} Ideal LPSS, \textbf{orange:} LPSS taking a lateral printing resolution of \SI{27}{\micro\meter} and a longitudinal printing resolution of \SI{10}{\micro\meter} into account (as can, for example, be achieved with an ASIGA MAX X27 3D printer).} \label{fig:semiAna_Result_Gauss_ASIGA} \end{figure} \section{Robustness of the Scheme} \label{sec:robustness} \subsection{Input LPS} As discussed above, an LPSS must be specifically tailored to the incoming LPS. In reality the actual shape of the input LPS varies according to the stability of certain accelerator machine parameters. The LPS in particular is influenced by the stability of both amplitude and phase of the accelerating fields, but also by dispersive sections and collective effects, such as coherent synchrotron radiation (CSR). 
It is hence interesting to investigate the effect of the actual shape of the input LPS on the output LPS. To this end, the third ARES working point (WP3) with \SI{200}{\pico\coulomb} of total charge and a Gaussian time profile with $\sigma_t = \SI{2.65}{\pico\second}$ (see Section~\ref{sec:Multimode_Example}) is used. The sensitivity of the linearity parameter $R$ within a 4\,$\sigma$ ROI is determined for four different parameters, with the first two parameters being the amplitude and phase of the accelerating field, which define the curvature of the incoming LPS. The third parameter is $\sigma _t$, which in reality, of course, non-trivially depends on multiple factors, but is here varied independently, while keeping the total bunch charge constant. The fourth parameter is the bunch charge $Q$, keeping $\sigma _t$ constant. Fig.~\ref{fig:Robustness_Results} summarizes the results of the four scans. The results show that the relative change in $R$ is very small ($\ll \SI{0.1}{\percent}$), leading to the conclusion that, at least for the specific example of LPS linearization, the LPSS scheme is robust within the limits of typical accelerator machine stability. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{FIG_15.pdf} \caption{Relative change of the linearity factor $R$ vs. four different parameters, which influence the input LPS.} \label{fig:Robustness_Results} \end{figure} \subsection{Systematic Manufacturing Errors} In addition to the uncertainty in the shape of the input LPS, there can also be systematic errors in the geometry of the LPSS itself. In order to investigate this, two scenarios were studied. The first one is a constant error $\Delta r$ of both the inner and outer radii, i.e. $\tilde a_i = a_i+\Delta r$ and $\tilde b_i = b_i+\Delta r$. The second scenario is a constant difference in wall thickness, meaning $\tilde a_i = a_i-\Delta r/2$ and $\tilde b_i = b_i+\Delta r/2$. The range is chosen according to the lateral resolution of the ASIGA printer discussed above. Hence $\Delta r \in [-30,30]\,\SI{}{\micro\meter}$. Fig.~\ref{fig:Robustness_Results_ASIGA} summarizes the results of the scan. The LPSS optimization scenario is the same as before. It can be seen that the change in global aperture has a very small effect on $R$ ($< \SI{0.01}{\percent}$). The wall thickness, on the other hand, has a $\sim 10\times$ stronger effect, with a slight asymmetry. It is still a small effect with $|\Delta R| < \SI{0.1}{\percent}$ within the given range of $\Delta r$. The slightly asymmetric behaviour might be explained by the non-linear dependence of the amplitude and frequency of the wake towards smaller inner radii (cf. Fig.~\ref{fig:DLW_Modes_Geometry}) in conjunction with an increase in the modal content as the thickness of the dielectric lining increases. A more thorough study of this behaviour would be interesting, but exceeds the scope of this work. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{FIG_16.pdf} \caption{Relative change of the linearity factor $R$ vs. a constant error $\Delta r$ for two different systematic error scenarios. The same $\Delta r$ is applied to all segments.} \label{fig:Robustness_Results_ASIGA} \end{figure} \section{Conclusion and Outlook} A completely passive LPSS solution, based on segmented DLWs, was presented and studied both analytically and numerically.
The results based on the idealized single-mode Fourier synthesis, coupled with a longitudinally dispersive section, reveal phase space configurations that could be interesting for applications, especially in the context of radiation generation (multi-color microbunch trains, sub-microbunches with tunable relative spacing, etc.). Arbitrary multimode optimization was investigated, which enables application of the method to arbitrary input phase spaces. The results shown here are promising, as the exemplary goal of full linearization of the input LPS within a given $\tilde n\sigma_z$ ROI was achieved in a semi-analytical simulation to a very high degree. The input LPSs used for the study were chosen to be realistic and are based on numerical simulations of an existing accelerator, the ARES linac at DESY. Motivated by these results, a start-to-end simulation of a possible experiment at the ARES linac was performed, yielding sub-fs bunches comparable to reference working points, but at $\sim \SI{26}{\percent}$ higher mean energy and $\sim 3.5\times$ larger peak current, starting from \SI{50}{\percent} less initial charge. It was furthermore shown, based on ECHO2D simulations, that it is possible to integrate short transition regions between the segments, which enables realistic structure shapes that can be produced with a 3D-printer. The optimization routine used in this work can export its result as 3D models suitable for direct import into 3D printing software. Fig.~\ref{fig:3DRendering} shows a rendering of such a file. The structures can be made from metallized 3D-printed plastic, or even 3D-printed quartz \cite{Kotz:2017ez}. Depending on the specific printing process, longer structures might be constructed of two or more cascaded macro segments. \begin{figure}[htbp] \centering \includegraphics[width=0.4\columnwidth]{FIG_17.png} \caption{Section view of a 3D rendering of a potential printed and metallized LPSS structure. The 3D model was obtained directly from the optimization routine.} \label{fig:3DRendering} \end{figure} The robustness of the scheme was investigated for the LPS linearization example and found to be satisfactory based on accelerator stability, as well as manufacturing tolerance considerations. This, together with the low cost of the devices, alleviates the fact that each LPSS device is specific to a given accelerator working point; multiple structures could be installed and swapped in as needed. Further studies could focus on transverse effects in LPSS structures, as potentially triggered dipole modes might lead to deflection. Also material-dependent charging of the dielectric could be studied. Finally, the LPSS optimization routine could be updated to take expected downstream LPS modulation, due to e.g. collective effects, into account. \begin{acknowledgments} The authors would like to thank I.~Zagorodnov for support in the use of ECHO2D. \end{acknowledgments}
\section{Introduction} Recently, the interplay of chaos and quantum interference in $ballistic$ quantum dots has intrigued experimentalists and theorists alike. The quantum interference effects, $e.g.$, ballistic weak localization (BWL) ~\cite{rf:Baranger} and ballistic conductance fluctuation ~\cite{rf:Jalabert} in such structures depend on whether the underlying classical dynamics is regular or chaotic. Therefore, these effects are interesting from the viewpoint of the theory of $quantum$ $chaos$ ~\cite{rf:Quantum chaos}. More recently, we predicted that the $h/2e$ oscillation of magnetoconductance, analogous to the Altshuler-Aronov-Spivak (AAS) effect ~\cite{rf:Altshuler} in disordered systems ~\cite{rf:AAS}, should be observable in a single $ballistic$ Aharonov-Bohm ring (hereafter called AB billiard) with magnetic flux penetrating $only$ through the hollow ~\cite{rf:Kawabata}. This phenomenon of conductance oscillation is caused by the interference between a pair of time-reversed coherent back-scattering classical paths that wind around the center obstacle in the billiard. We calculated the diagonal part of the BWL correction to the wave-number averaged reflection coefficient by use of semiclassical scattering (SCS) theory ~\cite{rf:Baranger,rf:Jalabert,rf:BJS}. Our analysis ~\cite{rf:Kawabata} yielded for the $chaotic$ AB billiard \begin{equation} \delta {\cal R}_D (\Phi) \sim \sum_{n=1}^{\infty} \exp{\left( - \delta n \right)} \cos{\left( 4 \pi n \frac {\Phi} {\Phi_0} \right)}, \label{eqn:aas1} \end{equation} where \( \Phi_0 = h/e\) is the magnetic flux quantum and \( \Phi \) is the magnetic flux that penetrates the hollow. In eq. (\ref{eqn:aas1}) \( \delta = \sqrt { 2 T_0 \gamma / \alpha } \), where $\alpha$, $T_0$, and $\gamma$ are system-dependent constants and correspond to the variance of the winding number distribution ~\cite{rf:Berry}, the dwelling time for the shortest classical orbit and the escape rate ~\cite{rf:BJS,rf:LDJ}, respectively. In this case, the oscillation amplitude decays exponentially with increasing harmonic rank $n$, so that the main contribution to the conductance oscillation comes from the $n$=1 component that oscillates with a period of $h$/2$e$. By contrast, for $regular$ and $mixed$ (Kolmogorov-Arnold-Moser tori and chaotic sea) AB billiards, we obtained ~\cite{rf:Kawabata} \begin{equation} \delta {\cal R}_D (\Phi) \sim \sum_{n=1}^{\infty} F \left( z - \frac1 {2} , z +\frac1 {2} ; - \frac{n^2} {2 \alpha} \right) \cos{\left( 4 \pi n \frac {\Phi} {\Phi_0} \right)}, \label{eqn:aas2} \end{equation} where $F$ and $z$ are the confluent hypergeometric function and the exponent of the dwelling time distribution \( N(T) \sim T^{- z} \) ~\cite{rf:BJS,rf:LDJ}, respectively. In eq. (\ref{eqn:aas2}) the oscillation amplitude decays algebraically for large $n$, and therefore the higher-harmonics components give a noticeable contribution to magnetoconductance oscillations. These discoveries indicate that {\em the $h/2e$ AAS oscillation occurs in both ballistic and diffusive systems forming the AB geometry and the behavior of higher harmonics components reflects a difference between chaotic and non-chaotic classical dynamics}. In real experiments, however, the magnetic field would be applied to the $entire$ region (both the hollow and annulus) in the billiard. Thus it is indispensable to apply the SCS theory to this case, in order to allow a comparison between the experimental data and the theoretical prediction.
In this situation, we shall envisage $h/2e$ oscillation together with {\em the negative magnetoresistance} and {\em dampening of the $h/2e$ oscillation amplitude} with increasing magnetic field. In this paper, we shall focus our attention on two-dimensional ballistic AB billiards (e.g., the insets in Fig. 1) with the magnetic flux penetrating through the entire region and calculate reflection amplitude by use of SCS theory. \section{Semiclassical theory} Following Baranger, Jalabert and Stone's arguments ~\cite{rf:Baranger,rf:BJS}, we start with a quantum-mechanical reflection amplitude ~\cite{rf:FL} \begin{equation} r_{n,m} = \delta_{n,m} - i \hbar \sqrt{\upsilon_n \upsilon_m} \int dy \int dy' \psi_n^*(y') \psi_m(y) G(y',y,E_F), \label{eqn:a1} \end{equation} where \(\upsilon_m(\upsilon_n)\) and \(\psi_m(\psi_n)\) are the longitudinal velocity and transverse wave function for the mode $m(n)$, respectively. $G$ is the retarded Green's function. To approximate $r_{n,m}$ we replace $G$ by its semiclassical Feynman path-integral expression ~\cite{rf:Gutzwiller}, \begin{equation} G^{sc}(y',y,E) = \frac {2 \pi} {(2 \pi i \hbar)^{3/2}} \sum_{s(y,y')} \sqrt{D_s} \exp \left[ \frac i {\hbar} S_s (y',y,E) - i \frac \pi {2} \mu_s \right], \label{eqn:a2} \end{equation} where $S_s$ is the action integral along classical path $s$, \( D_s = ( \upsilon_F \cos{\theta'})^{-1} \left| ( \partial \theta /\partial y' )_y \right| \) , \( \theta \) (\( \theta' \)) is the incoming (outgoing) angle, and \( \mu_s \) is the Maslov index. Assuming hard walls in the leads, we substitute eq. (\ref{eqn:a2}) into eq. (\ref{eqn:a1}) and carry out the double integrals by a stationary-phase approximation. Thus we obtain \begin{equation} r_{n,m} = - \frac {\sqrt{2 \pi i \hbar}} {2 W} \sum_{s(\bar n,\bar m)} {\rm sgn} (\bar n) {\rm sgn} (\bar m) \sqrt{\tilde D_s} \exp{ \left[ \frac i {\hbar} \tilde S_s (\bar n,\bar m;E)-i \frac \pi {2} \tilde \mu_s \right] }, \label{eqn:a3} \end{equation} where $W$ is the width of the hard-wall leads and \( \bar m = \pm m \). The summation is over trajectories between the cross sections at $x$ and $x'$ with angle \( \sin{\theta} = \bar{n} \pi / k W \). In eq. (\ref{eqn:a3}), \( \tilde{S_s} (\bar{n},\bar{m};E) = S_s(y'_0,y_0;E)+ \hbar \pi ( \bar{m} y_0 - \bar{n} y'_0 ) / W \), \( \tilde{D_s} = ( m_e \upsilon_F \cos{\theta'})^{-1} \left| ( \partial y /\partial \theta' )_{\theta} \right| \) and \( \tilde{\mu_s} = \mu_s + u \left( -( \partial \theta / \partial y )_y' \right) + u \left( -( \partial \theta' / \partial y' )_{\theta} \right), \) respectively, where $u$ is the Heaviside step function. The Kronecker delta term in eq. (\ref{eqn:a1}) is exactly canceled by the contributions of paths of zero length ~\cite{rf:Lin}. Within the diagonal approximation ~\cite{rf:Baranger,rf:BJS}, the quantum correction \( \delta R \) to the classical reflection probability \(R_{cl}\), $viz.$, \begin{equation} R = \sum_{n,m=1}^{N_M} \left| r_{n,m} \right| ^{2} \approx R_{cl} + \delta R \label{eqn:refrection} \end{equation} with the mode number $N_{M}$, is given by \begin{equation} \delta R_D = \frac 1 {2} \frac{\pi} {k W} \sum_n \sum_{s \ne u} \sqrt{\tilde A_s \tilde A_u} \exp \left[ i k \left( \tilde L_s - \tilde L_u \right) +i \pi \nu_{s,u} \right], \label{eqn:a7} \end{equation} where $s$ and $u$ label the classical trajectories. In eq. 
(\ref{eqn:a7}), \( \tilde L_s = \tilde S_s / k \hbar \) , \( \nu_{s,u} = \left( \tilde \mu_u - \tilde \mu_s \right) / 2 \) , and \( \tilde A_s = \left( \hbar k / W \right) \tilde D_s \) . The wave-number averaging of \( \delta R_D \) over all $k$, denoted as \( \delta {\cal R}_D \) , eliminates all paths except those that satisfy \( \tilde L_s = \tilde L_u \) in eq. (\ref{eqn:a7}). In the absence of spatial symmetry, \( \tilde L_s = \tilde L_u \) holds if $u$ is the time reversal of $s$. A weak magnetic field does not change the classical trajectories appreciably but does change the phase difference between the time-reversed trajectories by \( (S_s - S_u)/\hbar = 2 \Theta_s B / \Phi_0 \), where \( \Theta_s \equiv 2 \pi \int_s {\bf A} \cdot d{\bf \ell}/ B \) is the effective area almost enclosed by the classical path. To evaluate the summation over $s$ and $n$, we shall reorder the backscattering classical paths according to the increasing effective area. Therefore, we obtain \begin{equation} \delta {\cal R}_D (B) \sim \int_{-\infty}^{\infty} d \Theta N(\Theta) \exp{ \left( i \frac {2 \Theta B} {\Phi_0} \right) }, \label{eqn:a9} \end{equation} where \( N(\Theta) \) is the distribution of $\Theta$. The phenomenological statistical theory leading to the distribution of the enclosed area $N(\Theta)$ for chaotic AB billiards is given as follows. There exist two kinds of classical paths, and $N(\Theta)$ is the sum of the distribution of non-winding trajectories, viz., $N_0(\Theta)$, and that of the $n (\ne 0)$ winding trajectories. This is because all classical trajectories wind around the center obstacle $n$ times before they exit, except for very short backscattered paths (the $n$=0 component). $N_0(\Theta)$ is essentially the same as that of an ordinary chaotic billiard ($e.g.,$ stadium), obeying a monotonic exponential law ~\cite{rf:BJS,rf:LDJ}, $i.e.$, \( N_0(\Theta) \sim \exp ( -\varepsilon_{cl} \left| \Theta \right|) \) , where \( \varepsilon_{cl} \) is the inverse of the average area enclosed by classical trajectories. Therefore, the full distribution of the enclosed area is given by \begin{equation} N(\Theta) \sim N_0 (\Theta) + \sum_{\stackrel{n=-\infty}{n \ne 0}}^{\infty} N(\Theta,n) P(n), \label{eqn:b1} \end{equation} where \( P(n) \) and \( N(\Theta,n) \) are the distribution of the winding number \( n \) and that of the enclosed area for a given \( n \), respectively. Owing to the ergodic properties of fully chaotic systems, \( N(\Theta,n) \) is assumed to obey a Gaussian distribution in which the variance of the area is proportional to \( n \) , $i.e.$, \begin{equation} N(\Theta,n) \sim \frac{1} {\sqrt{2 \pi \beta \left| n \right|}} \exp{ \left[ - \frac{{\left( \Theta - n \Theta_0 \right)}^2} {2 \beta \left| n \right|} \right] }. \label{eqn:b2} \end{equation} On the other hand, exploiting Berry and Keating's argument ~\cite{rf:Berry}, \( P(n) \) is given by \begin{equation} P(n) = \int_0^\infty dT P(n,T) N(T) \sim \exp { \left( - \delta \left| n \right| \right) }, \label{eqn:b3} \end{equation} where $\delta=\sqrt{ 2 T_0 \gamma / \alpha}$. In eq. (\ref{eqn:b3}), $N(T) \sim \exp (- \gamma T)$ and \begin{equation} P(n,T) = \sqrt{ \frac{T_0} {2 \pi \alpha T}} \exp \left( -\frac{n^2 T_0 } { 2 \alpha T} \right) \label{eqn:b4} \end{equation} are the exponential dwelling time distribution ~\cite{rf:BJS,rf:LDJ} and the Gaussian distribution of winding numbers $n$ for trajectories with a fixed \( T \) ~\cite{rf:Kawabata,rf:Berry}, respectively.
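To make the last step explicit, the $T$-integral in eq. (\ref{eqn:b3}) is of the standard form \( \int_0^\infty dT \, T^{-1/2} \, e^{-a/T - bT} = \sqrt{\pi / b} \, e^{-2\sqrt{ab}} \); with \( a = n^2 T_0 / 2 \alpha \) and \( b = \gamma \) one obtains
\begin{equation}
P(n) \sim \sqrt{ \frac{T_0}{2 \pi \alpha} } \int_0^\infty dT \, T^{-1/2} \exp \left( - \frac{n^2 T_0}{2 \alpha T} - \gamma T \right) = \sqrt{ \frac{T_0}{2 \alpha \gamma} } \, \exp \left( - \left| n \right| \sqrt{ \frac{2 T_0 \gamma}{\alpha} } \right),
\end{equation}
which reproduces \( \delta = \sqrt{2 T_0 \gamma / \alpha} \).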
With the use of eqs. (\ref{eqn:b2}) and (\ref{eqn:b3}), we reach \begin{equation} N(\Theta) \sim A e^{-\varepsilon_{cl} \left| \Theta \right|} + \sum_{\stackrel{n=-\infty}{n \ne 0}}^{\infty} \frac 1 {\sqrt{2 \pi \beta \left| n \right|}} \exp{ \left[ - \frac {{\left( \Theta - n \Theta_0 \right)}^2} {2 \beta \left| n \right|} - \delta \left| n \right| \right] }. \label{eqn:a10} \end{equation} To examine the validity of expression (\ref{eqn:a10}), we directly calculated $N(\Theta)$ for chaotic AB billiards (a single Sinai billiard ~\cite{rf:Sinai}) by means of classical numerical simulations. In the calculations, we inject \( 10^8 \) particles into the billiard with different initial conditions. \( N(\Theta) \) for the chaotic AB billiard proves to be nicely fitted by eq. (\ref{eqn:a10}) [see Fig. 1(a)]. (In this case, \( \Theta_0 / 2 \pi \) is approximately the average area between the outer square and the inner circle.) As the size of the center obstacle approaches zero, the Sinai geometry becomes a square (regular) billiard. In this case, we have confirmed that the oscillation structure disappears and \( N(\Theta) \) obeys a well-known power law ~\cite{rf:BJS,rf:LDJ}. Substituting eq. (\ref{eqn:a10}) into eq. (\ref{eqn:a9}), we finally obtain \begin{equation} \delta {\cal R}_D (\Phi) \sim \frac {A \varepsilon_{cl}^{-1}} { 1 + \displaystyle{ {\left( \frac{4 \pi} {\Theta_0 \varepsilon_{cl}} \frac {\Phi} {\Phi_0} \right)}^2 } } + \sum_{n=1}^{\infty} \exp { \left[ -\left\{ \delta + \frac {\beta} {2} {{\left( \frac{4 \pi} {\Theta_0} \frac {\Phi} {\Phi_0} \right)}^2} \right\} n \right] } \cos{\left(4 \pi n \frac {\Phi} {\Phi_0} \right)}, \label{eqn:a13} \end{equation} where \( \Phi=B \Theta_0 / 2 \pi \). The first term in eq. (\ref{eqn:a13}), $i.e.$, \( A \varepsilon_{cl}^{-1} / \left\{ 1+ {\left( 2B / {\varepsilon}_{cl} \Phi_0 \right)}^2 \right\} \), which contributes to negative magnetoresistance, agrees with Baranger, Jalabert and Stone's Lorentzian BWL correction \begin{equation} \delta {\cal R}_D (B) = \frac{{\cal R}_{cl}} { 1 + \displaystyle{ \left( \frac{2B} {{\varepsilon}_{cl} \Phi_0} \right)^2 } } \label{eqn:a19} \end{equation} for chaotic billiards ~\cite{rf:Baranger,rf:BJS}, where \( {\cal R}_{cl} \) is the wave-number-averaged classical reflection probability. \begin{center} \begin{figure}[b] \hspace{0.3cm} \epsfxsize=8.5cm \epsfbox{Fig.1a.ps} \epsfxsize=8.5cm \epsfbox{Fig.1b.ps} \caption{Semi-logarithmic plot of the effective area distributions in scattering from the (a) Sinai (chaotic) billiard (\(R/W=3\) and \(L/W=10\)) and (b) square AB (regular) billiard (\(L_{in}/W=6\) and \(L_{out}/W=10\)). The numerical simulation results (diamonds) for (a) and (b) are well fitted by eqs. (\ref{eqn:a10}) and (\ref{eqn:a14}) (solid lines), respectively. The insets show the schematic views of the two types of AB billiards.} \end{figure} \end{center} For the second term in eq. (\ref{eqn:a13}) (\( n>0 \) components), the oscillation amplitude decays exponentially with increasing $n$. Therefore, the main contribution to the conductance oscillation comes from the $n$=1 component, which oscillates with period \( h / 2e \). This behavior is consistent with our previous specific result [i.e., eq. (\ref{eqn:aas1})] for the chaotic AB billiard in which magnetic flux penetrates $only$ through the hollow ~\cite{rf:Kawabata}. In addition to this property, the oscillation amplitude damps exponentially with increasing magnetic field.
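Both terms in eq. (\ref{eqn:a13}) follow from elementary Fourier transforms of eq. (\ref{eqn:a10}):
\begin{equation}
\int_{-\infty}^{\infty} d \Theta \, e^{-\varepsilon_{cl} \left| \Theta \right|} \exp{ \left( i \frac {2 \Theta B} {\Phi_0} \right) } = \frac{2 \varepsilon_{cl}^{-1}}{ 1 + {\left( 2B / \varepsilon_{cl} \Phi_0 \right)}^2 },
\end{equation}
\begin{equation}
\int_{-\infty}^{\infty} \frac{d \Theta}{\sqrt{2 \pi \beta \left| n \right|}} \exp{ \left[ - \frac {{\left( \Theta - n \Theta_0 \right)}^2} {2 \beta \left| n \right|} \right] } \exp{ \left( i \frac {2 \Theta B} {\Phi_0} \right) } = \exp{ \left[ i \frac{2 n \Theta_0 B}{\Phi_0} - \frac{\beta \left| n \right|}{2} {\left( \frac{2B}{\Phi_0} \right)}^2 \right] }.
\end{equation}
Summing the second expression over \( \pm n \) with the weight \( e^{- \delta \left| n \right|} \) and using \( \Phi = B \Theta_0 / 2 \pi \), i.e., \( 2B/\Phi_0 = (4 \pi / \Theta_0)(\Phi / \Phi_0) \), reproduces the damped cosine series in eq. (\ref{eqn:a13}).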
On the other hand, for regular cases (the square AB billiard) the form of \( N(\Theta) \) has been estimated as \begin{equation} N(\Theta) \sim n_0 {\left( \left| \Theta \right| + \Delta_1 \right)}^{-z_2} + \sum_{\stackrel{n=-\infty}{n \ne 0}}^{\infty} {\left( \left| n \right| + n_1 \right)}^{-z_1} {\left( \left| \Theta -n \Theta_0 \right| + \Delta_2 \right)}^{-z_2} \label{eqn:a14} \end{equation} from the numerical simulation [see Fig. 1(b)]. In the calculation we injected \( 9 \times 10^8 \) particles into the billiard. In eq. (\ref{eqn:a14}) \( n_0, n_1 \), $z_1$, $z_2$, $\Delta_1$, and $\Delta_2$ are also fitting parameters and \( \Theta_0 / 2 \pi \) is approximately the average area between the outer and inner squares in this case. This distribution leads to ~\cite{rf:comment2} \begin{equation} \delta {\cal R}_D (\Phi) \sim n_0 A_1 (\Delta_1,\Phi) + 2 A_1 (\Delta_2,\Phi) \sum_{n=1}^{\infty} {\left( n + n_1 \right)}^{-z_1} \cos{\left(4 \pi n \frac {\Phi} {\Phi_0} \right)}, \label{eqn:a15} \end{equation} where \begin{equation} A_1(\Delta,\Phi) \equiv \int_{0}^{\infty} dx (x+\Delta)^{-z_2} \cos{ \left( \frac{4 \pi} {\Theta_0} \frac {\Phi} {\Phi_0} x \right) }. \label{eqn:a16} \end{equation} Since $A_1(\Delta_1,\Phi)$ is equal to the Fourier transform of a power-law function, one can expect a cusplike BWL peak near zero magnetic field ~\cite{rf:Baranger,rf:Stone}. In contrast to chaotic cases, the oscillation amplitude decays algebraically for $n$. Therefore, we can see that higher-harmonics components give a significant contribution to conductance oscillations. This is because the number of multiple-winding trajectories is much larger in regular billiards than in chaotic billiards. \section{AAS OSCILLATION AND NEGATIVE MAGNETORESISTANCE} In this section we shall discuss in more detail the difference of \( \delta {\cal R}_D (\Phi) \) between chaotic and regular AB billiards. In Fig. 2 we show \( \delta {\cal R}_D (\Phi) \) for Sinai (chaotic) and square AB (regular) billiards. The values of the fitting parameters, determined by the classical simulation, are substituted into eqs. (\ref{eqn:a13}) and (\ref{eqn:a15}). In order to see the marked difference of the $\Phi$ dependence of \( \delta {\cal R}_D (\Phi) \), we shall investigate the $n=0$ term in eqs. (\ref{eqn:a13}) and (\ref{eqn:a15}), denoted as \( \delta {\cal R}_{NMR} (\Phi) \) and the \(n > 0\) term, denoted as \( \delta {\cal R}_{AAS} (\Phi) \), separately, i.e., \begin{center} \begin{figure}[b] \hspace{0.3cm} \epsfxsize=8.5cm \epsfbox{Fig.2.ps} \caption{Semi-classical BWL correction \(\delta {\cal R}_D \) to the reflection coefficient for chaotic AB (Sinai) billiard (solid) and regular square AB billiard (dotted) as a function of \(\Phi(=\Theta_0 B / 2 \pi)\). \(\delta {\cal R}_D \) is normalized to the value at $\Phi=0$ , $i.e.,$ the classical reflection probability \({\cal R}_{cl} \).} \end{figure} \end{center} \begin{equation} \delta {\cal R}_D (\Phi)= \delta {\cal R}_{NMR}(\Phi) + \delta {\cal R}_{AAS}(\Phi). \label{eqn:a17} \end{equation} Figure 3(a) shows \( \delta {\cal R}_{NMR} (\Phi) \) which contributes to the negative magnetoresistance for two types of billiards. The shapes of \( \delta {\cal R}_{NMR} (\Phi) \) in the vicinity of zero magnetic field are quite different between chaotic and regular billiards, i.e., a quadratic curve versus a linear line [see the inset in Fig. 3(a)]. 
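The different shapes near zero field can be read off analytically: expanding the Lorentzian $n=0$ term of eq. (\ref{eqn:a13}) for small \( \Phi \) gives
\begin{equation}
\delta {\cal R}_{NMR} (\Phi) \simeq A \varepsilon_{cl}^{-1} \left[ 1 - {\left( \frac{4 \pi} {\Theta_0 \varepsilon_{cl}} \frac {\Phi} {\Phi_0} \right)}^2 \right],
\end{equation}
i.e., a quadratic maximum, whereas for the power-law distribution the leading non-analytic correction to \( A_1(\Delta_1,\Phi) \) scales as \( {\left| \Phi \right|}^{z_2 - 1} \) for non-integer \( 1 < z_2 < 3 \), producing a cusp which is approximately linear in \( \left| \Phi \right| \) when \( z_2 \) is close to 2.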
For large $\Phi$ (but with the cyclotron radius sufficiently large compared to the system dimension), \( \delta {\cal R}_{NMR} (\Phi) \) saturates in chaotic cases, but shows no saturation in regular cases. Similarly, \( \delta {\cal R}_{AAS} (\Phi) \), corresponding to the $h/2e$ AAS-like oscillation part, is shown in Fig. 3(b). While the oscillation amplitude damps rapidly with increasing $\Phi$ for the chaotic AB billiard, it damps gently for regular AB billiards. Therefore, on the basis of the above results, the qualitative difference of \( \delta {\cal R}_D (\Phi) \) between chaotic and regular AB billiards is attributed to the different classical distributions of the effective areas. As the dimension of the center obstacle (e.g., $R$ for a Sinai billiard and $L_{in}$ for a square AB billiard) becomes zero, the oscillation structure of \( N(\Theta) \) becomes indistinct for both types of billiards, so that the $h/2e$ conductance oscillation would disappear. To consolidate the above semiclassical prediction of the $h$/2$e$ oscillation, we must compare eqs. (\ref{eqn:a13}) and (\ref{eqn:a15}) with quantum-mechanical calculations (e.g., a recursive Green's function method ~\cite{rf:Recursion}) and also check the influence of the off-diagonal contribution to \( \delta {\cal R} (\Phi) \) ~\cite{rf:Baranger,rf:BJS,rf:Takane}. Moreover, it is desirable to confirm our prediction by having recourse to a random matrix approach for systems with broken time-reversal symmetry ~\cite{rf:RMT}. Such investigations will be given elsewhere. \section{Conclusion} In summary, we have derived the semiclassical formula for \( \delta {\cal R}_D (\Phi) \) of single chaotic and regular AB billiards in which a weak $B$ is applied to the entire region. We have shown that $h/2e$ oscillations and negative magnetoresistance would appear concurrently in \( \delta {\cal R}_D (\Phi) \): as for {\em h/2e conductance oscillations}, we find the oscillation mainly with a fundamental period $h$/2$e$ and rapid damping of the amplitude with increasing $B$ for the chaotic billiard, versus a large contribution of higher harmonic components and mild damping of the oscillation amplitude for a regular billiard; as for {\em negative magnetoresistance}, the Lorentzian peak and saturation for the chaotic billiard versus a cusplike structure and no saturation for a regular billiard are reproduced. Although Taylor $et$ $al.$ ~\cite{rf:Taylor} recently made an experimental study of the weak localization peak and self-similar structure of magnetoresistance in a semiconductor Sinai billiard, no detailed experimental result on $h/2e$ conductance oscillations has yet been reported. We hope that these characteristics of quantum chaos in quantum magnetotransport will be experimentally observed in ballistic quantum dots forming an AB geometry. \begin{center} \begin{figure}[t] \hspace{0.3cm} \epsfxsize=8.5cm \epsfbox{Fig.3a.ps} \epsfxsize=8.5cm \epsfbox{Fig.3b.ps} \caption{Magnetic flux dependence of two different components of BWL corrections: (a) $\delta {\cal R}_{NMR}$ contributing to negative magnetoresistance and (b) $\delta {\cal R}_{AAS}$ contributing to the $h/2e$ oscillation for chaotic (solid) and regular (dotted) AB billiards. \(\delta {\cal R}_{NMR} \) and \(\delta {\cal R}_{AAS} \) are normalized to the value at $\Phi=0$. The inset in (a) shows $\delta {\cal R}_{NMR}$ in the vicinity of zero magnetic flux.} \end{figure} \end{center} \section{Acknowledgements} We would like to acknowledge Y. Takane, R.P. Taylor, Y. Ochiai, F.
Nihey, W.A. Lin, and P. Gaspard for valuable discussions and comments. Numerical calculations were performed on FACOM VPP500 in the Supercomputer Center, Institute for Solid State Physics, University of Tokyo.
\section{Introduction} The dynamics of non-abelian gauge fields at finite temperature has attracted some attention recently~\cite{grigoriev}--\cite{son}. A lot of work on this subject has been done in perturbation theory. However, some physical quantities of interest are non-perturbative and the only known method of calculating them on the lattice is the classical approximation~\cite{grigoriev}. One problem of the classical approximation is related to the fact that the high momentum modes do not decouple from the dynamics of the low momentum modes. In the naive perturbative expansion, momenta of order $T$ lead to corrections in the Green's functions which are proportional to $T^2$. For Green's functions with soft external momenta $|\vec{p}|\sim g^2 T$, where $g$ is the gauge coupling, these ``corrections'' can be as large as or even larger than the tree level Green's functions. These large corrections have been named hard thermal loops (HTL) and a consistent perturbative expansion requires that they be resummed~\cite{pisarski}. In the resummed perturbation theory the gauge field propagators reflect two distinct collective phenomena. One is the plasma oscillations, which involve simultaneous oscillations of the low and the high momentum degrees of freedom in the plasma. The characteristic frequency of these oscillations is proportional to $gT$. Secondly, there is an effect called Landau damping, which is related to an energy transfer from the low momentum degrees of freedom to the high momentum ones. In the classical approximation the physics of the high momentum degrees of freedom is not correctly described. The hard thermal loops correspond to classically UV-divergent contributions \cite{bodeker}. How the hard thermal loops affect non-perturbative correlation functions of the low momentum fields is an open problem. The physical picture sketched above is based on 1-loop resummed perturbation theory. It is therefore interesting to see whether it also shows up in non-perturbative lattice studies. In a recent paper~\cite{tang} the correlation functions of two gauge invariant operators in the SU(2)+Higgs model (in continuum notation), \begin{eqnarray} H (t) &=& \frac{1}{\sqrt{V}}\int \!\! d^3 x \,\varphi^\dagger(t,\vec{x}) \varphi(t,\vec{x}), \la{operH} \\ W_i^a(t) &=& \frac{1}{\sqrt{V}}\int\!\! d^3 x \, i{\rm Tr\,}\[\Phi^\dagger(t,\vec{x}) D_i\Phi(t,\vec{x})\tau^a\], \la{operW} \end{eqnarray} were computed with classical lattice simulations. Here $\varphi$ denotes the Higgs doublet, $\tau^a$ is a Pauli matrix, $\Phi$ is the matrix $\Phi = (\widetilde\varphi\, \varphi)$ with $\widetilde{\varphi} = i \tau^2 \varphi^*$, and $V$ denotes the space volume. In the broken phase the correlator of $W_i^a(t)$ is given by the gauge field propagator. It was claimed that the plasmon frequency corresponding to this correlator is independent of the lattice spacing $a$. This contradicts the picture above since in the classical theory the plasmon frequency squared is proportional to $T/a$ instead of $T^2$. We will demonstrate here that the qualitative features of the correlators can be understood using perturbation theory and that the claim of ref.~\cite{tang} concerning the broken phase $W_i^a$ plasmon frequency is not justified. Ambj\o rn and Krasnitz \cite{ambjorn} have reported a measurement of the gauge field correlator in the Coulomb gauge in the symmetric phase. We will see that many of the features observed can be qualitatively understood in perturbation theory, but quantitative discrepancies remain.
\subsubsection*{Real time correlators and the HTL effective action} The plasmon properties of the classical Hamiltonian SU(2)+Higgs gauge theory could be computed by solving the classical equations of motion perturbatively with given initial conditions, and by then averaging over the initial conditions with the Boltzmann weight~\cite{grigoriev}. However, it appears that in perturbation theory it is simpler to perform the computation in the full quantum theory and to take the classical limit $\hbar\to 0$ only in the end. We use this approach. In the classical limit the operator ordering in a correlation function is irrelevant. Thus the quantum expression of which we are going to take the classical limit can be anything but a commutator. This still leaves several possibilities, and we choose to consider \begin{eqnarray} C_{\cal O}(t,\vec{p}) = \fr12 \int\! d^3 x e^{-i\vec{p}\vec{x}} \left\langle {\cal O}(t,\vec{x}) {\cal O}(0,\vec{0}) + {\cal O}(0,\vec{0}) {\cal O}(t,\vec{x}) \right\rangle. \la{sym} \end{eqnarray} In a perturbative computation, one first computes the corresponding two-point Green's function ${\cal D}_{\cal O}(i\omega_n,{\bf p})$ in Euclidean space for the Matsubara frequencies $\omega_n$. Then one performs an analytic continuation which depends on the real time correlator in question. The symmetric combination in eq.~\nr{sym} is given by \begin{eqnarray} C_{\cal O}(t,{\bf p}) = \hbar\int_{-\infty}^{\infty}\frac{d\omega}{2\pi} e^{-i\omega t}\left[1+2 n(\omega)\right] \mathop{\rm Im} {\cal D}_{\cal O}\left( i\omega_n\to \omega+i\epsilon,{\bf p} \right), \la{relation} \end{eqnarray} where \begin{eqnarray} n(\omega)=\frac{1}{e^{\beta\hbar\omega}-1} \quad \end{eqnarray} is the Bose distribution function. To evaluate the $\omega$-integral in eq.\ (\ref{relation}) numerically, it is convenient to rewrite it (for $t>0$) as an integral in the upper half of the complex $\omega$-plane. In this way one avoids integrating along the poles and discontinuities on the real $\omega$-axis. Denoting $\omega = \omega_1 + i \omega_2$, one gets for $t>0$ in the classical limit $n(\omega)\to T/(\hbar\omega)$, \begin{eqnarray} C_{\cal O}^{\rm classical}(t,{\bf p}) = T \lim_{\hbar\to 0} {\cal D}_{\cal O}(0,{\bf p})+ e^{\omega_2 t}\int_{-\infty}^{\infty} \frac{d \omega_1}{2\pi i}\frac{T}{\omega_1+i\omega_2} e^{-i\omega_1 t} \lim_{\hbar\to 0} {\cal D}_{\cal O}(\omega_1+i\omega_2,{\bf p}). \la{Ctp} \end{eqnarray} Note that this expression is independent of the imaginary part $\omega_2>0$. The second term on the r.h.s.\ of eq.~\nr{Ctp} vanishes for the equal time case $t=0$. A further virtue of eq.~\nr{Ctp} is that one can take the limit ${\bf p}\to 0$ inside the integrand since $\omega_2>0$ guarantees that there are no singularities on the integration contour. In the remainder of this letter we will discuss only classical correlation functions\footnote{Except in eq.~\nr{AA} and in the discussion thereafter.} and we will therefore omit the superscript 'classical' from the l.h.s.\ of eq.~\nr{Ctp}. We thus have to evaluate ${\cal D}_{\cal O}(\omega,{\bf p})$. As discussed above, a consistent perturbative expansion requires the resummation of the hard thermal loops~\cite{pisarski}. In other words, the high momentum modes $p\mathop{\gsi} T$ (or $p\mathop{\gsi} a^{-1}$ in the classical theory) are integrated out, giving an effective theory for the low momentum modes $p\mathop{\lsi} gT$. This amounts to including the hard thermal loops in the tree level action. 
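Before specifying the HTL effective theory, we note that the contour prescription in eq.~\nr{Ctp} is straightforward to implement numerically. The following minimal sketch uses a hypothetical single-pole test function in place of the resummed propagators, with purely illustrative parameter values; it evaluates the $\omega_1$-integral by direct quadrature and checks the two properties noted above: the result does not depend on the contour height $\omega_2$, and at $t=0$ the contour term vanishes so that the correlator reduces to $T {\cal D}(0,{\bf p})$.
\begin{verbatim}
import numpy as np

# Minimal numerical sketch of the contour formula, eq. (Ctp): the classical
# correlator obtained from a retarded Green's function evaluated on the line
# Im(omega) = w2 > 0.  The single-pole test function D(w) below is purely
# illustrative and stands in for the HTL-resummed propagators of the text.

T, w0, Gam = 1.0, 1.0, 0.1            # temperature, pole position, width (test values)

def D(w):                              # hypothetical retarded test Green's function
    return 1.0 / (w0**2 - (w + 1j * Gam)**2)

def C_classical(t, w2=0.5, wmax=200.0, n=400001):
    """Evaluate eq. (Ctp) by direct quadrature along Im(omega) = w2."""
    w1, dw = np.linspace(-wmax, wmax, n, retstep=True)
    integrand = T / (w1 + 1j * w2) * np.exp(-1j * w1 * t) * D(w1 + 1j * w2)
    contour_term = np.exp(w2 * t) * integrand.sum() * dw / (2j * np.pi)
    return (T * D(0.0) + contour_term).real

# Two properties stated in the text: the result does not depend on the contour
# height w2, and at t = 0 the contour term vanishes, so that C(0) = T * D(0).
for w2 in (0.2, 0.5, 1.0):
    print(w2, C_classical(0.0, w2=w2), C_classical(5.0, w2=w2))
print("T*D(0) =", (T * D(0.0)).real)
\end{verbatim}
For the resummed propagators used below, only ${\cal D}_{\cal O}(\omega_1+i\omega_2,{\bf p})$ in the integrand needs to be replaced.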
Denoting $P=(\omega_n,{\bf p})$, the quadratic part of the Euclidean HTL effective theory in momentum space is \begin{eqnarray} {\cal L}_{\rm HTL}^{(2)} = \fr12 A_\mu^a A_\nu^a \left[ P^2 \delta^{\mu\nu}-P^\mu P^\nu+\Pi_{\rm HTL}^{\mu\nu}(P) \right] +\varphi^\dagger \varphi \left[P^2 - \fr12 m_H^2 + \Sigma_{\rm HTL} \right], \la{htl} \end{eqnarray} where $m_H$ is the Higgs mass. Eq.~\nr{htl} gives directly the gauge and scalar field free propagators ${\cal D}_{\mu\nu}^{ab}(\omega, \vec{p})$, ${\cal D}(\omega, \vec{p})$, to be used in the computation of ${\cal D}_{\cal O}(\omega,{\bf p})$. Note that $\Sigma_{\rm HTL}$ is momentum independent. Of the HTL self-energies, we will need especially the components $\Pi_{\rm HTL}^{ij}(P)$, $\Sigma_{\rm HTL}$. The expressions are~\cite{pisarski,bodeker,arnold97}, after the replacement $\omega_n\to -i\omega$, \begin{eqnarray} \Pi_{\rm HTL}^{ij}(P) & = & -g^2 (2 N + N_{\rm s})\hbar \int\!\! \frac{d^3q}{(2\pi)^3} \, \frac{dn(\omega_q)}{d\omega_q} \frac{v^iv^j\omega}{\omega- p^iv^i}, \la{PiW} \\ \Sigma_{\rm HTL} & = & \(6 \lambda + \frac94 g^2\) \hbar \int\!\! \frac{d^3q}{(2\pi)^3}\,\frac{n(\omega_q)}{\omega_q}, \la{PiH} \end{eqnarray} where, for the SU(2)+Higgs model, $N=2$, $N_{\rm s}=1$. Furthermore, \begin{eqnarray} v^i = \frac{\partial \omega_q }{\partial q_i} \end{eqnarray} where $\omega_q$ is the tree level dispersion relation; for a space-time with discretized spatial dimensions (a cubic lattice), \begin{equation} \omega_q = \frac{2}{a} \sqrt{{\textstyle \sum_i}\sin^2\! \left(a q_i/2 \right)}, \quad v^i = \frac{\sin (a q^i)}{a\omega_q}. \end{equation} Eqs.~\nr{PiW}, \nr{PiH} hold actually for a generic dispersion relation~\cite{arnold97}. In ref.~\cite{bodeker}, scalar electrodynamics was considered\footnote{The result for $\Pi_{\rm HTL}^{ij}$ given in~\cite{bodeker} has the wrong sign.} for which case $g^2(2N+N_{\rm s})$ has to be replaced with $ 2e^2$ in eq.~\nr{PiW}. Note that in the theory of eq.~\nr{htl}, the gauge field $A_0$ is included. In the classical lattice simulations~\cite{tang,ambjorn}, in contrast, one puts $A_0=0$ and imposes the Gauss constraint explicitly. The hard thermal loop self-energy simplifies in the limit ${\bf p}=0$, which is relevant for the operators in eqs.~\nr{operH}, \nr{operW}. In that limit, the gauge field self-energy is \begin{eqnarray} \Pi_{\rm HTL}^{00}=0,\quad \Pi_{\rm HTL}^{0i}=0,\quad \Pi_{\rm HTL}^{ij}=\delta^{ij} \omega_W^2, \end{eqnarray} where $\omega_W$ is the plasmon frequency, \begin{eqnarray} \omega_W^2=-\fr13 g^2 (2N+N_{\rm s})\hbar \int\! \frac{d^3q}{(2\pi)^3} \frac{dn(\omega_q)}{d\omega_q}|{\bf v}|^2. \la{oW2} \end{eqnarray} Let us also introduce the ``scalar plasmon frequency'' \begin{eqnarray} \omega_H^2 = -\fr12 m_H^2 + \Sigma_{\rm HTL}. \la{oH2} \end{eqnarray} In the continuum limit of the full quantum theory, for which $|{\bf v}|^2=1$, one has \begin{eqnarray} \omega_W^2 & = & g^2(2 N+N_{\rm s})\frac{T^2}{18} \frac{1}{\hbar},\la{omegaWT} \\ \omega_H^2 & = & -\fr12 m_H^2+ \left(6\lambda + \fr94g^2\right)\frac{T^2}{12} \frac{1}{\hbar}, \la{omegaHT} \end{eqnarray} whereas in the classical limit $n(\omega)\to T/(\hbar\omega)$ on a cubic lattice, one gets \begin{eqnarray} \omega_W^2 & = & g^2 (2 N+N_{\rm s}) \left(\fr32\frac{\Sigma}{\pi}-1 \right) \frac{T}{12a},\la{omegaWa} \\ \omega_H^2 & = & -\fr12 m_H^2 + \left(6\lambda + \fr94g^2\right) \frac{\Sigma}{4\pi}\frac{T}{a}. \la{omegaHa} \end{eqnarray} Here \begin{eqnarray} \Sigma = \frac{1}{\pi^2}\int_{-\pi/2}^{\pi/2}\!\! 
d^3x \frac{1}{\sum_i{\sin}^2x_i}= 3.175911535625 \la{Sigma} \end{eqnarray} is a constant which can be expressed in terms of the complete elliptic integral of the first kind~\cite{fkrs}. The basic problem of the classical real time simulations can now be expressed as follows. For the scalar field, one has a mass parameter in the classical SU(2)+Higgs theory which does not break gauge invariance. It can be tuned such that the high momentum modes produce the correct quantum expression in eq.~\nr{omegaHT}, and that the lattice spacing dependence in eq.~\nr{omegaHa} is canceled. For the gauge fields, in contrast, the local classical SU(2)+Higgs theory does not allow a mass term and hence the divergent classical expression in eq.~\nr{omegaWa} cannot be arranged to coincide with the quantum expression in eq.~\nr{omegaWT}. This divergence should also show up in the classical simulations. This problem is specific for time dependent correlation functions: in the static case only the $\omega_n=0$ sector requires a resummation and $\Pi_{\rm HTL}^{ij}(0,\vec{p})=0$. Finally, let us fix some notation. The continuum field $\varphi$ in eqs.~\nr{operH}, \nr{operW} is related to the dimensionless lattice field $\bar{\varphi}$ in~\cite{tang} by \begin{eqnarray} \bar\varphi^\dagger\bar\varphi= \frac{a}{T\beta_G}\varphi^\dagger\varphi, \quad \beta_G = \frac{4}{ag^2T}. \end{eqnarray} The operators measured on the lattice are then \begin{eqnarray} \bar H & = & \frac{1}{\sqrt{N^3}}\sum_{\bf x} \bar\varphi^\dagger({\bf x})\bar\varphi({\bf x}), \\ \bar{W}^a_i & = & \frac{1}{\sqrt{N^3}}\sum_{\bf x} i{\rm Tr\,} \left[\bar\Phi^\dagger(\vec{x}) U_i(\vec{x}) \bar\Phi(\vec{x}+\vec{e}_i a)\tau^a\right], \end{eqnarray} where $U_i(\vec{x})$ is the link operator. The correlators measured with these operators are denoted by $C_{\bar H}(t)$, $\fr13\delta^{ab}\delta_{ij}C_{\bar W}(t)$, and they are related in the continuum limit $a\to 0$ to the correlators $C_H(t), C^{ab}_{W,ij}(t)$ of the continuum operators in eqs.~\nr{operH}, \nr{operW} through $C_H(t)=a \beta_G^2 T^2 C_{\bar H}(t)$, $C^{ab}_{W,ij}(t)=\fr13 \delta^{ab}\delta_{ij} a^{-1} \beta_G^2 T^2 C_{\bar W}(t)$. The factor $3$ in the definition of $C_{\bar W}(t)$ corresponds to a sum over the different isospin components. \subsubsection*{The broken phase} Let us first consider the broken phase. The real time correlators of the operators $\bar H(t)$, $\bar W^a_i(t)$ have been determined in the classical approximation in the broken phase of the SU(2)+Higgs model by Tang and Smit~\cite{tang}. We parameterize the scalar field in the broken phase as \begin{eqnarray} \Phi=\frac1{\sqrt{2}}\[v(T) + \phi_0+ i\tau^a\phi_a\]. \end{eqnarray} Then, in continuum notation, the lowest order terms in the gauge invariant operators of eqs.~\nr{operH}, \nr{operW} become \begin{eqnarray} H(t) & = & {\rm const.} + \frac{1}{\sqrt{V}} \int \!\!d^3x\, v(T)\phi_0(t,{\bf x}) , \\ W^a_i (t) & = & \frac{1}{\sqrt{V}} \int \!\!d^3x\, v(T) \Bigl[ m_W(T) A^a_i(t,{\bf x}) - \partial_i \phi_a(t,{\bf x}) \Bigr], \la{lWa} \end{eqnarray} where $m_W(T)=gv(T)/2$. Due to the integration $\int\! d^3x$, these operators correspond to zero spatial momentum, ${\bf p}=0$, so that the second term in eq.~\nr{lWa} does not contribute. It follows that the leading terms in the correlation functions $C_H(t)$ and $C^{ab}_{W,ij}(t)$ are given by the propagators of the elementary fields $\phi_0$ and $A^a_i$. 
To compute the required propagators, one can set ${\bf p}=0$ in the time dependent part of eq.~\nr{Ctp}, so that eqs.~\nr{oW2}, \nr{oH2} can be used. At zero spatial momentum the full Euclidean (Matsubara) propagators for the fields $A^a_i$, $\phi_0$ are of the form \begin{eqnarray} {\cal D}_{ij}^{ab}(i\omega_n, \vec{0}) = \frac{\delta^{ab}\delta_{ij}} {\omega_n^2+(\omega_W^{\rm b})^{2}+ \mbox{$^\ast\Pi(i\omega_n)$}},\quad {\cal D}(i\omega_n, \vec{0}) = \frac{1}{\omega_n^2+(\omega_H^{\rm b})^{2}+ \mbox{$^\ast\Sigma(i\omega_n)$}}, \end{eqnarray} where $^\ast\Pi$, $^\ast\Sigma$ denote the parts of the self-energies which are generated radiatively within the HTL effective theory. The tree-level terms are \begin{eqnarray} (\omega_W^{\rm b})^2 & = & m_W^2(T)+0.05379 \beta_G (g^2T)^2, \la{wbW} \\ (\omega_H^{\rm b})^2 & = & m^2_H(T), \la{wbH} \end{eqnarray} where $m_W(T)$, $m_H(T)$ are the mass parameters generated by the Higgs mechanism in the static theory: $m_W(T)=gv(T)/2$, $m_H^2(T)=\omega_H^2+3 \lambda v^2(T)$. Here the scalar mass parameter was tuned to its correct value by using the counterterm of the static classical theory and the fact that $\Sigma_{\rm HTL}$ is momentum independent. For $(\omega_W^{\rm b})^2$, in contrast, the $\beta_G$-dependent part comes from eq.~\nr{omegaWa} and cannot be removed. Parametrically, $(\omega^{\rm b})^{2}\sim g^2T^2$ and $^\ast\Pi(\omega_W^{\rm b})$, $^\ast\Sigma(\omega_H^{\rm b})\sim g^3T^2$, so that the dominant contributions to the $H$ and $W^a_i$ plasmon frequencies (appearing as $C(t,{\bf p}) \sim C_0 \exp({-\Gamma t}) \cos(\omega_{\rm pl} t + \delta)$ in the correlator) are just $\omega_H^{\rm b}$ and $\omega^{\rm b}_W$ according to eq.~\nr{Ctp}. The damping rates $\Gamma$ are related to $^\ast\Pi(\omega_W^{\rm b})$ and $^\ast\Sigma(\omega_H^{\rm b})$. \begin{figure}[tb] \vspace*{-1.0cm} \hspace{1cm} \epsfysize=18cm \centerline{\epsffile{omega.eps}} \vspace*{-6cm} \caption[a]{ A comparison of the leading order perturbative plasmon frequencies with those determined on the lattice in~\cite{tang}. The continuous curves are the perturbative values from eqs.~\nr{wbW}, \nr{wbH}, and the squares are the data points from Table~5 in~\cite{tang}. Open symbols correspond to $\omega_W^{\rm b}$ and filled to $\omega_H^{\rm b}$. The corresponding $\beta_G$ values are given next to the symbols.} \la{omega} \end{figure} In Fig.~\ref{omega} we compare the leading order plasmon frequencies in eqs.~\nr{wbW}, \nr{wbH} with the lattice results of ref.~\cite{tang}. The value of $v(T)$ has been determined from the 1-loop effective potential. The zero-temperature parameters used in~\cite{tang} correspond to $m_H$ = 80 GeV. It is seen that the lattice results are remarkably close to the leading order perturbative results. In particular, we conclude that the gauge field plasmon frequency $\omega_W^{\rm b}$ diverges in the continuum limit according to eq.~\nr{wbW}, while the scalar plasmon frequency $\omega_H^{\rm b}$ remains finite and equals the static screening mass at leading order. One sees that the lattice is so coarse that it is difficult to notice the divergence of $\omega_W^{\rm b}$ since this is shadowed by the finite $m_W(T)$. Thus one is in a sense not close enough to the continuum limit. It should also be noted that the amplitude of $C^{ab}_{W,ij}(t)$ dies out as $1/(\omega_W^{\rm b})^{2}$ in the continuum limit. 
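Note that the $\beta_G$-dependent term in eq.~\nr{wbW} is just the classical lattice plasmon frequency of eq.~\nr{omegaWa} rewritten in terms of $\beta_G$: with $N=2$, $N_{\rm s}=1$ and $a=4/(\beta_G g^2 T)$,
\begin{eqnarray}
g^2 (2 N+N_{\rm s}) \left(\fr32\frac{\Sigma}{\pi}-1 \right) \frac{T}{12a} = \frac{2N+N_{\rm s}}{48} \left(\fr32\frac{\Sigma}{\pi}-1 \right) \beta_G\, (g^2T)^2 \approx 0.05379\, \beta_G\, (g^2T)^2. \nonumber
\end{eqnarray}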
The damping rates of the gauge and scalar fields are parametrically of order $g^2T$ and therefore, in contrast to the plasmon frequencies, they are classical. This has been demonstrated explicitly in a scalar field theory~\cite{aarts,jakovac}. A full computation of the damping rates in the broken phase of the SU(2)+Higgs theory is missing at the moment. However, the order of magnitude can apparently be understood~\cite{tang} using the known symmetric phase gauge and Higgs elementary field damping rates~\cite{braaten,biro}. \subsubsection*{The symmetric phase} Let us now turn to the symmetric phase. The correlators of the composite operators in eqs.~\nr{operH}, \nr{operW} have been determined in the symmetric phase of the SU(2)+Higgs model by Tang and Smit~\cite{tang}. In addition, the gauge field correlator of the pure SU(2) model has been measured in the Coulomb gauge by Ambj{\o}rn and Krasnitz~\cite{ambjorn}. Consider first the composite operator correlators measured in~\cite{tang}. In the symmetric phase, the composite operator character of $H(t)$ and $W^a_i(t)$ manifests itself more clearly than in the broken phase. For instance, the leading term in $\bar W^a_i$ is \begin{eqnarray} \bar W^a_i = \frac{1}{\sqrt{N^3}}\sum_\vec{x} \left[ \bar\phi_0(\vec{x} + \vec{e}_i a)\bar\phi_a(\vec{x})-\bar\phi_0(\vec{x})\bar\phi_a(\vec{x} + \vec{e}_i a)-\epsilon^{abc} \bar\phi_b(\vec{x})\bar\phi_c(\vec{x} + \vec{e}_i a)\right], \end{eqnarray} which does not contain the gauge field $A^a_i$ at all. \begin{figure}[tb] \vspace*{-4.0cm} \hspace{1cm} \epsfysize=30cm \centerline{\epsffile{diagram.ps}} \vspace*{-22cm} \caption[a]{The lowest order diagram contributing to $C_H(t)$ and $C^{ab}_{W,ij}(t)$ in the symmetric phase. The solid lines denote scalar propagators.} \label{diagram} \end{figure} Due to the fact that no gauge fields are involved, the leading terms in $C_H(t)$ and $C^{ab}_{W,ij}(t)$ can be easily computed: both are given by diagrams of the type depicted in Fig.~\ref{diagram}. To evaluate them one has to start with a Matsubara external momentum $p_0 = i\omega_n$. Then the sum over the loop frequencies is written in terms of an integral in the complex $k_0$ plane (see, e.g.,~\cite{kapusta}). Only then can one continue to arbitrary complex values of $p_0$. For the operator ${\cal O}(t,\vec{x}) = \varphi^\dagger\varphi(t,\vec{x})$, the diagram in Fig.~\ref{diagram} finally gives \begin{eqnarray} & & {\rm Im} {\cal D}_{\varphi^\dagger\varphi}(p_0 + i\epsilon,\vec{p}) = \nonumber \\ & & \quad\quad 4 \int\! \frac{d^4k}{(2\pi)^4} \[n(k_0) - n(k_0+p_0) \] {\rm Im} {\cal D}(k_0 +i\epsilon, \vec{k})\, {\rm Im} {\cal D}(k_0 + p_0 + i\epsilon, \vec{k} + \vec{p}). \hspace*{1.0cm} \label{imDphi2} \end{eqnarray} This expression\footnote{Eq.~\nr{imDphi2} can be written in other forms by a change of integration variables, but then one may get a wrong result for the free case, if $\epsilon$ is put to zero inside the integral and eq.~(\ref{help1}) is used.} is equivalent to a corresponding one derived in ref.~\cite{jeon}. The correlator $C_H(t)=\fr12\langle H(t)H(0)+H(0)H(t)\rangle$ for the operator $H(t)$ of eq.~\nr{operH} is given by $C_H(t)=C_{\varphi^\dagger\varphi}(t,\vec{p}=0)$, where $C_{\varphi^\dagger\varphi}(t,\vec{p})$ is obtained from eq.~\nr{relation}. 
To get the leading contribution in $C_H(t)$, one uses the free propagators for which \begin{eqnarray} {\rm Im} {\cal D}_{\rm free}(k_0 +i\epsilon, \vec{k}) = \frac{\pi}{2\sqrt{\vec{k}^2 + \omega_H^2}} \[\delta\(k_0 - \sqrt{\vec{k}^2 + \omega_H^2}\) - \delta\(k_0 + \sqrt{\vec{k}^2 + \omega_H^2}\)\]. \label{help1} \end{eqnarray} Then the integrals over $k_0$ and $p_0$ can be performed in eqs.~\nr{imDphi2}, \nr{relation}. In the limit $ \vec{p}\to 0$, one obtains \begin{eqnarray} \label{chfree} C_H(t) = T^2 \int \frac{d^3k}{(2\pi)^3} \frac{1}{(\vec{k}^2 +\omega_H^2)^2} \left[1 + \cos\left(2\sqrt{\vec{k}^2 +\omega_H^2}\, t\right)\right]. \la{loCHt} \end{eqnarray} On the lattice (in the continuum limit) this corresponds to \begin{eqnarray} C_{\bar H}(t)= \frac{1}{4 \pi^2\beta_G^2} \int_0^{\infty} dx \frac{x^2}{(x^2+z^2)^2}\left[1+ \cos(4 \bar t\sqrt{x^2+z^2})\right], \la{symch} \end{eqnarray} where $z=a\omega_H/2$ and $\bar t=t/a$. The vector correlator $ C^{ab}_{W,ij}(t)$ has an additional factor of $k_ik_j$ in the integrand compared with eq.~\nr{loCHt}. Therefore the continuum limit of $ C^{ab}_{W,ij}(t=0)$ does not exist. On the lattice one obtains \begin{eqnarray} C_{\bar W}(t) = \frac{2}{\pi^3\beta_G^2} \int_0^{\pi/2}\!d^3 x \frac{\sum_i\sin^2 2x_i}{[\sum_i\sin^2 x_i+z^2]^2}\left[1+ \cos\left(4 \bar t \sqrt{{\textstyle \sum_i}\sin^2 x_i + z^2} \right) \right]. \la{symcw} \end{eqnarray} \begin{figure}[tb] \vspace*{-1.0cm} \hspace{1cm} \epsfysize=18cm \centerline{\epsffile{ch.eps}} \vspace*{-6cm} \caption[a]{ The leading order perturbative correlator $C_{\bar H}(t)$ in lattice units in the symmetric phase at $T/T_c=1.52$. To be compared with Fig.~8 in~\cite{tang}. The oscillation frequency is $2\omega_H$ (this agrees at leading order with the static screening mass).} \la{ch} \end{figure} \begin{figure}[tb] \vspace*{-1.0cm} \hspace{1cm} \epsfysize=18cm \centerline{\epsffile{cw.eps}} \vspace*{-6cm} \caption[a]{ The leading order perturbative correlator $C_{\bar W}(t)$ in lattice units in the symmetric phase at $T/T_c=1.52$. To be compared with Fig.~10 in~\cite{tang}. The oscillation period here is proportional to the lattice spacing $a$.} \la{cw} \end{figure} The correlation function $C_{\bar H}(t)$ is shown in Fig.~\ref{ch} and $C_{\bar W}(t)$ in Fig.~\ref{cw}. These should be compared with Figs.~8, 10 in~\cite{tang}, respectively. It is seen that the qualitative features can be understood quite well with the leading order results. Note, in particular, that the scalar correlation function $C_{\bar H}(t)$ in Fig.~\nr{ch} is oscillating with an amplitude which decreases with time. It should be emphasized that this decrease is not related to damping (remember that Fig.~\ref{ch} shows the tree level results without any interactions and damping occurs only through interactions). The decrease is rather due to the fact that there is a continuous spectrum of frequencies $\omega>2\omega_H$ causing a destructive interference in the phase space integral, eq.~\nr{symch}. This shows that it is difficult to determine a damping rate from the gauge invariant operator $C_{\bar H}(t)$. Let us then try to estimate how higher order corrections could modify the qualitative behavior of $C_H(t)$. Consider the effect of self-energy insertions in the scalar propagators ${\cal D}(k_0,\vec{k})$. The self-energy has an imaginary part. Therefore the scalar propagator does not have poles at $k_0=\pm\sqrt{\vec{k}^2+\omega_H^2 }$ (the lowest order result for $C_H(t)$ is due to these poles). 
Since the imaginary part of the self-energy is small compared with $\sqrt{\vec{k}^2 + \omega_H^2 }$, $C_H(t)$ will nevertheless still be dominated by the region $k_0 \approx \pm\sqrt{\vec{k}^2+\omega_H^2 }$. One can therefore approximate the scalar propagator as \begin{eqnarray} {\cal D} (k_0 + i\epsilon, \vec{k}) \approx \frac{1}{-(k_0 + i\Gamma_k)^2 + \vec{k}^2 + \omega_H^2 }, \label{breit} \end{eqnarray} where the width $\Gamma_k$ is given by (see, e.g., \cite{jeon}) $\Gamma_k \equiv -{\rm Im}\Sigma(\sqrt{\vec{k}^2 + \omega_H^2 },\vec{k})/(2\sqrt{\vec{k}^2 + \omega_H^2 })$. Inserting this into eq.~(\ref{imDphi2}) and using eq.~(\ref{relation}), we find \begin{eqnarray} C_H(t) = T^2 \int\frac{d^3k}{(2\pi)^3} \frac{e^{-2\Gamma_k t}} { [\vec{k}^2 + \omega_H^2 ][\vec{k}^2 + \omega_H^2 + \Gamma_k^2]} \left[1 + \cos\left(2\sqrt{\vec{k}^2 + \omega_H^2} \,t - 2 \alpha_k\right)\right], \la{gCHt} \end{eqnarray} where $\alpha_k = \arctan\left({\Gamma_k}/{\sqrt{\vec{k}^2 + \omega_H^2}} \right)$. Thus the effect of damping should show up in Fig.~\ref{ch} such that the constant part (corresponding to the first term in the square brackets in eq.~\nr{gCHt}) decays away at large times. This is indeed the qualitative behavior observed in~\cite{tang}. Finally, we consider the transverse gauge field correlator $C_{A_{\scriptstyle \rm t}}(t,{\bf p})$ measured in~\cite{ambjorn}. Gauge fields can only be defined in a particular gauge, which in~\cite{ambjorn} was chosen to be the Coulomb gauge. We let the external momentum point into the $x_3$-direction, ${\bf p}={\bf e}_3 (2\pi k/L)$, where $L$ is the spatial extent of the lattice and $k$ is an integer. Then $C_{A_{\scriptstyle \rm t}}(t,{\bf p})$ is given, e.g., by the correlator of $A_1^a$, \begin{eqnarray} C_{A_{\scriptstyle \rm t}}(t,{\bf p}) = \frac12 \int d^3 x e^{-i\vec{p}\vec{x}} \langle A_1^a(t,{\bf x}) A_1^a(0,{\bf 0}) + A_1^a(0,{\bf 0}) A_1^a(t,{\bf x}) \rangle. \la{AA} \end{eqnarray} Let us first recall some features of $C_{A_{\scriptstyle \rm t}}(t,\vec{p})$ and of the corresponding analytic Green's function ${\cal D}_{A_{\scriptstyle \rm t}}(\omega,\vec{p})$ for $|\vec{p}|\sim g^2 T$ in the quantum theory. After the HTL resummation, ${\cal D}_{A_{\scriptstyle \rm t}}(\omega,\vec{p})$ has poles at $\omega\approx\pm\omega_W$, where $\omega_W$ is given by eq.~\nr{omegaWT}. These poles lead to an oscillation of $C_{A_{\scriptstyle \rm t}}(t,\vec{p})$ on the time scale $\sqrt{\hbar}/(gT)$. In addition, ${\cal D}_{A_{\scriptstyle \rm t}}(\omega,\vec{p})$ has a discontinuity on the real $\omega$-axis for $\omega < |{\bf p}|$. This discontinuity is related to Landau damping and it gives a contribution $f_{\rm Landau}(t,\vec{p})$ to $C_{A_{\scriptstyle \rm t}}(t,\vec{p})$. The function $f_{\rm Landau}(t,\vec{p})$ does not involve any oscillations and just constitutes a decaying background for the superimposed plasmon oscillations. The time scale on which $f_{\rm Landau}(t,\vec{p})$ varies is $\mathop{\gsi} 1/(g^2T)$. Higher order corrections to ${\cal D}_{A_{\scriptstyle \rm t}}(\omega,\vec{p})$, which are not included in HTL effective action but are generated radiatively within that theory, lead to a damping of the plasmon oscillations. The plasmon damping rate $\Gamma$ is of order $g^2T$ and has been computed for $\vec{p} = 0$ in ref.~\cite{braaten}. 
These two different damping effects manifest themselves in the correlator $C_{A_{\scriptstyle \rm t}}(t,{\bf p})$ in quite different ways, so that its functional form is expected to be \begin{eqnarray} C_{A_{\scriptstyle \rm t}}(t,{\bf p}) \sim A \exp({-\Gamma t})\cos (\omega_{W} t+\delta) + f_{\rm Landau}(t,\vec{p}). \label{form} \end{eqnarray} The time dependence of $f_{\rm Landau}(t,\vec{p})$ becomes non-perturbative for $t \mathop{\gsi} 1/(\hbar g^4T)$ \cite{arnold96}, where $f_{\rm Landau}(t,\vec{p})$ is expected to vanish. This means that $C_{A_{\scriptstyle \rm t}}(t,{\bf p})$ can be computed perturbatively up to a non-perturbative constant $C_{A_{\scriptstyle \rm t}}(0,{\bf p})$ as long as $t\ll 1/(\hbar g^4T)$. In the classical lattice gauge theory one expects a similar qualitative behavior. In the order of magnitude estimates for the quantum theory one has to replace $\hbar\to Ta$. The analytic structure of ${\cal D}_{A_{\scriptstyle \rm t}}(\omega,\vec{p})$ is more complicated than in the quantum case. The HTL resummed ${\cal D}_{A_{\scriptstyle \rm t}}(\omega,\vec{p})$ depends not only on the magnitude but also on the direction of $\vec{p}$ \cite{bodeker}. In particular, there are directions of $\vec{p}$ for which there is a discontinuity for arbitrarily large values of $\omega$ \cite{arnold97}. Nevertheless, the qualitative behavior of $C_{A_{\scriptstyle \rm t}}(t,\vec{p})$ should be given by eq.~\nr{form}. Unfortunately, the numerical results for $C_{A_{\scriptstyle \rm t}}(t,\vec{p})$ in~\cite{ambjorn} have been normalized to $C_{A_{\scriptstyle \rm t}}(0,\vec{p})$ which cannot be computed in perturbation theory. Comparing our perturbative estimates with the non-perturbative results therefore requires some model assumptions about $C_{A_{\scriptstyle \rm t}}(0,\vec{p})$. In order to account for the plasmon damping effects, we include the leading order damping rate $\Gamma$ in the HTL resummed propagator ${\cal D}^{ab}_{ij}(\omega,{\bf p})$. Since the momentum dependence of $\Gamma$ is not known we use its value for $\vec{p} = 0$ which is $\Gamma = 0.176 g^2T$ \cite{braaten}. We can then write for the transverse components in eq.~\nr{AA}, analogously to eq.~\nr{breit}, \begin{eqnarray} {\cal D}^{ab}_{11}(\omega+i\epsilon,{\bf p}) = \frac{\delta^{ab}}{ -(\omega+i \Gamma)^2 + \vec{p}^2 + \Pi_{\rm HTL}^{11}(\omega, {\bf p})}. \la{breit2} \end{eqnarray} Consider first the zero external momentum case, ${\bf p}={\bf 0}$. Eq.~\nr{breit2} only makes sense when $\Gamma^2 \ll \vec{p}^2 + \mathop{\rm Re} \Pi^{11}_{\rm HTL}(\omega,\vec{p})$. In the latter term in eq.~\nr{Ctp} there should be no problem, since for $\vec{p}=0$, $\Pi^{11}_{\rm HTL}(\omega,\vec{p}) \to \omega_W^2$, but for the first term (i.e., for the static limit $t=0$) the inequality is not satisfied (remember that $\Pi_{\rm HTL}^{11}(0, {\bf p})=0$). One might try to regulate the real part of the gauge fixed self-energy phenomenologically with a ``magnetic mass''; letting $M=\sqrt{m_{\rm magn}^2+\Gamma^2+\omega_W^2}$, $\widetilde M=\sqrt{m_{\rm magn}^2+\omega_W^2}$, it follows from eq.~\nr{Ctp} that this would give \begin{eqnarray} C_{A_{\scriptstyle \rm t}}(0) & = & \frac{T}{m_{\rm magn}^2+\Gamma^2}, \\ C_{A_{\scriptstyle \rm t}}(t)- C_{A_{\scriptstyle \rm t}}(0) & = & -\frac{T}{M^2}+\frac{T}{M\widetilde M} e^{-\Gamma t}\cos\! \left(\widetilde M t- \arctan\frac{\Gamma}{\widetilde M} \right). 
\end{eqnarray} Parameterized this way, one can indeed find quite reasonable agreement with Fig.~6 in~\cite{ambjorn}, but only if $\Gamma$ is chosen to have a large value, $\Gamma \sim (0.8\ldots 1.0) g^2T$. Otherwise one is getting too many oscillations in $C_{A_{\scriptstyle \rm t}}(t)$, not observed in~\cite{ambjorn}. For these large values of $\Gamma$, the magnetic mass parameter is in fact favored to be small or zero. However, these fits are quite phenomenological, and thus we will not consider them any more. For ${\bf p}\neq 0$, the $t=0$ --part of the leading order correlator is still parametrically non-perturbative, but at least it is formally finite for $\Gamma\to 0$ so that the perturbative approximation might be numerically reasonable. To get a feeling about the momentum scales in question, note that for the value $\beta_G=14$ considered in~\cite{ambjorn}, $|\vec{p}|=0.69g^2T\times k$ and $\omega_W=0.87g^2T$ (the latter can be obtained from eq.~\nr{wbW} with $m_W(T)=0$). We have evaluated numerically both the HTL self-energy in eq.~\nr{PiW} and the remaining $\omega_1$-integral in eq.~\nr{Ctp}, for $k=1,\ldots,4$. The resulting correlators are shown in Fig.~\ref{Acorr}. This should be compared with Fig.~6 in~\cite{ambjorn}. \begin{figure}[tb] \vspace*{-1.0cm} \hspace{1cm} \epsfysize=18cm \centerline{\epsffile{Acorr.eps}} \vspace*{-6cm} \caption[a]{ The gauge field correlator in the pure SU(2) theory. To be compared with Fig.~6 in~\cite{ambjorn}. Here $t$ is expressed in lattice units: $\bar t\equiv t/a$. Unless otherwise stated, $\beta_G=14$.} \la{Acorr} \end{figure} The main effects to be seen in Fig.~\ref{Acorr} are the following. For $k=1$ one can see an oscillation in Fig.~\ref{Acorr}, in contrast to Fig.~6 in~\cite{ambjorn}. Thus one could say that the non-perturbative plasmon damping rate is larger than the perturbative estimate $\Gamma = 0.176 g^2T$. Indeed, one has to go to a much larger value, $\Gamma\sim 1.0 g^2T$ ($>|\vec{p}|, \omega_W$), to get enough plasmon damping. Another observation to be made at $k=1$ is that in~\cite{ambjorn} the correlator is already very close to zero at $\bar t/\beta_G^2=0.10$. This is not quite so in Fig.~\ref{Acorr}, but one has to wait much longer for the correlator to vanish. Thus the non-perturbative Landau damping effects seem also to be larger than the leading order perturbative HTL result. At $k=2$ there is no large difference in Landau damping any more, but the plasmon damping appears to be somewhat too weak even with $\Gamma\sim 1.0g^2T$. Finally, at $k=4$ one gets reasonable agreement between the perturbative and lattice results, provided that $\Gamma\sim 1.0g^2T$. One can also see that the $\beta_G$-dependence is reproduced; the plasmon frequency thus diverges according to eq.~\nr{omegaWa}. We have also tried the expression $-\omega^2-2i\omega\Gamma$ in the denominator of eq.~\nr{breit2}, so that the real part is not modified by $\Gamma^2$. The qualitative conclusions and the preferred value $\Gamma\sim 1.0g^2T$ remain the same. Based on those curves, one would nevertheless say that even at $k=2$ there is too little Landau damping in the perturbative estimates, but at $k=4$ one again gets good agreement. \subsubsection*{Summary and Conclusions} We have computed several quantities related to real time correlation functions in the classical SU(2) and SU(2)+Higgs models on the lattice, using hard thermal loop resummed perturbation theory. 
Our results for the gauge field and scalar plasmon frequencies in the broken phase are in remarkable agreement with the numerical lattice simulations in ref.~\cite{tang}. We have reiterated that the classical gauge field plasmon frequency is divergent in the continuum limit and we have demonstrated that this is consistent with the results of ref.~\cite{tang}, where the plasmon frequency was claimed to be lattice spacing independent. For the symmetric phase we have computed gauge invariant scalar and vector correlators as functions of time at the lowest order in perturbation theory. Furthermore, we have estimated the effect of higher order corrections. Our results are in good agreement with ref.~\cite{tang}. We have shown that it is difficult to extract damping rates from the measurement of these correlators. Finally, we have studied the correlator of the transverse gauge field in pure SU(2) gauge theory. While the qualitative features of the numerical simulations in ref.~\cite{ambjorn} are consistent with our perturbative estimates, there appear to be significant quantitative discrepancies. The damping of the plasmon oscillations observed in~\cite{ambjorn} appears to be much stronger than one would expect from the perturbative result for the damping rate~\cite{braaten}. This is puzzling because the damping rate is of the order $g^2 T$ and should therefore have a classical continuum limit. However, one should keep in mind that the perturbative estimates are reliable only if the plasmon frequency is much larger than the damping rate which is not the case for the lattice spacing used in ref.~\cite{ambjorn}. {\bf Acknowledgments.} We are grateful to E.~Berger, P.~Overmann, O.~Philipsen, M.G.~Schmidt and I.O.~Stamatescu for useful discussions. \subsubsection*{Note added} After this paper was submitted, we were informed by J.Smit that in the revised version of Ref.~\cite{tang}, Tang and Smit have weakened their conclusions concerning the lattice spacing independence of the $W$ plasmon frequency in the broken phase, so that their statements are now in better accordance with ours. We thank J.Smit for communication on this issue. In view of the interpretation in~\cite{ay2}, let us stress that the discussion around eq.~\nr{breit} is not meant to be a consistent quantitative estimate of the higher order corrections. We just want to see in which qualitative way the higher order corrections might manifest themselves.
\section{Summary} \label{sec:conclusion} In this work, we explored the transferability of HINTS across different computational domains for differential equations. Specifically, we considered two methods: (1) directly employing HINTS for an equation defined in an unseen domain; (2) adopting transfer operator learning to fine-tune the HINTS trained on the source domain for the unseen/target domain with limited data. By presenting results for Darcy flow and linear elasticity, we demonstrate the effectiveness of both HINTS-based methods for fast, accurate solutions of differential equations. In particular, HINTS with transfer learning, by leveraging the knowledge from both the source domain and the target domain, converges even faster than the direct application of HINTS on the target geometry. This faster convergence, however, comes at the price of still requiring a small dataset on the target domain. While we have focused on a specific instance of $k(\boldsymbol{x})$ ($E(\boldsymbol{x})$ for elasticity) and $f(\boldsymbol{x})$ in \autoref{sec:experiments}, the performance is consistent across the entire test dataset. \autoref{tab:quantified_results} shows the mean, median and standard deviation (STD) of the number of iterations needed for convergence for 100 different cases in the test dataset. \begin{table}[htbp!] \centering \begin{tabular}{||c | c | c | c | c ||} \hline Problem & Method & Mean & Median & STD \\ [0.5ex] \hline\hline \multirow{3}{*}{Darcy} & GS & 403 & 405 & 34.62 \\ \cline{2-5} & HINTS-GS & 165 & 164 & 19.03 \\ \cline{2-5} & Transferred HINTS-GS & \textbf{157} & \textbf{162} & \textbf{15.75} \\ \hline \multirow{3}{*}{Elasticity} & GS & 1029 & 1020 & 64.41 \\ \cline{2-5} & HINTS-GS & 257 & 253 & 34.56 \\ \cline{2-5} & Transferred HINTS-GS & \textbf{176} & \textbf{173} & \textbf{15.68} \\ \hline \end{tabular} \caption{Summary of the results for the two benchmark problems. Mean, median and standard deviation of the number of iterations it takes for each method to converge to machine precision. The statistical measures were computed for 100 target samples of each one of the Darcy and elasticity problems.} \label{tab:quantified_results} \end{table} The capability of the direct application of HINTS on unseen geometries is, to some extent, rather surprising. Seemingly, a DeepONet trained for a fixed geometry (e.g., L-shaped domain) should not be effective on another geometry (e.g., L-shaped domain with a cutout) that is not included in the training dataset. We attribute the functionality of HINTS on unseen geometries to the following two factors: $(1)$ the DeepONet only needs to provide an approximate solution within HINTS, while the task of achieving accuracy is accomplished by the embedded numerical solver; $(2)$ the differential equation defined on the unseen geometry, for the examples that we consider, is similar to the equation defined on the original geometry but with an input function $k(\boldsymbol x)$ ($E(\boldsymbol x)$) defined in an extended domain. For $(1)$, intuitively, the prediction error of the DeepONet caused by the mismatch between the original and the new geometry depends on the difference between the two geometries. Within a reasonable degree of similarity between the two geometries, the DeepONet can still decrease the errors of the low-frequency modes.
For $(2)$, using the case of Darcy flow as an example, it may be shown that the differential equation defined in the L-shaped domain excluding the cutout is equivalent to the same equation defined in the full L-shaped domain, where (a) within the cutout $k(\boldsymbol x)$ is simply padded with zero, and (b) the boundary condition at the cutout boundary is a zero Neumann boundary condition. Technically, our approach of padding $k(\boldsymbol x)$ inside the cutout with zeros conforms with this equivalence. Therefore, generalizing the L-shaped domain to a new geometry (L-shaped domain with a cutout) is transformed into generalizing $k(\boldsymbol x)$ from the GRF in the training dataset to an unseen $k(\boldsymbol x)$, which follows the GRF outside the cutout but equals zero inside it. \section{Introduction} Numerical simulations play a crucial role in scientific and engineering applications such as mechanics of materials and structures \cite{hughes2012finite,simo2006computational,hughes2005isogeometric,jing2002numerical,rappaz2003numerical, goswami2019adaptive,bharali2022robust}, bio-mechanics \cite{zhang2022g2,goswami2022neural}, fluid dynamics \cite{patera1984spectral,kim1987turbulence,cockburn2012discontinuous}, etc. The simulation approach is based on solving linear/nonlinear partial differential equations (PDEs). The efficiency and the accuracy of a simulation approach are typically conflicting goals: the quest for a more efficient numerical solver frequently results in lower numerical accuracy. In engineering simulations, the main requirement is acceptable accuracy at feasible computational cost, in terms of both memory utilization and computing time. Furthermore, the solution process must be stable and dependable. Therefore, determining an appropriate approach for the problem at hand is crucial, and usually determines the outcome of a simulation. In traditional numerical solvers like the finite element method, we often reduce the complex differential equations defining the physical system to a system of linear equations of the form: $\left[\mathbf{K}\right]\{\boldsymbol{u}\} = \{\boldsymbol{f}\}$, where $\left[\mathbf{K}\right]$ is referred to as the stiffness matrix, $\{\boldsymbol{f}\}$ is the force vector, and $\{\boldsymbol{u}\}$ is the set of unknowns. A simple, yet not recommended, way to solve for the unknowns is the direct method: inverting the stiffness matrix and multiplying it with the force vector. However, the direct method becomes impractical when the number of degrees of freedom is large (stiffness matrices of the order of a few million) and/or the stiffness matrix is sparse. At this juncture, iterative solvers come to the rescue. We start with an initial guess for $\{\boldsymbol{u}\}$ and gradually progress towards the true solution for $\{\boldsymbol{u}\}$. The solver iterates until a solution that meets the stopping criterion (typically an error tolerance value) is obtained. Iterative solvers are appropriate for large computational problems because they can frequently be parallelized efficiently. However, proper pre-conditioning of the stiffness matrix is a mandatory requirement. The highly oscillatory components of the solution error can be resolved efficiently on a dense mesh within a few steps of a simple iterative method such as Jacobi iteration or the Gauss-Seidel (GS) method; a minimal sketch of one GS sweep is given below.
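As a point of reference for the hybrid method discussed later, one GS sweep for $\left[\mathbf{K}\right]\{\boldsymbol{u}\} = \{\boldsymbol{f}\}$ can be sketched in a few lines of Python with dense matrices; this is an illustration only, not the implementation used in this work.
\begin{verbatim}
import numpy as np

def gauss_seidel(K, f, u, num_sweeps=1):
    """Plain Gauss-Seidel sweeps for K u = f (illustrative sketch only)."""
    n = len(f)
    for _ in range(num_sweeps):
        for i in range(n):
            # Use the freshest values of u for the already-updated entries.
            sigma = K[i, :i] @ u[:i] + K[i, i + 1:] @ u[i + 1:]
            u[i] = (f[i] - sigma) / K[i, i]
    return u
\end{verbatim}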
The GS method, however, suffers from divergence for non-symmetric and indefinite systems on a coarse grid, as well as from slow convergence associated with low-frequency eigen modes, restricting its application to large-scale linear systems. Recent advances in deep learning, in addition to the developments in computational power, have provided the means to employ neural networks as efficient approximators for PDEs. Their compositional character differs from the traditional additive form of trial functions in linear function spaces, where PDE solution approximations are built using Galerkin, collocation, or finite volume approaches. Their computational parametrization via statistical learning and large-scale optimization approaches makes them increasingly suitable for solving nonlinear and high-dimensional PDEs. However, neural networks often learn the low-frequency eigen modes and tend to avoid the high-frequency modes. This phenomenon, referred to as spectral bias, is observed in numerous applications of neural networks. In a recent work \cite{zhang2022hybrid}, we proposed an efficient approach that synergistically integrates iterative solvers with deep neural operators, exploiting the merits of both components and overcoming the limitations of each. The approach, abbreviated as \emph{HINTS}, improves the convergence of the solution across the spectrum of eigen modes by leveraging the spectral bias of a deep neural operator. As observed in the seminal work on HINTS, the solution is flexible with regard to the computational domain and is transferable to different discretizations. In this work, we investigate the transferability properties, i.e., the ``\emph{T}'' in HINTS, with respect to domain adaptation. Specifically, the information from a model trained on a specific domain (\textit{source}) is employed to infer the solution on a different but closely related domain (\textit{target}). Additionally, we integrate HINTS with the operator-level transfer learning of \cite{goswami2022deep} to improve the convergence rate of HINTS on the target domain. In this scenario, we use a small number of labelled samples from the target domain to fine-tune the target model. The model is initialized with the learnt parameters of the source model and is trained under a hybrid loss function, composed of a regression loss and the Conditional Embedding Operator Discrepancy (CEOD) loss, used to measure the divergence between conditional distributions in a Reproducing Kernel Hilbert Space (RKHS). Only the deeper layers of the target model are trained, acknowledging the widely accepted fact that the shallow layers are responsible for capturing the more general features. In summary, we investigate the capability of HINTS trained on a source domain to operate on a target domain with the following two approaches: \begin{itemize} \item direct application of HINTS to an unseen target domain without retraining the DeepONet; \item usage of transfer learning to fine-tune the HINTS with limited data associated with the target domain. \end{itemize} This is illustrated in \autoref{fig:approaches}. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{media/Fig_method.png} \caption{Diagram showing the proposed approaches. After building the source HINTS, one can use it directly on the target domain (left branch).
In addition, we propose using transfer learning to fine-tune the DeepONet of the source HINTS to get even better performance (right branch).} \label{fig:approaches} \end{figure} In this manuscript, we consider the Darcy model on an L-shaped domain (source) and the same domain with a circular or a triangular cutout (target), as well as the linear elasticity model on a square domain (source) and a square domain with a circular cutout (target). The remainder of the manuscript is organized as follows. In \autoref{sec:related_work}, we review the existing traditional numerical methods for solving linear systems, along with the recently popular deep learning solvers. This section also briefly covers the state-of-the-art neural operators, the Deep Operator Network (DeepONet), and includes a brief discussion of the seminal work on HINTS and operator-level transfer learning. In \autoref{approach}, we present the methodology of HINTS followed by the integration of the transfer learning approach. The experiments carried out to show the efficiency of domain adaptation with and without transfer learning are presented in \autoref{sec:experiments}. Finally, we summarize our observations and provide concluding remarks in \autoref{sec:conclusion}. \section{Method} \label{approach} Without any loss of generality, we consider a family of PDEs defined in a domain $\Omega$: \begin{align}\label{eqn:diffeqn_general} \mathcal{L}_{\boldsymbol{x}}(\boldsymbol{u};k) &= \boldsymbol{f}(\boldsymbol{x}), \quad \boldsymbol{x} \in \Omega \\ \mathcal{B}_{\boldsymbol{x}}(\boldsymbol{u}) &= \boldsymbol{g}(\boldsymbol{x}), \quad \boldsymbol{x} \in \partial\Omega \label{eqn:bound_general}, \end{align} where $\mathcal{L}_{\boldsymbol{x}}$ is a differential operator, $\mathcal{B}_{\boldsymbol{x}}$ is a boundary operator, $k=k(\boldsymbol{x})$ parameterizes $\mathcal{L}_{\boldsymbol{x}}$, $\boldsymbol{f}(\boldsymbol{x})$ and $\boldsymbol{g}(\boldsymbol{x})$ are the forcing terms, and $\boldsymbol{u} = \boldsymbol{u}(\boldsymbol{x})$ is the solution of the given PDE. With a well-trained DeepONet embedded, a HINTS is capable of solving for $\boldsymbol{u}$ corresponding to $k$ and $\boldsymbol{f}$, i.e., it captures the solution operator $\mathcal G$ of the family of PDEs specified by \autoref{eqn:diffeqn_general} and \autoref{eqn:bound_general}: \begin{equation} \label{eqn:source_hints} \mathcal G: k, \boldsymbol{f} \mapsto \boldsymbol{u}\text{ s.t. \autoref{eqn:diffeqn_general} and \autoref{eqn:bound_general} are satisfied}. \end{equation} For detailed information on the implementation of HINTS, readers are referred to \cite{zhang2022hybrid}. Herein, we first construct a source HINTS, i.e., a HINTS with a DeepONet trained offline for \autoref{eqn:diffeqn_general} and \autoref{eqn:bound_general} defined in the source domain $\Omega=\Omega^S$. This is done using $N_S$ labelled source data, $\mathcal D_S = \{k_i, \boldsymbol{f}_i, \boldsymbol{u}_i\}_{i = 1}^{N_S}$. In the DeepONet architecture, the branch network is a convolutional neural network whose input channels each take one of the input functions. The trunk network takes as inputs the spatial locations of the points in the domain $\Omega^S$. This DeepONet is trained with a standard regression loss (relative mean squared error) to obtain the optimized weights and biases of the source network, $\boldsymbol{\theta}^S$. Next, we build HINTS by assembling the DeepONet and the numerical solver (e.g., GS); a rough sketch of the resulting hybrid iteration is given below.
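The following is an illustrative sketch of this assembly only, not the authors' implementation: it reuses the GS sweep sketched in the Introduction and assumes a trained \texttt{deeponet} callable that maps the input field and the current residual to a solution correction, with the alternation between solver and DeepONet governed by a fixed ratio.
\begin{verbatim}
import numpy as np

def hints_solve(K, f, k_field, deeponet, u0, ratio=25,
                max_iters=2000, tol=1e-14):
    """Hybrid iteration: GS sweeps, with a DeepONet correction step
    every `ratio`-th iteration (illustrative sketch only)."""
    u = u0.copy()
    for it in range(max_iters):
        r = f - K @ u                       # current residual
        if np.linalg.norm(r) < tol:
            break
        if (it + 1) % ratio == 0:
            u = u + deeponet(k_field, r)    # operator step: low-frequency correction
        else:
            u = gauss_seidel(K, f, u, 1)    # relaxation step: high-frequency smoothing
    return u
\end{verbatim}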
We discretize $\Omega^S$ with triangular elements. The simulation starts by assuming an initial solution of the dependent variable, and at every iteration HINTS applies either the pre-decided fixed relaxation method or the pre-trained DeepONet to update the approximate solution; the choice between the iterative solver and the DeepONet is made according to a pre-decided ratio. Among the available numerical iterative solvers, we have employed the GS approach. With the source HINTS properly constructed, we now consider transferring it to become a target HINTS, i.e., a HINTS for solving the class of PDEs (\autoref{eqn:diffeqn_general} and \autoref{eqn:bound_general}) defined in a target domain $\Omega=\Omega^T$, with the following two methods: \textbf{Direct Application: Use Source DeepONet Directly in Target HINTS}. The first approach is to directly apply the source HINTS for inference in the target domain $\Omega^T$. Specifically, when implementing the target HINTS (i.e., HINTS for the target domain), the trained DeepONet from the source HINTS is directly invoked in the workflow of the target HINTS. \textbf{Transfer Learning: Fine-tune the DeepONet in Target HINTS}. In the second approach, we fine-tune the trained DeepONet from the source HINTS with a small number of labelled samples from the target domain. We generate $N_T$ labelled samples, $\mathcal D_T = \{\tilde{k}_i, \tilde{\boldsymbol{f}}_i, \tilde{\boldsymbol{u}}_i\}_{i = 1}^{N_T}$, on the target domain, where $N_S \gg N_T$. In the examples presented in this work, $N_T \approx 0.01\times N_S$. We initialize a target model (with the same architecture as the source model) with the learnt parameters of the source model and fine-tune it by training the fully connected layers of the convolutional branch network and the last layer of the trunk net under a hybrid loss function. The hybrid loss function reads: \begin{equation} \label{eq:target-loss} \mathcal L(\theta^T) = \lambda_1 \mathcal L_r(\theta^T) + \lambda_2 \mathcal L_{\text{CEOD}}(\theta^T), \end{equation} where $\lambda_1 = 1$ and $\lambda_2\gg\lambda_1$ are trainable coefficients, which determine the importance of the two loss components during the optimization process \cite{kontolati2022influence}. In \autoref{eq:target-loss}, the first term is a standard regression loss, while the second term ensures the agreement between the conditional distributions of the target data. For details on the construction of $\mathcal L_{\text{CEOD}}(\theta^T)$, readers are referred to \cite{goswami2022deep}. When employing the HINTS algorithm for the target domain, the source DeepONet is replaced by the target DeepONet. Even though the HINTS solution is transferable between related domains and integrating the transfer learning approach means additional training time, we argue that a transfer-learning-integrated HINTS converges faster than its direct-application counterpart (see \autoref{sec:conclusion} for a statistical analysis). It is worth noting that the target model is trained with far fewer iterations and hence very quickly. There is potential to perform the transfer learning even better and produce a more accurate target DeepONet, but we did not aim for this in the present work. We focus on showing that the transfer learning mechanism works and produces even faster convergence for HINTS. \section{Related work} \label{sec:related_work} \textbf{Classical Numerical Solvers of ODEs/PDEs}.
Over the last few decades, many studies have been conducted on developing advanced numerical solvers, mainly for approximating the solutions of ODEs and PDEs. Classical methods are the Jacobi and GS methods, which were proposed in the 19th century. Since then, although many advanced algorithms have been proposed, most of the state-of-the-art solvers still use some versions of these two algorithms. The current leaders in the field of numerical solvers are the family of MultiGrid (MG) methods \cite{briggs2000multigrid,bramble2019multigrid}, where one uses a set of discretizations to solve the system. It is common to use Jacobi or GS smoothing inside the MG iterations. Therefore, constraints such as positive definiteness that apply to the Jacobi and GS algorithms also affect MG. There are methods to overcome this, such as the shifted Laplacian method \cite{van2007spectral}, but these suffer from other disadvantages. Solvers compete on the number of iterations needed for convergence, as well as on other properties such as physical time per iteration, parallelization capabilities, and more. However, in some cases the solvers may diverge (for example, when approximating the solution of an indefinite system), and solvers that are robust and can solve all types of problems are sought after. \textbf{Machine Learning-Based Numerical Solvers}. Many authors have been investigating the use of AI when designing efficient numerical solvers. Some focus on creating an AI-based solver for PDEs, replacing the numerical solvers. A notable example is Physics-Informed Neural Networks \cite{raissi2019physics,goswami2020transfer}, where one trains a network to infer the solution of the PDE in a domain, without the need to assemble and solve a linear system or to use a complex meshing algorithm. Another direction is to enhance numerical solvers using AI (which is the main focus of this paper). Most recent studies aim to replace components of the MG algorithms with AI-based components, for example training a neural network to replace the restriction and prolongation operations \cite{moore2022learning,luz2020learning}. Others try to achieve a better-performing preconditioner using learning \cite{gotz2018machine}. \textbf{DeepONet}. Another notable advancement in the field of AI is the invention of operator learning methods \cite{goswami2022operator}. In contrast to standard Machine Learning (ML) tasks, where one seeks to approximate a function that can connect input data to output data, in operator learning one seeks a mapping between a family of functions and another family of functions that satisfies an operator, hence the name deep operator learning. Learning the operator enables many possibilities: for example, after the network has been trained and the operator has been learned, one needs neither to re-train nor to solve the system again for new conditions or parameters, but can rather infer the solution using the trained system for the new problem definition. This dramatically lowers the cost of solving PDE-related problems. In addition, the learned operator does not depend on a discretization and can be used to infer the solution on any given discretization. Several operator learning methods have been proposed, including \cite{li2021fourier}. In this work, we focus on the DeepONet \cite{lu2021learning,goswami2022physics} and use it for operator learning. \textbf{HINTS}.
A new method to enhance numerical solvers using operator learning has been proposed under the name HINTS: Hybrid Numerical Iterative Transferable Solver \cite{zhang2022hybrid}. The idea of HINTS is to use an iterative solver such as Jacobi or GS and replace some of the iterations with a DeepONet trained to receive the problem parameters and infer the solution. This DeepONet can also be used to receive the problem parameters and the residual at the current iteration and produce a correction term for the solution, so it can be applied in a similar way to the numerical solvers. Experiments show that existing numerical solvers tend to converge slowly because they have difficulty smoothing the low-frequency error modes (while for the high modes they operate well), whereas the DeepONet excels at smoothing the low-mode errors (but may fail for the high modes). Using both the numerical methods and the DeepONet, uniform convergence of all modes is achieved and the solver converges to machine precision much faster. HINTS has shown promising results on many tasks and considerable potential for extensions; in this paper we discuss an important extension from which mechanics simulations may benefit: the transferability of HINTS to new geometries and discretizations. \textbf{Transfer Learning}. The idea of transfer learning is to leverage the parameters of a model trained on a sufficiently large labelled dataset to infer information on a related task with few labelled samples and minimal training. In \cite{goswami2022deep}, an operator-level transfer learning approach was proposed to lower the computational cost of training a DeepONet (from scratch) for related tasks. In the categorization of TL approaches, one popular classification is based on the consistency between the distributions of the source and target input (or feature) spaces and output (or label) spaces. The shift between the source and target data distributions is considered the major challenge in modern TL. Typical distribution shifts include conditional shift, where the marginal distribution of the source and target input data remains the same while the conditional distributions of the output differ (i.e., $P(\mathbf{x}_s) = P(\mathbf{x}_t)$ and $P(\mathbf{y}_s\vert\mathbf{x}_s) \ne P(\mathbf{y}_t\vert\mathbf{x}_t)$), and covariate shift, where the opposite occurs (i.e., $P(\mathbf{x}_s) \ne P(\mathbf{x}_t)$ and $P(\mathbf{y}_s\vert\mathbf{x}_s) = P(\mathbf{y}_t\vert\mathbf{x}_t)$). Here, we employ the TL model proposed in \cite{goswami2022deep} to handle the conditional distribution discrepancy between the domains. In that work, the authors reported that the target model requires only a small dataset for fine-tuning, which can thus be done in significantly less time. \section{Computational experiments} \label{sec:experiments} In this section, we explore our method using two benchmark problems. The first problem involves the flow in heterogeneous porous media (Darcy's model) on a two-dimensional L-shaped domain (source). The target domains considered in this case are the same L-shaped domain with a circular and a triangular cutout. In the second problem, we consider a thin rectangular plate (source) subjected to in-plane loading, modeled as a two-dimensional problem of plane-strain elasticity. The target domain considered in this case is the same rectangular plate with a central circular cutout.
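In both benchmarks, the labelled datasets are produced by drawing the input fields from a Gaussian Random Field (GRF) and computing the corresponding solutions with a conventional solver. As a rough illustration only (the squared-exponential covariance is an assumption, and the standard deviations and correlation length quoted below are used as defaults), such a sampler could look like:
\begin{verbatim}
import numpy as np

def sample_grf(points, std=0.3, corr_len=0.1, rng=None):
    """Draw one GRF realization at the given 2D points
    (squared-exponential covariance assumed; illustrative sketch only)."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    cov = std ** 2 * np.exp(-0.5 * (d / corr_len) ** 2)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(points)))
    return L @ rng.standard_normal(len(points))
\end{verbatim}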
In both examples, the DeepONet is trained using the Adam optimizer \cite{kingma2014adam}. The implementation has been carried out using the \texttt{PyTorch} framework \cite{paszke2019pytorch}. For both examples we initialize the DeepONet parameters using Xavier initialization. Details on the data generation and the network architecture, such as the number of layers, the number of neurons in each layer, and the activation functions, are provided with each example. \subsection{Darcy flow} \label{subsec:darcy} In the first example we consider Darcy flow on an L-shaped source domain, where the problem is defined as: \begin{align} \label{eqn:poisson_PDE} \nabla\cdot\left(k(\boldsymbol{x})\nabla u(\boldsymbol{x})\right) + f(\boldsymbol{x}) &= 0,\;\;\; \bm x = (x,y)\in\Omega^S_L:=(0,1)^2\backslash[0.5,1)^2 \\ u(\boldsymbol x) &= 0, \quad \boldsymbol x\in\partial\Omega^S_L, \end{align} where $k(\boldsymbol{x})$ is a spatially varying hydraulic conductivity, $u(\boldsymbol x)$ is the hydraulic head, and $f(\boldsymbol x)$ is a spatially varying forcing term. We use triangular elements to discretize the L-shaped domain, $\Omega^S_L$. The spatial discretization is shown in \autoref{fig:domains}(a). \begin{figure}[!ht] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=1\textwidth]{media/l_shaped.png} \caption{L-shaped domain.} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=1\textwidth]{media/l_shaped_circle.png} \caption{L-shaped domain with a circular hole.} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=1\textwidth]{media/l_shaped_triangle.png} \caption{L-shaped domain with a triangular hole.} \end{subfigure} \hfill \caption{Mesh discretization for the geometries considered in the Darcy problem: (a) source domain, (b-c) target domains considered for domain adaptation.} \label{fig:domains} \end{figure} \bigbreak \noindent \textbf{Training Source HINTS} In this example, the goal is to learn the operator of the system in \autoref{eqn:poisson_PDE}, which maps the random conductivity field and the random forcing to the output hydraulic head, i.e., $\mathcal{G}_\theta: k(\boldsymbol x), f(\boldsymbol x) \rightarrow u(\boldsymbol x)$, where $\mathcal{G}_\theta$ is the solution operator. To generate multiple samples of the conductivity fields and forcing terms for training the source DeepONet, we describe $k(\boldsymbol x)$ and $f(\boldsymbol{x})$ as stochastic processes, the realizations of which are generated using a Gaussian Random Field (GRF), with standard deviations of $0.3$ and $0.1$ for $k(\boldsymbol x)$ and $f(\boldsymbol x)$, respectively, and a correlation length of $0.1$ for both processes. To train the source DeepONet, we generate $N_S = 51\small{,}000$ samples as the labeled dataset of random fields and the corresponding responses, and an additional $N_S^{test} = 9\small{,}000$ samples to test the model. The branch network of the DeepONet is a combination of a CNN (input dimension $31 \times 31$, number of channels $[2, 40, 60, 100, 180]$, kernel size $3\times 3$, stride $2$; the $2$ input channels come from the concatenation of $k(\boldsymbol x)$ and $f(\boldsymbol x))$ and a fully-connected network (dimensions $[180, 80, 80]$). The dimension of the trunk network (fully-connected network) is $[2, 80, 80, 80]$; a sketch of a network consistent with these dimensions is given below.
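A PyTorch-style sketch consistent with the dimensions stated above follows. This is an illustration only: padding, activations, and the output layer are assumptions rather than the authors' exact implementation (with kernel size $3$, stride $2$ and no padding, the spatial size shrinks as $31\rightarrow15\rightarrow7\rightarrow3\rightarrow1$, so the flattened branch feature has size $180$).
\begin{verbatim}
import torch
import torch.nn as nn

class DeepONetSketch(nn.Module):
    """Branch (CNN + FC) and trunk networks with the dimensions stated above."""
    def __init__(self):
        super().__init__()
        chans = [2, 40, 60, 100, 180]
        conv = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            conv += [nn.Conv2d(cin, cout, kernel_size=3, stride=2), nn.ReLU()]
        self.branch_cnn = nn.Sequential(*conv, nn.Flatten())          # -> 180 features
        self.branch_fc = nn.Sequential(nn.Linear(180, 80), nn.ReLU(),
                                       nn.Linear(80, 80))             # last layer linear
        self.trunk = nn.Sequential(nn.Linear(2, 80), nn.Tanh(),
                                   nn.Linear(80, 80), nn.Tanh(),
                                   nn.Linear(80, 80))                 # last layer linear

    def forward(self, kf, xy):
        # kf: (batch, 2, 31, 31) stacked k and f; xy: (num_points, 2) query coordinates
        b = self.branch_fc(self.branch_cnn(kf))   # (batch, 80)
        t = self.trunk(xy)                        # (num_points, 80)
        return b @ t.T                            # (batch, num_points) predicted u
\end{verbatim}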
We train the DeepONet for $25\small{,}000$ epochs with a fixed learning rate of $1e-4$, and a mini-batch size of $10\small{,}000$. For the branch network we employ the \texttt{ReLU} activation function and for the trunk network, we use \texttt{Tanh} activation. The last layers of both the branch and the trunk networks have linear activation functions. The source model converges with a mean relative error of $4\%$ on the test dataset. The trained DeepONet is plugged into the HINTS algorithm and the HINTS iterations are executed until the error of the solution reaches machine zero. An example of such iterative solution is given in \autoref{fig:HINTS_Poisson}. When inspecting the solution, we focus mainly on the convergence efficiency of the solution to machine zero. For that, we observe the norm of the error per iteration, as demonstrated in \autoref{fig:HINTS_Poisson_error_norm}. The founding idea of HINTS is based on the uniform convergence of all the eigen modes, which is shown in \autoref{fig:HINTS_Poisson_modes}. To illustrate the solution of the problem at hand, we show \autoref{fig:HINTS_Poisson_sol}, which is the solution at the last iteration (after convergence). We also show the error between the approximate solution and the exact solution (obtained by directly solving the system instead of using an iterative method) in \autoref{fig:HINTS_Poisson_error}. Note that the scale of the error is machine precision limit, which means the desired convergence has been achieved. \begin{figure}[htbp!] \centering \begin{subfigure}[t]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{media/error_per_iter.png} \caption{The error norm per iteration.} \label{fig:HINTS_Poisson_error_norm} \end{subfigure} \begin{subfigure}[t]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{media/mode_errors.png} \caption{The mode errors.} \label{fig:HINTS_Poisson_modes} \end{subfigure} \begin{subfigure}[t]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{media/approx_sol.png} \caption{The approximate solution at the last step.} \label{fig:HINTS_Poisson_sol} \end{subfigure} \begin{subfigure}[t]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{media/final_error.png} \caption{The error at the last iteration of HINTS.} \label{fig:HINTS_Poisson_error} \end{subfigure} \hfill \caption{Employing HINTS to solve the Darcy's problem on a L-shaped domain. It is interesting to note that in (d) the scale shows convergence on the error to machine zero at the last iteration of the proposed solver.} \label{fig:HINTS_Poisson} \end{figure} Next, to investigate the domain adaptation capabilities, we consider the following two tasks: \begin{itemize} \item \textbf{Task1:} From a L-shaped domain to a L-shaped domain with a circular cutout. The target domain is defined as: $\Omega^{T1}_L = \Omega_L \backslash \{(x, y) | (x - 0.25) ^ 2 + (y - 0.25) ^ 2 \leq 0.15\}$. \item \textbf{Task2:} From a L-shaped domain to a L-shaped domain with a triangular cutout. The target domain is defined as: $\Omega^{T2}_L = \Omega_L \backslash \{(x, y) | (x, y) \in \bigtriangleup((0.2, 0.1), (0.6, 0.4), (0.3, 0.4))\}$. \end{itemize} The discretization of the target domains are shown in \autoref{fig:domains} (b) and (c). While the circular cutout has a smooth boundary and is easier to approximate, the triangular cutout has locations of singularity, and hence imposes a more challenging scenario for domain adaptation. 
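In both target tasks, the inputs fed to the source-trained DeepONet keep the fixed $31\times31$ grid format of the source domain; as described next, grid points falling inside a cutout are simply set to zero. A hypothetical masking helper (names and grid layout are assumptions, not the authors' code) could be:
\begin{verbatim}
import numpy as np

def mask_cutout(field, grid_xy, inside_cutout):
    """Zero out a sampled input field (k or f) at grid points inside the cutout.

    field: (31, 31) values on the source-domain grid
    grid_xy: (31, 31, 2) coordinates of the grid points
    inside_cutout: callable (x, y) -> bool describing the cutout
    """
    masked = field.copy()
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            x, y = grid_xy[i, j]
            if inside_cutout(x, y):
                masked[i, j] = 0.0
    return masked

# Example: the circular cutout of Task1
task1_cutout = lambda x, y: (x - 0.25) ** 2 + (y - 0.25) ** 2 <= 0.15
\end{verbatim}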
\textbf{Direct Application:}\\ We first investigate the domain adaptation capability of the source DeepONet to produce extrapolated approximations on the target-domain discretization within the target HINTS. Specifically, the source DeepONet is employed directly and alternated with the relaxation method to approximate the solution of the dependent variable on the target discretization. On the target domains, we define $k(\boldsymbol{x})$ and $f(\boldsymbol x)$ by setting the input function values to zero at points within the cutouts. The convergence of the solution is attributed to the generalization ability of the DeepONet. The results for the two target tasks are presented in \autoref{fig:HINTS_geometries}. We observe that for Task1, convergence to machine precision is obtained in $182$ iterations. For Task2, convergence is much slower (it takes longer than $300$ iterations), but it is achieved as well. We attribute the slow convergence to the three vertices of the triangle, which are singular points and are considered more difficult to handle. Nevertheless, HINTS was able to operate on this geometry and showed good performance. \begin{figure}[htbp!] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{media/circle_hints.png} \caption{HINTS trained on $\Omega_L$, solving on $\Omega_L^{T1}$.} \label{fig:HINTS_circle} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{media/triangle_hints.png} \caption{HINTS trained on $\Omega_L$, solving on $\Omega_L^{T2}$.} \label{fig:HINTS_triangle} \end{subfigure} \hfill \caption{Direct application of HINTS for domain adaptation. The results presented here are the plots of the target domain when HINTS has been directly used for inference without any re-training. Uniform convergence of the modes is observed as shown in the bottom left images for (a) and (b).} \label{fig:HINTS_geometries} \end{figure} \textbf{Transfer Learning:}\\ As an alternative to the above approach, we propose to fine-tune the DeepONet with a small number of samples from the target domain. To that end, we employ the operator-level transfer learning approach proposed in \cite{goswami2022deep}, where the target DeepONet is initialized with the weights and biases of the source DeepONet. To fine-tune the target DeepONet, only the fully-connected layers of the branch network and the last layer of the trunk network are re-trained with a hybrid loss function that takes into account the difference in the conditional distributions of the source and the target data. To train the target DeepONet for each of the two tasks, we generate $N_T = 500$ random samples of $k(\boldsymbol x)$ and $f(\boldsymbol x)$ and obtain the corresponding solutions $u(\boldsymbol x)$. The target DeepONet is trained for $10\small{,}000$ iterations. Compared to the initial training of the network (from scratch), fine-tuning is faster. Once the target model is trained, we employ the target DeepONet together with the iterative solver in the target HINTS for the target domain. \begin{figure}[!ht] \centering \includegraphics[width=12cm]{media/compare_transfers.png} \caption{Convergence of the direct application of HINTS (left) and the transfer learning HINTS (right), done for the L-shaped Darcy problem with a circular cutout.
The top figures show the error norm convergence and the bottom figures show the error norms of specific modes.} \label{fig:transfer_with_modes} \end{figure} Finally, a comparative study is carried out based on the number of iterations each approach takes to converge to machine precision. We randomly select a sample from the target dataset and compare the convergence of the error (both the norm of the error and the mode errors) over iterations between the direct application of HINTS and the transfer learning HINTS. This is shown in \autoref{fig:transfer_with_modes}. In addition, a comparison of the convergence rate that includes the standard GS solver is shown in \autoref{fig:compare_transfer}. We conclude that the HINTS solution is transferable to different domain geometries, and that integrating the transfer-learning approach to fine-tune the source DeepONet on the target domain results in $25\%$ faster convergence without any loss of accuracy for this example. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{media/comparison.png} \caption{Convergence of the different methods for the L-shaped Darcy problem with a circular cutout. Comparison of the convergence of GS (without HINTS, shown in Blue Line), HINTS-GS (Orange Line) and the transfer learning HINTS-GS (Green Line) in terms of error decay.} \label{fig:compare_transfer} \end{figure} \subsection{Linear elasticity problem} In the second example, we consider linear elasticity on a square domain under plane-strain conditions. The governing equations for the model are: \begin{align} \label{eqn:elasticity_PDE} \begin{cases} \dfrac{\partial \sigma_{ij}(x, y)}{\partial x_j} + f_i(x, y) = 0 \\[6pt] \sigma_{ij}(x, y) = \lambda\, \varepsilon_{kk}(x, y)\, \delta_{ij} + 2\mu\, \varepsilon_{ij}(x, y) \\[6pt] \varepsilon_{ij}(x, y) = \dfrac{1}{2}\left( \dfrac{\partial u_i(x, y)}{\partial x_j} + \dfrac{\partial u_j(x, y)}{\partial x_i} \right) \end{cases} \end{align} where the subscripts $i,j\in\{1,2\}$ refer to the two in-plane directions (with summation over repeated indices), $u_i$ is the displacement in the $i$ direction, $\varepsilon_{ij}$ is the strain component measured in the $j$ direction due to displacement in the $i$ direction, $\sigma_{ij}$ is the stress component, $\delta_{ij}$ is the Kronecker delta, and $f_i$ is the body force in the $i$ direction. The Lamé parameters, $\mu=\frac{E}{2(1+\nu)}$ and $\lambda=\frac{\nu E}{(1+\nu)(1-2\nu)}$, describe the mechanical properties of the material, where $E$ and $\nu$ are Young's modulus and Poisson's ratio, respectively. In this example, we consider the square domain $\Omega^S = [0, 1]\times [0, 1]$ as the source. The discretization of the domain is presented in \autoref{fig:square_domains}(a). \begin{figure}[!ht] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{media/rect.png} \caption{Square mesh ($\Omega^S$).} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{media/rect_circle.png} \caption{Square mesh with a circular hole ($\Omega^{T3}$).} \end{subfigure} \hfill \caption{Illustration of the meshes used for the elasticity numerical experiments.} \label{fig:square_domains} \end{figure} \bigbreak \noindent \textbf{Employing HINTS to solve the elasticity problem on the source domain} The goal of the elasticity problem is to learn the operator of the system described by \autoref{eqn:elasticity_PDE}, which maps a randomly generated, spatially varying modulus of elasticity $E(\boldsymbol x)$ and a randomly varying force vector $f(\boldsymbol{x})$ to the displacement vector $\boldsymbol u (\boldsymbol x)$.
In this example, we follow the same data generation approach as discussed in \autoref{subsec:darcy}; the model was trained with $N_S = 85\small{,}000$ samples and tested with $N_S^{test} = 15\small{,}000$ samples. The branch net takes two input channels ($E(\boldsymbol{x})$ and $f(\boldsymbol x)$), and we adopt the same architecture for the convolution modules as discussed in the previous example. The fully-connected network following the convolution modules has dimensions $[256, 160]$. The dimension of the trunk network is $[2, 128, 128, 160]$. The network is trained for $25\small{,}000$ epochs to achieve roughly $4\%$ relative error, indicating a sufficiently well-trained network. We then employ the HINTS algorithm to solve the elasticity problem on the source domain. \textbf{Domain adaptation for the elasticity problem} For investigating the domain adaptation capabilities, we define the following task: \begin{itemize} \item \textbf{Task3:} From a square domain to a square domain with a circular cutout. The target domain is: $\Omega^{T3} = \Omega^S \backslash \{(x, y) | (x - 0.5) ^ 2 + (y - 0.5) ^ 2 \leq 0.15\}$. \end{itemize} The discretization of the target domain is shown in \autoref{fig:square_domains}. We first employ the source DeepONet to infer on the target domain. In this scenario, no additional training is carried out on the target model. The HINTS algorithm is employed on the target domain, using the iterative solver together with the source DeepONet to approximate the displacement solution. The results presented in \autoref{fig:HINTS_elasticity_no_transfer}(a) show that using HINTS as-is we converge to machine precision after $205$ iterations on the target domain. As discussed in the previous example, we now integrate the operator-level transfer learning algorithm with the HINTS model. In this setup, to train the target model, we generate samples of the input functions by setting the function values to zero within the circular cutout. The target model is trained with $N_T = 100$ samples, where the model is initialized with the optimized parameters of the source model and, during training, all layers except the fully-connected layers of the branch network and the last layer of the trunk network are frozen (a minimal sketch of this fine-tuning step is given below). The fine-tuned target DeepONet then replaces the source DeepONet in the HINTS algorithm to generate the solution for the target domain. The results obtained using the transfer-learning-integrated HINTS are presented in \autoref{fig:HINTS_elasticity_transfer}(b). In this setup, we observe convergence to machine precision after $169$ iterations, a $22.5\%$ improvement over the previous setup of employing HINTS with the source DeepONet to approximate the solution for the target domain. \begin{figure}[!ht] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{media/elasticity_no_transfer.png} \caption{HINTS (direct application) trained on $\Omega^S$, solving on $\Omega^{T3}$.} \label{fig:HINTS_elasticity_no_transfer} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{media/elasticity_transfer.png} \caption{HINTS (transfer learning), solving on $\Omega^{T3}$.} \label{fig:HINTS_elasticity_transfer} \end{subfigure} \hfill \caption{Results obtained using the direct application of HINTS (a) and transfer learning HINTS (b) for an example from the target data-set of the elasticity problem on the square domain with a circular cutout.} \label{fig:HINTS_elasticity} \end{figure}
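As a concrete illustration of the fine-tuning step referred to above (layer freezing plus the hybrid loss of \autoref{eq:target-loss}), a PyTorch-style sketch could look as follows; it assumes the \texttt{DeepONetSketch} layout given earlier, a hypothetical \texttt{ceod\_loss} implementation of the CEOD term, and a simple relative-MSE regression loss, none of which should be read as the authors' exact implementation.
\begin{verbatim}
import torch

def fine_tune_target(model, target_loader, ceod_loss, lam2=10.0,
                     epochs=10_000, lr=1e-4):
    """Fine-tune only the branch FC layers and the last trunk layer (sketch)."""
    for p in model.parameters():                 # freeze everything first
        p.requires_grad = False
    trainable = list(model.branch_fc.parameters()) + \
                list(model.trunk[-1].parameters())
    for p in trainable:
        p.requires_grad = True
    opt = torch.optim.Adam(trainable, lr=lr)
    for _ in range(epochs):
        for kf, xy, u_true in target_loader:
            u_pred = model(kf, xy)
            reg = torch.mean((u_pred - u_true) ** 2) / torch.mean(u_true ** 2)
            loss = reg + lam2 * ceod_loss(u_pred, u_true)   # hybrid loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
\end{verbatim}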
\section{Task and Dataset} \label{sec:task_and_data} Below, we describe the CONSTRAINT 2022 shared task and the corresponding dataset provided by the task organizers. More detail can be found in the shared task report \cite{sharma2022report}. \subsection{Task} \label{ssec:Task} The CONSTRAINT 2022 shared task asked participating systems to detect the role of the entities in a meme, given the meme and a list of these entities. Figure~\ref{fig:task_example} shows an example of an image with the extracted OCR text, the implicit entity (the image shows Salman Khan, who is not mentioned in the text), and the explicit entities and their roles. The example illustrates various challenges: (\emph{i})~an implicit entity, (\emph{ii})~text extracted from the label of the vial, which has little connection to the overlaid written text, and (\emph{iii})~an unclear target entity in the meme (\emph{Vladimir Putin}). Such complexities are not common in the multimodal tasks we discussed above. The textual representation of the entities and their roles differs from that in typical CoNLL-style semantic role labeling tasks \cite{carreras2005introduction}, which makes it more difficult to address the problem in the same formulation. Despite these challenges, we first attempted to address the problem in that formulation, i.e., as a sequence labeling problem, by converting the data to CoNLL format (see Section~\ref{sss:sequence_labeling}). Then, we further tried to address it as a classification task, i.e., predicting the role of each entity in a given meme--entity pair (a minimal sketch of this pairing is given below). \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/task_example.png} \caption{An example image showing the implicit (\emph{Salman Khan}) and the explicit entities (from a text perspective) and their roles. } \label{fig:task_example} \end{figure} \subsection{Data} \label{ssec:data} We use the dataset provided for the CONSTRAINT 2022 shared task. It contains harmful memes, OCR-extracted text from these memes, and manually annotated entities with four roles: \textit{hero}, \textit{villain}, \textit{victim}, and \textit{other}. The dataset covers two domains: COVID-19 and US Politics. The COVID-19 domain consists of 2,700 training and 300 validation examples, while US Politics has 2,852 training and 350 validation examples. The test dataset combines examples from both domains, COVID-19 and US Politics, and has a total of 718 examples. For the experiments, we combined the two domains, COVID-19 and US Politics, which resulted in 5,552 training and 650 validation examples. The class distribution of the entity roles, aggregated over all memes, in the combined COVID-19 + US Politics dataset is highly imbalanced, as shown in Table~\ref{tab:data_distribution}. We can see that overall the role of \emph{hero} represents only 2\% and the role of \emph{victim} covers only 5\% of the entities. We can further see that the vast majority of the entities are labeled with the \emph{other} role. This skewed distribution adds additional complexity to the modeling task.
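For the classification reformulation, each annotated meme is expanded into one example per listed entity. A hypothetical sketch of this pairing in Python (the field names are assumptions, not the official data schema) is:
\begin{verbatim}
def build_classification_examples(meme):
    """Expand one annotated meme into (text, image, entity, role) examples.

    `meme` is assumed to be a dict with the OCR text, the image path, and
    one entity list per role, e.g. meme["villain"] = ["Vladimir Putin"].
    """
    examples = []
    for role in ("hero", "villain", "victim", "other"):
        for entity in meme.get(role, []):
            examples.append({"text": meme["ocr_text"],
                             "image": meme["image_path"],
                             "entity": entity,
                             "label": role})
    return examples
\end{verbatim}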
\begin{table}[] \centering \setlength{\tabcolsep}{2.0pt} \scalebox{1.0}{ \begin{tabular}{@{}lrrrrrr@{}} \toprule \multicolumn{1}{c}{\textbf{Class label}} & \multicolumn{2}{c}{\textbf{Train}} & \multicolumn{2}{c}{\textbf{Val}} & \multicolumn{2}{c}{\textbf{Test}} \\ \midrule \multicolumn{1}{c}{\textbf{}} & \multicolumn{1}{c}{\textbf{Count}} & \multicolumn{1}{c}{\textbf{\%}} & \multicolumn{1}{c}{\textbf{Count}} & \multicolumn{1}{c}{\textbf{\%}} & \multicolumn{1}{c}{\textbf{Count}} & \multicolumn{1}{c}{\textbf{\%}} \\ \midrule Hero & 475 & 2 & 224 & 3 & 52 & 2 \\ Villain & 2,427 & 10 & 886 & 10 & 350 & 14 \\ Victim & 910 & 5 & 433 & 5 & 114 & 5 \\ Others & 13,702 & 83 & 6,937 & 82 & 1,917 & 79 \\ \midrule Total & 17,514 & & 8,480 & & 2,433 & \\ \bottomrule \end{tabular} } \caption{Distribution of the entity roles in the combined COVID-19 + US politics datasets.} \label{tab:data_distribution} \end{table} \section{Results and Discussion} \label{sec:results} Below, we first discuss our sequence labeling and classification experiments. We then perform some analysis, and finally, we put our results in a broader perspective in the context of the shared task. \subsection{Sequence Labeling Results} Table~\ref{tab:sequence_classification_results} shows the evaluation results on the test set for our sequence labeling reformulation of the problem. We performed two experiments: one where we used as input the entire meme text (i.e.,~all tokens), and another one where we used the concatenation of the target entities only. We can see that the latter performed marginally better, but overall the macro-F1 score is quite low in both cases. \begin{table}[t] \centering \setlength{\tabcolsep}{3.0pt} \scalebox{0.95}{ \begin{tabular}{@{}lrrrr@{}} \toprule \multicolumn{1}{c}{\textbf{Exp.}} & \multicolumn{1}{c}{\textbf{Acc}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} \\ \midrule All tokens & 0.51 & 0.32 & 0.21 & 0.24 \\ Only entities & 0.77 & 0.40 & 0.27 & 0.25 \\ \bottomrule \end{tabular} } \caption{Evaluation results on the test set for the sequence labeling reformulation of the problem.} \label{tab:sequence_classification_results} \end{table} \subsection{Classification Results} Table \ref{tab:classification_results} shows the evaluation results on the test set for our classification reformulation of the problem. We computed the \textit{majority class} baseline (row 0), which always predicts the most frequent label in the training set. Due to time limitations, our official submission used the image modality only, which resulted in a very low macro-F1 score of 0.23, as shown in row 1. For our text modality experiments, we used the meme text and the entities. We experimented with BERT and XLM-RoBERTa, obtaining better results using the former. Using the BLOCK fusion technique on unimodal (text + entity) and multimodal (text + image + entity) combinations yielded sizable improvements. The combination of image + text (rows 6 and 9) did not yield much better results compared to using text only (row 4). Next, we added attention on top of block fusion, which improved the performance, but there was not much difference between the different combinations (rows 7--9). Considering only the text and the entity, we observe an improvement when using text augmentation. Among the different augmentation techniques, there was no performance difference between WordNet and BERT, and combining them yielded worse results.
\begin{table}[h] \centering \setlength{\tabcolsep}{2.0pt} \scalebox{0.9}{ \begin{tabular}{@{}llrrrr@{}} \toprule & \multicolumn{1}{c}{\textbf{Exp.}} & \multicolumn{1}{c}{\textbf{Acc}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} \\ \midrule \multicolumn{6}{c}{\textbf{Baseline}} \\ \midrule 0 & Majority & 0.79 & 0.20 & 0.25 & 0.22 \\ \midrule \multicolumn{6}{c}{\textbf{Image modality}} \\ \midrule \it 1 & \it EffNet feat + SVM & \it 0.72 & \it 0.24 & \it 0.25 & \it 0.23 \\ \midrule \multicolumn{6}{c}{\textbf{Text modality}} \\ \midrule 2 & BERT & 0.76 & 0.42 & 0.36 & 0.37 \\ 3 & XLM-RoBERTa & 0.75 & 0.38 & 0.32 & 0.32 \\ \midrule \multicolumn{6}{c}{\textbf{Multimodality/Fusion}} \\ \midrule \multicolumn{6}{l}{\textbf{BLOCK fusion}} \\ 4 & Entity + Text & 0.74 & 0.44 & 0.43 & 0.43 \\ 5 & Entity + Image & 0.74 & 0.39 & 0.39 & 0.39 \\ 6 & Entity + (Text + Image) & 0.75 & 0.43 & 0.42 & 0.41 \\ \midrule \multicolumn{6}{l}{\textbf{Attention}} \\ 7 & Entity + Text & 0.72 & 0.42 & 0.48 & 0.44 \\ 8 & Entity + Image & 0.71 & 0.42 & 0.48 & 0.44 \\ 9 & Entity + (Text + Image) & 0.71 & 0.42 & 0.49 & 0.44 \\\midrule \multicolumn{6}{l}{\textbf{Augmentation}} \\ 10 & Entity + Text (WordNet aug) & 0.76 & 0.48 & 0.46 & \textbf{0.46} \\ 11 & Entity + Text (BERT aug) & 0.74 & 0.46 & 0.46 & \textbf{0.46} \\ 12 & Entity + Text (Mix aug) & 0.77 & 0.49 & 0.41 & 0.43 \\ \bottomrule \end{tabular} } \caption{Evaluation results on the test set for our classification reformulation of the problem. Our official submission for the shared task is shown in \emph{italic}.} \label{tab:classification_results} \end{table} \begin{table*}[h] \centering \setlength{\tabcolsep}{4.0pt} \scalebox{0.92}{ \begin{tabular}{lrrrrrr|rrrrrr} \toprule \multicolumn{1}{c}{\textbf{}} & \multicolumn{3}{c}{\textbf{E+I, w/o Att.}} & \multicolumn{3}{c|}{\textbf{E+I, w/ Att.}} & \multicolumn{3}{c}{\textbf{E+[I+T], w/o Att.}} & \multicolumn{3}{c}{\textbf{E+[I+T], w/ Att.}}\\ \midrule \multicolumn{1}{c}{\textbf{Role}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} \\ \midrule Hero & 0.06 & 0.02 & 0.03 & 0.09 & 0.15 & \textbf{0.12} & 0.22 & 0.12 & 0.15 & 0.09 & 0.21 & 0.12 \\ Villain & 0.35 & 0.44 & 0.39 & 0.40 & 0.51 & \textbf{0.45} & 0.39 & 0.54 & 0.45 & 0.39 & 0.54 & 0.45 \\ Victim & 0.30 & 0.25 & 0.28 & 0.33 & 0.39 & \textbf{0.35} & 0.23 & 0.18 & 0.20 & 0.31 & 0.45 & \textbf{0.36} \\ Other & 0.86 & 0.84 & 0.85 & 0.88 & 0.81 & 0.84 & 0.87 & 0.84 & 0.85 & 0.89 & 0.77 & 0.82 \\ \bottomrule \end{tabular} } \caption{Role-level results on the test set with (w/) or without (w/o) attention between the context (text, image) and the entity. (E: Entity, I: Image, Att.: Attention, T: Text)} \label{tab:attention-result} \end{table*} \begin{table*}[h] \centering \setlength{\tabcolsep}{4.0pt} \scalebox{0.95}{ \begin{tabular}{lrrrrrrrrr} \toprule \multicolumn{1}{c}{\textbf{}} & \multicolumn{3}{c}{\textbf{No Aug.}} & \multicolumn{3}{c}{\textbf{Aug. WordNet}} & \multicolumn{3}{c}{\textbf{Aug. 
BERT}} \\ \midrule \multicolumn{1}{c}{\textbf{Role}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F1}} \\ \midrule Hero & 0.21 & 0.12 & 0.15 & 0.33 & 0.21 & \textbf{0.26} & 0.30 & 0.25 & \textbf{0.27} \\ Villain & 0.36 & 0.49 & 0.42 & 0.41 & 0.52 & \textbf{0.46} & 0.39 & 0.51 & \textbf{0.44} \\ Victim & 0.31 & 0.27 & 0.29 & 0.30 & 0.27 & 0.29 & 0.29 & 0.27 & 0.28 \\ Other & 0.87 & 0.83 & 0.85 & 0.87 & 0.84 & 0.86 & 0.87 & 0.83 & 0.85 \\ \bottomrule \end{tabular} } \caption{Role-level results on the test set for the entity + text combination with and without augmentation.} \label{tab:augmentation-result} \end{table*} \subsection{Role-Level Analysis} Next, we studied the impact of using attention and data augmentation on the individual entity roles: \emph{hero}, \emph{villain}, \emph{victim}, and \emph{other}. Table~\ref{tab:attention-result} shows the impact of using attention on (a)~entity + image (left side), and (b)~entity + [image + text] (right side) combinations. We can observe a sizable gain for the \emph{hero} (+0.09), the \emph{villain} (+0.06), and the \emph{victim} (+0.07) roles in the former case (a). However, for case (b), there is an improvement for the \textit{victim} role only; yet, this improvement is quite sizable: +0.16. Table~\ref{tab:augmentation-result} shows the impact of data augmentation using WordNet or BERT on the individual roles. We can observe sizable performance gains of +0.11 for the \emph{hero} role, and +0.04 for the \emph{villain} role, when using WordNet-based data augmentation. Similarly, BERT-based data augmentation yields +0.12 for the \emph{hero} role, and +0.02 for the \emph{villain} role. However, the impact of either augmentation on the \emph{victim} and on the \emph{other} role is negligible. \subsection{Official Submission} For our official submission for the task, we used the image modality system from line 1 in Table~\ref{tab:classification_results}, which was quite weak, with a macro-F1 score of 0.23. Our subsequent experiments and analysis pointed to several promising directions: (\emph{i})~combining the textual and the image modalities, (\emph{ii})~using attention, (\emph{iii})~performing data augmentation. As a result, we managed to improve our results to 0.46. Yet, this is still far behind the F1-score of the winning system: 0.5867. \section{Related Work} \label{sec:related_work} Below, we discuss previous work on semantic role labeling and harmful content detection, both in general and in a multimodal context. \subsection{Semantic Role Labeling} \paragraph{Textual semantic role labeling} has been widely studied in NLP, where the idea is to understand who did what to whom, when, where, and why. Traditionally, the task has been addressed using sequence labeling, e.g.,~\citet{fitzgerald2015semantic} used local and structured learning, experimenting with PropBank and FrameNet, and \citet{larionov2019semantic} investigated recent transformer models. \paragraph{Visual semantic role labeling} has been explored for images and video. \citet{Yatskar_2016_CVPR} addressed situation recognition, and developed a large-scale dataset containing over 500 activities, 1,700 roles, 11,000 objects, 125,000 images, and 200,000 unique situations. 
The images were collected from Google and the authors addressed the task as a situation recognition problem. \citet{pratt2020grounded} developed a dataset for situation recognition consisting of 278,336 bounding-box groundings to the 11,538 entity classes. \citet{gupta2015visual} developed a dataset of 16K examples in 10K images with actions and associated objects in the scene with different semantic roles for each action. \citet{yang2016grounded} worked on integrating language and vision with explicit and implicit roles. \citet{silberer2018grounding} learned frame–semantic representations of the images. \citet{sadhu2021visual} approached the same problem for video, developing a dataset of 29K 10-second movie clips, annotated with verbs and semantics roles for every two seconds of video content. \subsection{Harmful Content Detection in Memes} There has been significant effort for identifying misinformation, disinformation, and malinformation online \cite{Schmidt2017survey,bondielli2019survey,zhou2020survey,da2020survey,alam2021survey,afridi2021multimodal,coordinated:communities:2022,10.1007/978-3-030-99739-7_52}. Most of these studies focused on textual and multimodal content. Compared to that, modeling the harmful aspects of memes has not received much attention. Recent effort in this direction include categorizing hateful memes \cite{kiela2020hateful}, detecting antisemitism \cite{chandra2021subverting}, detecting the propagandistic techniques used in a meme \cite{dimitrov2021detecting}, detecting harmful memes and the target of the harm \cite{pramanick-etal-2021-momenta-multimodal}, identifying the protected categories that were attacked~\cite{zia-etal-2021-racist}, and identifying offensive content \cite{suryawanshi-etal-2020-multimodal}. Among these studies, the most notable low-level efforts that advanced research by providing high-quality datasets to experiment with include shared tasks such as the \emph{Hateful Memes Challenge} \cite{kiela2020hateful}, the SemEval-2021 shared task on detecting persuasion techniques in memes \cite{SemEval2021-6-Dimitrov}, and the troll meme classification task \cite{dravidiantrollmeme-eacl}. \citet{chandra2021subverting} investigated antisemitism along with its types as a binary and a multi-class classification problem using pretrained transformers and convolutional neural networks (CNNs) as modality-specific encoders along with various multimodal fusion strategies. \citet{dimitrov2021detecting} developed a dataset with 22 propaganda techniques and investigated the different state-of-the-art pretrained models, demonstrating that joint vision--language models performed better than unimodal ones. \citet{pramanick-etal-2021-momenta-multimodal} addressed two tasks: detecting harmful memes and identifying the social entities they target, using a multimodal model with local and global information. \citet{zia-etal-2021-racist} went one step further than a binary classification of hateful memes, focusing on a more fine-grained categorization based on the protected category that was being attacked (i.e.,~race, disability, religion, nationality, sex) and the type of attack (i.e.,~contempt, mocking, inferiority, slurs, exclusion, dehumanizing, inciting violence) using the dataset released in the WOAH 2020 Shared Task.\footnote{\url{http://github.com/facebookresearch/fine_grained_hateful_memes}} \citet{fersini2019detecting} studied sexist memes and investigated the textual cues using late fusion. 
They also developed a dataset of 800 misogynistic memes covering different manifestations of hatred against women (e.g.,~body shaming, stereotyping, objectification, and violence), collected from different social media~\citep{france_mys2021}. \citet{kiela2021hateful} summarized the participating systems in the Hateful Memes Challenge, where the best systems fine-tuned unimodal and multimodal pretrained transformer models such as VisualBERT \citep{li2019visualbert}, VL-BERT \citep{su2019vl}, UNITER \citep{chen2019uniter}, and VILLA \citep{gan2020large}, and built ensembles on top of them. The SemEval-2021 propaganda detection shared task \citep{SemEval2021-6-Dimitrov} focused on detecting the use of propaganda techniques in memes, and the participants' systems showed that multimodal cues were very important. In the troll meme classification shared task \cite{dravidiantrollmeme-eacl}, the best system used ResNet152 and BERT with multimodal attention, and most systems used pretrained transformers for the text, CNNs for the images, and early fusion to combine the two modalities. \paragraph{Combining modalities} poses several challenges, which arise due to representation issues (i.e.,~symbolic representation for language vs. signal representation for the visual modality), misalignment between the modalities, and the need to fuse the modalities and to transfer knowledge between them. In order to address multimodal problems, a lot of effort has been devoted to developing different fusion techniques, such as (\emph{i})~{\em early fusion}, where low-level features from different modalities are learned, fused, and fed into a single prediction model \cite{jin2016novel,yang2018ti,zhang2019multi,spotfake,zhou2020safe,kang2020multi}, (\emph{ii})~{\em late fusion}, where unimodal decisions are fused using mechanisms such as averaging and voting \cite{agrawal2017multimodal,qi2019exploiting}, and (\emph{iii})~{\em hybrid fusion}, where a subset of the learned features is passed to the final classifier (early fusion), and the remaining modalities are fed to the classifier later (late fusion) \cite{jin2017multimodal}. Here, we use early fusion and joint learning for fusion. \section{Introduction} \label{sec:intro} Social media have become one of the main communication channels for sharing information online. Unfortunately, they have been abused by malicious actors to promote their agenda using manipulative content, thus continuously plaguing political events and the public debate, e.g.,~regarding the ongoing COVID-19 infodemic \cite{alam-etal-2021-fighting-covid,10.1007/978-3-030-99739-7_52}. Such content includes harm and hostility \cite{brooke-2019-condescending,joksimovic-etal-2019-automated}, hate speech \citep{fortuna2018survey}, offensive language \cite{zampieri-etal-2019-predicting,SOLID}, abusive language \citep{mubarak2017abusive}, propaganda \cite{EMNLP19DaSanMartino,da2020survey}, cyberbullying \citep{van-hee-etal-2015-detection}, cyber-aggression \citep{kumar2018benchmarking}, and other kinds of harmful content \citep{pramanick-etal-2021-momenta-multimodal,Survey:2022:Harmful:Memes}. The propagation of such content is often done by coordinated groups \cite{coordinated:communities:2022} using automated tools and targeting specific individuals, communities, and companies. There have been many research efforts to develop automated tools to detect such kinds of content.
Several recent surveys have highlighted these aspects, which include fake news \cite{zhou2020survey}, misinformation and disinformation \citep{alam2021survey,survey:media:2021,survey:stance:2022}, rumours \citep{bondielli2019survey}, propaganda \citep{da2020survey}, hate speech \citep{fortuna2018survey,Schmidt2017survey}, cyberbullying \citep{7920246}, offensive content \citep{husain2021survey}, and harmful content \citep{Survey:2022:Harmful:Memes}. The content shared on social media comes in different forms: textual, visual, or audio-visual. Among the various types of social media content, \textit{internet memes} have recently become popular. Memes are defined as ``a group of digital items sharing common characteristics of content, form, or stance, which were created by associating them and were circulated, imitated, or transformed via the Internet by many users''~\citep{shifman2013memes}. Memes typically consist of images containing some text \citep{shifman2013memes,suryawanshi-etal-2020-multimodal,suryawanshi-etal-2020-dataset}. They are often shared for fun. However, memes can also be created and shared with bad intentions. This includes attacks on people based on characteristics such as ethnicity, race, sex, gender identity, disability, disease, nationality, and immigration status \cite{zannettou2018origins,kiela2020hateful}. There have been research efforts to develop computational methods for detecting such memes, including hateful memes \cite{kiela2020hateful}, propaganda \cite{dimitrov2021detecting}, offensiveness \cite{suryawanshi-etal-2020-multimodal}, sexist memes \cite{fersini2019detecting}, troll memes \cite{dravidiantrollmeme-eacl}, and generally harmful memes \cite{pramanick-etal-2021-momenta-multimodal,DISARM}. Harmful memes often target individuals, organizations, or social entities. \citet{pramanick-etal-2021-momenta-multimodal} developed a dataset where the annotation consists of (\emph{i})~whether a meme is harmful or not, and (\emph{ii})~whether it targets an individual, an organization, a community, or society. The CONSTRAINT-2022 shared task follows a similar line of research \cite{sharma2022report}: the entities in a meme are first identified, and then participants are asked to predict which of these entities are glorified, vilified, or victimized in the meme. The task is formulated as \textit{``Given a meme and an entity, determine the role of the entity in the meme: hero vs. villain vs. victim vs. other.''} More details are given in Section~\ref{sec:task_and_data}. Memes are multimodal in nature, but the textual and the visual content in a meme are sometimes unrelated, which can make them hard for traditional multimodal approaches to analyze. Moreover, context (e.g., where the meme was posted) plays an important role in understanding its content. Another important factor is that the text in the meme is overlaid on top of the image, so it needs to be extracted using OCR, which can introduce errors that require additional manual post-editing \cite{dimitrov2021detecting}. Here, we address a task about entity role labeling for harmful memes based on the dataset released in the CONSTRAINT-2022 shared task; see the task overview paper for more details \cite{sharma2022report}. This task is different from traditional semantic role labeling in NLP \cite{palmer2010semantic}, where understanding \textit{who} did \textit{what} to \textit{whom}, \textit{when}, \textit{where}, and \emph{why} is typically addressed as a sequence labeling problem \cite{he-etal-2017-deep}.
Recently, this has also been studied for visual content~\cite{sadhu2021visual}, i.e.,~situation recognition \cite{Yatskar_2016_CVPR,pratt2020grounded}, visual semantic role labeling \cite{gupta2015visual,silberer2018grounding,li2020cross}, and human-object interaction \cite{chao2015hico,chao2018learning}. To address entity role labeling for potentially harmful memes, we investigate textual, visual, and multimodal content using different pretrained models such as BERT \cite{text_bert}, VGG16 \cite{simonyan2014very}, and other vision--language models \cite{BlockFusion2019}. We further explore different textual data augmentation techniques and attention methods. For the shared task participation, we used only the image modality, which resulted in an underperforming system on the leaderboard. Further studies using other modalities and approaches improved the performance of our system, but it remained lower (0.464 macro-F1) than that of the best system (0.586). Yet, our investigation might help to understand which approaches are effective for detecting the role of an entity in harmful memes. Our contributions can be summarized as follows: \begin{itemize} \item we addressed the problem both as sequence labeling and as classification; \item we investigated different pretrained models for text and images; \item we explored several combinations of multimodal models, as well as attention mechanisms and various augmentation techniques. \end{itemize} The rest of the paper is organized as follows: Section~\ref{sec:related_work} presents previous work, Section~\ref{sec:task_and_data} describes the task and the dataset, Section~\ref{sec:classification} describes our experiments, and Section~\ref{sec:results} discusses the evaluation results. Finally, Section~\ref{sec:conclusion} concludes and points to possible directions for future work. \section{Conclusion and Future Work} \label{sec:conclusion} We addressed the problem of understanding the role of the entities in harmful memes, as part of the CONSTRAINT-2022 shared task. We presented a comparative analysis of the importance of different modalities: the text and the image. We further experimented with two task reformulations, sequence labeling and classification, and we found the latter to work better. Overall, we obtained improvements when using BLOCK fusion, attention between the image and the text representations, and data augmentation. In future work, we plan to combine the sequence and the classification formulations in a joint multimodal setting. We further want to experiment with multi-task learning using other meme analysis tasks and datasets. Last but not least, we plan to develop better data augmentation techniques to improve the performance on the low-frequency roles. \section*{Acknowledgments} The work is part of the Tanbih mega-project, which is developed at the Qatar Computing Research Institute, HBKU, and aims to limit the impact of ``fake news,'' propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. \section{Experiments} \label{sec:classification} \paragraph{Settings:} We addressed the problem both as a sequence labeling and as a classification task. Below, we discuss each of them in detail. \paragraph{Evaluation measures:} In our experiments, we used accuracy and macro-averaged precision, recall, and F$_1$ score. The latter (macro-F$_1$) was the official evaluation measure for the shared task.
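For reference, a minimal sketch of how these macro-averaged scores can be computed with scikit-learn is given below; the label lists are purely illustrative.
\begin{verbatim}
# Minimal sketch: macro-averaged scores with scikit-learn
# (the label lists below are illustrative, not real predictions).
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = ["villain", "other", "victim", "other", "hero"]
y_pred = ["villain", "other", "other",  "other", "victim"]

acc = accuracy_score(y_true, y_pred)
p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"acc={acc:.3f} macro-P={p:.3f} macro-R={r:.3f} macro-F1={f1:.3f}")
\end{verbatim}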
\subsection{Sequence Labeling} \label{sss:sequence_labeling} For the sequence labeling experiments, we first converted the OCR text and the entities to the CoNLL BIO format. An example is shown in Figure~\ref{fig:data_iob_format}. To perform the conversion, we matched the entities in the text and assigned the corresponding tag (role label) to the matched tokens. Implicit entities that do not appear in the text were appended at the end of the text and assigned their annotated role; all other tokens were labeled with the O tag. We trained the model using Conditional Random Fields (CRFs) \cite{lafferty2001conditional}, which have been widely used in earlier work. As features, we used part-of-speech tags, token length, tri-grams, presence of digits, use of special characters, token shape, w2vcluster, LDA topics, and presence in a vocabulary list built on the training set and in a name list, among others.\footnote{More details about the feature set can be found at \url{https://github.com/moejoe95/crf-vs-rnn-ner}} We ran two sets of experiments: (\emph{i})~using the full BIO-formatted text, and (\emph{ii})~using only the entities, as shown in Figure~\ref{fig:data_iob_format}. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/data_iob_format.png} \caption{Example with text in BIO format.} \label{fig:data_iob_format} \end{figure} \subsection{Classification} \label{sss:classification} For the classification experiments, we first converted the dataset into a classification problem. Since each meme contains one or more entities, we reorganized the dataset so that each example consists of an entity, the OCR text, the image, and the entity's role. Hence, the dataset size is now the number of entity instances rather than the number of memes. We ended up with 17,514 training examples, which is the number of training entities, as shown in Table~\ref{tab:data_distribution}. We then ran different unimodal and multimodal experiments: (\emph{i})~only text, (\emph{ii})~only meme, and (\emph{iii})~text and meme together. For each setting, we also ran several baseline experiments. We further ran more advanced experiments, such as adding attention to the network and applying text-based data augmentation. Figure~\ref{fig:experimental_pipeline} shows our experimental pipeline for this classification task. For the unimodal experiments, we used the individual modalities, and we trained them using different pre-trained models. Note that for the text modality, we ran several fusion combinations (e.g., text and entity). For the multimodal experiments, we combined the embeddings from both modalities, and we ran the classification on the fused embedding, as shown in Figure~\ref{fig:experimental_pipeline}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figures/system_architecture.png} \caption{Diagram of our experimental pipeline.} \label{fig:experimental_pipeline} \end{figure} \subsubsection{Text Modality} For the text modality, we experimented with BERT \cite{text_bert} and XLM-RoBERTa \cite{liu2019roberta}. We performed ten reruns for each experiment using different random seeds, and then we picked the model that performed best on the development set. We used a batch size of 8, a learning rate of 2e-5, a maximum sequence length of 128, three epochs, and categorical cross-entropy as the loss function. We used the Transformers toolkit to train the transformer-based models. For the text-only modality, we also ran a further combination of experiments using the text and the entities, where we used bilinear fusion to combine them.
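A minimal sketch of this fine-tuning setup is shown below; it assumes the Hugging Face Transformers API and illustrative column names (\texttt{entity}, \texttt{ocr\_text}, \texttt{label}), and is a simplified sketch rather than our exact training script.
\begin{verbatim}
# Hedged sketch of the text-modality fine-tuning setup; column names,
# the "entity [SEP] text" pairing, and the output path are assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

ROLES = ["hero", "villain", "victim", "other"]
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(ROLES))

def encode(batch):
    # Pair the entity with the OCR text as a sentence pair.
    return tok(batch["entity"], batch["ocr_text"],
               truncation=True, max_length=128, padding="max_length")

args = TrainingArguments(output_dir="runs/text-bert", learning_rate=2e-5,
                         per_device_train_batch_size=8, num_train_epochs=3,
                         seed=42)
# train_ds / dev_ds are datasets.Dataset objects with the columns above:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds.map(encode, batched=True),
#                   eval_dataset=dev_ds.map(encode, batched=True))
# trainer.train()
\end{verbatim}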
We discuss this bilinear fusion technique in more detail in Section~\ref{sssec:multimodal}. \subsubsection{Image Modality} For our experiments using the image modality, we extracted features from a pre-trained model, and then we trained an SVM classifier on these features. In particular, we extracted features from the penultimate layer of the EfficientNet-b1 (EffNet) model \cite{tan2019efficientnet}, which was pretrained on the ImageNet dataset. For training on the extracted features, we used an SVM with its default parameter settings, without further optimization of its hyper-parameter values. We chose EffNet as it was shown to achieve better performance for some social media image classification tasks \cite{alam2021medic,alam2021social}. \subsubsection{Multimodal: Text and Image} \label{sssec:multimodal} For the multimodal experiments, we used the BLOCK Fusion~\cite{BlockFusion2019} approach, which was originally proposed for question answering (QA). Our motivation is that an entity can be seen as a question about the meme context, with its role as the answer. In a QA setting, there are three elements: (\emph{i})~a context (image or text), (\emph{ii})~a question, and (\emph{iii})~a list of answers. The goal is to select the right answer from the answer list. Similarly, we have four types of answers (i.e.,~roles). The task formulation is that, given an entity and a context (image or text), we need to determine the role of the entity in that context. BLOCK fusion is a multi-modal framework based on block-superdiagonal tensor decomposition, where the tensor is decomposed into blocks of smaller size, characterized by a set of mode-$n$ ranks \cite{Lathauwer2008}. It is a bilinear model that takes two vectors $x^1 \in R^I$ and $x^2 \in R^J$ as input and projects them to a $K$-dimensional space with tensor products: $y=\mathcal{T}\times x^1\times x^2$, where $y\in R^K$. Each component of $y$ is a quadratic form of the inputs, $\forall k \in [1;K]$: \begin{equation} y_k = \sum^I_{i=1}\sum^J_{j=1}\mathcal{T}_{ijk}\, x^1_i x^2_j \label{eq:block} \end{equation} BLOCK fusion can model bilinear interactions between groups of features, while limiting the complexity of the model and keeping expressive, high-dimensional mono-modal representations \cite{BlockFusion2019}. We used BLOCK fusion in different settings: (\emph{i})~for image and entity, (\emph{ii})~for text and entity, and (\emph{iii})~for [text + image] with entity. \paragraph{Text and entity:} We extracted embedding representations for the entity and for the text using a pretrained BERT model. We then fed both embedding representations into linear layers of 512 neurons each. The outputs of the two linear layers were taken as input to the trainable BLOCK fusion network, followed by a regularization layer and a linear layer before the final layer. \paragraph{Image and entity:} To build embedding representations for the image and the entity, we used pretrained vision transformer (ViT) \cite{Dosovitskiy2021} and BERT models, respectively. The outputs for the two modalities were then used as input to the BLOCK fusion network. \paragraph{Image, text, and entity:} In this setting, we first built embedding representations for the text and the image using pretrained BERT and ViT models, respectively. Then, we concatenated these representations (text + image) and passed them to a linear layer with 512 neurons. We then extracted an embedding representation for the target entity using the pretrained BERT model.
Afterwards, we merged the text + image and the entity representations and fed them into the fusion layer. In this way, we combined the image and the text representations into a unified context, aiming to predict the role of the target entity in this context. In all these experiments, we used a learning rate of $1\times10^{-6}$, a batch size of 8, and a maximum text length of 512. \subsubsection{Additional Experiments} We ran two additional sets of experiments using an attention mechanism and data augmentation, as such approaches have been shown to help in many natural language processing (NLP) tasks. \paragraph{Attention:} In the entity + image BLOCK fusion network, we used BLOCK fusion to merge the entity and the image representations. Instead of using the image representation directly, we applied an attention mechanism to the image and then fed the attended features, along with the entity representation, into the entity + image block. To compute the attention, we used the PyTorchNLP library.\footnote{\url{http://github.com/PetrochukM/PyTorch-NLP}} In a similar fashion, we applied the attention mechanism to the text and to the combined text + image representation. \paragraph{Augmentation:} Text data augmentation has recently gained a lot of popularity as a way to address data scarcity and class imbalance \cite{feng2021survey}. We used three types of text augmentation techniques to balance the distribution of the different classes: (\emph{i})~synonym augmentation using WordNet, (\emph{ii})~word substitution using BERT, and (\emph{iii})~a combination thereof. In our experiments, we used the NLPAug data augmentation package.\footnote{\url{https://github.com/makcedward/nlpaug}} Note that we applied augmentation six times for the \emph{hero} class, twice for the \emph{villain} class, and three times for the \emph{victim} class. These numbers were set empirically and require further investigation in future work.
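The two augmentation strategies can be sketched with the NLPAug API roughly as follows; the example sentence and the augmenter settings are illustrative and do not reproduce our exact configuration.
\begin{verbatim}
# Hedged sketch of the two augmenters (nlpaug); the input sentence and
# the parameters shown are illustrative only.
import nlpaug.augmenter.word as naw

text = "the senator is portrayed as the hero of the relief effort"

wordnet_aug = naw.SynonymAug(aug_src="wordnet")            # WordNet synonyms
bert_aug = naw.ContextualWordEmbsAug(
    model_path="bert-base-uncased", action="substitute")   # BERT substitution

print(wordnet_aug.augment(text))
print(bert_aug.augment(text))
\end{verbatim}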
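Finally, to make the bilinear interaction of Eq.~\ref{eq:block} concrete, the sketch below uses a plain full-rank bilinear layer; the actual BLOCK model additionally constrains the tensor $\mathcal{T}$ through its block-term decomposition, which this sketch does not implement, and the dimensions shown are illustrative.
\begin{verbatim}
# Sketch of y_k = sum_ij T_ijk x1_i x2_j using a full-rank bilinear layer;
# BLOCK additionally imposes a block-term decomposition on T (not done here).
import torch
import torch.nn as nn

I, J, K = 512, 512, 4            # illustrative dimensions (K = 4 roles)
bilinear = nn.Bilinear(I, J, K, bias=False)

x1 = torch.randn(8, I)           # e.g., projected entity representation
x2 = torch.randn(8, J)           # e.g., projected context representation
y = bilinear(x1, x2)             # shape (8, K): one score per role
\end{verbatim}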
\section{Introduction} Van der Waals complexes containing open-shell species are of great current interest. In particular, complexes containing atoms or molecules with orbital angular momentum necessarily involve multiple electronic states.\cite{DUBERNET:1991, DUBERNET:open:1994} They provide a test-bed for studying electronically non-adiabatic effects, which are important in the theory of reaction dynamics. \cite{alexander2004, retail2005, Ziemkiewicz:2005} In addition, the observation of pre-reactive Van der Waals complexes trapped in bound levels \cite{Loomis:1997, Liu:1999, wheeler2000, Lester:2001, Merritt:2005} can shed light on intermolecular forces in the entrance and exit channels of chemical reactions.\cite{Dubernet:clhcl:1994, Meuwly:2003, klos2004, Fishchuk:2006} The form of these shallow, long-range wells can be important in determining reaction outcomes \cite{Skouteris:1999, wang2002} and transition-state geometries. \cite{neumark2002, Neumark:2005} In this paper, we consider the complex consisting of an open-shell SH radical and an Ar atom. New \emph{ab initio} potential energy surfaces (PESs) for the \emph{X} $^2\Pi$ state are presented, and used in calculations of the bound rotation-vibration levels. We discuss the possibility of employing an approximate approach in the bound-state calculations, using a single adiabatic PES rather than the two surfaces used in the standard method. \cite{DUBERNET:1991} Finally, the bound-state energies and wavefunctions are used to simulate the vibrationally resolved electronic spectrum. The Ar--SH cluster was first detected experimentally by Miller and co-workers \cite{miller92} using laser-induced fluorescence excitation spectroscopy. Subsequently, this group developed empirical potential energy surfaces for the complex, for both the \emph{A} state \cite{miller97} and the \emph{X} state, \cite{miller99} by fitting model functions to reproduce laser-induced fluorescence (LIF) results. The region of the \emph{A} state PES corresponding to the Ar--S-H configuration (Jacobi angles between $\sim 90$ and 180 degrees) was determined only approximately, because the fluorescence experiments did not probe this zone. More recently, Hirst \emph{et al.}\ \cite{hirst2004} have presented a PES for the \emph{A} state based on \emph{ab initio} calculations at the RCCSD(T) level with the aug-cc-pV5Z basis set. This surface was used to predict bound vibrational levels in the Ar--S-H configuration \cite{hirst2004} which have not, to our knowledge, been observed in experiment so far. A possible reason why these levels have eluded detection is discussed in Section IV of this paper. Sumiyoshi \emph{et al.}\ \cite{sumiyoshi2000} have recorded high-resolution spectra for Ar--SH in the ground electronic state using Fourier-transform microwave (FTMW) spectroscopy. These authors also produced PESs for the \emph{X} state based on fitting a function to reproduce their experimental results, \cite{sumiyoshi2000} and these surfaces were later improved with the aid of some \emph{ab initio} results.\cite{sumiyoshi2003} Most recently, results from microwave-millimeter-wave double-resonance spectroscopy \cite{sumiyoshi2005i} were employed to determine new three-dimensional PESs for the \emph{X} state. \cite{sumiyoshi2005ii} The family of weakly bound clusters containing a rare gas atom and either the OH or SH radical has been reviewed by Carter \emph{et al.} \cite{miller2000} in 2000 and by Heaven \cite{heaven2005} in 2005. The structure of the present paper is as follows.
In Section II we present new PESs for Ar--SH ($^2\Pi$) based entirely on \emph{ab initio} calculations at the RCCSD(T) level with an aug-cc-pV5Z basis set. In Section III we describe bound rotation-vibration level calculations using these surfaces. We also investigate the wavefunctions and introduce a new adiabatic approximation for the bound states, including spin-orbit coupling. In Section IV the results are combined with those of a previous study of the \emph{A} $^2\Sigma^+$ state, in order to produce a high-quality simulation of the vibrationally resolved electronic spectrum. \section{Potential energy surfaces} The geometry of the complex is specified in terms of body-fixed Jacobi coordinates $r$, $R$ and $\theta$. $R$ is the length of the vector $\mathbf{R}$ which links the center of mass of the SH fragment to the Ar nucleus. The vector $\mathbf{r}$ links the S nucleus to the H nucleus: its modulus $r$ is the SH bond length. The angle between $\mathbf{R}$ and $\mathbf{r}$ is $\theta$, so that $\theta=0$ corresponds to a linear Ar--H-S configuration. For this work the bond length $r$ was held constant at the experimentally determined equilibrium value of 1.3409 \AA,\ \cite{herzbergIV} which is justified because the vibrational motion of the diatom is very weakly coupled to the relatively low-frequency Van der Waals modes of interest. Energies were calculated for a regular grid of geometry points using the MOLPRO quantum chemistry program.\cite{molpro} These points are at every distance $R$ from 3.25 {\AA } to 5.5 {\AA } in steps of 0.25 {\AA } and for every angle $\theta$ from 0$^{\circ}$ to 180$^{\circ}$ in steps of 15$^{\circ}$. This gives a total of 130 points. We used the RCCSD(T) method (restricted coupled cluster with single, double and non-iterative triple excitations) \cite{knowles93, knowles2000} with the aug-cc-pV5Z basis set. \cite{dunning89, dunning92, dunning94} The counterpoise procedure of Boys and Bernardi \cite{boys70} was used to correct for basis set superposition error (BSSE). This is the same level of theory and basis set as were used in recent calculations of the PES for the $A$-state of Ar--SH.\cite{hirst2004} Two potential surfaces were obtained from the \emph{ab initio} calculations. These correspond to two adiabatic electronic states: one symmetric ($A'$) and one antisymmetric ($A''$) with respect to reflection in the plane of the nuclei. The two states are degenerate at linear geometries but nondegenerate at nonlinear geometries: the splitting is an example of the Renner-Teller effect. The interaction energies for each state were interpolated using a 2D spline function, and contour plots of the resulting surfaces are shown in Fig.\ \ref{fig:adiabaticPESs}. The $A'$ and $A''$ surfaces result from the electronic Hamiltonian without spin-orbit coupling: a discussion of surfaces including spin-orbit coupling is presented in section IV. \begin{figure} \includegraphics[width=85mm]{one} \caption{\emph{Ab initio} potential energy surface contour plots for Ar--SH ({\em X}) in the $A'$ state (upper plot) and the $A''$ state (lower plot). Solid contour lines are shown at 10 cm$^{-1}$ intervals, ranging from 0 cm$^{-1}$ to $-150$ cm$^{-1}$ inclusive for the $A'$ surface and 0 cm$^{-1}$ to $-120$ cm$^{-1}$ inclusive for the $A''$ surface. Dashed contour lines are shown at 100 cm$^{-1}$ intervals from +100 cm$^{-1}$ to +500 cm$^{-1}$ inclusive for both surfaces. The linear Ar--H-S conformation corresponds to $\theta $ = 0$^{\circ}$. 
\label{fig:adiabaticPESs}} \end{figure} The adiabatic surfaces (adiabats) are qualitatively similar to those reported for He--SH and Ne--SH complexes.\cite{Cybulski:2000} The latter were calculated at the RCCSD(T) level, using the smaller aug-cc-pVTZ basis set, but with an additional set of bond functions, and counterpoise correction. A comparison of the positions and energies of the minima on the Ar--SH surfaces presented here with those for $X$-state He--SH and Ne--SH is given in Table \ref{tab1}. For all the $A''$ surfaces there is a global minimum in the linear X--S-H configuration (where X is He, Ne or Ar) and a local minimum in the linear X--H-S configuration. The X--H-S configurations are saddle points on the $A'$ surfaces, which have shallow local minima at $\theta=180^\circ$ and global minima at nonlinear configurations. The $A'$ global minima are deeper than those on the $A''$ surfaces, because in the $A'$ state the SH $\pi$ hole is directed towards the Ar atom, resulting in reduced repulsion. The global minima for the $A'$ state occur at angles $\theta$ that increase with the atomic number of the rare gas atom. Also, as expected, the minima are deeper for clusters containing heavier (and more polarizable) rare gas atoms. \begin{table} \caption{\label{tab1} Positions and well depths of potential minima on the $A'$ and $A''$ adiabatic surfaces for the $X$-state of SH--rare gas clusters. The results for Ne--SH and He--SH clusters are from ref.\ \onlinecite{Cybulski:2000}.} \begin{tabular}{ccccc} \hline\hline Cluster & State & $R$/\AA & \qquad $\theta/^{\circ}$ \qquad & depth/cm$^{-1}$\\ \hline Ar--SH & $A'$ & 3.678 & 66.6 & 157.69\\ Ne--SH & $A'$ & 3.611 & 57.2 & 57.05\\ He--SH & $A'$ & 3.639 & 54.4 & 25.97\\ \hline Ar--SH & \quad $A'$ and $A''$ \quad & 3.801 & 180 & 128.54\\ Ne--SH & \quad $A'$ and $A''$ \quad & 3.593 & 180 & 54.27\\ He--SH & \quad $A'$ and $A''$ \quad & 3.593 & 180 & 25.27\\ \hline Ar--SH & $A''$ & 4.274 & 0 & 125.22\\ Ne--SH & $A''$ & 4.101 & 0 & 45.75\\ He--SH & $A''$ & 4.126 & 0 & 21.16\\ \hline\hline \end{tabular} \end{table} In order to perform dynamical calculations on Ar--SH, we need to evaluate the matrix elements of the potential between electronic states labelled with an angular momentum quantum number $\lambda$. For this purpose it is convenient to re-express the potential energy surfaces as the sum ($V_0$) and difference ($V_2$) potentials \begin{displaymath} V_0(R,\theta)=\frac{1}{2}\left[{V_{A'}(R,\theta)+V_{A''}(R,\theta)}\right], \end{displaymath} \begin{displaymath} V_2(R,\theta)=\frac{1}{2}\left[{V_{A'}(R,\theta)-V_{A''}(R,\theta)}\right]. \end{displaymath} Contour plots of these surfaces are shown in Fig.\ \ref{fig:avdiffpots}. They are quite similar to those recently presented by Sumiyoshi \emph{et al.}\ \cite{sumiyoshi2005ii} The latter were fitted to reproduce experimental results, with starting values for the potential parameters obtained from \emph{ab initio} calculations (RCCSD(T)/aug-cc-pVTZ). The form of our sum potential is also qualitatively similar to those recently presented for Ne--SH and Kr--SH by Suma \emph{et al.}\cite{suma04} \begin{figure} \includegraphics[width=85mm]{two} \caption{Contour plots for the Ar--SH ({\em X}) difference (upper plot) and sum (lower plot) potential energy surfaces. Solid contour lines are shown at 10 cm$^{-1}$ intervals, ranging from 0 cm$^{-1}$ to $-300$ cm$^{-1}$ inclusive for the difference surface and from 0 cm$^{-1}$ to $-120$ cm$^{-1}$ inclusive for the sum surface. 
Dashed contour lines are shown at 100 cm$^{-1}$ intervals from +100 cm$^{-1}$ to +500 cm$^{-1}$ for the sum surface only. The linear Ar--H-S conformation corresponds to $\theta$ = 0$^{\circ}$. \label{fig:avdiffpots}} \end{figure} \section{Bound-state calculations} \subsection{Coupled channel calculations} The bound states of a complex such as Ar--SH (\emph{X}$^2\Pi$) involve \emph{both} potential energy surfaces. In the present work we use a coupled-channel approach to calculate the bound states. In a body-fixed axis system the Hamiltonian operator is \begin{equation} H= -\frac{\hbar^2}{2\mu}R^{-1}\left(\frac{\partial^2}{\partial R^2}\right) R + H_{\rm mon} + \frac{\hbar^2(\hat J -\hat\jmath)^2}{2\mu R^2}+\hat V, \label{ham} \end{equation} where $H_{\rm mon}$ is the monomer Hamiltonian and $\hat V$ is the intermolecular potential. In a full treatment including overall rotation, the total wavefunction of the complex may be expanded as \cite{DUBERNET:1991} \begin{equation} \Psi^{JM}_n=R^{-1}\sum_{jP\lambda\sigma} \Phi^{JM}_{jP;\lambda\sigma}\,\chi^{J}_{Pn;j;\lambda\sigma}(R), \label{fullwave} \end{equation} where the channel basis functions are \begin{eqnarray} \Phi^{JM}_{jP;\lambda\sigma} = \varphi_\sigma \varphi_\lambda&& \left({2j+1\over4\pi}\right)^\frac{1}{2} D^{j*}_{P\omega}(\phi,\theta,0) \nonumber\\ \times&&\left({2J+1\over4\pi}\right)^\frac{1}{2} D^{J*}_{MP}(\alpha,\beta,0). \label{chanbas} \end{eqnarray} The monomer basis functions are labelled by Hund's case (a) quantum numbers $\lambda$ and $\sigma$, the projections of the electronic orbital and spin angular momentum along the SH axis, and $\omega=\lambda+\sigma$. \footnote{We adopt the convention of using {\it lower-case} letters for all quantities that refer to \emph{monomers}, and reserve upper-case letters for quantities that refer to the complex as a whole.} $\lambda$, $\sigma$ and $\omega$ are all signed quantities. The $D$ functions are Wigner rotation matrices.\cite{Brink} The first $D$ function describes the rotation of the monomer with respect to body-fixed axes, with angular momentum quantum number $j$ (including electronic orbital and spin angular momentum) and projection $P$ along the intermolecular vector $\mathbf{R}$. The second $D$ function describes the rotation of the complex as a whole, with total angular momentum $J$ and projections $M$ and $P$ onto space-fixed and body-fixed axes respectively. The angles $(\beta,\alpha)$ describe the orientation of the $\mathbf{R}$ vector in space. The monomer Hamiltonian used here for SH($X^2\Pi$) is \cite{Brown} \begin{equation} H_{\rm mon} = b(\hat j-\hat l-\hat s)^2 + H_{\rm so}, \end{equation} where the rotational constant $b$ is 9.465 cm$^{-1}$. \cite{herzbergIV} For simplicity the spin-orbit Hamiltonian $H_{\rm so}$ is taken to be independent of $R$ and $\theta$ and equal to $a\lambda\sigma$, with $a=-378.5$ cm$^{-1}$.\cite{herzbergIV} It is convenient to expand the sum and difference potentials in terms of renormalized spherical harmonics $C_{lm}(\theta,\phi)$, \begin{eqnarray} V_0(R,\theta) = \sum_l V_{l0}(R) C_{l0}(\theta,0); \\ V_2(R,\theta) = \sum_l V_{l2}(R) C_{l2}(\theta,0). 
\end{eqnarray} The potential matrix elements between the angular basis functions may then be written \begin{eqnarray}&&\langle JM;jP;\lambda \sigma |\hat V| JM;j'P';\lambda'\sigma'\rangle\nonumber\\ &&= \delta_{PP'} \delta_{\sigma\sigma'} \sum_l V_{l,|\lambda-\lambda'|}(R) \, g_{l,\lambda-\lambda'}(j\omega; j'\omega';P), \end{eqnarray} where the potential coupling coefficients are \begin{eqnarray} &&g_{l,\lambda-\lambda'}(j\omega; j'\omega';P) \nonumber\\ &&\quad=(-1)^{P-\omega}\left[(2j+1)(2j'+1) \right]^\frac{1}{2}\nonumber\\ &&\quad\times\left(\matrix{j&l&j'\cr -\omega&\lambda-\lambda'&\omega'\cr}\right) \left(\matrix{j&l&j'\cr -P&0&P\cr}\right). \end{eqnarray} The potential matrix elements are independent of $J$ and diagonal in $P$. Nevertheless, in a full treatment the wavefunctions of the complex are linear combinations of functions with different values of $P$, because the operator $(\hat J - \hat\jmath)^2$ in Eq.\ \ref{ham} has matrix elements off-diagonal in $P$ ($\Delta P=\pm1$). However, the full wavefunctions are eigenfunctions of the parity operator: symmetrized basis functions may be constructed by taking even and odd linear combinations of $\Phi^{JM}_{jP;\lambda\sigma}$ and $\Phi^{JM}_{j{-P};{-\lambda}{-\sigma}}$. In the present work, the coupled equations are solved using the BOUND program of Hutson.\cite{Hutson:bound:2006} The wavefunction log-derivative matrix is propagated outwards from a boundary point at short range ($R_{\rm min}$) and inwards from a boundary point at long range ($R_{\rm max}$) to a matching point ($R_{\rm mid}$) in the classically allowed region. If $E$ is an eigenvalue of the Hamiltonian, the determinant of the difference between the two log-derivative matrices at $R_{\rm mid}$ is zero. \cite{Johnson:1978,Hutson:1994} The BOUND program locates eigenvalues by searching for zeroes of the lowest eigenvalue of the matching determinant,\cite{Hutson:1994} using bisection followed by the secant method. In the present work we use $R_{\rm min}=3.0$~\AA, $R_{\rm max}=9.5$~\AA, $R_{\rm mid}=4.2$~\AA\ and a log-derivative sector size of 0.02 \AA. The basis set includes all SH functions up to $j_{\rm max}=15/2$ in both spin-orbit manifolds. The energies obtained from full close-coupling calculations for the lowest few $J=3/2$ levels of Ar--SH (actually carried out in the equivalent space-fixed basis set \cite{DUBERNET:1991}) are shown in Table \ref{tab:energies}. These levels all correlate with SH $^2\Pi_{3/2}$, $j=3/2$ and are labelled with the projection quantum number $P$ and Van der Waals stretching quantum number $n$. We use the convention that levels in which $P$ and $\omega$ for the dominant basis functions have the \emph{same} sign are labelled with positive $P$ and those where they have \emph{different} sign are labelled with negative $P$. \cite{DUBERNET:1991} In order of increasing energy, the lowest four levels for Ar--SH have $P$ = +3/2, +1/2, $-3/2$, $-1/2$, in contrast to Ar--OH where the order is +3/2, +1/2, $-1/2$, $-3/2$. \cite{DUBERNET:1991, Dubernet:1993} The difference is due to the anisotropy of the sum potential $V_0(R,\theta)$: the ratio $V_{20}/V_{10}$ is larger for Ar--SH. \begin{table} \caption{Bound-state energies for $J=3/2$ levels of Ar--SH from full close-coupling calculations (average $E_{\rm CC}$ and parity splitting $\Delta E_{\rm CC}$), helicity decoupling calculations ($E_{\rm HD}$) and single-surface calculations on the lower adiabatic surface including spin-orbit coupling ($E_{\rm Ad}$). 
All energies are relative to the dissociation energy to form SH ($X^2\Pi_{3/2},\ j=3/2$). All energies are given as wavenumbers in cm$^{-1}$. }\label{tab:energies} \begin{ruledtabular} \begin{tabular}{cccccc} $P$ & $n$ & $E_{\rm CC}$ & $\Delta E_{\rm CC}$ & $E_{\rm HD}$ & $E_{\rm Ad}$ \\ $+3/2$ & 0 & $-102.745$ & $+3.5\times10^{-5}$ & $-102.652$ & $-102.725$  \\ $+1/2$ & 0 &  $-97.766$ & +0.144              &  $-97.593$ &  $-97.667$  \\ $-3/2$ & 0 &  $-94.940$ & $-1.1\times10^{-3}$ &  $-94.894$ &  $-95.035$  \\ $-1/2$ & 0 &  $-92.116$ & $-0.138$            &  $-92.222$ &  $-92.293$  \\ \\ $+3/2$ & 1 &  $-77.292$ & $+2.6\times10^{-5}$ &  $-77.111$ &  $-77.258$   \\ $+1/2$ & 1 &  $-72.276$ & +0.134              &  $-72.100$ &  $-72.148$   \\ $-3/2$ & 1 &  $-69.356$ & $-1.3\times10^{-3}$ &  $-69.265$ &  $-69.382$   \\ $-1/2$ & 1 &  $-67.207$ & $-0.124$            &  $-67.291$ &  $-67.315$   \\ \end{tabular} \end{ruledtabular} \end{table} The close-coupling results may be compared with the microwave experiments of Sumiyoshi \emph{et al.}\ ,\cite{sumiyoshi2000} who obtained a rotational constant $B^{\rm eff}=1569.66$ MHz (0.05236 cm$^{-1}$) and parity doubling constant $q_J=0.32873$ MHz ($1.10\times10^{-5}$ cm$^{-1}$) for the ground state ($P=+3/2$). These correspond to a $J=3/2-5/2$ separation of 0.262 cm$^{-1}$ and a $J=3/2$ parity splitting of $6.6\times10^{-5}$ cm$^{-1}$, which compare with calculated values of 0.263 cm$^{-1}$ and $3.5\times10^{-5}$ cm$^{-1}$ respectively. The very good agreement for the rotational spacing suggests that the equilibrium distance of our \emph{ab initio} potential is quite accurate. The difference of almost a factor of 2 in the parity splitting is less satisfactory, but Dubernet \emph{et al.}\ \cite{Dubernet:1992} have shown that such terms involve complicated combinations of high-order terms involving the difference potential, spin uncoupling and Coriolis perturbations. Small differences between the energies of excited states can have a large effect on the parity splitting. Sumiyoshi \emph{et al.}\ \cite{sumiyoshi2005i} have very recently measured microwave--millimetre-wave double-resonance spectra of the $P=+1/2\leftarrow +3/2$ band of Ar--SH. The centre of gravity of the parity components of the $J=3/2\leftarrow 1/2$ line is 81.8 GHz (2.73 cm$^{-1}$). The corresponding calculated quantity from our potential is 4.805 cm$^{-1}$. In addition, the measured parity splitting for the $J=3/2$, $P=+1/2$ level is about 5300 MHz (0.177 cm$^{-1}$), which compares with 0.144 cm$^{-1}$ from our calculations. An interesting possibility for future work would be to adjust the \emph{ab initio} potential to improve the fit to the spectroscopic parameters using the morphing procedure of Meuwly and Hutson. \cite{Meuwly:1999} \subsection{Wavefunctions} The full wavefunctions (Eq.\ \ref{fullwave}) contain contributions from all possible values of $P$ and $\omega$ and are not separable between the body-fixed angles ($\theta,\phi$) and the space-fixed angles ($\beta,\alpha$). This makes them hard to visualize. In addition, since the mixings depend on the total angular momentum $J$, they are not convenient for calculating band intensities. We therefore introduce two approximations to simplify the description of the wavefunctions for this purpose. First, we introduce the \emph{helicity decoupling} approximation, where matrix elements of $(\hat J - \hat\jmath)^2$ off-diagonal in $P$ are neglected. 
Secondly, we neglect matrix elements of $H_{\rm mon}$ off-diagonal in $\sigma$ (spin-uncoupling terms). The coupled equations then simplify to \begin{widetext} \begin{eqnarray} \left[-\frac{\hbar^2}{2\mu} \frac{d^2}{dR^2} + E_{\omega j}^{\rm mon} \right. &+& \left. \frac{\hbar^2}{2\mu R^2}\left(J(J+1)+j(j+1)-2\omega^2\right)-E^J_{Pn}\right] \chi^{J}_{Pn;j;\lambda\sigma}(R) \nonumber\\ &=& -\sum_{j'\lambda'} \left<JM;j'P\lambda' \sigma\right|\hat V\left|JM;jP\lambda \sigma\right> \chi^{J}_{Pn;j';\lambda'\sigma}(R). \label{eqcoup} \end{eqnarray} \end{widetext} Since all matrix elements off-diagonal in $P$ and $\sigma$ have been neglected, states with quantum numbers $(P,\omega)$ and $(-P,-\omega)$ are uncoupled and it is not necessary to take combinations of definite total parity. However, states with $(P,\omega)$ and $(-P,\omega)$ or $(P,-\omega)$ have different potential energies and are nondegenerate. The energy levels obtained from helicity decoupling calculations for Ar--SH are included in Table \ref{tab:energies}. The approximation is accurate to about 0.2 cm$^{-1}$ for $n=0$ and 1, but is less reliable for higher states. In particular, the region between $-60$ and $-40$ cm$^{-1}$ contains both $j=3/2,\ n=2$ and $j=5/2,\ n=0$ levels. In the presence of the resulting near-degeneracies, the terms that are omitted in the approximate Hamiltonian can cause quite significant level shifts. In the helicity decoupling approximation, $P$ is a good quantum number. However, $\omega$ is not, because $V_2$ mixes levels with $\Delta\lambda=\pm2$ (but $\Delta\sigma=0$) and thus mixes $\omega=+3/2$ with $-1/2$ and $\omega=+1/2$ with $-3/2$. However, in the absence of terms off-diagonal in $\sigma$ the two sets are not mixed with one another. Each wavefunction thus has only \emph{two} components corresponding to different values of $\omega$. The wavefunctions may be written \begin{equation} \Psi^J_{Pn} = \sum_\omega \chi^J_{Pn;\omega}(R,\theta) \, \Phi^{JM}_{P;\lambda\sigma}, \label{eqpsip} \end{equation} where the basis functions now exclude the $\theta$-dependence, \begin{eqnarray} \Phi^{JM}_{P;\lambda\sigma} = \varphi_\sigma \varphi_\lambda \left(\frac{2J+1}{8\pi^2}\right)^{1/2} D^{J*}_{MP}(\alpha,\beta,\phi), \end{eqnarray} and the 2-dimensional functions that characterize the components of the wavefunction for each $\omega$ are \begin{equation} \chi^J_{Pn;\omega}(R,\theta) = \sum_j \left(j+\textstyle\frac{1}{2}\right)^{1/2} d^j_{P\omega}(\theta) \chi^{J}_{Pn;j;\lambda\sigma}(R), \label{eqchi} \end{equation} where $d^j_{P\omega}(\theta)$ is a reduced rotation matrix. \cite{Brink} We have adapted the BOUND program \cite{Hutson:bound:2006} to calculate wavefunctions for this case by back-substituting into the log-derivative propagation equations, as described for the closed-shell (single surface) case by Thornley and Hutson. \cite{Thornley:1994} Examples of the resulting wavefunctions $\chi^J_{Pn;\omega}(R,\theta)$ are shown in Fig.\ \ref{wfnbig}. It may be seen that the components for different values of $\omega$ have quite different radial and angular distributions. 
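For reference, the $j=3/2$ reduced rotation matrix elements that dominate these components can be written explicitly (up to convention-dependent phases; only the magnitudes matter for the present discussion): \begin{eqnarray} d^{3/2}_{3/2,\,3/2}(\theta) &=& \cos^3(\theta/2), \nonumber\\ d^{3/2}_{3/2,\,-1/2}(\theta) &=& \sqrt{3}\,\sin^2(\theta/2)\cos(\theta/2), \nonumber\\ d^{3/2}_{1/2,\,3/2}(\theta) &=& \sqrt{3}\,\cos^2(\theta/2)\sin(\theta/2), \nonumber\\ d^{3/2}_{1/2,\,-1/2}(\theta) &=& -\textstyle\frac{1}{2}(3\cos\theta+1)\sin(\theta/2). \nonumber \end{eqnarray} In particular, $d^{3/2}_{3/2,3/2}$ vanishes as $\cos^3(\theta/2)$ while $d^{3/2}_{3/2,-1/2}$ vanishes only as $\cos(\theta/2)$ as $\theta\rightarrow180^\circ$; this limiting behaviour underlies the discussion of the mixing angles below. 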
\begin{figure*} \includegraphics[width=\textwidth]{three} \caption{[Colour Figure] Contour plots of the wavefunction components, superimposed on the average potential.\label{wfnbig}} \end{figure*} The Ar--SH wavefunctions are qualitatively similar to those for Ne--SH obtained by Cybulski {\em et al.}\ \cite{Cybulski:2000} Since the potential anisotropy for Ar--SH is only a few tens of cm$^{-1}$ in the well region, there is only weak mixing of SH rotational functions with different values of $j$. For this reason the wavefunctions are dominated by the functions $d^{3/2}_{P\omega}(\theta)$, as described by Dubernet and Hutson \cite{DUBERNET:1991} for the case of Ar--OH. The $d$ functions are shown, for example, in Fig.\ 7 of ref.\ \onlinecite{DUBERNET:1991} and the angular parts of the wavefunctions of Fig.\ \ref{wfnbig} follow them quite closely. \subsection{Adiabatic approximations} In a basis set of Hund's case (a) functions with signed values of $\lambda=\pm1$ and $\sigma=\pm1/2$, we can define new adiabatic surfaces (adiabats) including spin-orbit coupling as eigenvalues of the Hamiltonian matrix at each value of $R$ and $\theta$, \begin{equation} \left(\matrix{V_0+\frac{1}{2}a & 0 & V_2 & 0 \cr               0 & V_0-\frac{1}{2}a & 0 & V_2 \cr               V_2 & 0 & V_0-\frac{1}{2}a & 0 \cr               0 & V_2 & 0 & V_0+\frac{1}{2}a \cr}\right), \end{equation} where again the spin-orbit Hamiltonian is taken to be simply $a\lambda\sigma$. This clearly factorizes into two equivalent $2\times2$ matrices, one containing $\omega=+3/2$ and $-1/2$ and the other containing $\omega=+1/2$ and $-3/2$. The resulting adiabats may be designated $V_+(R,\theta)$ and $V_-(R,\theta)$ with corresponding electronic functions $\psi_+(R,\theta)$ and $\psi_-(R,\theta)$ given by \begin{eqnarray} &&\left(\matrix{\psi_+(R,\theta) \cr \psi_-(R,\theta) \cr}\right) \nonumber\\&&\quad= \left(\matrix{\cos\alpha_{\rm ad}(R,\theta) & \sin\alpha_{\rm ad}(R,\theta) \cr -\sin\alpha_{\rm ad}(R,\theta) & \cos\alpha_{\rm ad}(R,\theta)}\right) \left(\matrix{\varphi_{\pm3/2} \cr \varphi_{\mp1/2} \cr}\right), \label{anglead} \end{eqnarray} where $\varphi_\omega=\varphi_\lambda\varphi_\sigma$. The adiabats for Ar--SH are shown in Fig.\ \ref{fig:so_adiabats}. Since for Ar--SH $V_2(R,\theta)$ is small compared to $a$ in the well region, the lower adiabat is always predominantly $\omega=\pm3/2$ in character and the upper adiabat is always predominantly $\omega=\mp1/2$ in character. The corresponding mixing angle $\alpha_{\rm ad}$ is 0 at $\theta=0$ and $180^\circ$ (where $V_2(R,\theta)=0$) and less than $20^\circ$ at other angles for $R>3.5$~\AA. A contour plot of the mixing angle is shown in Fig.\ \ref{mix_ang_surf}. \begin{figure} \includegraphics[width=85mm]{four} \caption{Contour plots for Ar--SH ({\em X}) adiabats including spin-orbit coupling. Contour lines are shown at 10 cm$^{-1}$ intervals, ranging from 70 cm$^{-1}$ to $+a/2$ for the upper surface and from $-310$ cm$^{-1}$ to $-a/2$ for the lower surface. \label{fig:so_adiabats}} \end{figure} \begin{figure} \includegraphics[width=85mm]{five} \caption{Contour plot of the adiabatic mixing angle $\alpha_{\textrm{ad}}$. This angle is derived from the adiabats and is defined in Eq.\ \ref{anglead}. The contour lines are spaced at $2^\circ$ intervals.\label{mix_ang_surf}} \end{figure} A further consequence of the large spin-orbit coupling constant is that both adiabats resemble the {\it sum} potential $V_0(R,\theta)$ much more than the $A'$ and $A''$ potentials. 
The spin-orbit coupling has in effect quenched the splitting between the $A'$ and $A''$ states. This explains why there is no tendency for the wavefunctions shown in Fig.\ \ref{wfnbig} to ``fall into'' the non-linear minimum of the $A'$ state. The existence of adiabats including spin-orbit coupling suggests a Born-Oppenheimer separation in which the total wavefunctions are written approximately as \begin{equation} \Psi_{iPn} \approx R^{-1} \psi_i(R,\theta) \chi_{iPn}(R,\theta), \label{psibo} \end{equation} where $\psi_i$ is one of the functions of Eq.\ \ref{anglead} and $\chi_{iPn}(R,\theta)$ is a solution of an effective Schr\"odinger equation of the form \begin{eqnarray} &&\left[ -\frac{\hbar^2}{2\mu}\frac{\partial^2}{\partial R^2} + H_{\rm rot}+ \frac{\hbar^2(\hat J -\hat\jmath)^2}{2\mu R^2} \right. \nonumber\\ && +\ \Biggl. V_i(R,\theta) - E_{iPn} \Biggr]\chi_{iPn}(R,\theta) = 0. \label{eqad} \end{eqnarray} However, the appropriate angular operator $H_{\rm rot}$ to use in such a calculation is hard to define. The reduced rotation matrices $d^j_{P\omega}(\theta)$ that describe the free SH molecule are eigenfunctions of $H_{\rm rot} = b(\hat\jmath^2-2\omega^2)$, where \begin{equation} \hat\jmath^2 = \left[-\frac{1}{\sin\theta} \frac{\partial}{\partial\theta} \left(\sin\theta \frac{\partial}{\partial\theta} \right) + \frac{P^2 + \omega^2 -2P\omega\cos\theta}{\sin^2\theta} \right]. \label{eqhrot} \end{equation} This contains singularities at $\theta=0$ and/or $180^\circ$ that depend on the values of $P$ and $\omega$. However, there is no single value of $\omega$ that is appropriate at all configurations. The simplest approach is to replace $\omega$ in Eq.\ \ref{eqhrot} with the value that is appropriate at $\theta=0$ and $180^\circ$ and solve Eq.\ \ref{eqad} in a basis set of $d$ functions for each value of $P$. This is equivalent to solving the coupled equations using a basis set containing only functions with a single value of $\omega$. The results obtained with this approximation are included in Table \ref{tab:energies}: it may be seen that it gives energies that are generally slightly too low (compared to the helicity decoupling results), by 0.05 to 0.15 cm$^{-1}$. A slightly better but significantly more complicated approximation would be to replace $\omega$ with $\langle\omega\rangle$ and $\omega^2$ with $\langle\omega^2\rangle$ in Eq.\ \ref{eqhrot} to give an improved effective potential. One approach that is clearly {\it not} appropriate is to carry out a bound-state calculation on a single adiabat $V_\pm(R,\theta)$ assuming that the SH molecule behaves as a closed-shell rigid rotor. Such a calculation would give substantially incorrect energies and wavefunctions. It is in fact true that {\it no wavefunction of the form (\ref{psibo}) can have the correct behavior near both linear geometries}. To see this, consider an alternative definition of the mixing angle that can be obtained from a single wavefunction in the helicity decoupling approximation, \begin{equation} \tan\alpha^J_{Pn}(R,\theta) = \frac{\chi^J_{Pn;\mp1/2}(R,\theta)}{\chi^J_{Pn;\pm3/2}(R,\theta)}. \label{anglewav} \end{equation} This quantity is plotted for $n=0$ and all four $P$ values corresponding to $j=3/2$ in Fig.\ \ref{mix_ang_wfns}. The mixing angles for $P=+3/2$ and $P=+1/2$ bear some similarity to $\alpha_{\rm ad}(R,\theta)$ (Fig.\ \ref{mix_ang_surf}) at small $\theta$, but tend to $90^\circ$ instead of zero at $\theta = 180^\circ$. 
Conversely, the mixing angles for $P=-3/2$ and $P=-1/2$ tend to $90^\circ$ at $\theta=0$. This is easy to explain in terms of the reduced rotation matrices that appear in Eq.\ \ref{eqchi}. For example, the functions $d^j_{\pm3/2,\pm3/2}(\theta)$ all behave as $\cos^3 (\theta/2)$ as $\theta\rightarrow 180^\circ$, while the functions $d^j_{\mp1/2,\pm3/2}(\theta)$ behave as $\cos (\theta/2)$. This corresponds to $\tan\alpha^J_{3/2,n} \rightarrow \infty$ as $\theta\rightarrow 180^\circ$ so $\alpha^J_{3/2,n} \rightarrow 90^\circ$ in that limit. The point here is that the component of the $P=+3/2$ wavefunction on the $\omega=-1/2$ surface goes to zero more slowly than that on the $\omega=+3/2$ surface as $\theta\rightarrow 180^\circ$. Fig.\ \ref{mix_ang_wfns} shows that the coupled-channel wavefunctions (\ref{eqpsip}) for $P=+3/2$ and $+1/2$ are predominantly in the $\omega=-1/2$ state near $\theta=180^\circ$, which corresponds to the {\it upper} adiabat rather than the lower one. The $P=-3/2$ and $-1/2$ wavefunctions show similar behavior around $\theta=0$. This is not the behavior implied by Eq.\ \ref{psibo}. \begin{figure} \includegraphics[width=85mm]{six} \caption{[Colour Figure] Comparison of the adiabatic mixing angle $\alpha_{\mathrm{ad}}$ (thick black line) with angles obtained from wavefunctions correlating with $j=3/2$, $\omega=+3/2$ for $P=+3/2$ (red), $P=+1/2$ (blue), $P=-1/2$ (green) and $P=-3/2$ (black). The mixing angles are shown as cuts through the corresponding surfaces taken at $R=3.9$ \AA. \label{mix_ang_wfns}} \end{figure} \section{Electronic spectrum calculation} \subsection{Transition wavenumbers} In order to calculate the line positions in the vibrationally resolved $A\ ^2\Sigma^+ \leftarrow X\ ^2\Pi$ electronic spectrum, we require the bound-state energies for the excited electronic state, as well as those for the ground state. For the \emph{A} $^2\Sigma^+$ state we make use of the recent PES presented by Hirst \emph{et al.} \cite{hirst2004}. This surface has a global minimum of $-742.5$ cm$^{-1}$ at the linear Ar--H-S conformation ($\theta = 0^{\circ}$), and a secondary minimum of $-673.7$ cm$^{-1}$ for linear Ar--S-H ($\theta = 180^{\circ}$). The two minima are separated by a barrier more than $600$ cm$^{-1}$ high and the lowest-energy vibrational levels are localised within one or other of the two wells. The bound states of this PES have been analysed previously \cite{hirst2004} and only a brief discussion is given here. Bound-state energies were calculated as eigenvalues of the spin-free triatomic Hamiltonian in Jacobi coordinates. Discrete variable representations (DVRs) were employed for both the intermolecular distance $R$ and the angle $\theta$. For $R$, 120 sinc-DVR functions \cite{colbertmiller} were used, with DVR points ranging from 2.5 \AA\ to 8.5 \AA. For $\theta$, a 64-point DVR based on Legendre polynomials was used. With this basis set, the bound levels of interest were converged to at least seven significant figures. The resulting levels are labelled by quantum numbers ($v_{\mathrm{SH}},b^K,n$), where $v_{\mathrm{SH}}$ and $n$ are quantum numbers for the SH stretch and the atom--diatom stretch respectively. $K$ is the projection of the total angular momentum of the diatom, neglecting spin, onto the body-fixed $z$-axis, and $b$ is the number of nodes in the intermolecular angle $\theta$. The resulting energies for levels with total angular momentum $N = 0$ (neglecting spin) are in precise agreement with previous results. 
\cite{hirst2004} To facilitate the calculation of band intensities, for $N = 1$ the helicity decoupled approximation was employed, in which the Coriolis terms coupling different $K$-levels are ignored. The helicity-decoupled energies are within 0.5 cm$^{-1}$ of the full close-coupled results.\cite{hirst2004} For the purpose of calculating transition frequencies, the asymptotic separation of the potentials is taken to be the experimental excitation energy from the $v=0,\ j=3/2$ level of the $^2\Pi_{3/2}$ state to the lowest $v=0,\ j=1/2$ level of the $^2\Sigma^{+}$ state of isolated SH, which is 30832.68 cm$^{-1}$. \cite{ramsay52} All transitions of the complex were assumed to originate from the $P=+3/2$ level of Ar--SH ($^2\Pi$). The lowest-energy transition frequency for the complex is calculated to be 30488.5 cm$^{-1}$, which is 31.5 cm$^{-1}$ greater than the experimental value of 30457 cm$^{-1}$.\cite{miller97} This agreement is reasonable, considering the level of theory used in the calculation of the potentials. \subsection{Transition dipole moments} Calculations of spectroscopic intensities require transition dipole moments $\mu_{\mathrm{tot}}^{\mathrm{if}}$, where \begin{equation} \mu_{\mathrm{tot}}^{\mathrm{if}} = \left< \mathrm{i}| \mu_{\mathrm{el}} | \mathrm{f}\right>. \end{equation} The integrals involve the initial (i) and final (f) wavefunctions as determined from bound-state calculations. In this work we evaluated transition dipoles over \emph{internal} coordinates ($R,\theta$), neglecting overall rotation. This gives transition dipoles that correspond to band intensities between intermolecular vibrational states. The electronic dipole moment, $\mu_{\mathrm{el}}$, is in general a parametric function of the nuclear coordinates. In the body-fixed frame it may be expanded in terms of reduced rotation matrices, \begin{equation} \label{eqn:elecdipmom} \mu_{\mathrm{el}}(R,\theta) =\sum_j \mu_{\mathrm{el},j}^{\Delta\lambda}(R) d^j_{\Delta P, \Delta \lambda}(\theta), \end{equation} where $\Delta P = P_{\mathrm{i}}-P_{\mathrm{f}}$ and $\Delta \lambda = \lambda_{\mathrm{i}}-\lambda_{\mathrm{f}}$. In the excited electronic state, $P=K\pm\textstyle{\frac{1}{2}}$. In this work it is assumed that $\mu_{\mathrm{el}}$ consists purely of contributions from the SH monomer, so that only $j=1$ contributes in Eq.\ \ref{eqn:elecdipmom} and the coefficients $\mu_{\mathrm{el},j}^{\Delta\lambda}$ are independent of $R$. Since we are dealing with a perpendicular transition in SH, $\Delta\lambda=\pm1$. The transition dipoles were calculated as one-dimensional Gaussian quadratures in $\theta$, then integrated over $R$. \subsection{Intensities and lifetime factors} The signals in a pulsed-laser fluorescence excitation experiment decay exponentially following each pulse, with a lifetime equal to that of the excited state being probed. Intensities are typically measured as the area under the decay curve, and so the experimental intensities are proportional to the lifetime of the excited state. Ar--SH is somewhat unusual in the large range of lifetimes exhibited by different vibrational levels in the \emph{A} state. It is known that the presence of the Ar atom blocks the electronic predissociation of the SH radical, leading to a greatly increased lifetime of up to 600~ns for low-lying bound levels, compared to $\sim 1$ ns for the uncomplexed species. 
\cite{mccoy98, miller97} However, the actual lifetime depends on the degree of vibrational excitation, and lifetimes specific to particular levels have been calculated by McCoy.\cite{mccoy98} The top panel of Fig.\ \ref{fig:simspec} shows a spectrum calculated directly from the squares of transition dipoles, while the center panel shows a spectrum in which the intensities have been multiplied by McCoy's lifetime values. Clearly this is possible only for levels for which lifetime data exist, and transitions to other levels are omitted in the center panel (\emph{i.e.}, it is assumed that their lifetimes are small). Also shown is an experimental spectrum from Ref.\ \onlinecite{miller97}. The spacings between the peaks in the calculated Ar--SH spectrum are consistently $\sim 5$\% smaller than in experiment. It is clear that the intensity distributions are significantly different in the two calculated spectra, and that the one that includes lifetime factors gives considerably better agreement with experiment. The agreement in intensities is quite good, especially considering that the experimental spectrum was most likely not normalised for dye laser power. \cite{millerperscomm} From our results it seems likely that the small peak at $\sim 30810$ cm$^{-1}$ in the experimental spectrum can be assigned to the transition to $(0,0^0,6)$. \begin{figure} \includegraphics[width=85mm]{seven} \caption{Calculated vibrationally resolved fluorescence-excitation spectrum of Ar--SH for the \emph{A} $^2\Sigma^+$ -- \emph{X} $^2\Pi$ electronic transition. The lifetime weighting of the intensities is absent for the top panel, and present for the middle panel. Lines labelled with $+$ indicate transitions to the $\theta=180^{\circ}$ well of the \emph{A} state. For comparison, the experimental spectrum is shown in the bottom panel, taken from Ref.\ \onlinecite{miller97}. Note that the experimental spectrum contains contributions from Ar$_2$--SH and larger clusters as well as Ar--SH.\label{fig:simspec}} \end{figure} Even without the lifetime weighting, transitions to levels localized in the $\theta=180^{\circ}$ well of the \emph{A} state, which are labelled with $+$ symbols in Fig.\ \ref{fig:simspec}, are weak in the simulated spectrum. This arises because of poor overlap with the $P=+3/2$ ground-state wavefunction (which is concentrated around $\theta=0$). In addition, it is likely that such levels have short lifetimes close to that of uncomplexed SH \cite{hirst2004} and so will have even lower intensities in the fluorescence excitation spectrum. These levels have not been observed experimentally to our knowledge. \section{Summary} We have obtained new \emph{ab initio} potential energy surfaces for the Ar--SH complex in its ground $^2\Pi$ electronic state and used them to calculate bound-state energies and wavefunctions using coupled-channel methods. We have also described a new adiabatic approximation that includes spin-orbit coupling and can be used to calculate bound states on a single potential energy surface. However, the adiabatic wavefunctions fail to reproduce some features of the true wavefunctions. We have used our results to simulate the vibrationally resolved laser-induced fluorescence excitation spectrum of Ar--SH, with intensities modelled using calculated transition dipole moments and calculated lifetimes. The inclusion of the lifetime factor is important to obtain satisfactory agreement with the experimental intensities. 
\section{Acknowledgements} The authors would like to thank Terry Miller for providing his experimental spectrum. RJD also thanks Dr. Stuart Mackenzie for helpful discussions, and is grateful to the Engineering and Physical Sciences Research Council (EPSRC) for funding.
\section{Introduction} Monopoles in Yang-Mills-Higgs theories are known to possess a spectrum of excitations called dyons. In the simplest of these theories the gauge group is ${\cal G}=SU(2)$ spontaneously broken to ${\cal H}=U(1)$ by a non-zero vacuum expectation value of the Higgs field. Gauss' Law implies that the system is invariant under small gauge transformations. Large gauge transformations with respect to the unbroken gauge group $U(1)$ induce an electric field and therefore do have an observable effect; the monopole acquires electric charge and is then called a dyon. The gauge parameter of large $U(1)$ gauge transformations is thus interpreted as a collective coordinate of the monopole. The simple picture described in the preceding paragraphs is still valid when the gauge group ${\cal G}$ is arbitrary but the Higgs field enforces maximal symmetry breaking. The unbroken gauge group is of the form $U(1)\times\ldots\times U(1)$ and for each $U(1)$ factor there will be an electric charge. In this paper we shall generalize the previous ideas to larger gauge groups and general symmetry breaking, paying special attention to transformation properties of physical states under large gauge transformations. From the point of view of semiclassical quantization, the most important question is to identify the internal degrees of freedom of a monopole when the unbroken gauge group is non-abelian or, in more technical terms, what is the moduli space of monopole solutions. The parameters of the moduli space are identified with collective coordinates, and quantization of ``motion'' in moduli space yields a tower of dyon states. The example of maximal symmetry breaking suggests that the group parameters of the unbroken gauge group are collective coordinates of the monopole, and that motion in the moduli space corresponds to large gauge transformations. However, it has been known for a long time that in a spontaneously broken Yang-Mills-Higgs theory not all generators of the unbroken gauge group ${\cal H}$ correspond to collective coordinates in semiclassical quantization of monopoles if ${\cal H}$ is non-abelian \cite{abou, nelson, cole, bala}. Problems arise when we try to define the action of the generators of ${\cal H}$ that do not commute with the magnetic field of the monopole. We shall denote the set of these generators by ${\cal H}'$. These problems can be exposed through semiclassical quantization \cite{abou} of monopole solutions, through topological considerations \cite{nelson}, or by studying the quantum mechanics of a test particle in the presence of a non-abelian monopole \cite{bala}. One of the aims of this paper is to analyze these problems from the point of view of physical states. It was shown in \cite{abou} that the moments of inertia corresponding to ${\cal H}'$ are vanishing, and that therefore the collective coordinates associated to those gauge generators somehow decouple from the theory. This fact suggests that it should be possible to formulate the theory of non-abelian dyons in such a way that the decoupling of ${\cal H}'$ is proven from first principles. In this paper we shall present a Hamiltonian formulation of non-abelian Yang-Mills-Higgs dyons that meets that demand. The crucial feature of a gauge theory is the appearance of Gauss' Law and the necessity of ``choosing a gauge'' in order to obtain a non-singular symplectic structure acting on its phase space. Once the asymptotic values of the group parameters are given, the gauge condition fixes their value for all points. 
Therefore only the boundary values of the group parameters remain undetermined and can be considered dynamical variables \cite{wadia}. The extension of the Hamiltonian formalism to Yang-Mills-Higgs theories with dyons requires, therefore, the introduction of boundary terms that act as dynamical variables. These new variables enforce a new, extended Gauss' Law that will be shown to restrict physical states to singlets under ${\cal H}'$. Therefore, there are no collective coordinates for the excitation of the ${\cal H}'$ modes. The effect of the inclusion of a vacuum angle $\vartheta$ will also be considered. The rest of this paper is organized as follows. Section 2 introduces the Hamiltonian formalism for Yang-Mills-Higgs theories, including the boundary terms that arise in the presence of dyons. Section 3 discusses the consequences of the extended Gauss' Law on physical states. In Section 4 we present our conclusions. \section{Boundary Terms in Yang-Mills-Higgs theories} Let us consider a Yang-Mills-Higgs theory with simply-connected gauge group ${\cal G}$ spontaneously broken to a subgroup ${\cal H}$ by a Higgs field in the adjoint representation of ${\cal G}$. We shall consider the theory to be defined on a large sphere ${\cal S}$ with boundary ${\cal B}$ in order to exhibit the importance of boundary terms in the action when the sphere contains magnetic monopoles. The lagrangean density of this theory is \begin{equation} {\cal L}={\rm Tr}\left(-{1\over 4} F_{\mu\nu}\,F^{\mu\nu}-\half D_{\mu}\phi\, D^{\mu}\phi\right) -V(\phi), \label{lagr} \end{equation} where the potential $V(\phi)$ must ensure spontaneous symmetry breaking. The classical action of this theory is, in first-order formalism, \begin{eqnarray} S&=&\int_{t_1}^{t_2} dt\,\left[\int\,dx\,{\rm Tr}\left(E_i \dot{A}_i + \pi\dot{\phi}\right)-H(E_i, A_i, \pi, \phi, A_0)\right],\nonumber\\ H&=&\int\,dx\,{\rm Tr}\left[\half \,E_i\,E_i+\half \,\pi^2 +{1\over 4}\,F_{ij}\,F_{ij}+\half\,D_i\phi\,D_i\phi+V(\phi) -A_0\,\left(D_i\,E_i+e\,[\phi,\pi]\right)\right]\nonumber\\ &+&\int_{{\cal B}} d\sigma_i\,{\rm Tr}\,A_0\,E_i. \label{act} \end{eqnarray} where the momenta are $E_i=F_{0i}$ and $\pi=D_0\phi$. Integrations in $x$ extend over the whole sphere ${\cal S}$. The surface element in ${\cal B}$ is defined as $d\sigma_i=r^2\,\hat{r}_i\, d\omega$ with $\hat{r}_i$ a unit vector normal to ${\cal B}$ and $d\omega$ the solid angle element ($\omega$ is a shorthand for angular coordinates on ${\cal B}$). The hamiltonian $H$ depends not only on the canonical coordinates and momenta, but also on $A_0$, the time component of the gauge potential. In standard expositions of gauge theories in the hamiltonian formalism it is assumed that the asymptotic value of $A_0$ is zero, so that when the radius of ${\cal S}$ tends to infinity the last term in $H$ vanishes. Of course it is always possible to choose a gauge where $A_0$ vanishes at infinity, or even everywhere as in the temporal gauge $A_0=0$. This possibility will be considered below, but for the sake of generality we shall keep the boundary value of $A_0$ arbitrary and possibly time-dependent \cite{wadia}. The spatial components of the gauge field $A_i^a$ are such that the monopole is a source of a magnetic field that, at large distances from the central region of the monopole, takes the form \begin{equation} B_i={G(\omega)\over r^2}\,\hat{r}_i, \label{B} \end{equation} with $G(\omega)=G^a(\omega)\,T_a$ an element of the unbroken gauge algebra ${\cal H}$. 
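As a simple illustration (the normalization of the magnetic charge is convention dependent and plays no role in what follows), in the original case ${\cal G}=SU(2)$ broken to ${\cal H}=U(1)$ the asymptotic field of the 't Hooft-Polyakov monopole in the Wu-Yang (unitary) gauge is proportional to the single unbroken generator, say $T_3$, \begin{equation} G(\omega)=g\,T_3, \qquad B_i={g\over r^2}\,\hat{r}_i\,T_3, \end{equation} so that (\ref{B}) reduces to the familiar abelian monopole field.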
The magnetic field must satisfy the Bianchi identity, which implies that $G$ is covariantly constant, $D_i\,G=0$. At this point it is convenient to choose a Cartan basis for the generators of ${\cal H}$, \begin{eqnarray} {\cal H}&=&\left\{T_1, \ldots, T_l, T_{l+1},\ldots,T_r, E_{\alpha_1}, E_{-\alpha_1}, E_{\alpha_2}, E_{-\alpha_2},\ldots\right\} \nonumber\\ \left[ T_I, E_{\alpha}\right] &=& \alpha_I\,E_{\alpha}, \\ \left[ E_{\alpha}, E_{-\alpha} \right] &=& \sum_{I=1}^l\alpha_I\, T_I , \nonumber\\ \left[ T_I, T_J\right]&=&0. \label{alg} \end{eqnarray} The last $r-l$ generators $T_I$ correspond to possible abelian factors in ${\cal H}$, while the first $l$ generate its maximal torus. The roots $\alpha$ are all non-zero, distinct, have non-vanishing components for $I=1,\ldots,l$, and span the $l$-dimensional space \cite{gno}. It is always possible to choose the generators of ${\cal H}$ in such a way that \begin{eqnarray} {\rm Tr}(T_I\,T_J)&=&\delta_{IJ},\quad I,J=1,\ldots,l\nonumber\\ {\rm Tr}(E_{\alpha}\,E_{-\alpha})&=&1, \label{traces} \end{eqnarray} and the remaining traces of the products of two generators are zero. It will be convenient to work in the Wu-Yang formalism and use the freedom present in the definition of the maximal torus of ${\cal H}$ to have \begin{equation} G(\omega)=\sum_{I=1}^r g_I(\omega)\,T_I, \label{Q} \end{equation} with $g_I$ the magnetic weights introduced in \cite{gno}. In this formalism the Higgs field is constant on ${\cal B}$ and the unbroken gauge group ${\cal H}$ lies inside ${\cal G}$ in a position-independent way. For future reference we will now give a criterion for the commutativity of a general element $X$ of ${\cal H}$ with $G$. Let us write $X$ in the Cartan basis as \begin{equation} X=X_I\,T_I +X_{\alpha}\,E_{\alpha}, \label{x} \end{equation} where repeated indices $I$ are summed over. The commutator of $X$ with $G$ is easily found to be \begin{equation} [X, G]=\sum_{\alpha}(g\cdot \alpha)\,X_{\alpha}\,E_{\alpha}. \label{xg} \end{equation} Therefore $X$ commutes with $G$ if and only if all the roots $\alpha$ included in the decomposition (\ref{x}) satisfy $g\cdot \alpha =0$. We shall denote those roots by $\alpha^{\perp}$: \begin{equation} [X, G(P)]=0 \quad \Longleftrightarrow\quad X=X_I\,T_I + X_{\alpha^{\perp}}\,E_{\alpha^{\perp}} \label{condition} \end{equation} Group elements will be represented by the parameters $\theta^a(x,t)$ that appear in the exponential map of the gauge group, $g=\exp(\theta^a\,T_a)\in{\cal H}$. The asymptotic values of the group parameters $\theta^a(\omega,t)$ and their canonical momenta $P_a(\omega,t)$ will be included in the phase space of the theory. The motivation for this extension of the phase space is that we shall impose Gauss' Law on physical states; this requirement, together with a choice of gauge, eliminates gauge transformations that vanish at infinity as symmetries of the theory but leaves the asymptotic values of the group parameters undetermined. These asymptotic values are candidates for collective coordinates of the dyon. As we want the dynamics of these degrees of freedom to appear as equations of motion of the theory, we have to extend the phase space and the hamiltonian (\ref{act}) to accommodate the new variables. We shall assume standard Poisson brackets for $\theta^a$ and $P_a$, \begin{eqnarray} \{\theta^a(\omega,t), P_b(\omega',t)\}&=&\delta_b^{\,\,a}\delta(\omega-\omega')\nonumber\\ \{\theta^a(\omega,t), \theta^b(\omega',t)\}&=&\{P_a(\omega,t), P_b(\omega',t)\}=0. 
\label{poi} \end{eqnarray} We shall consider physically static field configurations, that is, those whose time dependence is given by a gauge transformation: \begin{eqnarray} \partial_0\,A_i&=&D_i\,\Lambda,\nonumber\\ \partial_0\,\phi&=&\left[ \phi, \Lambda\right], \label{static} \end{eqnarray} where $\Lambda=\Lambda^a\,T_a$ is a gauge parameter that, at long distance from the monopole core, lies in the unbroken gauge algebra. It is always possible to write the asymptotic values of $\Lambda$ at any point $\omega$ in the boundary ${\cal B}$ as \begin{equation} \Lambda(\omega)=\Lambda_I(\omega)\,T_I+\lambda_{\alpha}(\omega)\,E_{\alpha}. \label{lamb} \end{equation} We must also specify the boundary conditions that $A_i$ is assumed to obey. At large distance $A_i$ and $F_{\mu\nu}$ should vanish at least as fast as $r^{-1}$ and $r^{-2}$ respectively: \begin{equation} \lim_{r\to\infty}\,A_i = O(r^{-1}), \quad\quad\quad \lim_{r\to\infty}\,F_{\mu\nu}=O(r^{-2}). \label{cond} \end{equation} In addition we require that the radial component of $A_i$ decreases for large $r$ as $r^{-2}$, \begin{equation} \lim_{r\to\infty}\, \hat{r}^i\,A_i = O(r^{-2}). \label{radial} \end{equation} These boundary conditions imply that $\hat{r}^i\,D_iA_0$ should decrease as $r^{-2}$. This fact does not imply that $A_0$ decreases as $r^{-1}$; it may behave at long distances like the Higgs field in the BPS limit, which has a non-vanishing limit as $r$ goes to infinity and at the same time satisfies $D_i\phi=B_i=O(r^{-2})$. We still have to determine the time evolution of the asymptotic group parameters $\theta^a(\omega,t)$. We shall follow the assumption that their time evolution is a gauge transformation of parameter $\Lambda=\Lambda^a\,T_a$, as in (\ref{static}). In order to express this idea we introduce the Racah function $\Phi^a(\theta,\eta)$, which defines the product of two group elements: \begin{equation} \exp\left( i\,T_a\eta^a\right)\,\exp\left( i\,T_a\theta^a\right)=\exp\left( i\,T_a\Phi^a(\theta,\eta)\right). \label{racah1} \end{equation} The derivative of this function with respect to its second variable acts as a vierbein in the sense that it gives the directions of small fluctuations about a certain point in the group manifold: \begin{equation} E_b^{\,\,a}(\theta)={\partial\over\partial\eta^b}\,\Phi^a(\theta,\eta)\Big|_{\eta=0}. \label{racah2} \end{equation} This vierbein relates elements of the Lie algebra to tangent vectors of the group manifold at a generic group element $g$, \begin{equation} i\,T_a\,g=E_a^{\,\,b}\,\partial_b\,g,\quad\quad g=\exp(i\,\theta^a\,T_a) \label{vier} \end{equation} where $\partial_a$ is the derivative with respect to $\theta^a$. The integrability condition of the Lie group requires $E_a^{\,\,b}$ to satisfy the following relationship \begin{equation} E_a^{\,\,c}\,\partial_c E_b^{\,\,d}-E_b^{\,\,c}\,\partial_c E_a^{\,\,d}=f_{ab}^{\,\,\,c}\,E_c^{\,\,d}. 
\label{inte} \end{equation} The assumed time evolution of the group parameters of a general gauge group is therefore \begin{eqnarray} g(t+\delta t)&=&\exp\left\{i\Lambda\,\delta t\right\}\,g=\exp\left\{i\Lambda\,\delta t\right\}\,\exp\left\{i\,\theta^a\,T_a\right\} =\exp\left\{i\,\Phi^a(\theta, \Lambda\,\delta t)\right\}\nonumber\\ &=&\exp\left\{i\,\left(\Phi^a(\theta,0)+E_b^{\,\,a}\,\Lambda^b\,\delta t\right)\right\}= \exp\left\{i\,\left(\theta^a+E_b^{\,\,a}\,\Lambda^b\,\delta t\right)\right\} \label{asu} \end{eqnarray} Writing the group element $g(t+\delta t)$ as $\exp(i\,\theta^a(t+\delta t)\,T_a)$ we conclude that the time derivative of the parameter $\theta$ is \begin{equation} {d\over dt}\theta^a(\omega,t)= E_b^{\,\,a}(\theta)\,\Lambda^b(\omega,t), \label{evol} \end{equation} Once we accept that the boundary values of the gauge parameters must evolve in time according to (\ref{evol}) and that the phase space of the theory must include the group parameters and their canonical momenta, it is necessary to extend the hamiltonian shown in (\ref{act}) with a new term that reproduces the equation of motion (\ref{evol}): \begin{eqnarray} H'&=&\int\,dx\,{\rm Tr}\left[\half \,E_i\,E_i+\half \,\pi^2 +{1\over 4}\,F_{ij}\,F_{ij}+\half\,D_i\phi\,D_i\phi+V(\phi) -A_0\,\left(D_i\,E_i+e\,[\phi,\pi]\right)\right]\nonumber\\ &+&\int_{{\cal B}} d\sigma_i\,{\rm Tr}\,(A_0\,E_i) +\int_{{\cal B}} d\omega\, {\rm Tr}\,(\Lambda\,J), \label{ham} \end{eqnarray} where we have introduced the intrinsic momentum $J_a=E_a^{\,\,b}\/P_b$. The Poisson bracket of two $J$ reproduces the Lie algebra of the unbroken gauge group ${\cal H}$, \begin{equation} \{J_a(\omega), J_b(\omega')\}=if_{ab}^{\,\,\,c}\,J_c(\omega)\,\delta(\omega-\omega'), \label{lie} \end{equation} where $f_{ab}^{\,\,\,c}$ are the structure constants of ${\cal H}$. We can think of the momenta $J_a$ as generators of ${\cal H}$ and, after quantization, as the operators that implement large gauge transformations on physical states. \section{Generalized Gauss' Law} The last term of the extended hamiltonian $H'$ generates the desired time evolution for the group parameters through the Poisson brackets (\ref{poi}). We are interested only in the equations of motion that, after quantization, are first order constraints on physical states of the theory. These constraints should appear as stationary points of the action for variations of $A_0(x,t)$ and its boundary values $A_0(\omega,t)$. The difficulty is that the variations of a field are not entirely independent of the variations of its boundary values and therefore we cannot vary $A_0(x,t)$ and $A_0(\omega,t)$ independently. The way out of this problem is to restrict the phase space to a subspace where Gauss' Law is satisfied: \begin{equation} \left(D_i\,E_i +\left[ \phi, D_0\,\phi\right]\right)\,|\Psi\rangle =0. \label{ga} \end{equation} where $|\Psi\rangle$ is a physical state. This eliminates $A_0(x,t)$ from the variational problem and leaves its boundary value $A_0(\omega,t)$ as the only lagrange multiplier. This is not the only effect of Gauss' Law; inserting the conditions (\ref{static}) into (\ref{ga}) leads to a link between $A_0$ and the gauge parameter that will be most relevant to our discussion: \begin{equation} \left( D_i\,D_i + \left[\phi,\left[\phi,\quad\right]\right]\right)(\Lambda-A_0)=0. \label{link} \end{equation} An obvious solution is $\Lambda=A_0$, which implies that the electric field is zero. 
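To make this step explicit, recall from (\ref{act}) that $E_i=F_{0i}$; for field configurations obeying (\ref{static}) one has (up to convention-dependent factors) \begin{equation} E_i=\partial_0 A_i-D_i A_0=D_i\,(\Lambda-A_0), \end{equation} so the choice $\Lambda=A_0$ indeed leads to a vanishing electric field. The same relation is used below to read off the asymptotics of the radial electric field from the behaviour of $\Lambda-A_0$.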
It is therefore clear that $\Lambda-A_0$ must not vanish if we want to turn a monopole into a dyon. An important non-trivial solution is $\Lambda-A_0=C\,\phi$ with $C$ a constant; this solution, however, corresponds to picking the $U(1)$ direction in the unbroken gauge group defined by the Higgs field. It is easy to show that this would reproduce the usual charge quantization condition on the abelian electric charge of a dyon \cite{witten}. For that reason we will concentrate on the semisimple part of the unbroken gauge group, that is, we shall ignore possible $U(1)$ factors in ${\cal H}$. Far from the monopole core, (\ref{link}) is the Schr{\"o}dinger equation of a zero-energy adjoint particle in the presence of the magnetic field of the monopole. The long-distance behaviour of the solutions depends on whether the ``wave function'' commutes with $G$ or not \cite{abou}. We should therefore consider three cases: \begin{enumerate} \item Components of $\Lambda-A_0$ within the maximal torus clearly commute with $G$ as defined in (\ref{Q}). Equation (\ref{link}) reduces to Laplace's equation far from the monopole core, and thus the solution is of the form $(\Lambda-A_0)_I \sim Q_I\,r^{-1}$ with $Q\in{\cal H}$ some $r$-independent operator. \item Components of $\Lambda-A_0$ corresponding to roots $\alpha^{\perp}$ defined by $\alpha^{\perp}_{\,I}\,g_{I}=0$ also commute with $G$ and thus behave as $(\Lambda-A_0)_{\alpha^{\perp}} \sim Q_{\alpha^{\perp}}\,r^{-1}$. \item Components corresponding to the rest of the generators do not commute with $G$ and thus decrease faster, $(\Lambda-A_0)_{\alpha} \sim Q_{\alpha}\,r^{-n}$ with $n>1$. \end{enumerate} From now on we shall understand that roots $\alpha$ are not orthogonal to the vector whose components are the magnetic weights $g_I$ unless explicitly indicated by the index $\perp$. The general solution for $\Lambda-A_0$ is then \begin{equation} \Lambda-A_0 \sim {1\over r}\,Q_I\,T_I + {1\over r}\,Q_{\alpha^{\perp}}\,E_{\alpha^{\perp}}+{1\over r^n}\,Q_{\alpha}\,E_{\alpha}, \quad{\rm with}\quad n>1. \label{so} \end{equation} It is clear that the asymptotic values $\Lambda(\omega,t)$ and $A_0(\omega,t)$ must coincide. This important fact implies that variations of the boundary value of $A_0$ produce a new constraint that includes $J$: \begin{equation} \lim_{r\to\infty} r^2\hat{r}_i\,E_a^i+J_a=0. \label{cons} \end{equation} The meaning of this new constraint is that large gauge transformations induce an electric field, thus giving electric charge to the monopole. Including the parameter $\Lambda$ of the gauge transformation, the quantum version of this constraint is \begin{equation} \left[\int_{{\cal B}}d\sigma_i\,{\rm Tr}\left(\Lambda\,E_i \right)+\int_{{\cal B}}d\omega\,{\rm Tr}(\Lambda\,J)\right]\,|\Psi\rangle=0. \label{cons2} \end{equation} Together with (\ref{ga}), this constitutes a generalized Gauss' Law \cite{wadia}. It is important to note that the radial part of the electric field should decrease exactly as $r^{-2}$ in order to contribute to the constraint (\ref{cons2}). Taking into account the general solution (\ref{so}), together with the fact that the radial component of $A_i$ already decreases as $r^{-2}$, we find that the radial part of the electric field behaves as \begin{equation} \hat{r}^i\,E_i =\hat{r}^i \,D_i\,(\Lambda-A_0) \sim {\partial\over\partial r}\,\left( {1\over r}\,Q_I\,T_I + {1\over r}\,Q_{\alpha^{\perp}}\,E_{\alpha^{\perp}}\right) +O(r^{-m}), \quad{\rm with}\quad m>2. \label{oo} \end{equation} The implication of Eq. 
(\ref{oo}) is that the generators $E_{\alpha}$ disappear from the constraint (\ref{cons2}), due to the fact that the trace of $E_{\alpha}$ with $T_I$ or $E_{\alpha^{\perp}}$ vanishes: \begin{equation} \int_{{\cal B}}d\sigma_i\,{\rm Tr}(\Lambda\,E_i)\sim -\int_{{\cal B}}d\omega \left(\lambda_I\,Q_I + \lambda_{\alpha^{\perp}}\,Q_{-\alpha^{\perp}}\right). \label{hot} \end{equation} We have used the traces given in (\ref{traces}). Before inserting the result (\ref{hot}) into the constraint (\ref{cons2}) we will introduce a by now obvious decomposition of the generators $J$: \begin{equation} J(\omega)= J_I(\omega)\,T_I+J_{\alpha^{\perp}}(\omega)\,E_{\alpha^{\perp}}+J_{\alpha}(\omega)\,E_{\alpha}. \label{jota} \end{equation} Using (\ref{lamb}), (\ref{hot}) and (\ref{jota}) and comparing the terms in $\lambda_I$, $\lambda_{\alpha^{\perp}}$ and $\lambda_{\alpha}$ we finally find the action of the different components of $J$ on physical states: \begin{eqnarray} J_I\,|\Psi\rangle &=& Q_I\,|\Psi\rangle,\nonumber\\ J_{\alpha^{\perp}}\,|\Psi\rangle &=&Q_{\alpha^{\perp}}\,|\Psi\rangle,\\ J_{\alpha}\,|\Psi\rangle &=&0.\nonumber \label{result1} \end{eqnarray} We can summarize these results as follows: physical states transform under all the $T_I$ and under the generators $E_{\alpha^{\perp}}$. Large gauge transformations generated by the $E_{\alpha}$ leave physical states invariant and therefore do not correspond to collective coordinates. This phenomenon is due to the anomalous behaviour of the radial part of the electric field in the isodirections $E_{\alpha}$. \subsection{Inclusion of a vacuum angle} The discussion of the preceding Section can be readily generalized to include a vacuum angle $\vartheta$. There is no need to redo the calculations since the effect of the vacuum angle is essentially a shift in the electric field: \begin{equation} E_i^a\rightarrow E_i^a-\vartheta\,B_i^a, \label{angle} \end{equation} where $\vartheta$ absorbs some inessential constants. This can be plugged directly into Eq. (\ref{cons2}) with the following results \begin{eqnarray} J_I\,|\Psi\rangle &=& \left (Q_I+\vartheta\,g_I\right)\,|\Psi\rangle,\nonumber\\ J_{\alpha^{\perp}}\,|\Psi\rangle &=&Q_{\alpha^{\perp}}\,|\Psi\rangle,\\ J_{\alpha}\,|\Psi\rangle &=&0.\nonumber \label{result2} \end{eqnarray} We see that physical states are invariant under the $J_{\alpha}$ even when a vacuum angle is introduced. The effect of the vacuum angle is limited to transformations under the commuting generators $J_I$. Equations (\ref{result1}) or (\ref{result2}) can be interpreted as a symmetry breaking induced by the non-abelian monopole whereby the unbroken gauge algebra ${\cal H}$ is broken further down to its generators $T_I$ and $E_{\alpha^{\perp}}$. \section{Conclusions} We have demonstrated that physical states of non-abelian dyons are invariant under large gauge transformations that do not commute with the magnetic field of the dyon. In the language of root systems those gauge transformations correspond to generators $E_{\alpha}$ of the unbroken gauge group with the root $\alpha$ not orthogonal to the magnetic weights. This result is still true if a vacuum angle $\vartheta$ is included. The only effect of the vacuum angle is an additional term in the transformation properties of physical states under the generators of the maximal torus of ${\cal H}$. 
This additional term is proportional to the magnetic weights, which are weights of ${\cal H}^v$, the dual of the unbroken gauge group \cite{gno}; physical states of dyons therefore carry a representation of ${\cal H}^v$. This may be relevant to a better understanding of the Montonen-Olive conjecture \cite{mo}. Excitations of internal degrees of freedom of the monopole corresponding to the roots $\alpha$ not orthogonal to the magnetic weights induce no electric charge in the monopole and hence have no observable effects far from the monopole core. From the point of view of collective coordinate quantization, this fact implies that there are no collective coordinates associated with those degrees of freedom. This result complements and clarifies the results of references \cite{abou} to \cite{bala}. \vskip 2cm \begin{center} \bf{Acknowledgements} \end{center} I would like to thank Professor T.J. Hollowood for many useful discussions. This research has been supported by the British Engineering and Physical Sciences Research Council (EPSRC). \vskip 2cm
\section{Introduction}\label{se:intro} Let $n\ge 3$ and $M$ be a smooth $n$-dimensional manifold. For a Riemannian metric $g$ and a scalar-valued function $u$ on $M$, we define the \emph{static vacuum operator} \begin{align} \label{eq:static} S(g, u) := \big(-u \mathrm{Ric}_g + \nabla^2_g u, \Delta_g u\big). \end{align} If $S(g, u)=0$, $(g, u)$ is called a static vacuum \emph{pair}, and $(M, g, u)$ is called a static vacuum \emph{triple}. We also refer to $g$ as a static vacuum metric. When $u>0$, $(M, g, u)$ defines a Ricci flat ``spacetime'' metric $\mathbf{g} = \pm u^2 dt^2 + g$ (with either a plus or a minus sign in the $dt^2$ factor) on $\mathbb{R}\times M$ that carries a global Killing vector $\partial_t$. The best-known examples of asymptotically flat, static vacuum metrics include the Euclidean metric and the family of (Riemannian) Schwarzschild metrics. When $M$ has nonempty boundary $\partial M$, the induced boundary data $(g^\intercal, H_g)$ in \eqref{eq:bdv} is called the {\it Bartnik boundary data} of $g$, where $g^\intercal$ is the restriction of $g$ (or the induced metric) to the tangent bundle of $\partial M$ and the mean curvature $H_g$ is the tangential divergence of the unit normal vector $\nu$ pointing to infinity. The main content of this paper is to solve $S(g, u)=0$ for asymptotically flat pairs $(g, u)$ with prescribed Bartnik boundary data $(g^\intercal, H_g)$ and to confirm the following conjecture of R. Bartnik~\cite[Conjecture 7]{Bartnik:2002} for a wider class of boundary data. \begin{conjecture}[Bartnik's static extension conjecture]\label{conjecture} Let $(\Omega_0, g_0)$ be a compact, connected, $n$-dimensional Riemannian manifold with scalar curvature $R_{g_0}\ge 0$ and with nonempty boundary $\Sigma$. Suppose $H_{g_0}$ is strictly positive somewhere. Then there exists a unique asymptotically flat, static vacuum manifold $(M, g)$ with boundary $\partial M$ diffeomorphic to $\Sigma$ such that $(g^\intercal, H_g) = (g_0^\intercal, H_{g_0})$. \end{conjecture} The conjecture originated from Bartnik's program on quasi-local mass~\cite{Bartnik:1989, Bartnik:1993, Bartnik:1997, Bartnik:2002} that goes back to 1989 and has drawn great interest in mathematical relativity and differential geometry in the past two decades; see, for example, the recent survey by M. Anderson~\cite{Anderson:2019}. It has been speculated that the conjecture may not hold in general. As observed by Anderson and M. Khuri and by Anderson and J. Jauregui~\cite{Anderson-Khuri:2013, Anderson-Jauregui:2016}, if $\Sigma$ is an immersed hypersurface in an asymptotically flat, static vacuum triple $(M,\bar g, \bar u)$ that is not \emph{outer} embedded, i.e., $\Sigma$ touches itself from the exterior region $M\setminus \Omega$, then the induced data $(\bar{g}^\intercal, H_{\bar{g}})$ is valid Bartnik boundary data on $\Sigma$, but $(\bar{g}, \bar u)$ in $M\setminus \Omega$ is \emph{not} a valid extension because $M\setminus \Omega$ is not a manifold-with-boundary. Those hypersurfaces are conjectured to be counter-examples to Conjecture~\ref{conjecture} \cite[Conjecture 5.2]{Anderson-Jauregui:2016}. Nevertheless, Conjecture~\ref{conjecture} is of fundamental importance toward the structure theory of static vacuum manifolds, so it is highly desirable to prove it even under extra assumptions. 
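Before turning to the boundary value problem, we record the two model solutions mentioned above for orientation (the normalization of the mass parameter below is one standard choice and is immaterial for our purposes). The Euclidean pair $(g_{\mathbb E}, 1)$ clearly satisfies $S(g_{\mathbb E}, 1)=0$. For a constant $m>0$, the (Riemannian) Schwarzschild pair
\[
\bar g = \Big(1-\frac{2m}{r^{n-2}}\Big)^{-1} dr^2 + r^2 g_{S^{n-1}}, \qquad \bar u = \Big(1-\frac{2m}{r^{n-2}}\Big)^{1/2},
\]
is an asymptotically flat, static vacuum pair for $r> (2m)^{\frac{1}{n-2}}$; in suitable coordinates it extends smoothly up to the horizon boundary $\big\{ r= (2m)^{\frac{1}{n-2}}\big\}$, on which $\bar u=0$.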
Conjecture~\ref{conjecture} can be formulated as solving a geometric boundary value problem $T(g, u) = (0, 0, \tau, \phi)$ for given boundary data $(\tau, \phi)$, where \begin{align}\label{eq:bdv} \begin{split} T(g, u) := \begin{array}{l} \left\{ \begin{array}{l} -u \mathrm{Ric}_g + \nabla^2_g u \\ \Delta_g u \end{array} \right. \quad \mbox{ in } M\\ \left\{ \begin{array}{l} g^\intercal \\ H_g \end{array} \right. \quad \mbox{ on } \partial M. \end{array} \end{split} \end{align} Since the static vacuum operator is highly nonlinear, as the first fundamental step toward Conjecture~\ref{conjecture}, it is natural to make the following conjecture. \begin{conjecture}[Local well-posedness]\label{co:well-posed} The geometric boundary value problem~\eqref{eq:bdv} is locally well-posed at every background solution in the sense that if $(M, \bar g, \bar u)$ is an asymptotically flat, static vacuum triple, then for $(\tau, \phi)$ sufficiently close to $(\bar g^{\intercal}, H_{\bar g})$ on $\partial M$, there exists a solution $(g, u)$ to $T(g, u) = (0, 0, \tau, \phi)$ that is geometrically unique near $(\bar g, \bar u)$ and can depend continuously on the boundary data $(\tau, \phi)$. \end{conjecture} In particular, positive results toward Conjecture~\ref{co:well-posed} confirm Conjecture~\ref{conjecture} for large classes of boundary data. There are partial results toward Conjecture~\ref{co:well-posed} when the background solution is $(\bar g, \bar u) = (g_{\mathbb E}, 1)$, the Euclidean pair. For boundary data $(\tau, \phi)$ sufficiently close to $(g_{S^2}, 2)$, the Bartnik boundary data of $g_{\mathbb E}$ on the unit round sphere in $\mathbb R^3$, P.~Miao~\cite{Miao:2003} confirmed the local well-posedness under a reflectional symmetry condition. Anderson~\cite{Anderson:2015} removed the symmetry assumption, based on his work with Khuri~\cite{Anderson-Khuri:2013}. There is a proposed flow approach to axially symmetric solutions with numerical results by C.~Cederbaum, O.~Rinne, and M. Strehlau \cite{Cederbaum-Rinne-Strehlau:2019}. In our recent work~\cite{An-Huang:2021}, we confirmed Conjecture~\ref{co:well-posed} for $(\tau, \phi)$ sufficiently close to the Bartnik data of $g_{\mathbb E}$ on wide classes of connected, embedded hypersurfaces $\Sigma$ in Euclidean $\mathbb{R}^n$ for any $n \ge 3$, including \begin{enumerate} \item \label{it:hypersurfaces}Hypersurfaces in an open dense subfamily of any foliation of hypersurfaces. \item Any star-shaped hypersurfaces. \end{enumerate} We also show that the solution $(g,u)$ is geometrically unique in an open neighborhood of $(g_{\mathbb{E}}, 1)$ in the weighted H\"older space. Note that we recently extended Item~\eqref{it:hypersurfaces} to a more general family of hypersurfaces in Euclidean space \cite{An-Huang:2022-JMP}. Those results can be viewed as confirming Conjecture~\ref{co:well-posed} for the Euclidean background. In particular, those static metrics obtained have small ADM masses. In this paper, we implement several new arguments and extend our prior results to a general background metric (including any Schwarzschild metric). Consequently, the new static vacuum metrics obtained can have large ADM masses. To our knowledge, this paper presents the first results toward Conjecture~\ref{conjecture} for ``large'' boundary data, i.e., boundary data that can be far from Euclidean. Before discussing the main results, we introduce some notations and definitions. 
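As a simple illustration of the boundary data in \eqref{eq:bdv}, recorded here before we set up notation: for the round sphere $\{|x|=r\}$ in the Euclidean pair $(g_{\mathbb E}, 1)$, a standard computation gives the Bartnik boundary data
\[
\big(g_{\mathbb E}^\intercal, H_{g_{\mathbb E}}\big) = \Big(r^2 g_{S^{n-1}},\ \frac{n-1}{r}\Big),
\]
which for $n=3$ and $r=1$ is the datum $(g_{S^2}, 2)$ referred to above.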
\noindent {\bf Notation.} Let $(M, \bar g, \bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$, possibly with a compact boundary $\partial M$. We will use $\Omega$ to denote a bounded, open subset in $M$ such that $M\setminus \Omega$ is connected and $\partial M\subset \Omega$ if $\partial M\neq \emptyset$. We use $\Sigma:=\partial \big(M\setminus \Omega\big)$ to denote the boundary, and we always assume that $\Sigma$ is a connected, embedded, smooth hypersurface in $M$. We say that $(g, u)$ is an \emph{asymptotically flat pair} if the metric $g$ is asymptotically flat and the scalar function $u\to 1$ at infinity. Let $L$ denote the linearized operator of $T$ at a static vacuum pair $(\bar g, \bar u)$ on $M\setminus\Omega$. That is, for any smooth family of asymptotically flat pairs $(g(s), u(s))$ on $M\setminus\Omega$ with $\big( g(0), u(0) \big) = (\bar g, \bar u)$ and $\big( g'(0), u'(0) \big) = (h, v)$, we define \[ L(h, v) := \left. \ds\right|_{s=0} T(g(s), u(s)). \] By diffeomorphism invariance of $T$, we trivially have $T(\psi_s^* \bar g, \psi_s^* \bar u) = T(\bar g, \bar u)$ on $M\setminus\Omega$ for any family of diffeomorphisms $\psi_s$ that fix the structure at infinity and the boundary, i.e., $\psi_s \in \mathscr D(M\setminus \Omega)$. See Definition~\ref{de:diffeo} for the precise definitions of $\mathscr D(M\setminus \Omega)$ and its tangent space $\mathcal X(M\setminus \Omega)$ at the identity map. Therefore, by differentiating in $s$, $\Ker L $ contains all the ``trivial'' deformations $(L_X \bar g, X(\bar u))$ with vector fields $X\in \mathcal X(M\setminus \Omega)$. Our first main result says that Conjecture~\ref{co:well-posed} holds, provided that $\Ker L$ is ``trivial''. \begin{Theorem}\label{existence} Let $(M,\bar g,\bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$. Suppose that \begin{align} \label{eq:no-kernel} \Ker L = \big\{ (L_X \bar g, X(\bar u)): X\in \mathcal X(M\setminus \Omega)\big\}. \end{align} Then there exist positive constants $\epsilon_0,C$ such that for each $\epsilon\in (0, \epsilon_0)$, if $(\tau,\phi)$ satisfies $\|(\tau,\phi)-(\bar g^\intercal,H_{\bar g})\|_{\C^{2,\a}(\Sigma)\times \C^{1,\alpha}(\Sigma)}<\epsilon$, then there exists an asymptotically flat, static vacuum pair $(g,u)$ with $\|(g,u)-(\bar g,\bar u)\|_{\C^{2,\a}_{-q}(M\setminus \Omega)}<C\epsilon$ solving the boundary condition $(g^\intercal, H_g)=(\tau, \phi)$ on $\Sigma$. The solution $(g, u)$ is geometrically unique in a neighborhood of $(\bar g, \bar u)$ in $\C^{2,\alpha}_{-q}(M\setminus \Omega)$ and can depend smoothly on $(\tau, \phi)$. \end{Theorem} We refer to Theorem~\ref{exist} for the precise meanings of ``geometric uniqueness'' and ``smooth dependence'', and to Remark~\ref{re:constant} for the dependence of the constants $\epsilon_0, C$. We formulate two \emph{static regular} conditions that imply \eqref{eq:no-kernel}. Along the boundary~$\Sigma$, we denote the second fundamental form $A_{g} := (\nabla \nu)^\intercal$ and the $\ell$-th $\nu$-covariant derivative of the Ricci tensor, $\mathrm{Ric}_g$, by \[ \big(\nabla^\ell_\nu \, \mathrm{Ric}_g\big)(Y, Z):= (\nabla^\ell \mathrm{Ric}) (Y, Z; {\underbrace{{\nu, \dots, \nu}}_{\text{\tiny$\ell$ times}}}\,) \quad \mbox{ for any vectors } Y, Z. 
\] We linearize $\nu$, $A_g$, $H_g$ $\mathrm{Ric}_g$, $\nabla^\ell_\nu \, \mathrm{Ric}_g$ and denote their linearizations at a static vacuum pair $(\bar g, \bar u)$ by $\nu'(h)$, $A'(h)$, $H'(h)$, $\mathrm{Ric}'(h)$, $\big(\nabla_\nu^\ell\mathrm{Ric}\big)'(h)$, respectively. \begin{Definition}[Static regular]\label{de:static} \begin{itemize} \item The boundary $\Sigma$ is said to be \emph{static regular of type (I) in $(M\setminus \Omega , \bar g, \bar u)$} if for any $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ solving $L(h, v)=0$, there exists a nonempty, connected, open subset $\hat \Sigma\subset \Sigma$ with $\pi_1(M\setminus \Omega, \hat \Sigma)=0$ such that $(h, v)$ satisfies \begin{align} \label{eq:static1} v=0, \quad (\nu')(h) (\bar u) + \nu (v)=0, \quad A'(h)=0\qquad \mbox{ on } \hat \Sigma. \end{align} \item The boundary $\Sigma$ is said to be \emph{static regular of type (II) in $(M\setminus \Omega , \bar g, \bar u)$} if for any $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ solving $L(h, v)=0$, there exists a nonempty, connected, open subset $\hat \Sigma\subset \Sigma$ with $\pi_1(M\setminus \Omega, \hat \Sigma)=0$ satisfying that \begin{subequations} \begin{align} &\hat \Sigma \mbox{ is an analytic hypersurface, and} \label{eq:static2}\\ & A'(h)=0, \quad \big(\mathrm{Ric}'(h)\big)^\intercal =0, \quad \big((\nabla_\nu^k\, \mathrm{Ric})'(h)\big)^\intercal=0 \quad \mbox{ on } \hat \Sigma\label{eq:static3} \end{align} \end{subequations} for all positive integers $k$. \end{itemize} \end{Definition} We may refer to the boundary conditions in type (I) as the \emph{Cauchy boundary condition} and the boundary conditions type (II) as the \emph{infinite-order boundary condition} (noting it involves only conditions on $h$). \begin{Theorem}\label{th:trivial} Let $(M,\bar g,\bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$. Let $\hat \Sigma$ be a nonempty, connected, open subset of $\Sigma$ with $\pi_1(M\setminus \Omega, \hat \Sigma)=0$. Let $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ satisfy $L (h, v)=0$ in $M\setminus \Omega$ such that either \eqref{eq:static1} holds or both \eqref{eq:static2},\eqref{eq:static3} hold on $\hat \Sigma$. Then $(h, v) = \big(L_X \bar g, X(\bar u) \big)$ for $X\in \mathcal X(M\setminus \Omega)$. As a consequence, if the boundary $\Sigma$ is static regular in $(M\setminus \Omega, \bar g, \bar u)$ of either type (I) or type (II), then \eqref{eq:no-kernel} holds. Conversely, if \eqref{eq:no-kernel} holds, then both \eqref{eq:static1} and \eqref{eq:static3} hold for $\hat \Sigma = \Sigma$. \end{Theorem} We make some general remarks on the conditions appearing above. \begin{remark}\label{re:intro} \begin{enumerate} \item For any subset $U\subset M$, the condition on the relative fundamental group $\pi_1(M, U)=0$ says that $U$ is connected and the inclusion map $U \hookrightarrow M$ induces a surjection $\pi_1(U)\to \pi_1(M)$. Thus, in the special case that $M\setminus \Omega$ is simply connected, the condition $\pi_1(M\setminus \Omega, \hat \Sigma)=0$ always holds. \item \label{it:nu} The condition $\nu'(h) + \nu(v)=0$ on $\hat \Sigma$ in \eqref{eq:static1} can be dropped in the following two general situations: (i) $\hat \Sigma = \Sigma$, or (ii) the mean curvature of $\hat \Sigma$ in $(M, \bar g)$ is not identically zero. See Section~\ref{sec:relaxed}. \item Both conditions $v=0, (\nu')(h) (\bar u) + \nu (v)=0$ in \eqref{eq:static1} can be dropped when $(\bar g, \bar u) = (g_{\mathbb E}, 1)$ and $\hat \Sigma=\Sigma$. 
Thus, the definition of static regular of type~(I) here recovers the definition of \emph{static regular} in \cite[Definition 2]{An-Huang:2021} (see \cite[Lemma 4.8]{An-Huang:2021}). \item The analyticity of $\hat \Sigma$ in \eqref{eq:static2} will be used together with a fundamental fact that a static vacuum pair $(\bar g, \bar u)$ is analytic in suitable coordinate charts by the result of M\"uller zum Hagen~\cite{Muller-zum-Hagen:1970}. See Appendix~\ref{se:analytic}. \end{enumerate} \end{remark} We verify that a large, open and dense subfamily of hypersurfaces in a general background $(M,\bar g,\bar u)$ are static regular of type (II). \begin{Definition}\label{def:one-sided} Let $\delta$ be a positive number and $\{ \Sigma_t \}, t\in [-\delta, \delta]$, be a family of hypersurfaces in $M$ so that $\Sigma_t = \partial (M\setminus \Omega_t)$ and $\Sigma_t, \Omega_t$ satisfy the same requirements for $\Sigma, \Omega$ as specified in Notation above. We say that $\{\Sigma_t\}$ is a {\it smooth one-sided family of hypersurfaces (generated by $X$) foliating along $\hat\Sigma_t\subset\Sigma_t$ with $M\setminus \Omega_t$ simply connected relative to $\hat \Sigma_t$} if \begin{enumerate} \item $\hat{\Sigma}_t$ is open subset of $\Sigma_t$ satisfying $\pi_1(M\setminus \Omega_t,\hat \Sigma_t)=0$. \item The smooth deformation vector field $X = \frac{\partial}{\partial t} \Sigma_t$ satisfies $\bar g(X,\nu)=\zeta\geq 0$ on each $\Sigma_t$, and $\zeta>0$ in $\hat\Sigma_t$. \end{enumerate} \end{Definition} \begin{Theorem}\label{generic} Let $(M,\bar g,\bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$. Let $\{\Sigma_t\}$ be a smooth one-sided family of hypersurfaces foliating along $\hat{\Sigma}_t$ with $M\setminus \Omega_t$ simply connected relative to $\hat\Sigma_t$. Furthermore, suppose $\hat{\Sigma}_t$ is an analytic hypersurface for every $t$. Then there is an open dense subset $J\subset [-\delta, \delta]$ such that $\Sigma_t$ is static regular of type~(II) in $(M\setminus \Omega_t, \bar g, \bar u)$ for all $t\in J$. \end{Theorem} Together with Theorem~\ref{existence} and Theorem~\ref{th:trivial}, the above theorem confirms Conjecture~\ref{co:well-posed} on local well-posedness at an open and dense subfamily of background solutions $(M\setminus\Omega_t,\bar g,\bar u)$. We expect the local well-posedness to hold without the assumption that $\hat \Sigma_t$ is analytic, which is indeed the case if the background static vacuum pair is Euclidean, see \cite[Theorem 7]{An-Huang:2022-JMP}. For a general background static vacuum pair considered here, the main difficulty is that the system $L(h, v)$ is highly coupled (see the explicit formula in \eqref{eq:L} below) and we cannot derive $v$ is trivial without the analyticity assumption. We also note that if $\pi_1(M \setminus \Omega) =0$ (for example, $\Omega$ is a large ball), one can choose to foliate along any open subset $\hat \Sigma_t$ of an analytic hypersurface $\Sigma_t$ in Theorem~\ref{generic}. The above theorems also lead to other applications as we shall discuss below. The well-known Uniqueness Theorem of Static Black Holes says that an asymptotically flat, static vacuum triple $(M, \bar g, \bar u)$ with $\bar u=0$ on the boundary $\partial M$ must belong to the Schwarzschild family. Note that a Schwarzschild manifold has the following special properties: \begin{enumerate} \item \label{it:CMC}$\bar u =0$ on the boundary $\partial M$, which implies that $\partial M$ has zero mean curvature. 
\item \label{it:increasing} $(M, \bar g)$ is foliated by stable, constant mean curvature $(n-1)$-dimensional spheres. \end{enumerate} It is tempting to ask whether one can characterize Schwarzschild by replacing Item~\eqref{it:CMC} with other geometric conditions. For example, it is natural to ask whether one may replace Item~\eqref{it:CMC} by \begin{enumerate} \item[(1$^\prime$)] \label{it:CMC'}$\bar u>0$ is very small on $\partial M $, and $\partial M$ has constant mean curvature $H_{\partial M}>0$ that is very small. \end{enumerate} We make a general remark regarding Item~(\ref{it:CMC'}$^\prime$). Any asymptotically flat metric (under mild asymptotic assumptions) has a foliation of stable CMC surfaces in an asymptotically flat end with the mean curvature going to zero at infinity by~\cite{Huisken-Yau:1996} (also, for example, \cite{Metzger:2007, Huang:2010}), but when the metric is static $\bar g$, the static potential $\bar u$ is very close to $1$ there and hence cannot be arbitrarily small. Our next corollary says that under both the assumptions Item~(\ref{it:CMC'}$^\prime$) and Item~\eqref{it:increasing}, the static uniqueness fails, as there are many geometrically distinct static vacuum metrics satisfying both assumptions. \begin{Corollary}\label{co2} Let $B\subset \mathbb R^n$ be an open unit ball and $M:=\mathbb R^n\setminus B$. Given any constant $c>0$, there exist many asymptotically flat, static vacuum triples $(M, g, u)$ such that \begin{enumerate} \item $ 0<u< c$ on the boundary $\partial M$, and the mean curvature of $\partial M$ is equal to a positive constant $H \in (0, c)$. \item $(M, g)$ is foliated by stable, constant mean curvature $(n-1)$-dimensional spheres. \item $(M, g)$ is not isometric to a Schwarzschild manifold. \end{enumerate} \end{Corollary} We include a fundamental result used in Theorem~\ref{th:trivial} that is of independent interest. Let $h$ be a symmetric $(0,2)$-tensor on a Riemannian manifold $(M, g)$. We say a vector field $X$ is \emph{$h$-Killing} if $L_X g =h$. The next theorem shows that if $h$ is analytic, then a local $h$-Killing vector field extends globally on an analytic manifold. It generalizes the classic result of Nomizu for local Killing vector fields when $h=0$. \begin{Theorem}[Cf. { \cite[Lemma 2.6]{Anderson:2008}}]\label{th:extension} Let $(U, g)$ be a connected, analytic Riemannian manifold. Let $h$ be an analytic, symmetric $(0,2)$-tensor on $U$. Let $\Omega \subset U$ be a connected open subset satisfying $\pi_1(U, \Omega)=0$. Then an $h$-Killing vector field $X$ in~$\Omega$ can be extended to a unique $h$-Killing vector field on the whole manifold~$U$. \end{Theorem} We remark that a similar statement already appeared in the work of Anderson~\cite[Lemma 2.6]{Anderson:2008} with an outline of the proof. We give an alternative proof similar to Nomizu's proof. The proof is included in Section~\ref{se:h}. \medskip \noindent{\bf Structure of the paper:} In Section~\ref{se:prelim}, we give preliminary results that will be used in later sections. In Section~\ref{se:gauge}, we introduce the gauge conditions essential to study the boundary value problem and discuss the kernel of the linearized problem. In Section~\ref{se:static-regular}, we prove Theorem~\ref{th:trivial} and, along the way, discuss the static regular assumptions. Then we complete the proof of Theorem~\ref{existence} in Section~\ref{se:solution}. Theorem~\ref{generic} and Corollary~\ref{co2} are proven in Section~\ref{se:perturbation}. 
We prove Theorem~\ref{th:extension} in Section~\ref{se:h}, which can be read independently from all other sections. \section{Preliminaries}\label{se:prelim} In this section, we collect basic definitions, notations, facts, and some fundamental results that will be used in the later sections. \subsection{The structure at infinity} Let $n\ge 3$ and $M$ denote an $n$-dimensional smooth complete manifold (possibly with nonempty compact boundary) such that there are compact subsets $B\subset M$, $B_1\subset \mathbb{R}^n$, and a diffeomorphism $\Phi: M\setminus B\longrightarrow \mathbb R^n\setminus B_1$. We use the chart $\{ x \}$ on $M\setminus B$ from $\Phi$ and a fixed atlas of $B$ to define the weighted H\"older space $\C^{k,\alpha}_{-q}(M)$ for $k=0, 1, \dots$, $\alpha \in (0, 1)$, and $\frac{n-2}{2} < q < n-2$. See the definition in, for example, \cite[Section 2.1]{An-Huang:2021}. For $f\in \C^{k,\alpha}_{-q} (M)$, we may also write $f= O^{k,\alpha}(|x|^{-q})$ to emphasize its fall-off rate. Let $g$ be a Riemannian metric on $M$. We say that $(M, g)$ is asymptotically flat (at the rate $q$ and with the \emph{structure at infinity} $(\Phi, x)$) if $g- g_{\mathbb E} \in \C^{2,\alpha}_{-q}(M)$ where $(g_{\mathbb E})_{ij} = \delta_{ij}$ in the chart~$\{x\}$ of $M$ via $\Phi$ from a Cartesian chart of $\mathbb R^n$ and $g_{\mathbb E}$ is smoothly extended to the entire $M$. The ADM mass of $g$ is defined by \begin{align}\label{eq:mass} m_{\mathrm ADM}(g) = \frac{1}{2(n-1)\omega_{n-1}} \lim_{r\to \infty} \int_{|x|=r} \sum_{i,j=1}^n \left( \frac{\partial g_{ij}}{\partial x_i} - \frac{\partial g_{ii}}{\partial x_j} \right) \frac{x_j}{|x|} \, d\sigma \end{align} where $d\sigma$ is the $(n-1)$-volume form on $|x|=r$ induced from the Euclidean metric and $\omega_{n-1}$ is the volume of the standard unit sphere $S^{n-1}$. It is well-known that two structures at infinity of an asymptotically flat manifold $(M, g)$ differ by a rigid motion of $\{x \}$, see~\cite[Corollary 3.2]{Bartnik:1986}. In particular, translations and rotations are continuous symmetries, generated by the vector fields \begin{align*} Z^{(i)} =\partial_i\quad \mbox{ and }\quad Z^{(i,j)}= x_i \partial_j - x_j \partial_i \qquad \mbox{ for }i, j=1,\dots, n. \end{align*} We can smoothly extend $Z^{(i)}, Z^{(i,j)}$ to the entire $M$ and denote the linear space of ``Euclidean Killing vectors'' by \begin{align} \label{de:Z} \mathcal Z = \Span\{ Z^{(i)}\mbox{ and } Z^{(i,j)}\mbox{ for } i, j = 1,\dots, n\}. \end{align} \begin{definition}\label{de:diffeo} Let $(M, g)$ be asymptotically flat. \begin{enumerate} \item Define the subgroup of $\C^{3,\alpha}_{\mathrm{loc}}$ diffeomorphisms of $M\setminus \Omega$ that fix the boundary $\Sigma$ and the structure at infinity as \begin{align*} \mathscr{D} (M\setminus \Omega)= &\Big\{\,\psi\in\C^{3,\alpha}_{\mathrm{loc}}(M): \psi|_\Sigma = \mathrm{Id}_\Sigma \mbox{ and } \psi(x) - (Ox+a) = O^{3,\alpha}(|x|^{1-q} ) \\ &\mbox{ for some constant matrix $O\in SO(n)$ and constant vector $a\in\mathbb R^n$ }\Big \}. \end{align*} \item Let ${\mathcal X}(M\setminus\Omega)$ be the tangent space of $\mathscr D (M\setminus \Omega)$ at the identity map. In other words, ${\mathcal X}(M\setminus \Omega)$ consists of vector fields so that \begin{align*} {\mathcal X}(M\setminus\Omega)=\Big\{& X\in \C^{3,\alpha}_{\mathrm{loc}}(M\setminus\Omega): X=0 \mbox{ on } \Sigma \mbox{ and }X-Z= O^{3,\a}(|x|^{1-q}) \mbox{ for some $Z\in \mathcal Z$} \Big\}. 
\end{align*} \end{enumerate} \end{definition} \subsection{Regge-Teitelboim functional and integral identities} Recall the static vacuum operator\footnote{In \cite{An-Huang:2021} the notation $S(g,u)$ was used to denote a different (but related) operator, which is $(R'|_g)^*(u)$ in below.} \[ S(g, u) = (- u \mathrm{Ric}_{g} + \nabla^2_{ g} u, \Delta_g u). \] Let $(\bar{g}, \bar{u})$ be a static vacuum pair; namely $S(\bar g, \bar u) = 0$. The linearization of $S$ at $(\bar g, \bar u)$ is given by \begin{align*} S'(h, v) =\Big(-\bar u \mathrm{Ric}'(h) + (\nabla^2)'(h) \bar u- v\mathrm{Ric} + \nabla^2 v, \Delta v + (\Delta'(h)) \bar u\Big). \end{align*} We say that $(h,v)$ is a \emph{static vacuum deformation} (at $(\bar g, \bar u)$) if $S'(h, v)=0$. Throughout the paper, we use $S'|_{(g,u)}$ to denote the linearization of S at $(g,u)$ and similar for other differential operators; and we often omit the subscript $|_{(\bar g, \bar u)}$ when linearizing at a static vacuum pair $(\bar g, \bar u)$, as well as the subscript $|_{\bar g}$ in linearizations of geometric operators such as $\mathrm{Ric}_g$, when the context is clear. We refer to Appendix~\ref{se:formula} for explicit formulas of those linearized operators. Let ${\mathcal M} (M\setminus \Omega)$ denote the set of \emph{asymptotically flat pairs} in $M\setminus \Omega$, consisting of pairs $(g, u)$ of Riemannian metrics and scalar functions on $M\setminus \Omega$ satisfying \begin{align}\label{eq:af-pair} {\mathcal M} (M\setminus \Omega) = \Big\{ (g, u): (g-g_{\mathbb E}, u-1)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)\Big\}. \end{align} The Regge-Teitelboim functional for $(g, u)\in {\mathcal M} (M\setminus \Omega)$ is defined as \[ \mathscr{F}(g, u) = -2(n-1)\omega_{n-1} m_{\mathrm ADM }(g) + \int_{M\setminus \Omega} u R_g \, \dvol_g. \] We recall the first variation formula. \begin{lemma}[{\cite[Proposition 3.7]{Anderson-Khuri:2013} and \cite[Lemma 3.1]{Miao:2007})}]\label{le:first} Let $(g(s), u(s))$ be a one-parameter differentiable family of asymptotically flat pairs such that $(g(0), u(0) ) = (g, u)$ and $(h, v) = (g'(0), u'(0))\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$. Then, \begin{align*} &\left. \ds\right|_{s=0} \mathscr{F}(g(s), u(s)) \\ &= \int_{M\setminus \Omega} \Big\langle \big(-u \mathrm{Ric}_g+ \nabla^2_g u - (\Delta_g u) g+ \tfrac{1}{2} u R_g g, R_g\big), \big(h, v\big)\Big\rangle_g \, \dvol_g\\ &\quad +\int_\Sigma \Big\langle \big(uA_g - \nu_g(u) g^\intercal, 2u\big), \big(h^\intercal, H'|_g(h)\big) \Big\rangle_g \, \da_g. \end{align*} \end{lemma} The first variation formula holds for any asymptotically flat pair $(g, u)$. In the next lemma, we see when $(g, u)$ is a static vacuum pair $(\bar g, \bar u)$ and $h$ satisfies $(h^\intercal, H'(h) )=0$, then for any $v$, $(h, v)$ is a critical point of the functional $\mathscr{F}$. If furthermore $R'(h)=0$ in $M\setminus \Omega$, then the deformation $h$ ``preserves the mass'' in the sense that it is also a critical point of the ADM mass functional. (Note that $S'(h, v)=0$ implies $R'(h)=0$.) \begin{lemma}\label{le:mass} Let $(\bar g, \bar u)$ be a static vacuum pair in $M\setminus \Omega$. Suppose $h\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ satisfies $R'(h)=0$ in $M\setminus \Omega$ and $(h^\intercal, H'(h))=0$ on $\Sigma$. Then \[ \lim_{r\to \infty} \int_{|x|=r} \sum_{i,j=1}^n \left( \frac{\partial h_{ij}}{\partial x_i} - \frac{\partial h_{ij}}{\partial x_j}\right) \frac{x_j}{|x|} \, d\sigma_{\bar g}=0. 
\] \end{lemma} \begin{proof} For any function $v\in \C^{2,\alpha}_{-q}(M\setminus\Omega)$, define $(g(s), u(s) ) =( \bar g, \bar u)+s (h, v)$. Notice that $\left.\ps\right|_{s=0} R_{g(s)} = R'(h)=0$. Thus, \[ -2(n-1)\omega_{n-1} \left. \ds\right|_{s=0} m_{\mathrm ADM }(g(s))= \left. \ds\right|_{s=0}\mathscr{F}(g(s), u(s)) =0. \] In the last equality, we use the first variation formula, Lemma~\ref{le:first}, and the assumptions on $h$. The integral identity then follows from \eqref{eq:mass}. \end{proof} When $(h, v)$ is a static vacuum deformation, the Laplace equation for $v$ in $S'(h, v)=0$ gives $\Delta v =O^{0,\alpha} (|x|^{-2-2q})$ and thus by harmonic asymptotics $v(x)= c|x|^{2-n} + O^{2,\alpha}(|x|^{\max\{ -2q, 1-n\}})$ for some real number $c$. We will show that $v$ also ``preserves the mass'' in that $c=0$ in Lemma~\ref{le:v-mass} below. In \cite[Section 3]{An-Huang:2021}, we used the first and second variations of the functional $\mathscr{F}$ to derive several fundamental properties for the static vacuum operators. We list the properties that will be used later in this paper, whose proofs directly extend to our current setting. First, we recall the Green-type identity \cite[Proposition 3.3]{An-Huang:2021}. \begin{proposition}[Green-type identity] Let $(g, u)$ be an asymptotically flat pair in $M\setminus \Omega$. For any $(h, v), (k,w)\in \C^{2}_{-q}(M\setminus \Omega)$, we have \begin{align}\label{equation:Green} \begin{split} &\int_{M\setminus \Omega}\Big\langle P(h, v),(k,w)\Big\rangle_g \dvol_g-\int_{M\setminus \Omega}\Big\langle P(k, w), (h, v)\Big\rangle_g \dvol_g\\ &=- \int_\Sigma \Big\langle Q(h,v) , \big(k^\intercal, H'|_g(k)\big)\Big\rangle_g \da_g+\int_\Sigma \Big\langle Q(k, w), \big(h^\intercal, H'|_g(h)\big)\Big \rangle_g \da_g. \end{split} \end{align} The linear differential operators $P, Q$ at $(g, u)$ are defined by\footnote{Technically speaking, the notations $P(h, v), Q(h, v)$ should have the subscript $(g, u)$ to specify their dependence on $(g, u)$, but for the rest of the paper we will only consider the case that $(g, u)$ is an arbitrary but fixed static vacuum pair $(\bar g, \bar u)$, and thus we omit the subscript.} \begin{align*} P(h, v) &= \Big(\big((R'|_g)^*(u)+ \tfrac{1}{2} u R_g g\big)', \, R'|_g(h)\Big) - \Big(2\big((R'|_g)^*(u)+ \tfrac{1}{2} u R_g g\big)\circ h, 0\Big) \\ &\quad + \tfrac{1}{2} (\tr_g h) \Big((R'|_g)^*(u) + \tfrac{1}{2} u R_g g,\, R_g \Big) \quad \quad \mbox{ in } M\setminus \Omega \\ Q(h, v)&= \Big( \big(uA_g-\nu_g(u) g^\intercal\big)', \, 2v\Big) - \Big(2\big(uA_g-\nu_g(u) g^\intercal\big)\circ h^\intercal, 0\Big)\\ &\quad +\tfrac{1}{2} (\tr_g h^\intercal )\Big(uA_g - \nu_g(u) g^\intercal , \, 2u \Big) \quad \quad \mbox{ on } \Sigma, \end{align*} where the prime denotes the linearization at $(g, u)$ and we recall the formal $\mathcal L^2$-adjoint operator $(R'|_g)^*(u) := - u\mathrm{Ric}_g + \nabla^2_g u - (\Delta_g u) g$. \end{proposition} In the special case that $(g, u)$ is a static vacuum pair $(\bar g, \bar u)$, it is straightforward to verify that $P(h, v)=0$ if and only if $S'(h, v)=0$. Thus the Green-type identity leads to the following direct consequence. \begin{corollary}\cite[Corollary 3.5]{An-Huang:2021}\label{co:Green} Let $(\bar g, \bar u)$ be an asymptotically flat, static vacuum pair in $M\setminus \Omega$. Suppose that $(h, v), (k,w)\in \C^{2}_{-q}(M\setminus \Omega)$ are static vacuum deformations and that $(h, v)$ satisfies $h^\intercal=0, H' (h)=0$ on $\Sigma$. 
Then \[ \int_\Sigma \Big\langle Q(h,v) , \big(k^\intercal, H'(k)\big)\Big\rangle_{\bar g} \da_{\bar g}=0 \] where $Q(h,v) = \Big( vA_{\bar g} + \bar u A' (h) - \big(\left( \nu'(h)\right) \bar u + \nu (v)\big) g^\intercal, 2v \Big)$. \end{corollary} Let $\psi_s$ be a smooth family of diffeomorphisms in $\mathscr{D} (M\setminus \Omega)$ from Definition~\ref{de:diffeo} (also recall its tangent space $\mathcal{X}(M\setminus \Omega)$ defined there). Computing the first variation of the functional $\mathscr F$ along the family of pull-back pairs $(g(s), u(s))=\psi_s^*(g, u)$ as in \cite[Proposition 3.2]{An-Huang:2021} gives the following identities. \begin{proposition}[Orthogonality]\label{proposition:cokernel} Let $(M, g, u)$ be an asymptotically flat triple with $u>0$. For any $X\in \mathcal{X}(M\setminus \Omega)$ and any $(h, v) \in \C^{2}_{-q}(M \setminus \Omega)$, we have \begin{align} \int_{M\setminus \Omega} \bigg\langle S(g, u),\, \kappa_0(g, u, X) \bigg \rangle_g\, d\mathrm{vol}_g &=0 \label{equation:cokernel1}, \end{align} where \begin{align}\label{eq:kappa} \kappa_0 (g, u, X)&:= \Big( L_X g - \big(\Div_g X+ u^{-1} X(u) \big)g, \, - \Div_g X + u^{-1} X(u)\Big). \end{align} Linearizing the identity at a static vacuum $(g, u) = (\bar g, \bar u)$ yields \begin{align} \int_{M\setminus \Omega} \bigg\langle S' (h, v),\, \kappa_0 (\bar{g}, \bar{u}, X)\bigg\rangle_{\bar g} \, d\mathrm{vol}_{\bar g} &=0 \label{equation:cokernel2} \end{align} \end{proposition} The identities above are important because they will be used to find ``spaces'' complementing to the ranges of the nonlinear operator $S$ and its linearization $S'$. \subsection{$h$-Killing vectors and the geodesic gauge}\label{se:h1} In this section we study the situation when a symmetric $(0,2)$-tensor $h$ takes the form $L_X g$. The results here hold in a general Riemannian manifold $(U, g)$, and Corollary~\ref{co:geodesic} below is used to prove Theorem~\ref{th:trivial}. Given a symmetric $(0,2)$-tensor $h$, we define the $(1,2)$-tensor $T_h$ by, in local coordinates, \[ (T_h)^i_{jk}= \tfrac{1}{2} (h^i_{j;k} + h^i_{k;j} - h_{jk;}^{\;\;\;\;\; i} ) \] where the upper indices are all raised by $g$, e.g. $h^i_j=g^{i\ell} h_{\ell j}$. Note that $(T_h)^i_{jk}$ is symmetric in $(j, k)$, and thus we may use $T_h(V, \cdot)$ to unambiguously denote its contraction with a vector $V$ in the index either $j$ or $k$. We say $X$ is an \emph{$h$-Killing vector} if $L_X g = h$. \begin{lemma}\label{le:X} Let $X$ be an $h$-Killing vector field. Then for any vector $V$, \begin{align*} \nabla_V (\nabla X) = - R(X, V) + T_h (V,\cdot) \end{align*} where the curvature tensor $R(X, V) := \nabla_X \nabla_V - \nabla_V \nabla_X - \nabla_{[X, V]}$. \end{lemma} \begin{proof} We prove the identity with respect to a local orthonormal frame $\{ e_1, \dots, e_n \}$ and shall not distinguish upper or lower indices in the following computations. Write $V= V_k e_k$ and \[ (\nabla_V (\nabla X) )^i_j = V_k\, g(\nabla_{e_k} \nabla_{e_j} X - \nabla_{\nabla_{e_k} e_j} X, e_i ):= V_k X_{i;jk}. \] The desired identity follows from the following identity multiplied by $V_k$: \begin{align}\label{eq:derivativeX} X_{i;jk}=- R_{\ell kj i} X^\ell + \tfrac{1}{2} (h_{ij;k} + h_{ik;j} - h_{jk;i}). \end{align} The previous identity is a well-known fact, we include the proof for completeness. By commuting the derivatives, we get \begin{align*} X_{i;jk} - X_{i;kj} &= R_{kj\ell i} X^\ell\\ X_{j;ki} - X_{j;ik} &= R_{ik\ell j} X^\ell\\ X_{k;ij} - X_{k;ji} &= R_{ji\ell k} X^\ell. 
\end{align*} Adding the first two identities and then subtracting the third one, we get
\begin{align*} &X_{i;jk} - X_{i;kj} + X_{j;ki} - X_{j;ik} - X_{k;ij} + X_{k;ji} \\ &=( R_{kj\ell i} + R_{ik\ell j} - R_{ji\ell k} )X^\ell\\ & = 2R_{ij\ell k} X^\ell =-2 R_{\ell kj i} X^\ell \end{align*}
where we apply the first Bianchi identity to the curvature terms. Rearranging the terms on the left-hand side above gives
\begin{align*} &2 X_{i;jk} - (X_{i;jk} + X_{j;ik} ) - (X_{i;kj} + X_{k;ij} )+ (X_{j;ki} + X_{k;ji}) \\ &=2X_{i;jk} - h_{ij;k} - h_{ik;j} + h_{jk;i}. \end{align*}
This gives \eqref{eq:derivativeX}. \end{proof}
For a given $h$, in general there does not exist a corresponding $h$-Killing vector field. (For example, when $h=0$, an $h$-Killing vector field is just a Killing vector field, and a generic Riemannian manifold does not admit any Killing vector field.) Nevertheless, the next lemma says that it is still possible to find an $X$ such that $h$ and $L_X g$ are equal when they are both contracted with a parallel vector~$V$. It uses an ODE argument motivated by the work of Nomizu~\cite{Nomizu:1960}.
\begin{lemma}\label{le:V} Let $(U, g)$ be a Riemannian manifold whose boundary $\partial U$ is an embedded hypersurface. Let $1\le k \le \infty$, $\Sigma$ be an open subset of $\partial U$, and $h$ be a symmetric $(0,2)$-tensor in~$U$. Suppose $g, \Sigma\in \C^{k,\alpha}, h\in \C^{k-1,\alpha}$ (or analytic) in coordinate charts containing $\Sigma$. Let $V\in \C^{k,\alpha}$ (or analytic) be a complete vector field transverse to $\Sigma$ satisfying $\nabla_V V=0$ in~$U$. Then there is a vector field $X\in \C^{k-2, \alpha}$ (or analytic) in a collar neighborhood of $\Sigma$ such that $X=0$ on $\Sigma$ and
\begin{align}\label{eq:V} L_X g (V, \cdot) = h(V, \cdot) \quad \mbox{ in the collar neighborhood of } \Sigma. \end{align}
\end{lemma}
\begin{proof} Since $V$ is complete, given $p\in \Sigma$ we let $\gamma(t)$ be the integral curve of $V$ emanating from $p$, i.e. $\gamma(0)=p$ and $\gamma'(t)=V$. Let $\{ e_1, \dots, e_n \}$ be a local orthonormal frame such that $e_n$ is a unit normal to $\Sigma$. Consider the first-order linear ODE system for the pair $(X,\omega)$ consisting of a vector field $X$ and a $(1,1)$-tensor $\omega$ along $\gamma(t)$:
\begin{align*} \nabla_V X&=\omega (V)\\ \nabla_V \omega &= -R(X, V) + T_h (V, \cdot). \end{align*}
(We remark that Lemma~\ref{le:X} implies an $h$-Killing vector $X$ satisfies the ODE system with $\omega(e_i) = \nabla_{e_i} X$.) We rewrite the above ODE system in the local orthonormal frame:
\begin{align} \label{eq:system} \begin{split} X_{i;j} V_j &= \omega_{ij} V_j\\ \omega_{ij;\ell} V_\ell &= - R_{k\ell ij} X_k V_\ell + \tfrac{1}{2} \left( h_{ij,\ell} + h_{i\ell, j} - h_{j\ell, i} \right) V_\ell \end{split} \end{align}
where $\omega_{ij} =\omega^i_j$ (the first index is lowered by $g$). We choose the initial conditions for $X_i$ and $\omega_{ij}$ at $p\in \Sigma$:
\begin{align}\label{eq:initial} X_i= 0, \quad \omega_{ij}V_i V_j = \tfrac{1}{2} h_{ij}V_i V_j, \quad \omega_{ia}V_i = h_{ia}V_i, \quad \omega_{ab} = 0 \end{align}
where the indices $i,j=1,\dots,n,~a, b=1, \dots, n-1$. Since the coefficients of the ODE are $\C^{k-2,\alpha}$ (or analytic) in $p$, the vector field $X$ and the tensor $\omega$ are defined everywhere in the collar neighborhood of $\Sigma$ by varying $p$, and they are $\C^{k-2, \alpha}$ (or analytic) by smooth dependence of the ODE solutions on the data (or by the Cauchy-Kovalevskaya theorem).
We first show that in a collar neighborhood of $\Sigma$:
\begin{align}\label{eq:omegaV} \omega_{ij} V_i V_j = \tfrac{1}{2} h_{ij} V_i V_j. \end{align}
Note that $\omega_{ij}+\omega_{ji}-h_{ij}$ is constant along $\gamma(t)$ because $(\omega_{ij} + \omega_{ji} - h_{ij} )_{;\ell}V^\ell =0$ by \eqref{eq:system} and symmetry of the curvature tensor. Since $V$ is parallel along $\gamma$, the quantity $(\omega_{ij}+ \omega_{ji} - h_{ij})V_i V_j$ is also constant along $\gamma(t)$, and it equals $(2\omega_{ij} - h_{ij} )V_i V_j=0$ at $p$ by the initial conditions; this proves \eqref{eq:omegaV}.
To prove \eqref{eq:V}, observe that $( X_{i;j} + X_{j;i} - h_{ij} )V_j $ satisfies a first-order linear ODE along~$\gamma(t)$, motivated by the result of~\cite[Lemma 2.6]{Ionescu-Klainerman:2013}:
\begin{align*} &\big(( X_{i;j} + X_{j;i} -h_{ij} )V_j\big)_{;\ell} V_\ell \\ &= (\omega_{ij;\ell}+ X_{j;i\ell} - h_{ij;\ell})V_j V_\ell\\ &=\big (- R_{k\ell ji } X_k + h_{ij;\ell} -\tfrac{1}{2} h_{j\ell;i}\big) V_j V_\ell + X_{j;\ell i } V_j V_\ell + R_{\ell i k j} X_k V_j V_\ell - h_{ij;\ell} V_j V_\ell \\ &= -\tfrac{1}{2} h_{j\ell;i} V_j V_\ell + X_{j;\ell i} V_j V_\ell \\ &= -\tfrac{1}{2} h_{j\ell;i} V_j V_\ell + (X_{j;\ell} V_j V_\ell )_{;i}-X_{j;\ell}(V_{j;i}V_{\ell}+V_jV_{\ell;i})\\ &= -\tfrac{1}{2} h_{j\ell;i} V_j V_\ell + (\omega_{j\ell} V_j V_\ell )_{;i}-(X_{j;\ell}+X_{\ell;j})V_{j;i}V_{\ell}\\ &= -\tfrac{1}{2} h_{j\ell;i} V_j V_\ell + \tfrac{1}{2}(h_{j\ell} V_j V_\ell )_{;i}-(X_{j;\ell}+X_{\ell;j})V_{j;i}V_{\ell}\\ &=-(X_{j;\ell}+X_{\ell;j}-h_{j\ell})V_{\ell}V_{j;i} \end{align*}
where in the third equality we use the Bianchi identity
\[ ( - R_{k\ell ji } + R_{\ell i k j} ) V_j V_\ell =( R_{k\ell ij} + R_{\ell i kj} ) V_j V_\ell= -R_{ik\ell j} V_j V_\ell =0 \]
and in the second-to-the-last equality we use \eqref{eq:omegaV}. Thus, we have shown that $(X_{i;j} + X_{j;i} -h_{ij} )V_j$ satisfies the first-order linear ODE. Our initial conditions \eqref{eq:initial} imply that at $p$:
\begin{align*} (X_{i;j} + X_{j;i} -h_{ij} )V_iV_j &= (2\omega_{ij}- h_{ij} )V_iV_j = 0 \\ (X_{a; j} + X_{j;a} - h_{aj} )V_j &= (\omega_{aj} -h_{aj})V_n=0 \qquad \mbox{ for $a = 1, \dots, n-1$} \end{align*}
where we use that $X_{j; a}=0$ at $p$ because $X$ is identically zero on $\Sigma$. Since $(X_{i;j} + X_{j;i} -h_{ij} )V_j=0$ at $p$, it is identically zero along the curve $\gamma(t)$ by uniqueness for the linear ODE, and thus, varying $p$, it is identically zero in a collar neighborhood of $\Sigma$. \end{proof}
Let $\nu$ be a unit normal vector field to $\Sigma$. We can extend $\nu$ parallelly along the normal geodesics in a collar neighborhood of $\Sigma$, so that $\nabla_\nu \nu=0$. We say that a symmetric $(0,2)$-tensor $h$ in $U$ satisfies the \emph{geodesic gauge} in a collar neighborhood of $\Sigma$ if, in the collar neighborhood,
\begin{align*} h(\nu, \cdot)= 0,\qquad (\nabla_{\nu} h)(\nu, \cdot)=0. \end{align*}
By Lemma~\ref{le:V} and letting $V= \nu$ there, we can give an alternative proof of the existence of the geodesic gauge in \cite[Lemma 2.5]{An-Huang:2021}, and in particular, we obtain an \emph{analytic} vector field $X$ if the metric is analytic.
\begin{corollary}[Geodesic gauge]\label{co:geodesic} Let $(U, g)$ be a Riemannian manifold whose boundary $\partial U$ is an embedded hypersurface. Let $\Sigma$ be an open subset of $\partial U$ and $h$ be a symmetric $(0,2)$-tensor in $U$. \begin{enumerate} \item Let $1\le k \le \infty$. Suppose $g, \Sigma \in \C^{k,\alpha}, h \in \C^{k-1,\alpha}$ in some coordinate charts containing $\Sigma$.
Then there is a vector field $X\in \C^{k-2,\alpha}$ in a collar neighborhood of $\Sigma$ such that $X=0$ on $\Sigma$ and
\begin{align*} L_X g (\nu, \cdot) = h(\nu, \cdot) \quad \mbox{ in the collar neighborhood of } \Sigma. \end{align*}
\item Suppose $\Sigma$ is an analytic hypersurface and $g, h$ are analytic in some coordinate charts containing $\Sigma$. Then there is an analytic vector field $X$ in a collar neighborhood of $\Sigma$ such that $X=0$ on $\Sigma$ and
\begin{align*} L_X g (\nu, \cdot) = h(\nu, \cdot) \quad \mbox{ in the collar neighborhood of } \Sigma. \end{align*}
\end{enumerate} \end{corollary}
\section{Static-harmonic gauge and orthogonal gauge} \label{se:gauge}
Recall the operator $T$ defined in \eqref{eq:bdv}. As already mentioned in Section~\ref{se:intro}, if $(g, u)$ solves $T(g, u) = (0, 0, \tau, \phi)$, then any $\psi$ in the diffeomorphism group $\mathscr{D}(M\setminus \Omega)$ defined in Definition~\ref{de:diffeo} gives rise to another solution $(\psi^* g, \psi^* u)$. To overcome the infinite-dimensional ``kernel'' of $T$, one would like to introduce suitable ``gauges''.
\subsection{The gauges}
Fix a static vacuum pair $(\bar g, \bar u)$ with $\bar u>0$. For any pair $(g, u)$ of a Riemannian metric and a scalar function, we use the Bianchi operator $\b_{\bar g}$ (see \eqref{eq:Bianchi}) to define the covector
\begin{align*} \mathsf{G}(g, u) =\b_{\bar g} g+\bar u^{-2}udu-{\bar u}^{-1}g( \nabla_{\bar g}\bar u, \cdot ). \end{align*}
Note that of course $\mathsf{G}(\bar{g}, \bar{u})=0$. We use $\mathsf{G}'(h, v)$ to denote the linearization of $\mathsf{G}(g, u)$ at $(\bar{g}, \bar{u})$, and thus
\[ \mathsf{G}'(h,v) = \b h+ \bar u^{-1} dv + \bar{u}^{-2} v d\bar{u} - \bar{u}^{-1} h(\nabla\bar{u}, \cdot ) \]
where the Bianchi operator and covariant derivatives are with respect to $\bar g$. For the rest of this section, we omit the subscripts $\bar g$ when computing differential operators with respect to $\bar g$, as well as the subscript $(\bar g, \bar u)$ when linearizing at $(\bar g, \bar u)$.
\begin{lemma} Let $(\bar g, \bar u)$ be a static vacuum pair with $\bar u>0$. Then for any vector field $X$,
\begin{align} \label{eq:gauge} \mathsf{G}'(L_X \bar g, X(\bar u))=-\Delta X-\bar u^{-1}\nabla X(\nabla \bar u,\cdot)+\bar u^{-2}X(\bar u)d \bar u =: \Gamma(X). \end{align}
(Here and in the rest of the paper, we slightly abuse notation and blur the distinction between a vector and its dual covector with respect to $\bar g$ when the context is clear.)
\end{lemma}
\begin{proof} By the linearization formula,
\begin{align}\label{eq:G} \begin{split} \mathsf{G}' (L_X \bar g, X(\bar u)) &=\beta (L_X \bar g) + \bar u^{-1} d(X( \bar u)) - \bar u^{-1} L_X \bar g (\nabla \bar u, \cdot)+ \bar u^{-2} X( \bar u) d \bar u \\ &=-\Delta X-\bar u^{-1}\nabla X (\nabla \bar u,\cdot)+\bar u^{-2}X(\bar u)d \bar u + ( - \mathrm{Ric} + \bar u^{-1} \nabla^2 \bar u) (X, \cdot ) \end{split} \end{align}
where we use \eqref{eq:laplace-beta} for $\beta (L_X \bar g) = -\Delta X - \mathrm{Ric}(X,\cdot)$ and
\begin{align}\label{eq:com} d(X( \bar u)) - L_X \bar g (\nabla \bar u ,\cdot )&= \nabla^2 \bar u (X, \cdot ) - \nabla X (\nabla \bar u,\cdot). \end{align}
Since $(\bar g, \bar u)$ is static vacuum, the term $( - \mathrm{Ric} + \bar u^{-1} \nabla^2\bar u)(X,\cdot)$ vanishes. \end{proof}
Using the operator $\Gamma$ defined in \eqref{eq:gauge}, we define the ``gauged'' subspace of $\mathcal X(M\setminus \Omega)$ from Definition~\ref{de:diffeo}.
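(As a purely illustrative example, which is not used elsewhere: for the Euclidean background $(\bar g, \bar u) = (g_{\mathbb E}, 1)$ one has $\nabla \bar u = 0$, so \eqref{eq:gauge} reduces to $\Gamma(X) = -\Delta_{g_{\mathbb E}} X$, and the gauge condition $\Gamma(X)=0$ in the definition below simply requires the components of $X$ to be harmonic.)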
\begin{definition} Let $(M, \bar g, \bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$. Define ${\mathcal X}^\mathsf{G}(M\setminus\Omega)$ to be the subspace of ${\mathcal X}(M\setminus\Omega)$ given by
\[ {\mathcal X}^\mathsf{G}(M\setminus\Omega)=\big\{ X \in {\mathcal X}(M\setminus\Omega): \Gamma (X)=0 \mbox{ in } M\setminus\Omega \big\}. \]
\end{definition}
We show below that ${\mathcal X}^\mathsf{G}(M\setminus\Omega)$ is finite-dimensional, of dimension
\[ N = n+\frac{n(n-1)}{2}, \]
which is the same as the dimension of the space of Euclidean Killing vectors $\mathcal Z$ defined in~\eqref{de:Z}.
We begin with a fundamental PDE lemma on the special structure of the operator $\Gamma$. For a given asymptotically flat pair $(g, u)$ with $u>0$, we define the operator on vectors by
\begin{align} \label{eq:Gamma} \Gamma_{(g, u)} (X)= -\Delta_g X - u^{-1} \nabla X(\nabla_g u,\cdot) + u^{-2} X(u) du. \end{align}
In this notation, the operator $\Gamma$ in \eqref{eq:gauge} is the special case in which $(g, u)$ is the static vacuum pair $(\bar g, \bar u)$. Recall \eqref{eq:af-pair} that ${\mathcal M}(M\setminus \Omega)$ consists of asymptotically flat pairs of fall-off rate $q$.
\begin{lemma}\label{PDE} Let $(g, u)\in {\mathcal M} (M\setminus \Omega)\cap \C^{k,\alpha}_{-q}(M\setminus \Omega)$ with $u>0$, let $\delta$ be a real number, and let $k\ge 2$ be an integer. Consider
\[ \Gamma_{(g, u)} : \big\{ X\in \C^{k,\a}_{\delta}(M\setminus\Omega): X=0 \mbox{ on } \Sigma \big\} \longrightarrow \C^{k-2,\a}_{\delta-2}(M\setminus\Omega). \]
Then the following holds: \begin{enumerate} \item \label{item:0} For $2-n<\delta < 0$, the map is an isomorphism. Therefore, for any fixed boundary value $Z\in \C^{k,\a}(\Sigma)$, the following map is bijective:
\[ \Gamma_{(g, u)} : \big\{ X\in \C^{k,\a}_{\delta}(M\setminus\Omega): X=Z \mbox{ on } \Sigma \big\} \longrightarrow \C^{k-2,\a}_{\delta-2}(M\setminus\Omega). \]
\item \label{item:n} For $0<\delta <1$, the map is surjective and the kernel space is $n$-dimensional, spanned by $\{ V^{(1)}, \dots, V^{(n)}\}$ where $V^{(i)} = \partial_i + O^{k,\alpha}(|x|^{-q})$. \end{enumerate} \end{lemma}
\begin{proof} Note that $\Gamma_{( g, u)}$ has the same Fredholm index as the Laplace-Beltrami operator $\Delta_{ g}$. In the proof, the covariant derivatives and inner products are taken with respect to $g$, and we omit the subscripts.
For Item~\eqref{item:0}, $\Gamma_{( g, u)}$ has Fredholm index $0$. It suffices to show that the kernel is trivial. Observe the identity for the $g$-inner product:
\begin{align*} \left\langle X, \Gamma_{( g, u)}(X) \right\rangle= -\tfrac{1}{2} \Delta |X|^2 - \tfrac{1}{2} u^{-1} {\nabla u} \cdot \nabla |X|_g^2 + |\nabla X|^2 + u^{-2} (X ( u))^2. \end{align*}
Thus, if $\Gamma_{(g,u)}(X)=0$, then
\[ \tfrac{1}{2} \Delta |X|^2 + \tfrac{1}{2} u^{-1} {\nabla u} \cdot \nabla |X|^2 \ge 0. \]
By the strong maximum principle and using $X=0$ on $\Sigma$ and $X\to 0$ at infinity, $X$ is identically zero. The statement about the general boundary value $Z$ is standard.
For Item~\eqref{item:n}, $\Gamma_{(g,u)}$ has the Fredholm index $n$. By harmonic expansion (e.g. \cite[Theorem 1.17]{Bartnik:1986}), if $\Gamma_{(g,u)}(X)=0$, then
\[ X = c_i\partial_i + O(|x|^{-q}) \]
for some constants $c_i$. It implies that the kernel is at most $n$-dimensional because, by Item~\eqref{item:0}, if $c_1=\dots=c_n=0$, then $X$ is identically zero. Since the Fredholm index is $n$, the dimension of the kernel must be exactly $n$.
\end{proof} \begin{corollary}\label{cor:dimension} $\dim {\mathcal X}^\mathsf{G}(M\setminus\Omega)=N$. \end{corollary} \begin{proof} Recall the basis $Z^{(i)}, Z^{(i, j)}$ of the space $\mathcal Z$ defined in \eqref{de:Z}, and, outside a compact set of $M$, \[ Z^{(i)} =\partial_i\quad \mbox{ or }\quad Z^{(i,j)}= x_i \partial_j - x_j \partial_i \qquad \mbox{ for }i, j=1,\dots, n. \] We compute $\Gamma (Z^{(i)})= O^{1,\alpha}(|x|^{-2-q})$. By Item~\eqref{item:0} of Lemma~\ref{PDE}, there is a unique $Y= O^{3,\alpha}(|x|^{-q})$ such that \begin{align*} \Gamma(Y) &=-\Gamma (Z^{(i)}) \quad \mbox{ in } M\setminus\Omega\\ Y &= -Z^{(i)} \quad \mbox{ on } \Sigma. \end{align*} Then $W^{(i)}:= Y+Z^{(i)} \in {\mathcal X}^\mathsf{G}(M\setminus\Omega)$. Similarly, we compute $\Gamma(Z^{(i, j)}) = O^{1,\alpha}(|x|^{-q-1})$ and can solve $W^{(i, j)}\in {\mathcal X}^\mathsf{G}(M\setminus \Omega)$ such that $W^{(i, j)} - Z^{(i, j)}\in O^{3,\alpha}(|x|^{1-q})$. It is clear that $W^{(i)}$ and $W^{(i, j)}$ are linearly independent. It remains to show that they span ${\mathcal X}^\mathsf{G}(M\setminus\Omega)$, and thus $\Dim {\mathcal X}^\mathsf{G}(M\setminus\Omega)=N$. Let $X\in {\mathcal X}^\mathsf{G}(M\setminus\Omega)$. Then $X-W\in O^{3,\alpha}(|x|^{1-q})$ where $W$ is a linear combination of $W^{(i)}$ and $W^{(i,j)}$. We separate the discussions into the case $1-q<0$ and the case $0<1-q<1$ (the latter case occurs only when $n=3$). \begin{itemize} \item If $1-q<0$, using $\Gamma(X-W)=0$ and Item~\eqref{item:0} of Lemma~\ref{PDE}, we obtain $X=W$. \item If $0<1-q <1$ (when $n=3$), by Item~\eqref{item:n} of Lemma~\ref{PDE}, we have \[ X - W = c_i W^{(i)} + O^{3,\alpha}(|x|^{-q}). \] Because $\Gamma (X-W-c_iW^{(i)})=0$, as in the first case we conclude that $X= W+c_i W^{(i)}$. \end{itemize} \end{proof} After introducing $\mathsf{G}$ and the gauged space of vectors ${\mathcal X}^\mathsf{G}$, we now define the gauge conditions in solving the boundary value problem for~$T$. \begin{definition}\label{de:gauge} Let $(M, \bar{g}, \bar{u})$ be an asymptotically flat, static vacuum triple with $\bar u>0$. \begin{enumerate} \item We say that $(g,u)$ satisfies the {\it static-harmonic gauge} (with respect to $(\bar g,\bar u)$) in $M\setminus\Omega$ if $\mathsf{G}(g,u)=0$ in $M\setminus\Omega$. \item Fix a positive scalar function $\rho$ in $M$ with $\rho=|x|^{-1}$ on the end $M\setminus K$. We say that $(g, u)$ satisfies an {\it orthogonal gauge} (with respect to $(\bar g, \bar u)$ and $\rho$) in $M\setminus\Omega$ if, for all $X\in {\mathcal X}^\mathsf{G}(M\setminus \Omega)$, \[ \int_{M\setminus\Omega} \Big\langle \big( (g, u) - (\bar g, \bar u)\big) , \big(L_X \bar g, X(\bar u) \big)\Big \rangle \rho \, \dvol = 0, \] where the inner product and volume form are of $\bar g$. \end{enumerate} \end{definition} \begin{remark} \begin{enumerate} \item When $(\bar{g}, \bar{u}) = (g_{\mathbb{E}}, 1)$ is the Euclidean pair, $\mathsf{G}(g,u)=\b_{g_{\mathbb{E}}} g+ udu$. While the above definition of static-harmonic gauge does not recover our prior definition $\b_{g_{\mathbb{E}}} g+ du=0$ in \cite[Definition 4.2]{An-Huang:2021}, both conditions give the same \emph{linearized} condition which is sufficient. See also \cite[Remark 4.3]{An-Huang:2021}. \item If $\bar u>0$, we can define the warped product metrics ${\bf \bar{g}} = \pm \bar{u}^2 dt^2 +\bar{g}$ and ${\bf g} = \pm u^2 dt^2 + g$ on $\mathbb{R} \times M$. 
The condition $\mathsf{G}(g,u)=0$ is equivalent to requiring that $\bf g$ satisfies the harmonic gauge $\b_{\bf \bar{g}} {\bf g}=0$. See Proposition~\ref{pr:harmonic}. \end{enumerate} \end{remark} We will soon see in Section~\ref{se:operator} below the static-harmonic gauge will be used to obtain ellipticity for the boundary value problem. The following lemma gives a justification why an orthogonal gauge is needed. \begin{lemma}\label{le:diffeo} Let $(M, \bar g, \bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$. There is an open neighborhood $\mathcal{U}\subset {\mathcal M}(M\setminus\Omega)$ of $(\bar g, \bar u)$ and an open neighborhood $\mathscr{D}_0 \subset \mathscr D (M\setminus\Omega)$ of $\mathrm{Id}_{M\setminus \Omega}$ such that for any $(g, u) \in \mathcal U$, there is a unique $\psi \in \mathscr{D}_0$ such that $(\psi^* g, \psi^* u)$ satisfies both the static-harmonic and orthogonal gauge. \end{lemma} \begin{proof} Denote the weighted ${\mathcal L}^2$-inner product: \[ \big\langle (h, v), (k, w)\big\rangle_{{\mathcal L}^2_\rho} = \int_{M\setminus \Omega} (h, v)\cdot (k, w)\rho \, \dvol \] where the inner product and volume form are with respect to $\bar g$ and $\rho$ is the weight function in Definition~\ref{de:gauge}. By Corollary~\ref{cor:dimension}, let $\{ X^{(1)}, \dots, X^{(N)} \}$ be an orthonormal basis of ${\mathcal X}^\mathsf{G}(M\setminus \Omega)$ with respect to the ${\mathcal L}^2_\rho$-inner product in the sense that \[ \Big\langle \big(L_{X^{(i)}} \bar g, X^{(i)} (\bar u) \big ) ,\big( L_{X^{(j)}} \bar g, X^{(j)} (\bar u)\big) \Big\rangle_{{\mathcal L}^2_\rho} =\delta_{ij}. \] Define the map $F:\mathscr D (M\setminus\Omega)\times {\mathcal M}(M\setminus\Omega) \longrightarrow \C^{1,\a}_{-q-1}(M\setminus \Omega)\times\mathbb R^N$ by \begin{equation*} F(\psi,(g,u))=\big({\mathsf{G}}(\psi^*g,\psi^*u),(b_1,...,b_N)\big) \end{equation*} where each number $b_i=\Big\langle \big(\psi^*g-\bar g,\psi^*u-\bar u\big),\big(L_{X^{(i)}}\bar g,X^{(i)}(\bar u)\big)\Big\rangle_{{\mathcal L}^2_\rho}$. Linearizing $F$ in the first argument at $(\mathrm{Id}_{M\setminus\Omega},(\bar g,\bar u))$ gives \begin{equation*} \begin{split} &D_1F:{\mathcal X}(M\setminus \Omega)\longrightarrow \C^{1,\a}_{-q-1}(M\setminus \Omega)\times\mathbb R^N\\ &D_1F(X)=\Big({\mathsf{G}}'(L_X\bar g,X(\bar u)),\big(c_1(X),...,c_N(X)\big)\Big) \end{split} \end{equation*} with $c_i(X)=\Big\langle \big(L_X\bar g,X(\bar u)\big), \big(L_{X^{(i)}}\bar g,X^{(i)}(\bar u)\big)\Big\rangle_{{\mathcal L}^2_\rho}$. Once we show that $D_1F$ is an isomorphism, the lemma follows from implicit function theorem. If $D_1 F(X)=0$, then $\Gamma(X)=\mathsf{G}' (L_X\bar g,X(\bar u))=0$ and $c_i(X)=0$ for all $i$. It implies that $X\in {\mathcal X}^\mathsf{G}(M\setminus \Omega)$, and thus $X\equiv 0$ in $M\setminus\Omega$. To see that $D_1 F$ is surjective, for any covector $Z\in \C^{1,\alpha}_{-q-1} (M\setminus \Omega)$ and constants $a_1, \dots, a_N$, there exists $Y\in {\mathcal X}(M\setminus \Omega)$ solving $\mathsf{G}'(L_Y \bar g, Y(\bar u))= \Gamma (Y) = Z$ by Lemma~\ref{PDE}. Let $X= Y+ \big(a_1-c_1(Y)\big) X^{(1)}+\dots + \big(a_N-c_N(Y)\big) X^{(N)}$. Then we have $D_1 F(X) =\big (Z, (a_1,\dots, a_N)\big)$. 
\end{proof} \subsection{The gauged operator}\label{se:operator} Consider the operator $T$ defined in \eqref{eq:bdv} on the manifold $M\setminus\Omega$: \begin{align*} &T: {\mathcal M}(M\setminus \Omega) \longrightarrow \C^{0,\alpha}_{-q-2}(M\setminus\Omega)\times \mathcal{B}(\Sigma) \\ &T(g, u) := \begin{array}{l} \left\{ \begin{array}{l} -u \mathrm{Ric}_g + \nabla^2_g u \\ \Delta_g u \end{array} \right. \quad \mbox{ in } M\setminus\Omega\\ \left\{ \begin{array}{l} g^\intercal \\ H_g \end{array} \right. \quad \mbox{ on } \Sigma. \end{array} \end{align*} where $\mathcal B(\Sigma)$ denotes the space of pairs $(\tau, \phi)$ where $\tau \in \C^{2,\alpha}(\Sigma)$ is a symmetric $(0, 2)$-tensor on $\Sigma$ and $\phi\in \C^{1,\alpha}(\Sigma)$ is a scalar-valued function on $\Sigma$. Define the \emph{gauged} operator \[ T^\mathsf{G}: {\mathcal M} (M\setminus\Omega) \longrightarrow \C^{0,\alpha}_{-q-2}(M\setminus\Omega)\times \C^{1,\alpha}(\Sigma) \times \mathcal{B}(\Sigma) \] and \begin{align}\label{rsv} T^{\mathsf{G}}(g, u)= \begin{array}{l} \left\{ \begin{array}{l} -u \mathrm{Ric}_{ g}+\nabla^2_{ g} { u}-u\mathcal{D}_g {\mathsf{G}}(g,u)\\ \Delta_{g} { u}-{\mathsf{G}}(g,u) (\nabla_g u) \end{array}\right. \quad {\rm in }~M\setminus \Omega \\ \left\{ \begin{array}{l} \mathsf{G}(g, u) \\ g^\intercal\\ H_g \end{array} \right.\quad \mbox{ on } \Sigma \end{array}. \end{align} Recall the notation $\mathcal D_g X = \frac{1}{2} L_X g$ from \eqref{eq:Lie}. Obviously the gauged operator $T^\mathsf{G}$ becomes the operator $T$ when the ``gauge'' term $\mathsf{G}(g, u)$ vanishes in $M\setminus\Omega$, i.e., $(g, u)$ satisfies the static-harmonic gauge. The following lemma relates our desired boundary value problem for $T$ to solving $T^\mathsf{G} (g, u)$. \begin{lemma}\label{rsv-to-sv} Let $(M, \bar{g}, \bar{u})$ be an asymptotically flat, static vacuum triple with $\bar{u}>0$. There is an open neighborhood ${\mathcal U}$ of $(\bar g,\bar u)$ in ${\mathcal M}(M\setminus \Omega)$ such that if $(g,u)\in{\mathcal U}$ and $T^\mathsf{G}(g, u) = (0, 0, 0, \tau, \phi)$, then $\mathsf{G}(g, u)=0$ in $M\setminus \Omega$ and thus $T(g, u)= (0, 0, \tau, \phi)$. \end{lemma} \begin{proof} In the following computations, the volume measure, geometric operators, as well as $\beta_g, \mathcal D_g$, are all computed with respect to $g$, and we omit the subscripts $g$ for better readability. Recall the integral identity \eqref{equation:cokernel1} says the following two terms are ${\mathcal L}^2$-orthogonal, for any $X\in \mathcal X(M\setminus\Omega)$, \begin{align*} S(g, u) &= (-u \mathrm{Ric}+\nabla^2_{ g} { u}, \Delta u)\\ \kappa_0(g, u, X) &= \big(2\beta^* X - u^{-1} X(u) g, \, -\Div X + u^{-1} X(u)\big) \end{align*} where we re-expressed $\kappa_0(g, u, X)$ from \eqref{eq:kappa} using the operator $\beta^* = \beta_g^*$, defined in \eqref{eq:Bianchi*}. Below we write the covector $\mathsf{G} =\mathsf{G}(g, u)$ for short. Since $(g, u)$ solves $T^\mathsf{G}(g, u)=0$, we can substitute $S(g, u)= \big( u \mathcal{D} \mathsf{G} , \mathsf{G} (\nabla u)\big)$ to get \begin{align*} 0&=\int_{M\setminus\Omega} \Big\langle S(g, u), \kappa_0 (g, u, X) \Big\rangle \, \dvol \\ &=\int_{M\setminus\Omega} \Big\langle\big(u \mathcal{D} \mathsf{G} , \mathsf{G} (\nabla u)\big), \big( 2 \beta^* X - u^{-1} X(u) g, \, -\Div X + u^{-1} X(u) \big) \Big \rangle \, \dvol. 
\end{align*} Applying integration by parts and varying among compactly supported $X$, we obtain that $\mathsf{G}$ weakly solves \begin{align*} 0&=2 \b \big( u\mathcal{D} \mathsf{G} ) -( \Div \mathsf{G}) du + d (\mathsf{G}(\nabla u) ) + u^{-1} \mathsf{G}(\nabla u) du\\ &=2u \b \mathcal{D} \mathsf{G}- L_{\mathsf{G}} g(\nabla_g u, \cdot )+d(\mathsf{G}(\nabla u) ) + u^{-1} \mathsf{G}(\nabla u) du\\ &= -u\Delta \mathsf{G} - u\, \mathrm{Ric} (\mathsf{G}, \cdot) + \nabla^2 u (\mathsf{G}, \cdot)- \nabla\mathsf{G}(\nabla u,\cdot) + u^{-1} \mathsf{G} (\nabla u) \, du\\ &= u \Gamma_{(g,u)}(\mathsf{G})- u\mathrm{Ric} (\mathsf{G}, \cdot)+ \nabla^2 u(\mathsf{G}, \cdot ) =:\hat \Gamma(\mathsf{G}), \end{align*} where we use $2 \b \big( u\mathcal{D} \mathsf{G} ) = 2u \b (\mathcal{D} \mathsf{G}) - L_{\mathsf{G}} g (\nabla u, \cdot) + (\Div \mathsf{G} )du $ by \eqref{eq:product} in the second line, $2u \b \mathcal{D} \mathsf{G} = -u\Delta \mathsf{G} - u\, \mathrm{Ric} (\mathsf{G})$ by \eqref{eq:laplace-beta} and a similar computation as \eqref{eq:com} in the third line, and the definition of the operator $\Gamma_{(g,u)}$ in \eqref{eq:Gamma} in the last line. Recall that $\Gamma_{(g, u)}$ is an isomorphism by Item~\eqref{item:0} in Lemma~\ref{PDE}. For $(g, u)$ sufficiently close to the static vacuum pair $(\bar{g}, \bar{u})$, we have $u>0$ and $-u\mathrm{Ric}_g +\nabla^2_g u$ small, so we conclude that the operator $\hat \Gamma$ is also an isomorphism. Together with the boundary condition $\mathsf{G} = 0$ on $\Sigma$, we conclude that $\mathsf{G}$ is identically zero in $M\setminus \Omega$. \end{proof} In Section~\ref{se:solution} below, we shall solve the gauged boundary value problem \eqref{rsv} near a static vacuum pair via Inverse Function Theorem. As preparation, let us state some basic properties of the linearized operator. Denote by $L$ and $L^\mathsf{G}$ the linearizations of $T$ and $T^{\mathsf{G}}$ at a static vacuum pair $(\bar g, \bar u)$, respectively. Explicitly, \begin{align}\label{eq:L} L(h, v)= \begin{array}{l} \left\{ \begin{array}{l} -\bar u \mathrm{Ric}'(h) + (\nabla^2)'(h) \bar u - v \mathrm{Ric} + \nabla^2 v\\ \Delta v + \big( \Delta'(h) \big)\bar u \end{array}\right. \quad \mbox{ in } M\setminus \Omega \\ \left\{ \begin{array}{l} h^\intercal\\ H'(h) \end{array}\right. \quad \mbox{ on } \Sigma \end{array} \end{align} \begin{align}\label{lsv} L^{\mathsf{G}}(h, v)= \begin{array}{l} \left\{ \begin{array}{l} -\bar u \mathrm{Ric}'(h) + (\nabla^2)'(h) \bar u - v \mathrm{Ric}+ \nabla^2 v-\bar u\mathcal{D} {\mathsf{G}}'(h,v)\\ \Delta v + \big( \Delta'(h) \big)\bar u-{\mathsf{G}}'(h,v) (\nabla \bar u) \end{array}\right.\quad \mbox{in } M\setminus \Omega\\ \left\{ \begin{array}{l} \mathsf{G}'(h,v) \\ h^\intercal\\ H'(h) \end{array}\right. \quad \mbox{ on } \Sigma \end{array} \end{align} where \begin{align*} &L:\C^{2,\alpha}_{-q}(M\setminus \Omega) \longrightarrow \C^{0,\alpha}_{-q-2}(M\setminus\Omega)\times \mathcal{B}(\Sigma)\\ &L^\mathsf{G}: \C^{2,\alpha}_{-q}(M\setminus \Omega) \longrightarrow \C^{0,\alpha}_{-q-2}(M\setminus\Omega)\times \C^{1,\alpha}(\Sigma) \times \mathcal{B}(\Sigma). \end{align*} Here and for the rest of this section, the geometric operators are all computed with respect to $\bar g$ and the linearizations are all taken at $(\bar g, \bar u)$. 
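As an illustrative consistency check (a side computation assuming the special Euclidean background $(\bar g, \bar u)=(g_{\mathbb E}, 1)$; it is not used elsewhere), we have $\mathrm{Ric}=0$, $\nabla \bar u=0$ and $\mathsf{G}'(h,v)=\b h + dv$, and a direct computation gives
\[ -\mathrm{Ric}'(h) - \mathcal D (\b h) = \tfrac{1}{2} \Delta h, \qquad \nabla^2 v - \mathcal D (dv) = 0, \]
so in this case the interior equations of $L^\mathsf{G}$ reduce exactly to $\big(\tfrac{1}{2}\Delta h,\, \Delta v\big)$, in agreement with \eqref{eq:S'} below with $E_1=E_2=0$.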
From the formulas in Section~\ref{se:formula}, it is direct to check that the first two equations of $L^\mathsf{G}(h, v)$ can be expressed as \begin{align}\label{eq:S'} \begin{split} -\bar u \mathrm{Ric}'(h) + (\nabla^2)'(h) \bar u - v \mathrm{Ric}+ \nabla^2 v-\bar u\mathcal{D} {\mathsf{G}}'(h,v)&=\tfrac{1}{2} \bar u \Delta h + E_1\\ \Delta v + \big( \Delta'(h) \big)\bar u-{\mathsf{G}}'(h,v) (\nabla \bar u)&=\Delta v+ E_2 \end{split} \end{align} where lower order terms $E_1, E_2$ are linear functions in $h, v, \nabla h, \nabla v$ and of the order $O^{0,\alpha}(|x|^{-2-2q})$ at infinity. The linear operators $L$ and $L^\mathsf{G}$ also share similar relations as Lemmas~\ref{le:diffeo} and \ref{rsv-to-sv} for the nonlinear operators $T$ and $T^\mathsf{G}$, which we summarize in the next lemma. \begin{lemma}\label{le:linear} Let $(M, \bar{g}, \bar{u})$ be a static vacuum triple with $\bar{u}>0$. Then the following holds: \begin{enumerate} \item Let $(h,v)\in \C^{2,\a}_{-q}(M\setminus \Omega)$ solve $L^\mathsf{G}(h, v) = (0, 0, 0, \tau ,\phi)$. Then $\mathsf{G}'(h,v)=0$ in $M\setminus \Omega$ and thus $L(h, v)=(0, 0, \tau, \phi)$.\label{it:gauge} \item For any $(h,v)\in \C^{2,\a}_{-q}(M\setminus \Omega)$, there is a vector field $X\in \mathcal X(M\setminus \Omega)\cap \C^{3,\a}_{1-q}(M\setminus \Omega)$ such that $\mathsf{G}' \big(h+L_X\bar g, v+X(\bar u)\big )=0$.\label{it:SH} \end{enumerate} \end{lemma} \begin{proof} The proof of Item~\eqref{it:gauge} proceeds similarly as the proof of Lemma~\ref{rsv-to-sv} by linearizing those identities at $(\bar{g}, \bar{u})$. For Item~\eqref{it:SH}, given $(h, v)$, note ${\mathsf{G}}'{(h,v)}= O^{1,\a}(|x|^{-q-1})$. By Item~\eqref{item:n} in Lemma \ref{PDE}, we let $X\in \C^{3,\a}_{1-q}(M\setminus \Omega)$ solving $\Gamma(X)=-\mathsf{G}'(h,v)$ in $M\setminus \Omega$ and $X=0$ on $\Sigma$. Recall \eqref{eq:gauge} that $\Gamma(X) = \mathsf{G}'(L_X \bar g, X(\bar u))$, and thus we get the desired $X$. \end{proof} One can proceed as in \cite[Lemma 4.6]{An-Huang:2021} (see also \cite[Proposition 3.1]{Anderson-Khuri:2013}) to show that the operator $L^\mathsf{G}$ is elliptic and Fredholm of index zero. In fact, the operator $L^\mathsf{G}$ here has exactly the same leading order terms as the special case considered in \cite{An-Huang:2021}, and thus the same proof applies verbatim. Also, recall that \begin{align*} \Ker L \supseteq \Big\{ \big(L_X \bar g, X(\bar u) \big):X\in \mathcal X (M\setminus \Omega) \Big\} \end{align*} where $(L_X \bar g, X(\bar u))$ arises from diffeomorphisms. Therefore, $\Ker L^\mathsf{G}$ always contains an $N$-dimensional subspace arising from the ``gauged'' space $\mathcal X^\mathsf{G} (M\setminus \Omega)$. We summarize those properties in the following lemma. \begin{lemma}\label{le:Fred} The operator \[ L^\mathsf{G}: \C^{2,\alpha}_{-q}(M\setminus \Omega)\longrightarrow \C^{0,\alpha}_{-q-2}(M\setminus \Omega)\times \C^{1,\alpha}(\Sigma) \times \mathcal B(\Sigma) \] is elliptic and Fredholm of index zero whose kernel $\Ker L^\mathsf{G}$ contains an $N$-dimensional subspace: \begin{align} \label{eq:set} \Ker L^\mathsf{G} \supseteq \Big\{ \big(L_X \bar g, X(\bar u) \big):X\in \mathcal X^\mathsf{G} (M\setminus \Omega) \Big\}. \end{align} \end{lemma} To close this section, we include a basic fact about analyticity of the kernel elements. It will be used in the proof of Theorem~\ref{th:trivial} below. \begin{corollary}\label{co:analytic} Let $(M, \bar g, \bar u)$ be a static vacuum triple with $\bar{u}>0$. 
Let $(h, v)$ solve $L(h, v)=0$ in $M\setminus \Omega$. Then the following holds: \begin{enumerate} \item There exists $X\in \mathcal X(M\setminus \Omega)\cap \C^{3,\alpha}_{-q}(M\setminus \Omega)$ such that $(\hat h, \hat v) = (h, v)+ (L_X \bar g, X(\bar u))$ satisfies $L^\mathsf{G}(\hat h, \hat v)=0$ in $M\setminus \Omega$ and $(\hat h, \hat v)$ is analytic in $\Int (M\setminus \Omega)$. \item Furthermore, if an open subset $\hat \Sigma\subset \Sigma $ is an analytic hypersurface, then $(\hat h, \hat v)$ is analytic up to $\hat \Sigma$. \end{enumerate} \end{corollary}
\begin{proof} The existence of $X$ is by Item~\eqref{it:SH} in Lemma~\ref{le:linear}. It remains to argue that $(\hat h, \hat v)$ is analytic. By \cite{Muller-zum-Hagen:1970} (or Theorem~\ref{th:analytic} below), there is an analytic atlas of $\Int M$ such that $(\bar g, \bar u)$ is analytic. Since $L^\mathsf{G}$ is elliptic by Lemma~\ref{le:Fred} and $L^\mathsf{G}(\hat h, \hat v)=0$, by elliptic regularity \cite[Theorem 6.6.1]{Morrey:1966}, we see that $(\hat h, \hat v)$ is analytic in $\Int (M\setminus \Omega)$. If the portion $\hat \Sigma$ of the boundary is analytic, then $(\hat h,\hat v)$ is analytic up to $\hat \Sigma$ by \cite[Theorem 6.7.6$^\prime$]{Morrey:1966}. \end{proof}
\section{Static regular and ``trivial'' kernel}\label{se:static-regular}
Throughout this section, we fix a background asymptotically flat, static vacuum triple $(M, \bar g, \bar u)$ with $\bar u>0$. All geometric quantities (e.g. covariant derivatives, curvatures) are computed with respect to $\bar g$ and linearizations are taken at $(\bar g, \bar u)$. We often skip labelling the subscripts in $\bar g$ or $(\bar g, \bar u)$ when the context is clear. We recall the linearized operator $L$ defined by \eqref{eq:L}.
The goal of this section is to prove Theorem~\ref{th:trivial}. It follows directly from Theorem~\ref{th:trivial'} and Corollary~\ref{co:static} below.
\begin{manualtheorem}{\ref{th:trivial}$^\prime$}\label{th:trivial'} Let $(M,\bar g,\bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$. Let $\hat \Sigma$ be a nonempty, connected, open subset of $\Sigma$ with $\pi_1(M\setminus \Omega, \hat \Sigma)=0$. Let $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ satisfy $L (h, v)=0$ with either one of the following conditions: \begin{enumerate} \item \label{co:1} \begin{align}\label{eq:static1'} v= 0, \quad \nu'(h) (\bar u) + \nu(v)=0, \quad A'(h)=0 \qquad \mbox{ on }\hat \Sigma. \end{align} \item \label{co:2} $\hat \Sigma$ is an analytic hypersurface and \begin{align}\label{eq:static3'} \begin{split} &A'(h)=0,\quad \big( \mathrm{Ric}'(h)\big)^\intercal =0,\quad \big(( \nabla^k_\nu \, \mathrm{Ric})'(h)\big)^\intercal =0 \qquad \mbox { on } \hat \Sigma\\ &\mbox{for all positive integers $k$}. \end{split} \end{align} \end{enumerate} Then $(h, v) = \big(L_X \bar g, X(\bar u) \big)$ for some $X\in \mathcal X(M\setminus \Omega)$. \end{manualtheorem}
Recall Definition~\ref{de:static} for the notions of static regular of type (I) and type (II). We get the following consequence of the above theorem, which completes the proof of Theorem~\ref{th:trivial}.
\begin{corollary}\label{co:static} Let $(M\setminus \Omega, \bar g, \bar u)$ be static vacuum with $\bar u>0$. \begin{enumerate} \item If $\Sigma$ is static regular in $(M\setminus \Omega, \bar g, \bar u)$ of either type (I) or type (II), then \begin{align}\label{eq:kernel2} \Ker L = \Big\{ \big(L_X \bar g, X(\bar u) \big):X\in \mathcal X (M\setminus \Omega) \Big\}.
\end{align} \item If \eqref{eq:kernel2} holds, then by \eqref{eq:set},
\[ \Ker L^\mathsf{G} = \Big\{ \big(L_X \bar g, X(\bar u) \big):X\in \mathcal X^\mathsf{G}(M\setminus \Omega) \Big\}, \]
and thus $\Dim \Ker L^\mathsf{G}= N$. \item \label{it:X} If $(h, v) = \big(L_X \bar g, X(\bar u)\big)$ for some $X\in\mathcal X (M\setminus \Omega)$, then both \eqref{eq:static1'} and \eqref{eq:static3'} hold everywhere on $\Sigma$, i.e. those boundary conditions \eqref{eq:static1'} and \eqref{eq:static3'} are ``gauge invariant''. \end{enumerate} \end{corollary}
\begin{proof} We just need to verify Item~\eqref{it:X}. Clearly, $X(\bar u)=0$ on $\Sigma$. We then verify $\big(( \nabla^k_\nu \, \mathrm{Ric})'(L_X \bar g)\big)^\intercal =0$; the remaining equalities can be verified similarly. For $e_a, e_b$ tangential to $\Sigma$,
\begin{align*} &\big((\nabla^k_\nu \,\mathrm{Ric})'(L_X \bar g) \big) (e_a, e_b)\\ &= \big( L_X (\nabla^k_\nu \, \mathrm{Ric} )\big)(e_a, e_b)\\ & = \big ( \nabla_X (\nabla^k_\nu \, \mathrm{Ric})\big)(e_a, e_b) + \big(\nabla^k_\nu \, \mathrm{Ric}\big)(\nabla_{e_a} X, e_b) + \big(\nabla^k_\nu \, \mathrm{Ric}\big)({e_a} , \nabla_{e_b} X)\\ &=0, \end{align*}
where the last equality holds because $X$ vanishes identically on $\Sigma$, and hence both $X$ and its tangential derivatives $\nabla_{e_a} X$ vanish along $\Sigma$. \end{proof}
We shall refer to Condition~\eqref{co:1} in Theorem~\ref{th:trivial'} as the \emph{Cauchy boundary condition} and to Condition~\eqref{co:2} as the \emph{infinite-order boundary condition}. We prove Theorem~\ref{th:trivial'} for the Cauchy boundary condition and the infinite-order boundary condition respectively in Section~\ref{se:uniqueness} and Section~\ref{se:uniqueness2} below. In both sections, we observe that to show $(h, v) = (L_X \bar g, X(\bar u))$, it suffices to show that $h$ by itself is of the form $L_X\bar g$.
\begin{lemma}\label{le:trivial-h} Let $(h,v) \in \C^{2,\alpha}_{-q}(M\setminus \Omega)$. Suppose $h = L_X \bar g$ for some locally $\C^3$ vector field $X$ in $M\setminus \Omega$. Then the following holds. \begin{enumerate} \item $X-Z\in \C^{3,\alpha}_{1-q}(M\setminus \Omega)$ where $Z\in \mathcal Z$. \label{it:asymptotics} \item If $h^\intercal=0, H'(h)=0$ on $\Sigma$ and $X=0$ in an open subset of $\Sigma$, then $X\equiv 0$ on $\Sigma$ and thus $X\in \mathcal X(M\setminus \Omega)$. \label{it:boundary} \item If $S'(h, v)=0$ in $M\setminus \Omega$, then $v = X(\bar u)$ in $M\setminus \Omega$. \label{it:v} \end{enumerate} Consequently, if $L(h, v)=0$ and $h=L_X\bar g$ for some vector field $X\in \C^{3,\alpha}_{\mathrm{loc}}(M\setminus \Omega)$ with $X=0$ in an open subset of $\Sigma$, then $X\in \mathcal X(M\setminus\Omega)$ and $(h, v) = (L_X \bar g, X(\bar u))$. \end{lemma}
\begin{remark} It is clear in the following proof that the assumption $X=0$ in an open subset of $\Sigma$ in Item~\eqref{it:boundary} can be replaced by the weaker assumption that $X$, along $\Sigma$, vanishes to infinite order at some point $p\in \Sigma$. \end{remark}
\begin{proof} To prove Item~\eqref{it:asymptotics}, we first recall \eqref{eq:derivativeX}:
\begin{align*} X_{i;jk}=- R_{\ell kj i} X^\ell + \tfrac{1}{2} (h_{ij;k} + h_{ik;j} - h_{jk;i}). \end{align*}
In an asymptotically flat end, let $\gamma(t)$, $t\in [1,\infty)$, be a radial geodesic ray, parametrized by arc length, that goes to infinity. Note that by asymptotic flatness, the parameter $t$ and $|x|$ are comparable, i.e. $C^{-1} |x| \le t \le C|x|$ for some positive constant $C$.
Then the previous identity implies that, along $\gamma(t)$, \[ \frac{d^2}{dt^2} X(\gamma(t)) = B(t) X(\gamma(t)) + D(t) \] where the coefficient matrix $B(t)$ is the restriction of the curvature tensor $\mathrm{Rm}(\gamma(t))$ and $D(t)$ is the restriction $\nabla h(\gamma(t))$. Consequently, $B(t) = O(t^{-q-2}), D(t)=O(t^{-q-1})$. By elementary ODE estimates, we have $|X| + t|X'|\le Ct$ for some constant $C$. (See, for example, \cite[Lemma B.3]{Huang-Martin-Miao:2018}, where the homogeneous case $B=0$ was proved, but a similar argument extends to our inhomogeneous case here.) By varying geodesic rays, we obtain $X= O^{3,\alpha}(|x|)$. Together with the elliptic equation $\Delta X + \mathrm{Ric} (X, \cdot) = - \beta h \in \C^{1,\alpha}_{-q-1} (M\setminus \Omega)$ and harmonic expansion, it implies that \[ X_i -c_{ij} x_j \in \C^{3,\alpha}_{1-q}(M\setminus \Omega) \] for some constants $c_{ij}$. Using that $L_X \bar g = h= O(|x|^{-q})$, the leading term of $X$ must correspond to a rotation vector (or zero). This gives the desired asymptotics. To prove Item~\eqref{it:boundary}, we decompose $X=\eta\nu+X^\intercal$ on $\Sigma$ where $X^\intercal$ is tangent to~$\Sigma$. Recall \begin{align}\label{eq:vector} \begin{split} (L_X \bar g)^\intercal &= 2\eta A + L_{X^\intercal} \bar g^\intercal\\ H'(L_X\bar g) &=-\Delta_\Sigma \eta- (|A|^2 + \mathrm{Ric}(\nu, \nu) )\eta + X^\intercal (H). \end{split} \end{align} The assumptions $(L_X g)^\intercal =0$ and $H'(L_X \bar g)=0$ imply that $\eta, X^\intercal$ satisfy a linear $2$nd-order elliptic system on $\Sigma$: \begin{align*} \Delta_\Sigma X^\intercal + \mathrm{Ric}_\Sigma(X^\intercal,\cdot) +2 \beta_\Sigma (\eta A)&=0\\ \Delta_{\Sigma}\eta+\big(|A|^2+\mathrm{Ric}(\nu,\nu)\big)\eta-X^\intercal(H)&=0 \end{align*} where for the first equation we use $\beta_{\Sigma}( L_{X^\intercal}\bar g^\intercal +2\eta A )=0$ and $\beta_{\Sigma}$ denote the \emph{tangential} Bianchi operator, i.e., $\beta_{\Sigma} \tau = - \Div_{\Sigma}\tau + \tfrac{1}{2} d_\Sigma (\tr_{\Sigma} \tau )$. By unique continuation, both $\eta, X^\intercal$ are identically zero on $\Sigma$. To prove Item~\eqref{it:v}, using $h=L_X \bar g$ and noting $S'\big (L_X \bar g, X(\bar u)\big ) =0 $, we obtain \[ 0 = S'( h , v) - S'\big (L_X \bar g, X(\bar u)\big ) = S'(0, v - X(\bar u))=0 \quad \mbox{ in } M\setminus \Omega. \] Let $f := v-X(\bar u)$. The previous identity implies $-f \mathrm{Ric} + \nabla^2 f=0$ in $M\setminus \Omega$. Noting $f(x)\to 0$ at infinity because both $v, X(\bar u)\to 0$ (by Item~\eqref{it:asymptotics}), we can conclude $f\equiv 0$ (see, for example, \cite[Proposition B.4]{Huang-Martin-Miao:2018}). \end{proof} \subsection{Cauchy boundary condition}\label{se:uniqueness} In this section, we assume $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ satisfies $L(h, v)=0$ and \begin{align} \label{eq:Cauchy-all} v= 0,\quad \nu'(h) (\bar u) + \nu(v)=0, \quad A'(h)=0 \quad \mbox{ on } \hat \Sigma, \end{align} where $\hat \Sigma$ is a connected open subset of $\Sigma$ satisfying $\pi_1(M\setminus\Omega, \hat \Sigma)=0$. Since $L(h, v)=0$ implies $h^\intercal = 0, H'(h)=0$ on~$\Sigma$, all those boundary conditions together are referred to as the \emph{Cauchy boundary condition}. 
The reason is that if $h$ satisfies the geodesic gauge, then $h^\intercal=0, A'(h)=0$ imply \begin{align}\label{eq:Cauchy1} h=0, \quad \nabla h=0 \quad \mbox{ on } \hat \Sigma \end{align} while $v= 0, \nu'(h) (\bar u) + \nu(v)=0$ imply \begin{align}\label{eq:Cauchy2} v=0, \quad \nu(v)=0\quad \mbox{ on }\hat \Sigma. \end{align} Since $(h, v)$ satisfies a system of $2$nd-order partial differential equations, the conditions \eqref{eq:Cauchy1} and \eqref{eq:Cauchy2} are precisely the classical Cauchy boundary condition. However, Theorem~\ref{th:trivial'} does not follow from the classical theorem on uniqueness because the system $L(h, v)$ is not elliptic under the geodesic gauge. We outline the proof below. We first show that $h$ is ``locally trivial'' in the sense that $h=L_X \bar g$ in some open subset of $M$, achieved by suitably extending $(h, v)$ across $\hat\Sigma$. Then we would like to show that $h$ must be globally trivial. In general, a locally trivial $h$ need not be globally trivial, but here we know $h$ is analytic (after changing to the static-harmonic gauge), so we can use Theorem~\ref{th:extension}. To carry out the argument, it is important to work with the appropriate gauge at each step of the proof. \begin{proof}[Proof of Theorem~\ref{th:trivial'} (under Cauchy boundary condition)] We without loss of generality assume that $(h, v)$ satisfies the geodesic gauge, and thus we have \begin{align}\label{eq:Cauchy} h=0, \quad \nabla h=0,\quad v=0, \quad \nu(v)=0\quad \mbox{ on }\hat \Sigma \end{align} as discussed earlier. We then extend $(h, v)$ by $(0, 0)$ across $\hat \Sigma$ into some small open subset $U\subset \Omega$ and $\overline U\cap (M\setminus\Omega)$ is exactly the closure of $\hat\Sigma$. Denote the $\C^1$ extension pair by $(\hat h, \hat v)$: \begin{align}\label{eq:hat-h} (\hat h, \hat v) = \left\{ \begin{array}{ll} (h, v) & \mbox{ in } M\setminus \Omega \\ (0,0) & \mbox{ in } \overline U \end{array} \right.. \end{align} Here, the open subset $U$ is chosen sufficiently small such that the ``extended'' manifold \[ \hat M = (M\setminus \Omega) \cup \overline U \] has a smooth embedded hypersurface boundary $\partial \hat M$ and $\pi_1(\hat M, U)=0$ based on the assumption $\pi_1( M\setminus \Omega, \hat \Sigma) = 0$. Using the Green-type identity~\eqref{equation:Green}, we verify that $(\hat h, \hat v)$ solves $S'(\hat h, \hat v)=0$ in $\hat M$ weakly, which is equivalent to showing that $P(\hat h, \hat v)=0$ weakly: For $(k, w) \in \C^{\infty}_c(\hat M)$, we have \begin{align*} \int_{\hat M} (\hat h, \hat v) \cdot P(k, w) \dvol &= \int_{ M\setminus \Omega } (\hat h, \hat v) \cdot P(k, w) \dvol\\ &=\int_{M\setminus \Omega} P(\hat h, \hat v) \cdot (k, w) \dvol \\ &\quad +\int_{\hat \Sigma} Q(h, v)\cdot \big(k^\intercal, H'(k)\big) \da- \int_{\hat \Sigma} Q(k,w)\cdot \big(h^\intercal, H'(h)\big)\, \da\\ &=\int_{M\setminus \Omega} P(\hat h, \hat v) \cdot (k, w) \dvol \end{align*} where the boundary terms vanish because \eqref{eq:Cauchy} (also note that $(k,w)$ on $\Sigma$ is supported in $\hat \Sigma$). Then we solve for $Y\in \C^{1,\alpha}_{-q}(\hat M)$ with $Y=0$ on $\partial \hat M$ such that \begin{align}\label{eq:tilde-h} (\tilde h, \tilde v):= (\hat h, \hat v) + \big(L_Y \bar g, Y(\bar u) \big) \end{align} weakly solves the static-harmonic gauge $\mathsf{G}(\tilde h, \tilde v)=0$ in $\hat M$, and thus $(\tilde h, \tilde v)\in \C^{2,\alpha}_{-q}(\hat M)$ by elliptic regularity. 
By Corollary~\ref{co:analytic}, $(\tilde h, \tilde v)$ is analytic in $\Int \hat M$, and note $(\tilde h, \tilde v)= (L_Y \bar g, Y ( \bar u))$ in $U$. Applying Theorem~\ref{th:extension} to the analytic manifold $(\hat M, \bar g)$ and the analytic tensor $\tilde h$, the restriction $Y|_{U}$ extends to a vector field $\tilde Y$ in $\hat M$ that is $\tilde h$-Killing in $\hat M$ with $\tilde Y = Y$ in $U$. Denote $X= \tilde Y - Y$ in $\hat M$. Then $X$ is $\hat h$-Killing in $\hat M$ by \eqref{eq:tilde-h}. Thus, $X$ is $h$-Killing in $M\setminus \Omega$ by \eqref{eq:hat-h} with $X=0$ on an open subset of $\hat \Sigma\subset \Sigma$. By Lemma~\ref{le:trivial-h}, we conclude that $X\in \mathcal X(M\setminus \Omega)$ and $(h, v) = (L_X \bar g, X(\bar u))$. \end{proof}
\subsubsection{Relaxed Cauchy boundary condition}\label{sec:relaxed}
In this subsection, we note that the condition $\nu'(h) (\bar u) + \nu(v)=0$ in \eqref{eq:Cauchy-all} can be replaced or removed under mild assumptions, as noted in Item~\eqref{it:nu} in Remark~\ref{re:intro}. For the rest of this section, we denote
\[ (\nu(u))'=\nu'(h) (\bar u) + \nu(v) \]
since the right-hand side is exactly the linearization of $\nu(u)$ on $\Sigma$.
We begin with general properties. It is well-known that a static vacuum metric $\bar g$ has a Schwarzschild expansion in the harmonic coordinates~\cite{Bunting-Masood-ul-Alam:1987}. We show that a similar expansion holds for static vacuum deformations.
\begin{lemma}\label{le:v-mass} Let $(M, \bar g, \bar u)$ be a static vacuum triple with $\bar u>0$. Let $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ be a static vacuum deformation. \begin{enumerate} \item \label{it:harmonic} Suppose $\mathsf{G}'(h, v)=0$. Then $(h, v)$ has the following expansions near infinity:
\begin{align*} h_{ij} (x)&= \tfrac{2}{n-2} c \delta_{ij} |x|^{2-n} + O^{2,\alpha}(|x|^{\max\{-2q, 1-n \}})\\ v(x) &= -c |x|^{2-n} + O^{2,\alpha}(|x|^{\max\{-2q, 1-n \}}) \end{align*}
for some real number $c$. \item \label{it:v-mass} Suppose $h^\intercal =0$ and $H'(h)=0$ on $\Sigma$. Then $v(x)= O^{2,\alpha}(|x|^{\max\{-2q, 1-n \}})$. (Note that $\mathsf{G}'(h, v)=0$ is not assumed here.) \end{enumerate} \end{lemma}
\begin{proof} We prove Item~\eqref{it:harmonic}. By \eqref{eq:S'}, the assumptions $S'(h, v)=0$ and $\mathsf{G}'(h, v)=0$ imply that $(h, v)$ satisfies $\Delta h , \Delta v\in O^{0,\alpha}(|x|^{-2-2q})$. Thus, $h$ and $v$ have the harmonic expansions for some real numbers $c_{ij}$ and $c$:
\begin{align*} h_{ij} &= \tfrac{2}{n-2} c_{ij} |x|^{2-n} + O^{2,\alpha}(|x|^{\max\{-2q, 1-n \}})\\ v &= -c |x|^{2-n} + O^{2,\alpha}(|x|^{\max\{-2q, 1-n \}}). \end{align*}
To see that $c_{ij}=c\delta_{ij}$, we compute
\begin{align*} 0&=(\mathsf{G}'(h, v))_j=\sum_i \left( -h_{ij,i} + \tfrac{1}{2} h_{ii,j}\right) + v_{,j} + o(|x|^{1-n})\\ &=\Big(2\sum_i c_{ij}x_i - \sum_i c_{ii} x_j + (n-2) cx_j\Big)|x|^{-n}+ o(|x|^{1-n}). \end{align*}
A direct computation then gives $c_{ij}=c\delta_{ij}$.
For Item~\eqref{it:v-mass}, we use Lemma~\ref{le:linear} to find $X\in \mathcal X(M\setminus \Omega)\cap \C^{3,\alpha}_{1-q}(M\setminus \Omega)$ such that $\mathsf{G}'\big(h+L_X\bar g, v + X(\bar u) \big)=0$ and apply Item~\eqref{it:harmonic} to obtain
\begin{align*} (h + L_X\bar g)_{ij} &= \tfrac{2}{n-2} c\delta _{ij} |x|^{2-n} + O^{2,\alpha}(|x|^{\max\{-2q, 1-n \}})\\ v + X(\bar u) &= -c |x|^{2-n} + O^{2,\alpha}(|x|^{\max\{-2q, 1-n \}}) \end{align*}
for some real number $c$.
Using the boundary condition and applying Lemma~\ref{le:mass} to $\big(h+L_X\bar g, v + X(\bar u) \big)$, we compute $c=0$. Since $X(\bar u)$ is of the order $O(|x|^{-2q})$, the result follows. \end{proof}
In the next lemma, we derive the ``hidden'' boundary conditions for $(h,v)$ satisfying $L(h, v)=0$.
\begin{lemma}[Hidden boundary conditions] Let $\hat \Sigma$ be an open subset of $\Sigma$. Suppose $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ satisfies
\begin{align*} &\quad S' (h, v)=0 \quad \mbox { in a collar neighborhood $U$ of $\hat\Sigma$ in } M\setminus \Omega\\ &\left\{ \begin{array}{l} h^\intercal =0\\ H'(h)=0\end{array} \right. \quad \mbox { on } \hat \Sigma. \end{align*}
Then the following equations hold on $\hat \Sigma$:
\begin{align} \Delta_\Sigma v + ( \nu (u))'H + v\mathrm{Ric} (\nu, \nu) - \bar u A'(h) \cdot A&=0\label{eq:hidden1}\\ A(\nabla^\Sigma v, \cdot ) + A' (h) (\nabla^\Sigma \bar u, \cdot) + \bar u \Div_\Sigma A'(h)- d_\Sigma \big( ( \nu (u))'\big)+ v \mathrm{Ric} (\nu, \cdot) &=0. \label{eq:hidden2} \end{align}
Consequently, \begin{enumerate} \item \label{item:subset} Suppose $v=0$ and $A'(h)=0$ on $ \hat \Sigma$. Then $(\nu(u))'H\equiv 0$ on $\hat \Sigma$. \item \label{item:v} Suppose $\hat \Sigma=\Sigma$, $v=0, A'(h)=0$ on $\Sigma$, and $S'(h, v)=0$ everywhere in $M\setminus \Omega$. Then $(\nu(u))'\equiv 0$ on $\Sigma$. \end{enumerate} \end{lemma}
\begin{proof} Let $X$ be a vector field supported in $U$ whose restriction to $\Sigma$ is supported in $\hat \Sigma$. Apply Corollary~\ref{co:Green}, and let $(k, w) = (L_X \bar g, X(\bar u))$ there. Since both $(h, v)$ and $(k, w)$ are static vacuum deformations in $U$ and $h$ satisfies $(h^\intercal, H'(h))=0$ on $\hat \Sigma$, we get
\[ \int_{ \Sigma} \Big\langle Q(h, v) , \big((L_X\bar g)^\intercal, H'(L_X\bar g)\big) \Big\rangle\, \da =0. \]
Using again the boundary condition on $h$, we get
\begin{align*} Q(h, v) = \Big( v A + \bar u A' (h) - (\nu(u))' \bar g^\intercal, \, 2v\Big)\quad \mbox{ on } \hat \Sigma. \end{align*}
Write $X = \eta \nu + X^\intercal$ for a scalar function $\eta$ and $X^\intercal$ tangent to $\Sigma$ and recall~\eqref{eq:vector}:
\begin{align*} (L_X \bar g)^\intercal &= 2 \eta A + L_{X^\intercal} \bar g^\intercal\\ H'(L_X \bar g) &= - \Delta_\Sigma\eta - \big( |A|^2 + \mathrm{Ric}(\nu, \nu) \big ) \eta + X^\intercal (H). \end{align*}
By choosing $X^\intercal\equiv 0$, we get
\begin{align*} 0&=\int_{\Sigma} \bigg\langle \Big ( vA + \bar u A' (h) -(\nu(u))' \bar g^\intercal, \, 2v\Big), \Big(2 \eta A , - \Delta_\Sigma\eta -\big( |A|^2 + \mathrm{Ric}(\nu, \nu) \big) \eta \Big) \bigg \rangle \, \da\\ &=\int_{\Sigma} 2\eta\Big (v|A|^2 + \bar u A'(h) \cdot A- (\nu(u))' H - \Delta_{\Sigma} v - (|A|^2 + \mathrm{Ric}(\nu, \nu) )v \Big)\, \da. \end{align*}
Simplifying the integrand and choosing arbitrary $\eta$ supported in $\hat \Sigma$, we prove \eqref{eq:hidden1}. For \eqref{eq:hidden2}, we let $\eta\equiv 0$ and obtain
\begin{align*} 0&= \int_{\Sigma} \bigg\langle \Big ( vA + \bar u A' (h) -(\nu(u))' g^\intercal, \, 2v\Big), \Big( L_{X^\intercal} \bar g^\intercal, X^\intercal (H) \Big) \bigg \rangle \, \da.
\end{align*} Applying integration by part and letting $X^\intercal$ vary, we get, on $\hat \Sigma$, \begin{align*} 0&=A(\nabla^\Sigma v, \cdot) + v \Div_\Sigma A + A' (h) (\nabla^\Sigma \bar u, \cdot) + \bar u \Div_\Sigma A'(h)- d_\Sigma (\nu(u))'- vdH\\ &=A(\nabla^\Sigma v, \cdot) + A' (h) (\nabla^\Sigma \bar u, \cdot) + \bar u \Div_\Sigma A'(h)- d_\Sigma (\nu(u))'+ v \mathrm{Ric}(\nu, \cdot) \end{align*} where we use the Codazzi equation. Item~\eqref{item:subset} follows directly from \eqref{eq:hidden1} by letting $v=0$ and $A'(h)=0$. For Item~\eqref{item:v}, we have $d_\Sigma (\nu (u))'=0$ and hence $(\nu(u))'= c$ on $\Sigma$ for some real number $c$ from \eqref{eq:hidden2}. The assumption $S'(h, v)=0$ in $M\setminus \Omega$ says that \[ \Delta v + \big( \Delta'(h) \big)\bar u=0 \quad \mbox{ in } M\setminus \Omega. \] We integrate and apply divergence theorem (recall $\nu$ on $\Sigma$ points to infinity) and use Lemma~\ref{le:W} below to get \begin{align*} 0&=\int_{M\setminus \Omega}\Big( \Delta v + \big( \Delta'(h) \big)\bar u\Big) \, \dvol\\ &= -\int_{\Sigma}\big( \nu (v) + \left( \nu'(h)\right) \bar u\big) \, \da = -\int_\Sigma (\nu(u))' \, \da, \end{align*} which implies $(\nu(u))'= 0$ on $\Sigma$, where we use $\int_{S_\infty} \nu(v)=0$ by Item~\eqref{it:v-mass} in Lemma~\ref{le:v-mass} and the integral formula~\eqref{eq:it} \[ \int_{M\setminus \Omega} (\Delta'(h))\bar u \, \dvol = -\int_\Sigma \left( \nu'(h)\right) \bar u\, \da \] proven in the next lemma. \end{proof} Recall the formulas \eqref{eq:laplace} and \eqref{equation:normal}: for a scalar function $f$, \begin{align*} \big( \Delta'(h) \big)f &= - h^{ij}f_{;ij} +\bar g^{ij} \left(-(\Div h)_i + \tfrac{1}{2} (\tr h)_{;i} \right) f_{,j}\\ \nu'(h) &= - \tfrac{1}{2} h(\nu, \nu) \nu - \bar g^{ab} \omega(e_a) e_b \end{align*} where $\omega(\cdot) = h(\nu, \cdot)$ and the indices are raised by $\bar g$. \begin{lemma}\label{le:W} Let $h\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ be a symmetric $(0,2)$-tensor. Given a scalar function $f$, define the covector $W_f = -h(\nabla f, \cdot) + \frac{1}{2} (df) \tr h$. Then \begin{align*} \big( \Delta'(h) \big)f + \tfrac{1}{2} (\Delta f) \tr h&= \Div W_f \quad \mbox{ in } M\setminus \Omega, \end{align*} and if we further assume that $\tr_\Sigma h=0$ on $\Sigma$, then \begin{align*} \left( \nu'(h)\right) f &=W_f(\nu) \quad \mbox{ on } \Sigma. \end{align*} Consequently, if $f$ is harmonic and is of the order $O^{1}(|x|^{-q})$, then \begin{align}\label{eq:it} \int_{M\setminus \Omega} \big( \Delta'(h) \big) f \, \dvol = - \int_\Sigma \left( \nu'(h)\right) f \, d\sigma. \end{align} \end{lemma} \begin{proof} Compute at a point with respect to a normal coordinates of~$\bar g$: \begin{align*} \Div W_f = (W_f)_{i;i} &= - f_{;ji} h_{ij} -f_{j} h_{ij;i} + \tfrac{1}{2} (\Delta f) \tr h + \tfrac{1}{2} f_{;i} (\tr h)_i. \end{align*} It gives the first identity. For the second identity, \begin{align*} W_f( \nu) &= -h(\nabla f, \nu) + \tfrac{1}{2} \nu (f) \tr h\\ &=-h(\nu, \nu) \nu(f) - h(\nabla^\Sigma f, \nu) + \tfrac{1}{2} \nu (f)\, h (\nu, \nu)\\ &=-\tfrac{1}{2} h(\nu, \nu) \nu(f)- h(\nabla^\Sigma f, \nu). \end{align*} If $f$ is harmonic, we have \begin{align*} \int_{M\setminus \Omega} \big( \Delta'(h) \big)f \, \dvol &= \int_{M\setminus \Omega} \Div W_{f} \, \dvol\\ &=- \int_\Sigma W_{f}(\nu)\, \da\\ &=-\int_\Sigma \left( \nu'(h)\right) f\, \da \end{align*} where the boundary term at infinity vanishes because $\nabla f = O(|x|^{-q-1})$ and thus $W_{f}\in O(|x|^{-2q-1})$. 
\end{proof} \subsection{Infinite-order boundary condition} \label{se:uniqueness2} In this section, we assume $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus\Omega)$ and $(h, v)$ satisfies \[ A'(h)=0,\quad \big( \mathrm{Ric}'(h)\big)^\intercal =0,\quad \big(( \nabla^k_\nu \, \mathrm{Ric} )'(h)\big)^\intercal =0 \quad \mbox { on }\quad \hat \Sigma \] for all positive integers $k$, where $\hat\Sigma$ satisfies $\pi_1(M\setminus \Omega, \hat \Sigma)=0$ and $\hat \Sigma$ is an analytic hypersurface. It is worth noting that the above boundary conditions are imposed only on $h$ (not on $v$ at all). The following lemma justifies the term ``infinite-order'' boundary condition. Recall that a symmetric $(0,2)$-tensor $h$ is said to satisfy the geodesic gauge in a collar neighborhood of $\hat \Sigma$ if $h(\nu, \cdot) = 0$ in the collar neighborhood, where $\nu$ is the parallel extension of the unit normal to $\hat \Sigma$. \begin{proposition}\label{pr:infinite} Fix an integer $\ell\ge 0$. Suppose $h \in \C^{\ell+2,\alpha}(M\setminus \Omega)$ satisfies the geodesic gauge in a collar neighborhood of $\hat \Sigma$. Then the following two boundary conditions are equivalent: \begin{enumerate} \item \label{it:infinite1} For all $k=0, 1,\dots, \ell$, \[ h^\intercal =0, \quad A'(h)=0, \quad \big( (\nabla^k_\nu \, \mathrm{Ric} )'(h)\big)^\intercal=0\qquad\mbox{ on } \hat \Sigma. \] Here, we interpret the $0$-th covariant derivative as $ (\nabla^0_\nu \, \mathrm{Ric} )'(h)=\mathrm{Ric}'(h)$. \item \label{it:infinite2} For all $k=0, 1, \dots, \ell$, \[ h=0, \quad \nabla h=0, \quad \nabla^{k+2} h=0 \qquad \mbox{ on }\hat \Sigma. \] \end{enumerate} \end{proposition} \begin{proof} It is clear that Item~\eqref{it:infinite2} implies Item~\eqref{it:infinite1} by the formulas of $A'(h)$ and $\mathrm{Ric}'(h)$ in \eqref{eq:sff} and \eqref{equation:Ricci}. The rest of the proof is devoted to proving that Item~\eqref{it:infinite1} implies Item~\eqref{it:infinite2}. We compute with respect to an orthonormal frame $\{ e_0, e_1,\dots, e_{n-1}\}$ where $e_0 =\nu$ is parallel along itself. In the following computations, we let $i, j, k_1,\dots, k_{\ell}, k_{\ell +1} \in \{ 0, 1, \dots, n-1\}$ denote all directions and let $a, b, c\in \{ 1,\dots, n-1\}$ denote only the tangential directions. Since $h$ satisfies the geodesic gauge in a collar neighborhood of $\Sigma$, $h_{0i}$ and any covariant derivatives of $h_{0i}$ vanish in the neighborhood. As already noted in \eqref{eq:Cauchy1}, from $h^\intercal=0$, $A'(h)=0$, and the geodesic gauge, we have $h=0,\nabla h=0$ on $\Sigma$. We will argue by induction on $\ell$. First, for $\ell=0$, we have $h=0$, $\nabla h=0$, $\big( (\mathrm{Ric}')(h)\big)^\intercal =0$ on $\hat \Sigma$. We would like to show that $\nabla^2 h=0$ on $\hat \Sigma$. By the geodesic gauge, we automatically have $\nabla^2 h (\nu, e_i)=0$. Since $\nabla h=0$, we have $\nabla_{e_a}\nabla h =0$ and $\nabla_\nu\nabla_{e_a}h=\nabla_{e_a}\nabla_\nu h +\mathrm{Rm}\circ \nabla h =0$. To summarize, when expressed in the local frame, we already have \[ h_{0j; k_1k_2}=0,\quad h_{ij; k_1 a}=0, \quad h_{ij; a k_1 }=0\quad \mbox{ on } \hat \Sigma. \] Therefore, it remains to show that $(\nabla_\nu \nabla_\nu h)^\intercal =0$; namely $h_{ab;00}=0$.
Recall the formula of $\mathrm{Ric}'(h)$ from \eqref{equation:Ricci}, restricted to the tangential vectors $e_a, e_b$: \begin{align}\label{eq:Ricci2} \begin{split} \mathrm{Ric}'(h)_{ab} & = -\tfrac{1}{2} h_{ab;ii} + \tfrac{1}{2} (h_{ai; ib} + h_{bi; ia} ) - \tfrac{1}{2} h_{ii; ab} \\ &\quad + \tfrac{1}{2} (R_{ai} h_{ib} + R_{bi} h_{ia} ) - R_{aijb} h_{ij}. \end{split} \end{align} Since $0=\mathrm{Ric}'(h)_{ab} = -\tfrac{1}{2} h_{ab;00}$, we obtain $\nabla^2 h=0$ on $\hat \Sigma$. We next proceed by induction for $\ell>0$. Suppose that Item~\eqref{it:infinite1} holds for $k=0,1,\dots, \ell$ and that we already have $h=0, \nabla h=0, \nabla^{k+2} h =0$ on $\hat \Sigma$ for $k=0, 1, \dots, \ell -1$. We would like to derive $\nabla^{\ell+2} h =0$ on $\hat \Sigma$. We compute $\nabla^{\ell+2}h$ in the following three cases according to the location of $\nu$: \begin{align*} (\nabla^{\ell+2}_{\tiny\mbox{arbitrary}} h) (\nu, e_i)&=h_{0i; k_1 \cdots k_{\ell+1} } = 0 \quad \quad \mbox{ (by geodesic gauge)}\\ (\nabla_{\tiny\mbox{tangential}} \nabla^{\ell+1}_{\tiny\mbox{arbitrary}} h) (e_a, e_b) &=h_{ab;k_1k_2\cdots k_{\ell}c }\\ &= \big(h_{ab;k_1k_2\cdots k_{\ell}}\big)_{,c} +(\mbox{Christoffel symbols}) \circ \nabla^{\ell+1} h \\ & =0 \quad \quad (\mbox{by the inductive assumption } \nabla^{\ell+1} h=0 \mbox{ on } \hat \Sigma). \end{align*} It remains to show that $(\nabla_{\nu} \nabla^{\ell+1}_{\tiny\mbox{arbitrary}} h) (e_a, e_b)=0$. If any of those `arbitrary' covariant derivatives is tangential, we can as before commute the derivatives to express it in the form $(\nabla_{\tiny\mbox{tangential}} \nabla^{\ell+1}_{\tiny\mbox{arbitrary}} h) (e_a, e_b)$ plus the contraction between curvature tensors of $\bar g$ and derivatives of $h$ of order~$\ell+1$ or less. Therefore, it suffices to show \begin{align}\label{eq:h-nu} (\nabla^{\ell+2}_\nu h)(e_a, e_b)=0, \quad \mbox{ or in the local frame } h_{ab;\tiny\underbrace{0\cdots 0}_{\ell+2-\mbox{times}}} = 0. \end{align} To prove \eqref{eq:h-nu}, note that from \eqref{eq:Ricci2}, we have exactly \[ \big(\nabla^{\ell}_\nu (\mathrm{Ric}'(h)) \big)(e_a, e_b) = - \tfrac{1}{2} h_{ab;\tiny\underbrace{0\cdots 0}_{\ell+2-\mbox{times}}} \quad \mbox{ on } \hat \Sigma \] because all other terms vanish there. We \emph{claim} that\footnote{The notations unfortunately become overloaded. Recall that $(\nabla^\ell_\nu \mathrm{Ric} )'(h) $ denotes the linearized $\nabla^\ell_\nu \mathrm{Ric}$, and note $\nabla^{\ell}_\nu (\mathrm{Ric}'(h))$ represents the $\ell$th covariant derivative in $\nu$ of $\mathrm{Ric}'(h)$. The claimed identity says that ``taking the $\nu$-covariant derivative'' and ``linearizing'' commute under the assumptions on $h$. } for each $i=0, 1, \dots, \ell$: \begin{align}\label{eq:commutative} (\nabla^\ell_\nu \mathrm{Ric} )'(h) = \nabla^{\ell-i}_\nu \big((\nabla_\nu^i \mathrm{Ric})'(h)\big) \qquad \mbox{ on } \hat\Sigma \end{align} provided $h=0, \nabla h=0, \dots, \nabla^{\ell-i} h=0$ on $\hat \Sigma$. Once the claim is proven, together with the assumption $ \big((\nabla^\ell_\nu \mathrm{Ric} )'(h) \big)^\intercal=0$ on $\hat \Sigma$ and the above computations, we conclude $\nabla^{\ell+2} h=0$ on $\hat \Sigma$ and complete the proof. \begin{proof}[Proof of Claim] Let $g(s)$ be a differentiable family of metrics with $g(0) = \bar g$ and $g'(0)=h$. By definition, \begin{align*} (\nabla^\ell_\nu \mathrm{Ric} )'(h) =\left.
\ds\right|_{s=0}\left( \nabla^\ell_{\nu_{g(s)}} \mathrm{Ric}_{g(s)} \right) \end{align*} where $\nabla_{\nu_{g(s)}}$ is the $g(s)$-covariant derivative in the unit normal $\nu_{g(s)}$. When the $s$-derivative falls on $\nabla^i_\nu \mathrm{Ric}_{g(s)}$, we get the desired term $\nabla^{\ell -i }_\nu \big((\nabla_\nu^i \mathrm{Ric})'(h)\big)$. It suffices to see that the terms in which the $s$-derivative falls on one of the covariant derivatives $\nabla_{\nu_{g(s)}}$ must vanish. Note that $\left.\ds\right|_{s=0}\nabla_{\nu_{g(s)}}$ is a linear function in only $h$ and $\nabla h$. So $\left.\ds\right|_{s=0}\nabla^{\ell-i}_{\nu_{g(s)}}$ is a linear function in $h$ and the derivatives of $h$ up to the $(\ell-i)$th order, and thus it vanishes on $\hat \Sigma$. This proves the claim. \end{proof} \end{proof} \begin{remark} We will not use the following fact, but nevertheless it is interesting to note that, using similar computations, both items in Proposition~\ref{pr:infinite} are equivalent to the condition: for $k=0, 1, \dots, \ell$, \[ h^\intercal=0, \quad A'(h)=0,\quad (\nabla^k_\nu A)'(h)=0 \mbox{ on } \hat \Sigma \] where the second fundamental form $A$ is defined in the neighborhood as the second fundamental form of equidistant hypersurfaces to $\hat \Sigma$. \end{remark} \begin{proof}[Proof of Theorem~\ref{th:trivial'} (under infinite-order boundary condition)] Without loss of generality, we may assume $(h, v)$ satisfies the static-harmonic gauge. Note that $\hat \Sigma$ is analytic and $(\bar g, \bar u)$ is analytic in some local coordinate chart $\{x_1, \dots, x_n\}$ near $\hat \Sigma$ by Theorem~\ref{th:analytic}. By Corollary~\ref{co:analytic}, $(h, v)$ is analytic up to $\hat \Sigma$ in $\{ x_1,\dots, x_n\}$. Then by Corollary~\ref{co:geodesic}, there is an analytic vector field $Y$ in a collar neighborhood $U$ of $\hat \Sigma$ with $Y=0$ on $\hat \Sigma$ such that $\hat h:= h-L_Y \bar g$ satisfies the geodesic gauge in $U$. In particular, $\hat h$ is analytic. Since $h$ satisfies the infinite-order boundary condition, so does $\hat h$ (by Item~\eqref{it:X} in Corollary~\ref{co:static}). Applying Proposition~\ref{pr:infinite} to $\hat h$, we see that $\hat h$ vanishes at infinite order toward $\hat \Sigma$, and thus $\hat h\equiv 0$ by analyticity. We then conclude $h\equiv L_Y \bar g$ in $U$. Applying Theorem~\ref{th:extension}, there is a global vector field $X$ so that $X=Y$ in $U$, $X=0$ on $\hat \Sigma$, and $L_X \bar g = h$ in $M\setminus \Omega$. By Lemma~\ref{le:trivial-h}, $X\in \mathcal X(M\setminus \Omega)$ and $v = X(\bar u)$. This completes the proof. \end{proof} \section{Existence and local uniqueness}\label{se:solution} In this section, we prove Theorem~\ref{existence}. We restate the theorem and also spell out the precise meaning of ``geometric uniqueness'' and ``smooth dependence'' in the theorem. Recall the diffeomorphism group $\mathscr D (M\setminus \Omega)$ in Definition~\ref{de:diffeo}. \begin{theorem}\label{exist} Let $(M, \bar g, \bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$. Suppose that the operator $L$ defined on $M\setminus\Omega$ satisfies \begin{align} \label{eq:no-kernel2} \Ker L = \big\{ (L_X \bar g, X(\bar u)): X\in \mathcal X(M\setminus \Omega)\big\}.
\end{align} Then there exist positive constants $\epsilon_0,C$ such that for each $\epsilon\in (0, \epsilon_0)$, if $(\tau, \phi)$ satisfies $\|(\tau,\phi)-(\bar g^\intercal,H)\|_{\C^{2,\a}(\Sigma)\times \C^{1,\a}(\Sigma)}<\epsilon$, there exists an asymptotically flat, static vacuum pair $(g,u)$ with $\|(g,u)-(\bar g,\bar u)\|_{\C^{2,\a}_{-q}(M\setminus \Omega)}<C\epsilon$ solving the boundary condition $(g^\intercal, H_g) = (\tau, \phi) $ on $\Sigma$. Furthermore, the solution $(g, u)$ is locally, geometrically unique. Namely, there are a neighborhood $\mathcal U\subset {\mathcal M}(M\setminus\Omega)$ of $(\bar g, \bar u)$ and a neighborhood $\mathscr D_0\subset \mathscr D(M\setminus\Omega)$ of the identity map $\mathrm{Id}_{M\setminus \Omega}$ such that if $( g_1, u_1)\in \mathcal U$ is another static vacuum pair with the same boundary condition, there is a unique diffeomorphism $\psi\in \mathscr D_0$ such that $(\psi^* g_1, \psi^* u_1) = (g, u)$ in~$M\setminus \Omega$. Also, the solution $(g, u)$ satisfies both the static-harmonic and orthogonal gauges and depends smoothly on the Bartnik boundary data $(\tau, \phi)$. \end{theorem} Recall the operators $T, T^\mathsf{G}, L, L^\mathsf{G}$ defined in \eqref{eq:bdv}, \eqref{rsv}, \eqref{eq:L}, and \eqref{lsv} respectively. The proof of Theorem~\ref{exist} follows along similar lines to the special case of Euclidean $(\bar g, \bar u) = (g_{\mathbb E}, 1)$ considered in \cite{An-Huang:2021}. We outline the proof: Under the ``trivial'' kernel assumption~\eqref{eq:no-kernel2} and by Corollary~\ref{co:static}, $\Ker L^\mathsf{G} $ is the $N$-dimensional space \[ \Ker L^\mathsf{G} =\big\{ \big(L_X \bar g, X(\bar u) \big):X\in \mathcal X^\mathsf{G} (M\setminus \Omega) \big\}. \] Since the operator $L^\mathsf{G}$ is Fredholm of index zero by Lemma~\ref{le:Fred}, the cokernel of $L^\mathsf{G}$ must also be $N$-dimensional. In Lemma~\ref{le:range} below, we explicitly identify the cokernel by \eqref{equation:cokernel2} in Proposition~\ref{proposition:cokernel}. From there, we construct a modified operator $\overline{T^\mathsf{G}}$ whose linearization $\overline{L^\mathsf{G}}$ is an isomorphism in Proposition~\ref{pr:modified}. \begin{lemma}\label{le:range} Under the same assumptions as in Theorem~\ref{exist}, we have \begin{align*} (\Range L^\mathsf{G})^\perp&={\mathcal Q} \end{align*} where $\mathcal{Q}$ consists of elements $\kappa(X) \in \big(\C^{0,\alpha}_{-q-2}(M\setminus \Omega)\times \C^{1,\alpha}(\Sigma)\times \mathcal{B}(\Sigma)\big)^*$ (the dual space) of the form \[ \kappa(X)=\Big(2\b^* X \!-\bar u^{-1} X(\bar u) \bar g, -\Div X +\bar u^{-1} X(\bar u), \big(\!-2\bar u \b^* X + X(\bar u) \bar g\big) (\nu, \cdot ), \, 0, \, 0\Big) \] for $X\in \mathcal X^\mathsf{G}(M\setminus \Omega)$. Recall $\b^* = \b_{\bar g}^*$ defined by \eqref{eq:Bianchi*}. Furthermore, the codomain of $L^\mathsf{G}$ can be decomposed as \begin{align}\label{eq:decom} \Range L^\mathsf{G} \oplus \eta \mathcal Q \end{align} where $\eta \mathcal Q=\big\{ \eta \kappa(X): \kappa(X)\in \mathcal Q\big\} $ and $\eta$ is a positive smooth function on $M\setminus\Omega$ satisfying $\eta=1$ near $\Sigma$ and $\eta(x)=|x|^{-1}$ outside a compact set. \end{lemma} \begin{remark} Recall the definition of $\kappa_0(g, u, X)$ in \eqref{eq:kappa}. The first two components of $\kappa(X)$ above are exactly $\kappa_0(\bar g, \bar u, X)$. \end{remark} \begin{proof} All the geometric operators are with respect to $\bar g$ in the proof. By Corollary~\ref{co:static}, $\Dim \Ker L^\mathsf{G} = N$.
Since $L^\mathsf{G}$ has Fredholm index zero by Lemma~\ref{le:Fred} and $\Dim \mathcal Q=N$, we just need to show ${\mathcal Q}\subseteq (\Range L^\mathsf{G})^\perp$. Namely, we will show $\big\langle L^\mathsf{G}(h, v), \kappa(X)\big\rangle_{{\mathcal L}^2}=0$ for any $(h, v)$ and $X\in \mathcal X^\mathsf{G}(M\setminus \Omega)$. Since $S'(h, v) $ is ${\mathcal L}^2$-orthogonal to $\kappa_0(\bar{g}, \bar{u}, X)$ by \eqref{equation:cokernel2} in Proposition~\ref{proposition:cokernel}, we can reduce the computation as follows, where we denote by $Z= \mathsf{G}'(h,v)$: \begin{align*} &\big\langle L^\mathsf{G}(h, v), \kappa(X)\big\rangle_{{\mathcal L}^2} &\\ &= \int_{M\setminus \Omega} \Big\langle \big( -\bar u\, \mathcal{D} Z, -Z(\nabla \bar u) \big), \big( 2\b^* X - \bar u^{-1} X(\bar u) \bar g,\, -\Div X + \bar u^{-1} X(\bar u)\big) \Big\rangle \, \dvol\\ &\quad +\int_\Sigma\big( -2\bar u \b^* X + X(\bar u)\bar g\big) \big(\nu, Z\big) \, d\sigma\\ &=\int_{M\setminus \Omega} Z \Big(\Div \big(2 \bar u \beta^* X - X(\bar u) \bar g \big) - \big( -\Div X + \bar u^{-1} X(\bar u) \big) d\bar u \Big) \, \dvol \end{align*} where we apply integration by parts in the last identity and note that the boundary terms on $\Sigma$ cancel. The last integral is zero because \begin{align*} &\Div \big(2 \bar u \beta^* X - X(\bar u) \bar g \big) - \big( -\Div X + \bar u^{-1} X(\bar u) \big) d\bar u \\ &= 2 \bar u \Div \beta^* X - d\big (X(\bar u)\big)+ 2\beta^* X (\nabla u, \cdot) - \big( -\Div X + \bar u^{-1} X(\bar u) \big) d\bar u\\ & = 2 \bar u \Div \beta^* X - d\big (X(\bar u)\big)+L_X \bar g(\nabla u, \cdot) - \bar u^{-1} X(\bar u) d\bar u\\ &= \bar u \Big( \, 2 \Div \beta^* X -\bar u^{-1} d\big (X(\bar u)\big)+\bar u^{-1} L_X \bar g(\nabla u, \cdot) - \bar u^{-2} X(\bar u) d\bar u\Big)\\ &= -\bar u \, \Gamma(X)=0 \end{align*} where in the second equality we use $2\beta^* X(\nabla u, \cdot) = L_X \bar g(\nabla u, \cdot) - \Div X d\bar u$ by definition, and in the last line we use $2\Div \beta^* X =-\beta L_X \bar g$ and the definition of $\Gamma$ (see either \eqref{eq:gauge} or \eqref{eq:G}). We verify \eqref{eq:decom}. With respect to the weighted inner product ${\mathcal L}^2_\eta$, we denote by $\kappa_1, \dots, \kappa_N$ an orthonormal basis of ${\mathcal Q}$. For any element $f$ in the codomain of $L^\mathsf{G}$, one can verify that \[ f - \eta \sum_{\ell=1}^N a_\ell \kappa_\ell \in \mathcal Q^{\perp}=\Range L^{\mathsf{G}} \] where the numbers $a_\ell = \langle f, \kappa_\ell \rangle_{{\mathcal L}^2}$. Since $\Range L^{\mathsf{G}}\cap \eta {\mathcal Q}=0$, we get the desired decomposition. \end{proof} Let $\rho$ be the weight function in Definition~\ref{de:gauge}. Define a complement to $\Ker L^\mathsf{G}$ in $\C^{2,\alpha}_{-q}(M)$ by \begin{equation*} {\mathcal E}=\left\{(h,v):\int_{M\setminus\Omega}\Big\langle (h,v), \big(L_X\bar g,X(\bar u)\big)\Big \rangle \rho\dvol=0\mbox{ for all }X\in{\mathcal X}^\mathsf{G} (M\setminus \Omega) \right \}. \end{equation*} In other words, $(h, v)\in {\mathcal E}$ if and only if $(\bar g +h, \bar u + v)$ satisfies the orthogonal gauge (recall Definition~\ref{de:gauge}). To summarize, we have decomposed the domain and codomain of $L^\mathsf{G}$ by \[ L^\mathsf{G} :{\mathcal E} \oplus \Ker L^\mathsf{G}\longrightarrow \Range L^\mathsf{G} \oplus \eta {\mathcal Q}. 
\] As in \cite[Section 4.3]{An-Huang:2021}, we define the \emph{modified operator} $\overline {T^\mathsf{G}}$ as \begin{align*} \overline {T^\mathsf{G}}(g,u,W)= & \begin{array}{l} \left\{ \begin{array}{l} -u \mathrm{Ric}_{ g}+\nabla^2_{ g} { u}-u\mathcal{D}_g {\mathsf{G}}(g,u)-\eta\big(2\b^*_gW-u^{-1} W(u)g\big)\\ \Delta_{ g} {u}-\mathsf{G}(g,u) (\nabla_g u) +\eta\, \big(\Div_g W-u^{-1} W(u)\big)\\ \Gamma_{(g,u)}(W)+\big(\!-\mathrm{Ric}_g+u^{-1}\nabla_g^2u\big)(W, \cdot ) \end{array} \right. ~\mbox{in }M\setminus \Omega \\ \left\{ \begin{array}{l} \mathsf{G}(g,u)+\big(2u\b^*_gW-W(u)g\big)(\nu_g, \cdot )\\ g^\intercal\\ H_g \end{array} \right. \quad\mbox{ on }\Sigma \end{array} \end{align*} where $\Gamma_{(g,u)}(W)$ is as in \eqref{eq:Gamma}. The operator $\overline {T^\mathsf{G}}$ is defined on the function space of $(g, u)\in \mathcal U$ and $W\in \widehat{{\mathcal X}}$. Here, $\mathcal U= \big((\bar g,\bar u)+{\mathcal E}\big)\cap {\mathcal M}(M\setminus \Omega)$ consists of asymptotically flat pairs satisfying the orthogonal gauge. The linear space $\widehat {\mathcal X}$ is defined similarly to ${\mathcal X} (M\setminus \Omega)$, with the only difference being the regularity.\footnote{This slight technicality arises in order to avoid a potential ``loss of derivatives'' issue. Some coefficients in the third equation of $\overline {T^\mathsf{G}}$ are only $\C^{0,\alpha}$, e.g. $\mathrm{Ric}_g$. If we use the space ${\mathcal X}(M\setminus \Omega)$ (of $\C^{3,\alpha}$-regularity) instead of $\widehat {\mathcal X}$, the codomain of the map may still be only in $\C^{0,\alpha}_{-q}$, and the map cannot be surjective.} Explicitly, \begin{align*} \widehat {\mathcal X}=\Big\{&X\in \C^{2,\alpha}_{\mathrm{loc}}(M\setminus \Omega):\, X=0 \mbox{ on } \Sigma \mbox{ and }X-Z= O^{2,\a}(|x|^{1-q}) \mbox{ for some } Z\in \mathcal Z\Big\}. \end{align*} In other words, ${\mathcal X} (M\setminus \Omega)$ can be viewed as a subspace of $\widehat{\mathcal X}$ with $\C^{3,\alpha}$ regularity. With the above choices of function spaces, we get \[ \overline {T^\mathsf{G}}: \mathcal U \times \widehat {\mathcal X} \longrightarrow \C^{0,\a}_{-q-2}(M\setminus \Omega)\times \C^{0,\a}_{-q-1}(M\setminus \Omega )\times \C^{1,\a}(\Sigma)\times \mathcal{B}(\Sigma). \] (Note that we slightly abuse the notation and blur the distinction between $W$ and its dual covector with respect to $g$.) \begin{proposition}\label{pr:modified} Under the same assumptions as in Theorem~\ref{exist}, we have \begin{enumerate} \item $\overline {T^\mathsf{G}}$ is a local diffeomorphism at $(\bar g,\bar u,0)$.\label{it:diffeo} \item If $\overline {T^\mathsf{G}}(g,u,W)=(0,0,0,0,\tau,\phi)$, then $W=0$ and $T^\mathsf{G}(g,u)=(0,0,0,\tau,\phi)$. \label{it:solution} \end{enumerate} \end{proposition} \begin{proof} To prove Item~\eqref{it:diffeo}, it suffices to show that the linearization of $\overline {T^\mathsf{G}}$ at $(\bar g,\bar u,0)$, denoted by $\overline{ L^\mathsf{G}}$, is an isomorphism, where \[ \overline{ L^\mathsf{G}}: \mathcal E\times \widehat {\mathcal X } \longrightarrow \C^{0,\alpha}_{-q-2}(M\setminus \Omega) \times \C^{0,\alpha}_{-q-1}(M\setminus \Omega)\times \C^{1,\alpha}(\Sigma) \times \mathcal B(\Sigma).
\] For $(h, v)\in \mathcal E$ and $X\in \widehat{{\mathcal X}}$, $\overline{ L^\mathsf{G}} (h, v, X)$ is given by \begin{equation*} \begin{split} \begin{array}{l} \left\{ \begin{array}{l} -\bar u \mathrm{Ric}'(h) + (\nabla^2)'(h) \bar u - v\mathrm{Ric} + \nabla^2 v -\bar u \mathcal{D} \mathsf{G}'(h, v) -\eta\Big(2\b^*X-\bar u^{-1} X(\bar u)\bar g\Big)\\ \Delta v+\Delta'(h)\bar u-\mathsf{G}'{(h,v)}(\nabla \bar u)+\eta\Big(\Div X-\bar u^{-1}X(\bar u)\Big)\qquad \qquad \qquad \mbox{ in }M \setminus \Omega \\ \Gamma(X) \end{array} \right. \\ \left\{ \begin{array}{l} \mathsf{G}'{(h,v)}+(2\bar u\b^*X-X(\bar u)\bar g)(\nu)\\ h^\intercal\\ H'(h) \end{array}\right.\quad\mbox{ on }\Sigma. \end{array} \end{split} \end{equation*} Note that we use $X$ to denote both the vector field and the dual covector, and in the third component recall $\Gamma(X)$ defined in~\eqref{eq:gauge}. Observe that when dropping the third component, we can understand $\overline{L^\mathsf{G}}(h,v,X)$ as the sum $L^\mathsf{G}(h,v)+\eta \kappa(X)$. We show that $\overline{L^\mathsf{G}}$ is an isomorphism. To see that $\overline{L^\mathsf{G}}$ is surjective: Since the third component $\Gamma(X)$ is surjective by Lemma~\ref{PDE}, we just need to show that $\overline{ L^\mathsf{G}}$ is surjective onto the other components for those $X$ satisfying $\Gamma(X)=0$, i.e. $X\in \mathcal X^\mathsf{G} (M\setminus \Omega)$. It is equivalent to showing that $L^\mathsf{G}(h,v)+\eta\kappa(X)$ is surjective for $(h, v)\in \mathcal E$ and for $X\in \mathcal X^\mathsf{G} (M\setminus \Omega)$, which follows from Lemma~\ref{le:range}. To see that $\overline{L^\mathsf{G}}$ is injective: If $(h, v)\in {\mathcal E}$ and $X\in \widehat {{\mathcal X}}$ solve $\overline{ L^\mathsf{G}}(h,v,X)=0$, then $L^\mathsf{G}(h,v)+\eta \kappa(X)=0$ and $\kappa(X)\in {\mathcal Q}$, which implies $L^\mathsf{G} (h, v)=0$ and $\kappa(X)=0$ by the decomposition \eqref{eq:decom} in Lemma~\ref{le:range}. From there we can conclude $(h, v)=0$ and $X=0$. For Item~\eqref{it:solution}, suppose $\overline {T^\mathsf{G}}(g,u,W)=(0,0,0,0,\tau,\phi)$. Then \begin{align*} -u \mathrm{Ric}_{ g}+\nabla^2_{ g} { u}-u \mathcal{D}_g {\mathsf{G}}(g,u)&=\eta\, \big(2\b^*_gW-u^{-1}W(u)g\big)\\ \Delta_{ g} {u}-\mathsf{G}(g,u) (\nabla_g u) &=-\eta\, \big(\Div_g W-u^{-1} W(u)\big). \end{align*} We denote $\mathsf{G}=\mathsf{G}(g, u)$ in the following computations. Pair the first equation with $2\b^*_g W- u^{-1} W(u)g$, pair the second equation with $-\big(\Div_gW-u^{-1}W(u)\big)$, and integrate over $M\setminus \Omega$: \begin{align*} &\int_{M\setminus \Omega }\eta\, \Big|2\b^*_gW-u^{-1}W(u)g\Big|^2+\eta \, \Big|\Div_g W-u^{-1}W(u)\Big|^2\dvol_g\\ &=\int_{M\setminus \Omega}\bigg\langle \Big(-u\mathrm{Ric}_{g}+\nabla^2_{ g} u-u\mathcal{D}_g\mathsf{G}, \Delta_{ g} {u}-\mathsf{G}(\nabla_g u)\Big),\\ &\qquad \qquad \qquad\qquad \big(2\b^*_gW-u^{-1} W(u)g\,, - \Div_g W +u^{-1}W(u)\big)\bigg\rangle\dvol_g\\ &=\int_{M\setminus \Omega}\bigg\langle \Big(-u\mathcal{D}_g\mathsf{G},-\mathsf{G}(\nabla u)\Big),\big(2\b^*_gW-u^{-1} W(u)g\,, - \Div_g W +u^{-1}W(u)\big)\bigg\rangle\dvol_g\\ &=\int_\Sigma \Big\langle \mathsf{G}, \big(2u\beta^*_g W - W(u) g \big)(\nu) \Big\rangle \, d\sigma_g\\ &= \int_\Sigma - |\mathsf{G}|^2 \, d\sigma_g \qquad \Big(\mbox{use $-\mathsf{G} = 2u\beta^*_g W - W(u) g$ from the equation for $\overline {T^\mathsf{G}}$}\Big). \end{align*} In the above identities, we use that in the second equality the ${\mathcal L}^2$-pairing involving $(-u \mathrm{Ric}_g + \nabla^2_g u, \Delta_g u)$ is zero by \eqref{equation:cokernel1}.
Then in the third equality we apply integration by parts and note that the integral over $M\setminus \Omega$ is zero because \begin{align*} &\Div_g \big(u \, 2\b^*_gW-W(u)g\big)-\big(-\Div_g W+u^{-1}W(u)\big)d u\\ &= 2 u \Div_g \beta^*_g W - d(W(u)) + L_W g (\nabla u, \cdot) - u^{-1} W(u) du \\ &=- u \Big( \beta_g L_W g + u^{-1} d(W(u)) - u^{-1} L_W g(\nabla, \cdot) + u^{-2} W(u) du \Big)\\ &=- u \Big(-\Delta_g W - u^{-1} d W (\nabla u)+ u^{-2} W(u) du + \big( -\mathrm{Ric}_g + u^{-1} \nabla^2 u\big) (W) \Big)\\ &=- u \Big(\Gamma_{(g,u)}(W) + \big( -\mathrm{Ric}_g + u^{-1} \nabla^2 u\big) (W) \Big)\\ &=0 \end{align*} where we compute similarly as in \eqref{eq:G} in the third equality and use $\overline {T^\mathsf{G}}(g,u,W)=(0, 0, 0, 0, \tau, \phi)$ in the last equality. The previous integral identity implies $2\b^*_gW-u^{-1}W(u)g=0$ and $\Div W-u^{-1}W(u)=0$, so we conclude $W\equiv 0$. \end{proof} \begin{proof}[Proof of Theorem~\ref{exist}] From Item~\eqref{it:diffeo} in Proposition~\ref{pr:modified}, $\overline{T^\mathsf{G}}$ is a local diffeomorphism at $\big((\bar g, \bar u), 0\big)$. That is, there are positive constants $\epsilon_0, C$ such that for every $0 < \epsilon <\epsilon_0$, there is an open neighborhood $\mathcal U\times \mathcal V$ of $\big((\bar g, \bar u), 0\big)$ in $\big((\bar g, \bar u) + \mathcal E \big)\times \widehat X$ with the diameter less than $C\epsilon$ such that $\overline{T^\mathsf{G}}$ is a diffeomorphism from $\mathcal U\times \mathcal V$ onto an open ball of radius $\epsilon$ in the codomain of $\overline{T^\mathsf{G}}$. Therefore, given any $(\tau, \phi)$ satisfying $\|(\tau , \phi) - (\bar g^\intercal, H_{\bar g})\|_{\C^{2,\alpha}(\Sigma)\times \C^{1,\alpha}(\Sigma) } < \epsilon$, there exists a unique $(g, u, W)$ with $\|(g, u) - (\bar g, \bar u ) \|_{\C^{2,\alpha}_{-q}(M\setminus \Omega)} <C\epsilon$ and $\| W \|_{\C^{2,\alpha}_{1-q}(M\setminus \Omega)} < C \epsilon$ satisfying \[ \overline{T^\mathsf{G}}(g, u , W) = (0, 0, 0, 0, \tau, \phi) \] and depending smoothly on $(\tau, \phi)$. By Item~\eqref{it:solution} in Proposition~\ref{pr:modified}, we have $T^\mathsf{G}(g, u) = (0, 0, 0, \tau, \phi)$. By Lemma~\ref{rsv-to-sv}, $(g, u)$ satisfies the static-harmonic gauge and $T(g, u)=(0, 0, 0, \tau, \phi)$. By the definition of the complement space $\mathcal E$ and $(g, u)\in (\bar g, \bar u) + \mathcal E$, $(g, u)$ also satisfies the orthogonal gauge. \end{proof} \begin{remark}\label{re:constant} The constants $\epsilon_0, C$ in the above proof depend on the global geometry $\Sigma$ in $M\setminus \Omega$. More precisely, by Inverse Function Theorem, the constants depend on the operator norms of $\overline{L^\mathsf{G}}$, $\overline{L^\mathsf{G}}^{-1}$, as well as the second Frech\'et derivative $D^2\overline{T^\mathsf{G}}|_{(g, u)}$ for $(g, u)$ in a neighborhood of $(\bar g, \bar u)$, between those function spaces as specified above. \end{remark} \section{Perturbed hypersurfaces}\label{se:perturbation} Through out this section, we let $(M,\bar g,\bar u)$ be an asymptotically flat, static vacuum triple with $\bar u>0$. In this section, we prove Theorem~\ref{generic}, which follows directly from Theorem~\ref{th:zero} below, and then Corollary~\ref{co2}. Let $\{ \Sigma_t\}$ be a smooth one-sided family of hypersurfaces generated by $X$ foliating along $\hat \Sigma_t\subset \Sigma_t$ with $M\setminus \Omega_t$ simply connected relative to $\hat \Sigma_t$, as defined in Definition~\ref{def:one-sided}. 
Let $\psi_t: M\setminus\Omega\longrightarrow M\setminus\Omega$ be the flow of $X$. If we write $\Omega = \Omega_0$ and $\Sigma = \Sigma_0$, then $\Omega_t = \psi_t (\Omega)$ and $\Sigma_t = \psi_t (\Sigma)$. Denote the pull-back static pair defined on $M\setminus \Omega$ by \[ (g_t, u_t) = \psi_t^* \big( \left.\big(\bar g, \bar u\big)\right|_{M\setminus \Omega_t}\big). \] In this notation, $(g_0,u_0) = (\bar g, \bar u)$. Recall the operator $L$ defined in \eqref{eq:L}, which is the linearization of $T$ at $(\bar g,\bar u)$ in $M\setminus\Omega$. We shall consider the corresponding family of operators $L_t$ obtained by linearizing $T$ at $(g_t,u_t)$ in $M\setminus\Omega$: \begin{align*} L_t(h, v)= \begin{array}{l} \left\{ \begin{array}{l} -u_t \mathrm{Ric}'|_{g_t} (h) + \left.(\nabla^2)'\right|_{g_t} (h) u_t - v \mathrm{Ric}_{g_t} + \nabla^2_{g_t} v\\ \Delta_{g_t} v + \left.\Delta'\right|_{g_t} (h) \bar u \end{array}\right. \quad {\rm in }~M\setminus \Omega \\ \left\{ \begin{array}{l} h^\intercal\\ H'|_{g_t}(h) \end{array}\right.\quad {\rm on }~\Sigma. \end{array} \end{align*} As for $L$, we need to add gauge terms to modify $L_t$ for the sake of ellipticity. To that end, we generalize the static-harmonic gauge $\mathsf{G}(g,u)$ with respect to $(\bar g,\bar u)$ defined in Section 3.1 to a gauge term $\mathsf{G}_t(g,u)$ with respect to $(g_t,u_t)$ and consider its linearization $\mathsf{G}'_t$ at $(g_t,u_t)$: \begin{align*} \mathsf{G}_t(g, u) &:= \beta_{g_t} g + u_t^{-2} u du - u_t^{-1} g(\nabla_{g_t} u_t, \cdot )\\ \mathsf{G}_t'(h, v)&:= \beta_{g_t} h + u_t^{-1} dv + v u_t^{-2} du_t - u_t^{-1} h(\nabla_{g_t} u_t, \cdot ). \end{align*} Then we define the family of operators $L^{\mathsf{G}}_t$ that have the same domain and co-domain as $L^\mathsf{G}$ in \eqref{lsv}, where $L^{\mathsf{G}}_t(h, v)=$ \begin{align*} \begin{array}{l} \left\{ \begin{array}{l} -u_t \mathrm{Ric}'|_{g_t} (h) + \left.(\nabla^2)'\right|_{g_t} (h) u_t - v \mathrm{Ric}_{g_t} + \nabla^2_{g_t} v-u_t\mathcal{D}_{g_t} {\mathsf{G}}'_t(h,v)\\ \Delta_{g_t} v + \left.\Delta'\right|_{g_t} (h) \bar u-{\mathsf{G}}'_t(h,v) (u_t) \end{array}\right. \quad {\rm in }~M\setminus \Omega \\ \left\{ \begin{array}{l} \mathsf{G}'_t(h, v) \\ h^\intercal\\ H'|_{g_t}(h) \end{array}\right.\quad {\rm on }~\Sigma. \end{array} \end{align*} Note that in our notation $L_0^\mathsf{G}=L^\mathsf{G}$ and $L_0 = L$. It is straightforward to verify that the results in Lemma~\ref{le:linear}, Lemma~\ref{le:Fred} and Corollary~\ref{co:analytic} are also true for $L_t$ and $L_t^\mathsf{G}$ with $(\bar g,\bar u)$ replaced by $(g_t,u_t)$. \medskip \noindent {\bf Outline of the proof of Theorem~\ref{generic}:} Most of this section, except Theorem~\ref{th:zero}, directly extends the arguments in \cite[Section 6]{An-Huang:2021}, which we overview here. For $a$ in an open dense subset $J\subset [-\delta, \delta]$, we show that any $(h, v)\in \Ker L_a$ is the limit of a ``differentiable'' sequence of kernel elements $(h(t_j), v(t_j)) \in \Ker L_{t_j}$ as $t_j\to a$. From the sequence, we construct a ``test'' static vacuum deformation $(k, w)$ at $(g_a, u_a)$, which is essentially the $X$-derivative of $(h, v)$, with certain Bartnik boundary data. We then use the Green-type identity in Corollary~\ref{co:Green} to show that $A'|_{g_a}(h)=0$. The new observation in Theorem~\ref{th:zero} is that the test pair $(k, w)$ indeed has zero Bartnik boundary data, i.e. $(k, w) \in \Ker L_a$.
Therefore, we can inductively compute higher order derivatives of $h$ on $\hat\Sigma$. \medskip The next two results find the open dense interval $J$. Just like the operator $L^\mathsf{G}$, each $L_t^\mathsf{G}$ is elliptic and thus Fredholm as in Lemma \ref{le:Fred}. We can use elliptic estimates to show that $\Ker L_t^\mathsf{G}$ varies ``continuously'' for $t$ in an open dense subset $J\subset [-\delta, \delta]$. The arguments follow verbatim those in \cite[Section 6.1]{An-Huang:2021}, so we omit the proof. (Note the notation discrepancy: $L_t$ and $L_t^{\mathsf{G}}$ here correspond respectively to $S_t$ and $L_t$ in \cite{An-Huang:2021}). \begin{proposition}[Cf. {\cite[Proposition 6.6]{An-Huang:2021}}]\label{pr:limit} There is an open dense subset $J\subset (-\d, \d)$ such that for every $a\in J$ and every $(h, v)\in \Ker L_a^\mathsf{G}$, there is a sequence $\{ t_j \} $ in $J$ with $t_j \searrow a$, $\big(h(t_j), v(t_j)\big)\in \Ker L_{t_j}^\mathsf{G}$, and $(p, z)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ such that, as $t_j \searrow a$, \begin{align*} \big( h(t_j), v(t_j) \big) &\longrightarrow(h,v)\\ \frac{\big( h(t_j), v(t_j) \big) - (h, v) }{t_j - a}&\longrightarrow (p, z) \end{align*} where both convergences are taken in the $\C^{2,\alpha}_{-q}(M\setminus \Omega)$-norm. \end{proposition} It is more convenient to consider the above convergence for the ``un-gauged'' operators as in the next corollary. \begin{corollary}\label{co:limit2} Let $J\subset (-\d, \d)$ be the open dense subset in Proposition~\ref{pr:limit}. Then for every $a\in J$ and every $(h, v)\in \Ker L_a$, there is a sequence $\{ t_j \} $ in $J$ with $t_j \searrow a$, $\big(h(t_j), v(t_j)\big)\in \Ker L_{t_j}$, and $(p, z)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ such that, as $t_j \searrow a$, \begin{align*} \big( h(t_j), v(t_j) \big) &\longrightarrow(h,v)\\ \frac{\big( h(t_j), v(t_j) \big) - (h, v) }{t_j - a}&\longrightarrow (p, z) \end{align*} where both convergences are taken in the $\C^{2,\alpha}_{-q}(M\setminus \Omega)$-norm. \end{corollary} \begin{proof} Let $(h, v)\in \Ker L_a$. By Item \eqref{it:SH} in Lemma~\ref{le:linear}, there is a vector field $V\in \mathcal X(M\setminus \Omega)$ such that \[ (\hat h, \hat v):=\big(h+L_V \bar g, v+ V (\bar u)\big) \] satisfies $\mathsf{G}_a'(\hat h, \hat v)=0$, and thus $(\hat h, \hat v)\in \Ker L^\mathsf{G}_a$. By Proposition~\ref{pr:limit}, there exists a sequence $t_j\in J$ with $t_j \searrow a$ and $(\hat h(t_j), \hat v(t_j))\in \Ker L^\mathsf{G}_{t_j}$, $(\hat p,\hat z)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ such that as $t_j \searrow a$, \begin{align*} \big(\hat h(t_j),\hat v(t_j) \big) &\longrightarrow (\hat h, \hat v)\\ \frac{\big( \hat h(t_j), \hat v(t_j) \big) - (\hat h, \hat v)}{t_j - a}&\longrightarrow (\hat p, \hat z). \end{align*} We now define \begin{align*} \big( h(t_j), v(t_j) \big) &=\big( \hat h(t_j) , \hat v(t_j) \big) -\big (L_V g_{t_j}, V( u_{t_j} ) \big)\\ (p, z) &= (\hat p, \hat z) -\Big(L_V L_X g_a, V \big(X(u_a)\big) \Big) \end{align*} where we recall that $X$ denotes the deformation vector of $\{ \Sigma_t \}$. It is straightforward to verify the desired convergences. \end{proof} For each $t$, we let $\Sigma^+_t\subset \Sigma $ be the subset $\psi_t^{-1}\big(\{p\in\Sigma_t: \zeta(p)>0\}\big)$, and write $\Sigma^+=\Sigma^+_0$. Note that $\Sigma_t^+$ contains $\psi_t^{-1} (\hat \Sigma_t)$. \begin{theorem}\label{th:zero} Let $J\subset (-\d, \d)$ be the open dense subset in Proposition~\ref{pr:limit}.
Then for every $a\in J$ and every $(h, v)\in \Ker L_a$, we have \begin{align*} A'|_{g_a}(h)=0,\quad \big(\mathrm{Ric}'|_{g_a}(h)\big)^\intercal=0,\quad \Big(\big(\nabla^k_\nu \, \mathrm{Ric} \big)'\big|_{g_a}(h)\Big)^\intercal = 0 \quad \mbox{ on } \Sigma^+_a \end{align*} for all positive integers $k$. \end{theorem} \begin{proof} Let $(h, v)\in \C^{2,\alpha}_{-q}(M\setminus \Omega)$ solve $L_a (h, v)=0$. We first prove that $A'|_{g_a}(h)=0$ on $\Sigma^+_a$ for all $a\in J$. We may without loss of generality assume $a=0$ and that $(h, v)$ satisfies the geodesic gauge $h(\nu,\cdot) = 0$ on $\Sigma$. (See \cite[Lemma 2.5]{An-Huang:2021}.) Let $\big(h(t_j), v(t_j) \big)\in \Ker L_{t_j}$ and $(p, z)$ be from Corollary~\ref{co:limit2}. We \emph{claim} that $\big(p-L_X h, z-X(v)\big)$ is a static vacuum deformation, i.e. $S'\big(p-L_X h, z-X(v)\big)=0$ in $M\setminus\Omega$, with the Bartnik boundary data: \begin{align}\label{eq:perturbed-bdry} \begin{split} (p-L_X h)^\intercal &= -2\zeta A'(h)\\ H'(p-L_X h) &= \zeta A\cdot A'(h). \end{split} \end{align} We compare $\big(h(t_j), v(t_j) \big)$ and the pull-back pair $\psi_{t_j}^* (h, v)$. They are equal at $t_j=0$, and both satisfy \[ S'|_{g_{t_j}}\big(h(t_j), v(t_j) \big)=0 \quad \mbox{ and }\quad S'|_{g_{t_j}}\big(\psi_{t_j}^* (h, v)\big)=0\quad \mbox{ in } M\setminus \Omega. \] We take the difference quotient: \begin{align*} 0&=\lim_{t_j\to 0}\frac{1}{t_j} \left( S'|_{g_{t_j}}\big(h(t_j), v(t_j) \big)-S'|_{g_{t_j}}\big(\psi_{t_j}^* (h, v)\big) \right)\\ &=\lim_{t_j\to 0}\frac{1}{t_j} \left( S'|_{g_{t_j}}\big(h(t_j)-h, v(t_j)-v \big)-S'|_{g_{t_j}}\big( \psi_{t_j}^* (h, v) -(h,v)\big)\right)\\ &= S' \big(p-L_X h, z- X(v) \big). \end{align*} For the boundary data, since $(h^\intercal, H'(h))=0$ and $\big( h(t_j)^\intercal, H'|_{g_{t_j}} (h(t_j)) \big)=0$ on $\Sigma$ for all $t_j$, we have \begin{align*} p^\intercal &= \lim_{t_j\to 0}\frac{ \big(h(t_j)-h\big)^\intercal}{t_j} = 0\\ H'(p-L_Xh)& =\lim_{t_j\to 0} \frac{1}{t_j} H'|_{g_{t_j} } \big( h(t_j)- \psi^*_{t_j}(h) \big)\\ &=-\lim_{t_j\to 0} \frac{1}{t_j} \psi_{t_j}^*\big(H'|_{g}(h)\big)=-X(H'(h)), \end{align*} where we recall $H'|_{g_{t_j}}(h(t_j))=0$. By \eqref{eq:sff} and \eqref{eq:Ricatti}, we have $(L_X h)^\intercal = 2\zeta A'(h)$ and $X\big(H'(h)\big)=- \zeta A\cdot A'(h)$, which completes the proof of the claim. We apply the Green-type identity, Corollary~\ref{co:Green}, to $(h, v)$ and $(k, w):= \big(p-L_X h, z-X(v) \big)$ and get \begin{align*} 0 &= \int_\Sigma \Big \langle Q(h, v), \big( k^\intercal, H'(k) \big) \Big\rangle d\sigma\\ &= \int_\Sigma \Big \langle \big( vA + \bar u A'(h) - \nu (v) g^\intercal, 2v \big), \big( -2\zeta A'(h),\zeta A\cdot A'(h) \big) \Big\rangle\, d\sigma\\ &= -\int_\Sigma 2\bar u\zeta |A'(h)|^2 \, d\sigma \end{align*} where in the second equality we use the definition of $Q(h, v)$ (noting $\nu'(h)=0$ in the geodesic gauge) and in the last equality $g^\intercal \cdot A'(h) = 0$. This shows that $A'(h)=0$ on $\Sigma^+$. It follows that for all $(h, v)\in \Ker L_a~(a\in J)$, we have $A'|_{g_a}(h)=0$ on $\Sigma^+_a$, because $A'(h)$ is gauge invariant by Corollary \ref{co:static} Item (3). So far, the argument closely follows that of Theorem~7{$^\prime$} in \cite{An-Huang:2021}. We further observe that for $(k, w)= (p-L_X h, z- X(v))$ as defined above, its Bartnik boundary data vanishes by \eqref{eq:perturbed-bdry} and $A'(h)=0$, and thus $(k,w)\in \Ker L$.
Therefore, we must have $A'(k)=0$ on $\Sigma^+$: \begin{equation} \begin{split}\label{limit} 0 &= A'(p-L_Xh)\\ &= \lim_{t_j\to 0}\frac{1}{t_j}A'|_{g_{t_j}}\big(h(t_j)-\psi_{t_j}^*(h)\big) = \lim_{t_j\to 0}\frac{1}{t_j}A'|_{\psi_{t_j}^*g}(-\psi_{t_j}^*(h))\\ &=-\lim_{t_j\to 0}\frac{1}{t_j}\psi_{t_j}^*\big( A'(h) \big)=-L_X\big(A'(h)\big)=-\zeta\nabla_\nu \big( A'(h) \big). \end{split} \end{equation} In the second line above, we use that $A'|_{g_{t_j}}(h(t_j))=0$ on $\Sigma^+_{t_j}$ for all $t_j$; and since the deformation vector field $X$ is smooth, $\Sigma_{t_j}^+\to\Sigma^+$ as $t_j\to 0$. In the last equality we use that $A'(h)=0$ on $\hat \Sigma$. Thus we obtain $\nabla_\nu \big(A'(h)\big)=0$ on $\Sigma^+$. By the formula~\eqref{eq:sff} for $A'(h)$ and noting that $h=0, \nabla h=0$ on $\Sigma$ (because $h^\intercal =0$, $A'(h)=0$, and $h$ is in the geodesic gauge), we obtain \begin{equation*} (\nabla_\nu^2 h)^\intercal =0\ \ \mbox{on } \Sigma^+. \end{equation*} This implies $\big(\mathrm{Ric}'(h)\big)^\intercal=0$ by \eqref{equation:Ricci}. Again, since $\big(\mathrm{Ric}'(h)\big)^\intercal$ is gauge invariant, this holds for all $(h,v)\in \Ker L_a~(a\in J)$. We proceed to prove, by induction on $k$, that for all $a\in J$ and for all $(h, v)\in \Ker L_a$ we have $ \big((\nabla_\nu^k \mathrm{Ric} )'\big|_{g_a}(h)\big)^\intercal=0$ on $\Sigma^+_a$. In the previous paragraph, we proved the statement for $k=0$, i.e. $\big(\mathrm{Ric}'|_{g_a}(h)\big)^\intercal=0$ on $\Sigma^+_a$ for all $(h,v)\in \Ker L_a$. Suppose the inductive assumption holds for $k\le \ell$, i.e. for all $a\in J$ and all $(h,v)\in \Ker L_a$, $\big( (\nabla^k_\nu \mathrm{Ric})'(h) \big)^\intercal=0$ on $\Sigma^+_a$ for $k=0,1,\dots,\ell$. We now prove that the statement holds for $k=\ell+1$. Let $(h, v)\in \Ker L_a$. We may without loss of generality assume $a=0$ and that $h$ satisfies the geodesic gauge. Let $(p, z)$ be defined as above. Because of \eqref{eq:perturbed-bdry} and $A'(h)=0$, we see that $ \big(p-L_X h, z-X(v) \big)\in \Ker L$. Therefore, we can apply the inductive assumption to $\big(p-L_X h, z-X(v) \big)$ and get \[ \big((\nabla^\ell_\nu \mathrm{Ric} )'(p-L_X h)\big)^\intercal=0 \ \ \mbox{ on } \Sigma^+. \] Computations similar to \eqref{limit} yield \begin{align*} 0 &= \big((\nabla^\ell_\nu \mathrm{Ric} )'(p-L_X h)\big)^\intercal= \lim_{t_j\to 0}\frac{1}{t_j}\Big((\nabla^\ell_\nu\mathrm{Ric} )'|_{g_{t_j}}\big(h(t_j)-\psi_{t_j}^*(h) \big) \Big)^\intercal\\ & = -\lim_{t_j\to 0}\frac{1}{t_j}\Big((\nabla^\ell_\nu\mathrm{Ric} )'|_{g_{t_j}}(\psi_{t_j}^*(h))\Big)^\intercal =-\lim_{t_j\to 0}\frac{1}{t_j}\Big(\psi_{t_j}^*\big( (\nabla^\ell_\nu\mathrm{Ric} )'(h)\big) \Big)^\intercal\\ &=-\Big(L_X\big( (\nabla^\ell_\nu \mathrm{Ric} )'(h) \big)\Big)^\intercal=-\zeta \Big(\nabla_\nu \big( (\nabla^\ell_\nu \mathrm{Ric} )'(h)\big) \Big)^\intercal. \end{align*} This implies that $\Big(\nabla_\nu \big( (\nabla^\ell_\nu \mathrm{Ric} )'(h)\big) \Big)^\intercal=0$ on $\Sigma^+$, and thus $\big((\nabla_\nu^{\ell+1} \mathrm{Ric})'(h)\big)^\intercal=0$ on $\Sigma^+$ by \eqref{eq:commutative}. \end{proof} \begin{proof}[Proof of Theorem~\ref{generic}] So far we have not used the assumption that $\pi_1(M\setminus \Omega_t, \hat \Sigma_t)=0$ nor that $\hat \Sigma_t$ is analytic. Using those assumptions and Theorem~\ref{th:zero}, we see that for $t\in J$, $\Sigma_t$ is static regular of type (II) in $(M\setminus \Omega_t, \bar g, \bar u)$.
\end{proof} For the rest of this section, we discuss how Corollary~\ref{co2} follows directly from Theorem~\ref{existence} and Theorem~\ref{generic}. \begin{proof}[Proof of Corollary~\ref{co2}] Fix the background metric as a Schwarzschild manifold $(\mathbb R^n\setminus B_{r_m}, \bar g_m, \bar u_m)$, where $m>0$ and \[ r_m = (2m)^{\frac{1}{n-2}}, \quad g_m = \left( 1- \tfrac{2m}{r^{n-2}}\right)^{-1} dr^2 + r^2 g_{S^{n-1}}, \quad u_m = \sqrt{1-\tfrac{2m}{r^{n-2}}}. \] Denote by $S_r = \{ x\in \mathbb{R}^n: |x| = r \}$ the $(n-1)$-dimensional spheres. Then the manifold is foliated by strictly stable CMC hypersurfaces $\{ S_r\}$, and note that $H_{g_m}=0$ on $S_{r_m}$. Also note that the family $\{S_r\}$ obviously gives a one-sided family of hypersurfaces as in Definition~\ref{def:one-sided}. We also note that each $S_r$ is an analytic hypersurface in the spherical coordinates $\{r, \theta_1,\dots, \theta_{n-1}\}$ in which $g_m$ is analytic. Given any $c>0$, we can find some $\delta>0$ such that $S_{r}$ has mean curvature less than $c$ and $u_m < \frac{c}{2}$ for $r\in (r_m+\delta, r_m+2\delta)$. We apply Theorem~\ref{generic} and obtain that for $r$ in an open dense subset $J\subset (r_m+\delta, r_m+2\delta)$, $S_r$ is static regular of type (II) in $(\mathbb R^n\setminus B_r, \bar g_m, \bar u_m)$. Fix $r \in J$ and denote the boundary $S_r$ by $\Sigma$. By Theorem~\ref{existence}, there exist positive constants $\epsilon_0, C$ such that for any $\epsilon\in (0, \epsilon_0)$ and for any $(\tau, \phi)$ satisfying $\|(\tau, \phi) - ( g_m^\intercal, H_{g_m})\|_{\C^{2,\alpha}(\Sigma)\times \C^{1,\alpha}(\Sigma)} < \epsilon$, there is a static vacuum pair $(g, u)$ such that $\|(g, u) - (g_m, u_m)\|_{\C^{2,\alpha}_{-q}(\mathbb R^n \setminus B_{r})}\le C\epsilon$. By choosing $\epsilon$ small, we have $|u-u_m|<\frac{c}{4}$ and thus $u<c$. In particular, we pick the prescribed mean curvature $\phi= H_{g_m}< c$ on $\Sigma$ and the prescribed metric $\tau$ not isometric to the round metric on an $(n-1)$-dimensional sphere of any radius. Since the background Schwarzschild manifold has a foliation by strictly stable CMC hypersurfaces, for $\epsilon$ sufficiently small, the metric $g$ is also foliated by strictly stable CMC $(n-1)$-dimensional spheres. However, such $g$ cannot be rotationally symmetric. To see this, suppose on the contrary that $g$ is rotationally symmetric; then by the uniqueness of CMC hypersurfaces in \cite{Brendle:2013}, the induced metric on the boundary $\Sigma$ must be the round metric on a sphere of some radius. This contradicts our choice of $\tau$. We can vary $\tau$ to obtain many asymptotically flat, static vacuum metrics that are not isometric to one another. \end{proof} \section{Extension of local $h$-Killing vector fields}\label{se:h} Let $h$ be a symmetric $(0,2)$-tensor on a Riemannian manifold $(M, g)$. We say a vector field $X$ is \emph{$h$-Killing} if \[ L_X g =h. \] The following result extends the classical result of Nomizu \cite{Nomizu:1960} for $h\equiv 0$. \begin{manualtheorem}{\ref{th:extension}} Let $(M, g)$ be a connected, analytic Riemannian manifold. Let $h$ be an analytic, symmetric $(0,2)$-tensor on $M$. Let $U\subset M$ be a connected open subset satisfying $\pi_1(M, U)=0$. Then every $h$-Killing vector field $X$ in $U$ can be extended to a unique $h$-Killing vector field on the whole manifold $M$.
\end{manualtheorem} Given a symmetric $(0,2)$-tensor $h$, recall that in Section~\ref{se:h1} we defined the $(1,2)$-tensor $T_h$ by, in local coordinates, \[ (T_h)^i_{jk}= \tfrac{1}{2} (h^i_{j;k} + h^i_{k;j} - h_{jk;}^{\;\;\;\;\; i} ) \] where the upper index $h^i_j$ is raised by $g$, and note that $(T_h)^i_{jk}$ is symmetric in $(j, k)$. The formula from Lemma~\ref{le:X} is the main motivation for defining the ODE system~\eqref{eq:ODE} below. \begin{manuallemma}{\ref{le:X}} Let $X$ be an $h$-Killing vector field. Then for any vector field $V$, we have \begin{align*} \nabla_V (\nabla X) = - R(X, V) + T_h (V) \end{align*} where $R(X, V) := \nabla_X \nabla_V - \nabla_V \nabla_X - \nabla_{[X, V]}$. \end{manuallemma} Let $p\in U$ and let $\Omega\subset M$ be a neighborhood of $p$ covered by the geodesic normal coordinate chart. We shall extend $X$ to a unique $h$-Killing vector field in $\Omega$. For any point $q\in \Omega$, let $\gamma(t)$ be the geodesic connecting $p$ and $q$. We denote $V= \gamma'(t)$. Consider the following inhomogeneous, linear ODE system for a vector field $\hat X$ and a $(1,1)$-tensor $\omega$ along $\gamma(t)$: \begin{align}\label{eq:ODE} \begin{split} \nabla_{V} \hat X &= \omega (V)\\ \nabla_V \omega&= -R(\hat X, V) + T_h (V). \end{split} \end{align} Let $\hat X, \omega$ be the unique solution to \eqref{eq:ODE} with the initial conditions at $\gamma(0)=p$: \begin{align*} \hat X(p) = X(p) \quad \mbox{ and } \quad \omega(p)= (\nabla X )(p). \end{align*} On the other hand, by Lemma~\ref{le:X}, $X, \nabla X$ also solve~\eqref{eq:ODE} with the same initial conditions. By uniqueness of solutions to ODEs, we have $\hat X = X$ and $\omega = \nabla X$ on the connected segment of $\gamma(t)\cap U$ containing the initial point $\gamma(0)$. By varying the point $q$, the vector field $\hat{X}$ is defined everywhere in $\Omega$ and is smooth by smooth dependence of solutions to ODEs. Moreover, since $(M,g)$ is analytic, its geodesics are analytic curves, and hence $\hat X$ is analytic along $\gamma(t)$. We summarize the above construction as the following lemma. \begin{lemma}\label{le:Xhat} Let $(M, g)$ be a smooth manifold such that $\Int M$ is analytic. Let $h$ be an analytic symmetric $(0,2)$-tensor in $\Int M$ and smooth in $\overline M$, and $X$ be an $h$-Killing vector field in an open subset $U\subset \Int M$. For $p\in U$, let $\Omega\subset M$ be a geodesic normal neighborhood of $p$. Then there is a smooth vector field $\hat X$ and a smooth $(1,1)$-tensor $\omega$ in $\Omega$ such that $\hat X=X, \omega = \nabla X$ in a neighborhood of~$p$ in $U$, and $(\hat X, \omega)$ solves \eqref{eq:ODE} along each $\gamma(t)$ and is analytic in $t$. \end{lemma} After extending $X$ to $\hat X$ in $\Omega$ as above, we show that ${\hat X}$ is $h$-Killing in $\Omega$. As in Lemma \ref{le:V}, one may use the Cauchy-Kovalevskaya Theorem to say that $\hat X$ also depends analytically on angular variables (not only on $t$) in $\Omega$ and hence $L_{\hat X} g= h$ in $\Omega$ by analyticity. Alternatively, we give another proof that is similar to Nomizu's original argument in the next proposition. \begin{proposition}\label{pr:extension} Let $\hat{X}$ and $\omega$ be as in Lemma~\ref{le:Xhat}. Then ${\hat X}$ is $h$-Killing in $\Omega$ and $\hat X = X$ everywhere in $U$. \end{proposition} \begin{proof} We first show that \begin{align}\label{eq:omega} g(\omega(Y), Z)+ g(\omega(Z), Y) = h(Y, Z) \end{align} for any vectors $Y, Z$ at an arbitrary point $q\in\Omega$.
We can extend $Y, Z$ analytically in $t$ along the geodesic $\gamma(t)$ from $p$ to $q$. Thus, the left hand side of \eqref{eq:omega} along $\gamma(t)$ is analytic. We also know that \eqref{eq:omega} holds on $\gamma(t)$ for $t$ sufficiently small because $\omega = \nabla X$ in a neighborhood of $p$ and $X$ is $h$-Killing. Thus, \eqref{eq:omega} holds along the whole path $\gamma(t)$ by analyticity, and in particular at $q$. Next, we \emph{claim} that given an arbitrary vector $Y$ at a point in $\Omega$, say $\gamma(t_0)$ for some geodesic $\gamma$ starting at $p$, if we extend $Y$ such that $[V, Y]=0$ along $\gamma$, then $\nabla_Y \hat X - \omega(Y )$ along~$\gamma(t)$ is analytic in $t$. (We remark that clearly $\nabla_V \hat X$ is already analytic along $\gamma$, so the main point in the following proof is to show that it holds for general $Y$.) Once the claim is proved, we get \begin{align}\label{eq:X} \nabla \hat X = \omega \quad\mbox{ in } \Omega. \end{align} \begin{proof}[Proof of Claim] Note that along $\gamma(t)$, $[V, Y] = \nabla_V Y - \nabla_Y V=0$ becomes a first-order linear ODE for $Y$ with analytic coefficients along $\gamma(t)$, and thus $Y$ is analytic along $\gamma(t)$. We show that $\nabla_Y \hat X - \omega(Y )$, together with $\nabla_Y \omega - R(Y, \hat X)$, solves the following ODE system: \begin{align}\label{eq:ODE2} \begin{split} \nabla_V \big(\nabla_Y \hat X - \omega(Y )\big) &= \big( \nabla_Y \omega - R(Y, \hat X)\big) V- T_h (V, Y)\\ \nabla_V \left(\nabla_Y \omega - R(Y, \hat X) \right) &= - R\big(\nabla_Y \hat X - \omega (Y), V\big) \\ &\quad -R(\omega(Y), V) + \nabla_Y (T_h(V))+ R(V, Y) \omega \\ &\quad + (\nabla_{\hat X} R) (V, Y)- R(Y, \omega(V)). \end{split} \end{align} Note that the inhomogeneous term $T_h (V, Y)$ and the inhomogeneous terms in the third and fourth lines above are all analytic along $\gamma(t)$. Any solution to the above linear ODE system~\eqref{eq:ODE2}, which has analytic coefficients and analytic inhomogeneous terms, is analytic along $\gamma(t)$. In particular, $\nabla_Y \hat X - \omega(Y )$ is analytic along $\gamma(t)$. The computations are similar to \cite[Proof of Theorem 4]{Nomizu:1960}; we include them for completeness. For the first identity, \begin{align*} &\nabla_V \left(\nabla_Y \hat X - \omega(Y )\right) \\ &= \nabla_Y \nabla_V \hat X + R(V, Y)\hat X - (\nabla_V \omega)(Y) - \omega(\nabla_V Y)\\ &=\nabla_Y (\omega (V)) + R(V, Y) \hat X + R(\hat X, V) Y - T_h (V, Y) - \omega(\nabla_V Y)\quad \mbox{(by \eqref{eq:ODE})}\\ &=(\nabla_Y \omega )(V) - R(Y, \hat X) V - T_h (V, Y)\quad \mbox{(by Bianchi identity)} \end{align*} where we also use $[V, Y]=0$. To prove the second identity, we compute each term below: \begin{align*} \nabla_V \nabla_Y \omega &= \nabla_Y \nabla_V \omega + R(V, Y) \omega \\ &=\nabla_Y \big(-R(\hat X, V) + T_h(V)\big)+ R(V, Y) \omega\\ &= -(\nabla_Y R)(\hat X, V) - R(\nabla_Y \hat X, V) - R(\hat X, \nabla_Y V) + \nabla_Y (T_h(V))+ R(V, Y) \omega\\ \nabla_V (R(Y, \hat X)) &= (\nabla_V R)(Y, \hat X) + R(\nabla_V Y, \hat X) + R(Y, \nabla_V \hat X)\\ &= (\nabla_V R)(Y, \hat X) - R(\hat X, \nabla_V Y) + R(Y, \omega(V)).
\end{align*} Subtracting the previous two identities and noting that the terms $ - R(\hat X, \nabla_Y V)$ and $R(\hat X, \nabla_V Y)$ cancel out by $[V, Y]=0$, we derive \begin{align*} &\nabla_V \nabla_Y \omega -\nabla_V \big(R(Y, \hat X)\big)\\ &= -(\nabla_Y R)(\hat X, V) - R(\nabla_Y \hat X, V) + \nabla_Y (T_h(V))+ R(V, Y) \omega\\ &\quad - (\nabla_V R)(Y, \hat X) - R(Y, \omega(V))\\ &= - R(\nabla_Y \hat X -\omega(Y), V) -R(\omega(Y), V) + \nabla_Y (T_h(V))+ R(V, Y) \omega \\ &\quad + (\nabla_{\hat X} R) (V, Y)- R(Y, \omega(V))\quad \mbox{(by Bianchi identity)}. \end{align*} Rearranging the terms gives the second identity in \eqref{eq:ODE2}. \end{proof} Lastly, \eqref{eq:omega} and \eqref{eq:X} together imply that $\hat X$ is $h$-Killing in $\Omega$. Since $U$ is connected and $X-\hat X$ is a Killing vector field in $U$ that vanishes identically on an open subset, we have $X = \hat X$ in $U$. \end{proof} We have shown how to extend the $h$-Killing vector field $X$ in a geodesic normal neighborhood. Using the assumption that $\pi_1(M, U)=0$, we show how to extend $X$ globally and complete the proof of Theorem~\ref{th:extension}. \begin{proof}[Proof of Theorem~\ref{th:extension}] For any point $q$ in $M$, we let $\gamma$ be a path from $p\in U$ to $q$. The path $\gamma(t)$ is covered by finitely many geodesic normal neighborhoods. We extend $X$ at $q$ along the path by Proposition~\ref{pr:extension}. For any other path $\tilde{\gamma}$ from $U$ to $q$ that is sufficiently close to $\gamma$, $\tilde \gamma$ is also covered by the same collection of neighborhoods, and thus the extension along $\tilde{\gamma}$ gives the same definition of $X$ at $q$. To show that the definition of $X$ at $q$ does not depend on the paths from $U$ to~$q$, we use the assumption that $\pi_1(M, U)=0$. Since any path $\tilde \gamma $ from $U$ to $q$ is homotopic to $\gamma$ relative to $U$, there are finitely many paths from $U$ to $q$, say $\gamma_1=\gamma, \gamma_2, \gamma_3, \dots, \gamma_k = \tilde{\gamma}$, such that each pair of consecutive paths, $\gamma_i$ and $\gamma_{i+1}$, can be covered by the same collection of neighborhoods. Thus the extension of $X$ at $q$ is the same for each consecutive pair, and hence along all those paths. That completes the proof. \end{proof}
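As an independent illustration of Lemma~\ref{le:X} (a sketch only, not part of the argument above): in Euclidean coordinates the curvature term drops out, and the lemma reduces to the pointwise identity $X_{i,jk}=(T_h)_{ijk}$ whenever $h_{ij}=\partial_i X_j+\partial_j X_i=(L_X g_{\mathbb E})_{ij}$, which is precisely what makes the ODE system~\eqref{eq:ODE} close up in $(\hat X,\omega)$. The following short SymPy computation, with an arbitrarily chosen sample vector field, verifies this flat-space identity.
\begin{verbatim}
# Flat-space check of Lemma le:X (illustrative sketch only): on Euclidean R^3
# the curvature term vanishes, so the lemma reduces to X_{i,jk} = (T_h)_{ijk}
# whenever h_{ij} = d_i X_j + d_j X_i.
import sympy as sp

x, y, z = sp.symbols('x y z')
co = [x, y, z]
X = [x**2 + y*z, sp.sin(x) + z**2, x*y*z]   # arbitrary sample vector field

# h = L_X g_E in Cartesian coordinates
h = [[sp.diff(X[j], co[i]) + sp.diff(X[i], co[j]) for j in range(3)]
     for i in range(3)]

# (T_h)_{ijk} = (h_{ij,k} + h_{ik,j} - h_{jk,i})/2, indices lowered trivially
def T_h(i, j, k):
    return (sp.diff(h[i][j], co[k]) + sp.diff(h[i][k], co[j])
            - sp.diff(h[j][k], co[i])) / 2

print(all(sp.simplify(sp.diff(X[i], co[j], co[k]) - T_h(i, j, k)) == 0
          for i in range(3) for j in range(3) for k in range(3)))  # True
\end{verbatim}
In a curved ambient metric the same computation acquires the correction term $-R(X,V)$, which is why the ODE system~\eqref{eq:ODE} couples $\hat X$ and $\omega$ through the curvature tensor.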
\section{Introduction} \label{Introduction} In relativistic quantum field theory, the nature of the field modes associated with the spontaneous breaking of an internal symmetry is now standard lore. When a global internal symmetry is spontaneously broken, one or more massless modes called Nambu-Goldstone (NG) modes appear \cite{ng}. If instead the symmetry is local, the Higgs mechanism can occur: the massless NG modes play the role of additional components of the gauge fields, which then propagate as massive modes \cite{hm}. In either case, the spontaneous symmetry breaking is typically driven by a potential term $V$ in the Lagrange density. The vacuum field configuration lies in a minimum $V_0$ of $V$. The massless NG modes can be understood as field excitations about the vacuum that preserve the value $V_0$, and they are associated with the broken generators of the symmetry. For many potentials, there are also additional excitations involving other values of $V$. These excitations, often called Higgs modes, correspond to additional massive modes that are distinct from any massive gauge fields. This standard picture changes when the spontaneous breaking involves a spacetime symmetry rather than an internal one. In this work, we focus on spontaneous breaking of Lorentz and diffeomorphism symmetries, for which the corresponding Higgs mechanisms exhibit some unique features \cite{ks}. Spontaneous Lorentz violation occurs when one or more Lorentz-nonsinglet field configurations acquire nonzero vacuum expectation values. The field configurations of interest can include fundamental vectors or tensors, derivatives of scalars and other fields, and Lorentz-nonsinglet composites. The nonzero vacuum values are manifest both on the spacetime manifold and in local frames \cite{akgrav}. Their origin in spontaneous violation implies both local Lorentz violation and diffeomorphism violation, along with the existence of NG modes \cite{bk}. At the level of an underlying Planck-scale theory, numerous proposals exist that involve spontaneous Lorentz violation, including ones based on string theory \cite{ksp}, noncommutative field theories \cite{ncqed}, spacetime-varying fields \cite{spacetimevarying}, quantum gravity \cite{qg}, random-dynamics models \cite{fn}, multiverses \cite{bj}, brane-world scenarios \cite{brane}, supersymmetry \cite{susy}, and massive gravity \cite{modgrav1,modgrav2,modgrav3}. At experimentally accessible scales, the observable signals resulting from Lorentz breaking can be described using effective field theory \cite{kp}. The general realistic effective field theory containing the Lagrange densities for both general relativity and the Standard Model along with all scalar terms involving operators for Lorentz violation is called the Standard-Model Extension (SME) \cite{ck,akgrav}. Searches for low-energy signals of Lorentz violation represent a promising avenue of investigation involving the phenomenology of quantum gravity \cite{cpt07}. Numerous experimental measurements of SME coefficients for Lorentz violation have already been performed \cite{kr}, including ones with photons \cite{photonexpt}, electrons \cite{eexpt,eexpt2,eexpt3}, protons and neutrons \cite{ccexpt,spaceexpt,bnsyn}, mesons \cite{hadronexpt}, muons \cite{muexpt}, neutrinos \cite{nuexpt}, the Higgs \cite{higgs}, and gravity \cite{gravexpt,bak06}. For spontaneous Lorentz and diffeomorphism breaking, a general analysis of the nature of the NG modes and Higgs mechanisms is provided in Ref.\ \cite{bk}. 
One result is that the spontaneous breaking of local Lorentz symmetry implies spontaneous breaking of diffeomorphism symmetry and vice versa. Since six local Lorentz transformations and four diffeomorphisms can be broken, up to ten NG modes can appear. To characterize these, it is natural to adopt the vierbein formalism \cite{uk}, in which the roles of local Lorentz transformations and diffeomorphisms are cleanly distinguished. It turns out that the vierbein itself naturally incorporates all ten modes. In an appropriate gauge, the six Lorentz NG modes appear in the antisymmetric components of the vierbein, while the four diffeomorphism NG modes appear along with the usual gravitational modes in the symmetric components. The dynamical behavior of the various NG modes is determined by the structure of the action \cite{bk}. In a Lagrange density formed from tensor quantities and with diffeomorphism-covariant kinetic terms, the diffeomorphism NG modes are nonpropagating. This feature, unique to diffeomorphism symmetry, can be viewed as arising because the diffeomorphism NG field excitations that preserve $V_0$ include metric excitations, and the combined excitations cancel at propagation order in covariant derivatives and curvature. In contrast, the number of propagating Lorentz NG modes is strongly model dependent. For example, choosing the kinetic terms in the Lagrange density of the original field theory to eliminate possible ghost modes can also prevent the propagation of one or more Lorentz NG modes, leaving them instead as auxiliary fields. Several types of possible Higgs mechanisms can be distinguished when spacetime symmetries are spontaneously broken. These have features distinct from the conventional Higgs mechanism of gauge field theories \cite{hm} and Higgs mechanisms involving gravity without Lorentz violation \cite{hmnonlv}. The analysis of Higgs mechanisms can be performed either using the vierbein formalism, which permits tracking of Lorentz and diffeomorphism properties, or by working directly with fields on the spacetime manifold. The results of both approaches are equivalent. For local Lorentz symmetry, the role of the gauge fields in the vierbein formalism is played by the spin connection. The \it Lorentz Higgs mechanism \rm occurs when the Lorentz NG modes play the role of extra components of the spin connection \cite{bk}. Some components of the spin connection then acquire mass via the covariant derivatives in the kinetic part of the Lagrange density. Explicit models displaying the Lorentz Higgs mechanism are known. For this mechanism to occur, the components of the spin connection must propagate as independent degrees of freedom, which requires a theory based on Riemann-Cartan geometry. If instead the theory is based on Riemann geometry, like General Relativity, the spin connection is fixed nonlinearly in terms of the vierbein and its derivatives. The presence of these derivatives ensures that no mass terms emerge from the kinetic part of the Lagrange density, although the vierbein propagator is modified. In the context of diffeomorphism symmetry, the role of the gauge fields is played by the metric. For spontaneous diffeomorphism breaking in Riemann spacetime, a conventional \it diffeomorphism Higgs mechanism \rm cannot generate a mass for the graviton because the connection and hence the analogue of the usual mass term involve derivatives of the metric \cite{ks}. 
Also, since diffeomorphism NG modes are nonpropagating in a Lagrange density with covariant kinetic terms for reasons mentioned above, the propagating NG degrees of freedom required to generate massive fields via a conventional diffeomorphism Higgs mechanism are lacking. In a conventional gauge theory with a nonderivative potential $V$, the gauge fields are absent from $V$ and so the potential cannot directly contribute to the gauge masses. However, in spontaneous Lorentz and diffeomorphism violation, massive Higgs-type modes involving the vierbein can arise via the \it alternative Higgs mechanism, \rm which involves the potential $V$ \cite{ks}. The key point is that $V$ contains both tensor and metric fields, so field fluctuations about the vacuum value $V_0$ can contain quadratic mass terms involving the metric. In this work, we study the nature and properties of the additional massive Higgs-type modes arising from this alternative Higgs mechanism. A general treatment is provided for a variety of types of potentials $V$ in gravitationally coupled theories with Riemann geometry. We investigate the effects of the massive modes on the physical properties of gravity. In certain theories with spontaneous Lorentz violation, the NG modes can play the role of photons \cite{bk}, and we also examine the effects of the massive modes on electrodynamics in this context. The next section of this work provides a general discussion of the origin and basic properties of the massive Higgs-type modes. Section \ref{Bumblebee Models} analyzes the role of these modes in vector theories with spontaneous Lorentz violation, known as bumblebee models. In Sec.\ \ref{Examples}, the massive modes in a special class of bumblebee models are studied in more detail for several choices of potential in the Lagrange density, and their effects on both the gravitational and electromagnetic interactions are explored. Section \ref{Summary} summarizes our results. Some details about transformation laws are provided in the Appendix. Throughout this work, the conventions and notations of Refs.\ \cite{akgrav,bk} are used. \section{Massive Modes} \label{Massive Modes} The characteristics of the massive Higgs-type modes that can arise from the alternative Higgs mechanism depend on several factors. Among them are the type of field configuration acquiring the vacuum expectation value and the form of the Lagrange density, including the choice of potential $V$ inducing spontaneous breaking of Lorentz and diffeomorphism symmetries. In this section, we outline some generic features associated with the alternative Higgs mechanism in a theory of a general tensor field $T_{\lambda\mu\nu\cdots}$ in a Riemann spacetime with metric $g_{\mu\nu}$. We consider first consequences of the choice of potential $V$, then discuss properties of vacuum excitations, and finally offer comments on the massive modes arising from the alternative Higgs mechanism. \subsection{Potentials} \label{Potentials} The potential $V$ in the Lagrange density is taken to trigger a nonzero vacuum expectation value \begin{equation} \vev{T_{\lambda\mu\nu\cdots}}= t_{\lambda\mu\nu\cdots} \label{Tvev} \end{equation} for the tensor field, thereby spontaneously breaking local Lorentz and diffeomorphism symmetries. In general, $V$ varies with $T_{\lambda\mu\nu\cdots}$, its covariant derivatives, the metric $g_{\mu\nu}$, and possibly other fields. However, for simplicity we suppose here that $V$ has no derivative couplings and involves only $T_{\lambda\mu\nu\cdots}$ and $g_{\mu\nu}$. 
We also suppose that $V$ is everywhere positive except at its degenerate minima, which have $V = 0$ and are continuously connected via the broken Lorentz and diffeomorphism generators. The vacuum is chosen to be the particular minimum in which $T_{\lambda\mu\nu\cdots}$ attains its nonzero value \rf{Tvev}. Since the Lagrange density is an observer scalar, $V$ must depend on fully contracted combinations of $T_{\lambda\mu\nu\cdots}$ and $g_{\mu\nu}$. Provided $T_{\lambda\mu\nu\cdots}$ has finite rank, the number of independent scalar combinations is limited. For example, for a symmetric two-tensor field $C_{\mu\nu}$ there are four independent possibilities, which can be given explicitly in terms of traces of powers of $C_{\mu\nu}$ and $g_{\mu\nu}$ \cite{kpgr}. It is convenient to denote generically these scalar combinations as $X_m$, where $m= 1,2,\ldots, M$ ranges over the number $M$ of independent combinations. The functional form of the potential therefore takes the form \begin{equation} V = V(X_1,X_2,\ldots,X_M) \end{equation} in terms of the scalars $X_m$. The definition of the scalars $X_m$ can be chosen so that $X_m = 0$ in the vacuum for all $m$. For example, a choice involving a quadratic combination of the tensor with zero vacuum value is \begin{equation} X = T_{\lambda\mu\nu\cdots} g^{\lambda\alpha} g^{\mu\beta} g^{\nu\gamma} \cdots T_{\alpha\beta\gamma\cdots} \pm t^2, \label{X} \end{equation} with $t^2$ the norm \begin{equation} t^2 = \mp t_{\lambda\mu\nu\cdots} \vev{g^{\lambda\alpha}} \vev{g^{\mu\beta}} \vev{g^{\nu\gamma}} \cdots t_{\alpha\beta\gamma\cdots} , \label{norm} \end{equation} where $\vev{g^{\lambda\alpha}}$ is the vacuum value of the inverse metric. The $\mp$ sign is introduced for convenience so that $t^2$ can be chosen nonnegative. In principle, $t^2$ could vary with spacetime position, which would introduce explicit spacetime-symmetry breaking, but it suffices for present purposes to take $t^2$ as a real nonnegative constant. The $M$ conditions $X_m = 0$ fix the vacuum value $t_{\lambda\mu\nu\cdots}$. If only a subset of $N$ of these $M$ conditions is generated in a given theory, then the value of $t_{\lambda\mu\nu\cdots}$ is specified up to $(M-N)$ coset transformations, and so the vacuum is degenerate under $(M-N)$ additional gauge symmetries. Note that these $(M-N)$ freedoms are distinct from Lorentz and diffeomorphism transformations. It is useful to distinguish two classes of potentials $V$. The first consists of smooth functionals $V$ of $X_m$ that are minimized by the conditions $X_m=0$ for at least some $m$. These potentials therefore satisfy $V = V_m^\prime = 0$ in the vacuum, where $V_m^\prime$ denotes the derivative with respect to $X_m$, and they fix the vacuum value of $T_{\lambda\mu\nu\cdots}$ to $t_{\lambda\mu\nu\cdots}$ modulo possible gauge freedoms. A simple example with the quantity $X$ in Eq.\ \rf{X} is \begin{equation} V_S(X) = {\textstyle{1\over 2}} \kappa X^2, \label{VS} \end{equation} where $\kappa$ is a constant. In this case, the vacuum value $t_{\lambda\mu\nu\cdots}$ is a solution of $V = V^\prime = 0$, where $V^\prime$ denotes a derivative with respect to $X$. If the matrix $V_{mn}^{\prime\prime}$ of second derivatives has nonzero eigenvalues, the smooth functionals $V$ can give rise to quadratic mass terms in the Lagrange density involving the tensor and metric fields. Potentials $V$ of this type are therefore associated with the alternative Higgs mechanism, and they are the primary focus of our attention. 
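As an explicit check of these statements for the example \rf{VS} (a short computation rather than an additional assumption), the derivatives of $V_S$ with respect to $X$ are \begin{equation} V_S(0) = 0, \quad V_S^\prime\big|_{X=0} = \kappa X \big|_{X=0} = 0, \quad V_S^{\prime\prime} = \kappa \neq 0, \end{equation} so the vacuum conditions $V = V^\prime = 0$ hold at $X = 0$, while the nonvanishing second derivative is the source of the quadratic mass terms discussed below.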
A second class of potentials introduces Lagrange-multiplier fields $\lambda_m$ for at least some $m$, to impose directly the conditions $X_m=0$ as constraint terms in the Lagrange density. We consider here both linear and quadratic functional forms for these constraints. An explicit linear example is \begin{equation} V_L(\lambda, X) = \lambda X, \label{VL} \end{equation} while a quadratic one is \begin{equation} V_Q(\lambda, X) = {\textstyle{1\over 2}} \lambda X^2. \label{VQ} \end{equation} In each example, $\lambda$ is a Lagrange-multiplier field and has an equation of motion with solution $X=0$, so the value $t_{\lambda\mu\nu\cdots}$ is a vacuum solution for the tensor. Potentials in the Lagrange-multiplier class are unlikely to be physical in detail because they enforce singular slicings in the phase space for the fields. However, when used with care they can be useful as limiting approximations to potentials in the smooth class, including those inducing the alternative Higgs mechanism \cite{ks}. Note that positivity of the potentials can constrain the range of the Lagrange multiplier field. For example, the off-shell value of $\lambda$ in $V_L$ must have the same sign as $X$, while that of $\lambda$ in $V_Q$ must be non-negative. \subsection{Excitations} \label{Excitations} Field excitations about the vacuum solution \rf{Tvev} can be classified into five types: gauge modes, NG modes, massive modes, Lagrange-multiplier modes, and spectator modes. Gauge modes arise if the potential $V$ fixes only $N$ of the $M$ conditions $X_m = 0$, so that the vacuum is left unspecified by the remaining $(M-N)$ conditions. These modes can be disregarded for most purposes here because they can be eliminated via gauge fixing without affecting the physics. The NG modes are generated by the virtual action of the broken Lorentz and diffeomorphism generators on the symmetry-breaking vacuum. They preserve the vacuum condition $V=0$. Massive modes are excitations for which the symmetry breaking generates quadratic mass terms. Lagrange-multiplier modes are excitations of the Lagrange multiplier field. Finally, spectator modes are the remaining modes in the theory. For smooth potentials, field excitations preserving the conditions $X_m = 0$ for all $m$ have potential $V = 0$. They also satisfy $V_m^\prime = 0$. The NG modes are of this type. Excitations with $X_m \neq 0$ having nonzero potential $V \neq 0$ and $V_m^\prime \neq 0$ are massive modes, with mass matrix determined by the second derivatives $V_{mn}^{\prime\prime}$. Since the smooth potentials depend on the tensor $T_{\lambda\mu\nu\cdots}$ and the metric $g_{\mu\nu}$, the corresponding massive modes involve combinations of excitations of these fields. In contrast, for Lagrange-multiplier potentials the conditions $X_m = 0$ always hold on shell, which implies $V=0$ for all on-shell excitations. If also $V_m^\prime = 0$, then the excitations remain in the potential minimum and include the NG modes. For linear functional forms of $V$, it follows that $V_m^\prime = \lambda_m$, so $V_m^\prime$ is nonzero only when the Lagrange-multiplier field $\lambda_m$ is excited. Any excitations of $T_{\lambda\mu\nu\cdots}$ and $g_{\mu\nu}$ must have $V = V_m^\prime = 0$. For quadratic functional forms of $V$, one finds $V_m^\prime = 0$ for all excitations, including the $\lambda$ field.
We therefore can conclude that the combinations of $T_{\lambda\mu\nu\cdots}$ and $g_{\mu\nu}$ playing the role of massive modes for smooth potentials are constrained to zero for Lagrange-multiplier potentials. Evidently, studies of the alternative Higgs mechanism must be approached with care when the Lagrange-multiplier approximation to a smooth potential is adopted. The tensor excitations about the vacuum can be expressed by expanding $T_{\lambda\mu\nu\cdots}$ as \begin{equation} T_{\lambda\mu\nu\cdots} = t_{\lambda\mu\nu\cdots} + \tau_{\lambda\mu\nu\cdots}, \label{tau} \end{equation} where the excitation $\tau_{\lambda\mu\nu\cdots}$ is defined as the difference $\tau_{\lambda\mu\nu\cdots} \equiv \delta T_{\lambda\mu\nu\cdots} = T_{\lambda\mu\nu\cdots} - t_{\lambda\mu\nu\cdots}$ between the tensor and its vacuum value. We also expand the metric \begin{equation} g_{\mu\nu} = \vev{g_{\mu\nu}} + h_{\mu\nu} \label{ghmunu} \end{equation} in terms of the metric excitations $h_{\mu\nu}$ about the metric background value $\vev{g_{\mu\nu}}$. For simplicity and definiteness, in much of what follows we take the background metric to be that of Minkowski spacetime, $\vev{g_{\mu\nu}} = \eta_{\mu\nu}$. We also suppose $t_{\lambda\mu\nu\cdots}$ is constant in this background, so that $\partial_\alpha t_{\lambda\mu\nu\cdots} = 0$ in cartesian coordinates. Other choices can be made for the expansion of the tensor about its vacuum value. One alternative is to expand the contravariant version of the tensor as \begin{equation} T^{\lambda\mu\nu\cdots} = \overline t^{\lambda\mu\nu\cdots} + \widetilde T^{\lambda\mu\nu\cdots}. \label{olt} \end{equation} In a Minkowski background, the vacuum values in the two expansions \rf{tau} and \rf{olt} are related simply by \begin{equation} \overline t^{\lambda\mu\nu\cdots} = \eta^{\lambda\alpha}\eta^{\mu\beta}\eta^{\nu\gamma}\cdots t_{\alpha\beta\gamma\cdots}. \end{equation} However, the relationship between the two tensor excitations at leading order involves also the metric excitation: \begin{equation} \widetilde T^{\lambda\mu\nu\cdots} = \tau^{\lambda\mu\nu\cdots} - h^{\lambda\alpha}t_\alpha^{\pt{\alpha}\mu\nu\cdots} - h^{\mu\alpha}t^{\lambda\pt{\alpha}\nu\cdots}_{\pt{\lambda}\alpha} - h^{\nu\alpha}t^{\lambda\mu\pt{\alpha}\cdots}_{\pt{\lambda\mu}\alpha} - \ldots. \end{equation} In this expression, indices have been raised using the Minkowski metric. Any Lagrange multipliers $\lambda_m$ in the theory can also be expanded about their vacuum values $\overline\lambda_m$ as \begin{equation} \lambda_m = \overline\lambda_m + \widetilde\lambda_m . \end{equation} For linear Lagrange-multiplier potentials, the equations of motion for $T_{\lambda\mu\nu\cdots}$ and $g_{\mu\nu}$ provide constraints on the vacuum values $\overline\lambda_m$. In a Minkowski background and a potential yielding $X_m = 0$ for all $m$, the equations of motion for $T_{\lambda\mu\nu\cdots}$ can be solved to give \begin{equation} \overline\lambda_m = 0. \end{equation} For quadratic Lagrange-multiplier potentials, the $\lambda_m$ are absent from the equations of motion. Their vacuum values are therefore physically irrelevant, and $\overline\lambda_m = 0$ can also be adopted in this case. For the remainder of this work we take $\overline\lambda_m = 0$, and for notational simplicity we write $\lambda_m$ for both the full field $\lambda_m$ and the excitation $\widetilde\lambda_m$. 
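Before turning to the massive modes, it is useful to record the first-order expansion of a scalar of the form \rf{X} in these variables. A short calculation using the expansions \rf{tau} and \rf{ghmunu} in a Minkowski background gives \begin{equation} X \approx 2\, t^{\lambda\mu\nu\cdots} ( \tau_{\lambda\mu\nu\cdots} - {\textstyle{1\over 2}} h_{\lambda\alpha} t^\alpha_{\pt{\alpha}\mu\nu\cdots} - {\textstyle{1\over 2}} h_{\mu\beta} t_{\lambda\pt{\beta}\nu\cdots}^{\pt{\lambda}\beta} - \ldots ), \end{equation} where index contractions are performed with the Minkowski metric $\eta_{\mu\nu}$. Inserting this combination into the smooth potential \rf{VS} reproduces the quadratic form \rf{Vexpans} displayed in the next subsection.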
\subsection{Massive modes} \label{Massive modes} For a smooth potential $V$, the massive excitations typically involve a mixture of tensor and gravitational fields. As an example, consider the expression for $V_S$ in Eq.\ \rf{VS}. This can be expanded in terms of the excitations $\tau_{\lambda\mu\nu\cdots}$ and $h_{\mu\nu}$. Retaining only terms up to quadratic order in a Minkowski background gives \begin{eqnarray} V_S &\approx& 2 \kappa [ t^{\lambda\mu\nu\cdots} (\tau_{\lambda\mu\nu\cdots} - {\textstyle{1\over 2}} h_{\lambda\alpha} t^\alpha_{\pt{\alpha}\mu\nu\cdots} - {\textstyle{1\over 2}} h_{\mu\beta} t_{\lambda\pt{\beta}\nu\cdots}^{\pt{\lambda}\beta} \nonumber\\ && \qquad \qquad \qquad \quad - {\textstyle{1\over 2}} h_{\nu\gamma} t_{\lambda\mu\pt{\gamma}\cdots}^{\pt{\lambda\mu}\gamma} - \ldots)]^2 , \label{Vexpans} \end{eqnarray} where index contractions are performed with the Minkowski metric $\eta_{\mu\nu}$. Evidently, the massive excitations in this example involve linear combinations of $\tau_{\lambda\mu\nu\cdots}$ with contractions of $h_{\mu\nu}$ and $t_{\lambda\mu\nu\cdots}$. The explicit expressions for the massive modes can be modified by local Lorentz and diffeomorphism gauge fixing. The action is symmetric under 10 local Lorentz and diffeomorphism symmetries, so there are 10 possible gauge conditions. For an unbroken symmetry generator, the effects of a gauge choice are conventional. For a broken symmetry generator, a gauge choice fixes the location in field space of the corresponding NG mode. For example, suitable gauge choices can place all the NG modes in the vierbein \cite{bk}. These gauge choices also affect the form of the massive modes. They can be used to isolate some or all of the massive modes as components of either the gravitational or tensor fields by gauging other components to zero. Alternatively, the gauge freedom can be used to simplify the equations of motion, while the massive modes remain a mixture of the excitations $\tau_{\lambda\mu\nu\cdots}$ and $h_{\mu\nu}$. The behavior of the massive modes depends on the form of the kinetic terms in the Lagrange density as well as the form of the potential $V$. A gravitational theory with a dynamical tensor field $T_{\lambda\mu\nu\cdots}$ has kinetic terms for both $g_{\mu\nu}$ and $T_{\lambda\mu\nu\cdots}$ and hence for both $h_{\mu\nu}$ and $\tau_{\lambda\mu\nu\cdots}$. Since in the alternative Higgs mechanism the potential $V$ acts only as a source of mass, the issue of whether the massive modes propagate dynamically depends on the structure of these kinetic terms. In particular, propagating massive modes can be expected only if the theory \it without \rm the potential $V$ contains the corresponding propagating massless modes. It is desirable that any propagating modes be unitary and ghost free. To avoid unitarity issues with higher-derivative propagation, the kinetic term for the metric excitation $h_{\mu\nu}$ can be taken to emerge as usual from the Einstein-Hilbert action. In the absence of the potential $V$, this is also a ghost-free choice. For the tensor field, higher-derivative propagation can be avoided by writing the general kinetic term ${\cal L}_{\rm K}$ as a weighted sum of all scalar densities formed from quadratic expressions in $T_{\lambda\mu\nu\cdots}$ that involve two covariant derivatives. 
For example, one such scalar density is \begin{equation} {\cal L}_{\rm K} \sim e T_{\lambda\mu\nu\cdots} D_\alpha D^\alpha T^{\lambda\mu\nu\cdots}, \label{kinetic} \end{equation} where $e = \sqrt{-g}$ is the vierbein determinant. In the absence of the potential $V$, the ghost-free requirement places strong constraints on allowed forms of ${\cal L}_{\rm K}$ and typically involves gauge symmetry for the tensor. For a vector field, for example, the Maxwell action is the unique ghost-free combination in the Minkowski spacetime limit. In any case, since the massive modes are combinations of $h_{\mu\nu}$ and $\tau_{\lambda\mu\nu\cdots}$, it follows that ghost-free propagation is possible only if the kinetic terms for these combinations are ghost free. Note that the potential $V$ may explicitly break the tensor gauge symmetry, so requiring ghost-free kinetic terms is by itself insufficient to ensure ghost-free massive modes. Under the assumption of Lorentz invariance, the Fierz-Pauli action involving quadratic terms for $h_{\mu\nu}$ is the unique ghost-free choice for a free massive spin-2 field \cite{fp}. However, when coupled to a covariantly conserved energy-momentum tensor, the small-mass limit of a massive spin-2 field includes modes that modify the gravitational bending of light in disagreement with observation \cite{vdvz}. This presents an obstacle to constructing a viable theory of massive gravity. One avenue of investigation that might permit evading this obstacle is to relax the assumption of Lorentz invariance. Spontaneous Lorentz violation in closed string theory has been proposed \cite{modgrav1} as a mechanism that might lead to graviton mass terms. Models involving infrared modifications of gravity have also been proposed that involve spontaneous Lorentz violation with ghosts \cite{modgrav2} and that have explicit Lorentz violations \cite{modgrav3}. Explicit Lorentz violation is incompatible with Riemann and Riemann-Cartan geometries but may be compatible with Finsler or other geometries \cite{akgrav,gyb} or may be viewed as an approximation to spontaneous violation. In the present context, the possibility exists that the massive modes from the alternative Higgs mechanism could evade the Veltman-van Dam-Zakharov constraint via their origin in spontaneous Lorentz violation and their nature as mixtures of gravitational and tensor modes. For example, the expansion \rf{Vexpans} includes quadratic terms involving contractions of $h_{\mu\nu}$ with the vacuum value $t_{\lambda\mu\nu}$, so a model with suitable vacuum values and incorporating also a ghost-free propagator for the corresponding modes would describe a type of propagating massive gravity without conventional Fierz-Pauli terms. In the alternative Higgs mechanism, the existence of propagating massive modes involving the metric can be expected to affect gravitational physics. Effects can arise directly from the modified graviton propagator and also from the massive modes acting as sources for gravitational interactions. The latter can be understood by considering the energy-momentum tensor $T^{\mu\nu}$ of the full theory, which can be found by variation of the action with respect to the metric $g_{\mu\nu}$. The contribution $T_V^{\mu\nu}$ to $T^{\mu\nu}$ arising from the potential $V$ is \begin{equation} T_V^{\mu\nu} = - g^{\mu\nu} V + 2 V_m^\prime \fr {\delta X_m} {\delta g_{\mu\nu}} , \label{TV} \end{equation} where a sum on $m$ is understood. 
For the vacuum solution $X_m = 0$, which satisfies $V = V_m^\prime = 0$, $T_V^{\mu\nu}$ remains zero and the gravitational sourcing is unaffected. The same is true for excitations for which $V$ and $V_m^\prime$ both vanish, such as the NG modes. However, the massive modes arising from a smooth potential have nonzero $V$ and $V_m^\prime$ and can therefore act as additional sources for gravity. These contributions can lead to a variety of gravitational effects including, for example, modifications of the Newton gravitational potential in the weak-field limit, which could have relevance for dark matter, or cosmological features such as dark energy \cite{akgrav}. Note that nonpropagating modes can also have similar significant physical effects on gravitational properties. This possibility holds for any excitations appearing in $V$, whether they are physical, ghost, or Lagrange-multiplier modes. For example, any Lagrange-multiplier fields are auxiliary by construction and so the $\lambda_m$ excitations are nonpropagating. Nonetheless, for linear Lagrange-multiplier potentials these excitations can contribute to $T_V^{\mu\nu}$ even though $V = 0$ because $V_m^\prime$ is nonvanishing. However, a theory with a quadratic Lagrange-multiplier potential has $T_V^{\mu\nu}=0$ and therefore leaves the gravitational sourcing unaffected. \section{Bumblebee Models} \label{Bumblebee Models} In this section, we focus attention on the special class of theories in which the role of the tensor $T_{\lambda\mu\nu\cdots}$ is played by a vector $B_\mu$ that acquires a nonzero vacuum expectation value $b_\mu$. These theories, called bumblebee models, are among the simplest examples of field theories with spontaneous Lorentz and diffeomorphism breaking. In what follows, bumblebee models are defined, their properties under local Lorentz and diffeomorphism transformations are presented, their mode content is analyzed, and issues involving gauge fixing and alternative mode expansions are considered. \subsection{Basics} \label{Basics} The action $S_B$ for a single bumblebee field $B_\mu$ coupled to gravity and matter can be written as \begin{eqnarray} S_B &=& \int d^4 x~ {\cal L}_B \nonumber\\ &=& \int d^4 x~ ({\cal L}_g + {\cal L}_{gB} + {\cal L}_{\rm K} + {\cal L}_V + {\cal L}_{\rm M}). \end{eqnarray} In Riemann spacetime, the pure gravitational piece ${\cal L}_g$ of the Lagrange density is usually taken to be the Einstein-Hilbert term supplemented by the cosmological constant $\Lambda$. The gravity-bumblebee couplings are described by ${\cal L}_{gB}$, while ${\cal L}_{\rm K}$ contains the bumblebee kinetic terms and any self-interaction terms. The component ${\cal L}_V$ consists of the potential $V(B_\mu)$, including terms triggering the spontaneous Lorentz violation. Finally, ${\cal L}_{\rm M}$ involves the bumblebee coupling to the matter or other sectors in the model. The forms of ${\cal L}_{gB}$ and ${\cal L}_K$ are complicated in the general case. However, if attention is limited to terms quadratic in $B_\mu$ involving no more than two derivatives, then only five possibilities exist. 
The Lagrange density ${\cal L}_B$ can then be written as \begin{eqnarray} {\cal L}_B &=& \fr 1 {16\pi G} e (R - 2 \Lambda) + \si_1 eB^\mu B^\nu R_{\mu\nu} + \si_2 e B^\mu B_\mu R \nonumber\\ && - \frac 14 \ta_1 e B^{\mu\nu} B_{\mu\nu} + \frac 12 \ta_2 e D_\mu B_\nu D^\mu B^\nu \nonumber\\ && + \frac 12 \ta_3 e D_\mu B^\mu D_\nu B^\nu - eV + {\cal L}_{\rm M} , \label{bb} \end{eqnarray} where $G$ is the Newton gravitational constant and the field-strength tensor $B_{\mu\nu}$ is defined as \begin{equation} B_{\mu\nu} = \partial_\mu B_\nu-\partial_\nu B_\mu . \label{Bfield} \end{equation} The five real parameters $\si_1$, $\si_2$, $\ta_1$, $\ta_2$, $\ta_3$ in Eq.\ \rf{bb} are not all independent. Up to surface terms, which leave unaffected the equations of motion from the action, the condition \begin{eqnarray} \int d^4x~ ( eB^\mu B^\nu R_{\mu\nu} - \frac 12 e B^{\mu\nu} B_{\mu\nu} \hskip 50pt \nonumber\\ + e D_\mu B_\nu D^\mu B^\nu - e D_\mu B^\mu D_\nu B^\nu ) = 0 \label{cond} \end{eqnarray} is identically satisfied, so only four of the corresponding five terms in ${\cal L}_B$ are independent. No term of the form $e B^\mu B_\mu R$ appears in the condition \rf{cond}, so the parameter $\si_2$ remains unaffected while the four parameters $\si_1$, $\ta_1$, $\ta_2$, $\ta_3$ become linked. For example, the identity \rf{cond} implies that the action for the Lagrange density \rf{bb} with five nonzero parameters $\si_1$, $\si_2$, $\ta_1$, $\ta_2$, $\ta_3$ is equivalent to an action of the same form but with only four nonzero parameters $\si_1^\prime = \si_1 + {\textstyle{1\over 2}} \ta_3$, $\si_2^\prime = \si_2$, $\ta_1^\prime = \ta_1 + \ta_3$, $\ta_2^\prime = \ta_2 + \ta_3$, while $\ta_3^\prime = 0$. Moreover, other factors may also constrain some of the five parameters. For example, certain models of the form ${\cal L}_B$ yield equations of motion that imply additional relationships among the parameters. Also, certain physical limits such as the restriction to Minkowski spacetime can limit the applicability of the condition \rf{cond}. Some cases with specific parameter values may be of particular interest for reasons of physics, geometry, or simplicity, such as the models with $\ta_1 = 1$, $\ta_2 = \ta_3 = 0$ considered below, or the model with $\si_2 = - \si_1/2$ for which the bumblebee-curvature coupling involves the Einstein tensor, ${\cal L}_B \supset \si_1 eB^\mu B^\nu G_{\mu\nu}$. Since the most convenient choice of parameters depends on the specific model and on the physics being addressed, it is useful to maintain the five-parameter form \rf{bb} for generality. Following the discussion in Sec.\ \ref{Potentials}, we suppose the potential $V$ in Eq.\ \rf{bb} has no derivative couplings and is formed from scalar combinations of the bumblebee field $B_\mu$ and the metric $g_{\mu\nu}$. Only one independent scalar exists. It can be taken as the bumblebee version of $X$ in Eq.\ \rf{X}: \begin{equation} X = B_\mu g^{\mu\nu} B_{\nu} \pm b^2, \label{XB} \end{equation} where $b^2$ is a real nonnegative constant. The potential $V(X)$ itself can be smooth in $X$, like the form \rf{VS}, or it can involve Lagrange multipliers like the form \rf{VL} or \rf{VQ}. In any case, the vacuum is determined by the single condition \begin{equation} X = B_\mu g^{\mu\nu} B_\nu \pm b^2 = 0 . \label{Bsquare} \end{equation} In the vacuum, the potential vanishes, $V(X)=0$, and the fields $B_\mu$, $g_{\mu\nu}$ acquire vacuum values \begin{equation} B_\mu \to \vev{B_\mu} = b_\mu , \quad g_{\mu\nu} \to \vev{g_{\mu\nu}}.
\label{vevs} \end{equation} The nonzero value $b_\mu$, which obeys $b_\mu \vev{g^{\mu\nu}}b_\nu = \mp b^2$, spontaneously breaks both Lorentz and diffeomorphism symmetry. Note that the choice of the potential $V$ can also have implications for the parameters in Eq.\ \rf{bb}. For example, if the potential takes the quadratic Lagrange-multiplier form \rf{VQ}, then $\si_2$ can be taken as zero because a nonzero value merely acts to rescale $G$ and $\Lambda$. However, a nonzero value of $\si_2$ can have nontrivial consequences for models with other potentials, such as the smooth quadratic form \rf{VS}. For generic choices of parameters, the Lagrange density \rf{bb} is unitary because no more than two derivatives appear. However, as discussed in Sec.\ \ref{Massive modes}, the indefinite metric and generic absence of gauge invariance typically imply the presence of ghosts and corresponding negative-energy problems, which can tightly constrain the viability of various models \cite{ems}. If the gravitational couplings and the potential $V$ are disregarded, there is a unique set of parameters ensuring the absence of ghosts: $\ta_1 = 1$, $\ta_2 = \ta_3 = 0$. With this choice, the kinetic term for the bumblebee becomes the Maxwell action, in which the usual U(1) gauge invariance excludes ghosts. When the gravitational terms and couplings are included, this gauge invariance is maintained for the parameter choice $\ta_1 = 1$, $\ta_2 = \ta_3 = \si_1 = \si_2 = 0$. The further inclusion of the potential $V$ breaks the U(1) gauge symmetry, but the form of the kinetic term ensures a remnant constraint on the equations of motion arising from the identity \begin{equation} D_\mu D_\nu B^{\mu\nu} \equiv 0. \label{remconst} \end{equation} The action for these bumblebee models, introduced in Ref.\ \cite{ks}, is therefore of particular interest. The corresponding Lagrange density \begin{eqnarray} {\cal L}_{\rm KS} &=& \fr 1 {16\pi G} e (R - 2 \Lambda) - \frac 14 e B^{\mu\nu} B_{\mu\nu} - eV + {\cal L}_{\rm M} , \qquad \label{ksbb} \end{eqnarray} is investigated in more detail in Sec.\ \ref{Examples}. The reader is warned that some confusion about the relationship between these models and ones with nonzero $\ta_2$ and $\ta_3$ exists in the literature. In particular, results for the models \rf{ksbb} can differ from those obtained in models with nonzero $\ta_2$, $\ta_3$ via straightforward adoption of the limit $\ta_2, \ta_3 \to 0$, due to the emergence of the remnant constraint \rf{remconst}. The discussion above defines bumblebee models on the spacetime manifold, without introducing a local Lorentz basis. In this approach, the Lorentz NG modes are hidden within the bumblebee field $B_\mu$. Adopting instead a vierbein formulation reveals explicitly the local Lorentz properties of the models, and it also provides a natural way to incorporate spinor fields in the matter Lagrange density ${\cal L}_{\rm M}$. The vierbein $\vb \mu a$ converts tensors expressed in a local basis to ones on the spacetime manifold. The spacetime metric $g_{\mu\nu}$ is related to the Minkowski metric $\eta_{ab}$ in the local frame as \begin{equation} g_{\mu\nu} = \vb \mu a \vb \nu b \eta_{ab} , \label{gvb} \end{equation} while the bumblebee spacetime vector $B_\mu$ is related to the local bumblebee vector $B_a$ as \begin{equation} B_\mu = \vb \mu a B_a . 
\label{Bvb} \end{equation} A complete treatment in the vierbein formalism involves also introducing the spin connection $\nsc \mu a b$, which appears in covariant derivatives acting on local quantities. In Riemann-Cartan geometry, where the spacetime has both curvature and torsion, the spin connection represents degrees of freedom independent of the vierbein. However, experimental constraints on torsion are tight \cite{torsion}. In this work, we restrict attention to Riemann geometry, for which the torsion vanishes and the spin connection is fixed in terms of the vierbein. It therefore suffices for our purposes to consider the vierbein degrees of freedom and their role relative to the gravitational and NG modes. Bumblebee models in the more general context of Riemann-Cartan spacetime with nonzero torsion are investigated in Refs.\ \cite{akgrav,bk}. There is a substantial literature concerning theories of vacuum-valued vectors coupled to gravity. The five-parameter Lagrange density \rf{bb} excluding the potential $V$ and the cosmological constant was investigated by Will and Nordtvedt in the context of vector-tensor models of gravity \cite{wn}. Kosteleck\'y and Samuel (KS) \cite{ks} introduced the potential $V$ triggering spontaneous Lorentz violation and studied both the smooth quadratic potential \rf{VS} and the linear Lagrange-multiplier case \rf{VL} for the class of models given by Eq.\ \rf{ksbb}. The presence of the potential introduces a variety of qualitatively new effects, including the necessary breaking of U(1) gauge invariance \cite{akgrav}, the existence of massless NG modes and massive modes \cite{bk}, and potentially observable novel effects for post-Newtonian physics \cite{bak06} and for the matter sector \cite{kleh}. The potential $V$ also leads to a candidate alternative description of the photon \cite{bk} and the graviton \cite{akgrav,kpgr}. In Minkowski spacetime, more general potentials $V$ of hypergeometric form are known to satisfy the one-loop exact renormalization group \cite{baak}. Models of the form \rf{bb} with $\si_1 = \si_2 = 0$, a unit timelike $b_\mu$, a linear Lagrange-multiplier potential $V_L$, and an additional fourth-order interaction for $B_\mu$ have been studied in some detail as possible unconventional theories of gravity \cite{bb1,bb2}. Other works involving bumblebee models include Refs.\ \cite{ems,cli,bmg,gjw,cfn,bb3}. An aspect of bumblebee models of particular interest is the appearance of massless propagating vector modes. This feature suggests the prospect of an alternative to the usual description of photons via U(1) gauge theory. The central idea is to identify the photon modes with the NG modes arising from spontaneous Lorentz violation. Early discussions along these lines centered on reinterpreting the photon or electron in the context of special relativity without physical Lorentz violation \cite{dhfb,yn}. More recently, the Lorentz NG modes in certain bumblebee models with physical Lorentz violation have been shown to obey the Einstein-Maxwell equations in Riemann spacetime in axial gauge \cite{bk}. These models are further considered below, with the discussion initiated in Sec.\ \ref{Bumblebee electrodynamics}. One motivation of the present work is to investigate the role of massive modes and Lagrange-multiplier fields in this context.
In particular, it is of interest to investigate possible modifications to electrodynamics and gravity, with an eye to novel phenomenological applications of spontaneous Lorentz and diffeomorphism breaking. \subsection{Transformations} \label{Transformations} In considering theories with violations of Lorentz and diffeomorphism symmetry, it is important to distinguish between {\it observer} and {\it particle} transformations \cite{ck}. Under either an observer general coordinate transformation or an observer local Lorentz transformation, geometric quantities such as scalars, vectors, tensors, and their derivatives remain unchanged, but the coordinate basis defining their components transforms. In contrast, particle transformations can change geometric quantities, independently of any coordinate system and basis. In theories without spacetime-symmetry breaking, the component forms of the transformation laws for particle and observer transformations are inversely related. For example, under infinitesimal particle diffeomorphisms described by four infinitesimal displacements $\xi^\mu$, the components of the bumblebee field transform according to the Lie derivative as \begin{eqnarray} B_\mu &\rightarrow & B_\mu - (\partial_\mu \xi^\nu) B_\nu - \xi^\nu \partial_\nu B_\mu , \nonumber\\ B^\mu &\rightarrow & B^\mu + (\partial_\nu \xi^\mu) B^\nu - \xi^\nu \partial_\nu B^\mu, \label{Bdiff} \end{eqnarray} while the metric transforms as \begin{eqnarray} g_{\mu\nu} &\rightarrow & g_{\mu\nu} - (\partial_\mu \xi^\alpha) g_{\alpha\nu} - (\partial_\nu \xi^\alpha) g_{\mu\alpha} - \xi^\alpha \partial_\alpha g_{\mu\nu} , \nonumber\\ g^{\mu\nu} &\rightarrow & g^{\mu\nu} + (\partial_\alpha \xi^\mu) g^{\alpha\nu} + (\partial_\alpha \xi^\nu) g^{\mu\alpha} - \xi^\alpha \partial_\alpha g^{\mu\nu} . \qquad \label{gdiff} \end{eqnarray} Under infinitesimal observer general coordinate transformations, which are the observer equivalent of diffeomorphisms, the formulae for the transformations of the bumblebee and metric components take the same mathematical form up to a possible sign change in the arbitrary parameter $\xi^\mu$, even though these transformations are only the result of a change of coordinates. Similarly, under infinitesimal particle local Lorentz transformations with six parameters $\epsilon_{ab} = -\epsilon_{ba}$ related to the local Lorentz group element by $\Lambda_a^{\pt{a} b} \approx \delta_a^{\pt{a} b} + \epsilon_a^{\pt{a} b}$, the local components of the bumblebee field transform as \begin{equation} B_a \rightarrow B_a + \epsilon_a^{\pt{a} b} B_b , \label{BLT} \end{equation} with the formula for $B^a$ following from this by raising indices with $\eta^{ab}$. Under observer local Lorentz transformations, the corresponding transformation formulae again take the same mathematical form. However, any form of physical Lorentz and diffeomorphism breaking destroys the details of these equivalences. A fundamental premise in classical physics is that the properties of a physical system are independent of the presence of a noninteracting observer. The observer is free to select a coordinate system to describe the physics of the system, but the physics cannot depend on this choice. In particular, this must remain true even when Lorentz and diffeomorphism symmetry are broken, whether explicitly or spontaneously. Viable candidate theories with spacetime-symmetry breaking must therefore be invariant under observer transformations, which are merely changes of coordinate system.
For example, the SME is formulated as a general effective field theory that is invariant under observer general coordinate transformations and under observer local Lorentz transformations \cite{akgrav,kp,ck}. In contrast, a theory with physical Lorentz and diffeomorphism breaking cannot by definition remain fully invariant under the corresponding particle transformations. For example, if the breaking is explicit, the action of the theory changes under particle transformations. If instead the breaking is spontaneous, the action remains invariant and the equations of motion transform covariantly, as usual. However, the vacuum solution to the equations of motion necessarily contains quantities with spacetime indices that are unchanged by particle transformations. These vacuum values and the excitations around them can lead to physical effects revealing the symmetry breaking. In bumblebee models, the relevant spacetime vacuum values are those of the bumblebee and metric fields, denoted $\vev{B_\mu}$ and $\vev{g_{\mu\nu}}$. These are unaffected by particle diffeomorphisms, and their nonzero components thereby reveal the broken particle diffeomorphisms. In contrast, $\vev{B_\mu}$ and $\vev{g_{\mu\nu}}$ both transform as usual under observer general coordinate transformations, which therefore are unbroken. Analogous results hold for the local-frame vacuum value $\vev{B_a}$ of the local bumblebee field. The components $\vev{B_a}$ remain unaffected by particle local Lorentz transformations, with the invariance of the nonzero components resulting from the breaking of local Lorentz generators, while the components $\vev{B_a}$ transform under observer local Lorentz transformations in the usual way. Similarly, the vacuum value $\vev{\vb \mu a}$ of the vierbein is unchanged by both particle diffeomorphisms and particle Lorentz transformations, but it transforms as a vector under the corresponding observer transformations. By virtue of the relation \rf{Bvb} between the spacetime and local bumblebee fields, it follows that spontaneous local Lorentz violation is necessarily accompanied by spontaneous diffeomorphism violation and vice versa \cite{bk}. The point is that the vacuum value of the vierbein is nonzero, so the existence of a nonzero $\vev{B_a}$ spontaneously breaking local Lorentz symmetry also implies the existence of a nonzero $\vev{B_\mu}$ spontaneously breaking diffeomorphism symmetry, and vice versa. Note, however, that this result fails for explicit violation, where the analogue of the relation \rf{Bvb} is absent. In general, explicit local Lorentz violation occurs when a nonzero quantity $t_{abc\ldots}$ is externally prescribed in the local frame, but the corresponding spacetime quantity $t_{\lambda\mu\nu\ldots}\equiv \vb \lambda a \vb \mu b \vb \nu c \cdots t_{abc\ldots}$ is defined using the full vierbein and hence transforms covariantly under diffeomorphisms, so no diffeomorphism violation need result. Similarly, explicit diffeomorphism violation occurs when a nonzero quantity $t_{\lambda\mu\nu\ldots}$ is externally prescribed on the spacetime manifold, but the corresponding local quantity defined via the inverse vierbein transforms covariantly under local Lorentz transformations. \subsection{Mode expansions} \label{Mode expansions} To study the content and behavior of the modes, the fields can be expanded as infinitesimal excitations about their vacuum values. At the level of the Lagrange density, it suffices for most purposes to keep only terms to second order in the field excitations, which linearizes the equations of motion.
Assuming for simplicity a Minkowski background $\vev{g_{\mu\nu}} = \eta_{\mu\nu}$, we expand the metric and its inverse as \begin{equation} g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \quad g^{\mu\nu} \approx \eta^{\mu\nu} - h^{\mu\nu} . \label{gh} \end{equation} For the local bumblebee vector, we write \begin{equation} B_a = b_a + \beta_a , \label{Bbe} \end{equation} where the vacuum value $\vev{B_a}$ is denoted $b_a$ and assumed constant, and the infinitesimal excitations are denoted $\beta_a$. The vacuum condition \rf{Bsquare} implies \begin{equation} b_a \eta^{ab} b_b = \mp b^2. \end{equation} In a Minkowski background, the vierbein vacuum value can be chosen to be $\vev{\vb \mu a} = \delta_\mu^{\pt{\mu}a}$ in cartesian coordinates. The expansions of the vierbein and its inverse are \begin{eqnarray} \vb \mu a &\approx & \delta_\mu^{\pt{\mu}a} + {\textstyle{1\over 2}} h_\mu^{\pt{\mu}a} + \chi_\mu^{\pt{\mu}a} , \nonumber\\ \ivb \mu a &\approx & \delta^\mu_{\pt{\mu}a} - {\textstyle{1\over 2}} h^\mu_{\pt{\mu}a} + \chi^\mu_{\pt{\mu}a} , \label{vhch} \end{eqnarray} where $h_{\mu a}$ and $\chi_{\mu a}$ are, respectively, the symmetric and antisymmetric components of the vierbein. The covariant and contravariant components of the bumblebee field follow from Eq.\ \rf{Bvb} as \begin{eqnarray} B_\mu &\approx & \delta_\mu^{\pt{\mu}a} (b_a + \beta_a) + ( {\textstyle{1\over 2}} h_\mu^{\pt{\mu}a} + \chi_\mu^{\pt{\mu}a}) b_a , \nonumber\\ B^\mu &\approx & \delta^\mu_{\pt{\mu}a} (b^a + \beta^a) + (-{\textstyle{1\over 2}} h^\mu_{\pt{\mu}a} + \chi^\mu_{\pt{\mu}a}) b^a . \label{Bvbup} \end{eqnarray} Since the local and spacetime background metrics are both Minkowski and since the excitations are infinitesimal, the distinction between the Latin local indices and the Greek spacetime indices can be dropped. We adopt Greek indices for most purposes that follow, raising and lowering indices on purely first-order quantities with $\eta^{\mu\nu}$ and $\eta_{\mu\nu}$. The vierbein expansions \rf{vhch} can then be rewritten as \begin{eqnarray} \lvb \mu \nu &\approx & \eta_{\nu\sigma} \vb \mu \sigma \approx ~ \eta_{\mu\nu} + {\textstyle{1\over 2}} h_{\mu\nu} + \chi_{\mu\nu} , \nonumber\\ \uvb \mu \nu &\approx & \eta^{\nu\sigma} \ivb \mu \sigma \approx ~ \eta^{\mu\nu} - {\textstyle{1\over 2}} h^{\mu\nu} + \chi^{\mu\nu} . \label{vhch2} \end{eqnarray} The vacuum value for the bumblebee vector becomes \begin{equation} \vev{B_\mu} = \vev{\vb \mu a} b_a = \delta_\mu^{\pt{\mu}a} b_a \equiv b_\mu. \label{bbvac} \end{equation} It is convenient to decompose the local vector excitations $\beta_\mu \equiv \delta_\mu^{\pt{\mu}a} \beta_a$ into transverse and longitudinal pieces with respect to $b_\mu$. Excluding for simplicity the case of lightlike $b_\mu$, we write \begin{equation} \beta_\mu = \beta^{\rm t}_\mu + \beta \hat b_\mu , \quad \beta^{\rm t}_\mu b^\mu = 0, \label{beta} \end{equation} where $\hat b_\mu = b_\mu/\sqrt{b^2}$ is a vector along the direction of $b_\mu$ obeying $\hat b^\mu \hat b_\mu = \mp 1$. Using this decomposition, the bumblebee mode expansions \rf{Bvbup} become \begin{eqnarray} B_\mu &\approx & b_\mu + ({\textstyle{1\over 2}} h_{\mu\nu} + \chi_{\mu\nu}) b^\nu + \beta^{\rm t}_\mu + \beta \hat b_\mu , \nonumber\\ B^\mu &\approx & b^\mu + (- {\textstyle{1\over 2}} h^{\mu\nu} + \chi^{\mu\nu}) b_\nu + \beta^{{\rm t}\mu} + \beta \hat b^\mu . \label{Bvb2} \end{eqnarray} It is instructive to count degrees of freedom in the expressions \rf{vhch2} and \rf{Bvb2}.
On the left-hand side, the vierbein has 16 components and the bumblebee field 4, for a total of 20. On the right-hand side, the symmetric metric component $h_{\mu\nu}$ has 10, the antisymmetric component $\chi_{\mu\nu}$ has 6, the transverse bumblebee excitation $\beta^{\rm t}_\mu$ has 3, and the longitudinal excitation $\beta$ has 1, producing the required matching total of 20. Of these 20 degrees of freedom, 6 are metric modes, 4 are bumblebee modes, while 6 are associated with local Lorentz transformations and 4 with diffeomorphisms. The explicit transformations of all the field components under particle diffeomorphisms and local Lorentz transformations can be deduced from the full-field expressions and from the invariance of the vacuum values. A list of formulae for both particle and observer transformations is provided in the Appendix. Among all the excitations, only the longitudinal component $\beta$ of the bumblebee field is invariant under both particle diffeomorphisms and local Lorentz transformations. It is therefore a physical degree of freedom in any gauge. Moreover, using Eqs.\ \rf{Bsquare} and \rf{Bvb2} reveals that exciting the $\beta$ mode alone produces a nonzero value of $X$, given at first order by \begin{equation} X = B_\mu g^{\mu\nu} B_\nu \pm b^2 \approx 2 (b^\mu \hat b_\mu) \beta . \label{Bsquare2} \end{equation} As a result, the excitation $\beta$ is associated with a nonminimal value of the potential, and it therefore cannot be an NG mode. In fact, for the case of a smooth quadratic potential, $\beta$ is a massive mode. In contrast, in a theory with a Lagrange-multiplier potential where the constraint $X = 0$ is enforced as an equation of motion, the massive mode $\beta$ identically vanishes. \subsection{Gauge fixing and NG modes} \label{Gauge fixing and NG modes} Since the spacetime-symmetry breaking in bumblebee models is spontaneous, the four diffeomorphisms parametrized by $\xi_\mu$ and the six local Lorentz transformations parametrized by $\epsilon_{\mu\nu}$ leave invariant the bumblebee action \rf{bb} and transform covariantly the equations of motion. Fixing the gauge freedom therefore requires 10 gauge conditions. For the diffeomorphism freedom, a choice common in the literature is the harmonic gauge \begin{equation} \partial_\mu \ol h^{\mu\nu} = 0 , \label{harm} \end{equation} where $\ol h^{\mu\nu} = h^{\mu\nu} - {\textstyle{1\over 2}} \eta^{\mu\nu} h$ and $\ol h = - h \equiv - h^\mu_{\pt{\mu}\mu}$. In the harmonic gauge, the Einstein tensor becomes $G_{\mu\nu} \approx -{\textstyle{1\over 2}} \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, \ol h_{\mu\nu}$ at linear order. An alternative choice for the diffeomorphism degrees of freedom is the axial gravitational gauge \begin{equation} h_{\mu\nu} b^\nu = 0. \label{haxial} \end{equation} Both these gauge choices represent four conditions. To fix the local Lorentz freedom, it is common to eliminate all six antisymmetric vierbein components by imposing the six conditions \begin{equation} \chi_{\mu\nu} = 0. \label{chigauge} \end{equation} Other choices are possible here too. Consider, for example, the decomposition of $\chi_{\mu\nu}$ in terms of projections along $b_\mu$, \begin{eqnarray} \chi_{\mu\nu} &=& \chi^{tt}_{\mu\nu} \mp \chi_\mu^t \hat b_\nu \pm \chi_\nu^t \hat b_\mu , \nonumber\\ \chi^{tt}_{\mu\nu} b^\nu &=& \chi^t_\nu b^\nu = 0, \label{chidec} \end{eqnarray} which is the analogue of Eq.\ \rf{beta} for $\beta_\mu$.
The components $\chi^{tt}_{\mu\nu}$ and $\chi_\mu^t \equiv \chi_{\mu\nu} \hat b^\nu$ each contain three degrees of freedom. Inspection of the transformation laws shows that an alternative to fixing the local Lorentz gauge via Eq.\ \rf{chigauge} is the set of six conditions \begin{equation} \chi^{tt}_{\mu\nu} = \beta^t_\mu = 0. \label{btgauge} \end{equation} Note, however, that the combination $(\chi_{\mu\nu} b^\nu + \beta^{\rm t}_\mu)$ appearing in $B_\mu$ in Eq.\ \rf{Bvb2} is invariant and therefore cannot be gauged to zero. Evidently, the associated local Lorentz degrees of freedom must remain somewhere in the theory. The bumblebee vacuum value $\vev{B_\mu} = b_\mu$ breaks one of the four diffeomorphism symmetries. The broken generator is associated with the projected component $\xi_\nu b^\nu$ of the parameter $\xi_\mu$. Analogously, the vacuum value $\vev{B_a} = b_a$ breaks three of the six local Lorentz symmetries. The broken generators are associated with the components $\epsilon_{ab} b^b \approx \epsilon_{\mu\nu} b^\nu$ of $\epsilon_{\mu\nu}$ projected along $b^\nu$. In each case, the unbroken generators are associated with the orthogonal complements to the projections. Since the vacuum breaks one particle diffeomorphism and three local Lorentz transformations, four NG modes appear. A useful general procedure to identify these modes is first to make virtual particle transformations using the broken generators acting on the appropriate vacuum values for the fields in $V$, and then to promote the corresponding parameters $\epsilon_{\mu\nu}$ and $\xi_\mu$ to field excitations \cite{bk}: \begin{equation} \xi_\mu \to \Xi_\mu, \quad \epsilon_{\mu\nu} \to {\cal E}_{\mu\nu} = - {\cal E}_{\nu\mu}. \label{promotion} \end{equation} The properties of $\Xi_\mu$ and ${\cal E}_{\mu\nu}$ under various particle and observer transformations are given in the Appendix. The projections $\xi_\nu b^\nu$ and $\epsilon_{\mu\nu} b^\nu$ associated with the broken generators determine the NG modes, which are therefore $\Xi_\nu b^\nu$ and ${\cal E}_{\mu\nu} b^\nu$. We can follow this procedure to elucidate the relationship between the NG modes and the component fields in the decomposition \rf{Bvb2} of $B_\mu$. Consider first the diffeomorphism NG mode, which is generated by a virtual transformation acting on the spacetime vacuum value $\vev{B_\mu}$ and involving the broken diffeomorphism generator. The relevant transformation is given in Eq.\ \rf{pdiffs}, and it yields \begin{equation} \vev {B_\mu} \to b_\mu - (\partial_\mu \Xi_\nu) b^\nu . \end{equation} Comparison of this result to the form of Eq.\ \rf{Bvb2} reveals that the four vierbein combinations $({\textstyle{1\over 2}} h_{\mu\nu} + \chi_{\mu\nu})b^\nu$ contain the diffeomorphism NG mode. This agrees with the result obtained by combining virtual diffeomorphisms on the vacuum values of the component fields in Eq.\ \rf{Bvb2}: \begin{eqnarray} \vev{h_{\mu\nu} b^\nu} &\to & -(\partial_\mu \Xi_\nu + \partial_\nu \Xi_\mu) b^\nu , \nonumber\\ \vev{\chi_{\mu\nu} b^\nu} &\to & - {\textstyle{1\over 2}} (\partial_\mu \Xi_\nu - \partial_\nu \Xi_\mu) b^\nu , \nonumber\\ \vev{\beta^t_\mu} &\to & 0, \quad \vev{\beta} \to 0. \label{XiEpredef} \end{eqnarray} Note also that the combinations $({\textstyle{1\over 2}} h_{\mu\nu} + \chi_{\mu\nu})b^\nu$ maintain the potential minimum $V=0$, as is expected for an NG mode. 
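This can be verified directly from Eq.\ \rf{Bsquare2}: at first order $X$ depends only on the longitudinal excitation $\beta$, and the virtual diffeomorphism \rf{XiEpredef} leaves $\beta$ unexcited, so \begin{equation} X \approx 2 (b^\mu \hat b_\mu)\, \beta = 0 \end{equation} and the potential remains at its minimum $V(0)=0$ for these excitations.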
In contrast, the three Lorentz NG modes ${\cal E}_{\mu\nu} b^\nu$ are generated by virtual transformations acting on the local-frame vacuum value $\vev{B_a}$ and involving the broken local Lorentz generators: \begin{equation} \vev {B_a} \to b_a + {\cal E}_{ab} b^b . \label{LLvirttr} \end{equation} Comparison with Eqs.\ \rf{Bbe} and \rf{beta} shows that the transverse field $\beta^t_\mu$ contains the three Lorentz NG modes ${\cal E}_{\mu\nu} b^\nu$. Note that there are exactly three degrees of freedom in $\beta^t_\mu$, all of which maintain the potential minimum $V=0$. This result can be used to connect the component fields in the decomposition of $B_\mu$ with the three Lorentz NG modes ${\cal E}_{\mu\nu} b^\nu$. The vacuum value $\vev{B_\mu}$ itself is invariant under local Lorentz transformations, so another component field must also contain the Lorentz NG modes. Performing virtual local Lorentz transformations on the vacuum values of the component fields gives \begin{eqnarray} \vev{h_{\mu\nu} b^\nu} &\to & 0, \quad \vev{\chi_{\mu\nu} b^\nu} \to -{\cal E}_{\mu\nu} b^\nu , \nonumber\\ \vev{\beta^t_\mu} &\to & {\cal E}_{\mu\nu} b^\nu , \quad \vev{\beta} \to 0, \label{vllt} \end{eqnarray} which shows that the three combinations $\vev{\chi_{\mu\nu} b^\nu}$ also contain the three Lorentz NG modes. Combining the results \rf{XiEpredef} and \rf{vllt} reveals the following NG mode content for the component field combinations in the decomposition \rf{Bvb2} of $B_\mu$: \begin{eqnarray} ({\textstyle{1\over 2}} h_{\mu\nu} + \chi_{\mu\nu}) b^\nu &= & - (\partial_\mu \Xi_\nu) b^\nu - {\cal E}_{\mu\nu} b^\nu , \nonumber\\ \beta^t_\mu &= & {\cal E}_{\mu\nu} b^\nu , \nonumber\\ \beta &\supset& 0. \label{combined} \end{eqnarray} We see that the four combinations $({\textstyle{1\over 2}} h_{\mu\nu} + \chi_{\mu\nu})b^\nu$ contain a mixture of the diffeomorphism NG mode and the three Lorentz NG modes, while the three combinations $\beta^t_\mu$ contain only the three Lorentz NG modes. By fixing the 10 gauge freedoms in the theory, the above results can be used to determine the physical content of the field $B_\mu$. Suppose first we adopt the 10 conditions \rf{haxial} and \rf{chigauge}. Then, the bumblebee field becomes \begin{eqnarray} B_\mu &\approx & b_\mu + \beta^t_\mu + \beta \hat b_\mu \nonumber\\ &=& b_\mu + {\cal E}_{\mu\nu} b^\nu + \beta \hat b_\mu , \label{bgauge1} \end{eqnarray} where the result \rf{combined} has been used. Note that the fixed gauge means that fields with different transformation properties can appear on the left- and right-hand sides. In this gauge, the four components of $B_\mu$ are decomposed into three Lorentz NG modes associated with $\beta^t_\mu$ and the one massive mode $\beta$. The diffeomorphism NG mode is absent. It obeys \begin{equation} (\partial_\mu \Xi_\nu)b^\nu = -(\partial_\nu \Xi_\mu) b^\nu \label{diffeocond1} \end{equation} and hence $b^\mu(\partial_\mu \Xi_\nu)b^\nu = 0$, and it is locked to the Lorentz NG modes via the equation \begin{equation} (\partial_\mu \Xi_\nu)b^\nu = - {\cal E}_{\mu\nu} b^\nu. \label{diffeocond2} \end{equation} For the diffeomorphism NG mode, this gauge is evidently analogous to the unitary gauge in a nonabelian gauge theory. An alternative gauge choice could be to impose the 10 conditions \rf{haxial} and \rf{btgauge} instead.
This gives \begin{eqnarray} B_\mu &\approx & b_\mu + \chi^t_\mu + \beta \hat b_\mu \nonumber\\ &=& b_\mu + {\cal E}_{\mu\nu} b^\nu + \beta \hat b_\mu , \label{bgauge2} \end{eqnarray} where in this gauge the Lorentz NG modes ${\cal E}_{\mu\nu} b^\nu$ are identified with $\chi^t_\mu$ instead of $\beta^t_\mu$. One way to understand this identification is to perform a local Lorentz transformation with parameter $\epsilon_{\mu\nu} = -{\cal E}_{\mu\nu}$ on the first result in Eq.\ \rf{bgauge1}. This changes the value of $\beta^t_\mu$ from ${\cal E}_{\mu\nu} b^\nu$ to zero, while simultaneously converting $\chi^t_\mu$ from zero to $\chi^t_\mu ={\cal E}_{\mu\nu} b^\nu$. In this gauge, the explicit decomposition \rf{bgauge2} of $B_\mu$ in terms of NG modes is the same as that in Eq.\ \rf{bgauge1}, but the three Lorentz NG modes are now associated with the components $\chi^t_\mu$ of the vierbein. The diffeomorphism mode remains absent. It still obeys the condition \rf{diffeocond1} and is locked to the Lorentz NG modes by Eq.\ \rf{diffeocond2}. Partial gauge conditions can also be imposed. For example, suppose the choice \rf{btgauge} is made for the local Lorentz gauge, while the diffeomorphism gauge remains unfixed. Then, the four degrees of freedom in the combination $({\textstyle{1\over 2}} h_{\mu\nu} + \chi_{\mu\nu})b^\nu$ consist of the three Lorentz NG modes and the diffeomorphism NG mode, and all the NG modes are contained in the vierbein \cite{bk}. The bumblebee field can be written as \begin{equation} B_\mu \approx b_\mu -(\partial_\mu \Xi_\nu) b^\nu + {\cal E}_{\mu\nu} b^\nu + \beta \hat b_\mu . \label{Bvbredef} \end{equation} Note that only one projection of $\Xi_\nu$ appears, even though four diffeomorphism choices remain. The corresponding expression for $B^\mu$ includes additional metric contributions and is given by \begin{equation} B^\mu \approx b^\mu + (\partial_\nu \Xi^\mu) b^\nu + {\cal E}^{\mu\nu} b_\nu + \beta \hat b^\mu . \label{upBvbredef} \end{equation} This equation contains contributions from all four fields $\Xi_\mu$. However, if the diffeomorphism excitations are restricted only to the one for the broken generator, for which $\Xi_\mu$ obeys $b_\mu \Xi_\nu = b_\nu \Xi_\mu$, then $B^\mu$ also reduces to an expression depending only on the diffeomorphism NG mode $\Xi_\nu b^\nu$. Related results are obtained in Ref.\ \cite{bk}, which investigates the fate of the NG modes using a decomposition of the vierbein into transverse and longitudinal components along $b_\mu$. This decomposition leads to the same relations as Eqs.\ \rf{Bvbredef} and \rf{upBvbredef} when the condition $b_\mu \Xi_\nu = b_\nu \Xi_\mu$ is applied. Even without gauge fixing, the fields $\Xi_\mu$ cancel at linear order in both $G_{\mu\nu}$ and $B_{\mu\nu}$. As a result, propagating diffeomorphism NG modes cannot appear. This is a special case of a more general result. By virtue of their origin as virtual particle transformations, the diffeomorphism NG modes appear as certain components of representation-irreducible fields with nonzero vacuum values. However, diffeomorphism invariance ensures these modes enter in the metric and bumblebee fields in combinations that cancel in a diffeomorphism-invariant action, including the general bumblebee action \rf{bb}. In contrast, the Lorentz NG modes do play a role in the bumblebee action. They can be identified with a massless vector field $A_\mu \equiv {\cal E}_{\mu\nu} b^\nu$ in the fixed axial gauge $A_\mu b^\mu \approx 0$. 
The fully gauge-fixed expression \rf{bgauge1} for $B_\mu$ then becomes \begin{equation} B_\mu \approx b_\mu + A_\mu + \beta \hat b_\mu , \label{bba} \end{equation} where the transverse components have the form of photon fields in the axial gauge, and the longitudinal mode is the massive mode $\beta$. \subsection{Alternative expansions} \label{Alternative expansions} The analysis in the previous subsections is based on the local-frame expansion \rf{Bbe} of the bumblebee field $B_a$. However, other mode expansions are possible, including ones in which the vierbein makes no explicit appearance and the local Lorentz transformations are no longer manifest. This subsection offers a few comments on two alternative expansions used in some of the literature. The first alternative mode expansion is specified using the covariant spacetime components of the bumblebee field \cite{bk,kp}, \begin{equation} B_\mu = b_\mu + {\cal E}_\mu , \label{epdown} \end{equation} where ${\cal E}_\mu$ represents the excitations of the spacetime bumblebee field around the vacuum value $b_\mu$. It follows that the contravariant components are \begin{equation} B^\mu \approx b^\mu + {\cal E}^\mu - h^{\mu\nu} b_\nu . \label{epup} \end{equation} These fields are linked to the vierbein and the local-frame fluctuations $\beta_\mu$ via \begin{equation} {\cal E}_\mu = ({\textstyle{1\over 2}} h_{\mu\nu} + \chi_{\mu\nu}) b^\nu + \beta^{\rm t}_\mu + \beta \hat b_\mu . \label{redef1} \end{equation} The fields ${\cal E}_\mu$ are scalars under particle local Lorentz transformations, but transform under particle diffeomorphisms as \begin{eqnarray} {\cal E}_\mu &\rightarrow& {\cal E}_\mu - (\partial_\mu \xi_\alpha) b^\alpha , \nonumber\\ {\cal E}^\mu \equiv \eta^{\mu\nu} {\cal E}_\nu &\rightarrow& {\cal E}^\mu - \eta^{\mu\nu} (\partial_\nu \xi_\alpha) b^\alpha . \label{epdiff} \end{eqnarray} Note that this usage of ${\cal E}^\mu$ differs from that in Ref.\ \cite{bk}. The second alternative expansion \cite{bak06,bb1,cli,bmg,gjw} starts instead with the contravariant bumblebee components \begin{equation} B^\mu = b^\mu + {\cal E}^{\prime\mu} , \label{epprimeup} \end{equation} where a prime is used to distinguish the excitations ${\cal E}^{\prime\mu}$ from the previous case. The corresponding covariant components are then \begin{equation} B_\mu \approx b_\mu + {\cal E}^\prime_\mu + h_{\mu\nu} b^\nu . \label{epprimedown} \end{equation} The field redefinitions connecting these to the vierbein and previous case are \begin{eqnarray} {\cal E}^{\prime\mu} &=& (- {\textstyle{1\over 2}} h^{\mu\nu} + \chi^{\mu\nu}) b_\nu + \beta_{\rm t}^\mu + \beta \hat b^\mu \nonumber\\ &=& \eta^{\mu\nu} {\cal E}_\nu - h^{\mu\nu} b_\nu . \label{redef2} \end{eqnarray} The fields ${\cal E}^{\prime\mu}$ are scalars under particle local Lorentz transformations but transform under particle diffeomorphisms as \begin{eqnarray} {\cal E}^{\prime\mu} &\rightarrow& {\cal E}^{\prime\mu} + (\partial_\alpha \xi^\mu) b^\alpha , \nonumber\\ {\cal E}^\prime_\mu \equiv \eta_{\mu\nu} {\cal E}^{\prime\mu} &\rightarrow& {\cal E}^\prime_\mu + (\partial_\alpha \xi_\mu) b^\alpha . \label{epprimediff} \end{eqnarray} Note that the fields ${\cal E}_\mu$ and ${\cal E}^\prime_\mu$ have different transformation properties and contain different mixes of the bumblebee and metric excitations. However, ${\cal E}_\mu$ and ${\cal E}^\prime_\mu$ take the same gauge-fixed form in the gauge \rf{haxial}.
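The two transformation laws \rf{epdiff} and \rf{epprimediff} can be checked against each other. From Eqs.\ \rf{epdown} and \rf{epprimedown}, the excitations are related by ${\cal E}^\prime_\mu = {\cal E}_\mu - h_{\mu\nu} b^\nu$; assuming the linearized transformation $h_{\mu\nu} \rightarrow h_{\mu\nu} - \partial_\mu \xi_\nu - \partial_\nu \xi_\mu$ for the metric excitation, one finds \begin{eqnarray} {\cal E}^\prime_\mu &\rightarrow& {\cal E}_\mu - (\partial_\mu \xi_\alpha) b^\alpha - \left[ h_{\mu\nu} - \partial_\mu \xi_\nu - \partial_\nu \xi_\mu \right] b^\nu \nonumber\\ &=& {\cal E}^\prime_\mu + (\partial_\nu \xi_\mu) b^\nu , \end{eqnarray} in agreement with Eq.\ \rf{epprimediff}.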
The two alternative expansions are useful because the excitations ${\cal E}_\mu$ and ${\cal E}^\prime_\mu$ are invariant under infinitesimal local particle Lorentz transformations. In these variables, the local Lorentz symmetry is hidden and only the diffeomorphism symmetry is manifest. The Lorentz NG modes lie in the $b_\mu$-transverse components of ${\cal E}_\mu$ or ${\cal E}^\prime_\mu$. The massive mode $\beta$ joins the diffeomorphism mode in the longitudinal component of ${\cal E}_\mu$. It can be identified from Eq.\ \rf{redef1} as the combination \begin{equation} \beta = \fr {b^\mu ({\cal E}_\mu - {\textstyle{1\over 2}} h_{\mu\nu} b^\nu)} {b^\alpha\hat b_\alpha} = \mp \hat b^\mu ({\cal E}_\mu - {\textstyle{1\over 2}} h_{\mu\nu} b^\nu) . \label{massep} \end{equation} Notice that $\beta$ appears here as a diffeomorphism-invariant combination of field components that transform nontrivially. The choice of diffeomorphism gauge has interesting consequences for ${\cal E}_\mu$ and ${\cal E}^\prime_\mu$. First, note that ${\cal E}_\mu$ cannot be gauged to zero, since only one degree of freedom $\xi_\alpha b^\alpha$ appears in its transformation law. However, it is possible to gauge ${\cal E}^\prime_\mu$ to zero. The corresponding gauge condition for ${\cal E}_\mu$ is ${\cal E}_\mu = h_{\mu\nu} b^\nu$. In either of these gauges, the massive mode becomes \begin{equation} \beta = \mp {\textstyle{1\over 2}} \hat b^\mu h_{\mu\nu} \hat b^\nu , \label{massh} \end{equation} thereby becoming part of the metric. The bumblebee field strength reduces to \begin{equation} B_{\mu\nu} = (\partial_\mu h_{\nu\sigma} - \partial_\nu h_{\mu\sigma}) b^\sigma , \label{Bmunuh} \end{equation} and so is also given by the metric. Evidently, in these gauges the theory is defined entirely in terms of the metric excitations $h_{\mu\nu}$. Requiring ${\cal E}_\mu$ to vanish or dropping the term $h_{\mu\nu} b^\nu$ in Eq.\ \rf{epprimedown} therefore improperly sets to zero the bumblebee field strength $B_{\mu\nu}$, which alters the equations of motion governing the Lorentz NG and massive modes \cite{bmg}. It follows from Eq.\ \rf{Bmunuh} that the effective Lagrange density can contain additional kinetic terms for $h_{\mu\nu}$, beyond those arising from the gravitational action, that originate from the bumblebee kinetic terms ${\cal L}_{\rm K}$. These additional terms propagate the Lorentz NG modes, disguised as massless modes contained in the metric. \section{Examples} \label{Examples} In this section, some simple examples are developed to illustrate and enrich the general results obtained in the discussions above. We choose to work within the class of KS\ bumblebee models with Lagrange density given by Eq.\ \rf{ksbb}, which avoids \it a priori \rm propagating ghost fields and the complications of nonminimal gravitational couplings. For definiteness, we set $\Lambda=0$ and choose the matter-bumblebee coupling to be \begin{equation} {\cal L}_{\rm M} = - e B_\mu J_{\rm M}^\mu , \end{equation} where $J_{\rm M}^\mu$ is understood to be a current formed from matter fields in the theory. In principle, this current could be formed from dynamical fields or could be prescribed externally. For the latter case, diffeomorphism invariance is explicitly broken unless the current satisfies a suitable differential constraint. The section begins by presenting some general results for this class of models, including ones related to the equations of motion, conservation laws, and the connection to Einstein-Maxwell electrodynamics.
We then turn to a more detailed discussion of three specific cases with different explicit potentials $V(X)$, where the bumblebee field combination $X$ is defined as in Eq.\ \rf{XB}. The three cases involve the smooth quadratic potential $V_S(X)$ of Eq.\ \rf{VS}, the linear Lagrange-multiplier potential $V_L(\lambda, X)$ of Eq.\ \rf{VL}, and the quadratic Lagrange-multiplier potential $V_Q(\lambda, X)$ of Eq.\ \rf{VQ}. The massive modes are studied for each potential, and their effects on gravity and electromagnetism are explored. \subsection{General considerations} \label{General considerations} \subsubsection{Equations of motion and conservation laws} \label{Equations of motion and conservation laws} Varying the Lagrange density \rf{ksbb} with respect to the metric yields the gravitational equations of motion \begin{eqnarray} G^{\mu\nu} &=& 8 \pi G T^{\mu\nu} . \label {geq} \end{eqnarray} In this expression, the total energy-momentum tensor $T^{\mu\nu}$ can be written as a sum of two terms, \begin{equation} T^{\mu\nu} = T^{\mu\nu}_{\rm M }+ T^{\mu\nu}_B . \label{Ttotal} \end{equation} The first is the energy-momentum tensor $T^{\mu\nu}_{\rm M}$ arising from the matter sector. The second is the energy-momentum tensor $T^{\mu\nu}_B$ arising from the bumblebee kinetic and potential terms, \begin{equation} T^{\mu\nu}_B = T^{\mu\nu}_{\rm K} + T_V^{\mu\nu} , \label{TBmunu} \end{equation} where $T^{\mu\nu}_{\rm K}$ and $T_V^{\mu\nu}$ are given by \begin{eqnarray} T^{\mu\nu}_{\rm K} &=& B^{\mu\alpha} B^\nu_{\pt{\nu}\alpha} - \frac 1 4 g^{\mu\nu} B_{\alpha\beta} B^{\alpha\beta} \label{tk} \end{eqnarray} and \begin{eqnarray} T_V^{\mu\nu} &=& - V g^{\mu\nu} + 2 V^\prime B^\mu B^\nu . \label{tv} \end{eqnarray} Varying instead with respect to the bumblebee field generates the remaining equations of motion, \begin{eqnarray} D_\nu B^{\mu\nu} &=& J^\mu. \label{Beq} \end{eqnarray} Like the total energy-momentum tensor, the total current $J^\mu$ can be written as the sum of two terms \begin{equation} J^\mu = J_{\rm M}^\mu + J_B^\mu . \end{equation} The partial current $J_{\rm M}^\mu$ is associated with the matter sector and acts as an external source for the bumblebee field. The partial current $J_B^\mu$ arises from the bumblebee self-interaction, and it is given explicitly as \begin{equation} J_B^\mu = - 2 V^\prime B^\mu . \label{JB} \end{equation} The contracted Bianchi identities for $G_{\mu\nu}$ lead to conservation of the total energy-momentum tensor, \begin{equation} D_\mu T^{\mu\nu} \equiv D_\mu (T^{\mu\nu}_{\rm M} + T^{\mu\nu}_B) = 0 . \label{Tconsv} \end{equation} The antisymmetry of the bumblebee field strength $B_{\mu\nu}$ implies the remnant constraint \rf{remconst} and hence a second conservation law, \begin{equation} D_\mu J^\mu \equiv D_\mu (J_{\rm M}^\mu + J_B^\mu) = 0 . \label{Jconsv} \end{equation} Note that this second conservation law is a special feature of KS\ bumblebee models. It is a direct consequence of choosing the \it a priori \rm ghost-free action \rf{ksbb}. Note also that this conservation law holds even though the potential term $V$ excludes any local U(1) gauge symmetry in these models. \subsubsection{Bumblebee currents} \label{Bumblebee currents} The bumblebee current $J_B^\mu$ defined in Eq.\ \rf{JB} vanishes when $V^\prime = 0$. This situation holds for the vacuum solution and for NG modes, and it then follows from Eq.\ \rf{Jconsv} that the matter current $J_{\rm M}^\mu$ is covariantly conserved. 
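For reference, the forms of Eqs.\ \rf{tv} and \rf{JB} originate in the variation of the potential term of the action. A brief sketch, writing $X = B_\mu g^{\mu\nu} B_\nu \pm b^2$ so that $V = V(X)$, is \begin{eqnarray} \delta (e V) &=& e \left[ {\textstyle{1\over 2}} g^{\mu\nu} V - V^\prime B^\mu B^\nu \right] \delta g_{\mu\nu} \nonumber\\ && + 2 e V^\prime B^\mu \, \delta B_\mu , \end{eqnarray} so the metric variation reproduces $T_V^{\mu\nu} = - V g^{\mu\nu} + 2 V^\prime B^\mu B^\nu$ with the sign conventions of Eq.\ \rf{geq}, while the $\delta B_\mu$ term yields the self-current $J_B^\mu = - 2 V^\prime B^\mu$ with those of Eq.\ \rf{Beq}. Both contributions therefore vanish for configurations with $V = V^\prime = 0$, such as the vacuum and the pure NG excitations.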
However, $V^\prime \ne 0$ in the presence of a nonzero massive mode or a nonzero linear Lagrange multiplier, whereupon the bumblebee current $J_B^\mu$ acts as an additional source for the bumblebee field equation \rf{Beq}. A nonzero massive mode or nonzero linear Lagrange multiplier contributes to the bumblebee component $T^{\mu\nu}_B$ of the energy-momentum tensor too, and it therefore also acts as an additional source for the gravitational field equations \rf{geq}. The contributions to the energy-momentum tensor $T^{\mu\nu}_B$ stemming from $V^\prime \ne 0$ can be positive or negative. This is a generic feature, as can be seen from the expression for $T_V^{\mu\nu}$ in Eq.\ \rf{tv} and the general bumblebee Lagrange density \rf{bb}. The full energy-momentum tensor $T^{\mu\nu}$ is conserved, but the prospect of negative values for $T^{\mu\nu}_B$ implies stability issues for the models. Under suitable circumstances, for example, unbounded negative values of $T^{\mu\nu}_B$ can act as unlimited sources of energy for the matter sector. Stability is also a potential issue in the absence of matter. For example, for the case where $J_{\rm M}^\mu$ is disregarded and the potential involves a linear Lagrange multiplier producing $\lambda \ne 0$ on shell, at least one set of initial values is known that yields a negative-energy solution \cite{cj}. Whether instabilities occur in practice depends on the form of the Lagrange density and on the initial conditions. The situation is comparatively favorable for KS\ bumblebee models because the extra conservation law \rf{Jconsv} can play a role. In principle, negative-energy contributions in the initial state can be eliminated by choosing initial conditions such that $V^\prime = 0$, while the conservation law can prevent the eventual development of a destabilizing mode. One way to see this is to expand the conservation law Eq.\ \rf{Jconsv} at leading order in a Minkowski background. For nonzero $\vev{B_0}$, we obtain \begin{equation} \partial_0 V^\prime \approx \fr 1 {2 B^0} ( \partial_\mu J_{\rm M}^\mu - 2 V^\prime \partial_\mu B^\mu - 2 B^j \partial_j V^\prime ) . \label{d0Vprime} \end{equation} Suppose the matter current is independently conserved, $D_\mu J_{\rm M}^\mu \approx \partial_\mu J_{\rm M}^\mu = 0$. The initial condition $V^\prime = 0$ then implies $\partial_0 V^\prime = 0$ initially. Taking further derivatives shows that all time derivatives of $V^\prime$ vanish initially and so $V^\prime$ remains zero for all time, indicating that stability is maintained. It is interesting to note that the matter-current conservation law $D_\mu J_{\rm M}^\mu = 0$ emerges naturally in the limit of vanishing NG and massive modes, where the bumblebee field reduces to its vacuum value $B_\mu \rightarrow \vev{B_\mu} = b_\mu$. This agrees with a general conjecture made in Ref.\ \cite{akgrav}. The line of reasoning is as follows. In the limit, the bumblebee theory with spontaneous Lorentz violation takes the form of a theory with explicit Lorentz violation with couplings only to $b_\mu$. However, theories with explicit Lorentz violation contain an incompatibility between the Bianchi identities and the energy-momentum conservation law. The incompatibility is avoided for spontaneous Lorentz violation by the vanishing of a particular variation in the action, which reduces in the limit to the requirement of current conservation $D_\mu J_{\rm M}^\mu = 0$. Nonetheless, if $B_\mu$ is excited away from this limit, current conservation in the matter sector may fail to hold.
In practice, the potential instability may be irrelevant for physics. For example, if a bumblebee model is viewed as an effective field theory emerging from an underlying stable quantum theory of gravity, any apparent instabilities may reflect incomplete knowledge of constraints on the massive modes or of countering effects that come into play at energy scales above those of the effective theory. Under these circumstances, in KS\ bumblebee models it may suffice for practical purposes simply to postulate that the matter and bumblebee currents do not mix and hence obey separate conservation laws, \begin{equation} D_\mu J_{\rm M}^\mu = 0 , \qquad D_\mu J_B^\mu = 0 . \label{JJB0} \end{equation} For instance, one can impose that only matter terms ${\cal L}_{\rm M}$ with a global U(1) symmetry are allowed, as is done for the case of Minkowski spacetime in Ref.\ \cite{cfn}. Another option might be to disregard $J_{\rm M}^\mu$ altogether and allow couplings between the matter and bumblebee fields only through gravity. In what follows, we adopt Eqs.\ \rf{JJB0} as needed to investigate modifications of gravity arising from massive modes and to study the conditions under which Einstein-Maxwell solutions emerge in bumblebee models. \subsubsection{Bumblebee electrodynamics} \label{Bumblebee electrodynamics} An interesting aspect of the KS\ bumblebee models is their prospective interpretation as alternative theories of electrodynamics with physical signatures of Lorentz violation \cite{bk}. In this approach, basic electrodynamic properties such as the masslessness of photons are viewed as consequences of spontaneous local Lorentz and diffeomorphism breaking rather than of exact local U(1) symmetry. A key point is that the Lagrange density of the KS\ bumblebee models has no dependence on the time derivative of $B_0$. They therefore feature an additional primary constraint relative to the bumblebee models \rf{bb} with more general kinetic terms. The primary constraint generates a secondary constraint in the form of a modified Gauss law, which permits a physical interpretation of the theory in parallel to electrodynamics. This modified Gauss law is unavailable to other bumblebee models. In a suitable limit, the equations of motion \rf{geq} and \rf{Beq} for the KS\ bumblebee models reduce to the Einstein-Maxwell equations in a fixed gauge for the metric and photon fields. To demonstrate this, we work in asymptotically Minkowski spacetime and choose the gauge in which the bumblebee field is given by Eq.\ \rf{bba}. The limit of interest is $\beta \to 0$, corresponding to zero massive mode. For the case with the Lagrange-multiplier potential \rf{VL}, where $\beta$ is constrained to zero, an equivalent result also follows in the limit $\lambda \to 0$. Since $T_V^{\mu\nu}$ acquires nonzero contributions only from the massive mode or linear Lagrange multiplier, it follows that as $\beta \to 0$ or $\lambda \to 0$ the bumblebee energy-momentum tensor $T^{\mu\nu}_B$ reduces to that of Einstein-Maxwell electrodynamics, \begin{eqnarray} T^{\mu\nu}_B &\to& T^{\mu\nu}_{\rm K} \equiv T^{\mu\nu}_{\rm EM} = F^{\mu\alpha} F^\nu_{\pt{\nu}\alpha} - \frac 1 4 g^{\mu\nu} F_{\alpha\beta} F^{\alpha\beta} .
\qquad \label{TBBEM} \end{eqnarray} In this equation, the bumblebee field strength is reinterpreted as the electromagnetic field strength \begin{equation} B_{\mu\nu} \equiv F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu , \end{equation} via the mode expansion \rf{bba}, and the gravitational field equations \rf{geq} reduce to their Einstein-Maxwell counterparts. Similarly, the bumblebee current $J_B^\mu$ in Eq.\ \rf{JB} vanishes as $\beta \to 0$ or $\lambda \to 0$ because $V^\prime \to 0$. This leaves only the conventional matter current $J_{\rm M}^\mu$, which by virtue of Eq.\ \rf{Jconsv} obeys covariant conservation. The bumblebee field equations \rf{Beq} therefore also reduce to their Einstein-Maxwell counterparts. Even when $\beta$ vanishes, the interpretation of the model as bumblebee electrodynamics can in principle be distinguished from conventional Einstein-Maxwell electrodynamics through nontrivial SME-type couplings involving the vacuum value $b_\mu$ in the matter equations of motion. Depending on the types of couplings appearing in the matter-sector Lagrange density ${\cal L}_{\rm M}$, a variety of effects could arise. At the level of the minimal SME, for example, axial vector couplings involving $b_\mu$ can be sought in numerous experiments, including ones with electrons \cite{eexpt,eexpt2}, protons and neutrons \cite{ccexpt,spaceexpt}, and muons \cite{muexpt}. Other types of SME coefficients can also be generated from the vacuum value $b_\mu$. For example, the nonzero symmetric traceless SME coefficients $c_{\mu\nu}$ might emerge via the symmetric traceless product $b_\mu b_\nu - b^\alpha b_\alpha \eta_{\mu\nu}/4$, and experimental searches for these include ones with electrons \cite{eexpt,eexpt3}, protons, and neutrons \cite{ccexpt,spaceexpt}. If the model conditions permit the field $\beta$ to be nonzero, further deviations from Einstein-Maxwell electrodynamics can arise. For example, in the weak-field regime where $\beta\hat b_\mu$ is small compared to $b_\mu$, the bumblebee current $J_B^\mu$ is linear in $\beta$ and acts as an additional source in the equation of motion \rf{Beq} for the field strength $B_{\mu\nu}$. Effects of this type are investigated in the next few subsections. For large fields, nonlinear effects in the bumblebee current and energy-momentum tensor may also play a role. However, the field value of $\beta$ is typically suppressed when the mass of $\beta$ is large, so if the mass is set by the Planck scale then the linear approximation is likely to suffice for most realistic applications. In the limit of zero $\beta$, the physical fields are the field strengths $B_{\mu\nu}$ and the gravitational field. Excitations of the bumblebee field $B_\mu$ in the classical theory are then unobservable, in parallel with the potential $A_\mu$ in classical electromagnetism. This may have consequences for leading-order effects in the weak-field limit. For example, the weak static limit in general bumblebee models \rf{bb} requires $B_\mu$ itself to be time independent. However, at leading order in the KS\ bumblebee model, it can suffice to require that only the field strength $B_{\mu\nu}$ be time independent while $B_\mu$ itself has time dependence, in analogy with Maxwell electrodynamics in Minkowski spacetime. For example, the Coulomb electric field $\vec E = - \partial_0 \vec A$ for a static point charge $q$ emerges in temporal gauge $A_0 = 0$ from a time-dependent $A_\mu = (0, t \partial_j \Phi_q)$, where $\Phi_q$ is the Coulomb potential. 
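As a short check of this statement, using the flat-spacetime Poisson equation $\vec \nabla^2 \Phi_q = - \rho_q$ for the Coulomb potential (quoted below as Eq.\ \rf{fish}), the configuration $A_\mu = (0, t \partial_j \Phi_q)$ gives \begin{eqnarray} \vec E &=& - \partial_0 \vec A = - \vec \nabla \Phi_q , \qquad \vec B = \vec \nabla \times \vec A = 0 , \nonumber\\ \vec \nabla \cdot \vec E &=& - \vec \nabla^2 \Phi_q = \rho_q , \end{eqnarray} so the static Coulomb field and the Gauss law are recovered even though $A_\mu$ itself carries a linear time dependence.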
Likewise, in certain leading-order weak static solutions for which $\beta$ acts as a source of charge, the bumblebee field $B_\mu$ may naturally exhibit potential lines converging at the source of charge, similar to the physical configurations with singular derivatives of $A_\mu$ that occur in classical electrodynamics. Another interesting issue is the role of quantum effects. Since bumblebee electrodynamics involves gravity, it faces the same quantization challenges as other theories of gravity and electrodynamics, including Einstein-Maxwell theory. Some work on the renormalization structure at one loop has been performed \cite{baak,renorm}, and in the limit of zero massive mode and in Minkowski spacetime the usual properties of quantum electrodynamics are expected to hold. Addressing this issue in detail is an interesting open problem that lies beyond the scope of this work. \subsection{Smooth quadratic potential} \label{Smooth quadratic potential} This subsection considers the specific KS\ bumblebee model \rf{ksbb} having smooth quadratic potential \rf{VS} with the definition \rf{XB}: \begin{equation} V = V_S(X) = {\textstyle{1\over 2}} \kappa (B_\mu g^{\mu\nu} B_{\nu} \pm b^2)^2. \label{smoothlagv} \end{equation} For definiteness, we adopt the mode expansion $B_\mu = b_\mu + {\cal E}_\mu$ of Eq.\ \rf{epdown} in a Minkowski background and assume weak fields $h_{\mu\nu}$ and ${\cal E}_\mu$ so that the gravitational and bumblebee equations of motion \rf{geq} and \rf{Beq} can be linearized. One goal is to investigate deviations from Einstein-Maxwell theory arising from the presence of a weak nonzero massive mode $\beta$. We therefore focus on dominant corrections to the linearized Einstein-Maxwell equations arising from $\beta$. In this approximation, the bumblebee component of the energy-momentum tensor becomes \begin{equation} T_B^{\mu\nu} \approx T_{\rm EM}^{\mu\nu} + 4 \kappa (b^\alpha \hat b_\alpha) b^\mu b^\nu \beta , \label{linTV} \end{equation} where $T_{\rm EM}^{\mu\nu}$ is the zero-$\beta$ limit of $T_{\rm K}^{\mu\nu}$ given in Eq.\ \rf{TBBEM} and the other term arises from $T_V^{\mu\nu}$. The bumblebee current reduces to \begin{equation} J_B^{\mu} \approx - 4 \kappa (b^\alpha \hat b_\alpha) b^\mu \beta . \label{Jbe} \end{equation} The constraint obtained by assuming current conservation in the matter sector becomes \begin{equation} b^\mu \partial_\mu \beta \approx 0. \label{linbeconstraint} \end{equation} These expressions reveal that at leading order the massive mode $\beta$ acts as a source for gravitation and electrodynamics, subject to the constraint \rf{linbeconstraint}. Note that the linearization procedure can alter the time behavior and dynamics of the fields in the presence of a nonzero massive mode $\beta$. Suppose, for example, that $b_\mu$ is timelike and we adopt the global observer frame in which $b_\mu = (b,0,0,0)$. Nonlinearities in the current $J_B^\mu$ associated with nonzero $\beta$ generate time dependence for most solutions because the spatial current $\vec J_B$ does work on the field strength $B_{\mu\nu}$. However, the linearization \rf{Jbe} implies $J_B^\mu$ is nonzero only along the direction of $b^\mu$, reducing to a pure charge density $J_B^\mu \approx (\rho_B,0,0,0)$ with $\rho_B \approx - 4 \kappa b^2 \beta$, and the current-conservation law \rf{linbeconstraint} then requires $\rho_B$ and hence $\beta$ to be static. Nonetheless, the linearization procedure captures the dominant effects of nonzero $\beta$. 
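For completeness, a brief sketch of the linearization behind Eqs.\ \rf{linTV}--\rf{linbeconstraint}: expanding $X = B_\mu g^{\mu\nu} B_\nu \pm b^2$ to first order in $h_{\mu\nu}$ and ${\cal E}_\mu$ and using the definition \rf{massep} of the massive mode gives \begin{eqnarray} X &\approx& 2 b^\mu ( {\cal E}_\mu - {\textstyle{1\over 2}} h_{\mu\nu} b^\nu ) = 2 (b^\alpha \hat b_\alpha) \beta , \nonumber\\ V_S^\prime &=& \kappa X \approx 2 \kappa (b^\alpha \hat b_\alpha) \beta . \end{eqnarray} Substituting this into Eqs.\ \rf{tv} and \rf{JB} with $B^\mu \approx b^\mu$, and noting that $V_S$ itself is second order in $\beta$, reproduces the $\beta$-dependent terms in Eqs.\ \rf{linTV} and \rf{Jbe}, while Eq.\ \rf{linbeconstraint} then expresses $\partial_\mu J_B^\mu \approx 0$ for constant $b_\mu$, given matter-current conservation.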
\subsubsection{Propagating modes} \label{Propagating modes} To study free propagation of the gravitational and bumblebee fields in the absence of charge and matter, set $J_{\rm M}^\mu = T_{\rm M}^{\mu\nu} = 0$ in the linearized equations of motion. The gravitational equations \rf{geq} then become \begin{eqnarray} & \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, \ol h_{\mu\nu} + \eta_{\mu\nu} \partial^\alpha\partial^\beta \ol h_{\alpha\beta} - \partial^\alpha\partial_\mu \ol h_{\alpha\nu} - \partial^\alpha\partial_\nu \ol h_{\alpha\mu} \nonumber\\ & \hskip 20pt \approx - 64 \pi G \kappa (b^\alpha \hat b_\alpha) b_\mu b_\nu \beta . \label{grav3} \end{eqnarray} Contributions from $T_{\rm EM}^{\mu\nu}$ are second-order in ${\cal E}_\mu$ and can be neglected in this context. The bumblebee equations \rf{Beq} reduce to \begin{equation} \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, {\cal E}_\mu - \partial_\mu \partial^\nu {\cal E}_\nu \approx 4 \kappa (b^\alpha \hat b_\alpha) b_\mu \beta . \label{epeq2} \end{equation} Note that the massive mode $\beta$ is defined by Eq.\ \rf{massep} in terms of ${\cal E}_\mu$ and $h_{\mu\nu}$, so it is not an independent field in these equations. Note also that the linearized current-conservation law \rf{linbeconstraint} follows by taking a derivative of Eq.\ \rf{epeq2}. To investigate the behavior of the massive mode $\beta$, it is convenient to combine the above two equations to obtain \begin{equation} \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, \beta - 4 \kappa (b^\alpha b_\alpha) (1 + 4 \pi G b^\beta b_\beta ) \beta \approx \fr 1 {b^\gamma \hat b_\gamma} b^\lambda \partial_\lambda \partial^\mu ({\cal E}_\mu - \ol h_{\mu\nu} b^\nu). \label{beeq} \end{equation} At first glance, this equation might suggest that $\beta$ can be a propagating massive field. However, $\beta$ depends on the fields appearing on the right-hand side, so the modes are still coupled and care is required in determining the dispersion properties, including the mass value. A suitable choice of diffeomorphism gauge clarifies matters. The modes remain entangled in the harmonic gauge \rf{harm} and also in the barred axial gravitational gauge $\ol h_{\mu\nu} b^\nu = 0$. However, a sufficient decoupling can be achieved by adopting the axial gravitational gauge \rf{haxial} and by decomposing ${\cal E}_\mu$ into pieces parallel and perpendicular to $b_\mu$. In this gauge, the longitudinal component ${\cal E}$ of ${\cal E}_\mu$ reduces to ${\cal E} = \beta$, so that \begin{eqnarray} {\cal E}_\mu &=& {\cal E}^t_\mu + {\cal E} \hat b_\mu = A_\mu + \beta \hat b_\mu . \end{eqnarray} In this equation, we write ${\cal E}^t_\mu \equiv A_\mu $ for the transverse components of ${\cal E}_\mu$ to make easier the task of tracking $\beta$-dependent deviations from conventional electrodynamics. These components satisfy the axial condition $A_\mu b^\mu= 0$. We emphasize that the axial condition is \it not \rm a gauge choice. It is a consequence of projecting along $b_\mu$ and is independent of the gravitational gauge fixing. It does, however, have the same mathematical form as an axial-gauge condition in electrodynamics, even though the bumblebee models have no U(1) symmetry. With these choices, the usual Einstein equations for $\ol h_{\mu\nu}$ in axial gravitational gauge are recovered from the result \rf{grav3} when $\beta$ vanishes \cite{bk}. For nonzero $\beta$, only one linearly independent combination of the Einstein equations is changed. 
It can be written \begin{eqnarray} \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, \ol h + 2 \partial^\mu\partial^\nu \ol h_{\mu\nu} &\approx& - 64 \pi G \kappa (b^\alpha \hat b_\alpha) (b^\beta b_\beta) \beta . \label{grav4} \end{eqnarray} Since the usual propagation-transverse components of $\ol h_{\mu\nu}$ remain unaffected, the physical graviton modes propagate normally. The bumblebee equations \rf{epeq2} become \begin{equation} \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, A_\mu - \partial_\mu \partial^\nu A_\nu + \fr 1 {b^\beta b_\beta} {b_\mu b^\alpha } \partial_\alpha \partial^\nu A_\nu \approx 0 \end{equation} and \begin{equation} \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, \beta - 4 \kappa (b^\alpha b_\alpha) \beta \approx \fr 1 {b^\beta \hat b_\beta} b^\alpha \partial_\alpha \partial^\mu A_\mu . \label{axbeq} \end{equation} When the massive mode $\beta$ vanishes, these equations reduce to those of electrodynamics in axial gauge \cite{bk}. A nonzero value of $\beta$ modifies the equations, but the usual propagation-transverse components of $A_\mu$ and hence the physical photon modes remain unaffected. Since $\beta$ and $A_\mu$ are independent fields, the dispersion properties of the massive mode $\beta$ can be identified from Eq.\ \rf{axbeq}. Any solutions of the theory describing freely propagating modes are required to obey the boundary conditions that the spacetime be asymptotically flat and that the bumblebee fields vanish at infinity. At linear order, these solutions are formed from harmonic plane waves with energy-momentum vectors $p^\mu = (E,\vec p)$ obeying suitable dispersion relations. The modes satisfying the asymptotic boundary conditions are then constructed as Fourier superpositions of the plane-wave solutions. For the massive mode $\beta$, the constraint equation \rf{linbeconstraint} imposes the additional requirement \begin{equation} b^\mu p_\mu \approx 0 . \label{bk} \end{equation} Any freely propagating massive mode is therefore constrained to have an energy-momentum vector orthogonal to the vacuum value $b_\mu$. For harmonic-mode solutions to the equations of motion, this requirement implies that the term on the right-hand side of Eq.\ \rf{axbeq} vanishes in Fourier space. The resulting dispersion law for the massive mode $\beta$ is then \begin{equation} p^\mu p_\mu \approx - 4 \kappa b^\alpha b_\alpha , \label{kdisp} \end{equation} which involves the squared-mass parameter \begin{equation} M_\beta^2 = 4 \kappa b^\alpha b_\alpha . \label{bemass} \end{equation} The sign of $M_\beta^2$ depends on whether $b_\mu$ is timelike or spacelike. The scale of $M_\beta^2$ depends on both $\kappa$ and $b$. Inspection of Eqs.\ \rf{grav4} or \rf{axbeq} shows that the limits $|M_\beta|^2 \to \infty$ or $\kappa\to \infty$ with $b_\mu$ fixed are equivalent to taking the limit of vanishing massive mode, $\beta\to 0$. The discussion in Sec.\ \ref{Bumblebee electrodynamics} then implies that the limit of large $|M_\beta|^2$ approximates Einstein-Maxwell theory. For the case of a timelike vacuum value $b_\mu$, the parameter $M_\beta^2 = -4\kappa b^2$ has the wrong sign for a physical mass. Adopting the global observer frame in which $b_\mu = (b,0,0,0)$, we see that the constraint \rf{bk} forces the energy $E$ of this mode to vanish, leaving the condition that the magnitude of the momentum must remain fixed at $|\vec p| = |M_\beta|$. 
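Explicitly, with the signature convention for which $p^\mu p_\mu = - E^2 + \vec p^{\,2}$ (the choice consistent with Eq.\ \rf{spacelikedisp} below), the constraint and dispersion law for $b_\mu = (b,0,0,0)$ read \begin{eqnarray} b^\mu p_\mu &=& b E \approx 0 \quad\Rightarrow\quad E \approx 0 , \nonumber\\ - E^2 + \vec p^{\,2} &\approx& - 4 \kappa\, b^\alpha b_\alpha = 4 \kappa b^2 = |M_\beta|^2 , \end{eqnarray} making explicit that the momentum is pinned at $|\vec p| \approx |M_\beta|$ while the energy vanishes.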
A time-independent mode with fixed spatial wavelength cannot be Fourier superposed to form a physical wave packet satisfying the asymptotic boundary conditions. It follows that no physical propagating massive mode appears when $b_\mu$ is timelike. If instead the vacuum value $b_\mu$ is spacelike, then $M_\beta^2 = 4\kappa b^2$ is positive and is a candidate for a physical mass. Choose the global observer coordinate system such that the spacelike $b_\mu$ takes the form $b_\mu = (0,0,0,b)$. The constraint \rf{bk} then requires the vanishing of the momentum along the spatial direction $b_3$, and the dispersion law \rf{kdisp} becomes \begin{equation} E^2 - (p_x^2 + p_y^2) \approx M_\beta^2 . \label{spacelikedisp} \end{equation} This suggests that $\beta$ could propagate as a harmonic plane wave along spatial directions transverse to $b_\mu$. However, these harmonic plane waves are constant in $z$ and hence fail to satisfy the asymptotic boundary conditions as $z\to \pm \infty$ unless their amplitude is zero. Since all harmonics propagate under the same constraint \rf{bk}, no Fourier superposition can be constructed to obey the boundary conditions. It follows that no physical propagating massive mode exists for the case of a spacelike vector either. The above results can also be confirmed by obtaining the eigenvalues of the Fourier-transformed equations of motion, which form a 14$\times$14 matrix determining the nature of the modes. Consider, for example, the case of purely timelike $b_\mu$. In the axial gravitational gauge, four of the modes are zero, corresponding to the four fixed degrees of freedom. Another four eigenvalues describe modes propagating via a massless dispersion law, corresponding to the two photon and two graviton modes. One eigenvalue has zero energy and sets $|\vec p| = |M_\beta|$, corresponding to the massive mode $\beta$. The remaining 5 modes are auxiliary, and all have zero energy. Related results can be obtained for the case of spacelike $b_\mu$. \subsubsection{Weak static limit} \label{Weak static limit} Although the massive mode $\beta$ is nonpropagating, it plays the role of an auxiliary field and can thereby produce various effects. For example, its presence can affect the weak static limits of the gravitational and bumblebee interactions. In particular, a nonzero $\beta$ can modify the forms of both the Newton and Coulomb potentials. To demonstrate this, we consider the weak static limit of the field equations with an external matter sector containing massive point charges. The matter Lagrange density can be taken as \begin{eqnarray} {\cal L}_{\rm M} &=& \sum_n m_n \int d\tau \left[ g_{\mu\nu} \fr {dx_n^\mu}{d\tau} \fr {dx_n^\nu}{d\tau} \right]^{1/2} \delta^4 (x - x_n(\tau)) \nonumber \\ && + \sum_n q_n \int d\tau ~ B_\mu(x) \fr {dx_n^\mu} {d\tau} \delta^4 (x - x_n(\tau)), \label{LM} \end{eqnarray} where $m_n$ and $q_n$ are the masses and charges of the point particles, respectively. The energy-momentum tensor for the matter sector is \begin{equation} T_{\rm M}^{\mu\nu} = \fr 1 e \sum_n m_n \int d\tau \fr {dx_n^\mu} {d \tau} \fr {dx_n^\nu} {d \tau} \delta^4 (x - x_n(\tau)) . \label{TMm} \end{equation} Its combination with $T_B^{\mu\nu}$ given in Eq.\ \rf{TBmunu} forms the total energy-momentum tensor, which is conserved. The matter-sector current $J_{\rm M}^\mu \equiv (\rho_q, \vec J)$ is \begin{equation} J_{\rm M}^\mu = \fr 1 e \sum_n q_n \int d\tau \fr {d x_n^\mu} {d \tau} \delta^4 (x - x_n(\tau)) . 
\label{Jq} \end{equation} It is conserved, $D_\mu J_{\rm M}^\mu = 0$. The standard route to adopting the weak static limit is to linearize the equations of motion in the fields and then discard all time derivatives. This meets no difficulty under suitable circumstances, such as the vacuum value $b_\mu$ being purely spacelike, and we follow this route in the present subsection. However, in certain cases additional care is required because the $b_\mu$-transverse components of the bumblebee field obey the axial condition $b^\mu {\cal E}^t_\mu = 0$, which can imply time dependence even for static fields. For example, as discussed in Sec.\ \ref{Bumblebee electrodynamics}, the case of purely timelike $b_\mu$ and zero $\beta$ is equivalent to electrodynamics in temporal gauge, for which the static Coulomb potential involves a linear time dependence in $\vec A$. The corresponding weak static limit with nonzero $\beta$ can therefore be expected to exhibit this feature. We return to this issue in the next subsection. To proceed, it is useful to choose a diffeomorphism gauge that simplifies the Einstein tensor and allows direct comparison between the geodesic equation and the Newton force law. One possible choice is the harmonic gauge \rf{harm}, in which the static gravitational potential $\Phi_g$ is related to the metric by $\Phi_g = -{\textstyle{1\over 2}} h_{00} = -{\textstyle{1\over 2}} \ol h_{00} + \frac 1 4 \ol h$. The linearized gravitational equations become \begin{equation} -{\textstyle{1\over 2}} \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, \ol h^{\mu\nu} \approx 8\pi G [T_{\rm M}^{\mu\nu} + T_{\rm EM}^{\mu\nu} + 4 \kappa (b^\alpha \hat b_\alpha) b^\mu b^\nu \beta] , \label{lsharm} \end{equation} where $\beta$ is the massive mode given in terms of ${\cal E}_\mu$ and $h_{\mu\nu}$ in Eq.\ \rf{massep}, and $T_{\rm EM}^{\mu\nu}$ is the zero-$\beta$ form of $T_{\rm K}^{\mu\nu}$ given in Eq.\ \rf{TBBEM}. We can now extract the relevant equations in the weak static limit. The equation for the gravitational potential $\Phi_g$ is the 00 component of the trace-reversed form of Eq.\ \rf{lsharm}, \begin{equation} \nabla^2 \Phi_g \approx 8\pi G [\ol T{}_{\rm M}^{00} + \ol T{}_{\rm EM}^{00} + 2 \kappa (b^\alpha \hat b_\alpha) (b_0^2 + \vec b^{\, 2}) \beta] , \label{lsharm2} \end{equation} where the trace-reversed energy-momentum tensors are defined by $\ol T_{\mu\nu} \equiv T_{\mu\nu} - {\textstyle{1\over 2}} \eta_{\mu\nu} T^\alpha_{\pt{\alpha}\alpha}$ and are evaluated in the static limit. The weak static bumblebee equations become \begin{eqnarray} \vec \nabla^2 {\cal E}_0 &\approx & \rho_q + 4 \kappa (b^\alpha \hat b_\alpha) b_0 \beta , \nonumber\\ \vec \nabla^2 \vec {\cal E} - \vec\nabla (\vec\nabla\cdot\vec {\cal E}) &\approx & \vec J + 4 \kappa (b^\alpha \hat b_\alpha)\vec b \beta , \label{stlinkappaeqs} \end{eqnarray} where $\rho_q$ and $\vec J$ are static charge and current sources. The above equations show that the massive mode $\beta$ can be viewed in the weak static limit as a simultaneous source of energy density, charge density, and current density. The relative weighting of the contributions is controlled by the relative sizes of the components of $b_\mu$. In fact, the same is also true of contributions to gravitomagnetic and higher-order gravitational effects arising from other components of the gravitational equations. In the interpretation as bumblebee electrodynamics, the equations show that we can expect deviations from the usual Newton and Coulomb force laws when $\beta$ is nonzero.
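As a check of the $\beta$ coefficient in Eq.\ \rf{lsharm2} (a short computation assuming signature $(-,+,+,+)$), trace-reversing the $\beta$ term of Eq.\ \rf{linTV} gives \begin{eqnarray} \ol T{}_V^{00} &=& 4 \kappa (b^\alpha \hat b_\alpha) \left[ (b^0)^2 - {\textstyle{1\over 2}} \eta^{00} b^\mu b_\mu \right] \beta \nonumber\\ &=& 2 \kappa (b^\alpha \hat b_\alpha) (b_0^2 + \vec b^{\,2}) \beta , \end{eqnarray} reproducing the source term quoted there.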
Since the effects are controlled by components of $b_\mu$, they are perceived differently from different observer frames, much like charge and current in ordinary electrodynamics. Also, the unconventional source terms are linear in $\beta$, so the solutions can be interpreted as superpositions of conventional gravitational and electrodynamic fields with effects due to $\beta$. However, since $\beta$ itself is formed as the combination \rf{massep} of the metric and bumblebee fields, the extra source terms in fact reflect contributions from the gravitational and electrodynamic fields that are absent in the Einstein-Maxwell limit. \subsubsection{Timelike case} \label{Timelike case} In this subsection, we consider in more detail the weak static limit with timelike $b_\mu$. We solve explicitly the weak static equations in the special observer frame for which $b_\mu = (b,0,0,0)$ is purely timelike, and we provide some remarks about the more general case. The special frame might be identified with the natural frame of the cosmic microwave background. However, in laboratory searches for modifications to weak static gravitation and electrodynamics, this frame can at best be an approximation because the Earth rotates and revolves about the Sun, which in turn is moving with respect to the cosmic microwave background. Nonetheless, the explicit solutions obtained here offer useful insight into the modifications to the Newton and Coulomb potentials that are introduced by the massive mode. For the purely timelike case, both $h_{\mu\nu}$ and $B_{\mu\nu}$ are expected to be time independent in the weak static limit. However, time dependence must appear in $\vec {\cal E}$ even in the static limit because as $\beta \to 0$ the match to electrodynamics arises in the temporal gauge, for which the standard Coulomb solution has a linear time dependence. For this reason, in taking the weak static limit in what follows, we keep time derivatives acting on the vector components $\vec {\cal E}$. The analysis for purely timelike $b_\mu$ could be performed in the harmonic gauge, but it turns out that choosing the transverse gauge instead results in the complete decoupling from the massive mode of nine of the 10 gravitational equations. The transverse gauge is defined by \begin{equation} \partial^j h_{0j} = 0, \qquad \partial^j h_{jk} = \frac 1 3 \partial_k h^j_{\pt{j}j}, \label{htransverse} \end{equation} where $j$, $k$ range over the spatial directions. In this gauge, $G_{00} \approx - \nabla^2 h_{00} = 2 \nabla^ 2 \Phi_g$, where $\Phi_g$ is the gravitational potential. Taking the weak static limit of the gravitational and bumblebee equations of motion in this gauge yields \begin{eqnarray} \nabla^2 \Phi_g &\approx& 4\pi G (T_{\rm M}^{00} + T_{\rm EM}^{00} - 4 \kappa b^3 \beta) , \nonumber\\ \vec \nabla^2 {\cal E}_0 - \vec \nabla \cdot \partial_0 \vec {\cal E} &\approx & \rho_q - 4 \kappa b^2 \beta , \nonumber\\ \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, \vec {\cal E} - \vec\nabla (\vec\nabla\cdot\vec {\cal E}) &\approx & \vec J . \label{trlinkappaeqs} \end{eqnarray} The other nine components of $h_{\mu\nu}$ obey equations that are identical to those of general relativity in the weak static limit. In Eq.\ \rf{trlinkappaeqs}, $T_{\rm M}^{00}$ is the static energy density in the matter sector, and $T_{\rm EM}^{00}$ is the static $00$ component of the zero-$\beta$ limit of $T_{\rm K}^{\mu\nu}$ given in Eq.\ \rf{TBBEM}. 
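The $\beta$-dependent sources in Eq.\ \rf{trlinkappaeqs} follow directly from Eqs.\ \rf{linTV} and \rf{Jbe}. As a quick check for the purely timelike case, taking $\hat b_\mu = b_\mu / b$ and signature $(-,+,+,+)$, so that $b^\alpha \hat b_\alpha = -b$ and $b^0 = -b$, gives \begin{eqnarray} T_V^{00} &=& 4 \kappa (b^\alpha \hat b_\alpha) (b^0)^2 \beta = - 4 \kappa b^3 \beta , \nonumber\\ J_B^0 &=& - 4 \kappa (b^\alpha \hat b_\alpha) b^0 \beta = - 4 \kappa b^2 \beta , \end{eqnarray} matching the extra terms in the first two of Eqs.\ \rf{trlinkappaeqs} and the charge density $\rho_B$ quoted above.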
The charge density $\rho_q$ and current density $\vec J$ are also understood to be time-independent distributions. The explicit form of the massive mode $\beta$ follows from Eq.\ \rf{massep} and is found to be \begin{equation} \beta = {\cal E}_0 - b \Phi_g . \label{betimelike} \end{equation} For purely timelike $b_\mu$, the constraint \rf{linbeconstraint} reduces to \begin{equation} \partial_0 \beta \approx 0, \label{timeindepbe} \end{equation} so $\beta$ is time independent at this order. As described in the previous subsection, $\beta$ can be interpreted as a source for $\Phi_g$ and ${\cal E}_0$. Here, $\beta$ acts as a static source of energy density and charge density with \begin{eqnarray} T_B^{\mu\nu} &=& b \rho_B \eta^{0\mu} \eta^{0\nu} , \nonumber\\ J_B^\mu &=& (\rho_B,0,0,0), \label{betastsource} \end{eqnarray} where $\rho_B = - 4 \kappa b^2 \beta$. To find the explicit modifications to the gravitational and electromagnetic potentials caused by $\beta$, we focus on the case of a point particle of mass $m$ and charge $q$ located at the origin. The source terms in this case have the form \begin{eqnarray} T_{\rm M}^{00} &=& \rho_m = m \delta^{(3)}(\vec x) , \nonumber\\ J_{\rm M}^0 &\equiv& \rho_q = q \delta^{(3)}(\vec x) , \label{pointsource} \end{eqnarray} and all other components of $T_{\rm M}^{\mu\nu}$ and $J_{\rm M}^\mu$ are zero. In the absence of Lorentz violation, the weak static solution to the Einstein-Maxwell equations with these source terms consists of the linearized Reissner-Nordstr\"om metric and corresponding electromagnetic fields. Denote by $\Phi_m$ the associated gravitational potential. It obeys \begin{equation} \vec \nabla^2 \Phi_m = 4 \pi G (\rho_m + T_{\rm EM}^{00} ). \label{RN} \end{equation} In the limit $q \rightarrow 0$, it reduces to the Newton gravitational potential and obeys the usual Poisson equation $\nabla^2 \Phi_m = 4 \pi G \rho_m$. Similarly, denote the conventional Coulomb potential by $\Phi_q$. It obeys the Poisson equation \begin{equation} \nabla^2 \Phi_q = - \rho_q. \label{fish} \end{equation} It is also convenient to express the massive mode $\beta$ in terms of a potential $\Phi_B$. We define \begin{equation} \beta \equiv \fr 1 {| M_\beta |^2} \vec \nabla^2 \Phi_B (\vec x) , \label{Phbe} \end{equation} where $| M_\beta |^2$ is the absolute value of the squared mass in Eq.\ \rf{bemass}. Note that $\Phi_B$ is time-independent due to the constraint \rf{timeindepbe}. Since $\beta$ acts as a source of static charge density given by Eq.\ \rf{betastsource}, the definition of $\Phi_B$ can be interpreted as a third Poisson equation, \begin{equation} \vec\nabla^2 \Phi_B = - \rho_B . \end{equation} The time independence of both $\beta$ and $\Phi_B$ means they are determined by their initial values. For the point particle of mass $m$ and charge $q$, the general solution for the gravitational potential $\Phi_g$ and the bumblebee field ${\cal E}_\mu$ in the weak static limit can be expressed in terms of the conventional potentials $\Phi_m$ and $\Phi_q$ and the bumblebee potential $\Phi_B$. We obtain \begin{eqnarray} \Phi_g &=& \Phi_m - 4\pi G b \Phi_B , \nonumber\\ {\cal E}_0 &=& b \Phi_m - 4\pi G b^2 \Phi_B + \fr 1 {| M_\beta |^2} \vec \nabla^2 \Phi_B , \nonumber \\ {\cal E}_j &=& t \partial_j [ \Phi_q + b \Phi_m +(1 - 4 \pi G b^2 ) \Phi_B + \fr 1 {| M_\beta |^2} \vec \nabla^2 \Phi_B ]. 
\nonumber \\ \label{Phisol} \end{eqnarray} The corresponding static gravitational field $\vec g$, static electric field $\vec E$, and static magnetic field $\vec B$ are found to be \begin{eqnarray} \vec g &=& - \vec \nabla \Phi_m + 4\pi G b \vec \nabla \Phi_B , \nonumber\\ \vec E &=& - \vec \nabla \Phi_q - \vec \nabla \Phi_B , \nonumber\\ \vec B &=& 0 . \label{EB} \end{eqnarray} The static Maxwell equations include a modified Gauss law \begin{eqnarray} \vec \nabla \cdot \vec E &=& - \vec \nabla^2 \Phi_q - \vec \nabla^2 \Phi_B \nonumber \\ &=& \rho_q + \rho_B , \label{gauss} \end{eqnarray} together with the usual static laws \begin{eqnarray} \vec \nabla \cdot \vec B &=& 0, \nonumber\\ \vec \nabla \times \vec E &=& 0, \nonumber\\ \vec \nabla \times \vec B &=& 0. \end{eqnarray} These equations demonstrate for the purely timelike case the ways in which a nonzero massive mode $\beta$ modifies the conventional static gravitational and electrodynamic fields. Compared to electrodynamics, the purely timelike KS\ bumblebee model introduces an additional degree of freedom into the problem of determining the static electric field $\vec E$. This extra degree of freedom is the massive mode $\beta$, and the extent of its effects depends on the initial conditions for $\beta$. In the absence of a satisfactory underlying theory predicting $\beta$ or of direct experimental observation of $\beta$, these initial conditions are undetermined. One natural choice is to adopt $\Phi_B = 0$ as an initial condition, which implies $\Phi_B (\vec x) \approx 0$ for all time and hence corresponds to zero massive mode $\beta$. The solution \rf{Phisol} then reduces to the weak static limit of Einstein-Maxwell theory, as expected. The gravitational potential becomes $\Phi_g = \Phi_m$. The bumblebee field reduces to ${\cal E}_\mu = (b \Phi_m, t \partial_j (\Phi_q + b \Phi_m))$, which despite the appearance of the gravitational potential implies $\vec E = - \vec \nabla \Phi_q$ and the usual Gauss and Poisson equations $\vec \nabla \cdot E = - \vec \nabla^2 \Phi_q = \rho_q$. We note in passing that conventional electrodynamics is also recovered in the limit $|M_\beta|^2 \rightarrow \infty$ because taking this limit in Eq.\ \rf{trlinkappaeqs} implies $\beta \to 0$ and hence $\Phi_B \to 0$. Thus, even with a nonzero massive mode $\beta$, a theory with a large mass parameter $|M_\beta|$ approximates Einstein-Maxwell theory in the low-energy regime, in agreement with the discussion following Eq.\ \rf{bemass}. Other choices of initial conditions on $\Phi_B$ might appear in the context of a more fundamental theory for which a bumblebee model provides an effective partial low-energy theory. One set of examples with nonzero $\Phi_B$ consists of solutions with zero matter current, $J_{\rm M}^\mu\equiv 0$. Imposing $J_{\rm M}^\mu\equiv 0$ by hand is a common approach in the literature. It implies the bumblebee field has no couplings to matter and hence is unrelated to electrodynamics, being a field that interacts only gravitationally. The massless Lorentz NG modes then propagate freely as sterile fields that carry energy and momentum but convey no direct forces on charged particles. We emphasize here that a theory of this type can nonetheless lead to modifications of the gravitational interaction, contrary to some claims in the literature. This is readily illustrated in the context of the purely timelike example considered in this subsection. 
With $J_{\rm M}^\mu\equiv 0$, the linearized solutions for a static point particle of mass $m$ are still given by Eqs.\ \rf{Phisol}, but $\Phi_q = 0$ and the fields $\vec E$ and $\vec B$ are largely irrelevant. However, the gravitational potential $\Phi_g$ is modified by the potential $\Phi_B$ for the massive mode $\beta$, with the value of $\Phi_B$ fixed by initial conditions. In the absence of additional theoretical or experimental information, the possibilities for model building are vast. One could, for example, consider initial conditions with $\Phi_B$ proportional to $\Phi_m$, \begin{equation} \Phi_B = \fr \alpha {4 \pi G b} \Phi_m , \label{scalePh} \end{equation} with $\alpha$ a constant. For this class of solutions, $\Phi_g$ has the usual form $\Phi_g = -G^* m/r$ for a point mass but with a scaled value of the Newton gravitational constant \begin{equation} G^* = ( 1 - \alpha ) G_N , \label{Gstar} \end{equation} where $G_N$ is the value of $G^*$ for $\Phi_B = 0$. This rescaling has no observable effects in the special observer frame because laboratory measurements would determine $G^*$ instead of $G_N$, although sidereal, annual, and solar motions might introduce detectable effects in realistic experiments. One could equally well consider instead examples in which $\Phi_B$ has nonlinear functional dependence on $\Phi_m$ or has other coordinate dependence. Various forms could be proposed as candidate models to explain phenomena such as dark matter or possibly dark energy. With the choice of $\Phi_B$ being conjecture at the level of the linearized equations, this approach is purely phenomenological. However, if a bumblebee model appears as part of the effective theory in a complete theory of quantum gravity, the form of $\Phi_B$ could well be fixed and predictions for dark matter and perhaps dark energy could become possible. We conclude this subsection with some comments about the case where $b_\mu$ takes a more general timelike form in the given observer frame $O$. To investigate this, one can either study directly the weak static limit of the gravitational and bumblebee equations with a charged massive point particle at rest at the origin in $O$ as before, or one can boost the observer frame $O$ to another frame $O^\prime$ in which $b_\mu$ is purely timelike but the point particle is moving. The two pictures are related by observer Lorentz transformations and are therefore physically equivalent. In the frame $O$ in which the particle is at rest, the matter energy-momentum tensor and current are given by Eq.\ \rf{pointsource} as before, but the bumblebee energy-momentum tensor $T_B^{\mu\nu}$ and current $J_B^\mu$ have additional nonzero components due to the nonzero spacelike components of $b_\mu$. The complete solution therefore involves components of $h_{\mu\nu}$ other than the gravitational potential $\Phi_g$. However, we can gain insight into the behavior of the massive mode by considering the constraint \rf{linbeconstraint} and performing an observer Lorentz boost with velocity $\vec v_B = - \vec b/b_0$ to a frame $O^\prime$ in which $b_\mu$ becomes purely timelike. Since this transformation must reduce the constraint \rf{linbeconstraint} to the purely timelike form \rf{timeindepbe}, it follows that in the original frame $O$ the constraint takes the form \begin{equation} \partial_0 \beta + \vec v_B \cdot \vec \nabla \beta = 0, \end{equation} which implies \begin{equation} \beta = \beta (\vec x - \vec v_B t). 
\end{equation} We see that the massive mode $\beta$ moves in the frame $O$ with a velocity $\vec v_B$ equal to the relative boost velocity linking $b_\mu$ to its purely timelike form. The gravitational and electrodynamic equations are therefore modified by time-dependent source terms moving with velocity $v_B$ relative to the point particle. An equivalent result is obtained by working directly in the frame $O^\prime$, in which $b_\mu$ is purely timelike and $\beta$ is therefore static, but in which all components of the matter energy-momentum tensor and current are nonzero and represent a particle moving with velocity $-\vec v_B$ relative to $\beta$. As before, the form of the massive mode $\beta$ is set by initial conditions. Since the equations are linear in either frame, the solutions consist of a superposition of conventional potentials and the massive-mode contributions with relative motion. \subsection{Linear Lagrange-multiplier potential} \label{Linear Lagrange-multiplier potential} In this subsection, we discuss the KS\ bumblebee model \rf{ksbb} with the linear Lagrange-multiplier potential \rf{VL} and the definition \rf{XB}: \begin{equation} V = V_L(\lambda, X) = \lambda (B_\mu g^{\mu\nu} B_{\nu} \pm b^2). \label{linlagv} \end{equation} Bumblebee models with this potential have been widely studied at the linearized level for about two decades \cite{ks}. Here, we compare and contrast this theory to the case with the smooth quadratic potential $V_S$. We show the Lagrange multiplier $\lambda$ produces effects in the $V_L$ model that are very similar to those of the massive mode $\beta$ in the $V_S$ model. Paralleling the treatment of the $V_S$ model in Sec.\ \ref{Smooth quadratic potential}, the mode expansion $B_\mu = b_\mu + {\cal E}_\mu$ of Eq.\ \rf{epdown} in a Minkowski background is adopted, and the fields $h_{\mu\nu}$ and ${\cal E}_\mu$ are assumed weak so that linearization can be performed. Variation of the action produces the gravitational and bumblebee equations of motion \rf{geq} and \rf{Beq}, together with the Lagrange-multiplier constraint \begin{equation} X = B_\mu g^{\mu\nu} B_\nu \pm b^2 = 0 . \label{lmconst} \end{equation} This condition enforces the vanishing of the massive mode, as discussed in Sec.\ \ref{Excitations}, which leaves only the Lorentz NG modes as possible excitations of the bumblebee field $B_\mu$. However, there is also an additional degree of freedom, the Lagrange multiplier $\lambda$ itself, that appears in the equations of motion. One might naively expect the $V_L$ model to yield solutions identical to those obtained in the infinite-mass limit $|M_\beta| \rightarrow \infty$ of the $V_S$ model. For example, one might reason that an infinite mass would make energetically impossible any field excitations away from the minimum of $V_S$, leading to the constraint \rf{lmconst}. However, the potential $V_L$ is a function of two combinations of fields, $V_L = V_L(\lambda, X)$, whereas $V_S(X)$ involves only one. The infinite-mass limit indeed suppresses $X$ excitations away from $X=0$ in $V_S$, but it contains no match for $\lambda$. There is thus an extra field degree of freedom in $V_L$ relative to the infinite-mass limit of $V_S$. For example, $V_S^\prime \rightarrow 0$ in the infinite-mass limit because no excitations of $X$ are allowed, whereas $V_L^\prime = \lambda$, which need not vanish. We therefore conclude that the correspondence between the infinite-mass limit of the $V_S$ model and the $V_L$ model can occur only when $\lambda = 0$. 
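The distinction can be summarized compactly. With the prime denoting the derivative with respect to $X$, as above, the linear Lagrange-multiplier potential obeys \begin{equation} V_L = \lambda X , \qquad V_L^\prime \equiv \fr {\partial V_L} {\partial X} = \lambda , \qquad \fr {\partial V_L} {\partial \lambda} = X , \end{equation} so on shell the constraint $X = 0$ forces the potential itself to vanish while its derivative $V_L^\prime = \lambda$ remains an undetermined field, whereas $V_S^\prime$ is driven to zero in the infinite-mass limit.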
To gain intuition about the effects associated with the Lagrange-multiplier field $\lambda$, consider its role in the equations of motion. Since the constraint \rf{lmconst} ensures no massive excitations can appear, the bumblebee energy-momentum tensor \rf{TBmunu} reduces to \begin{equation} T^{\mu\nu}_B = T^{\mu\nu}_{\rm EM} + T_V^{\mu\nu} . \label{lmTBmunu} \end{equation} We find \begin{eqnarray} T_V^{\mu\nu} &=& 2 \lambda B^\mu B^\nu \approx 2 b^\mu b^\nu \lambda, \end{eqnarray} where the last form is the leading-order contribution in the linearized limit. Similarly, the bumblebee current \rf{JB} becomes \begin{eqnarray} J_B^\mu &=& - 2 \lambda B^\mu \approx -2 b^\mu \lambda. \label{TJlm} \end{eqnarray} Conservation of the matter current $J_M^\mu$ implies the constraint \begin{equation} b^\mu \partial_\mu \lambda \approx 0. \label{lmconstraint} \end{equation} When $\lambda \to 0$, all these equations reduce to those of Einstein-Maxwell theory, in agreement with the discussion of bumblebee electrodynamics in Sec.\ \ref{Bumblebee electrodynamics}. Moreover, by comparison with Eqs.\ \rf{linTV}--\rf{linbeconstraint}, we see that the Lagrange multiplier $\lambda$ in the $V_L$ model plays a role very comparable to that of the massive mode $\beta$ in the $V_S$ model. In effect, $\lambda$ acts as an additional source of energy-momentum density and current density in the equations of motion. The propagating modes for the KS\ bumblebee model with the linear Lagrange-multiplier potential $V_L$ have been investigated elsewhere \cite{ks,bk}. The usual two graviton modes propagate as free transverse massless modes independently of $\lambda$, as do the usual two photon modes emerging from the Lorentz NG modes. Since $\lambda$ has no kinetic terms, it is auxiliary and cannot propagate. The weak static limit of the $V_L$ model can also be studied, following the lines of the discussion for the $V_S$ model in Sec.\ \ref{Timelike case}. Consider, for example, the case of purely timelike $b_\mu = (b,0,0,0)$. The constraint \rf{lmconstraint} then implies that $\lambda$ must be time independent at leading order, \begin{equation} \partial_0 \lambda \approx 0. \label{lmfixing} \end{equation} A nonzero $\lambda$ therefore acts as an additional static source of energy density and charge density, which can modify the static potentials. We again adopt the transverse gauge \rf{htransverse}, for which the gravitational potential is $\Phi_g = -h_{00}/2$. Linearizing the constraint \rf{lmconst} for the purely timelike case yields \begin{equation} {\cal E}_0 - b \Phi_g = 0 . \label{lambdaconstraint} \end{equation} As expected, this condition corresponds in the context of the $V_S$ model to enforcing the vanishing of the massive mode $\beta$. In the context of Einstein-Maxwell theory, it corresponds as before to a gauge-fixing condition that reduces to the usual temporal gauge in the absence of gravity. For a single point particle of charge $q$ and mass $m$ at rest at the origin, the energy density $\rho_m$ and charge density $\rho_q$ are given by Eq.\ \rf{pointsource}. The equations of motion in the weak static limit become \begin{eqnarray} \nabla^2 \Phi_g &\approx& 4\pi G (\rho_m + T_{\rm EM}^{00} + 2 b^2 \lambda ) , \nonumber\\ \vec \nabla^2 {\cal E}_0 - \vec \nabla \cdot \partial_0 \vec {\cal E} &\approx & \rho_q + 2 b \lambda , \nonumber\\ \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 \, \vec {\cal E} - \vec\nabla (\vec\nabla\cdot\vec {\cal E}) &\approx & 0 . 
\label{linlambdaeqs} \end{eqnarray} Comparison of these expressions with Eqs.\ \rf{trlinkappaeqs} for the $V_S$ model again reveals the correspondence between the roles of the Lagrange-multiplier field $\lambda$ and the massive mode $\beta$. With $\lambda$ acting as an effective source of charge and energy density, we can introduce a potential $\Phi_B$, \begin{equation} \lambda \equiv - \fr 1 {2 b} \nabla^2 \Phi_B , \label{lambdaPhi} \end{equation} which equivalently can be viewed as the solution to the Poisson equation \begin{equation} \nabla^2 \Phi_B = - \rho_B , \qquad \rho_B \equiv 2b\lambda. \label{lafish} \end{equation} As before, let $\Phi_m$ denote the usual gravitational potential obeying Eq.\ \rf{RN}, and let $\Phi_q$ denote the usual Coulomb potential obeying Eq.\ \rf{fish}. In terms of these three potentials, the weak-field static solutions to the equations of motion \rf{linlambdaeqs} can be written as \begin{eqnarray} \Phi_g &=& \Phi_m - 4\pi G b \Phi_B , \nonumber\\ {\cal E}_0 &=& b \Phi_m - 4\pi G b^2 \Phi_B , \nonumber \\ {\cal E}_j &=& t \partial_j [ \Phi_q + b \Phi_m +(1 - 4 \pi G b^2 ) \Phi_B ]. \label{Philambdasol} \end{eqnarray} The associated static gravitational field $\vec g$, static electric field $\vec E$, and static magnetic field $\vec B$ are all given by the same mathematical expressions as Eqs.\ \rf{EB}, but the potential $\Phi_B$ is now defined in terms of $\lambda$ according to Eq.\ \rf{lambdaPhi}. Similarly, a nonzero Lagrange-multipler field $\lambda$ yields the same modified mathematical form \rf{gauss} of the Gauss law, but the potential $\Phi_B$ and charge $\rho_B$ are now defined in terms of $\lambda$. The bumblebee potential $\Phi_B$ in Eqs.\ \rf{Philambdasol} is undetermined by the equations of motion \rf{linlambdaeqs} and must be specified as an initial value. This specification also fixes the initial value of the Lagrange multiplier $\lambda$, and the condition \rf{lmfixing} then ensures that $\lambda$ remains unchanged for all time. The situation here parallels that of the $V_S$ model, where initial conditions must be imposed that subsequently fix the bumblebee potential and the massive mode $\beta$ for all time. These results are special cases of a more general fact often overlooked in the literature: to be well defined, all bumblebee models require explicit initial conditions on the massive modes and Lagrange-multiplier fields. The subsequent development of the massive modes and Lagrange multipliers can then be deduced from the equations of motion or from derived constraints. In the absence of specified initial conditions, physical interpretations and predictions cannot be reliable. Moreover, in the absence of direct experimental observation or prediction from an underlying theory, the choice of initial conditions is largely unrestricted and can lead to widely differing effects. One natural initial condition is $\Phi_B = 0$, $\lambda = 0$. The above results then reduce to Einstein-Maxwell theory in the weak static limit, as expected. This $V_L$ model has only field excitations maintaining both $V_L=0$ and $V_L^\prime = 0$. It corresponds to the infinite-mass limit of $V_S$, which is itself equivalent to Einstein-Maxwell theory. Another possibility is to consider models without direct coupling to matter, $J_{\rm M}^\mu\equiv 0$, which implies $\rho_q \equiv 0$, $\Phi_q \equiv 0$ in the above equations. Like their $V_S$ counterparts discussed in Sec.\ \ref{Timelike case}, these models are purely gravitational and are unrelated to electrodynamics. 
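Whichever initial condition is adopted, the pattern is the same: the initial profile of $\Phi_B$ fixes $\lambda$ through Eq.\ \rf{lambdaPhi}, and the condition \rf{lmfixing} then preserves it. For the natural choice $\Phi_B = 0$ mentioned above, for example, $\lambda$ vanishes initially and hence at all times, and the solutions \rf{Philambdasol} collapse to
\begin{equation}
\Phi_g = \Phi_m ,
\qquad
{\cal E}_0 = b \Phi_m ,
\qquad
{\cal E}_j = t \partial_j (\Phi_q + b \Phi_m ) ,
\end{equation}
which are the conventional Einstein-Maxwell potentials in the gauge fixed by Eq.\ \rf{lambdaconstraint}.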
As always, initial conditions on $\Phi_B$ must be specified. One simple possibility, for example, is to choose $\Phi_B = - b \Phi_g$. This $V_L$ model is contained in the analysis of Ref.\ \cite{cli}. The spatial components of the bumblebee vanish, ${\cal E}_j = 0$, so the bumblebee field $B_\mu$ is parallel to a timelike Killing vector. The modifications to $\Phi_g$ in this example involve a rescaling of the Newton gravitational constant. Examples with both $\Phi_B$ and $\Phi_q$ nonzero can also be considered. These incorporate direct charge couplings to matter, but the static weak-field limit yields modified gravitational and Coulomb potentials given by Eq.\ \rf{Philambdasol}. One simple example is the choice $\Phi_B = - \Phi_q$. Since $\Phi_g \ne \Phi_m$, this $V_L$ model has a modified gravitational potential. The solution for the bumblebee field has the form of a total derivative, ${\cal E}_\mu = \partial_\mu (t b \Phi_g)$. In the limit $q \to 0$ and $\lambda \to 0$, it provides an example of a solution that with $\lambda = 0$ and hence $\Phi_B = 0$ has been identified in Ref.\ \cite{bb1} as potentially flawed due to the formation of shock discontinuities in ${\cal E}_\mu$. Whether or not $\lambda$ is zero, this solution is unusual. Both the field strength $F_{\mu\nu}$ and the energy-momentum tensor $T_{\rm EM}^{\mu\nu}$ vanish because the effective charge density $\rho_B$ associated with the Lagrange-multiplier field cancels the matter charge density $\rho_q$. Although the field strengths are zero, the bumblebee field ${\cal E}_\mu$ must be nonzero because the constraint \rf{lambdaconstraint} implies ${\cal E}_0 = b \Phi_g$, which cannot vanish in the presence of gravity. For a point charge the solution ${\cal E}_\mu$ does indeed contain a singularity, but this is physically unremarkable as it merely reflects the usual $1/r$ dependence in $\Phi_g$. The same behavior arises in the standard solutions of Einstein-Maxwell theory in a gauge fixed by Eq.\ \rf{lambdaconstraint}, to which the $V_L$ model is equivalent. \subsection{Quadratic Lagrange-multiplier potential} \label{Quadratic Lagrange-multiplier potential} As a final example, we consider in this subsection the specific KS\ bumblebee model \rf{ksbb} with the quadratic Lagrange-multiplier potential \rf{VQ} \begin{equation} V = V_Q(\lambda, X) = {\textstyle{1\over 2}} \lambda (B_\mu g^{\mu\nu} B_{\nu} \pm b^2)^2, \label{quadlagv} \end{equation} where $X$ is defined in Eq.\ \rf{XB}. As before, we adopt the mode expansion $B_\mu = b_\mu + {\cal E}_\mu$ of Eq.\ \rf{epdown} in a Minkowski background, and where useful we assume weak fields $h_{\mu\nu}$ and ${\cal E}_\mu$. Bumblebee models with a quadratic Lagrange-multiplier potential have not previously been considered in detail in the literature. We introduce them here partly as a foil for the $V_L$ case, in which the Lagrange multiplier plays a key role in the physics of the model. In contrast, the Lagrange multiplier for the potential $V_Q$ decouples entirely from the classical dynamics. The point is that variation with respect to $\lambda$ yields the constraint $X^2=0$, which is equivalent to $X=0$ and forces the massive mode to vanish. However, the quadratic nature of $V_Q$ means that $\lambda$ always appears multiplied by $X$ in the gravitational and bumblebee field equations, so the on-shell condition $X=0$ forces the field $\lambda$ to decouple. 
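A one-line variation makes the decoupling explicit:
\begin{equation}
\frac{\partial V_Q}{\partial\lambda} = {\textstyle{1\over 2}} X^2 = 0
\;\;\Longrightarrow\;\; X = 0 ,
\qquad
V_Q^\prime \equiv \frac{\partial V_Q}{\partial X} = \lambda X ,
\qquad
\frac{\partial V_Q}{\partial B^\mu} = 2 \lambda X B_\mu ,
\end{equation}
so every term through which $\lambda$ could enter the gravitational and bumblebee equations of motion carries at least one factor of $X$ and therefore vanishes on shell, whatever the value of $\lambda$.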
Also, the potential obeys both $V_Q =0$ and $V_Q^\prime = 0$ on shell, so the bumblebee energy-momentum tensor and bumblebee current reduce to the standard expressions for electrodynamics in curved spacetime. The equations of motion are therefore equivalent to those of Einstein-Maxwell electrodynamics in the gauge $X=0$. This correspondence holds both in the linearized limit and in the full nonlinear theory. Since the equations of motion generated by a theory with a quadratic Lagrange-multiplier potential are the equations in the absence of the constraint plus the constraint itself, introducing this type of potential is equivalent at the classical level to imposing the constraint by hand on the equations of motion. The only distinction between the two approaches is the presence of the decoupled Lagrange-multiplier field. Bumblebee models with the potential $V_Q$ therefore incorporate a variety of models in which constraints are imposed by hand. For example, Will and Nordtvedt have considered solutions involving a nonzero background value for a vector in bumblebee-type models with Lagrange density of the form \rf{bb} but without a potential term \cite{wn}. This approach produces equations of motion identical to those of the corresponding $V_Q$ model. The $V_Q$ models are also related to theories in which the constraint is substituted into the action before varying, although the correspondence is inexact. For example, Nambu has investigated the Maxwell action in Minkowski spacetime using the constraint $X=0$ for the purely timelike case as a nonlinear gauge condition substituted into the action prior to the variation \cite{yn}. This represents gauge fixing at the level of the action, and it yields a total of four equations for the fields: the original constraint and three equations of motion from the variation. However, the corresponding $V_Q$ model yields five equations of motion instead, one of which is the constraint. The extra equation is the Gauss law, which in the Nambu approach is imposed as a separate initial condition that subsequently holds at all times by virtue of the three equations of motion and the constraint. For the specific KS\ bumblebee model \rf{ksbb} with potential $V_Q$ in Eq.\ \rf{quadlagv}, the freely propagating modes can be found by linearizing the gravitational and bumblebee equations of motion \rf{geq} and \rf{Beq} and the constraint $X=0$. The linearization generates the same equations as emerge for the $V_L$ model with potential \rf{linlagv} in the limit $\lambda \to 0$. The propagating modes consist of the usual graviton and photon in an axial gauge \cite{bk}, as expected. Similarly, the weak static limit of the $V_Q$ model with potential \rf{quadlagv} produces equations identical at linear level to those of the $V_L$ model with potential \rf{linlagv} in the limit $\lambda \to 0$. The usual gravitational and electromagnetic Poisson equations therefore emerge, and the correct Newton and Coulomb potentials hold at the linearized level. The exact correspondence between the static limit of the $V_Q$ bumblebee model and the Newton and Coulomb potentials involves the nonlinear constraint \rf{lmconst}. The explicit forms of the solutions are therefore also nonlinear. For example, consider again the case of a point particle of mass $m$ and charge $q$ at rest at the origin in the presence of a purely timelike vacuum value $b_\mu = (b,0,0,0)$. 
The equation of motion for $\lambda$ is the constraint \rf{lmconst}, which represents a nonlinear condition relating ${\cal E}_\mu$ and $h_{\mu\nu}$. Expanding this constraint through quadratic order gives \begin{equation} {\cal E}_0^2 +2 b {\cal E}_0 - 2 b^2 \Phi_g - 4 b \Phi_m {\cal E}_0 - {\cal E}_j^2 \approx 0 , \label{minquadV} \end{equation} where $\Phi_m$ obeys the conventional Poisson equation \rf{RN} for the point source with mass and charge density given by Eq.\ \rf{pointsource}. In the transverse gauge, the gravitational potential $\Phi_g$ obeys \begin{equation} \vec \nabla^2 \Phi_g \approx 4\pi G (\rho_m + T_{\rm EM}^{00}) , \label{EMeq} \end{equation} where the energy-momentum tensor $T_{\rm EM}^{00}$ has the usual Einstein-Maxwell form \rf{TBBEM}. The solution for $\Phi_g$ is therefore the standard gravitational potential for a static point charge in a curved spacetime in the weak-field limit. At quadratic order, the solution ${\cal E}_\mu = ({\cal E}_0, {\cal E}_j)$ satisfying the constraint \rf{minquadV} and the field equations is found to be \begin{eqnarray} {\cal E}_0 &\approx & b \Phi_g + \fr {3b} 2 \Phi_g^2 + \fr {t^2} {2b} [\partial_j (\Phi_q + b \Phi_g)]^2 , \nonumber \\ {\cal E}_j &\approx & t \partial_j (\Phi_q + b \Phi_g) + 3bt \Phi_g \partial_j \Phi_g \nonumber \\ && + \fr {t^3} {3b} [\partial_k (\Phi_q + b \Phi_g)] \partial_j \partial_k (\Phi_q + b \Phi_g) , \label{quadep} \end{eqnarray} where $\Phi_q$ is the conventional Coulomb potential obeying the Poisson equation \rf{fish}. These expressions are both time dependent and nonlinear, but they nonetheless generate the usual static electric field $\vec E$ and magnetic field $\vec B$ for a point charge. Explicit calculation to quadratic order yields \begin{equation} \vec E \approx - \vec \nabla \Phi_q , \qquad \vec B \approx 0 , \label{EBfields2} \end{equation} which implies the usual form $\vec \nabla \cdot \vec E \approx \rho_q$ of the Gauss law. From these equations, we see again that Einstein-Maxwell theory is recovered despite the absence of U(1) symmetry. In effect, the nonlinear condition \rf{minquadV} plays the role of a nonlinear gauge-fixing condition in a U(1)-invariant theory, removing a degree of freedom and leaving only NG modes that propagate as photons and generate the usual Coulomb potential in the weak-field limit. \section{Summary} \label{Summary} In this paper, we have investigated the properties of the massive modes that can emerge from spontaneous local Lorentz and diffeomorphism breaking. In Riemann spacetime, no massive modes of the conventional Higgs type can appear because covariant kinetic terms involve connections with derivatives. However, an alternative form of the Higgs mechanism can occur instead, in which massive modes originate from quadratic couplings in the potential $V$ inducing the symmetry breaking. Section \ref{Massive Modes} provides an analysis of this alternative Higgs mechanism in the general context of an arbitrary tensor field. Both smooth potentials and Lagrange-multiplier potentials are considered. Massive modes appear for a smooth potential when excitations with $V^\prime \ne 0$ exist, and they are formed as combinations of field and metric fluctuations. For Lagrange-multiplier potentials the massive modes are constrained to vanish, but the Lagrange multiplier fields can play a related role. 
The propagation of the massive modes depends on the nature of the kinetic terms in the theory, and the requirements of unitarity and ghost-free propagation constrain possible models. Even if the massive modes are nonpropagating, they can influence gravitational phenomena through, for example, effects on the static gravitational potential. These modifications are of potential interest in alternative theories of gravity and descriptions of phenomena such as dark matter or dark energy. Following the general treatment, we investigate in Sec.\ \ref{Bumblebee Models} a broad class of theories called bumblebee models that involve gravitationally coupled vector fields with spontaneous local Lorentz and diffeomorphism breaking. For arbitrary quadratic kinetic terms, the Lagrange density is given in Eq.\ \rf{bb}. Along with the symmetry-breaking potential $V$ and a matter sector, this Lagrange density involves five parameters, four of which can be linearly independent in specific models. A particularly attractive class of theories consists of the KS\ bumblebee models, which have a kinetic term of the Maxwell form and hence an additional constraint that minimizes problems with unitarity and ghosts. These models also offer candidate alternatives to Einstein-Maxwell electrodynamics. In a series of subsections and the associated appendix, we provide some results valid for general bumblebee models. The observer and particle forms of local Lorentz transformations and diffeomorphisms are presented. Using the vierbein formalism, some decompositions of the bumblebee field and metric are given that are suitable for the identification of Lorentz and diffeomorphism NG modes. The effects of various choices of Lorentz and diffeomorphism gauges are described. Alternative decompositions used in some of the literature are also discussed, in which the Lorentz NG modes are hidden and only spacetime variables are used. To provide explicit examples and to gain insight via a more detailed analysis, we focus attention in Sec.\ \ref{Examples} on the class of KS\ bumblebee models. The basic equations of motion and conservation laws are obtained, and some properties of the bumblebee currents are considered. The interpretation of these bumblebee models as theories of electromagnetism and gravity, known as bumblebee electrodynamics, is discussed. They contain four transverse massless modes, two of which are massless gravitons and two of which are massless photons, along with a massive mode or Lagrange-multiplier mode. When the massive mode or Lagrange multiplier vanishes, or in the limit of infinite mass, conventional Einstein-Maxwell theory in an axial gauge is recovered. Section \ref{Examples} also contains subsections considering in more detail various types of potentials, including the smooth potential $V_S$ in Eq.\ \rf{smoothlagv}, the linear Lagrange-multiplier potential $V_L$ in Eq.\ \rf{linlagv}, and the quadratic Lagrange-multiplier potential $V_Q$ in Eq.\ \rf{quadlagv}. For the $V_S$ model, the gravitational and bumblebee equations of motion are investigated to determine whether a physical massive mode can propagate as a free field. The sign of the squared mass term depends on whether the bumblebee vacuum value $b_\mu$ is timelike or spacelike. In the timelike case, the massive mode is a ghost, while in the spacelike case the squared mass has the usual sign.
However, in both cases the dispersion law is unconventional and no localized physical solutions satisfying suitable asymptotic boundary conditions exist, so the $V_S$ model has no freely propagating massive modes. Nonetheless, the massive mode has a physical impact as an auxiliary field, acting as an additional source of energy-momentum density and current density. In the weak static limit, for example, the solutions to the equations of motion in the presence of mass and charge describe modified Newton and Coulomb potentials according to Eq.\ \rf{Phisol}. These may be of phenomenological interest in the context of dark matter and perhaps also dark energy. The effects are controlled by the massive mode, but its form is dynamically undetermined and must be imposed via initial conditions. Many of the results obtained for the $V_S$ model apply also to the $V_L$ and $V_Q$ models that have Lagrange-multiplier fields. For example, all the models contain four massless propagating modes behaving like gravitons and photons. Although the $V_L$ and $V_Q$ models generate an additional constraint that eliminates the massive mode on shell, the Lagrange-multiplier field appears instead as an extra degree of freedom playing a similar role to that of the massive mode in the $V_S$ model. The form of the Lagrange multiplier must be set by initial conditions, and if it is chosen to vanish then Einstein-Maxwell theory is recovered. The key difference between the $V_L$ and $V_Q$ models lies in the role of the Lagrange multiplier, which can affect the physics as an auxiliary mode in the $V_L$ model but which decouples from the theory in the $V_Q$ model. In the context of bumblebee electrodynamics, the massive mode or Lagrange-multiplier field acts as an additional degree of freedom relative to Einstein-Maxwell theory. The extra freedom arises because the bumblebee model has no U(1) gauge symmetry, and the structure of the kinetic terms implies that the freedom must be fixed as an initial condition. In the absence of experimental evidence for a massive mode or of guidance from an underlying theory, the choice of initial condition is largely arbitrary. A natural choice sets the massive mode or Lagrange-multiplier field to zero, reducing the theory to Einstein-Maxwell electrodynamics up to possible SME matter-sector couplings. Bumblebee electrodynamics therefore provides a candidate alternative explanation for the existence of massless photons, based on the masslessness of NG modes instead of the usual gauge symmetry. In any case, the possibility of spontaneous breaking of local Lorentz and diffeomorphism symmetry remains a promising avenue for exploring physics emerging from the Planck scale. \section*{Acknowledgments} This work was supported in part by DOE grant DE-FG02-91ER40661, NASA grant NAG3-2914, and NSF grant PHY-0554663.
\section{introduction} Experimental data on the quark and lepton masses and mixing provide important clues to the nature of new physics beyond the Standard Model (SM). However, in the SM the Yukawa coupling constants, which are responsible for the fermion masses and mixing, can be freely adjusted without disturbing the internal consistency of the theory, so one must rely on experiments to fix their values. The origin of fermion mass hierarchies and flavor mixing is a longstanding puzzle in particle physics. Family symmetry is a fascinating approach to this issue. Current data strongly suggest that there should be a new symmetry that acts horizontally across the three standard model families\cite{review}. Ideally, only the top quark Yukawa coupling is allowed by this symmetry, and all the remaining couplings are generated as this symmetry is spontaneously broken. In the original work of Froggatt and Nielsen, a continuous Abelian $U(1)$ was suggested as the flavor symmetry, whose spontaneous breaking produces the correct orders of the quark mass hierarchies and Cabibbo-Kobayashi-Maskawa (CKM) matrix elements\cite{Froggatt:1978nt}. Models with various horizontal symmetries, gauged or global, continuous or discrete, Abelian or non-Abelian, have been proposed\cite{horizontal}. Recently, it has been found that the discrete group $A_4$ is especially suitable for deriving the so-called tri-bimaximal (TB) mixing\cite{TBmix} in the lepton sector in a natural way\cite{Ma:2001dn,Babu:2002dz,Ma:2004zv,Altarelli:2005yp,Chen:2005jm,Zee:2005ut,Altarelli:2005yx,He:2006dk,Ma:2006sk,King:2006np, Morisi:2007ft,Bazzocchi:2007na,Bazzocchi:2007au,Lavoura:2007dw,Brahmachari:2008fn,Altarelli:2008bg,Bazzocchi:2008rz}. The left-handed electroweak lepton doublets $l_i(i=1,2,3)$ transform as an $A_4$ triplet, the right-handed charged leptons $e^{c}$, $\mu^{c}$ and $\tau^{c}$ transform as $\mathbf{1}$, $\mathbf{1}''$ and $\mathbf{1}'$ respectively, and two triplets $\varphi_{T}$ and $\varphi_{S}$ and a singlet $\xi$ are introduced to break the $A_4$ symmetry spontaneously\cite{Altarelli:2005yx}. If we adopt for quarks the same classification scheme under $A_{4}$ that we have used for leptons, an identity CKM mixing matrix is obtained at the leading order, which is a good first-order approximation. The non-leading corrections in the up and down quark sectors almost exactly cancel in the mixing matrix. It therefore seems very difficult to implement $A_4$ as a family symmetry for both the quark and lepton sectors. The double tetrahedral group $T'$ has three inequivalent irreducible doublet representations $\mathbf{2}$, $\mathbf{2}'$, $\mathbf{2}''$ in addition to the triplet representation $\mathbf{3}$ and the three singlet representations $\mathbf{1}$, $\mathbf{1}'$, $\mathbf{1}''$ that it shares with $A_4$. Furthermore, the Kronecker products of the triplet and singlet representations are identical to those of $A_4$. Therefore $T'$ can reproduce the success of $A_4$ model building in the lepton sector, and $T'$ as a family symmetry for both quarks and leptons has been considered\cite{Carr:2007qw,Feruglio:2007uu,Chen:2007afa,Frampton:2007et,Aranda:2007dp,Frampton:2007gg,Frampton:2007hs}. In Ref.\cite{Feruglio:2007uu} a supersymmetric (SUSY) model with $T'\otimes Z_3\otimes U(1)_{FN}$ flavor symmetry is presented, which is identical to $A_4$ in the lepton sector. The quark doublet and the antiquarks of the third generation transform as $\mathbf{1}$ under $T'$, while the other quark doublets and antiquarks transform as $\mathbf{2}''$.
TB mixing is derived naturally as in the $A_4$ model, whereas only the masses of the second and third generation quarks and the mixing between them are generated at the leading order. The masses and mixing angles of the first generation quarks are induced by higher dimensional operators. The authors of Ref.\cite{Chen:2007afa} built a model with $T'\otimes Z_{12}\otimes Z_{12}$ flavor symmetry in the context of SU(5) grand unification. Both the quarks and leptons are assumed to transform as $\mathbf{2}\oplus\mathbf{1}$ under $T'$ in Ref.\cite{Aranda:2007dp}. A renormalizable model with $T'\otimes Z_2\otimes Z'_2\otimes Z''_2$ flavor symmetry is presented in Ref.\cite{Frampton:2007hs}, where the flavor symmetry breaking scale is very low, in the range 1 GeV-10 GeV. The $T'$ symmetry can replicate the success of the $A_4$ model and allows the heavy third family to be treated differently; therefore $T'$ is a very promising flavor symmetry for understanding the origin of fermion mass hierarchies and flavor mixing. In this work we shall build a SUSY model based on the $T'\otimes Z_3 \otimes Z_9$ flavor symmetry, in which the transformation rules of $l_i$, $e^{c}$, $\mu^{c}$ and $\tau^{c}$ are the same as those in the $A_4$ model\cite{Altarelli:2005yx}. In the quark sector, we exploit the singlet and doublet representations. The fermion mass hierarchies are generated via the spontaneous breaking of the discrete flavor symmetry, in contrast with Ref.\cite{Altarelli:2005yx,Feruglio:2007uu}. The Yukawa matrices of the up and down quarks have the same textures as those in the well-known U(2) flavor theory\cite{u2f}. The hierarchies in the masses of the known quarks and leptons, the realistic pattern of CKM matrix elements and the TB mixing are naturally produced. The paper is organized as follows. In section II we present the current experimental data and the parameterizations of fermion mass hierarchies and flavor mixing. A short review of the model with U(2) flavor symmetry is given in section III. In section IV a model with $T'\otimes Z_3\otimes Z_9$ flavor symmetry is constructed, and its basic features and predictions are discussed. We present the vacuum alignment and the subleading corrections to the leading order results in section V and section VI, respectively. We summarize our results in section VII. Appendix A gives the basic properties of the $T'$ group. The corrections to the vacuum alignment induced by higher dimensional operators are discussed in Appendix B. \section{current experimental data on fermion mass hierarchies and flavor mixing and their parameterizations} The observed fermion mass hierarchy is apparent in the quark sector. The masses of the up type quarks are\cite{pdg} \begin{eqnarray} \nonumber m_{u}&\simeq& 1.5-3 \rm{MeV}\\ \nonumber m_{c}&\simeq& 1.16-1.34 \rm{GeV}\\ \label{1} m_{t}&\simeq& 170.9-177.5 \rm{GeV} \end{eqnarray} and the masses of the down type quarks are \begin{eqnarray} \nonumber m_{d}&\simeq& 3-7\rm{MeV}\\ \nonumber m_{s}&\simeq& 70-120 \rm{MeV}\\ \label{2} m_{b}&\simeq& 4.13-4.27 \rm{GeV} \end{eqnarray} We note that all the quark masses except the top quark mass are given in the $\overline{\rm{MS}}$ scheme. The light $u$, $d$, $s$ quark masses are estimates of the so-called current quark masses at a scale of about 2 GeV.
There is some ambiguity in the measurement of the absolute quark masses since they are scheme dependent, but the ratios of the masses are more concrete \begin{eqnarray} \nonumber&& \frac {m_{u}}{m_{d}}\simeq 0.3-0.6\\ \nonumber&& \frac{m_{s}}{m_{d}}\simeq 17-22\\ \label{3}&&\frac{m_{s}-(m_{u}+m_{d})/2}{m_{d}-m_{u}}\simeq 30-50 \end{eqnarray} The masses of the charged leptons have been measured much more unambiguously than the quark masses. The charged lepton sector is also seen to exhibit a large mass hierarchy. Their masses are measured to be\cite{pdg} \begin{eqnarray} \nonumber m_{e}&\simeq& 0.511 \rm{MeV}\\ \nonumber m_{\mu}&\simeq& 105.7 \rm{MeV}\\ \label{4} m_{\tau}&\simeq& 1777 \rm{MeV} \end{eqnarray} The $e$, $\mu$ and $\tau$ masses are the pole masses, and their mass hierarchy is similar to that in the down type quark sector. Including the renormalization group equation evolution, the fermion mass ratios at the GUT scale are parameterized in terms of the Cabibbo angle $\lambda\simeq 0.23$ as follows\cite{Fusaoka:1998vc,Ross:2007az,Xing:2007fb} \begin{eqnarray} \nonumber &&\frac{m_{u}}{m_{t}}\sim \lambda^{8},~~~~~~~~\frac{m_{c}}{m_{t}}\sim\lambda^{4},\\ \nonumber &&\frac{m_{d}}{m_{b}}\sim \lambda^{4},~~~~~~~~\frac{m_{s}}{m_{b}}\sim\lambda^{2},\\ \nonumber &&\frac{m_{e}}{m_{\tau}}\sim \lambda^{4},~~~~~~~~\frac{m_{\mu}}{m_{\tau}}\sim\lambda^{2}\\ \label{5}&&\frac{m_b}{m_t}\sim\lambda^{3} \end{eqnarray} Recent precision measurements have greatly improved the knowledge of the CKM matrix, the experimental constraints on the CKM mixing parameters are\cite{pdg} \begin{equation} \label{6}|V^{\rm{Exp}}_{\rm{CKM}}|\simeq\left(\begin{array}{ccc} 0.97377\pm0.00027&0.2257\pm0.0021&(4.31\pm0.30)\times10^{-3}\\ 0.230\pm0.011&0.957\pm0.095&(41.6\pm0.6)\times10^{-3}\\ (7.4\pm0.8)\times10^{-3}&(40.6\pm2.7)\times10^{-3}&>0.78 \;at\; 95\%\;CL \end{array}\right) \end{equation} The hierarchy in the quark mixing angles is clearly presented in the Wolfenstein's parameterization of the CKM matrix\cite{pdg}. Considering the scaling factor associated with the renormalization group evolution of the CKM mixing angles from the electroweak scale to the high scale, the magnitudes of the CKM matrix elements are given in powers of $\lambda$ as follows \begin{equation} \label{7} |V_{us}|\sim\lambda\,,~~~|V_{cb}|\sim\lambda^2\,,~~~|V_{td}|\sim\lambda^{3},~~~|V_{ub}|\sim\lambda^{4} \end{equation} Observations in the neutrino sector currently provide the strongest indication for physics beyond the standard model. Including the new data released by the MINOS and KamLAND collaborations, the global fit of neutrino oscillation data at 2$\sigma$ indicates the following values for the lepton mixing angles\cite{fit} \begin{equation} \label{8}0.28\leq\sin^2\theta_{12}\leq0.37,~~~0.38\leq\sin^2\theta_{23}\leq0.63,~~~\sin^2\theta_{13}\leq0.033 \end{equation} and the best fit values are\cite{fit} \begin{equation} \label{9}\sin^2\theta_{12}=0.32,~~~\sin^2\theta_{23}=0.50,~~~\sin^2\theta_{13}=0.007 \end{equation} The current data within 1$\sigma$ is well approximated by the so-called TB mixing\cite{TBmix} \begin{equation} \label{10}U_{\rm{TB}}=\left(\begin{array}{ccc} \sqrt{\frac{2}{3}}&\frac{1}{\sqrt{3}}&0\\ -\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&-\frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}} \end{array}\right) \end{equation} which predicts $\sin^2\theta_{12,\rm{TB}}=\frac{1}{3}$, $\sin^2\theta_{23,\rm{TB}}=\frac{1}{2}$ and $\sin^2\theta_{13,\rm{TB}}=0$. 
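As a quick numerical illustration (this check is not part of the original analysis), the TB values quoted above follow directly from the moduli of the entries of $U_{\rm{TB}}$, using the standard extraction $\sin^2\theta_{13}=|U_{e3}|^2$, $\sin^2\theta_{12}=|U_{e2}|^2/(1-|U_{e3}|^2)$ and $\sin^2\theta_{23}=|U_{\mu3}|^2/(1-|U_{e3}|^2)$; a few lines of Python suffice to verify them:
\begin{verbatim}
import numpy as np

# Tri-bimaximal mixing matrix of Eq. (10)
U = np.array([[ np.sqrt(2./3.), 1./np.sqrt(3.),  0.            ],
              [-1./np.sqrt(6.), 1./np.sqrt(3.), -1./np.sqrt(2.)],
              [-1./np.sqrt(6.), 1./np.sqrt(3.),  1./np.sqrt(2.)]])

assert np.allclose(U @ U.T, np.eye(3))   # U is real and orthogonal

s13sq = U[0, 2]**2
s12sq = U[0, 1]**2 / (1. - s13sq)
s23sq = U[1, 2]**2 / (1. - s13sq)
print(s12sq, s23sq, s13sq)               # -> 0.3333..., 0.5, 0.0
\end{verbatim}
The output lies well inside the 2$\sigma$ ranges of Eq.(\ref{8}).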
\section{brief review on the theory with U(2) flavor symmetry} We shall briefly review the theory with U(2) flavor symmetry in the following, which has been described in detail in the literature\cite{u2f}. The three generations of the matter fields are assigned to transform as $\mathbf{2}\oplus\mathbf{1}$, so the sfermions of the first two generations are exactly degenerate in the limit of unbroken U(2). At low energy, this degeneracy is lifted by the small symmetry breaking parameters which determine the light fermion Yukawa couplings; therefore the flavor changing neutral current (FCNC) and CP violating phenomena are sufficiently suppressed, and the corresponding experimental bounds are not violated. Three flavon fields $\phi^{a}$, $S^{ab}$ and $A^{ab}$ $(a,b=1,2)$ are introduced, where $\phi$ is a U(2) doublet, while $S$ and $A$ are symmetric and antisymmetric tensors transforming as a U(2) triplet and singlet, respectively. The hierarchies in the fermion masses and mixing angles arise from the two-step flavor symmetry breaking \begin{equation} \label{11}{\rm{U(2)}}\stackrel{\epsilon}{\rightarrow}{\rm{U(1)}}\stackrel{\epsilon'}{\rightarrow}{\rm nothing} \end{equation} where both $\epsilon$ and $\epsilon'$ are small parameters with $\epsilon>\epsilon'$. Both $\phi^{a}$ and $S^{ab}$ participate in the first stage of symmetry breaking ${\rm{U(2)}}\stackrel{\epsilon}{\rightarrow}{\rm{U(1)}}$ with $\langle\phi^1\rangle=0$, $\langle S^{11}\rangle=\langle S^{12}\rangle=\langle S^{21}\rangle=0$, $\langle\phi^2\rangle={\cal O}(\epsilon)$ and $\langle S^{22}\rangle={\cal O}(\epsilon)$. The last stage of symmetry breaking is accomplished by $A^{ab}$ with $\langle A^{12}\rangle=-\langle A^{21}\rangle={\cal O}(\epsilon')$. The different mass hierarchies in the up sector and the down sector can be understood from the combination of the U(2) flavor symmetry and grand unified symmetries\cite{u2f}; the Yukawa matrices then have the following textures \begin{eqnarray} \nonumber Y_{U}&=&\left(\begin{array}{ccc} 0&\epsilon'\rho&0\\ -\epsilon'\rho&\epsilon\rho'&x_{u}\epsilon\\ 0&y_{u}\epsilon&1 \end{array}\right)\zeta\\ \label{12}Y_{D,E}&=&\left(\begin{array}{ccc} 0&\epsilon'&0\\ -\epsilon'&(1,\pm3)\epsilon&(x_{d},x_e)\epsilon\\ 0&(y_{d},y_{e})\epsilon&1 \end{array}\right)\varsigma \end{eqnarray} where $x_i,y_i={\cal O}(1)$ and $\varsigma\ll\zeta$. The model with U(2) flavor symmetry successfully accounts for the quark masses, the charged lepton masses and the CKM mixing angles, and the phenomenological constraints from FCNC and CP violation are satisfied. It has been shown that flavor models based on the $T'$ symmetry can reproduce the Yukawa matrices of the U(2) flavor theory\cite{qtp}. However, these models predicted the excluded small-mixing-angle solution in the lepton sector. In the following we will use the triplet representation in the lepton sector to derive the TB mixing naturally, while singlet and doublet representations are exploited in the quark sector, so that the Yukawa matrices of the U(2) model are generated at the leading order. Both the vacuum alignment and the next-to-leading-order corrections, which are crucial to flavor model building, are discussed; these issues were omitted in Ref.\cite{qtp}. \section{the SUSY model with $T'\otimes Z_3\otimes Z_9$ flavor symmetry} In our scheme, the symmetry group is $SU(3)_c\otimes SU(2)_L\otimes U(1)_Y\otimes G_F$, where $G_F$ is the global flavor symmetry group $G_F=T'\otimes Z_3\otimes Z_9$.
The $Z_3$ symmetry is to guarantee the correct misalignment in flavor space between the neutrino masses and the charged lepton masses as in Ref.\cite{Altarelli:2005yx,Feruglio:2007uu}, and $Z_9$ is crucial to obtain the realistic hierarchies in the fermion masses and mixing angles. In addition to the minimal supersymmtric standard model (MSSM) matter fields, we need to introduce the fields which are responsible for the flavor symmetry breaking, we refer to these fields as flavons which are gauge singlets. Both the MSSM fields and the flavon fields and their transformation properties under $T'\otimes Z_3\otimes Z_9$ are shown in Table \ref{table1}, where $\alpha$ and $\beta$ are respectively the generators of $Z_3$ and $Z_9$ with $\alpha=\exp[i2\pi/3]$ and $\beta=\exp[i2\pi/9]$. Note that although the flavons $\theta'$ and $\chi$ are not involved in the leading order Yukawa superpotential, they play an important role in the vacuum alignment mechanism. \begin{table}[hptb] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c||c|c|c|c|c|c|c|c|c|}\hline\hline Fields& $~\ell~$ & $~e^{c}~$ & $~\mu^{c}~$ & $~\tau^{c}~$ & $Q_L$ & $U^{c}$ & $D^{c}$& $~Q_3~$ & $~t^{c}~$ & $~b^{c}~$ & $H_{u,d}$& $\varphi_{T}$ & $\varphi_{S}$ & $~\xi,\tilde{\xi}~$ & ~$\phi$~ & ~$\theta''$~ & ~$\theta'$~ &~$\Delta$~ & ~$\bar{\Delta}$~ & ~$\chi$~\\\hline $T'$& 3 & $\mathbf{1}$& $\mathbf{1}''$ & $\mathbf{1}'$ & $\mathbf{2}'$ & $\mathbf{2}$& $\mathbf{2}$ & $\mathbf{1}''$ & $\mathbf{1}'$ & $\mathbf{1}'$& $\mathbf{1}$& $\mathbf{3}$ &$\mathbf{3}$& $\mathbf{1}$ & $\mathbf{2}'$ & $\mathbf{1}''$ & $\mathbf{1}'$ & $\mathbf{1}$&$\mathbf{1}$ &$\mathbf{1}$\\\hline $Z_{3}$& $\alpha$ & $\alpha^2$ & $\alpha^2$ & $\alpha^2$ & $\alpha$ &$\alpha^2$ & $\alpha^2$ &$\alpha$ & $\alpha^2$ & $\alpha^2$ &$\mathbf{1}$& $\mathbf{1}$ & $\alpha$ & $\alpha$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ \\\hline $Z_{9}$&$\mathbf{1}$ &$\mathbf{1}$ & $\beta^6$ & $\beta^8$ & $\beta^3$ & $\beta^3$& $\beta$& $\mathbf{1}$ & $\mathbf{1}$ & $\beta^7$ &$\mathbf{1}$& $\beta$ & $\mathbf{1}$ & $\mathbf{1}$ & $\beta^6$ & $\beta$ &$\beta$ & $\beta^2$& $\beta^4$ &$\beta$\\\hline\hline \end{tabular} \end{center} \caption{\label{table1}The transformation rules of the MSSM fields and the flavon fields under the flavor symmetry $T'\otimes Z_3\otimes Z_9$. We denote $Q_L=(Q_1,Q_2)^T$, where $Q_1=(u_L,d_L)^T$ and $Q_2=(c_L,s_L)^T$ are the electroweak $SU(2)_L$ doublets of the first two generations. $U^c=(u^c,c^c)^T$ and $D^c=(d^c,s^c)^T$, $Q_L$, $U^c$ and $D^c$ are $T'$ doublets. $Q_3=(t_L,b_L)^T$ is the electroweak $SU(2)_L$ doublet of the third generation, $Q_3$, $t^c$ and $b^c$ are $T'$ singlets . The up type and down type Higgs transform as a singlet under the flavor group. } \end{table} As we shall demonstrate in section V, at the leading order, the scalar components of the flavon supermultiplets $\varphi_{T}$, $\varphi_{S}$ etc. develop vacuum expectation values(VEV) along the following diractions \begin{eqnarray} \nonumber &&\langle \varphi_{T}\rangle=(v_{T},0,0),~~~\langle \varphi_{S}\rangle=(v_S,v_S,v_S),~~~\langle\phi\rangle=(v_1,0),\\ \nonumber&&\langle\xi\rangle=u_{\xi},~~~\langle\tilde{\xi}\rangle=0,~~~\langle\theta'\rangle=u'_{\theta},~~~\langle\theta''\rangle=u''_{\theta},\\ \label{13}&&\langle\Delta\rangle=u_{\Delta},~~~\langle\bar{\Delta}\rangle=\bar{u}_{\Delta},~~~\langle\chi\rangle=u_{\chi} \end{eqnarray} The electroweak symmetry is broken by the up and down type Higgs with $\langle H_{u,d}\rangle=v_{u,d}$. 
As we shall see in the following, in order to obtain the realistic pattern of charged fermion masses and mixing angles, these VEVs should be of the orders \begin{equation} \label{14}|\frac{v_{T}}{\Lambda}|\approx|\frac{v_{S}}{\Lambda}|\approx|\frac{v_1}{\Lambda}|\sim\lambda^2,~~~~|\frac{u'_{\theta}}{\Lambda}|\approx|\frac{u''_{\theta}}{\Lambda}|\approx|\frac{u_{\Delta}}{\Lambda}|\approx|\frac{\bar{u}_{\Delta}}{\Lambda}|\sim\lambda^3 \end{equation} where $\Lambda$ is the cut off scale of the theory, these relations imply that the VEVs of the $T'$ triplets and doublet are required to be of order $\lambda^2\Lambda$, while the VEVs of the $T'$ singlets $\theta$, $\theta'$, $\Delta$ and $\bar{\Delta}$ are of order $\lambda^3\Lambda$. Naturally $u_{\xi}$ and $u_{\chi}$ should be of the order $\lambda^2\Lambda\sim\lambda^3\Lambda$ as well. The VEVs of required orders in Eq.(\ref{14}) can be achieved in a finite portion of the parameter space, which will be illustrated in the discussion of the vacuum alignment. \subsection{The lepton sector} The Yukawa interactions in the lepton sector are controlled by the superpotential \begin{equation} \label{15}w_{\ell}=w_{e}+w_{\nu} \end{equation} where we have separated the contribution to the neutrino masses and the charged lepton masses, both $w_{e}$ and $w_{\nu}$ are invariant under the gauge group of the standard model and the flavor symmetry $T'\otimes Z_3\otimes Z_9$. The leading order terms of the Yukawa superpotential $w_{e}$ are \begin{eqnarray} \nonumber&&w_{e}=y_ee^{c}(\ell\varphi_T)\bar{\Delta}^2H_{d}/\Lambda^3+h_{e1}e^{c}(\ell\varphi_S)(\varphi_S\varphi_S)H_{d}/\Lambda^3+h_{e2}e^{c}(\ell\varphi_s)'(\varphi_S\varphi_S)''H_{d}/\Lambda^3\\ \nonumber&&+h_{e3}e^{c}(\ell\varphi_S)''(\varphi_S\varphi_S)'H_{d}/\Lambda^3+h_{e4}e^{c}(\ell\varphi_S)\xi^2H_d/\Lambda^3+y_{\mu1}\mu^{c}(\ell\phi\phi)'H_{d}/\Lambda^2+y_{\mu2}\mu^{c}(\ell\varphi_T)'\Delta H_{d}/\Lambda^2\\ \nonumber&&+h_{\mu1}\mu^{c}(\ell\varphi_T)'(\varphi_T\varphi_T)H_{d}/\Lambda^3+h_{\mu2}\mu^{c}((\ell\varphi_T)_{\mathbf{3}_{S}}(\varphi_T\varphi_T)_{\mathbf{3}_S})'H_{d}/\Lambda^3+h_{\mu3}\mu^{c}((\ell\varphi_T)_{\mathbf{3}_{A}}(\varphi_T\varphi_T)_{\mathbf{3}_S})'H_{d}/\Lambda^3\\ \nonumber&&+h_{\mu4}\mu^{c}(\ell\varphi_T\varphi_T)'\chi H_{d}/\Lambda^3+h_{\mu5}\mu^{c}(\ell\varphi_T\varphi_T)\theta'H_{d}/\Lambda^3+h_{\mu6}\mu^{c}(\ell\varphi_T\varphi_T)''\theta''H_{d}/\Lambda^3+h_{\mu7}\mu^{c}(\ell\varphi_T)'\chi^2H_{d}/\Lambda^3\\ \nonumber&&+h_{\mu8}\mu^{c}(\ell\varphi_T)'\theta'\theta''H_{d}/\Lambda^3+h_{\mu9}\mu^{c}(\ell\varphi_T)\chi\theta'H_{d}/\Lambda^3+h_{\mu10}\mu^{c}(\ell\varphi_T)\theta''\theta''H_{d}/\Lambda^3+h_{\mu11}\mu^{c}(\ell\varphi_T)''\chi\theta''H_{d}/\Lambda^3\\ \label{16}&&+h_{\mu12}\mu^{c}(\ell\varphi_T)''\theta'\theta'H_{d}/\Lambda^3+y_{\tau}\tau^{c}(\ell\varphi_T)''H_{d}/\Lambda+... \end{eqnarray} where dots stand for additional operators of order $1/\Lambda^3$, whose contributions to the charged lepton masses vanish at the leading order. The coefficients $y_{e}$, $h_{ei}(i=1,2,3,4)$, $y_{\mu1}$, $y_{\mu2}$, $h_{\mu i}(i=1-12)$ and $y_{\tau}$ are naturally ${\cal O}(1)$ coupling constants. 
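Before working out the mass matrix, it is helpful to record the order of magnitude of each class of terms. Applying the power counting of Eq.(\ref{14}) to the operators above (this is only a rough estimate, with all dimensionless couplings taken to be ${\cal O}(1)$), the leading contributions to the $\tau$, $\mu$ and $e$ rows scale as
\begin{equation}
y_{\tau}\frac{v_T}{\Lambda}\sim\lambda^2,\qquad
y_{\mu1}\frac{v^2_1}{\Lambda^2}\sim\lambda^4,\qquad
y_{\mu2}\frac{u_{\Delta}v_T}{\Lambda^2}\sim\lambda^5,\qquad
h_{ei}\frac{v^3_S}{\Lambda^3}\sim\lambda^6,\qquad
y_{e}\frac{\bar{u}^2_{\Delta}v_T}{\Lambda^3}\sim\lambda^8,
\end{equation}
in units of $v_d$, so that the charged lepton masses are expected in the ratio $m_{\tau}:m_{\mu}:m_{e}\sim\lambda^2:\lambda^4:\lambda^6$; this is borne out by the explicit results in Eq.(\ref{20}) and Eq.(\ref{21}) below.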
After the electroweak symmetry breaking and the flavor symmetry breaking, the charged lepton mass terms from $w_{e}$ are \begin{eqnarray} \nonumber&&w_{e}=y_{e}\frac{\bar{u}^2_{\Delta}v_{T}}{\Lambda^3}v_de^{c}e+[3(h_{e1}+h_{e2}+h_{e3})\frac{v^3_S}{\Lambda^3}+h_{e4}\frac{u^2_{\xi}v_S}{\Lambda^3}]v_de^{c}(e+\mu+\tau)\\ \nonumber&&+(iy_{\mu1}\frac{v^2_1}{\Lambda^2}+y_{\mu2}\frac{u_{\Delta}v_T}{\Lambda^2})v_d\mu^c\mu+(h_{\mu5}\frac{2u'_{\theta}v^2_T}{3\Lambda^3}+h_{\mu9}\frac{u_{\chi}u'_{\theta}v_T}{\Lambda^3}+h_{\mu10}\frac{u''^{2}_{\theta}v_T}{\Lambda^3})v_d\mu^{c}e\\ \nonumber&&+[(h_{\mu1}-\frac{2}{9}h_{\mu2}-\frac{1}{3}h_{\mu3})\frac{v^3_T}{\Lambda^3}+h_{\mu4}\frac{2u_{\chi}v^2_T}{3\Lambda^3}+h_{\mu7}\frac{u^2_{\chi}v_T}{\Lambda^3}+h_{\mu8}\frac{u'_{\theta}u''_{\theta}v_T}{\Lambda^3}]v_d\mu^{c}\mu\\ \nonumber&&+(h_{\mu6}\frac{2u''_{\theta}v^2_T}{3\Lambda^3}+h_{\mu11}\frac{u_{\chi}u''_{\theta}v_T}{\Lambda^3}+h_{\mu12}\frac{u'^{2}_{\theta}v_T}{\Lambda^3})v_d\mu^{c}\tau+y_{\tau}\frac{v_{T}}{\Lambda}v_{d}\tau^{c}\tau\\ \nonumber&&\equiv (y_{e}\frac{\bar{u}^2_{\Delta}v_{T}}{\Lambda^3}+y'_e\frac{v^3_S}{\Lambda^3})v_de^{c}e+y'_e\frac{v^3_S}{\Lambda^3}v_de^{c}\mu+y'_e\frac{v^3_S}{\Lambda^3}v_de^{c}\tau+y_{\mu e}\frac{u'_{\theta}v^2_T}{\Lambda^3}v_d\mu^ce+y_{\mu}\frac{v^2_1}{\Lambda^2}v_d\mu^c\mu\\ \label{17}&&+y_{\mu\tau}\frac{u''_{\theta}v^2_T}{\Lambda^3}v_d\mu^{c}\tau+y_{\tau}\frac{v_T}{\Lambda}v_d\tau^{c}\tau \end{eqnarray} where $y'_{e}=3(h_{e1}+h_{e2}+h_{e3})+h_{e4}\frac{u^2_{\xi}}{v^2_S}$ , $y_{\mu}\approx iy_{\mu1}+y_{\mu2}\frac{u_{\Delta}v_T}{v^2_1}$, $y_{\mu e}=\frac{2}{3}h_{\mu5}+h_{\mu9}\frac{u_{\chi}}{v_T}+h_{\mu10}\frac{u''^2_{\theta}}{u'_{\theta}v_T}$ and $y_{\mu\tau}=\frac{2}{3}h_{\mu6}+h_{\mu11}\frac{u_{\chi}}{v_T}+h_{\mu12}\frac{u'^2_{\theta}}{u''_{\theta}v_T}$. Therefore at the leading order, the charged lepton mass matrix is given by \begin{equation} \label{18}M^{e}=\left(\begin{array}{ccc} y_e\frac{\bar{u}^{2}_{\Delta}v_{T}}{\Lambda^3}+y'_{e}\frac{v^3_S}{\Lambda^3}&y'_{e}\frac{v^3_S}{\Lambda^3}&y'_{e}\frac{v^3_S}{\Lambda^3}\\ y_{\mu e}\frac{u'_{\theta}v^2_T}{\Lambda^3}&y_{\mu}\frac{v^2_1}{\Lambda^2}&y_{\mu\tau}\frac{u''_{\theta}v^2_T}{\Lambda^3}\\ 0&0&y_{\tau}\frac{v_{T}}{\Lambda} \end{array}\right)v_d \end{equation} Note that the charged lepton mass matrix is no longer diagonal at the leading order in contrast with Ref. \cite{Altarelli:2005yx,Feruglio:2007uu}. Since the charged lepton masses receive contribution from the VEV of $\varphi_S$, $T'$ is completely broken already at the leading order. Whereas $T'$ is broken down to $Z_3$ at the leading order, then it is broken to nothing by the higher dimensional operators in Ref.\cite{Altarelli:2005yx,Feruglio:2007uu}. The mass matrix $M^{e}$ is diagonalized by a biunitary transformation $V^{e\dagger}_{R}M^{e}V^{e}_{L}=diag(m_e,m_{\mu},m_{\tau})$, therefore $V^{e\dagger}_{L}M^{e\dagger}M^{e}V^{e}_{L}=diag(m^2_e,m^2_{\mu},m^2_{\tau})$. 
The matrix $V^{e}_L$ is approximately given by \begin{equation} \label{19}V^{e}_L\approx\left(\begin{array}{ccc} 1& s^{e}_{12}&0\\ -s^{e*}_{12}&1&0\\ 0&0&1 \end{array}\right) \end{equation} where $s^{e}_{12}=(\frac{y_{\mu e}}{y_{\mu}}\frac{u'_{\theta}v^2_T}{v^2_1\Lambda})^*+\frac{|y'_{e}|^2}{|y_{\mu}|^2}\frac{|v_S|^6}{|v_{1}|^4\Lambda^2}$, and the charged lepton masses are approximately given by \begin{eqnarray} \nonumber m_{e}&\approx&\Big|(y_e\frac{\bar{u}^2_{\Delta}v_T}{\Lambda^3}+y'_e\frac{v^3_S}{\Lambda^3})v_d\Big|\\ \nonumber m_{\mu}&\approx&\Big|y_{\mu}\frac{v^2_1}{\Lambda^2}v_d\Big|\\ \label{20}m_{\tau}&\approx&\Big|y_{\tau}\frac{v_T}{\Lambda}v_d\Big| \end{eqnarray} Therefore the mass ratios are estimated as \begin{equation} \label{21} \frac{m_e}{m_{\tau}}\approx\Big|\frac{y_{e}}{y_{\tau}}\frac{\bar{u}^2_{\Delta}}{\Lambda^2}+\frac{y'_{e}}{y_{\tau}}\frac{v^3_S}{v_{T}\Lambda^2}\Big|\approx\Big|\frac{y'_{e}}{y_{\tau}}\frac{v^3_S}{v_{T}\Lambda^2}\Big|,~~~~~\frac{m_{\mu}}{m_{\tau}}\approx\Big|\frac{y_{\mu}}{y_{\tau}}\frac{v^2_1}{v_T\Lambda}\Big| \end{equation} From Eq.(\ref{14}) and Eq.(\ref{21}), we see that the realistic hierarchies among the charged lepton masses $m_{\tau}:m_{\mu}:m_{e}\approx1:\lambda^2:\lambda^4$ are produced naturally. For the neutrino sector, we have \begin{equation} \label{23}w_{\nu}=(y_{\xi}\xi+\tilde{y}_{\xi}\tilde{\xi})(\ell\ell)H_{u}H_{u}/\Lambda^2+y_{S}(\varphi_S\ell\ell)H_{u}H_{u}/\Lambda^2+... \end{equation} After the electroweak and flavor symmetry breaking, $w_{\nu}$ gives rise to the following mass terms for the neutrinos \begin{equation} \label{24}w_{\nu}=y_{\xi}\frac{u_{\xi}}{\Lambda}\frac{v^2_u}{\Lambda}(\nu^2_e+2\nu_{\mu}\nu_{\tau})+\frac{2}{3}y_{S}\frac{v_S}{\Lambda}\frac{v^2_u}{\Lambda}(\nu^2_e+\nu^2_{\mu}+\nu^2_{\tau}-\nu_e\nu_{\mu}-\nu_{e}\nu_{\tau}-\nu_{\mu}\nu_{\tau})+...
\end{equation} Therefore at the leading order the neutrino mass matrix is \begin{equation} \label{25}M^{\nu}=\left(\begin{array}{ccc} 2y_{\xi}\frac{u_{\xi}}{\Lambda}+\frac{4}{3}y_S\frac{v_S}{\Lambda}&-\frac{2}{3}y_S\frac{v_S}{\Lambda} &-\frac{2}{3}y_S\frac{v_S}{\Lambda}\\ -\frac{2}{3}y_S\frac{v_S}{\Lambda}&\frac{4}{3}y_S\frac{v_S}{\Lambda}&2y_{\xi}\frac{u_{\xi}}{\Lambda}-\frac{2}{3}y_S\frac{v_S}{\Lambda}\\ -\frac{2}{3}y_S\frac{v_S}{\Lambda}&2y_{\xi}\frac{u_{\xi}}{\Lambda}-\frac{2}{3}y_S\frac{v_S}{\Lambda}&\frac{4}{3}y_S\frac{v_S}{\Lambda} \end{array}\right)\frac{v^2_u}{\Lambda} \end{equation} $M^{\nu}$ is diagonalized by a unitary transformation $V^{\nu}_{L}$ \begin{equation} \label{26}V^{\nu T}_LM^{\nu}V^{\nu}_L=diag(2y_{\xi}\frac{u_{\xi}}{\Lambda}+2y_{S}\frac{v_S}{\Lambda},2y_{\xi}\frac{u_{\xi}}{\Lambda},-2y_{\xi}\frac{u_{\xi}}{\Lambda}+2y_{S}\frac{v_S}{\Lambda})\frac{v^2_u}{\Lambda} \end{equation} Where the diagonalization matrix $V^{\nu}_{L}$ is the tri-bimaximal mixing matrix $V^{\nu}_{L}=U_{TB}$, therefore the Maki-Nakagawa-Sakata-Pontecorvo(MNSP) mixing matrix, at this order, is \begin{equation} \label{27}V_{\rm{MNSP}}=V^{e\;\dagger}_LV^{\nu}_L\approx\left(\begin{array}{ccc} \sqrt{\frac{2}{3}}+\frac{1}{\sqrt{6}}s^{e}_{12}&\frac{1}{\sqrt{3}}-\frac{1}{\sqrt{3}}s^{e}_{12}&\frac{1}{\sqrt{2}}s^{e}_{12}\\ -\frac{1}{\sqrt{6}}+\sqrt{\frac{2}{3}}s^{e*}_{12}&\frac{1}{\sqrt{3}}+\frac{1}{\sqrt{3}}s^{e*}_{12}&-\frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}} \end{array}\right) \end{equation} We see that the MNSP matrix deviates from the TB mixing pattern due to the corrections from the charged lepton sector, in particular, $(V_{\rm{MNSP}})_{e3}$ is no longer identically zero \begin{eqnarray} \nonumber|(V_{\rm MNSP})_{e3}|&\approx&\frac{1}{\sqrt{2}}|s^e_{12}|=\frac{1}{\sqrt{2}}\Big|\big(\frac{y_{\mu e}}{y_{\mu}}\frac{u'_{\theta}v^2_T}{v^2_1\Lambda}\big)^*+\frac{|y'_{e}|^2}{|y_{\mu}|^2}\frac{|v_S|^6}{|v_{1}|^4\Lambda^2}\Big|\\ \nonumber\tan^2\theta_{23}&\approx&1\\ \label{28}\tan^2\theta_{12}&\approx&\frac{1}{2}-\frac{3}{4}\Big[\frac{y_{\mu e}}{y_{\mu}}\frac{u'_{\theta}v^2_T}{v^2_1\Lambda}+\big(\frac{y_{\mu e}}{y_{\mu}}\frac{u'_{\theta}v^2_T}{v^2_1\Lambda}\big)^*+2\frac{|y'_{e}|^2}{|y_{\mu}|^2}\frac{|v_S|^6}{|v_{1}|^4\Lambda^2}\Big] \end{eqnarray} From Eq.(\ref{14}), we learn that $s^{e}_{12}$ is of order $\lambda^3$, therefore at leading order the MNSP matrix is very close to the TB mixing matrix, and the corrections from the charged lepton sector are very small. \subsection{The quark sector} The Yukawa interactions in the quark sector are \begin{equation} \label{29}w_{q}=w_{u}+w_{d} \end{equation} For the up quark sector, we have \begin{eqnarray} \nonumber w_{u}&=&y_{u1}(\varphi_TQ_LU^{c})\Delta H_{u}/\Lambda^2+y_{u2}((Q_LU^{c})_{\mathbf{3}}(\phi\phi)_{\mathbf{3}})H_{u}/\Lambda^2+y_{u3}(Q_LU^{c})'\theta''\Delta H_{u}/\Lambda^2\\ \label{30}&&+y_{u4}(Q_L\phi)''t^{c}H_{u}/\Lambda+y_{u5}Q_3(U^c\phi)'H_{u}/\Lambda+y_tQ_3t^{c}H_{u}+... \end{eqnarray} In the down quark sector, we obtain \begin{eqnarray} \nonumber w_d&=&y_{d1}(\varphi_TQ_LD^c)\bar{\Delta}H_{d}/\Lambda^2+y_{d2}(Q_LD^{c})'\theta''\bar{\Delta}H_{d}/\Lambda^2+y_{d3}(Q_L\phi)''b^{c}\Delta H_{d}/\Lambda^2\\ \nonumber&&+y_{d4}Q_3(D^{c}\phi)'\Delta H_{d}/\Lambda^2+y_{b1}Q_3b^{c}\Delta H_{d}/\Lambda+y_{b2}Q_3b^c(\varphi_T\varphi_T)h_d/\Lambda^2+y_{b3}Q_3b^c\chi^2h_d/\Lambda^2\\ \label{31}&&+y_{b4}Q_3b^c\theta'\theta''/\Lambda^2... 
\end{eqnarray} After electroweak and flavor symmetry breaking, we have the quark mass terms \begin{eqnarray} \nonumber w_{q}&=&y_{u1}\frac{u_{\Delta}v_{T}}{\Lambda^2}v_{u}cc^{c}+iy_{u2}\frac{v^2_1}{\Lambda^2}v_{u}cc^{c}+y_{u3}\frac{u''_{\theta}u_{\Delta}}{\Lambda^2}v_u(uc^c-cu^c)+y_{u4}\frac{v_1}{\Lambda}v_uct^{c}+y_{u5}\frac{v_1}{\Lambda}v_utc^{c}\\ \nonumber&&+y_tv_utt^c+y_{d1}\frac{\bar{u}_{\Delta}v_T}{\Lambda^2}v_dss^{c}+y_{d2}\frac{u''_{\theta}\bar{u}_{\Delta}}{\Lambda^2}v_d(ds^{c}-sd^{c})+y_{d3}\frac{u_{\Delta}v_1}{\Lambda^2}v_dsb^{c}+y_{d4}\frac{u_{\Delta}v_1}{\Lambda^2}v_dbs^{c}\\ \label{32}&&+y_{b}\frac{u_{\Delta}}{\Lambda}v_dbb^{c} \end{eqnarray} where $y_b=y_{b1}+y_{b2}\frac{v^2_T}{u_{\Delta}\Lambda}+y_{b3}\frac{u^2_{\chi}}{u_{\Delta}\Lambda}+y_{b4}\frac{u'_{\theta}u''_{\theta}}{u_{\Delta}\Lambda}$, and the resulting quark mass matrices are \begin{eqnarray} \nonumber M^{u}&=&\left(\begin{array}{ccc} 0&-y_{u3}\frac{u''_{\theta}u_{\Delta}}{\Lambda^2}&0\\ y_{u3}\frac{u''_{\theta}u_{\Delta}}{\Lambda^2}&y_{u1}\frac{u_{\Delta}v_{T}}{\Lambda^2}+iy_{u2}\frac{v^2_1}{\Lambda^2}&y_{u5}\frac{v_1}{\Lambda}\\ 0&y_{u4}\frac{v_1}{\Lambda}&y_t \end{array}\right)v_u\\ \label{33}M^{d}&=&\left(\begin{array}{ccc} 0&-y_{d2}\frac{u''_{\theta}\bar{u}_{\Delta}}{\Lambda^2}&0\\ y_{d2}\frac{u''_{\theta}\bar{u}_{\Delta}}{\Lambda^2}&y_{d1}\frac{\bar{u}_{\Delta}v_{T}}{\Lambda^2}&y_{d4}\frac{u_{\Delta}v_1}{\Lambda^2}\\ 0&y_{d3}\frac{u_{\Delta}v_1}{\Lambda^2}&y_b\frac{u_{\Delta}}{\Lambda} \end{array}\right)v_d \end{eqnarray} We see that both $M^{u}$ and $M^{d}$ have the same textures as those in the U(2) flavor model\cite{u2f}. From the Appendix A, we see that under the $T'$ generator $T$, the quark fields transform as $Q_1\stackrel{T}{\longrightarrow}Q_1$, $Q_2(Q_3,u^c,d^c)\stackrel{T}{\longrightarrow}\omega^2 Q_2(Q_3,u^c,d^c)$ and $c^c(t^c,s^c,b^c)\stackrel{T}{\longrightarrow}\omega c^c(t^c,s^c,b^c)$. Consequently, if the vacuum expectation value of $\theta''$ vanishes, the above mass matrices are the most general ones invariant under the subgroup $Z^{''}_3$ generated by the generator $T$. In this work, $u''_{\theta}$ further breaks $Z_3$ to nothing. 
Diagonalizing the quark mass matrices in Eq.(\ref{33}) using the standard perturbation technique\cite{Hall:1993ni,Leurer:1992wg}, we obtain the quark masses as follows \begin{eqnarray} \nonumber m_{u}&\approx&\Big|\frac{y^{2}_{u3}y_tu''^{2}_{\theta}u^2_{\Delta}}{(iy_{u2}y_t-y_{u4}y_{u5})v^2_1\Lambda^2}v_u\Big|\\ \nonumber m_{c}&\approx&\Big|(iy_{u2}-\frac{y_{u4}y_{u5}}{y_t})\frac{v^2_1}{\Lambda^2}v_u\Big|\\ \nonumber m_t&\approx&|y_tv_u|\\ \nonumber m_{d}&\approx&\Big|\frac{y^2_{d2}u''^{2}_{\theta}\bar{u}_{\Delta}}{y_{d1}v_{T}\Lambda^2}v_d\Big|\\ \nonumber m_s&\approx&\Big|y_{d1}\frac{\bar{u}_{\Delta}v_{T}}{\Lambda^2}v_d\Big|\\ \label{34}m_{b}&\approx&\Big|y_b\frac{u_{\Delta}}{\Lambda}v_d\Big| \end{eqnarray} and the CKM matrix elements are estimated as \begin{eqnarray} \nonumber&&V_{ud}\approx V_{cs}\approx V_{tb}\approx1\\ \nonumber&&V^{*}_{us}\approx -V_{cd}\approx\frac{y_{d2}}{y_{d1}}\frac{u''_{\theta}}{v_{T}}-\frac{y_{u3}y_tu''_{\theta}u_{\Delta}}{(iy_{u2}y_t-y_{u4}y_{u5})v^2_1}\\ \nonumber&&V^{*}_{cb}\approx-V_{ts}\approx(\frac{y_{d3}}{y_b}-\frac{y_{u4}}{y_t})\frac{v_1}{\Lambda}\\ \nonumber&&V^{*}_{ub}\approx -\frac{y_{u3}y_t}{iy_{u2}y_t-y_{u4}y_{u5}}(\frac{y_{d3}}{y_b}-\frac{y_{u4}}{y_{t}})\frac{u''_{\theta}u_{\Delta}}{v_1\Lambda}+\frac{y_{d2}y^{*}_{d4}}{|y_b|^2}\frac{u''_{\theta}\bar{u}_{\Delta}v^{*}_1}{u_{\Delta}\Lambda^2}\\ \label{35}&&V_{td}\approx\frac{y_{d2}}{y_{d1}}(\frac{y_{d3}}{y_{b}}-\frac{y_{u4}}{y_t})\frac{u''_{\theta}v_1}{v_{T}\Lambda}-\frac{y_{d2}y^{*}_{d4}}{|y_b|^2}\frac{u''_{\theta}\bar{u}_{\Delta}v^{*}_1}{u_{\Delta}\Lambda^2} \end{eqnarray} From Eq.(\ref{14}) and Eq.(\ref{34}), we see that the correct quark mass hierarchies are reproduced $m_t:m_c:m_u\sim1:\lambda^4:\lambda^8$, $m_b:m_s:m_d\sim1:\lambda^2:\lambda^4$ and $m_t:m_{b}\sim1:\lambda^3$. Moreover, Eq.(\ref{20}) and Eq.(\ref{34}) imply that the tau lepton and bottom quark masses are respectively of the order $\lambda^2$ and $\lambda^3$. Since $b-\tau$ unification $m_b\simeq m_{\tau}$ is usually predicted in many unification models, we expect to achieve $b-\tau$ unification in GUT model with $T'$ flavor symmetry as well, without changing drastically the successful predictions for flavor mixings and fermion mass hierarchies presented here\cite{ding}. In our model $\tan\beta\equiv v_u/v_{d}$ is of order one, the hierarchy between the top quark and bottom quark masses is due to the flavor symmetry breaking pattern. However, in Ref.\cite{Feruglio:2007uu} the large mass difference between the top and bottom quark is due to large $\tan\beta$, consequently there are large radiative corrections to the quark masses and the CKM matrix elements, which may significantly alter the low energy predictions of quark masses and CKM matrix. From Eq.(\ref{14}) and Eq.(\ref{35}), we learn that the correct hierarchy of the CKM matrix elements in Eq.(\ref{7}) is generated as well. Two interesting relations between the quark masses and mixing angles are predicted \begin{equation} \label{36} \Big|\frac{V_{td}}{V_{ts}}\Big|\approx\sqrt{\frac{m_d}{m_s}}\,,~~~~\Big|\frac{V_{ub}}{V_{cb}}\Big|\approx\sqrt{\frac{m_u}{m_c}} \end{equation} The above relations are also predicted in U(2) flavor theory. The first relation is satisfied within the large theoretical errors of both sides, and the second relation is not so well fulfilled as the first one. Both relations will be corrected by the next to leading order operators. 
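As a purely illustrative cross-check (not part of the derivation above), one can insert the power counting of Eq.(\ref{14}) into the textures of Eq.(\ref{33}) with arbitrary ${\cal O}(1)$ coefficients and diagonalize numerically. The short Python sketch below, with hypothetical coefficient values, reproduces the mass hierarchies of Eq.(\ref{34}) and the CKM scalings of Eq.(\ref{35}), and illustrates the relations of Eq.(\ref{36}) at the expected level of accuracy; varying the ${\cal O}(1)$ coefficients changes the prefactors but not the powers of $\lambda$:
\begin{verbatim}
import numpy as np

lam = 0.23   # Cabibbo angle

# Textures of Eq. (33) with hypothetical O(1) coefficients and the
# power counting of Eq. (14); the overall factors v_u, v_d are dropped.
Mu = np.array([[0.0,         -1.0 *lam**6, 0.0         ],
               [1.0 *lam**6,  1.2j*lam**4, 0.9 *lam**2 ],
               [0.0,         -0.6 *lam**2, 1.0         ]], dtype=complex)
Md = np.array([[0.0,         -1.0 *lam**6, 0.0         ],
               [1.0 *lam**6,  1.3 *lam**5, 0.8 *lam**5 ],
               [0.0,          1.0 *lam**5, 1.0 *lam**3 ]], dtype=complex)

def diagonalize(M):
    # M is written in the (right-handed, left-handed) basis, so the
    # left-handed rotation diagonalizes M^dagger M; obtain it from the SVD.
    U, s, Vh = np.linalg.svd(M)     # singular values in decreasing order
    VL = Vh.conj().T                # columns ordered heavy -> light
    return s[::-1], VL              # masses returned light -> heavy

mu, VuL = diagonalize(Mu)
md, VdL = diagonalize(Md)
V = VuL.conj().T @ VdL              # CKM matrix, rows/columns heavy -> light

print("m_u/m_t, m_c/m_t:", mu[0]/mu[2], mu[1]/mu[2])   # ~ lam^8, lam^4
print("m_d/m_b, m_s/m_b:", md[0]/md[2], md[1]/md[2])   # ~ lam^4, lam^2
print("|V_us|, |V_cb|, |V_ub|, |V_td|:",
      abs(V[2,1]), abs(V[1,0]), abs(V[2,0]), abs(V[0,2]))
print("|V_td/V_ts| vs sqrt(m_d/m_s):",
      abs(V[0,2]/V[0,1]), np.sqrt(md[0]/md[1]))
print("|V_ub/V_cb| vs sqrt(m_u/m_c):",
      abs(V[2,0]/V[1,0]), np.sqrt(mu[0]/mu[1]))
\end{verbatim}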
\section{vacuum alignment} In section IV we have demonstrated that the realistic pattern of fermion masses and flavor mixing is generated if $T'$ is broken along the directions shown in Eq.(\ref{13}). In the following we will illustrate that the VEVs in Eq.(\ref{13}) indeed correspond to a local minimum of the scalar potential of the model in a finite portion of the parameter space. Using the technique of Ref.\cite{Altarelli:2005yx,Bazzocchi:2007na,Feruglio:2007uu}, a global continuous $U(1)_R$ symmetry is exploited to simplify the vacuum alignment problem; this symmetry is broken to the discrete R-parity once we include the gaugino masses in the model. The Yukawa superpotentials $w_{\ell}$ and $w_{q}$ in Eq.(\ref{15}) and Eq.(\ref{29}) are invariant under the $U(1)_R$ symmetry if +1 R-charge is assigned to the matter fields (i.e. the lepton and quark superfields) and 0 R-charge to the Higgs and flavon supermultiplets. Since the superpotential must have +2 R-charge, we introduce driving fields which carry +2 R-charge in order to avoid the spontaneous breaking of the $U(1)_R$ symmetry; consequently, the driving fields enter linearly into the terms of the superpotential. The driving fields and their transformation properties under $T'\otimes Z_3\otimes Z_9$ are shown in Table \ref{table2}. \begin{table}[hptb] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline\hline Fields& $\varphi^{R}_{T}$ & $\varphi^{R}_{S}$ & $~\xi^{R}~$ & ~$\phi^{R}$~ & ~$\theta''^{R}$~ & ~$\Delta^R$~ &~$\bar{\Delta}^R$~ & ~$\chi^{R}$~ \\\hline $T'$& $\mathbf{3}$ & $\mathbf{3}$& $\mathbf{1}$ & $\mathbf{2}''$ & $\mathbf{1}''$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ \\\hline $Z_{3}$& $\mathbf{1}$ & $\alpha$ & $\alpha$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ \\\hline $Z_{9}$& $\beta^6$ & $\mathbf{1}$ & $\mathbf{1}$ & $\beta^2$ & $\beta^7$ & $\beta^7$ & $\beta^5$ & $\beta^7$ \\\hline\hline \end{tabular} \end{center} \caption{\label{table2}The driving fields and their transformation rules under $T'\otimes Z_3\otimes Z_9$} \end{table} At the leading order, the superpotential depending on the driving fields, which is invariant under all the symmetries of the model, is given by \begin{eqnarray} \nonumber&&w_{v}=g_1(\varphi^{R}_T\phi\phi)+g_2(\varphi^{R}_T\varphi_T)\Delta+g_3(\phi^R\phi)\chi+g_4(\varphi_T\phi^{R}\phi)+g_5\chi^R\chi^2+g_6\chi^R\theta'\theta''\\ \nonumber&&+g_7\chi^R(\varphi_T\varphi_T)+g_8\theta''^R\theta''^2+g_9\theta''^R\theta'\chi+g_{10}\theta''^R(\varphi_T\varphi_T)'+M_{\Delta}\Delta^R\Delta+g_{11}\Delta^{R}\chi^2\\ \nonumber&&+g_{12}\Delta^{R}\theta'\theta''+g_{13}\Delta^R(\varphi_T\varphi_T)+\bar{M}_{\Delta}\bar{\Delta}^R\bar{\Delta}+g_{14}\bar{\Delta}^{R}\Delta^2+g_{15}(\varphi^{R}_{S}\varphi_S\varphi_S)+g_{16}(\varphi^R_S\varphi_S)\tilde{\xi}\\ \label{37}&&+g_{17}\xi^R(\varphi_S\varphi_S)+g_{18}\xi^R\xi^2+g_{19}\xi^R\xi\tilde{\xi}+g_{20}\xi^R\tilde{\xi}^2 \end{eqnarray} Since there is no distinction between $\xi$ and $\tilde{\xi}$, we define $\tilde{\xi}$ as the field that couples to $(\varphi^R_S\varphi_S)$ in the superpotential $w_{v}$, as in Ref.\cite{Altarelli:2005yx,Feruglio:2007uu}; the field $\tilde{\xi}$ is necessary to achieve the correct vacuum alignment. Similarly, since the quantum numbers of $\Delta^R$ and $\chi^R$ are exactly identical, we define $\Delta^R$ as the one which couples to $\Delta$.
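As a simple check of the R-charge bookkeeping, note that every term allowed in the superpotential carries total R-charge $+2$; for instance
\begin{equation}
R\big[\,y_{\tau}\tau^{c}(\ell\varphi_T)''H_{d}/\Lambda\,\big]=1+1+0+0=2,
\qquad
R\big[\,g_2(\varphi^{R}_T\varphi_T)\Delta\,\big]=2+0+0=2,
\end{equation}
whereas a flavon-only term such as $(\varphi_T\varphi_T)\Delta$ would carry R-charge $0$ and is therefore forbidden. This is why every term of $w_{v}$ in Eq.(\ref{37}) contains exactly one driving field, entering linearly.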
From the superpotential $w_{v}$ in Eq.(\ref{37}), we can derive the scalar potential of this model \begin{equation} \label{38}V=\sum_i|\frac{\partial w_{v}}{\partial {\cal S}_i}|^2+V_{soft} \end{equation} where ${\cal S}_i$ denotes the scalar component of the superfields involved in the model, and $V_{soft}$ includes all possible SUSY soft terms for the scalar fields ${\cal S}_i$, and it is invariant under the $T'\otimes Z_3\otimes Z_9$ flavor symmetry. \begin{equation} \label{39}V_{soft}=\sum_i m^2_{{\cal S}_i}|{\cal S}_i|^2+... \end{equation} where $m^2_{{\cal S}_i}$ is the soft mass, and dots stand for other soft SUSY breaking bilinear and trilinear operators. By choosing positive soft mass $m^2_{{\cal S}_i}$ for the driving fields, all the driving fields don't acquire VEVs. Since the superpotential $w_v$ is linear in the driving fields, in the SUSY limit all the derivatives with respect to the scalar components of the superfields not charged under $U(1)_R$ symmetry vanish. Therefore in discussing the minimization of the scalar potential, we have to take into account only the derivatives with respect to the scalar components of the driving fields, then we have \begin{eqnarray} \nonumber \frac{\partial w_{v}}{\partial\varphi^R_{T1}}&=&ig_1\phi^2_1+g_2\varphi_{T1}\Delta=0\\ \nonumber\frac{\partial w_{v}}{\partial\varphi^R_{T2}}&=&(1-i)g_1\phi_1\phi_2+g_2\varphi_{T3}\Delta=0\\ \nonumber\frac{\partial w_{v}}{\partial\varphi^R_{T3}}&=&g_1\phi^2_2+g_2\varphi_{T2}\Delta=0\\ \nonumber\frac{\partial w_{v}}{\partial\phi^R_{1}}&=&g_3\phi_2\chi+g'_4(\varphi_{T1}\phi_2-(1-i)\varphi_{T3}\phi_1)=0\\ \nonumber\frac{\partial w_{v}}{\partial\phi^R_{2}}&=&-g_3\phi_1\chi+g'_4(\varphi_{T1}\phi_1+(1+i)\varphi_{T2}\phi_2)=0\\ \nonumber\frac{\partial w_{v}}{\partial\chi^R}&=&g_5\chi^2+g_6\theta'\theta''+g_7(\varphi^2_{T1}+2\varphi_{T2}\varphi_{T3})=0\\ \nonumber\frac{\partial w_{v}}{\partial\theta''^R}&=&g_8\theta''^2+g_9\theta'\chi+g_{10}(\varphi^2_{T3}+2\varphi_{T1}\varphi_{T2})=0\\ \nonumber\frac{\partial w_{v}}{\partial\Delta^R}&=&M_{\Delta}\Delta+g_{11}\chi^2+g_{12}\theta'\theta''+g_{13}(\varphi^2_{T1}+2\varphi_{T2}\varphi_{T3})=0\\ \nonumber\frac{\partial w_{v}}{\partial\bar{\Delta}^R}&=&\bar{M}_{\Delta}\bar{\Delta}+g_{14}\Delta^2=0\\ \nonumber\frac{\partial w_{v}}{\partial\varphi^R_{S1}}&=&\frac{2}{3}g_{15}(\varphi^2_{S1}-2\varphi_{S2}\varphi_{S3})+g_{16}\varphi_{S1}\tilde{\xi}=0\\ \nonumber\frac{\partial w_{v}}{\partial\varphi^R_{S2}}&=&\frac{2}{3}g_{15}(\varphi^2_{S2}-\varphi_{S1}\varphi_{S2})+g_{16}\varphi_{S3}\tilde{\xi}=0\\ \nonumber\frac{\partial w_{v}}{\partial\varphi^R_{S3}}&=&\frac{2}{3}g_{15}(\varphi^2_{S3}-\varphi_{S1}\varphi_{S2})+g_{16}\varphi_{S2}\tilde{\xi}=0\\ \label{40}\frac{\partial w_{v}}{\partial\xi^R}&=&g_{17}(\varphi^2_{S1}+2\varphi_{S2}\varphi_{S3})+g_{18}\xi^2+g_{19}\xi\tilde{\xi}+g_{20}\tilde{\xi}^2=0 \end{eqnarray} where $g'_4=\frac{1-i}{2}g_4$, hereafter we simply denote $g'_4$ with $g_4$ if there is no confusion. 
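The solution of these conditions is quoted in Eq.(\ref{41}) below. As a quick sanity check, the quoted VEVs can be substituted back into the conditions above numerically; the following sketch does this for the $\varphi_T$, $\phi$, $\chi$, $\theta'$, $\theta''$, $\Delta$ and $\bar{\Delta}$ equations, with purely hypothetical random couplings (the $\varphi_S$ and $\xi$ sector is omitted for brevity).
\begin{verbatim}
# Hypothetical cross-check: substitute the VEVs of Eq.(41) below into the
# F-term conditions above; all couplings are random illustrative numbers.
import numpy as np

rng = np.random.default_rng(0)
g = {i: rng.uniform(0.8, 1.5) for i in range(1, 15)}        # g_1 ... g_14
M_D, Mb_D, u_chi = rng.uniform(0.8, 1.5, 3)                  # M_Delta, Mbar_Delta, <chi>

X = g[3]**2*g[7] + g[4]**2*g[5]
Y = g[3]**2*(g[7]*g[12] - g[6]*g[13]) + g[4]**2*(g[5]*g[12] - g[6]*g[11])

v_T  = g[3]/g[4]*u_chi                                       # <varphi_T1>
up   = -(X**2*g[8]/(g[4]**4*g[6]**2*g[9]))**(1/3)*u_chi      # <theta'>
upp  =  (X*g[9]/(g[4]**2*g[6]*g[8]))**(1/3)*u_chi            # <theta''>
u_D  = Y/(g[4]**2*g[6])*u_chi**2/M_D                         # <Delta>
ub_D = -Y**2*g[14]/(g[4]**4*g[6]**2)*u_chi**4/(M_D**2*Mb_D)  # <Delta-bar>
v1sq = 1j*g[2]*g[3]*Y/(g[1]*g[4]**3*g[6])*u_chi**3/M_D       # <phi_1>^2

F = [1j*g[1]*v1sq + g[2]*v_T*u_D,                            # varphi^R_{T1}
     g[4]*v_T - g[3]*u_chi,                                  # phi^R_2 (v_1 factored out)
     g[5]*u_chi**2 + g[6]*up*upp + g[7]*v_T**2,              # chi^R
     g[8]*upp**2 + g[9]*up*u_chi,                            # theta''^R
     M_D*u_D + g[11]*u_chi**2 + g[12]*up*upp + g[13]*v_T**2, # Delta^R
     Mb_D*ub_D + g[14]*u_D**2]                               # Delta-bar^R
print(max(abs(f) for f in F))   # numerically zero (rounding level)
\end{verbatim}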
These sets of equations admit the solutions \begin{eqnarray} \nonumber~~~~~~~~~~~~~~~ \langle\chi\rangle&=&u_{\chi}\\ \nonumber~~~~~~~~~~~~~~~\langle\theta'\rangle&=&u'_{\theta}=-\Big[\frac{(g^2_3g_7+g^2_4g_5)^2g_8}{g^4_4g^2_6g_9}\Big]^{1/3}u_{\chi}\\ \nonumber~~~~~~~~~~~~~~~\langle\theta''\rangle&=&u''_{\theta}=\Big[\frac{(g^2_3g_7+g^2_4g_5)g_9}{g^2_4g_6g_8}\Big]^{1/3}u_{\chi}\\ \nonumber~~~~~~~~~~~~~~~\langle\Delta\rangle&=&u_{\Delta}=\frac{g^2_3(g_7g_{12}-g_6g_{13})+g^2_4(g_5g_{12}-g_6g_{11})}{g^2_4g_6}\frac{u^2_{\chi}}{M_{\Delta}}\\ \nonumber~~~~~~~~~~~~~~~ \langle\bar{\Delta}\rangle&=&\bar{u}_{\Delta}=-\frac{[g^2_3(g_{7}g_{12}-g_6g_{13})+g^2_4(g_5g_{12}-g_6g_{11})]^2g_{14}}{g^4_4g^2_6}\frac{u^4_{\chi}}{M^2_{\Delta}\bar{M}_{\Delta}}\\ \nonumber~~~~~~~~~~~~~~~\langle\phi\rangle&=&(v_1,0),~~~~~v_1=\Big(\frac{ig_2g_3[g^2_3(g_7g_{12}-g_6g_{13})+g^2_4(g_5g_{12}-g_6g_{11})]}{g_1g^3_4g_6}\Big)^{1/2}M^{\,-1/2}_{\Delta}u^{3/2}_{\chi}\\ \nonumber~~~~~~~~~~~~~~~\langle\varphi_{T}\rangle&=&(v_T,0,0),~~~~~~v_{T}=\frac{g_3}{g_4}u_{\chi}\\ \nonumber~~~~~~~~~~~~~~~\langle\tilde{\xi}\rangle&=&0\\ \nonumber~~~~~~~~~~~~~~~\langle\xi\rangle&=&u_{\xi}\\ \label{41}~~~~~~~~~~~~~~~\langle\varphi_{S}\rangle&=&(v_{S},v_{S},v_{S}),~~~~~v_S=\Big(-\frac{g_{18}}{3g_{17}}\Big)^{1/2}u_{\xi} \end{eqnarray} where both $u_{\xi}$ and $u_{\chi}$ are undetermined; by choosing $m^2_{\xi}$ and $m^2_{\chi}$ to be negative, $u_{\xi}$ and $u_{\chi}$ take non-zero values. From Eq.(\ref{41}), we see that the correct vacuum alignment shown in Eq.(\ref{13}) is realized. As for the values of the VEVs, we can choose the parameters in the superpotential $w_{v}$ so that the required orders of the VEVs in Eq.(\ref{14}) can be achieved. \section{corrections to the leading order predictions for the fermion masses and flavor mixing} In the previous sections, we have shown that realistic fermion mass hierarchies and flavor mixing are successfully produced at the leading order in our model. However, the leading order results receive corrections from the higher dimensional operators consistent with the symmetry of the model, which are suppressed by additional powers of $1/\Lambda$. We will study these terms and analyze their physical effects case by case. The next to leading order corrections can be classified into two groups: the first class is induced by the higher dimensional operators present in the superpotential $w_{v}$, which can change the vacuum alignment in Eq.(\ref{13}) and therefore modify the leading order mass matrices. The second class is induced by the higher dimensional operators in the Yukawa superpotentials $w_{\ell}$ and $w_{q}$, which modify the Yukawa couplings after the electroweak and flavor symmetry breaking. \subsection{Higher dimensional operators in the flavon superpotential and the corrections to the vacuum alignment} If we include the next to leading order operators in the flavon superpotential $w_{v}$, the vacuum alignment in Eq.(\ref{13}) is modified; the higher order corrections to the vacuum alignment are discussed in detail in Appendix B.
The corrections result in a shift in the VEVs of the scalar fields, and therefore the new vacuum configuration is given by \begin{eqnarray} \nonumber&&\langle\varphi_{T}\rangle=(v_T+\delta v_{T1},\delta v_{T2},\delta v_{T3}),~~~\langle\varphi_S\rangle=(v_S+\delta v_{S1},v_S+\delta v_{S2},v_S+\delta v_{S3}),\\ \nonumber&&\langle\phi\rangle=(v_1+\delta v_1,\delta v_2),~~~\langle\xi\rangle=u_{\xi},~~~\langle\tilde{\xi}\rangle=\delta\tilde{u}_{\xi},~~~\langle\theta'\rangle=u'_{\theta}+\delta u'_{\theta},\\ \label{42}&&\langle\theta''\rangle=u''_{\theta}+\delta u''_{\theta},~~~\langle\Delta\rangle=u_{\Delta}+\delta u_{\Delta},~~~\langle\bar{\Delta}\rangle=\bar{u}_{\Delta}+\delta\bar{u}_{\Delta},~~~\langle\chi\rangle=u_{\chi} \end{eqnarray} In the Appendix B, we show that the corrections $\delta v_{T2}$, $\delta v_{T3}$, $\delta v_{1}$, $\delta v_{2}$, $\delta u'_{\theta}$, $\delta u''_{\theta}$ and $\delta\bar{u}_{\Delta}$ arise at order $1/\Lambda$. $\delta v_{T1}$ and $\delta u_{\Delta}$ are of order $1/\Lambda^2$, and the corrections $\delta v_{S1}$, $\delta v_{S2}$, $\delta v_{S3}$ and $\delta \tilde{u}_{\xi}$ are suppressed by $1/\Lambda^3$, which are small enough and can be negligible. Note that there should also be corrections to the VEVs of $\xi$ and $\chi$, but we do not have to indicate this explicitly by the addition of terms $\delta u_{\xi}$ and $\delta u_{\chi}$, since both $u_{\xi}$ and $u_{\chi}$ are undetermined at the leading order. Repeating the calculations in section IV and substituting the modified vacuum into the Yukawa superpotentials $w_{\ell}$ and $w_{q}$, we can obtain the new vacuum corrections to the fermion mass matrices as follows \begin{eqnarray} \label{43}\delta M^e_1&=&\left(\begin{array}{ccc} y_{e}\frac{\bar{u}^2_{\Delta}\delta v_{T1}+2\bar{u}_{\Delta}\delta\bar{u}_{\Delta}v_T}{\Lambda^3}&y_{e}\frac{\bar{u}^2_{\Delta}\delta v_{T3}}{\Lambda^3}&y_{e}\frac{\bar{u}^2_{\Delta}\delta v_{T2}}{\Lambda^3}\\ y_{\mu2}\frac{u_{\Delta}\delta v_{T2}}{\Lambda^2}&2iy_{\mu1}\frac{v_1\delta v_1}{\Lambda^2}&(1-i)y_{\mu1}\frac{v_1\delta v_2}{\Lambda^2}+y_{\mu2}\frac{u_{\Delta}\delta v_{T3}}{\Lambda^2}\\ y_{\tau}\frac{\delta v_{T3}}{\Lambda}&y_{\tau}\frac{\delta v_{T2}}{\Lambda}&y_{\tau}\frac{\delta v_{T1}}{\Lambda} \end{array}\right)v_d\\ \label{44}\delta M^{\nu}_1&=&\left(\begin{array}{ccc} \frac{4}{3}y_S\frac{\delta v_{S1}}{\Lambda}&-\frac{2}{3}y_S\frac{\delta v_{S3}}{\Lambda}&-\frac{2}{3}y_S\frac{\delta v_{S2}}{\Lambda}\\ -\frac{2}{3}y_S\frac{\delta v_{S3}}{\Lambda}&\frac{4}{3}y_S\frac{\delta v_{S2}}{\Lambda}&-\frac{2}{3}y_{S}\frac{\delta v_{S1}}{\Lambda}\\ -\frac{2}{3}y_S\frac{\delta v_{S2}}{\Lambda}&-\frac{2}{3}y_{S}\frac{\delta v_{S1}}{\Lambda}&\frac{4}{3}y_S\frac{\delta v_{S3}}{\Lambda} \end{array}\right)\frac{v^2_u}{\Lambda}\\ \label{45}\delta M^{u}_1&=&\left(\begin{array}{ccc} iy_{u1}\frac{u_{\Delta}\delta v_{T2}}{\Lambda^2}&\delta u_1&-y_{u5}\frac{\delta v_2}{\Lambda}\\ \delta u'_1&y_{u1}\frac{u_{\Delta}\delta v_{T1}+\delta u_{\Delta}v_T}{\Lambda^2}+2iy_{u2}\frac{v_1\delta v_1}{\Lambda^2}&y_{u5}\frac{\delta v_1}{\Lambda}\\ -y_{u4}\frac{\delta v_2}{\Lambda}&y_{u4}\frac{\delta v_1}{\Lambda}&0 \end{array} \right)v_u\\ \label{46}\delta M^{d}_1&=&\left(\begin{array}{ccc} iy_{d1}\frac{\bar{u}_{\Delta}\delta v_{T2}}{\Lambda^2}&\delta d_1&-y_{d4}\frac{u_{\Delta}\delta v_2}{\Lambda^2}\\ \delta d'_1&y_{d1}\frac{\bar{u}_{\Delta}\delta v_{T1}+\delta\bar{u}_{\Delta}v_T}{\Lambda^2}&y_{d4}\frac{u_{\Delta}\delta v_1+\delta u_{\Delta}v_1}{\Lambda^2}\\ -y_{d3}\frac{u_{\Delta}\delta 
v_2}{\Lambda^2}&y_{d3}\frac{u_{\Delta}\delta v_1+\delta u_{\Delta}v_1}{\Lambda^2}&y_b\frac{\delta u_{\Delta}}{\Lambda} \end{array}\right)v_d \end{eqnarray} where $\delta u_1$, $\delta u'_1$, $\delta d_1$ and $\delta d'_1$ are given by \begin{eqnarray} \nonumber\delta u_1&=&\frac{1-i}{2}y_{u1}\frac{u_{\Delta}\delta v_{T3}}{\Lambda^2}-iy_{u2}\frac{v_1\delta v_2}{\Lambda^2}-y_{u3}\frac{\delta u''_{\theta}u_{\Delta}+u''_{\theta}\delta u_{\Delta}}{\Lambda^2}\\ \nonumber\delta u'_1&=&\frac{1-i}{2}y_{u1}\frac{u_{\Delta}\delta v_{T3}}{\Lambda^2}-iy_{u2}\frac{v_1\delta v_2}{\Lambda^2}+y_{u3}\frac{\delta u''_{\theta}u_{\Delta}+u''_{\theta}\delta u_{\Delta}}{\Lambda^2}\\ \nonumber\delta d_1&=&\frac{1-i}{2}y_{d1}\frac{\bar{u}_{\Delta}\delta v_{T3}}{\Lambda^2}-y_{d2}\frac{\delta u''_{\theta}\bar{u}_{\Delta}+u''_{\theta}\delta\bar{u}_{\Delta}}{\Lambda^2}\\ \nonumber\delta d'_1&=&\frac{1-i}{2}y_{d1}\frac{\bar{u}_{\Delta}\delta v_{T3}}{\Lambda^2}+y_{d2}\frac{\delta u''_{\theta}\bar{u}_{\Delta}+u''_{\theta}\delta\bar{u}_{\Delta}}{\Lambda^2} \end{eqnarray} As for $\delta M^{e}_1$, we have neglected the corrections induced by $\delta v_{Si}(i=1,2,3)$ and $\delta \tilde{u}_{\xi}$, since they are of higher order $1/\Lambda^3$ and are negligible compared with the corrections proportional to $\delta v_{Ti}(i=1,2,3)$ and $\delta\bar{u}_{\Delta}$. Eq.(\ref{44}) implies that the corrections to the neutrino mass matrix are suppressed by an additional power of $1/\Lambda^3$ relative to the leading order results. Concerning the quark sector, the correction terms $y_{u3}\frac{\delta u''_{\theta}u_{\Delta}+u''_{\theta}\delta u_{\Delta}}{\Lambda^2}$, $y_{d2}\frac{\delta u''_{\theta}\bar{u}_{\Delta}+u''_{\theta}\delta\bar{u}_{\Delta}}{\Lambda^2}$ and $y_b\frac{\delta u_{\Delta}}{\Lambda}$ can be absorbed by the redefinition of $y_{u3}$, $y_{d2}$ and $y_b$ respectively. \subsection{Corrections induced by higher dimensional operators in the Yukawa superpotential} \begin{enumerate} \item{Corrections to $w_{\ell}$} The leading order operators relevant to $e^{c}$ are of order $1/\Lambda^3$, which are shown in Eq.(\ref{16}); at the next order, $1/\Lambda^4$, there are two operators \begin{equation} \label{47} e^{c}(\ell\varphi_T)\Delta^2\bar{\Delta}H_{d}/\Lambda^4,~~~~~~~e^{c}(\ell\phi\phi)\Delta\bar{\Delta}H_{d}/\Lambda^4 \end{equation} Because $(\phi\phi)_{\mathbf{3}}=(i\phi^2_1,\phi^2_2,(1-i)\phi_1\phi_2)$, its VEV is parallel to that of $\varphi_T$. Therefore both operators have the same structure as the leading operator $e^{c}(\ell\varphi_T)\bar{\Delta}^2H_{d}/\Lambda^3$, and their effects can be absorbed by a redefinition of $y_e$. Concerning the $\mu^{c}$ relevant terms in the leading order Yukawa superpotential in Eq.(\ref{16}), they comprise both terms of order $1/\Lambda^2$ and terms of order $1/\Lambda^3$. The subleading operators invariant under the symmetry of the model arise at order $1/\Lambda^5$, and their contributions are completely negligible relative to the corrections from the modified vacuum configuration. The leading operator of the $\tau^c$ relevant term is of order $1/\Lambda$, and the next to leading order corrections are of order $1/\Lambda^4$; therefore their contributions can be neglected compared with the corrections from the new vacuum. The same arguments used for the charged lepton mass are applicable to the neutrino sector as well.
The leading operators contributing to $M^{\nu}$ are of order $1/\Lambda^2$ from Eq.(\ref{23}), and the leading order results receive corrections from higher dimensional operators of order $1/\Lambda^5$ at the next to leading order. Therefore, the charged lepton mass matrix mainly receives corrections from the modified vacuum configuration. By contrast, the corrections to the neutrino mass matrix from the next to leading order operators in both $w_{v}$ and $w_{\nu}$ are negligible, and $T'$ is approximately broken to the $Z_4$ subgroup in the neutrino sector, even if higher order corrections are included. \item{Corrections to $w_{u}$} As is shown in Eq.(\ref{30}), the leading operators, which give rise to the $M^{u}_{11}$, $M^{u}_{12}$, $M^{u}_{21}$ and $M^{u}_{22}$ entries, are of order $1/\Lambda^2$. At the next order $1/\Lambda^3$, there are two operators whose contributions cannot be absorbed by parameter redefinition \begin{equation} \label{49}x_{u1}((Q_LU^c)_{\mathbf{3}}(\varphi_T\varphi_T)_{\mathbf{3}_S})'\theta''H_{u}/\Lambda^3,~~~~x_{u2}((Q_LU^c)_{\mathbf{3}}(\varphi_T\varphi_T)_{\mathbf{3}_S})''\theta'H_{u}/\Lambda^3 \end{equation} The leading operators contributing to $M^{u}_{13}$, $M^{u}_{23}$, $M^{u}_{31}$ and $M^{u}_{32}$ are of order $1/\Lambda$ from Eq.(\ref{30}), and the next to leading order corrections arise at order $1/\Lambda^4$. These contributions are negligible relative to the corrections induced by the modified vacuum, which are shown in Eq.(44). The corrections to $M^{u}_{33}$ can be absorbed by redefining the parameter $y_t$. As a result, the corrections to the up quark mass matrix from the higher dimensional operators are \begin{equation} \label{50}\delta M^{u}_2=\left(\begin{array}{ccc} \frac{2i}{3}x_{u2}\frac{u'_{\theta}v^2_{T}}{\Lambda^3}&\frac{1-i}{3}x_{u1}\frac{u''_{\theta}v^2_T}{\Lambda^3}&0\\ \frac{1-i}{3}x_{u1}\frac{u''_{\theta}v^2_T}{\Lambda^3}&0&0\\ 0&0&0 \end{array} \right)v_{u} \end{equation} \item{Corrections to $w_d$} Concerning the $M^{d}_{11}$, $M^{d}_{12}$, $M^{d}_{21}$ and $M^{d}_{22}$ relevant operators, the leading terms are of order $1/\Lambda^2$, which are shown in Eq.(\ref{31}), and there are three operators at the order $1/\Lambda^3$ \begin{equation} \label{51}(\varphi_{T}Q_LD^c)\Delta^2H_{d}/\Lambda^3,~~~~((Q_LD^c)_{\mathbf{3}}(\phi\phi)_{\mathbf{3}})\Delta H_{d}/\Lambda^3,~~~~(Q_LD^c)'\theta''\Delta^2H_{d}/\Lambda^3 \end{equation} After electroweak and flavor symmetry breaking, the above operators have the same structures as the leading ones, and their contributions can be absorbed by a redefinition of $y_{d1}$ and $y_{d2}$. Nontrivial higher dimensional operators arise at the order $1/\Lambda^4$; their contributions are negligible compared with the corrections from the modified vacuum, which are shown in Eq.(45).
Similarly, the next to leading order operators contributing to $M^{d}_{13}$, $M^{d}_{23}$, $M^{d}_{31}$ and $M^{d}_{32}$ are of order $1/\Lambda^3$, and only two operators remain after symmetry breaking and parameter redefinition \begin{equation} \label{52} x_{d1}(\varphi_TQ_L\phi)b^{c}\theta''H_{d}/\Lambda^3,~~~~x_{d2}Q_3(\varphi_TD^{c}\phi)''\theta''H_{d}/\Lambda^3 \end{equation} Therefore the corrections to the down quark mass matrix from the higher dimensional operators are \begin{equation} \label{53}\delta M^{d}_{2}=\left(\begin{array}{ccc} 0&0&ix_{d2}\frac{u''_{\theta}v_1v_{T}}{\Lambda^3}\\ 0&0&0\\ ix_{d1}\frac{u''_{\theta}v_1v_{T}}{\Lambda^3}&0&0 \end{array}\right)v_d \end{equation} \end{enumerate} \subsection{Fermion masses and flavor mixing including the next to leading order corrections} \begin{enumerate} \item{Lepton masses and MNSP matrix} Combining the leading order prediction Eq.(\ref{18}) for the charged lepton mass matrix with the subleading corrections in Eq.(\ref{43}), we find that the charged lepton mass matrix becomes \begin{equation} \label{54}{{\cal M}^{e}}=M^{e}+\delta M^{e}_1=\left(\begin{array}{ccc} y_{e}\frac{\bar{u}^2_{\Delta}v_T}{\Lambda^3}+y'_e\frac{v^3_S}{\Lambda^3}&y_e\frac{\bar{u}^2_{\Delta}\delta v_{T3}}{\Lambda^3}+y'_e\frac{v^{3}_S}{\Lambda^3} &y_e\frac{\bar{u}^2_{\Delta}\delta v_{T2}}{\Lambda^3}+y'_e\frac{v^3_S}{\Lambda^3}\\ \delta e'_2&y_{\mu}\frac{v^2_1}{\Lambda^2} &\delta e_2\\ y_{\tau}\frac{\delta v_{T3}}{\Lambda}& y_{\tau}\frac{\delta v_{T2}}{\Lambda}& y_{\tau}\frac{v_{T}}{\Lambda} \end{array}\right) \end{equation} where \begin{eqnarray} \nonumber\delta e_2&=&(1-i)y_{\mu1}\frac{v_1\delta v_2}{\Lambda^2}+y_{\mu2}\frac{u_{\Delta}\delta v_{T3}}{\Lambda^2}+y_{\mu\tau}\frac{u''_{\theta}v^2_T}{\Lambda^3}\\ \nonumber\delta e'_2&=&y_{\mu2}\frac{u_{\Delta}\delta v_{T2}}{\Lambda^2}+y_{\mu e}\frac{u'_{\theta}v^2_T}{\Lambda^3} \end{eqnarray} and we have set $v_T+\delta v_{T1}\rightarrow v_{T}$, $v_1+\delta v_1\rightarrow v_1$ and $\bar{u}_{\Delta}+\delta\bar{u}_{\Delta}\rightarrow\bar{u}_{\Delta}$. In the neutrino sector, since the corrections from the new vacuum and from the higher dimensional operators in the Yukawa superpotential $w_{\nu}$ are of order $1/\Lambda^5$, as shown in the previous subsections, these contributions are negligible. Therefore the neutrino mass matrix is essentially unaffected by the subleading operators.
Performing the same procedure as that in section IV, we see that both the charged lepton masses and the neutrino masses approximately are not modified by the next to leading order operators, and the MNSP matrix becomes \begin{equation} \label{55}V_{\rm MNSP}\approx\left(\begin{array}{ccc} \sqrt{\frac{2}{3}}+\frac{1}{\sqrt{6}}\frac{\delta v^*_{T3}}{v^*_{T}}&\frac{1}{\sqrt{3}}-\frac{1}{\sqrt{3}}\frac{\delta v^*_{T3}}{v^*_{T}}&-\frac{1}{\sqrt{2}}\frac{\delta v^*_{T3}}{v^*_{T}}\\ -\frac{1}{\sqrt{6}}+\frac{1}{\sqrt{6}}\frac{\delta v^*_{T2}}{v^*_T}&\frac{1}{\sqrt{3}}-\frac{1}{\sqrt{3}}\frac{\delta v^*_{T2}}{v^*_T}&-\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}\frac{\delta v^*_{T2}}{v^*_T}\\ -\frac{1}{\sqrt{6}}+\frac{1}{\sqrt{6}}\frac{2\delta v_{T3}-\delta v_{T2}}{v_T}&\frac{1}{\sqrt{3}}+\frac{1}{\sqrt{3}}\frac{\delta v_{T2}+\delta v_{T3}}{v_T}&\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}\frac{\delta v_{T2}}{v_T} \end{array} \right) \end{equation} therefore \begin{eqnarray} \nonumber&&|(V_{\rm MNSP})_{e3}|\approx\frac{1}{\sqrt{2}}\Big|\frac{\delta v_{T3}}{v_{T}}\Big|\\ \nonumber&&\tan^2\theta_{23}\approx1+2\Big(\frac{\delta v_{T2}}{v_{T}}+\frac{\delta v^*_{T2}}{v^*_{T}}\Big)\\ \label{56}&&\tan^2\theta_{12}\approx\frac{1}{2}-\frac{3}{4}\Big(\frac{\delta v_{T3}}{v_{T}}+\frac{\delta v^*_{T3}}{v^*_{T}}\Big) \end{eqnarray} We see $\delta v_{T2}/v_{T}\sim\lambda^2$ and $\delta v_{T3}/v_{T}\sim\lambda^2$ from the Appendix B, therefore the deviations of the mixing angles from the TB mixing predictions are of order $\lambda^2$, which are allowed by the current neutrino oscillation data in Eq.(\ref{8}). \item{Quark masses and CKM matrix} Including the corrections $\delta M^{u}_i$ and $\delta M^{d}_i (i=1,2)$ induced by the new vacuum and the higher dimensional operators in the Yukawa superpotential $w_q$, the up quark and down quark mass matrices becomes \begin{eqnarray} \nonumber{\cal M}^{u}&=&M^{u}+\delta M^{u}_1+\delta M^{u}_2=\left(\begin{array}{ccc} iy_{u1}\frac{u_{\Delta}\delta v_{T2}}{\Lambda^2}+\frac{2i}{3}x_{u2}\frac{u'_{\theta}v^2_T}{\Lambda^3}&-y_{u3}\frac{u''_{\theta}u_{\Delta}}{\Lambda^2}+\delta u_2&-y_{u5}\frac{\delta v_2}{\Lambda}\\ y_{u3}\frac{u''_{\theta}u_{\Delta}}{\Lambda^2}+\delta u_2&y_{u1}\frac{u_{\Delta}v_{T}}{\Lambda^2}+iy_{u2}\frac{v^2_1}{\Lambda^2}&y_{u5}\frac{v_1}{\Lambda}\\ -y_{u4}\frac{\delta v_2}{\Lambda}&y_{u4}\frac{v_1}{\Lambda}&y_t \end{array}\right)v_u\\ \nonumber{\cal M}^{d}&=&\left(\begin{array}{ccc} iy_{d1}\frac{\bar{u}_{\Delta}\delta v_{T2}}{\Lambda^2}&-y_{d2}\frac{u''_{\theta}\bar{u}_{\Delta}}{\Lambda^2}+\frac{1-i}{2}y_{d1}\frac{\bar{u}_{\Delta}\delta v_{T3}}{\Lambda^2}&-y_{d4}\frac{u_{\Delta}\delta v_2}{\Lambda^2}+ix_{d2}\frac{u''_{\theta}v_1v_T}{\Lambda^3}\\ y_{d2}\frac{u''_{\theta}\bar{u}_{\Delta}}{\Lambda^2}+\frac{1-i}{2}y_{d1}\frac{\bar{u}_{\Delta}\delta v_{T3}}{\Lambda^2}&y_{d1}\frac{\bar{u}_{\Delta}v_T}{\Lambda^2}&y_{d4}\frac{u_{\Delta}v_1}{\Lambda^2}\\ -y_{d3}\frac{u_{\Delta}\delta v_2}{\Lambda^2}+ix_{d1}\frac{u''_{\theta}v_1v_T}{\Lambda^3}&y_{d3}\frac{u_{\Delta}v_1}{\Lambda^2}&y_b\frac{u_{\Delta}}{\Lambda} \end{array}\right)v_d \end{eqnarray} where $\delta u_2=\frac{1-i}{2}y_{u1}\frac{u_{\Delta}\delta v_{T3}}{\Lambda^2}-iy_{u2}\frac{v_1\delta v_2}{\Lambda^2}+\frac{1-i}{3}x_{u1}\frac{u''_{\theta}v^2_T}{\Lambda^3}$, and we have set $v_T+\delta v_{T1}\rightarrow v_{T}$, $v_1+\delta v_1\rightarrow v_1$, $u_{\Delta}+\delta u_{\Delta}\rightarrow u_{\Delta}$ and $\bar{u}_{\Delta}+\delta\bar{u}_{\Delta}\rightarrow \bar{u}_{\Delta}$. 
Diagonalizing the above mass matrices perturbatively, we obtain the quark masses as follows \begin{eqnarray} \nonumber m_{u}&\approx&\Big|\Big(iy_{u1}\frac{u_{\Delta}\delta v_{T2}}{\Lambda^2}-iy_{u2}\frac{\delta v^2_2}{\Lambda^2}+\frac{y^2_{u3}y_tu''^2_{\theta}u^2_{\Delta}}{(iy_{u2}y_t-y_{u4}y_{u5})v^2_1\Lambda^2}+\frac{2i}{3}x_{u2}\frac{u'_{\theta}v^2_T}{\Lambda^3}\Big)v_u\Big|\\ \nonumber m_{c}&\approx&\Big|\Big(iy_{u2}-\frac{y_{u4}y_{u5}}{y_t}\Big)\frac{v^2_1}{\Lambda^2}v_u\Big|\\ \nonumber m_{t}&\approx&\Big|y_tv_u\Big|\\ \nonumber m_{d}&\approx&\Big|\Big(iy_{d1}\frac{\bar{u}_{\Delta}\delta v_{T2}}{\Lambda^2}+\frac{y^2_{d2}}{y_{d1}}\frac{u''^2_{\theta}\bar{u}_{\Delta}}{v_T\Lambda^2}\Big)v_d\Big|\\ \nonumber m_s&\approx&\Big|y_{d1}\frac{\bar{u}_{\Delta}v_T}{\Lambda^2}v_d\Big|\\ \label{57}m_b&\approx&\Big|y_b\frac{u_{\Delta}}{\Lambda}v_d\Big| \end{eqnarray} and the CKM matrix elements are approximately given by \begin{eqnarray} \nonumber V_{ud}&\approx& V_{cs}\approx V_{tb}\approx1\\ \nonumber V^{*}_{us}&\approx&-V_{cd}\approx\frac{y_{d2}}{y_{d1}}\frac{u''_{\theta}}{v_{T}}+\frac{1-i}{2}\frac{\delta v_{T3}}{v_T}+\frac{\delta v_2}{v_1}-\frac{y_{u3}y_tu''_{\theta}u_{\Delta}}{(iy_{u2}y_t-y_{u4}y_{u5})v^2_1}\\ \nonumber V^{*}_{cb}&\approx&-V_{ts}\approx(\frac{y_{d3}}{y_b}-\frac{y_{u4}}{y_t})\frac{v_1}{\Lambda}\\ \nonumber V^{*}_{ub}&\approx&-\frac{y_{u3}y_t}{iy_{u2}y_t-y_{u4}y_{u5}}(\frac{y_{d3}}{y_b}-\frac{y_{u4}}{y_t})\frac{u''_{\theta}u_{\Delta}}{v_1\Lambda}+i\frac{x_{d1}}{y_b}\frac{u''_{\theta}v_1v_{T}}{u_{\Delta}\Lambda^2}\\ \label{58}V_{td}&\approx&\frac{y_{d2}}{y_{d1}}(\frac{y_{d3}}{y_b}-\frac{y_{u4}}{y_t})\frac{u''_{\theta}v_1}{v_T\Lambda}+(\frac{y_{d3}}{y_b}-\frac{y_{u4}}{y_t})(\frac{\delta v_2}{\Lambda}+\frac{1-i}{2}\frac{v_1\delta v_{T3}}{v_T\Lambda})-i\frac{x_{d1}}{y_b}\frac{u''_{\theta}v_1v_T}{u_{\Delta}\Lambda^2} \end{eqnarray} Because $\delta v_{T2}/\Lambda$, $\delta v_{T3}/\Lambda$ and $\delta v_2/\Lambda$ are of order $\lambda^4$ from the Appendix B, to get the appropriate magnitude of the up quark mass, we assume that the couplings $y_{u1}$ and $x_{u2}$ are smaller than one by a factor of $\lambda$, i.e. $y_{u1}\sim x_{u2}\sim\lambda$. From Eq.(\ref{14}), Eq.(\ref{57}) and Eq.(\ref{58}), we see that the realistic hierarchies in quark masses and CKM matrix elements are generated, and the relations between quark masses and mixing angles in Eq.(\ref{36}) are no longer satisfied after including the subleading contributions. It is very likely that the higher order contributions would improve the agreement between the model predictions and the experimental data. \end{enumerate} \section{summary and discussion } $T'$ is a promising discrete group for a unified description of both quark and lepton mass hierarchies and flavor mixing. $T'$ can reproduce the success of $A_4$ model building, and $T'$ has advantage over $A_4$ in extension to the quark sector because it has doublet representations in addition to singlet representations and triplet representation. We have built a SUSY model based on $T'\otimes Z_3\otimes Z_9$ flavor symmetry, where the fermion mass hierarchies arise from the flavor symmetry breaking which is crucial in producing the flavor mixing as well. In the lepton sector, the left handed electroweak lepton doublets $l_i(i=1,2,3)$ are $T'$ triplet, and the right handed charged leptons $e^{c}$, $\mu^{c}$ and $\tau^{c}$ transform as $\mathbf{1}$, $\mathbf{1}''$ and $\mathbf{1}'$ respectively. 
The charged lepton mass matrix is no longer diagonal at the leading order, and $T'$ is broken completely in the charged lepton sector. In contrast, in the $A_4$ model\cite{Altarelli:2005yx} and in the $T'$ model of Ref.\cite{Feruglio:2007uu} it is broken down to the $Z_3$ subgroup generated by the element $T$ at the leading order, and is then completely broken by the subleading operators. The MNSP matrix is predicted to be nearly the TB mixing matrix at the leading order, and the deviations due to the contributions of the charged lepton sector are of order $\lambda^3$ and are negligible. In the neutrino sector, $T'$ is broken down to the $Z_4$ subgroup generated by the element $TST^2$ at the leading order, as in Ref.\cite{Feruglio:2007uu}. The higher order corrections to the neutrino mass matrix are strongly suppressed; consequently, the $Z_4$ symmetry remains almost exact. Taking into account the next to leading order operators in the Yukawa superpotential $w_{\ell}$ and the flavon superpotential $w_v$, the mixing angles are predicted to deviate from the TB mixing predictions by terms of order $\lambda^2$, which lie in the interval indicated by the experimental data at the $3\sigma$ level. In the quark sector, doublet representations are exploited. The first two generations transform as a doublet ($\mathbf{2}$ or $\mathbf{2}'$), and the third generation is a $T'$ singlet ($\mathbf{1}'$ or $\mathbf{1}''$). At the leading order, both the up and down quark Yukawa matrix textures of the U(2) flavor theory are produced, and the correct hierarchies in quark masses and mixing angles are obtained. $T'$ is completely broken at the leading order; this is in contrast with Ref.\cite{Feruglio:2007uu}, where the $T'$ flavor symmetry is broken down to $Z_3$ at the leading order and is further broken completely by the next to leading order contributions. After including the corrections induced by the next to leading order operators, the correct orders of the quark masses and CKM matrix elements obtained at the leading order remain, except for the up quark mass, for which we need to mildly fine-tune the coupling coefficients $y_{u1}$ and $x_{u2}$ to be smaller than one by a factor of $\lambda$. The vacuum alignment and the higher order corrections are discussed in detail. We have shown that the scalar potential of the model produces the correct $T'$ breaking alignment in a finite portion of the parameter space, which plays an important role in producing the realistic fermion mass hierarchies and flavor mixing. The VEVs should be of the orders shown in Eq.(\ref{14}); the minor hierarchy among the VEVs can be achieved by moderately fine-tuning the parameters in $w_v$. The origin of these hierarchies may be qualitatively understood in grand unification models\cite{ding}, in which $b-\tau$ unification may be predicted as well. The higher order corrections are due to the higher dimensional operators which modify the Yukawa couplings and the vacuum alignment, and they do not spoil the leading order predictions.
Our model is different from the model in Ref.\cite{Feruglio:2007uu} mainly in the following three aspects: \begin{enumerate} \item {We have introduced the auxiliary discrete symmetry $Z_9$ instead of a continuous $U(1)_{FN}$; both the fermion mass hierarchies and the flavor mixing arise from the spontaneous breaking of the flavor symmetry, whereas a continuous abelian flavor symmetry $U(1)_{FN}$ is introduced to describe the fermion mass hierarchies in Ref.\cite{Feruglio:2007uu}.} \item{There are large differences between the two models in the quark sector. At the leading order, the favorable Yukawa matrix textures of the U(2) flavor theory are obtained, and the realistic quark mass hierarchies and CKM matrix elements are produced in our model. However, in the model of Ref.\cite{Feruglio:2007uu}, only the masses of the second and third generation quarks and the mixing between them are generated at the leading order; the masses and mixing angles of the first generation quarks are produced via subleading effects induced by the higher dimensional operators.} \item{The large mass hierarchy between the top quark and the bottom quark arises from the flavor symmetry breaking, and $\tan\beta\equiv v_{u}/v_{d}$ is of order one in our model. In Ref.\cite{Feruglio:2007uu}, by contrast, this hierarchy is due to a large $\tan\beta$; therefore the quark masses and mixing angles may receive large radiative corrections, and the successful predictions in Ref.\cite{Feruglio:2007uu} could be destroyed at low energy.} \end{enumerate} We would like to stress that all the above differences originate from the different flavor symmetry (discrete $Z_9$ instead of continuous $U(1)_{FN}$), the different charge assignments and the different flavor symmetry breaking patterns. As in most flavor models, there are a large number of operators with dimensionless order one coefficients in front of them; however, experimental tests of this model are not impossible\cite{ding}. Since quarks and leptons have superpartners in SUSY, the flavor symmetry affects the squark and slepton mass matrices as well, and a specific pattern of sfermion masses is predicted, which could be tested by measuring squark and slepton masses in future experiments. Moreover, the squark and slepton mass matrices are severely constrained by the experiments on flavor changing neutral current (FCNC) processes, and the off-diagonal elements of the sfermion mass matrices must be suppressed in the super-CKM basis. Hence searching for FCNC and CP violating phenomena, such as the lepton flavor violating decay $\mu\rightarrow e\gamma$ and $\mu-e$ conversion in atoms, electric dipole moments of the electron and neutron, and $B-\bar{B}$ mixing, etc., also provides possible tests of the model. Moreover, we should check whether there is some accidental continuous symmetry in the scalar potential, which could affect the above FCNC processes\cite{Bazzocchi:2007na,Bazzocchi:2004dw}. In addition, the cosmological consequences of the $T'$ flavor symmetry and its spontaneous breaking deserve further study\cite{cosmology}. Except for the term involving the top quark, the Yukawa superpotential of the model consists of non-renormalizable interactions, and many non-renormalizable operators are involved in the higher order corrections. These non-renormalizable interactions may be generated from a renormalizable theory by integrating out some heavy fields\cite{ding}.
Searching for the origin of these non-renormalizable interactions is a challenging and interesting question; if it can be answered, the free parameters of the model would be drastically reduced, and the model would become more predictive. \section*{ACKNOWLEDGEMENTS} \indent I am grateful to Prof. Dao-Neng Gao and Mu-Lin Yan for very helpful and stimulating discussions. This work is supported by the China Postdoctoral Science Foundation (20070420735). \begin{appendix} \section{basic properties of the discrete group $T'$} The group $T'$ is denoted as $24/13$ in the Thomas-Wood classification\cite{group}, and it is isomorphic to the group $SL_2(F_3)$\cite{Carr:2007qw,Frampton:2007et}, which consists of $2\times2$ unimodular matrices whose elements are added and multiplied as integers modulo 3. $T'$ is the double cover of $A_4$, the group of even permutations of four objects, and the order of $T'$ is 24. Geometrically, $T'$ is the lift to SU(2) of the group of proper rotations leaving a regular tetrahedron invariant. $T'$ can be generated by two generators $S$ and $T$, which obey the multiplication rules\cite{qtp,group} \begin{equation} \label{a1}S^4=T^3=1,~~~TS^2=S^2T,~~~ST^{-1}S=TST \end{equation} The 24 elements can be written in the form $T^{l}S^{m}T^{n}$, where $l=0,\pm1$, and if $m=0$ or 2 then $n=0$, while if $m=\pm1$ then $n=0,\pm1$. The character table, the explicit matrix representations and the Clebsch-Gordan coefficients of $T'$ have already been calculated\cite{qtp} and are reformulated in Ref.\cite{Feruglio:2007uu}. $T'$ has seven inequivalent irreducible representations: three singlet representations $\mathbf{1}^{0}$ and $\mathbf{1}^{\pm}$, three doublet representations $\mathbf{2}^{0}$ and $\mathbf{2}^{\pm}$, and one triplet representation $\mathbf{3}$. The triality superscript describes the multiplication rules of these representations concisely: we identify $\pm$ with $\pm1$, trialities add modulo three, and the multiplication rules are as follows \begin{eqnarray} \nonumber&& \mathbf{1}^{i}\otimes\mathbf{1}^{j}=\mathbf{1}^{i+j}\;,~~~\mathbf{1}^{i}\otimes\mathbf{2}^{j}=\mathbf{2}^{j}\otimes\mathbf{1}^{i}=\mathbf{2}^{i+j}~~({\rm{with}}~ i,j=0,\pm1)\\ \nonumber&&\mathbf{1}^{i}\otimes\mathbf{3}=\mathbf{3}\otimes\mathbf{1}^{i}=\mathbf{3}\;,~~~\mathbf{2}^{i}\otimes\mathbf{2}^{j}=\mathbf{3}\oplus\mathbf{1}^{i+j}\\ \label{a2}&&\mathbf{2}^{i}\otimes\mathbf{3}=\mathbf{3}\otimes\mathbf{2}^{i}=\mathbf{2}^{0}\oplus\mathbf{2}^{+}\oplus\mathbf{2}^{-}\;,\mathbf{3}\otimes\mathbf{3}=\mathbf{3}_{S}\oplus\mathbf{3}_{A}\oplus\mathbf{1}^{0}\oplus\mathbf{1}^{+}\oplus\mathbf{1}^{-} \end{eqnarray} where the triality notations are related to the commonly used notations $\mathbf{1}$, $\mathbf{1}'$, $\mathbf{1}''$, $\mathbf{2}$, $\mathbf{2}'$ and $\mathbf{2}''$ \cite{Ma:2001dn,Babu:2002dz,Ma:2004zv,Altarelli:2005yp,Chen:2005jm,Zee:2005ut,Altarelli:2005yx,He:2006dk,Ma:2006sk,King:2006np,Morisi:2007ft,Bazzocchi:2007na,Bazzocchi:2007au,Lavoura:2007dw,Brahmachari:2008fn,Altarelli:2008bg,Bazzocchi:2008rz,Carr:2007qw,Feruglio:2007uu} by the relations $\mathbf{1}^{0}\equiv\mathbf{1}$, $\mathbf{1}^{+}\equiv\mathbf{1}'$, $\mathbf{1}^{-}\equiv\mathbf{1}''$ and similarly for the doublet representations. The representations $\mathbf{1}'$ and $\mathbf{1}''$ are complex conjugates of each other, and the same holds for the $\mathbf{2}'$ and $\mathbf{2}''$ representations.
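As a quick numerical cross-check (not part of the original discussion), the presentation in Eq.(\ref{a1}) can be verified for all seven irreducible representations, using the explicit $S$ and $T$ matrices listed below; the sketch assumes only NumPy.
\begin{verbatim}
# Verify S^4 = T^3 = 1, T S^2 = S^2 T and S T^{-1} S = T S T in every irrep,
# using the explicit representation matrices quoted in this appendix.
import numpy as np

w  = np.exp(2j*np.pi/3)
N1 = (-1/np.sqrt(3))*np.array([[1j, np.sqrt(2)*np.exp(1j*np.pi/12)],
                               [-np.sqrt(2)*np.exp(-1j*np.pi/12), -1j]])
N2 = np.diag([w, 1])
S3 = (1/3)*np.array([[-1, 2*w, 2*w**2],
                     [2*w**2, -1, 2*w],
                     [2*w, 2*w**2, -1]])
T3 = np.diag([1, w, w**2])

reps = {"1^0": (np.eye(1), np.eye(1)),      "1^+": (np.eye(1), w*np.eye(1)),
        "1^-": (np.eye(1), w**2*np.eye(1)), "2^0": (N1, w*N2),
        "2^+": (N1, w**2*N2),               "2^-": (N1, N2),
        "3":   (S3, T3)}

for name, (S, T) in reps.items():
    I = np.eye(S.shape[0])
    ok = (np.allclose(np.linalg.matrix_power(S, 4), I)
          and np.allclose(np.linalg.matrix_power(T, 3), I)
          and np.allclose(T @ S @ S, S @ S @ T)
          and np.allclose(S @ np.linalg.inv(T) @ S, T @ S @ T))
    print(name, ok)   # True for all seven irreps
\end{verbatim}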
Since $T'$ is generated by the elements $S$ and $T$, we only need explicit matrix representations of both $S$ and $T$ as follows \begin{eqnarray} \nonumber&&S(\mathbf{1}^{0})=S(\mathbf{1}^{+})=S(\mathbf{1}^{-})=1\\ \nonumber&&T(\mathbf{1}^{0})=1\;,~~~T(\mathbf{1}^{+})=\omega\;,~~~T(\mathbf{1}^{-})=\omega^2\\ \nonumber&&S(\mathbf{2}^{0})=S(\mathbf{2}^{+})=S(\mathbf{2}^{-})=N_1\\ \nonumber&&T(\mathbf{2}^{0})=\omega N_2\;,~~~T(\mathbf{2}^{+})=\omega^2 N_2\;,~~~T(\mathbf{2}^{-})=N_2\\ \label{a3}&&S(\mathbf{3})=\frac{1}{3}\left(\begin{array}{ccc} -1&2\,\omega&2\,\omega^2\\ 2\,\omega^2&-1&2\,\omega\\ 2\,\omega&2\,\omega^2&-1 \end{array}\right)\;,~~~T(\mathbf{3})=\left(\begin{array}{ccc} 1&0&0\\ 0&\omega&0\\ 0&0&\omega^2 \end{array}\right) \end{eqnarray} where $\omega=e^{i2\pi/3}$, and the matrices $N_1$ and $N_2$ are defined as \begin{equation} \label{a4}N_1=\frac{-1}{\sqrt{3}}\left(\begin{array}{cc} i&\sqrt{2}\,e^{i\pi/12}\\ -\sqrt{2}\,e^{-i\pi/12}&-i\\ \end{array}\right)\;,~~~N_2=\left(\begin{array}{cc} \omega&0\\ 0&1\\ \end{array}\right) \end{equation} \section{Higher order corrections to the vacuum alignment} We will discuss how the vacuum alignment achieved at the leading order is modified by the inclusion of higher dimensional operators, then the superpotential $w_{v}$ is modified into \begin{equation} \label{b1}w_{v}=w^{LO}_v+w^{NL}_v \end{equation} where $w^{LO}_v$ is the leading order contributions \begin{eqnarray} \nonumber&&w^{LO}_{v}=g_1(\varphi^{R}_T\phi\phi)+g_2(\varphi^{R}_T\varphi_T)\Delta+g_3(\phi^R\phi)\chi+g_4(\varphi_T\phi^{R}\phi)+g_5\chi^R\chi^2+g_6\chi^R\theta'\theta''\\ \nonumber&&+g_7\chi^R(\varphi_T\varphi_T)+g_8\theta''^R\theta''^2+g_9\theta''^R\theta'\chi+g_{10}\theta''^R(\varphi_T\varphi_T)'+M_{\Delta}\Delta^R\Delta+g_{11}\Delta^{R}\chi^2\\ \nonumber&&+g_{12}\Delta^{R}\theta'\theta''+g_{13}\Delta^R(\varphi_T\varphi_T)+\bar{M}_{\Delta}\bar{\Delta}^R\bar{\Delta}+g_{14}\bar{\Delta}^{R}\Delta^2+g_{15}(\varphi^{R}_{S}\varphi_S\varphi_S)+g_{16}(\varphi^R_S\varphi_S)\tilde{\xi}\\ \label{b2}&&+g_{17}\xi^R(\varphi_S\varphi_S)+g_{18}\xi^R\xi^2+g_{19}\xi^R\xi\tilde{\xi}+g_{20}\xi^R\tilde{\xi}^2 \end{eqnarray} The above leading order superpotential gives rise to the following vacuum configuration \begin{eqnarray} \nonumber &&\langle \varphi_{T}\rangle=(v_{T},0,0),~~~\langle \varphi_{S}\rangle=(v_S,v_S,v_S),~~~\langle\phi\rangle=(v_1,0),\\ \nonumber&&\langle\xi\rangle=u_{\xi},~~~\langle\tilde{\xi}\rangle=0,~~~\langle\theta'\rangle=u'_{\theta},~~~\langle\theta''\rangle=u''_{\theta},\\ \label{b3}&&\langle\Delta\rangle=u_{\Delta},~~~\langle\bar{\Delta}\rangle=\bar{u}_{\Delta},~~~\langle\chi\rangle=u_{\chi} \end{eqnarray} The effect of the next to leading order superpotential $w^{NL}_{v}$ on the above SUSY conserving vacuum configuration is just a shift in the VEVs of the scalar fields, therefore the vacuum configuration is modified into \begin{eqnarray} \nonumber&&\langle\varphi_{T}\rangle=(v_T+\delta v_{T1},\delta v_{T2},\delta v_{T3}),~~~\langle\varphi_S\rangle=(v_S+\delta v_{S1},v_S+\delta v_{S2},v_S+\delta v_{S3}),\\ \nonumber&&\langle\phi\rangle=(v_1+\delta v_1,\delta v_2),~~~\langle\xi\rangle=u_{\xi},~~~\langle\tilde{\xi}\rangle=\delta\tilde{u}_{\xi},~~~\langle\theta'\rangle=u'_{\theta}+\delta u'_{\theta},\\ \label{b4}&&\langle\theta''\rangle=u''_{\theta}+\delta u''_{\theta},~~~\langle\Delta\rangle=u_{\Delta}+\delta u_{\Delta},~~~\langle\bar{\Delta}\rangle=\bar{u}_{\Delta}+\delta\bar{u}_{\Delta},~~~\langle\chi\rangle=u_{\chi} \end{eqnarray} and $w^{NL}_{v}$ is given by 
\begin{eqnarray} \nonumber w^{NL}_{v}&=&\frac{1}{\Lambda}\sum_{i=1}^{14}t_i{\cal O}^{T}_i+\frac{1}{\Lambda^2}\big(f_1{\cal O}^{\phi}_1+\sum_{i=1}^{8}k_i{\cal O}^{\chi}_i+\sum_{i=1}^{4}c_i{\cal O}^{\theta}_i+\sum_{i=1}^{8}d_i{\cal O}^{\Delta}_i\big)+\frac{1}{\Lambda}\sum_{i=1}^{4}\bar{d}_i{\cal O}^{\bar{\Delta}}_i \end{eqnarray} where ${\cal O}^{T}_i$, ${\cal O}^{\phi}_1$ etc are operators linear in the driving fields $\varphi^{R}_T$ and $\phi^{R}$ et al., which are consistent with the symmetry of the model, and each operator comprises 4 or 5 superfields. Since the next to leading order operators linear in $\varphi^{R}_{S}$ and $\xi^{R}$ are of order $1/\Lambda^3$, therefore the shifts $\delta v_{Si}(i=1,2,3)$ and $\delta\tilde{u}_{\xi}$ are suppressed by $1/\Lambda^3$, and these operators are omitted in the $w^{NL}_v$ above. The operators ${\cal O}^{T}_i(i=1-14)$ and ${\cal O}^{\phi}_1$ are given by \begin{eqnarray} \label{b5}&&\begin{array}{ll} {\cal O}^{T}_1=(\varphi^{R}_T\varphi_T)(\varphi_T\varphi_T),&{\cal O}^{T}_2=(\varphi^{R}_T\varphi_T)'(\varphi_T\varphi_T)'',\\ {\cal O}^{T}_3=(\varphi^{R}_T\varphi_T)''(\varphi_T\varphi_T)',&{\cal O}^{T}_4=((\varphi^{R}_T\varphi_T)_{\mathbf{3}_S}(\varphi_T\varphi_T)_{\mathbf{3}_S}),\\ {\cal O}^{T}_5=((\varphi^{R}_T\varphi_T)_{\mathbf{3}_A}(\varphi_T\varphi_T)_{\mathbf{3}_S}),&{\cal O}^{T}_6=(\varphi^{R}_T\varphi_T\varphi_T)\chi,\\ {\cal O}^{T}_7=(\varphi^{R}_T\varphi_T\varphi_T)''\theta',& {\cal O}^{T}_8=(\varphi^{R}_{T}\varphi_T\varphi_{T})'\theta'',\\ {\cal O}^{T}_9=(\varphi^{R}_T\varphi_T)\chi^2,& {\cal O}^{T}_{10}=(\varphi^{R}_T\varphi_T)\theta'\theta'',\\ {\cal O}^{T}_{11}=(\varphi^{R}_T\varphi_T)''\chi\theta',& {\cal O}^{T}_{12}=(\varphi^{R}_T\varphi_T)''\theta''^2,\\ {\cal O}^{T}_{13}=(\varphi^{R}_T\varphi_T)'\chi\theta'',& {\cal O}^{T}_{14}=(\varphi^{R}_T\varphi_T)'\theta'^2 \end{array}\\ \nonumber&&\\ \label{b6}&&{\cal O}^{\phi}_1=(\phi^{R}\phi)\Delta\bar{\Delta}^2 \end{eqnarray} The structures ${\cal O}^{\chi}_i(i=1-8)$ are explicitly written as follows \begin{equation} \label{b7}\begin{array}{ll} {\cal O}^{\chi}_1=\chi^{R}\Delta\bar{\Delta}^2\chi,~~~~~~~~~~~~~~~~&{\cal O}^{\chi}_2=\chi^{R}\Delta(\varphi_S\varphi_S\varphi_S),\\ {\cal O}^{\chi}_3=\chi^{R}\Delta(\varphi_S\varphi_S){\xi},&{\cal O}^{\chi}_{4}=\chi^{R}\Delta(\varphi_S\varphi_S)\tilde{\xi},\\ {\cal O}^{\chi}_5=\chi^{R}\Delta\xi^3,&{\cal O}^{\chi}_6=\chi^{R}\Delta\xi^2\tilde{\xi},\\ {\cal O}^{\chi}_7=\chi^{R}\Delta\xi\tilde{\xi}^2,&{\cal O}^{\chi}_8=\chi^{R}\Delta\tilde{\xi}^3 \end{array} \end{equation} The operators involving $\theta''^{R}$ and $\Delta^R$ are \begin{eqnarray} \label{b8}&&\begin{array}{ll} {\cal O}^{\theta}_1=\theta''^{R}\theta'\Delta\bar{\Delta}^2,~~~~~~~~~~~~~~&{\cal O}^{\theta}_2=\theta''^{R}\Delta(\varphi_S\varphi_S\varphi_S)',\\ {\cal O}^{\theta}_3=\theta''^{R}\Delta(\varphi_S\varphi_S)'\xi,&{\cal O}^{\theta}_4=\theta''^{R}\Delta(\varphi_S\varphi_S)'\tilde{\xi} \end{array}\\ \nonumber&&\\ \label{b9}&&\begin{array}{ll} {\cal O}^{\Delta}_1=\Delta^{R}\Delta\bar{\Delta}^2\chi,~~~~~~~~~~~~~~&{\cal O}^{\Delta}_2=\Delta^R\Delta(\varphi_S\varphi_S\varphi_S),\\ {\cal O}^{\Delta}_3=\Delta^{R}\Delta(\varphi_S\varphi_S)\xi,&{\cal O}^{\Delta}_4=\Delta^{R}\Delta(\varphi_S\varphi_S)\tilde{\xi},\\ {\cal O}^{\Delta}_5=\Delta^{R}\Delta\xi^3,&{\cal O}^{\Delta}_{6}=\Delta^{R}\Delta\xi^2\tilde{\xi},\\ {\cal O}^{\Delta}_7=\Delta^{R}\Delta\xi\tilde{\xi}^2,&{\cal O}^{\Delta}_{8}=\Delta^{R}\Delta\tilde{\xi}^3 \end{array} \end{eqnarray} and the operators ${\cal O}^{\bar{\Delta}}_i 
(i=1-4)$ are given by \begin{eqnarray} \label{b10}\begin{array}{ll} ~~~~~~~{\cal O}^{\bar{\Delta}}_1=\bar{\Delta}^{R}\Delta(\varphi_T\varphi_T),~~~~~~~~~~~~~~~&{\cal O}^{\bar{\Delta}}_2=\bar{\Delta}^{R}\Delta\chi^2,\\ ~~~~~~~{\cal O}^{\bar{\Delta}}_3=\bar{\Delta}^{R}\Delta\theta'\theta'',&{\cal O}^{\bar{\Delta}}_4=\bar{\Delta}^{R}(\varphi_T\phi\phi),\\ \end{array} \end{eqnarray} We perform the same minimization procedure as that in section V, again we search for the zero of the $F$ terms associated with the driving fields, Only terms linear in the shift $\delta v$ are kept, and terms of order $\delta v/\Lambda$ are neglected, then the minimization equations become \begin{eqnarray} \nonumber&&2ig_1v_1\delta v_1+g_2u_{\Delta}\delta v_{T1}+g_2v_{T}\delta u_{\Delta}+\frac{v_T}{\Lambda}(t_1v^2_T+\frac{4}{9}t_4v^2_T+\frac{2}{3}t_6u_{\chi}v_{T}+t_{9}u^2_{\chi}+t_{10}u'_{\theta}u''_{\theta})=0\\ \nonumber&&(1-i)g_1v_1\delta v_2+g_2u_{\Delta}\delta v_{T3}+\frac{v_{T}}{\Lambda}(\frac{2}{3}t_{8}u''_{\theta}v_T+t_{13}u''_{\theta}u_{\chi}+t_{14}u'^2_{\theta})=0\\ \nonumber&&g_2u_{\Delta}\delta v_{T2}+\frac{v_T}{\Lambda}(\frac{2}{3}t_7u'_{\theta}v_T+t_{11}u'_{\theta}u_{\chi}+t_{12}u''^2_{\theta})=0\\ \nonumber&&(g_3u_{\chi}+g_4v_{T})\delta v_2-(1-i)g_4v_1\delta v_{T3}=0\\ \nonumber&&g_4v_1\delta v_{T1}-\frac{f_1}{\Lambda^2}u_{\Delta}\bar{u}^2_{\Delta}v_1=0\\ \nonumber&&g_{6}u'_{\theta}\delta u''_{\theta}+g_6 u''_{\theta}\delta u'_{\theta}+2g_{7}v_{T}\delta v_{T1}+\frac{u_{\Delta}}{\Lambda^2}(k_1\bar{u}^2_{\Delta}u_{\chi}+3k_3u_{\xi}v^2_S+k_5u^3_{\xi})=0\\ \nonumber&&2g_{8}u''_{\theta}\delta u''_{\theta}+g_9u_{\chi}\delta u'_{\theta}+2g_{10}v_T\delta v_{T2}+\frac{u_{\Delta}}{\Lambda^2}(c_1u'_{\theta}\bar{u}^2_{\Delta}+3c_3u_{\xi}v^2_S)=0\\ \nonumber&&M_{\Delta}\delta u_{\Delta}+g_{12}u'_{\theta}\delta u''_{\theta}+g_{12}u''_{\theta}\delta u'_{\theta}+2g_{13}v_{T}\delta v_{T1}+\frac{u_{\Delta}}{\Lambda^2}(d_1\bar{u}^2_{\Delta}u_{\chi}+3d_3u_{\xi}v^2_S+d_5u^3_{\xi})=0\\ \label{b11}&&\bar{M}_{\Delta}\delta\bar{u}_{\Delta}+2g_{14}u_{\Delta}\delta u_{\Delta}+\frac{1}{\Lambda}(\bar{d}_1u_{\Delta}v^2_T+\bar{d}_2u_{\Delta}u^2_{\chi}+\bar{d}_3u_{\Delta}u'_{\theta}u''_{\theta}+i\bar{d}_4v^2_1v_T)=0 \end{eqnarray} Solving the above linear equations, then the shifts of the VEVs are \begin{eqnarray} \nonumber\delta v_{T1}&=&\frac{f_1}{g_4}\frac{u_{\Delta}\bar{u}^2_{\Delta}}{\Lambda^2}\\ \nonumber\delta v_{T2}&=&-\frac{v_T}{g_2u_{\Delta}\Lambda}(\frac{2}{3}t_7u'_{\theta}v_T+t_{11}u'_{\theta}u_{\chi}+t_{12}u''^2_{\theta})\\ \nonumber\delta v_{T3}&=&-\frac{v_T}{2g_{2}u_{\Delta}\Lambda}(\frac{2}{3}t_8u''_{\theta}v_T+t_{13}u''_{\theta}u_{\chi}+t_{14}u'^2_{\theta})\\ \nonumber\delta v_1&=&\frac{iv_T}{2g_1v_1\Lambda}(t_1v^2_T+\frac{4}{9}t_4v^2_T+\frac{2}{3}t_6u_{\chi}v_T+t_9u^2_{\chi}+t_{10}u'_{\theta}u''_{\theta})+{ \cal O}(\frac{1}{\Lambda^2})\\ \nonumber\delta v_2&=&-\frac{(1-i)v_1}{4g_2u_{\Delta}\Lambda}(\frac{2}{3}t_8u''_{\theta}v_T+t_{13}u''_{\theta}u_{\chi}+t_{14}u'^2_{\theta})\\ \nonumber\delta u'_{\theta}&=&-\frac{2g_{10}u'_{\theta}v^2_T}{3g_2g_8u''^2_{\theta}u_{\Delta}\Lambda}(\frac{2}{3}t_7u'_{\theta}v_T+t_{11}u'_{\theta}u_{\chi}+t_{12}u''^2_{\theta})+{ \cal O}(\frac{1}{\Lambda^2})\\ \nonumber\delta u''_{\theta}&=&\frac{2g_{10}v^2_T}{3g_2g_8u''_{\theta}u_{\Delta}\Lambda}(\frac{2}{3}t_7u'_{\theta}v_T+t_{11}u'_{\theta}u_{\chi}+t_{12}u''^2_{\theta})+{\cal O}(\frac{1}{\Lambda^2})\\ \nonumber\delta 
u_{\Delta}&=&\frac{2(g_7g_{12}-g_6g_{13})f_1}{g_4g_6}\frac{u_{\Delta}\bar{u}^2_{\Delta}v_T}{M_{\Delta}\Lambda^2}+\frac{g_{12}u_{\Delta}}{g_6M_{\Delta}\Lambda^2}(k_1\bar{u}^2_{\Delta}u_{\chi}+3k_3u_{\xi}v^2_S+k_5u^3_{\xi})\\ \nonumber&&-\frac{u_{\Delta}}{M_{\Delta}\Lambda^2}(d_1\bar{u}^2_{\Delta}u_{\chi}+3d_3u_{\xi}v^2_S+d_5u^3_{\xi})\\ \label{b12}\delta\bar{u}_{\Delta}&=&-\frac{1}{\bar{M}_{\Delta}\Lambda}(\bar{d}_1u_{\Delta}v^2_T+\bar{d}_2u_{\Delta}u^2_{\chi}+\bar{d}_3u_{\Delta}u'_{\theta}u''_{\theta}+id_4v^2_1v_T)+{ \cal O}(\frac{1}{\Lambda^2}) \end{eqnarray} where the contributions of order $1/\Lambda^2$ in $\delta v_1$, $\delta u'_{\theta}$, $\delta u''_{\theta}$ and $\delta\bar{u}_{\Delta}$, which are not written out explicitly, are also higher order in $\lambda$ relative to the leading contributions suppressed by $1/\Lambda$. Since $\delta v_{T1}$ and $\delta u_{\Delta}$ are of order $1/\Lambda^2$, terms of the same order should not be omitted in the relevant minimization equations, then $\delta v_{T1}$ is modified into \begin{eqnarray} \nonumber\delta v_{T1}&=&\frac{f_1}{g_4}\frac{u_{\Delta}\bar{u}^2_{\Delta}}{\Lambda^2}-\frac{v_T}{2g^2_2u^2_{\Delta}\Lambda^2}(\frac{2}{3}t_7u'_{\theta}v_T+t_{11}u'_{\theta}u_{\chi}+t_{12}u''^2_{\theta})(\frac{2}{3}t_8u''_{\theta}v_T+t_{13}u''_{\theta}u_{\chi}\\ \label{b13}&&+t_{14}u'^2_{\theta}) \end{eqnarray} \end{appendix}
\section{Introduction} Total variation (TV) denoising is a nonlinear filtering method based on the assumption that the underlying signal is piecewise constant (equivalently, the derivative of the underlying signal is \emph{sparse}) \cite{ROF_1992}. Such signals arise in geoscience, biophysics, and other areas \cite{Little_2011_RSoc_Part1}. The TV denoising technique is also used in conjunction with other methods in order to process more general types of signals \cite{Gholami_2013_SP, Easley_2009_shearlet_tv, DurandFroment_2003_SIAM, Ding_2015_SPL}. Total variation denoising is prototypical of methods based on sparse signal models. It is defined by the minimization of a convex cost function comprising a quadratic data fidelity term and a non-differentiable convex penalty term. The penalty term is the composition of a linear operator and the $ \ell_1 $ norm. Although the $ \ell_1 $ norm stands out as the convex penalty that most effectively induces sparsity \cite{Hastie_2015_CRC_book}, non-convex penalties can lead to more accurate estimation of the underlying signal \cite{Nikolova_2000_SIAM, Nikolova_2005_MMS, Nikolova_2010_TIP, Rodriguez_2009_TIP, Storath_2014_TSP}. A few recent papers consider the prescription of non-convex penalties that maintain the convexity of the TV denoising cost function \cite{Lanza_2016_JMIV, MalekMohammadi_2016_TSP, Selesnick_SPL_2015, Astrom_2015_cnf}. (The motivation for this is to leverage the benefits of both non-convex penalization and convex optimization, e.g., to accurately estimate the amplitude of jump discontinuities while guaranteeing the uniqueness of the solution.) The penalties considered in these works are separable (additive). But non-separable penalties can outperform separable penalties in this context. This is because preserving the convexity of the cost function is a severely limiting requirement. Non-separable penalties can more successfully meet this requirement because they are more general than separable penalties \cite{Selesnick_2016_TSP_BISR}. This paper proposes a non-separable non-convex penalty for total variation denoising that generalizes the standard penalty and maintains the convexity of the cost function to be minimized.% \footnote{Software is available at {http://eeweb.poly.edu/iselesni/mtvd}} The new penalty, which is based on the Moreau envelope, can more accurately estimate the amplitudes of jump discontinuities in an underlying piecewise constant signal. \subsection{Relation to Prior Work} Numerous non-convex penalties and algorithms have been proposed to outperform $ \ell_1 $-norm regularization for the estimation of sparse signals e.g., \cite{Castella_2015_camsap_noabr, Candes_2008_JFAP, Nikolova_2011_chap, Mohimani_2009_TSP, Marnissi_2013_ICIP, Chartrand_2014_ICASSP, Chouzenoux_2013_SIAM, Portilla_2007_SPIE, Zou_2008_AS, Chen_2014_TSP_Convergence, Wipf_2011_tinfo}. However, few of these methods maintain the convexity of the cost function. The prescription of non-convex penalties maintaining cost function convexity was pioneered by Blake, Zisserman, and Nikolova \cite{Blake_1987, Nikolova_1998_ICIP, Nikolova_2010_TIP, Nikolova_2011_chap}, and further developed in Refs.~\cite{Bayram_2015_SPL, Bayram_2016_TSP, Chen_2014_TSP_ncogs, Ding_2015_SPL, He_2016_MSSP, Lanza_2016_JMIV, MalekMohammadi_2016_TSP, Parekh_2016_SPL_ELMA, Selesnick_2014_TSP_MSC, Selesnick_SPL_2015}. These works rely on the presence of both strongly and weakly convex terms, which is also exploited in \cite{Mollenhoff_2015_SIAM}. 
The proposed penalty is expressed as a differentiable convex function subtracted from the standard penalty (i.e., $ \ell_1 $ norm). Previous works also use this idea \cite{Parekh_2015_SPL, Selesnick_2016_TSP_BISR, Parekh_2016_SPL_ELMA}. But the differentiable convex functions used therein are either separable \cite{Parekh_2015_SPL, Parekh_2016_SPL_ELMA} or sums of bivariate functions \cite{Selesnick_2016_TSP_BISR}. In parallel with the submission of this paper, Carlsson has also proposed using Moreau envelopes to prescribe non-trivial convex cost functions \cite{Carlsson_2016_arxiv}. While the approach in \cite{Carlsson_2016_arxiv} starts with a given non-convex cost function (e.g., with the $ \ell_0 $ pseudo-norm penalty) and seeks the convex envelope, our approach starts with the $ \ell_1 $-norm penalty and seeks a class of convexity-preserving penalties. Some forms of generalized TV are based on infimal convolution (related to the Moreau envelope) \cite{Setzer_2011_CMS, Chambolle_1997_NumerMath, Burger_2016, Becker_2014_JNCA_nomonth}. But these works propose convex penalties suitable for non-piecewise-constant signals, while we propose non-convex penalties suitable for piecewise-constant signals. \section{Total Variation Denoising} \begin{defn} Given $ y \in \mathbb{R}^N $ and $ {\lambda} > 0 $, total variation denoising is defined as \begin{align} \label{eq:tvd} \tvd(y ; {\lambda} ) & = \arg\min_{ x \in \mathbb{R}^N } \bigl\{ \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \norm{ D x }_1 \bigr\} \\ \label{eq:prox} & = \operatorname{prox}_{{\lambda} \norm{ D \, \cdot \, }_1 }(y) \end{align} where $ D $ is the $ (N-1) \times N $ matrix \begin{equation} \label{eq:defD} D = \begin{bmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{bmatrix}. \end{equation} \end{defn} As indicated in \eqref{eq:prox}, TV denoising is the proximity operator \cite{Combettes_2011_chap} of the function $ x \mapsto {\lambda} \norm{ D x }_1 $. It is convenient that TV denoising can be calculated exactly in finite-time \cite{Condat_2013, Dumbgen_2009, Johnson_2013_JCGS, Darbon_2006_JMIV_part1}. \section{Moreau Envelope} Before we define the non-differentiable non-convex penalty in Sec.~\ref{sec:pen}, we first define a differentiable convex function. We use the Moreau envelope from convex analysis \cite{Bauschke_2011}. \begin{defn} Let $ \alpha \ge 0 $. We define $ S_\alpha \colon \mathbb{R}^N \to \mathbb{R} $ as \begin{equation} \label{eq:defS} S_\alpha( x ) = \min_{ v \in \mathbb{R}^N } \bigl\{ \norm{ D v }_1 + \tfrac{ \alpha }{ 2 } \norm{ x - v }_2^2 \, \bigr\} \end{equation} where $ D $ is the first-order difference matrix \eqref{eq:defD}. \end{defn} If $ \alpha > 0 $, then $ S_\alpha $ is the \emph{Moreau envelope} of index $ \alpha^{-1} $ of the function $ x \mapsto \norm{ D x }_1 $. \begin{prop} \label{prop:calcS} The function $ S_\alpha $ can be calculated by \ifTwoColumn \begin{align} \label{eq:zeroS} S_0( x ) & = 0 \\ \nonumber S_\alpha(x) & = \norm{ D \tvd(x ; 1/\alpha) }_1 \\ & \hspace{3em} {} + \tfrac{ \alpha }{ 2 } \norm{ x - \tvd(x ; 1/\alpha) }_2^2 , \quad \alpha > 0. \end{align} \else \begin{align} \label{eq:zeroS} S_0( x ) & = 0 \\ S_\alpha(x) & = \norm{ D \tvd(x ; 1/\alpha) }_1 + \tfrac{ \alpha }{ 2 } \norm{ x - \tvd(x ; 1/\alpha) }_2^2 , \quad \alpha > 0. \end{align} \fi \end{prop} \begin{proof} For $ \alpha = 0 $: Setting $ \alpha = 0 $ and $ v = 0 $ in \eqref{eq:defS} gives \eqref{eq:zeroS}. 
For $ \alpha > 0 $: By the definition of TV denoising, the $ v \in \mathbb{R}^N $ minimizing the function in \eqref{eq:defS} is the TV denoising of $ x $, i.e., $ v\opt = \tvd( x , 1/\alpha ) $. \end{proof} \begin{prop} Let $ \alpha \ge 0 $. The function $ S_\alpha $ satisfies \begin{equation} \label{eq:boundS} 0 \le S_\alpha(x) \le \norm{ D x }_1, \ \ \forall x \in \mathbb{R}^N. \end{equation} \end{prop} \begin{proof} From \eqref{eq:defS}, we have $ S_\alpha( x ) \le \norm{ D v }_1 + (\alpha/2) \norm{ x - v }_2^2 $ for all $ v \in \mathbb{R}^N $. In particular, $ v = x $ leads to $ S_\alpha(x) \le \norm{ D x }_1 $. Also, $ S_\alpha(x) \ge 0 $ since $ S_\alpha(x) $ is defined as the minimum of a non-negative function. \end{proof} \begin{prop} Let $ \alpha \ge 0 $. The function $ S_\alpha $ is convex and differentiable. \end{prop} \begin{proof} It follows from Proposition 12.15 in Ref.~\cite{Bauschke_2011}. \end{proof} \begin{prop} \label{prop:Sgrad} Let $ \alpha \ge 0 $. The gradient of $ S_\alpha $ is given by \begin{align} \label{eq:gradS0} \nabla S_0(x) & = 0 \\ \label{eq:gradS} \nabla S_\alpha(x) & = \alpha \bigl( x - \tvd( x ; 1 / \alpha ) \bigr), \quad \alpha > 0 \end{align} where $ \tvd $ denotes total variation denoising \eqref{eq:tvd}. \end{prop} \begin{proof} Since $ S_\alpha $ is the Moreau envelope of index $ \alpha^{-1} $ of the function $ x \mapsto \norm{ D x }_1 $ when $ \alpha > 0 $, it follows by Proposition 12.29 in Ref.~\cite{Bauschke_2011} that \begin{equation} \nabla S_\alpha(x) = \alpha \bigl( x - \operatorname{prox}_{(1/\alpha)\norm{ D \, \cdot \, }_1 }(x) \bigr). \end{equation} This proximity operator is TV denoising, giving \eqref{eq:gradS}. \end{proof} \section{Non-convex Penalty} \label{sec:pen} To strongly induce sparsity of $ D x $, we define a non-convex generalization of the standard TV penalty. The new penalty is defined by subtracting a differentiable convex function from the standard penalty. \begin{defn} Let $ \alpha \ge 0 $. We define the penalty $ \psi_\alpha \colon \mathbb{R}^N \to \mathbb{R} $ as \begin{equation} \label{eq:defpsi} \psi_\alpha( x ) = \norm{ D x }_1 - S_\alpha( x ) \end{equation} where $ D $ is the matrix \eqref{eq:defD} and $ S_\alpha $ is defined by \eqref{eq:defS}. \end{defn} The proposed penalty is upper bounded by the standard TV penalty, which is recovered as a special case. \begin{prop} Let $ \alpha \ge 0 $. The penalty $ \psi_\alpha $ satisfies \begin{equation} \psi_0( x ) = \norm{ D x }_1, \ \ \forall x \in \mathbb{R}^N \end{equation} and \begin{equation} \label{eq:psibound} 0 \le \psi_\alpha(x) \le \norm{ D x }_1, \ \ \forall x \in \mathbb{R}^N. \end{equation} \end{prop} \begin{proof} It follows from \eqref{eq:zeroS} and \eqref{eq:boundS}. \end{proof} When a convex function is subtracted from another convex function [as in \eqref{eq:defpsi}], the resulting function may well be negative on part of its domain. Inequality \eqref{eq:psibound} states that the proposed penalty $ \psi_\alpha $ avoids this fate. This is relevant because the penalty function should be non-negative. Figures in the supplemental material show examples of the proposed penalty $ \psi_\alpha $ and the function $ S_\alpha $. \section{Enhanced TV Denoising} We define `Moreau-enhanced' TV denoising. If $ \alpha > 0 $, then the proposed penalty penalizes large amplitude values of $ D x $ less than the $ \ell_1 $ norm does (i.e., $ \psi_\alpha(x) \le \norm{D x}_1 $), hence it is less likely to underestimate jump discontinuities. 
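Proposition \ref{prop:calcS} also makes $ S_\alpha $ and $ \psi_\alpha $ straightforward to evaluate numerically. The following sketch is illustrative only; it assumes a user-supplied routine \texttt{tv\_denoise(y, lam)} that solves \eqref{eq:tvd} exactly (for instance, a wrapper around the algorithm of Ref.~\cite{Condat_2013}).
\begin{verbatim}
import numpy as np

def S_alpha(x, alpha, tv_denoise):
    # Moreau envelope S_alpha(x) of ||D.||_1, via the closed form above:
    # S_alpha(x) = ||D v||_1 + (alpha/2)||x - v||_2^2 with v = tvd(x; 1/alpha).
    if alpha == 0:
        return 0.0
    v = tv_denoise(x, 1.0/alpha)
    return np.sum(np.abs(np.diff(v))) + 0.5*alpha*np.sum((x - v)**2)

def psi_alpha(x, alpha, tv_denoise):
    # proposed non-convex penalty: psi_alpha(x) = ||D x||_1 - S_alpha(x)
    return np.sum(np.abs(np.diff(x))) - S_alpha(x, alpha, tv_denoise)
\end{verbatim}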
\begin{defn} Given $ y \in \mathbb{R}^N $, $ {\lambda} > 0 $, and $ \alpha \ge 0 $, we define Moreau-enhanced total variation denoising as \begin{equation} \label{eq:mtvd} \mtvd( y ; {\lambda} , \alpha) = \arg \min_{ x \in \mathbb{R}^N } \bigl\{ \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \psi_\alpha( x ) \bigr\} \end{equation} where $ \psi_\alpha $ is given by \eqref{eq:defpsi}. \end{defn} The parameter $ \alpha $ controls the non-convexity of the penalty. If $ \alpha = 0 $, then the penalty is convex and Moreau-enhanced TV denoising reduces to TV denoising. Greater values of $ \alpha $ make the penalty more non-convex. What is the greatest value of $ \alpha $ that maintains convexity of the cost function? The critical value is given by Theorem \ref{thm:cond}. \begin{theorem} \label{thm:cond} Let $ {\lambda} > 0 $ and $ \alpha \ge 0 $. Define $ F_\alpha \colon \mathbb{R}^N \to \mathbb{R} $ as \begin{equation} \label{eq:defF} F_\alpha(x) = \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \psi_\alpha( x ) \end{equation} where $ \psi_\alpha $ is given by \eqref{eq:defpsi}. If \begin{equation} \label{eq:convcond} 0 \le \alpha \le 1 / {\lambda} \end{equation} then $ F_\alpha $ is convex. If $ 0 \le \alpha < 1/{\lambda} $ then $ F_\alpha $ is strongly convex. \end{theorem} \begin{proof} We write the cost function as \ifTwoColumn \begin{align} F_\alpha(x) & = \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \norm{ D x }_1 - {\lambda} S_\alpha( x ) \\ \nonumber & = \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \norm{ D x }_1 \\ & \hspace{3em} {} - {\lambda} \min_{ v \in \mathbb{R}^N } \bigl\{ \norm{ D v }_1 + \tfrac{ \alpha }{ 2 } \norm{ x - v }_2^2 \, \bigr\} \\ \nonumber & = \max_{ v \in \mathbb{R}^N } \bigl\{ \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \norm{ D x }_1 \\ & \hspace{6em} {} - {\lambda} \norm{ D v }_1 - \tfrac{ {\lambda} \alpha }{ 2 } \norm{ x - v }_2^2 \, \bigr\} \\ & = \max_{ v \in \mathbb{R}^N } \bigl\{ \tfrac{1}{2} ( 1 - {\lambda} \alpha ) \norm{ x }_2^2 + {\lambda} \norm{ D x }_1 + g(x, v) \, \bigr\} \\ & = \tfrac{1}{2} ( 1 - {\lambda} \alpha ) \norm{ x }_2^2 + {\lambda} \norm{ D x }_1 + \max_{ v \in \mathbb{R}^N } g(x, v) \end{align} \else \begin{align} F_\alpha(x) & = \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \norm{ D x }_1 - {\lambda} S_\alpha( x ) \\ & = \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \norm{ D x }_1 - {\lambda} \min_{ v \in \mathbb{R}^N } \bigl\{ \norm{ D v }_1 + \tfrac{ \alpha }{ 2 } \norm{ x - v }_2^2 \, \bigr\} \\ & = \max_{ v \in \mathbb{R}^N } \bigl\{ \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \norm{ D x }_1 - {\lambda} \norm{ D v }_1 - \tfrac{ {\lambda} \alpha }{ 2 } \norm{ x - v }_2^2 \, \bigr\} \\ & = \max_{ v \in \mathbb{R}^N } \bigl\{ \tfrac{1}{2} ( 1 - {\lambda} \alpha ) \norm{ x }_2^2 + {\lambda} \norm{ D x }_1 + g(x, v) \, \bigr\} \\ & = \tfrac{1}{2} ( 1 - {\lambda} \alpha ) \norm{ x }_2^2 + {\lambda} \norm{ D x }_1 + \max_{ v \in \mathbb{R}^N } g(x, v) \end{align} \fi where $ g(x, v) $ is affine in $ x $. The last term is convex as it is the point-wise maximum of a set of convex functions. Hence, $ F_\alpha $ is a convex function if $ 1 - {\lambda} \alpha \ge 0 $. If $ 1 - {\lambda} \alpha > 0 $, then $ F_\alpha $ is strongly convex (and strictly convex). \end{proof} \section{Algorithm} \label{sec:alg} \begin{prop} Let $ y \in \mathbb{R}^N $, $ {\lambda} > 0 $, and $ 0 < \alpha < 1/{\lambda} $. 
Then $ x\iter{k} $ produced by the iteration \begin{subequations} \label{eq:alg} \begin{align} z\iter{k} & = y + {\lambda} \alpha \bigl( x\iter{k} - \tvd( x\iter{k} ; 1 / \alpha ) \bigr) \\ x\iter{k+1} & = \tvd( z\iter{k} ; {\lambda} ). \end{align} \end{subequations} converges to the solution of the Moreau-enhanced TV denoising problem \eqref{eq:mtvd}. \end{prop} \begin{proof} If the cost function \eqref{eq:defF} is strongly convex, then the minimizer can be calculated using the forward-backward splitting (FBS) algorithm \cite{Bauschke_2011, Combettes_2011_chap}. This algorithm minimizes a function of the form \begin{equation} \label{eq:FBSd} F(x) = f_1(x) + f_2(x) \end{equation} where both $ f_1 $ and $ f_2 $ are convex and $ \nabla f_1 $ is additionally Lipschitz continuous. The FBS algorithm is given by \begin{subequations} \label{eq:fbs} \begin{align} z\iter{k} & = x\iter{k} - \mu \bigl[ \nabla f_1( x\iter{k} ) \bigr] \\ x\iter{k+1} & = \arg \min_x \big\{ \tfrac{1}{2} \norm{ z\iter{k} - x }_2^2 + \mu f_2( x ) \big\} \end{align} \end{subequations} where $ 0 < \mu < 2 / \rho $ and $ \rho $ is the Lipschitz constant of $ \nabla f_1 $. The iterates $ x\iter{k} $ converge to a minimizer of $ F $. To apply the FBS algorithm to the proposed cost function \eqref{eq:defF}, we write it as \begin{align} F_\alpha(x) & = \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \psi_\alpha( x ), \\ & = \tfrac{1}{2} \norm{ y - x }_2^2 + {\lambda} \norm{ D x }_1 - {\lambda} S_\alpha( x ) \\ & = f_1(x) + f_2(x) \end{align} where \begin{subequations} \label{eq:deff12} \begin{align} \label{eq:deff1} f_1(x) & = \tfrac{1}{2} \norm{ y - x }_2^2 - {\lambda} S_\alpha( x ) \\ \label{eq:deff2} f_2(x) & = {\lambda} \norm{ D x }_1. \end{align} \end{subequations} The gradient of $ f_1 $ is given by \begin{align} \nabla f_1(x) & = x - y - {\lambda} \nabla S_\alpha( x ) \\ & = x - y - {\lambda} \alpha \bigl( x - \tvd( x ; 1 / \alpha ) \bigr) \end{align} using Proposition \ref{prop:Sgrad}. Subtracting $ S_\alpha $ from $ f_1 $ does not increase the Lipschitz constant of $ \nabla f_1 $, the value of which is 1. Hence, we may set $ 0 < \mu < 2 $. Using \eqref{eq:deff12}, the FBS algorithm \eqref{eq:fbs} becomes \ifTwoColumn \begin{subequations} \begin{align} \nonumber z\iter{k} & = x\iter{k} - \mu \bigl[ x\iter{k} - y \\ & \hspace{4em} {} - {\lambda} \alpha \bigl( x\iter{k} - \tvd( x\iter{k} ; 1 / \alpha ) \bigr) \bigr] \\ \label{eq:xupdate} x\iter{k+1} & = \arg \min_x \big\{ \tfrac{1}{2} \norm{ z\iter{k} - x }_2^2 + \mu {\lambda} \norm{ D x }_1 \big\}. \end{align} \end{subequations} \else \begin{subequations} \begin{align} z\iter{k} & = x\iter{k} - \mu \bigl[ x\iter{k} - y - {\lambda} \alpha \bigl( x\iter{k} - \tvd( x\iter{k} ; 1 / \alpha ) \bigr) \bigr] \\ \label{eq:xupdate} x\iter{k+1} & = \arg \min_x \big\{ \tfrac{1}{2} \norm{ z\iter{k} - x }_2^2 + \mu {\lambda} \norm{ D x }_1 \big\}. \end{align} \end{subequations} \fi Note that \eqref{eq:xupdate} is TV denoising \eqref{eq:tvd}. Using the value $ \mu = 1 $ gives iteration \eqref{eq:alg}. (Experimentally, we found this value yields fast convergence.) \end{proof} Each iteration of \eqref{eq:alg} entails solving two standard TV denoising problems. In this work, we calculate TV denoising using the fast exact C language program by Condat \cite{Condat_2013}. Like the iterative shrinkage/thresholding algorithm (ISTA) \cite{Daubechies_2004, Fig_2003_TIP}, algorithm \eqref{eq:alg} can be accelerated in various ways. 
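In Python, iteration \eqref{eq:alg} takes only a few lines once a TV denoising routine is available. The following sketch uses the \texttt{tv\_denoise} routine of the previous listing; the function name \texttt{moreau\_enhanced\_tvd} and the parameter values in the usage comments are ours.
\begin{verbatim}
import numpy as np

def moreau_enhanced_tvd(y, lam, alpha, n_iter=50):
    # FBS iteration (alg) with mu = 1 for problem (mtvd).
    # Requires 0 < alpha < 1/lam so that F_alpha is strongly convex.
    assert 0 < alpha < 1.0 / lam
    x = np.zeros_like(y)                  # initialization x^(0) = 0
    for _ in range(n_iter):
        z = y + lam * alpha * (x - tv_denoise(x, 1.0 / alpha))
        x = tv_denoise(z, lam)
    return x

# Illustrative usage on a noisy piecewise-constant signal:
#   y = x_true + 0.5 * np.random.randn(x_true.size)
#   lam = np.sqrt(y.size) * 0.5 / 4
#   x_hat = moreau_enhanced_tvd(y, lam, alpha=0.7 / lam)
\end{verbatim}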
We suggest not setting $ \alpha $ too close to the critical value $ 1/{\lambda} $ because the FBS algorithm generally converges faster when the cost function is more strongly convex ($ \alpha < 1 $). In summary, the proposed Moreau-enhanced TV denoising method comprises the steps: \begin{enumerate} \item Set the regularization parameter $ {\lambda} $ ($ {\lambda} > 0 $). \item Set the non-convexity parameter $ \alpha $ ($ 0 \le \alpha < 1/{\lambda} $). \item Initialize $ x\iter{0} = 0 $. \item Run iteration \eqref{eq:alg} until convergence. \end{enumerate} \section{Optimality Condition} To avoid terminating the iterative algorithm too early, it is useful to verify convergence using an optimality condition. \begin{prop} \label{prop:opt} Let $ y \in \mathbb{R}^N $, $ {\lambda} > 0 $, and $ 0 < \alpha < 1/{\lambda} $. If $ x $ is a solution to \eqref{eq:mtvd}, then \begin{equation} \label{eq:opt} \bigl[ C \bigl( (x - y)/{\lambda} + \alpha ( \tvd( x ; 1 / \alpha ) - x ) \bigr) \bigr]_n \in \sign( [ D x ]_n ) \end{equation} for $ n = 0, \dots, N-1 $, where $ C \in \mathbb{R}^{ (N-1) \times N } $ is given by \begin{equation} \label{eq:defC} C_{m, n} = \begin{cases} 1, \ & m \ge n \\ 0, & m < n, \end{cases} \quad \text{i.e.,} \quad [C x]_n = \sum_{ m \le n } x_m \end{equation} and $ \sign $ is the set-valued signum function \begin{equation} \sign(t) = \begin{cases} \{ -1 \}, \ \ & t < 0 \\ [-1, 1], & t = 0 \\ \{ 1 \}, & t > 0. \end{cases} \end{equation} \end{prop} According to \eqref{eq:opt}, if $ x \in \mathbb{R}^N $ is a minimizer, then the points $ ( [Dx]_n, u_n) \in \mathbb{R}^2 $ must lie on the graph of the signum function, where $ u_n $ denotes the value on the left-hand side of \eqref{eq:opt}. Hence, the optimality condition can be depicted as a scatter plot. Figures in the supplemental material show how the points in the scatter plot converge to the signum function as the algorithm \eqref{eq:alg} progresses. \begin{proof}[Proof of Proposition \ref{prop:opt}] A vector $ x $ minimizes a convex function $ F $ if $ 0 \in \partial F( x ) $ where $ \partial F(x) $ is the subdifferential of $ F $ at $ x $. The subdifferential of the cost function \eqref{eq:defF} is given by \begin{equation} \partial F_\alpha( x ) = x - y - {\lambda} \nabla S_\alpha(x) + \partial ( {\lambda} \norm{ D \, \cdot \, }_1 )(x) \end{equation} which can be written as \ifTwoColumn \begin{multline} \partial F_\alpha( x ) = \{ x - y - {\lambda} \nabla S_\alpha(x) + {\lambda} D\tr \! u \\ {} : u_n \in \sign( [ D x ]_n ) , \, u \in \mathbb{R}^{N-1} \}. \end{multline} \else \begin{equation} \partial F_\alpha( x ) = \{ x - y - {\lambda} \nabla S_\alpha(x) + {\lambda} D\tr \! u : u_n \in \sign( [ D x ]_n ) , \, u \in \mathbb{R}^{N-1} \}. \end{equation} \fi Hence, the condition $ 0 \in \partial F_\alpha( x ) $ can be written as \ifTwoColumn \begin{multline} (y - x)/{\lambda} + \nabla S_\alpha(x) \\ {} \in \{ D\tr \! u : u_n \in \sign( [ D x]_n ) , \, u \in \mathbb{R}^{N-1} \}. \end{multline} \else \begin{equation} (y - x)/{\lambda} + \nabla S_\alpha(x) \in \{ D\tr \! u : u_n \in \sign( [ D x]_n ) , \, u \in \mathbb{R}^{N-1} \}. \end{equation} \fi Let $ C $ be a matrix of size $ (N-1) \times N $ such that $ C D\tr = -I $, e.g., \eqref{eq:defC}. It follows that the condition $ 0 \in \partial F_\alpha( x ) $ implies that \begin{equation} \bigl[ C \bigl( (x - y)/{\lambda} - \nabla S_\alpha(x) \bigr) \bigr]_n \in \sign( [ D x]_n ) \end{equation} for $ n = 0, \dots, N-1 $. Using Proposition \ref{prop:Sgrad} gives \eqref{eq:opt}. 
\end{proof} \section{Example} \begin{figure}[t] \centering \includegraphics{example1_singlecolumn} \caption{ Total variation denoising using three different penalties. (The dashed line is the true noise-free signal.) } \label{fig:example1} \end{figure} This example applies TV denoising to the noisy piecewise constant signal shown in Fig.~\ref{fig:example1}(a). This is the `blocks' signal (length $ N = 256 $) generated by the Wavelab \cite{wavelab} function \texttt{MakeSignal} with additive white Gaussian noise ($ \sigma = 0.5 $). We set the regularization parameter to $ {\lambda} = \sqrt{N} \sigma /4 $ following a discussion in Ref.~\cite{Dumbgen_2009}. For Moreau-enhanced TV denoising, we set the non-convexity parameter to $ \alpha = 0.7 / {\lambda} $. Figure~\ref{fig:example1} shows the result of TV denoising with three different penalties. In each case, a \emph{convex} cost function is minimized. Figure~\ref{fig:example1}(b) shows the result using standard TV denoising (i.e., using the $ \ell_1 $-norm). This denoised signal consistently underestimates the amplitudes of jump discontinuities, especially those occurring near other jump discontinuities of opposite sign. Figure~\ref{fig:example1}(c) shows the result using a separable non-convex penalty \cite{Selesnick_SPL_2015}. This method can use any non-convex scalar penalty satisfying a prescribed set of properties. Here we use the minimax-concave (MC) penalty~\cite{Zhang_2010_AnnalsStat, Bayram_2015_SPL} with non-convexity parameter set to maintain cost function convexity. This result significantly improves the root-mean-square error (RMSE) and mean-absolute-deviation (MAE), but still underestimates the amplitudes of jump discontinuities. Moreau-enhanced TV denoising, shown in Fig.~\ref{fig:example1}(d), further reduces the RMSE and MAE and more accurately estimates the amplitudes of jump discontinuities. The proposed non-separable non-convex penalty avoids the consistent underestimation of discontinuities seen in Figs.~\ref{fig:example1}(b) and \ref{fig:example1}(c). To further compare the denoising capability of the considered penalties, we calculate the average RMSE as a function of the noise level. We let the noise standard deviation span the interval $ 0.2 \le \sigma \le 1.0 $. For each $ \sigma $ value, we calculate the average RMSE of 100 noise realizations. Figure~\ref{fig:rmse} shows that the proposed penalty yields the lowest average RMSE for all $ \sigma \ge 0.4 $. However, at low noise levels, separable convexity-preserving penalties \cite{Selesnick_SPL_2015} perform better than the proposed non-separable convexity-preserving penalty. \begin{figure}[t] \centering \includegraphics{example2_rmse} \caption{ TV denoising using four penalties: RMSE as a function of noise level. } \label{fig:rmse} \end{figure} \section{Conclusion} This paper demonstrates the use of the Moreau envelope to define a non-separable non-convex TV denoising penalty that maintains the convexity of the TV denoising cost function. The basic idea is to subtract from a convex penalty its Moreau envelope. This idea should also be useful for other problems, e.g., analysis tight-frame denoising \cite{Parekh_2015_SPL}. Separable convexity-preserving penalties \cite{Selesnick_SPL_2015} outperformed the proposed one at low noise levels in the example. It is yet to be determined if a more general class of convexity-preserving penalties can outperform both across all noise levels. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} The gravitational scattering of classical objects at large impact parameter $b$ is relevant for the study of the inspiral phase of black-hole binaries since it can be used to determine the parameters of the Effective-One-Body description (see~\cite{Damour:2016gwp} and references therein). For this reason, gravitational scattering has been at the centre of renewed attention and has been recently investigated using a variety of techniques, including the use of quantum field theory (QFT) amplitudes to extract the relevant classical physics~\cite{Goldberger:2004jt,Melville:2013qca,Goldberger:2016iau,Luna:2016idw,Luna:2017dtq,Bjerrum-Bohr:2018xdl,Cheung:2018wkq,Kosower:2018adc,Bern:2019nnu,KoemansCollado:2019ggb,Cristofoli:2019neg,Bern:2019crd,Kalin:2019rwq,Bjerrum-Bohr:2019kec,Kalin:2019inp,Damour:2019lcq,Cristofoli:2020uzm,Kalin:2020mvi,Kalin:2020lmz,Kalin:2020fhe,Mogull:2020sak,Huber:2020xny}. Here we will focus in particular on the eikonal approach~\cite{Amati:1987wq, Amati:1987uf,Muzinich:1987in, Sundborg:1988tb}, where the classical gravitational dynamics is derived from standard QFT amplitudes by focusing on the terms that exponentiate in the eikonal phase $e^{2i \delta}$. The Post-Minkowskian (PM) expansions writes $\delta$ as a perturbative series in the Newton constant $G$ at large values of $b$ and the state-of-the-art results determine the real part of the 3PM ({\em i.e.} 2-loop) eikonal $\operatorname{Re} 2\delta_2$ (or the closely related scattering angle) and to some extent the imaginary part, both in standard GR~\cite{Amati:1990xe,Bern:2019nnu,Bern:2019crd,Cheung:2020gyp,Kalin:2020fhe,Damour:2020tta} and various supersymmetric generalisations~\cite{DiVecchia:2019kta,Bern:2020gjj,Parra-Martinez:2020dzs,DiVecchia:2020ymx}. In this letter we expand on the approach discussed in~\cite{Amati:1990xe,DiVecchia:2020ymx} where the relation between the real and the imaginary part of $\delta_2$ was used to derive the 3PM scattering angle in the ultrarelativistic limit and to show that it is a universal feature of all gravitational theories in the two derivative approximation. Furthermore, it was shown in~\cite{DiVecchia:2020ymx} for ${\cal N}=8$ supergravity that taking into account the full soft region in the loop integrals was crucial to obtain a smooth interpolation between the behaviour of $\delta_2$ in the non-relativistic, {\em i.e.} Post-Newtonian (PN), regime and the ultrarelativistic (or massless) one. The additional contributions coming from the full soft region had the feature of contributing half-integer terms in the PN expansion and were therefore interpreted as radiation-reaction (RR) contributions. This connection was further confirmed in~\cite{Damour:2020tta} by Damour, who used a linear response relation earlier derived in~\cite{Bini:2012ji} to connect these new RR terms to the loss of angular momentum in the collision. In this way the result of~\cite{DiVecchia:2020ymx} was extended to the case of General Relativity~\cite{Damour:2020tta}. In this paper we argue that there is actually a direct relation between the RR and the much studied soft-bremsstrahlung limits. We claim that the real part of the RR eikonal at 3PM (indicated by ${\rm Re}\, 2 \delta_2^{(rr)}$) is simply related to the infrared divergent contribution of its imaginary part $({\rm Im}\, 2 \delta_2)$. 
This relation holds at all energies and reads \begin{equation} \lim_{\epsilon\to 0} {\rm Re} \,2 \delta_2^{(rr)} = -\lim_{\epsilon\to 0}\left[ \pi \epsilon ({\rm Im}\,2 \delta_2)\right] \;, \label{1.5} \end{equation} where, as usual, $\epsilon = \frac{4-D}{2}$ is the dimensional regularisation parameter. On the other hand, there is a simple connection (see e.g.~\cite{Addazi:2019mjh}) between the infrared divergent imaginary part of $\delta_2$ and the so-called zero-frequency limit~\cite{Smarr:1977fy} of the bremsstrahlung spectrum reading: \begin{equation} \lim_{\epsilon\to 0} \left[ - 2\epsilon ({\rm Im}\,2 \delta_2)\right] = \frac{d E^{rad}}{2 \hbar d \omega}(\omega \to 0) ~ \Rightarrow~ \lim_{\epsilon\to 0}{\rm Re} \,2 \delta_2^{(rr)} = \frac{\pi}{4 \hbar} \frac{d E^{rad}}{d \omega}(\omega \to 0) \;, \label{ZFL} \end{equation} so that, in the end, RR gets directly related to soft bremsstrahlung. We stress that all (massless) particles can contribute to the r.h.s. of \eqref{ZFL} and therefore to the RR. This result was first noticed in the ${\cal N}=8$ supergravity setup of~\cite{Caron-Huot:2018ape,Parra-Martinez:2020dzs} by using the results of~\cite{DiVecchia:2020ymx,DiVecchia:2021bdo}, see also~\cite{Herrmann:2021tct}, where the full 3PM eikonal is derived by a direct computation of the 2-loop amplitude describing the scattering of two supersymmetric massive particle. Here we give an interpretation of this connection and conjecture its general validity in gravity theories at the 3PM level (the first non trivial one) by reconstructing the infrared divergent part of ${\rm Im} \, 2 \delta_2$ from the three-body discontinuity involving the two massive particles and a massless particle. The building block is of course the $2 \rightarrow 3$ five-point tree-level amplitude where, for our purposes, it is sufficient to keep only the leading classical divergent term in the soft limit (the so-called Weinberg term) of the massless particle. When focusing on pure GR, the only massless particle that can be involved in the three-particle cut mentioned above is the graviton. We will see that, by using Eq. \eqref{1.5}, we reproduce the deflection angle recently derived in~\cite{Damour:2020tta} on the basis of a linear-response formula and of a lowest-order calculation of the angular momentum flux. In the massive ${\cal N}=8$ case, one needs to consider, in addition to the graviton, the contributions of the relevant vectors and scalar fields (including the dilaton). Once all massless particles that can appear in the three-particle cut are taken into account, one obtains~\eqref{3.6} which, as already mentioned, satisfies~\eqref{1.5}. The basic idea underlying all cases is that the calculation of ${\rm Im}\, 2 \delta_2$ from sewing tree-level, on shell, inelastic amplitudes is far simpler than the derivation of the full two-loop elastic amplitude even when focusing on just the classical contributions. Both for GR and for $\mathcal N=8$, the infrared divergent piece of $\delta_2$ can be equivalently obtained exploiting the exponentiation of infrared divergences in momentum space for the elastic amplitude itself (details will be presented elsewhere). The arguments supporting~\eqref{1.5} appear to be valid within a large class of gravitational theories and so this equation provides a direct, general way to calculate the RR contributions at the 3PM level. It remains to be seen whether this approach can be generalized, and in which form, beyond 3PM. The paper is organized as follows. 
In Sect.~\ref{softm} we introduce our kinematical set-up for the relevant elastic ($2 \rightarrow 2$) and inelastic ($2 \rightarrow 3$) processes and discuss the standard soft limit of the latter in momentum space. In Sect.~\ref{RRIR} we present the empirical connection between $\operatorname{Re} 2\delta_2^{(rr)}$ and the IR divergent part of $\operatorname{Im} 2\delta_2$ in the maximally supersymmetric case. Using unitarity and analyticity of the scattering amplitude, we provide arguments in favour of its general validity. We also outline the logic of the calculations that follow. In Sect.~\ref{softb} we transform the soft-limit results of Sect.~\ref{softm} to impact-parameter space in the large-$b$ limit. In Sect.~\ref{probability} we use these to compute the divergent part of $\operatorname{Im} 2 \delta_2$ and, through our connection, the RR terms in $\operatorname{Re} 2 \delta_2$. This is first done for the case of ${\cal N}=8$ supergravity, where we recover the result of~\cite{DiVecchia:2020ymx}, and then for Einstein's gravity, reproducing the result of~\cite{Damour:2020tta}, and for Jordan-Brans-Dicke theory. \section{Soft Amplitudes in Momentum Space} \label{softm} Let us start by better defining the processes under consideration. We shall be interested in the scattering of two massive scalar particles in $D=4-2\epsilon$ dimensions, with or without the additional emission of a soft massless quantum. For GR, we thus consider minimally coupled scalars with masses $m_1$, $m_2$ in $4-2\epsilon$ dimensions. For $\mathcal N=8$ supergravity, that can be obtained by compactifying six directions in ten-dimensional type II supergravity, we instead choose incoming Kaluza--Klein (KK) scalars whose $(10-2\epsilon)$-dimensional momenta read as follows: \begin{equation} \label{eq:kin10D} P_1= (p_1;0,0,0,0,0,m_1)\,,\qquad P_2 = (p_2;0,0,0,0,m_2 \sin\phi,m_2 \cos\phi)\;, \end{equation} where the last six entries refer to the compact KK directions and provide $p_1$, $p_2$ with the desired effective masses $m_1$, $m_2$ in $4-2\epsilon$ dimensions. The angle $\phi$ thus describes the relative orientation between the KK momenta, We work in a centre-of-mass frame and for our purposes it is convenient to regard the amplitudes as functions of $\bar{p}$, encoding the classical momentum of the massive particles, the transferred momentum $q$ (which is related to the impact parameter after Fourier transform) and the emitted momentum $k$. We thus parametrise the momenta of the incoming states as follows, \begin{equation} \label{eq:kin} \begin{aligned} p_1&= (E_1,\vec{p}\,) = \bar{p}_1 - a q + c k \,, &&\bar{p}_1= (E_1,0,\ldots,0, \bar{p}\,) \,, \\ p_2&= (E_2,-\vec{p}\,) = \bar{p}_2 + a q - c k \,, &&\bar{p}_2= (E_2,0,\ldots,0, -\bar{p}\,) \,, \end{aligned} \end{equation} while the outgoing\footnote{We treat all vectors as formally ingoing.} states are a soft particle of momentum $k$ and massive states with momenta \begin{equation} \label{eq:kin2} k_1= -\bar{p}_1 - (1-a) q - c k \,,\qquad \ \; \, k_2= -\bar{p}_2 + (1-a) q - (1-c) k \,. \end{equation} We singled out the direction of the classical momentum $\bar{p}$, while $q$ is non-trivial only along the $2-2\epsilon$ space directions orthogonal to $\bar{p}_i$. In the elastic case of course $k=0=c$ and we have $a=1/2$. For the inelastic amplitudes one can fix $a$ and $c$ by imposing the on-shell conditions and using $\bar{p}_i q=0$, but we will not need their explicit expression in what follows. 
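As a quick consistency check of this parametrisation, one may verify symbolically that the five (all-ingoing) momenta in \eqref{eq:kin} and \eqref{eq:kin2} sum to zero for arbitrary $a$ and $c$. The following sympy sketch does so; since the relations are linear, each four-vector is represented by a single symbol, and all symbol names are ours.
\begin{verbatim}
import sympy as sp

pb1, pb2, q, k, a, c = sp.symbols('pb1 pb2 q k a c')

p1 = pb1 - a * q + c * k
p2 = pb2 + a * q - c * k
k1 = -pb1 - (1 - a) * q - c * k
k2 = -pb2 + (1 - a) * q - (1 - c) * k

# With all momenta treated as ingoing, their sum must vanish identically.
print(sp.simplify(p1 + p2 + k1 + k2 + k))    # prints 0
\end{verbatim}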
We shall now collect the tree-level amplitudes that will enter our calculation of $\operatorname{Im}\, 2\delta_2$ via unitarity, focusing for the most part on $\mathcal N=8$ and commenting along the way on small amendments that are needed to obtain the GR amplitudes. The simplest building block for our analysis of $\mathcal N=8$ supergravity is the elastic tree-level amplitude \begin{equation} A_{tree} \simeq - \frac{32\pi G m_1^2 m_2^2 (\sigma - \cos \phi)^2}{t}\,,\qquad\text{with } \sigma = - \frac{p_1p_2}{m_1 m_2}\,, \label{2.1} \end{equation} where we retained only the terms with the pole at $t=-q^2 = 0$, since we restrict our attention to long-range effects. When $\phi=\frac{\pi}{2}$, the KK momenta are along orthogonal directions, and, in this case, the pole at $t = 0$ corresponds to the exchange of the graviton and of the dilaton that are coupled universally to all massive states with the following three-point on-shell amplitudes in $D=4$: \begin{equation} A_3^{\mu \nu} =-i \kappa ( p_j^\mu k_j^\nu + p_j^\nu k_j^\mu )\,,\qquad A_3^{dil} = -i \kappa \sqrt{2}\, m_j^2\;, \label{2.2} \end{equation} with $j=1,2$ and $\kappa = \sqrt{8 \pi G}$. Using the vertices~\eqref{2.2} and standard propagators, the graviton and the dilaton exchanges yield \begin{equation} A^{gr}_{tree} \simeq - \frac{16\pi G m_1^2 m_2^2 (2 \sigma^2 - 1)}{t} ~,\quad A^{dil}_{tree} \simeq -\frac{16\pi G m_1^2 m_2^2 }{t} ~. \label{2.1b} \end{equation} Their sum reproduces~\eqref{2.1} for $\phi=\frac{\pi}{2}$. For generic $\phi$, in addition to the couplings mentioned above, we also need to consider massless vectors and scalars coming from the KK compactification of the ten dimensional graviton. We have a scalar and a vector whose three-point amplitudes involving the massive fields are \begin{equation} \begin{aligned} A_3^\mu &= - i \kappa m_1 \sqrt{2} (p_1-k_1)^\mu \,, \qquad \qquad \; A_3 = - i \kappa 2 m_1^2 \;, \\ A_3^\mu &= - i \kappa m_2 \sqrt{2} (p_2-k_2)^\mu \cos\phi\,, \qquad A_3 = - i \kappa 2 m_2^2 \cos^2 \phi \;. \end{aligned} \label{2.3} \end{equation} Including also the contribution of these states one can reproduce the tree-level amplitude~\eqref{2.1} for $\phi=0$, which provides a useful cross-check for the normalization of the three-point amplitudes. The particle with mass $m_2$ couples to another vector and another scalar with a strength depending to the other component of the KK momentum, \begin{equation} B_3^\mu = - i \kappa m_2 \sqrt{2} (p_2-k_2)^\mu \sin\phi\,,\qquad B_3 = - i \kappa 2 m_2^2 \sin^2 \phi \,. \label{2.3b} \end{equation} There is also an extra scalar related to the off-diagonal components of the internal metric whose coupling is proportional to $\cos\phi\sin\phi$; here we will not use this coupling as we will mainly focus on the cases $\phi=0$ and $\phi=\frac{\pi}{2}$. Let us now move to the inelastic, $2 \to 3$ amplitude. As stressed in the introduction, we can restrict ourselves to the leading soft term that diverges as $k ^{-1}$ for $k\to 0$. It is given by the product of the elastic tree-level amplitude times a soft factor. 
For instance, the leading term for the emission of a soft graviton is \cite{Weinberg:1965nx}: \begin{equation} \label{eq:grnde} A_5^{\mu\nu} \simeq {\kappa} \left(\frac{p_1^\mu p_1^\nu}{p_1 k} + \frac{k_1^\mu k_1^\nu}{k_1 k} + \frac{p_2^\mu p_2^\nu}{p_2 k} + \frac{k_2^\mu k_2^\nu}{k_2 k} \right) A_{tree}\,, \end{equation} while in the case of the dilaton one finds\footnote{We neglect possible terms proportional to $\delta(\omega)$ which play no role in the present discussion.} \cite{DiVecchia:2015jaq} \begin{equation} \label{eq:dinde} A_5^{dil} \simeq -\frac{\kappa}{\sqrt{2}} \left(\frac{m_1^2}{p_1 k} + \frac{m_1^2}{k_1 k} + \frac{m_2^2}{p_2 k} + \frac{m_2^2}{k_2 k} \right) A_{tree}\,. \end{equation} We now use~\eqref{eq:kin} and~\eqref{eq:kin2} and keep the leading terms in the soft limit $k\to 0$. By further keeping only the {\em classical} contributions, which are captured by the linear terms in the $q \to 0$ limit, one obtains \begin{equation} A_5^{\mu \nu} \simeq \kappa \left[ \left(\frac{\bar{p}_1^\mu \bar{p}_1^\nu}{(\bar{p}_1k)^2} - \frac{\bar{p}_2^\mu \bar{p}_2^\nu}{(\bar{p}_2k)^2} \right) (qk) - \frac{\bar{p}_1^\mu q^\nu+ \bar{p}_1^\nu q^\mu}{(\bar{p}_1k) } + \frac{\bar{p}_2^\mu q^\nu+ \bar{p}_2^\nu q^\mu}{(\bar{p}_2k) } \right] A_{tree} \label{2.4} \end{equation} for the graviton and \begin{equation} \label{eq:dfe} A_5^{dil} \simeq -\frac{\kappa}{\sqrt{2}} \left( \frac{m_1^2 (qk)}{(\bar{p}_1k)^2} - \frac{m_2^2 (qk)}{(\bar{p}_2k)^2} \right)A_{tree} \end{equation} for the dilaton. From now on we focus for simplicity on the case $\phi=\frac{\pi}{2}$ and so only the first line of~\eqref{2.3} is non-trivial; together with the contribution of~\eqref{2.3b} we need to consider the emission of the two vectors and of the two scalars. For the soft amplitudes we find: \begin{equation} A_5^\mu \simeq \kappa m_1 \sqrt{2} \left( \frac{\bar{p}_1^\mu (qk)}{(\bar{p}_1k)^2} - \frac{q^\mu}{\bar{p}_1k}\right) A_{tree} \,, \quad B_5^\mu \simeq \kappa m_2 \sqrt{2} \left(- \frac{\bar{p}_2^\mu (qk)}{(\bar{p}_2k)^2} + \frac{q^\mu}{\bar{p}_2k}\right)A_{tree} \; , \label{2.5} \end{equation} \begin{equation} A_5 \simeq \kappa m_1^2 \frac{(qk)}{(\bar{p}_1k)^2} A_{tree} \,,\qquad B_5 \simeq - \kappa m_2^2 \frac{(qk)}{(\bar{p}_2k)^2} A_{tree} \;. \label{2.6} \end{equation} \section{Radiation Reaction from Infrared Singularities} \label{RRIR} In this section we briefly present our arguments for the validity, at two-loop level and for generic gravity theories, of the relation \eqref{1.5}. We leave a more detailed discussion to a longer paper \cite{DiVecchia:2021bdo}. Our starting point is an empirical observation made in the context of a recent calculation in ${\cal{N}}=8$ supergravity \cite{DiVecchia:2020ymx} whose set-up has been recalled in the previous section. An interesting outcome of that calculation (made for $\cos \phi =0$) was the identification of a radiation-reaction contribution to the real part of the (two loop) eikonal phase, given by \begin{equation} {\rm Re}\,2 \delta_2^{(rr)} = \frac{16 G^3 m_1^2 m_2^2 \sigma^4}{\hbar b^2 (\sigma^2-1)^2} \Bigg[ \sigma^2 + \frac{\sigma (\sigma^2-2)}{(\sigma^2-1)^{\frac{1}{2}}}\cosh^{-1} (\sigma)\Bigg] + {\cal O}(\epsilon)\; . \label{Redel2} \end{equation} This contribution emerges from the inclusion of radiation modes in the loop integrals and gives rise to half-integer-PN corrections to the deflection angle. 
Considering the full massive ${\cal N}=8$ result~\cite{DiVecchia:2021bdo}, we then noticed a simple relation between the contribution in eq.~\eqref{Redel2} and two terms appearing in the imaginary part of the same eikonal phase so that, in the full expression for $\delta_2^{(rr)}$, there are three terms that appear in the following combination: \begin{equation} \left[ 1 + \frac{i}{\pi} \left( - \frac{1}{\epsilon} + \log(\sigma^2-1) \right) \right] {\rm Re}\,2\delta_2^{(rr)}. \label{comb2} \end{equation} The two imaginary contributions to $2 \delta_2^{(rr)}$ that appear in \eqref{comb2} are an IR-singular term, which captures the full contribution proportional to $\epsilon^{-1}$, and a $\log(\sigma^2-1)$ term, which captures the branch cuts starting at $\sigma=\pm1$. Let us now examine whether this feature is to be regarded as an accident of the maximally supersymmetric theory or rather as a more general fact. As we shall discuss below and will explain in more detail in \cite{DiVecchia:2021bdo}, the precise combination of the two imaginary terms in the round bracket of \eqref{comb2} is dictated by the three-particle unitarity cut, where the phase space integration over the soft momentum of the massless quantum is responsible for the infrared singularity in ${\rm Im}\,2\delta_2$ (let us recall that ${\rm Im}\,2\delta_2$ contains just the inelastic contribution to the cut \cite{Amati:1990xe}). Furthermore, using real-analyticity of the amplitude forces the $\log (\sigma^2 -1)$ to appear in $\delta_2$ as $\log (1-\sigma^2) = \log (\sigma^2 -1) - i \pi$ yielding precisely the analytic structure of \eqref{comb2}. Combining these two observations, which are based purely on unitarity, analyticity and crossing symmetry, we are led to conjecture the validity of \eqref{1.5} independently of the specific theory under consideration. As anticipated, this relation opens the way to a much simpler calculation of RR effects since it trades the computation of $\operatorname{Re}2\delta_2^{(rr)}$ to that of the IR-divergent part of ${\rm Im}\,2\delta_2$. In the following sections we will carry out this calculation both for the supersymmetric case at hand, for pure gravity where we shall recover a recent result by Damour~\cite{Damour:2020tta}, and for the scalar-tensor theory of Jordan-Brans-Dicke. For the purpose of computing the IR-divergent piece in ${\rm Im}\,2\delta_2$, one can focus on the leading ${\cal O}(k^{-1})$ term in the soft expansion of the inelastic amplitudes given in Sect~\ref{softm}. This allows us to factor out, for each specific theory, the corresponding elastic amplitude. Next, and in this order, one has to take the leading term in a small-$q$ expansion so as to get the sought-for classical contribution. In terms of the impact parameter $b$ which will be introduced in~\eqref{eq:ft}, the small-$q$ limit is equivalent to an expansion for large values of $b$. Since the soft factor is linear in $q$ (it goes to zero at zero scattering angle), and the tree amplitude has a $q^{-2}$ singularity, the result for the inelastic amplitude is (modulo $\epsilon$ dependence) of $\mathcal O(b^{-1})$ and thus of the desired $\mathcal O(b^{-2})$ in ${\rm Im}\, 2\delta_2$. 
\section{Soft Amplitudes in $b$-space} \label{softb} We now start from the momentum space soft amplitudes given in Sect.~2 and go to impact parameter space using for a generic amplitude the notation \begin{equation}\label{eq:ft} \tilde A(b) = \int \frac{d^{2-2\epsilon} q}{(2\pi)^{2-2\epsilon}}\frac{A(q)}{4m_1 m_2 \sqrt{\sigma^2-1}}\, e^{i b \cdot q}\;. \end{equation} We can now simply replace the factors of $q_j$ in the numerators of the various amplitudes $A_5$ by the derivative $- i \frac{\partial}{\partial b^j}$ and then perform the Fourier transform where the $q$-dependence appears only in $A_{tree}$. Starting from the ${\cal N}=8$ elastic tree-level amplitude with $\phi=\frac{\pi}{2}$, given, up to analytic terms as $q^2\to0$, by \begin{equation} A_{tree} = 8 \pi \beta(\sigma) \frac{m_1m_2}{q^2} \,, \qquad \beta(\sigma) = 4 G m_1 m_2 \sigma^2\,, \label{A0} \end{equation} the leading eikonal takes the form \begin{equation} 2 \delta_0 = -\beta(\sigma) \frac{\Gamma(1-\epsilon) (\pi b^2)^{\epsilon}}{2 \epsilon \hbar \sqrt{\sigma^2-1}} ~ \Rightarrow - i \frac{\partial}{\partial b^j} 2 \delta_0 = \frac{ i\,\Gamma(1-\epsilon)\, b^j (\pi b^2)^{\epsilon}}{ b^2 \hbar \sqrt{\sigma^2-1}} \beta(\sigma)\;. \label{delta0} \end{equation} As clear from~\eqref{2.1b}, one can move from $\mathcal N=8$ to the case of pure GR simply by replacing the prefactor $\beta(\sigma)$ by \begin{equation} \label{eq:tAd} \beta^{GR}(\sigma) = 2 G m_1 m_2 (2\sigma^2-1)\;. \end{equation} We then obtain the following result for the classical part of the soft graviton and soft dilaton amplitudes in impact parameter space \begin{align} \label{2.81l} \begin{split} {\tilde{A}}_5^{\mu \nu} (\sigma, b, k) &\simeq {i \frac{\kappa \beta(\sigma)(\pi b^2)^{\epsilon}}{b^2 \sqrt{\sigma^2-1}}} \\ &\times \left[ (k b)\left( \frac{\bar{p}_1^\mu \bar{p}_1^\nu}{(\bar{p}_1k)^2} - \frac{\bar{p}_2^\mu \bar{p}_2^\nu}{(\bar{p}_2k)^2}\right)- \frac{\bar{p}_1^\mu {b}^\nu+ \bar{p}_1^\nu {b}^\mu}{ (\bar{p}_1k) } + \frac{\bar{p}_2^\mu {b}^\nu+ \bar{p}_2^\nu {b}^\mu}{ (\bar{p}_2 k) } \right]\,, \end{split} \\ {\tilde{A}}_5^{dil} (\sigma, b, k) &\simeq - {i \frac{\kappa \beta(\sigma)(\pi b^2)^{\epsilon}}{ \sqrt{2(\sigma^2-1)}}} {\frac{(k b)}{b^2} } \left[ \frac{m_1^2 }{(\bar{p}_1k)^2} - \frac{m_2^2}{(\bar{p}_2k)^2} \right] \,, \label{2.82l} \end{align} where we approximated the factor of $\Gamma(1-\epsilon)$ in~\eqref{delta0} to 1 as we are interested in the $D\to 4$ case, but we continue to keep track of the dimensionful factor of $b^{2\epsilon}$. Having obtained Eqs.~\eqref{2.81l}, \eqref{2.82l} with the appropriate normalization, we follow the same procedure to go over to $b$-space for the other fields relevant to the ${\cal N}=8$ analysis. For the two vectors we obtain \begin{align} {\tilde{A}}_5^\mu &\simeq {i \sqrt{2} \frac{\kappa m_1 \beta(\sigma)(\pi b^2)^{\epsilon}}{ b^2 \sqrt{\sigma^2-1}} } \left[ \frac{ (k b) \bar{p}_1^\mu}{ (\bar{p}_1k)^2}- \frac{{b}^\mu}{ \bar{p}_1k}\right]\,, \\ {\tilde{B}}_5^\mu &\simeq {- i \sqrt{2} \frac{\kappa m_2 \beta(\sigma)(\pi b^2)^{\epsilon}}{ b^2 \sqrt{\sigma^2-1}} } \left[ \frac{ (k b)\bar{p}_2^\mu}{(\bar{p}_2k)^2}- \frac{{b}^\mu}{\bar{p}_2k}\right], \label{2.10} \end{align} while for the two scalars we get \begin{eqnarray} {\tilde{A}}_5 \simeq{i \frac{\kappa m_1^2 \beta(\sigma)(\pi b^2)^{\epsilon}}{ b^2 \sqrt{\sigma^2-1}} } \frac{(k b)}{(\bar{p}_1k)^2} \,,\qquad {\tilde{B}}_5 \simeq - {i \frac{\kappa m_2^2 \beta(\sigma)(\pi b^2)^{\epsilon}}{ b^2 \sqrt{\sigma^2-1}} } \frac{(k b)}{(\bar{p}_2 k)^2}\,. 
\label{2.11} \end{eqnarray} Note that all our soft amplitudes are homogeneous functions of $\omega$ and $b$ of degree $-1$ and $-1 + 2 \epsilon$, respectively. \section{IR Divergence of the 3PM Eikonal} \label{probability} Motivated by the discussion of Sect.~\ref{RRIR} and armed with the results of the Sect.~\ref{softb}, we now turn to the calculation of the infrared divergent part of ${\rm Im} \,2 \delta_2$ from the three-particle unitarity cut. Indeed the unitarity convolution in momentum space diagonalizes in impact parameter space giving (see e.g. \cite{Amati:2007ak}) \begin{equation} 2 {\rm Im} \,2 \delta_2 = \sum_i \int \frac{d^{D-1} \vec k}{2|\vec{k}| (2\pi)^{D-1}} | \tilde{A}_{5i} |^2 \,, \label{3.1} \end{equation} where the sum is over each massless state in the theory under consideration. For spin-one and spin-two particles this also includes a sum over helicities. Instead of separating different helicity contributions, we use the fact that all the $2 \to 3$ amplitudes we use are gauge invariant/transverse and simply insert the corresponding on shell Feynman and de Donder propagators, {\em i.e.} $\eta^{\mu\nu}$ for the vectors and $\frac{1}{2} \left(\eta^{\mu\rho} \eta^{\nu\sigma} + \eta^{\mu \sigma} \eta^{\nu \rho} - \eta^{\mu\nu} \eta^{\rho\sigma} \right)$ for the graviton. Equation \eqref{3.1} implies that $\beta^2(\sigma)$ always factors out of the integral over $\vec k$. In spherical coordinates the latter splits into an integral over the modulus $|\vec{k}|$ and one over the angles defined by the following parametrisation of the vector $\vec{k}$: \begin{equation} \vec{k} = |\vec{k}| ( \sin \theta \cos \varphi,\, \sin \theta \sin \varphi,\, \cos \theta)\,,\qquad (k b) = - |\vec{k}| b \sin \theta \cos \varphi\, , \label{3.1a} \end{equation} that implies \begin{equation} \label{eq:pkang} (\bar{p}_1k) = |\vec{k}| (E_1 - \bar{p} \cos\theta) \,, \qquad (\bar{p}_2k) = |\vec{k}| (E_2 + \bar{p} \cos\theta)\, , \end{equation} where we have taken $b$ in \eqref{3.1a} along the $x$ axis. It is clear that the integral over $|\vec{k}| = \hbar \omega$ in~\eqref{3.1} factorises together with an $\epsilon$-dependent power of $b$ to give\footnote{We need to keep $D=4-2\epsilon$ only for the integral over $|\vec{k}|$ while the integration over the angular variables can be done for $\epsilon=0$, so that effectively $d^{D-1} \vec k = |\vec{k}|^{2-2\epsilon} d|\vec{k}|\, \sin \theta \,d \theta\, d \varphi$.} \begin{equation} 2 {\rm Im} \,2 \delta_2 \sim \int \frac{d \omega}{\omega} \omega^{- 2 \epsilon} (b^2)^{-1 +2 \epsilon} \sim (b^2)^{-1+ 3 \epsilon} \int \frac{d \omega}{\omega} (\omega b)^{- 2 \epsilon} \label{eq:of} \end{equation} where the factor $ (b^2)^{-1+ 3 \epsilon}$ is precisely the one expected (also on dimensional grounds) to appear in $\delta_2$. On the other hand, the integral over $\omega$ produces a $\frac{1}{\epsilon}$ divergence in the particular combination: \begin{equation} \int \frac{d \omega}{\omega} (\omega b)^{- 2 \epsilon} = - \frac{1}{2 \epsilon} (\,\overline{\omega b}\,)^{-2 \epsilon} = -\frac{1}{2 \epsilon} + \log \overline{\omega b}+ {\cal O}(\epsilon) \label{comb} \end{equation} where $\overline{\omega b}$ is an appropriate upper limit on the classical dimensionless quantity $\omega b$. To determine $\overline{\omega b}$ one can argue as follows. By energy conservation: \begin{equation} \label{hom} \hbar \omega = \Delta E_1 + \Delta E_2 \end{equation} where $\Delta E_i$ is the energy loss for the $i^{\rm th}$ particle. 
On the other hand, in order for the spatial components of the momentum transfers $q_i = -(p_i + k_i)$ to provide a classical contribution, they should be of order $\hbar/b \ll |\vec{p}_i|$. But then we can estimate \eqref {hom} by using (for on-shell particles): \begin{equation} \label{onshell} \Delta E_i \lesssim\frac{ |\vec{p}_i|}{E_i} |\Delta \vec{p}_i| \qquad (i = 1,2)\,. \end{equation} Combining \eqref{hom} and \eqref{onshell} we arrive at \begin{equation} \omega b \lesssim \frac{ |\vec{p}_1|}{E_1}+ \frac{ |\vec{p}_2|}{E_2}\,. \end{equation} Using now the following (centre-of-mass) expressions, \begin{equation} \label{eq:rws} \begin{gathered} \bar{p} \; \simeq \; |\vec{p}\,| = \frac{m_1 m_ 2 \sqrt{\sigma^2-1}}{\sqrt{m_1^2 +m_2^2+2m_1 m_2 \sigma}}\;,\\ E_1 = m_1 \frac{m_1+ \sigma m_2}{\sqrt{m_1^2 +m_2^2+2 m_1 m_2 \sigma}}\;,\quad E_2 = m_2 \frac{m_2+ \sigma m_1}{\sqrt{m_1^2 +m_2^2+2 m_1 m_2 \sigma}}\;, \end{gathered} \end{equation} we find: \begin{equation} \overline{\omega b} \sim \sqrt{\sigma^2-1} \left( \frac{m_2}{m_1 + \sigma m_2} + \frac{m_1}{m_2 + \sigma m_1} \right) = \sqrt{\sigma^2-1}( 1 + {\cal O}(\sigma -1))\;. \label{baromb} \end{equation} Therefore, inserting this result in \eqref{comb} and using the real-analyticity argument mentioned in Sect.~3, precisely the combination appearing in \eqref{comb2} is indeed recovered. This is the essence of our argument for conjecturing \eqref{comb2} as a general connection between RR and soft limits. The rest of this section provides examples and non trivial tests of such a connection. \subsection{Massive ${\cal{N}}=8$ Supergravity} We evaluate separately the ${\cal O}(\epsilon^{-1})$ contribution to~\eqref{3.1} for each massless state: the graviton, the dilaton, two vectors and two scalars coupling to the particle of mass $m_1$ and other two vectors and two scalars coupling to the particle of mass $m_2$. We first start from the dilaton contribution. By using~\eqref{2.82l} in~\eqref{3.1} we obtain \begin{equation} \label{eq:dp1} ({\rm Im} \,2 \delta_2)_{dil} \simeq \frac{\kappa^2 {\beta}^2(\sigma)}{4 b^2(\sigma^2-1)} \int \frac{d |\vec{k}| |\vec{k}|^{-2\epsilon-1} }{2 (2\pi)^{3}} \!\int_{-1}^1\!\!\!dx\,\pi (1-x^2)\! \left[ \frac{m_1^2}{({E_1} -\bar{p} x)^2} - \frac{m_2^2}{({E_2} +\bar{p} x)^2} \right]^2\!\!\!, \end{equation} where $x=\cos\theta$. The extra factor of $\pi\sin^2\theta=\pi (1-x^2) $ in the integrand follows from the integration over the angle $\varphi$. As already mentioned the integral over $|\vec{k}|$ factorises out of the whole integral and provides the sought for $\epsilon^{-1}$ factor. Finally, by using~\eqref{eq:rws}, we express everything in terms of $\sigma$ introduced in \eqref{2.1}. Then, using~\eqref{eq:rws} in~\eqref{eq:dp1} and performing the integral over $x$, we obtain \begin{eqnarray} ({\rm Im} \,2 \delta_2)_{dil} (\sigma, b) \simeq -\frac{1}{2 \epsilon } \frac{G {\beta}^2(\sigma)}{\pi \hbar b^2 (\sigma^2-1)^2} \left[ \frac{\sigma^2+2}{3} - \frac{\sigma}{ (\sigma^2-1)^{\frac{1}{2}} } \cosh^{-1} (\sigma) \right] . \label{3.3} \end{eqnarray} Note that the final result depends on the masses only through $\sigma$ even if the integrand depends on $m_1, m_2$ and $\sigma$ separately. The term with the factor of $\cosh^{-1}(\sigma)$ emerges from the cross-product of the square in~\eqref{eq:dp1}, while the other terms yield only rational contributions in $\sigma$. 
For the graviton's contribution, using~\eqref{2.81l} in~\eqref{3.1}, we obtain % \begin{align} ({\rm Im} \,& 2 \delta_2)_{gr} (\sigma, b) \simeq - \frac{\kappa^2 {\beta}^2(\sigma)}{2 b^2 (\sigma^2-1)} \left(-\frac{1}{2 \epsilon} \,\frac{1}{2 (2\pi)^{3}} \right) \pi \int_{-1}^1\!\! dx \nonumber \\ \times & \Bigg\{4 \left[ \frac{m_1^2}{(E_1-\bar{p}x)^2} + \frac{m_2^2}{(E_2+\bar{p} x)^2} - \frac{2m_1m_2 \sigma}{(E_1-\bar{p} x) (E_2 + \bar{p} x)} \right] \\ \nonumber & - \frac{1-x^2}{2} \left[ \frac{m_1^4}{(E_1- \bar{p} x)^4} + \frac{m_2^4}{(E_2+ \bar{p}x)^4} - \frac{2 m_1^2m_2^2 (2\sigma^2-1)}{(E_1 -\bar{p} x)^2 (E_2+\bar{p} x)^2}\right] \Bigg\}\,. \end{align} The integral over $x$ is again elementary\footnote{Surprisingly, it turns out to be the same as the integral appearing in Eq.~(4.4) of~\cite{Damour:2020tta} and thus reproduces exactly the function ${\cal I}$ in~(4.7) of that reference.}. In terms of the variable $\sigma$ we obtain: \begin{equation} ({\rm Im} \,2 \delta_2)_{gr} (\sigma, b) \simeq -\frac{1}{2 \epsilon } \frac{G {\beta}^2(\sigma)}{\pi \hbar b^2 (\sigma^2-1)^2} \left[ \frac{ 8 -5 \sigma^2}{3} - \frac{\sigma(3-2\sigma^2)}{(\sigma^2-1)^{\frac{1}{2}}} \cosh^{-1} (\sigma) \right] . \label{3.2} \end{equation} Following the same procedure for the contribution of the two vectors in~\eqref{2.5} we get \begin{eqnarray} ({\rm Im} \,2 \delta_2)_{vec} (\sigma, b) \simeq -\frac{1}{2 \epsilon} \frac{G {\beta}^2(\sigma)}{\pi \hbar b^2 (\sigma^2-1)^2} \left[\frac{8}{3} (\sigma^2-1)\right] \label{3.4} \end{eqnarray} and for the sum of the two scalars in~\eqref{2.6} we obtain \begin{eqnarray} ({\rm Im} \,2 \delta_2)_{sca} (\sigma, b)\simeq -\frac{1}{2 \epsilon} \frac{G {\beta}^2(\sigma)}{\pi \hbar b^2 (\sigma^2-1)^2} \left[\frac{2}{3} (\sigma^2-1)\right]. \label{3.5} \end{eqnarray} In the last two types of contributions the soft particles are attached to the same massive state, so there are no terms in the integrand with the structure appearing in the cross term of~\eqref{eq:dp1} and hence no factors of $\cosh^{-1}(\sigma)$ in the final result. Also the graviton and the dilaton results contain contributions of this type corresponding to the terms in the integrands which depend only on $E_1$ or $E_2$. In the ${\cal N}=8$ setup these contributions cancel when summing over all soft particles. Notice also that the static limit $\sigma\to 1$ of~\eqref{3.4} and~\eqref{3.5} is qualitatively different from that of the full graviton and dilaton contributions as it starts one order earlier. Then thanks to~\eqref{1.5} also the leading term of the PN expansion of the ${\cal N}=8$ eikonal or deflection angle is due to the vectors and the scalars in~\eqref{2.3} and~\eqref{2.3b}. By summing the contributions~\eqref{3.3}--\eqref{3.6}, we get the following result for the infrared divergent part of the three-particle discontinuity in ${\cal N}=8$ supergravity with $\phi=\frac{\pi}{2}$ \begin{eqnarray} \left({\rm Im} 2\delta_2\right) \simeq -\frac{G {\beta}^2(\sigma)}{\pi \hbar b^2 \epsilon (\sigma^2-1)^2} \left[ \sigma^2 + \frac{\sigma (\sigma^2-2)}{(\sigma^2-1)^{\frac{1}{2}}} \cosh^{-1} (\sigma) \right] \; \label{3.6} \end{eqnarray} and we can check that it is consistent with~\eqref{1.5} and~\eqref{Redel2}. Further checks of the relation~\eqref{1.5} could be performed by extending the same analysis to the case of $\cos \phi \ne 0$ or to supergravity theories with $0<\mathcal {N} < 8$. 
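The algebra above is simple to cross-check symbolically. The following sympy sketch (all symbol names are ours) verifies that the square brackets of \eqref{3.3}, \eqref{3.2}, \eqref{3.4} and \eqref{3.5} combine into that of \eqref{3.6}, keeping track of the fact that the former carry a prefactor $-\frac{1}{2\epsilon}$ while the latter carries $-\frac{1}{\epsilon}$, and records how \eqref{1.5} then reproduces \eqref{Redel2}.
\begin{verbatim}
import sympy as sp

sigma = sp.symbols('sigma', positive=True)
arc, root = sp.acosh(sigma), sp.sqrt(sigma**2 - 1)

# Square brackets of (3.3), (3.2), (3.4), (3.5); the common prefactor
# -(1/(2*eps)) * G*beta^2 / (pi*hbar*b^2*(sigma^2-1)^2) is stripped off.
dil = (sigma**2 + 2)/3 - sigma*arc/root
gr  = (8 - 5*sigma**2)/3 - sigma*(3 - 2*sigma**2)*arc/root
vec = sp.Rational(8, 3)*(sigma**2 - 1)
sca = sp.Rational(2, 3)*(sigma**2 - 1)

# Bracket of (3.6), whose prefactor is -(1/eps) * G*beta^2 / (pi*hbar*b^2*(sigma^2-1)^2);
# the factor 1/2 below compensates the different prefactors.
target = sigma**2 + sigma*(sigma**2 - 2)*arc/root
print(sp.simplify((dil + gr + vec + sca)/2 - target))    # prints 0

# Relation (1.5) then gives
#   Re 2delta_2^(rr) = G*beta^2/(hbar*b^2*(sigma^2-1)^2) * target,
# which, with beta = 4*G*m1*m2*sigma^2, is precisely eq. (Redel2).
\end{verbatim}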
\subsection{General Relativity and Jordan-Brans-Dicke Theory} The calculation in pure GR follows exactly the same steps with only the contribution of the graviton and yields again the result in Eq.~\eqref{3.2} just with the prefactor $({\beta}^{GR}(\sigma))^2$ in place of ${\beta}^2(\sigma)$. Then, assuming that Eq.~\eqref{1.5} is also valid in GR, we get \begin{equation} ({\rm Re} \,2 \delta^{(rr)}_2)_{GR} (\sigma, b) = \frac{G ({\beta}^{GR}(\sigma))^2}{2 \hbar b^2 (\sigma^2-1)^2} \left[ \frac{ 8 -5 \sigma^2}{3} - \frac{\sigma(3-2\sigma^2)}{(\sigma^2-1)^{\frac{1}{2}}} \cosh^{-1} (\sigma) \right] \label{3.2b} \end{equation} and, from it, we obtain the deflection angle \begin{equation} \label{eq:6.6} (\chi^{(rr)}_3)_{GR} = - \frac{\hbar}{|\vec{p}|} \frac{\partial {\rm Re} 2 \delta^{(rr)}_2}{\partial b} = \frac{G ({\beta}^{GR}(\sigma))^2}{|\vec{p}| b^3 (\sigma^2-1)^2} \left[ \frac{ 8 -5 \sigma^2}{3} - \frac{\sigma(3-2\sigma^2)}{(\sigma^2-1)^{\frac{1}{2}}} \cosh^{-1} (\sigma) \right] \end{equation} which reproduces the one given in Eq. (6.6) of~\cite{Damour:2020tta}. At the moment, the physical reason for this agreement is unclear. The results obtained so far allow one to derive in a straightforward way the zero-frequency limit (ZFL) of the energy spectrum $\frac{dE^{rad}}{d\omega}$. Indeed, the energy spectrum is just the integrand of~\eqref{3.1} for the graviton multiplied by an extra factor of $ \hbar \omega$ (see also~\cite{Goldberger:2016iau}) so that, \begin{equation} \label{eq:EradG3} E^{\rm rad} = \int \frac{d^{D-1} k}{2 (2\pi)^{D-1}} \tilde{A}^{*\, \mu\nu}_{5}\left(\eta_{\mu\rho} \eta_{\nu\sigma} - \frac{1}{2} \eta_{\mu\nu} \eta_{\rho\sigma}\right) \tilde{A}_5^{\rho\sigma} \equiv \int_0^\infty\!\!\! d\omega \,\frac{d E^{rad}}{d\omega}\;. \end{equation} Since we computed only the $k\to 0$ limit of this integrand, we can reliably extract just the ZFL \begin{equation} \frac{d E^{rad}}{d\omega} ( \omega\to 0) = \lim_{\epsilon\to 0} \left[-4 \hbar \epsilon ({\rm Im} 2\delta_2)\right] \,. \label{3.7} \end{equation} In the case of GR we can use~\eqref{3.2} with $({\beta}^{GR}(\sigma))^2$ in place of ${\beta}^2(\sigma)$ and reproduce Eq.~(2.11) of~\cite{Kovacs:1978eu} (taken from~\cite{Ruffini:1970sp}) by taking the static limit $\sigma \to 1$ \begin{equation} \frac{dE}{d \omega} ( \omega \to 0) = \frac{32 G^3 m_1^2 m_2^2}{5\pi b^2}\;. \label{3.12} \end{equation} Our result \eqref{3.7} should hold true\footnote{T. Damour kindly informed us that he has carried out the explicit check.} at all values of $\sigma$, extending Smarr's original result \cite{Smarr:1977fy} to arbitrary kinematics (see~\cite{Kovacs:1978eu}). Possibly, our approach can be extended to compute the energy spectrum to sub and sub-sub leading order in $\omega$ and to reproduce, in particular cases, the results of~\cite{Sahoo:2018lxl}, \cite{Addazi:2019mjh} and~\cite{Saha:2019tub}. On the other hand, our method looks inadequate to deal with the full spectrum and with the total energy loss\footnote{Such a calculation has been recently tackled by a different approach in \cite{Herrmann:2021lqe}.}. For instance, extrapolating the ZFL result \eqref{3.7} to the upper limit given in \eqref{baromb} would reproduce, at large $\sigma$, the qualitative behaviour of Eq. (5.10) of \cite{Kovacs:1978eu}. 
But, as anticipated to be the case in \cite{Kovacs:1978eu}, and discussed in \cite{Gruzinov:2014moa} and \cite{Ciafaloni:2018uwe}, such a result needs to be amended, as in the ultra-relativistic/massless limit, at fixed $G$, it would violate energy conservation. Our connection between RR and soft limits readily applies to Jordan-Brans-Dicke (JBD) scalar-tensor theory. The coupling of the massless scalar to massive particles is very much like that of the dilaton except for a rescaling of the coupling by a function of the JBD parameter $\omega_J$ (the coefficient of the JBD kinetic term): \begin{equation} g_{JBD} = \frac{1}{\sqrt{2 \omega_J + 3}}\, g_{dil} \label{JBDcoup}\,. \end{equation} The string dilaton case is recovered for $\omega_J = -1$. It is then straightforward to calculate the RR in JBD theory. It amounts to inserting in~\eqref{3.2} and in~\eqref{3.3} the JBD $\beta(\sigma)$ factor, \begin{equation} \beta^{JBD}(\sigma) = 4G m_1 m_2 \left(\sigma^2- \frac{\omega_J +1}{2 \omega_J + 3}\right), \label{JBDbeta} \end{equation} and to further multiplying the dilaton's contribution of \eqref{3.3} by a factor $(2 \omega_J + 3)^{-2}$. Thus the contribution to the radiation reaction part of the eikonal from the JBD scalar reads \begin{equation} \frac{G {(\beta^{JBD})}^2 (2 \omega_J + 3)^{-2}}{2 \hbar b^2 (\sigma^2-1)^2} \left[ \frac{\sigma^2+2}{3} - \frac{\sigma}{ (\sigma^2-1)^{\frac{1}{2}} } \cosh^{-1} (\sigma) \right] . \label{JBDdelta} \end{equation} In the limit $\omega_J \to \infty$, this result vanishes leaving just the contribution of the graviton and thus reproducing the GR result. Since the present lower limit on $\omega_J$ is about $4\times 10^4$ the effect is unfortunately unobservable. \subsection*{Acknowledgements} We thank Enrico Herrmann, Julio Parra-Martinez, Michael Ruf and Mao Zeng for sharing with us a first draft of their paper \cite{Herrmann:2021lqe} and for useful comments on ours. We also thank Zvi Bern, Emil Bjerrum-Bohr, Poul Henrik Damgaard, Thibault Damour, Henrik Johansson, Rafael Porto and Ashoke Sen for valuable observations on a preliminary version of this letter. The research of RR is partially supported by the UK Science and Technology Facilities Council (STFC) Consolidated Grant ST/P000754/1 ``String theory, gauge theory and duality''. The research of CH (PDV) is fully (partially) supported by the Knut and Alice Wallenberg Foundation under grant KAW 2018.0116. \providecommand{\href}[2]{#2}\begingroup\raggedright
\section{Introduction} The mediation of supersymmetry breaking to the observable sector via supersymmetric gauge interactions (GMSB) has already been proposed during the very early days of super\-symmetric model building \cite{early1,earlyoraif}. The essential ingredients of this class of models are a sequestered sector containing a spurion or a dynamical superfield $\widehat{X}$, whose $F$-component $F_X$ does not vanish (there could exist several such fields). In addition, a messenger sector $\widehat{\varphi}_i$ exists, whose fields have a super\-symmetric mass $M$, but a mass splitting between its scalar/pseudoscalar components due to its coupling to $F_X$. They carry Standard Model gauge quantum numbers such that the messengers couple to the Standard Model gauge supermultiplets. Possible origins of supersymmetry breaking in the form of a nonvanishing $F_X$ component can be O'Raifeartaigh-type models \cite{earlyoraif}, models based on no-scale supergravity \cite{des,ue1995} or Dynamical Supersymmetry Breaking \cite{dsb,dsbnmssm,iss}. If supersymmetric gauge interactions would be the only interactions that couple the visible sector with the messenger/sequestered sector, the phenomenologically required $\mu$ and $B\mu$ terms of the Minimal Supersymmetric Standard Model (MSSM) would be difficult to generate. The simplest solution to this problem is the introduction of a gauge singlet superfield $\widehat{S}$ and a superpotential including the $\lambda\widehat{S}\widehat{H}_u\widehat{H}_d$ term, which has been used in early globally \cite{nmssm1} and locally supersymmetric \cite{nmssm2} models. Let us point out a possible connection between gravity mediated supersymmetry breaking and GMSB-like models \cite{des,ue1995}: standard gravity mediated supersymmetry breaking within the MSSM requires Giudice-Masiero-like terms (depending on the Higgs doublets) in the K\"ahler potential \cite{gm} in order to generate the $\mu$ and $B\mu$ terms (see \cite{hmz} for a possible 5-dimensional origin of such terms). Given a possible source for such terms, one can replace the Higgs doublets by the messengers of GMSB models and proceed as in the usual analysis of gauge mediation. The advantage of such models is that {\it no other} gravity mediated source of supersymmetry breaking as scalar or gaugino soft masses is required; such sources of supersymmetry breaking are frequently absent in higher dimensional setups. On the other hand, the solution of the standard $\mu$-problem for the Higgs doublets still requires the introduction of a singlet $\widehat{S}$. Then one is also led to the scenario considered in this paper, the Next-to-Minimal Supersymmetric Standard Model (NMSSM) with gauge mediated supersymmetry breaking. In order to generate a sufficiently large vacuum expectation value of the scalar component $S$ of $\widehat{S}$ (and hence a sufficiently large effective $\mu$ term $\mu_{eff} = \lambda \left< S\right>$), the singlet superfield $\widehat{S}$ should possess additional Yukawa interactions with the messenger/sequestered sector. Then, an effective potential for $S$ with the desired properties can be radiatively generated. Note that the so-called singlet tadpole problem \cite{tadpole} is absent once the original source of supersymmetry breaking is of the $F$-type \cite{des,neme,ue1995}. On the contrary, singlet tadpole diagrams can now generate the desired structure of the singlet effective potential \cite{des,ue1995}, triggering a VEV of $S$. 
If the singlet couples at the lowest possible loop order to the messenger/sequestered sector such that tadpole diagrams are allowed, a mild version of the singlet tadpole problem reappears, since the coefficients of the corresponding terms linear in $S$ are typically too large. This milder problem can be solved under the assumption that the involved Yukawa coupling is sufficiently small -- however, it does not need to be smaller than the electron Yukawa coupling of the Standard Model (see below). In the meantime, quite a large number of models involving GMSB and at least one gauge singlet, which generates an effective $\mu$ term, have been studied \cite{dgp,gmnm1,gr,gmnm2,dgs,gkr}. They differ in the particle content of the messenger/sequestered sector, and sometimes include more than one gauge singlet superfield. The purpose of the present paper is the investigation of a large class of models obtained after integrating out the messenger/sequestered sector (including possibly heavy singlet fields). It is assumed that the remaining particle content with masses below the messenger scale $M$ is that of the NMSSM. The couplings and mass terms of the NMSSM are obtained under the following assumptions:\newline -- no interactions between the Higgs doublets ${H}_u$, ${H}_d$ and the messenger/sequestered sector exist apart from supersymmetric gauge interactions; then no MSSM-like $\mu$ or $B\mu$ terms are generated after integrating out the messenger/sequestered sector;\newline -- the gauge singlet superfield $\widehat{S}$ has Yukawa interactions with the messenger/sequestered sector. As a result, various soft terms and $\widehat{S}$-dependent terms in the superpotential can be generated after integrating out the messenger/sequestered sector. Under the only assumption that the original source of supersymmetry breaking is $F_X$ and that the messengers have a mass of the order $M \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \sqrt{F_X}$, superspace power counting rules allow one to estimate the maximally possible order of magnitude of the generated masses and couplings. In general, these masses and couplings will comprise nearly all possibilities consistent with gauge invariance (see Section 2), leading to the general NMSSM. However, many of these mass terms and couplings can be much smaller than indicated by the power counting rules, or absent altogether (but never larger), if the corresponding diagrams involve high loop orders, small Yukawa couplings, or are forbidden by discrete or (approximate) continuous symmetries. In the next Section, we will parametrize the mass terms and couplings of the general NMSSM, and estimate their (maximally possible) radiatively generated order of magnitude with the help of superspace power counting rules. Section 3 is devoted to a phenomenological analysis of three different scenarios, which are defined by particular boundary conditions for the NMSSM parameters at the messenger scale, and Section 4 contains our conclusions.
\section{Results of superspace power counting rules} The class of models investigated in this paper is defined by a superpotential \begin{equation}} \def\eeq{\end{equation}\label{2.1} W=\lambda\widehat{S}\widehat{H}_u\widehat{H}_d +\frac{\kappa}{3}\widehat{S}^3 + \widetilde{W}(\widehat{S},\widehat{X},\widehat{\varphi}_i,\dots) +\dots\ , \eeq where $\widetilde{W}(\widehat{S},\widehat{X},\widehat{\varphi}_i,\dots)$ denotes the couplings of $\widehat{S}$ to the messenger/sequestered sector, and we have omitted the standard Yukawa couplings of $\widehat{H}_u$ and $\widehat{H}_d$. No MSSM-like $\mu$-term is assumed to be present. Due to a coupling $\widehat{X}\widehat{\varphi}_i\widehat{\varphi}_i$ in $\widetilde{W}$, a non-vanishing $F_X$-component \begin{equation}} \def\eeq{\end{equation}\label{2.2} F_X = m^2 \eeq induces a mass term \begin{equation}} \def\eeq{\end{equation}\label{2.3} \frac{1}{2} m^2 \left(A_{\varphi_i}^2 + A_{\varphi_i}^{*\ 2} \right) \eeq which gives opposite contributions to the squared masses of the real and imaginary components of the scalar components of the messengers $\widehat{\varphi}_i$. Since we assume no direct couplings of $\widehat{S}$ to $\widehat{X}$, this constitutes the only original source of supersymmetry breaking. After integrating out the messenger/sequestered sector, the remaining effective action for the light superfields $\widehat\Phi$ (the fields $\widehat{S},\widehat{H}_u,\widehat{H}_d,\dots$ of the NMSSM) is necessarily of the form \begin{equation}} \def\eeq{\end{equation}\label{2.4} \sum_i c_i \int d^4\theta f_i(D_\alpha, \overline{D}_{\dot{\alpha}}, \widehat\Phi, \overline{\widehat\Phi}, \widehat{X}, \overline{\widehat{X}})\ , \eeq where the relevant terms are obtained after the replacement of at least one superfield $\widehat{X}$ by its $F$-component $F_X$. The maximally possible orders of magnitude of the coefficients $c_i$ can be obtained by dimensional analysis: if a function $f_i$ is of a mass dimension $[M]^{d_f}$, the corresponding coefficient $c_i$ has a mass dimension $[M]^{2-d_f}$. As long as $d_f \geq 2$ (which will be the case), $c_i$ will typically depend on the mass of the heaviest particle running in the loops to the appropriate power, and subsequently we identify this mass $M$ with a unique messenger scale $M_{mess}$. We are aware of the fact that models exist where the $c_i$ depend on several mass scales $M_i$; however, it is always trivial to identify a mass scale $M$ such that $c_i$ are bounded from above by $M^{2-d_f}$. Also, in the particular case $d_f=2$, $c_i$ can involve large logarithms; these depend on whether the VEV $F_X$ is ``hard'' (i.e. generated at a scale $\Lambda$ much larger than $M$) or ``soft'', i.e. generated by a potential involving terms of the order of $M$. In the first case, logarithms of the form $\ln(\Lambda^2/M^2)$ can appear in $c_i$. In the present situation (no interactions between the Higgs doublets ${H}_u$, ${H}_d$ and the messenger/sequestered sector) possible supercovariant derivatives $D_\alpha, \overline{D}_{\dot{\alpha}}$ inside $f_i$ do not lead to terms that would otherwise be absent; for this reason we will omit them in our analysis. (Here, we will not discuss the radiatively generated gaugino masses and scalar masses for the gauge non-singlets, but concentrate on the NMSSM specific effects.) To lowest loop order we can use the underlying assumption that only the singlet superfield $\widehat{S}$ has direct couplings to the messenger/sequestered sector (however, see Fig.~1 below). 
The first terms that we will investigate are then of the form \begin{equation}} \def\eeq{\end{equation}\label{2.5} \sum_i c_i \int d^4\theta f_i( \widehat{S}, \overline{\widehat{S}}, \widehat{X}, \overline{\widehat{X}})\ . \eeq Below we list all relevant terms with this structure. Given an expression of the form (\ref{2.5}), the generated $S$- and $F_S$-dependent terms can be obtained by the replacements \begin{equation}} \def\eeq{\end{equation}\label{2.6} \widehat{X}=M+\theta^2 m^2, \qquad \widehat{S}=S+\theta^2 F_S\ . \eeq Due to the coupling $\widehat{X}\widehat{\varphi}_i \widehat{\varphi}_i$, the supersymmetry conserving mass $M$ of the messengers $\widehat{\varphi}_i$ can be identified with the value of the scalar component of $\widehat{X}$. Loop factors like $(16\pi^2)^{-1}$ and model dependent Yukawa couplings are not explicitly given, but we indicate the powers of $m$ (which follow from the powers of $F_X$) and $M$ (which follow from dimensional analysis). The possible operators $f_i$ and the corresponding contributions to the scalar potential are then given by: \begin{eqnarray}} \def\eea{\end{eqnarray}\label{2.7} \widehat{S}\overline{\widehat{X}}+h.c.:& & m^2F_S+h.c.\\\label{2.8} \widehat{S}\overline{\widehat{X}}\widehat{X} +h.c.:& & \frac{m^4}{M} S +m^2 F_S +h.c.\\\label{2.9} \widehat{S}\overline{\widehat{S}}{\widehat{X}}+h.c.:& & \frac{m^2}{M} (S F_S^*+h.c.) + F_S F_S^*\\\label{2.10} \widehat{S}\overline{\widehat{S}}{\widehat{X}}\overline{\widehat{X}}:& & \frac{m^4}{M^2}S S^* + \frac{m^2}{M} (S F_S^* +h.c.) +F_S F_S^*\\ \label{2.11} \widehat{S}\widehat{S}\overline{\widehat{X}}+h.c.:& & \frac{m^2}{M} (S F_S +h.c.)\\ \label{2.12} \widehat{S}\widehat{S}{\widehat{X}}\overline{\widehat{X}} +h.c.:& & \frac{m^4}{M^2} S^2 + \frac{m^2}{M} S F_S +h.c. \eea Operators with higher powers of $\widehat{X}$ or $\overline{\widehat{X}}$ do not generate new expressions, and operators with higher powers of $\widehat{S}$ or $\overline{\widehat{S}}$ generate negligible contributions with higher powers of $M$ in the denominator (recall that we are assuming $M \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; m$). The terms $\sim F_S F_S^*$ in (\ref{2.9}) and (\ref{2.10}) only account for a correction to the wave function normalization of the superfield $\widehat{S}$, which can be absorbed by a redefinition of $\widehat{S}$. The remaining terms can be written as an effective superpotential $\Delta W$ and additional contributions $\Delta V_{soft}$ to the soft terms of the general NMSSM. To this end, the terms $S F_S^* +h.c.$ in (\ref{2.9}) and (\ref{2.10}) have to be rewritten using the expression derived from the superpotential (\ref{2.1}): \begin{equation}} \def\eeq{\end{equation}\label{2.13} F_S^* = \lambda{H_u}{H_d} +{\kappa}{S}^2 + \dots \eeq where the dots stand for terms of higher order in the loop expansion. We parametrize the effective superpotential $\Delta W$ and the soft terms $\Delta V_{soft}$ of the general NMSSM in agreement with SLHA2 conventions \cite{slha2}: \begin{eqnarray}} \def\eea{\end{eqnarray}\label{2.14} \Delta W &=& \mu'\widehat{S}^2 + \xi_F \widehat{S}\ ,\\ \label{2.15} \Delta V_{soft} &=& m_S^2\left|S\right|^2 +(\lambda A_\lambda S H_u H_d +\frac{1}{3} \kappa A_\kappa S^3 + m_S'^2 S^2 +\xi _S S +h.c.)\ . 
\eea Then the expressions (\ref{2.7}) to (\ref{2.12}) lead to \begin{eqnarray}} \def\eea{\end{eqnarray}\label{2.16} \mu' &\sim& \frac{m^2}{M}\ ,\\ \label{2.17} \xi_F &\sim& m^2\ ,\\ \label{2.18} m_S^2 &\sim& \frac{m^4}{M^2}\ ,\\ \label{2.19} A_\lambda = \frac{1}{3}A_\kappa &\sim& \frac{m^2}{M}\ ,\\ \label{2.20} m_S'^2 &\sim& \frac{m^4}{M^2}\ ,\\ \label{2.21} \xi_S &\sim& \frac{m^4}{M}\ . \eea Next, within the class of models defined by the superpotential (\ref{2.1}), there exist the diagrams shown in Fig.1 which generate terms in $\Delta V_{soft}$ which are not included in the list (\ref{2.16}) -- (\ref{2.21}). The corresponding operators and soft terms (after the replacement of $F_{H_u}$ and $F_{H_d}$ by their tree level expressions) are given by \begin{eqnarray}} \def\eea{\end{eqnarray}\label{2.22} \widehat{H}_u \overline{\widehat{H}}_u \widehat{X}+h.c. & \to \frac{m^2}{M} H_u F^*_{H_u} & \to \Delta A_\lambda = \Delta A_{t} \sim \frac{m^2}{M}\ ,\\ \label{2.23} \widehat{H}_u \overline{\widehat{H}}_u \widehat{X} \overline{\widehat{X}} &\to \frac{m^4}{M^2} H_u H^*_u &\to \Delta m_u^2 \sim \frac{m^4}{M^2}\ , \eea together with analogous expressions with $H_u$ replaced by $H_d$ (and $A_{t}$ by $A_{b}$). \begin{figure}[ht] \vskip 0.5cm \begin{center} \includegraphics[scale=0.64]{fig1.eps} \caption{Superfield diagrams which generate the operators (\ref{2.22}) and (\ref{2.23}) (omitting, for simplicity, the ``hats'' on top of the letters denoting the superfields.)} \label{1.1f} \end{center} \vspace*{5mm} \end{figure} Similar expressions are also generated by i)~the replacement of the shaded bubbles in Fig.1 by the effective operators (\ref{2.9}) and (\ref{2.10}) (which generate the soft terms (\ref{2.18}) and (\ref{2.17})), and ii)~the Renormalization Group (RG) evolution of $A_\lambda$, $A_{t}$, $A_{b}$, $m_u^2$ and $m_d^2$ from the messenger scale $M$ down to the weak (or SUSY) scale $M_{SUSY}$. Whereas this RG evolution sums up potentially large logarithms of the form $\ln(M^2/M_{SUSY}^2)$, it does not describe contributions without such logarithms which serve as boundary conditions for the RG evolution at the scale $Q^2 = M^2$. Note that both contributions (\ref{2.22}) and (\ref{2.23}) are generated only at (or beyond) two loop order, and are hence suppressed by additional factors $\lambda^2/(16\pi^2)^2 \times$ additional Yukawa couplings. Compared to the effective SUSY breaking scale $m^2/(16 \pi^2 M)$, the contribution to the $A$ terms (\ref{2.22}) is negligibly small. However, the contribution (\ref{2.23}) to $\Delta m_u^2 = \Delta m_d^2$ can be of the same order as the two loop contributions mediated by gauge interactions (see appendix A), if $\lambda$ is not too small. Since the contribution (\ref{2.23}) to $\Delta m_u^2 = \Delta m_d^2$ is typically negative, we will subsequently parametrize it in terms of $\Delta_H$ defined as \begin{equation}} \def\eeq{\end{equation}\label{2.24} \Delta m_u^2 = \Delta m_d^2 = - \Delta_H \frac{\lambda^2}{(16\pi^2)^2} M_{SUSY}^2 \eeq with $M_{SUSY}=m^2/M$ as in appendix A, and $\Delta_H$ bounded from above by $\Delta_H \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;$ (Yukawa)$^2$ $\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; {\cal{O}}(1)$. 
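For orientation, the following short Python snippet (our own illustrative sketch, not part of any published code; all variable names are ours) evaluates the maximal orders of magnitude (\ref{2.16}) -- (\ref{2.21}) together with the overall scale $M_{SUSY}=m^2/M$ for given values of $m^2=F_X$ and the messenger scale $M$; as in the text, loop factors $\sim (16\pi^2)^{-1}$ and model dependent Yukawa couplings are omitted, so the numbers are upper estimates only.
\begin{verbatim}
# Illustrative power counting estimates (upper bounds only); loop factors
# 1/(16 pi^2) and model dependent Yukawa couplings are omitted, as in the text.
def power_counting_estimates(m2, M):
    """m2 = F_X in GeV^2, M = messenger scale in GeV."""
    return {
        "mu'      ~ m^2/M   [GeV]  ": m2 / M,
        "xi_F     ~ m^2     [GeV^2]": m2,
        "m_S^2    ~ m^4/M^2 [GeV^2]": m2**2 / M**2,
        "A_lambda ~ m^2/M   [GeV]  ": m2 / M,
        "m_S'^2   ~ m^4/M^2 [GeV^2]": m2**2 / M**2,
        "xi_S     ~ m^4/M   [GeV^3]": m2**2 / M,
        "M_SUSY   = m^2/M   [GeV]  ": m2 / M,
    }

# values used later in Section 3.1: m^2 = 8e10 GeV^2, M = 1e6 GeV
for name, value in power_counting_estimates(m2=8.0e10, M=1.0e6).items():
    print(f"{name}: {value:.2e}")
\end{verbatim}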
To summarize this Section, within the class of models defined by the superpotential (\ref{2.1}) one obtains in general, after integrating out the messenger/sequestered sector, an effective NMSSM valid at scales below the messenger scale $M$, which includes \newline a) the first two terms in the superpotential (\ref{2.1}),\newline b) the soft SUSY breaking gaugino, squark, slepton and Higgs masses obtained by gauge mediation, which we recall for convenience in appendix A,\newline c) additional terms in the superpotential (\ref{2.14}) and additional soft terms (\ref{2.15}),\newline d) additional contributions to the soft SUSY breaking Higgs masses as in (\ref{2.24}). Note that neither an explicit $\mu$ term nor an explicit $m_3^2 \equiv B\mu$ term are present at the messenger scale $M$. However, once the above soft terms are used as boundary conditions for the RG evolution from $M$ down to $M_{SUSY}$, a term of the form $m_3^2\,H_u H_d$ can be radiatively generated in general. (In the appendix B, we recall the $\beta$-functions of the parameters of the Higgs sector of the general NMSSM. One finds that a non-vanishing parameter $m_S'^2$ generally induces a non-vanishing $m_3^2$.) Depending on the structure of the messenger/sequestered sector, many of the terms in (\ref{2.16}) -- (\ref{2.21}) can be disallowed or suppressed by discrete or approximate continuous symmetries. (Exact continuous symmetries forbidding any of these terms would be spontaneously broken in the physical vacuum, giving rise to an unacceptable Goldstone boson.) An exception is the term (\ref{2.10}) leading to the soft singlet mass term (\ref{2.18}), which can never be suppressed using symmetries. However, precisely this term is often generated only to higher loop order and/or to higher order in an expansion in $m/M$ as expected from na\"ive dimensional analysis \cite{ue1995,dgs}. Finally we remark that terms of the form $S F_S^* +h.c.$ (which give rise to the trilinear soft terms (\ref{2.19})) will be suppressed if an $R$-symmetry is only weakly broken in the scalar sector. \section{Phenomenological analysis} The purpose of this Section is the phenomenological analysis of various scenarios within the class of models defined in Section 2, that differ by the presence/absence of the different terms (\ref{2.16}) to (\ref{2.21}) and (\ref{2.24}). To this end we employ a Fortran routine NMGMSB, that will be made public on the NMSSMTools web page \cite{nmssmtools}. The routine NMGMSB is a suitable generalization of the routine NMSPEC (available on the same web site) towards the general NMSSM with soft SUSY breaking terms specified by GMSB, i.e. it allows for a phenomenological analysis of the class of models defined in Section 2. It requires the definition of a model in terms of the parameter $\lambda$ and the soft SUSY breaking and superpotential terms b) -- d) above. Since the coupling $\lambda$ at the effective SUSY breaking scale plays an important phenomenological r\^ole (and in order to allow for comparisons with other versions of the NMSSM as mSUGRA inspired), the coupling $\lambda$ on input is defined at an effective SUSY breaking scale $Q_{SUSY}$ given essentially by the squark masses. The remaining input parameters, notably the soft SUSY breaking terms listed in appendix A and in (\ref{2.16}) -- (\ref{2.21}), are defined at a unique messenger scale $M$. The RG equations are then integrated numerically from $M$ down to $Q_{SUSY}$. 
Additional input parameters are, of course, $M_Z$, and also $\tan\beta$ (at the scale $M_Z$). Similar to the procedure employed in NMSPEC, the minimization equations of the effective Higgs potential -- including radiative corrections as in \cite{nmssmtools} -- can then be solved for the Yukawa coupling $\kappa$ in the superpotential (\ref{2.1}), and for the SUSY breaking singlet mass $m_S^2$ (\ref{2.18}) or, if $m_S^2$ is fixed as input, for $\xi_S$. (If specific values for $\kappa$, $m_S^2$ and $\xi_S$ at the scale $M$ are desired as input, this procedure is somewhat inconvenient. Then, one would have to scan over at least some of the other input parameters and select points in parameter space where $\kappa$, $m_S^2$ or $\xi_S$ -- which are given at the scale $M$ as output -- are close enough to the desired numerical values.) Since the gauge and SM Yukawa couplings are defined at the scale $M_Z$, a few iterations are required until the desired boundary conditions at $M_Z$ and $M$ are simultaneously satisfied. After checking theoretical constraints as the absence of deeper minima of the effective potential and Landau singularities below $M$, the routine proceeds with the evaluation of the physical Higgs masses and couplings (in\-cluding radiative corrections as in \cite{nmssmtools}) and the sparticle spectrum including pole mass corrections. Then, phenomenological constraints can be checked: \newline -- Higgs masses, couplings and branching ratios are compared to constraints from LEP, including constraints on unconventional Higgs decay modes \cite{lhg} relevant for the NMSSM; \newline -- constraints from $B$-physics are applied as in \cite{bphys}, and the muon anomalous magnetic moment is computed. Subsequently we investigate several scenarios, for which many (but different) terms in the list (\ref{2.16}) -- (\ref{2.21}) vanish or are negligibly small. \subsection{Scenarios with tadpole terms} The tadpole terms $\xi_F$ in $\Delta W$ in (\ref{2.14}) and $\xi_S$ in $\Delta V_{soft}$ in (\ref{2.15}) will trigger a nonvanishing VEV of $S$. However, as it becomes clear from (\ref{2.17}) and (\ref{2.21}), these tadpole terms -- if not forbidden by symmetries -- tend to be too large: the scale of the soft SUSY breaking gaugino, squark, slepton and Higgs masses in GMSB models is given by $M_{SUSY} \sim m^2/M$ (together with an additional loop factor $(16\pi^2)^{-1}$, see appendix A). Written in terms of $M$ and $M_{SUSY}$, the maximally possible order of magnitude of the supersymmetric and soft SUSY breaking tadpole terms are $\xi_F \sim m^2 \sim M M_{SUSY}$ and $\xi_S \sim m^4/M \sim M M_{SUSY}^2$. If $M \gg M_{SUSY}$, which will generally be the case, these tend to be larger than the desired orders of magnitude $\xi_F \sim M_{SUSY}^2$ and $\xi_S \sim M_{SUSY}^3$. (This problem is similar to the $\mu$ and $B \mu$ problem in the MSSM with GMSB, see \cite{dgp}.) Hence one has to assume that these terms are suppressed, e.g. generated to higher loop order only as in \cite{des}, or involve small Yukawa couplings. Let us study the latter scenario quantitatively in a simple model \cite{ue1995}: let us assume that the singlet superfield couples directly to $n_5$ pairs of messengers $\widehat{\phi}$, $\widehat{\overline{\phi}}$ (in $\underline{5}$ and $\overline{\underline{5}}$ representations under $SU(5)$) due to a term \begin{equation}} \def\eeq{\end{equation}\label{3.1} -\eta\widehat{S}\widehat{\overline{\phi}}\widehat{\phi} \eeq in the superpotential $\widetilde{W}$ in (\ref{2.1}). 
Then, one loop diagrams generate \cite{ue1995} \begin{equation}} \def\eeq{\end{equation}\label{3.2} \xi_F = n_5\frac{\eta}{8\pi^2} m^2 \ln\left(\Lambda^2/M^2\right) \eeq and \begin{equation}} \def\eeq{\end{equation}\label{3.3} \xi_S = -n_5\frac{\eta}{16 \pi^2} \frac{m^4}{M} \eeq in agreement with the power counting rules (\ref{2.17}) and (\ref{2.21}). (The UV cutoff $\Lambda$ appears in (\ref{3.2}) only if the SUSY breaking $F_X$ is ``hard'' in the sense discussed in Section 2; otherwise the logarithm in (\ref{3.2}) should be replaced by a number of ${\cal O}(1)$.) Below, we consider a mass splitting $m^2 \sim 8\times 10^{10}$ GeV$^2$ among the messenger scalars and pseudoscalars, and a messenger scale $M \sim 10^6$ GeV. Then, for $\ln\left(\Lambda^2/M^2\right) \sim 3$, a Yukawa coupling $\eta \sim 2\times 10^{-6}$ generates $\xi_F \sim (150\ \mathrm{GeV})^2$ and $\xi_S \sim -(1\ \mathrm{TeV})^3$. We find that these orders of magnitude for $\xi_F$ and $\xi_S$ are perfectly consistent with a phenomenologically viable Higgs sector. Given the presence of small Yukawa couplings in the Standard Model, and the possibility of obtaining additional symmetries in the limit of vanishing $\eta$, we do not consider $\eta \sim 10^{-6} - 10^{-5}$ as particularly unnatural. The coupling (\ref{3.1}) also gives rise to a positive SUSY breaking mass squared \begin{equation}} \def\eeq{\end{equation}\label{3.4} m_S^2 = n_5\frac{\eta^2}{4\pi^2} \frac{m^6}{M^4} \eeq for the singlet $S$. Under the assumption of such small values for $\eta$, this term is numerically negligible (as well as contributions to $A_\lambda, A_\kappa, \mu', m_S'^2, \Delta_H$ and two loop contributions to $m_S^2$ of ${\cal O}(m^4/M^2)$). Hence in the following we will concentrate on models where, among the terms in (\ref{2.16}) -- (\ref{2.21}) and (\ref{2.24}), only $\xi_F$ and $\xi_S$ are nonvanishing. (These models are then similar to the ones denoted as ``nMSSM'' in \cite{nnmssm}. However, given the present constraints on the soft terms we found that a term $\sim \kappa$ in the superpotential (\ref{2.1}) is required for the stability of the scalar potential.) The remaining free parameters are $\tan\beta$, $\lambda$, $M$, $m^2/M$ and $\xi_F$: since $m_S^2$ is fixed as input at the scale $M$ (where $m_S^2=0$), the equation following from the minimization of the potential w.r.t. $S$ can be used to determine~$\xi_S$. Quite generally, there exist two distinct allowed regions in the parameter space, which differ in how the lightest scalar Higgs mass $m_{h_1}$ satisfies the LEP bound of $\sim 114$~GeV: \newline a) region A at low $\tan\beta$ and large $\lambda$, where the NMSSM specific contributions to the lightest Higgs mass allow for values above 114 GeV. Low values of $\tan\beta$ demand that the messenger scale $M$ is not too large: $\tan\beta \sim 1$ requires $m_u^2 \sim m_d^2$ at the SUSY scale, but the RG equation for $m_u^2$ differs from the one for $m_d^2$ by the presence of the top Yukawa coupling (which is particularly large for small $\tan\beta$). Thus the range of the RG running should not be too big, i.e. the scale $M$ should not be too far above the SUSY scale.\newline b) region B at large $\tan\beta$, where the messenger scale $M$ is quite large (typically $\sim 10^{13}$ GeV), resulting in stop masses in the 1.5--2 TeV range. Then the top/stop radiative corrections to the lightest Higgs mass can lift it above 114 GeV without the need for NMSSM specific contributions.
(At large $\tan\beta$, $\lambda$ does not increase the lightest Higgs mass; on the contrary, large values of $\lambda$ lower its mass through an induced mixing with the singlet-like scalar. Hence, $\lambda$ must be relatively small here.) However, in the present context one finds from (\ref{3.2}) and (\ref{3.3}) that such large values for $M$ (with fixed $m^2/M \sim 10^5$~GeV) would require extremely small values for $\eta$. For this reason we confine ourselves to region A in the following. In region A, the LEP bound on $m_{h_1}$ requires $\tan\beta$ to be smaller than $\sim 2$, and $\lambda$ larger than $\sim 0.45$. Subsequently we investigate the interval $0.45 < \lambda < 0.6$ and $\tan\beta > 1.2$, where perturbativity in the running Yukawa couplings $\lambda$, $\kappa$ and $h_t$ is guaranteed at least up to the messenger scale $M$. If we na\"ively extrapolate the RGEs beyond the scale $M$ (taking the contributions of the messenger fields to the running gauge couplings into account), perturbativity in the running Yukawa couplings is usually not satisfied up to the GUT scale in region A (in contrast to scenarios where $M \sim 10^{13}$ GeV). There exist different possible solutions to this problem: first, additional matter could be present at the messenger scale, charged under the SM gauge groups. Then, SM gauge couplings can become large (at the boundary of perturbativity) below the GUT scale, and since they induce a negative contribution to the $\beta$~functions for $h_t$ and $\lambda$, they could help to avoid a Landau singularity in the Yukawa sector below $M_{GUT}$. Another attitude would be to assume that a strongly interacting sector (possibly responsible for the breaking of supersymmetry) exists at or above the messenger scale $M$; then the singlet $S$, for example, could turn out to be a composite state which would imply a compositeness condition equivalent to Landau singularities in the Yukawa couplings of $S$ at the corresponding scale (without affecting, at the one loop level, the grand unification of the SM gauge couplings). Within the region $1.2 < \tan\beta < 2$ and $0.45 < \lambda < 0.6$, a wide range of the remaining parameters $M$, $m^2/M$ and $\xi_F$ satisfies all phenomenological constraints. Subsequently we fix these parameters near the center of the allowed range: $M=10^6$ GeV, $m^2/M=8\times 10^4$ GeV and $\xi_F=3\times 10^4$ GeV$^2$, and vary $\tan\beta$ and $\lambda$ in the above intervals (taking, for simplicity, $n_5 = 1$). The allowed range of $\tan\beta$ ($\tan\beta < 1.6$ for these values for $M$, $m^2/M$ and $\xi_F$) and $\lambda$ (actually $\lambda \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 0.5$) is shown in Fig. 2; the upper limit on $\tan\beta$ originates from the LEP bound on the lightest Higgs mass $m_{h_1}$. This becomes evident from Fig. 3, where we show the range of $m_{h_1}$ (for various values of $\lambda$, larger values of $\lambda$ corresponding to larger values of $m_{h_1}$) as a function of $\tan\beta$. If we would allow for larger values of $\lambda$ (and/or smaller values of $\tan\beta$), larger values for $m_{h_1}$ would be possible. 
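As a rough numerical cross-check (our own sketch, independent of NMGMSB), one may invert (\ref{3.2}) to obtain the Yukawa coupling $\eta$ corresponding to a given tadpole $\xi_F$, and then evaluate $\xi_S$ and $m_S^2$ from (\ref{3.3}) and (\ref{3.4}) for the benchmark values $M=10^6$ GeV, $m^2/M=8\times 10^4$ GeV and $\xi_F=3\times 10^4$ GeV$^2$ used above; the value of $\ln(\Lambda^2/M^2)$ is an assumption of the sketch.
\begin{verbatim}
# Rough check of the one loop tadpole estimates (our sketch); the value
# ln(Lambda^2/M^2) ~ 3 is an assumption, taken from the text.
import math

def eta_from_xiF(xi_F, m2, n5=1, log_term=3.0):
    # invert xi_F = n5 * eta/(8 pi^2) * m^2 * ln(Lambda^2/M^2), eq. (3.2)
    return 8.0 * math.pi**2 * xi_F / (n5 * m2 * log_term)

def xi_S(eta, m2, M, n5=1):     # eq. (3.3)
    return -n5 * eta / (16.0 * math.pi**2) * m2**2 / M

def m_S2(eta, m2, M, n5=1):     # eq. (3.4)
    return n5 * eta**2 / (4.0 * math.pi**2) * m2**3 / M**4

M, M_SUSY, xi_F = 1.0e6, 8.0e4, 3.0e4       # GeV, GeV, GeV^2
m2 = M_SUSY * M                             # m^2 = 8e10 GeV^2

eta = eta_from_xiF(xi_F, m2)
print(f"eta    ~ {eta:.1e}")                # of order 1e-5
print(f"xi_S   ~ {xi_S(eta, m2, M):.2e} GeV^3")
print(f"m_S^2  ~ {m_S2(eta, m2, M):.2e} GeV^2 (numerically negligible)")
\end{verbatim}
The resulting $\eta$ falls into the range $10^{-6}$ -- $10^{-5}$ quoted above.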
\begin{figure}[ht] \begin{center} \includegraphics[angle=-90,scale=0.5]{fig2.ps} \caption{Allowed values of $\lambda$ as a function of $\tan\beta$ for $M=10^6$ GeV, $M_{SUSY}=m^2/M=8\times 10^4$ GeV and $\xi_F=3\times 10^4$ GeV$^2$.} \label{1.2f} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[angle=-90,scale=0.5]{fig3.ps} \caption{The lightest Higgs mass as a function of $\tan\beta$ for the same parameters as in Fig.~2, larger values of $m_{h_1}$ corresponding to larger values of $\lambda$.} \label{1.3f} \end{center} \end{figure} In Fig. 4 we display the charged Higgs mass $m_{h^\pm}$ (practically degenerated with a scalar with mass $m_{h_2}$ and a pseudoscalar with mass $m_{a_2}$), the singlet-like scalar mass $m_{h_3}$ and the singlet-like pseudoscalar mass $m_{a_1}$, all of which are nearly independent of $\lambda$. For small $\tan\beta$ the large values of the Higgs masses indicate that this region is implicitly more fine tuned. The remaining sparticle spectrum is essentially specified by $M$ and $m^2/M$, and hardly sensitive to $\tan\beta$ and $\lambda$ within the above intervals: \begin{eqnarray}} \def\eea{\end{eqnarray} \mathrm{Bino:} && \sim 105\ \mathrm{GeV}\nonumber\\ \mathrm{Winos:} && \sim 200\ \mathrm{GeV}\nonumber\\ \mathrm{Higgsinos:} && \sim 670 - 1000\ \mathrm{GeV}\nonumber\\ \mathrm{Singlino:} && \sim 900 - 1800\ \mathrm{GeV}\nonumber\\ \mathrm{Sleptons:} && \sim 140 - 290\ \mathrm{GeV}\nonumber\\ \mathrm{Squarks:} && \sim 640 - 890\ \mathrm{GeV}\nonumber\\ \mathrm{Gluino:} && \sim 660\ \mathrm{GeV}\nonumber \eea (Due to the small value of $\tan\beta$ in this scenario, the supersymmetric contribution to the muon anomalous magnetic moment is actually too small to account for the presently observed deviation w.r.t. the Standard Model.) \begin{figure}[ht] \begin{center} \includegraphics[angle=-90,scale=0.5]{fig4.ps} \caption{Heavy Higgs masses as a function of $\tan\beta$ for the same parameters as in Fig.~2.} \label{1.4f} \end{center} \end{figure} In Fig. 5, we give the values of $\xi_S$ (at the scale $M$), which are obtained as an output as function of $\tan\beta$. Within the model corresponding to (\ref{3.1}) -- (\ref{3.3}) above, one can easily deduce the Yukawa coupling $\eta$ from $\xi_S$ using (\ref{3.3}) resulting in $\eta$ varying in the range $2\times 10^{-6}$ (for $\tan\beta = 1.6$) to $10^{-5}$ (for $\tan\beta = 1.2$). The corresponding value of $\ln\left(\Lambda^2/M^2\right)$ can then be deduced from (\ref{3.2}), with the conclusion that $\ln\left(\Lambda^2/M^2\right)$ should assume values in the range 1 to 4 -- a reasonable result, by no means guaranteed, that we consider as a strong argument in favour of such a simple model. \begin{figure}[ht] \begin{center} \includegraphics[angle=-90,scale=0.5]{fig5.ps} \caption{$\xi_S$ as a function of $\tan\beta$ for the same parameters as in Fig. 2.} \label{1.5f} \end{center} \end{figure} Finally we note that for larger values of $n_5$ (as $n_5=3$), $M$ (as $M = 2\times 10^{10}$) and $\xi_F$ (as $\xi_F= 10^5$ GeV$^2$, see also the next subsection) phenomenologically viable regions in parameter space exist where the running Yukawa couplings $\lambda$, $\kappa$ and $h_t$ remain perturbative up to $M_{GUT}$. Within the model above, these scenarios would require an even smaller Yukawa coupling $\eta$, $\eta \sim 10^{-8}$. \subsection{Scenarios without tadpole terms} Scenarios without tadpole terms have been proposed in \cite{gr}. If the number of messengers is doubled ($n_5 = 2$), i.e. 
introducing $\widehat{\Phi}_1$, $\widehat{\overline{\Phi}}_1$, $\widehat{\Phi}_2$ and $\widehat{\overline{\Phi}}_2$, these can couple to $\widehat{S}$ and to the spurion $\widehat{X}$ in such a way that a discrete $Z_3$ symmetry is left unbroken by the VEV of $\widehat{X}$ \cite{gr}: \begin{equation}} \def\eeq{\end{equation}\label{3.5} \widetilde{W}=\widehat{X}\left(\widehat{\overline{\Phi}}_1 \widehat{\Phi}_1 + \widehat{\overline{\Phi}}_2 \widehat{\Phi}_2 \right) +\eta\widehat{S}\,\widehat{\overline{\Phi}}_1\widehat{\Phi}_2 \eeq Then tadpole terms $\sim \xi_F$ and $\sim \xi_S$ are disallowed, and the Yukawa coupling $\eta$ can be much larger. These scenarios have recently been investigated in \cite{dgs} (see also \cite{gkr}), where the $SU(5)$ breaking (generated via the RG equations between $M_{GUT}$ and $M$) inside $\eta\widehat{S}\,\widehat{\overline{\Phi}}_1 \widehat{\Phi}_2$ has been taken into account. For larger values of $\eta$, messenger loops generate non-negligible values for the singlet mass $m_S^2$ (\ref{2.18}), trilinear $A$-terms (\ref{2.19}) and corrections $\Delta m_u^2 = \Delta m_d^2$ as in (\ref{2.24}) at the scale $M$ \cite{dgs}. Phenomenologically viable regions in parameter space have been found in \cite{dgs}, where the parameters $M$ and $M_{SUSY}$ have been chosen as $M = 10^{13}$~GeV and $M_{SUSY}=m^2/M = 1.72 \times 10^{5}$~GeV. The stop masses are quite large (up to $\sim 2$~TeV) such that the stop/top induced radiative corrections to $m_{h_1}$ lift it above the LEP bound of $\sim 114$~GeV. We have re-investigated this scenario in a somewhat simpler setup: first we observe that the generated values for $A_\kappa$ and $\Delta_H$, in the notation (\ref{2.19}) and (\ref{2.24}), are always related by \begin{equation}} \def\eeq{\end{equation}\label{3.6} A_\kappa = -\frac{3}{16 \pi^2} \Delta_H M_{SUSY} \eeq (with $\Delta_H = 2\xi_D^2 + 3\xi_T^2$ in the notation of \cite{dgs}, where $\xi_{D,T}$ denote Yukawa couplings corresponding to our $\eta$ in (\ref{3.5}). At $M_{GUT}$ one has $\xi_D = \xi_T \equiv \xi_U$ \cite{dgs}.) The singlet mass at the scale $M$ is then of the order \begin{equation}} \def\eeq{\end{equation}\label{3.7} m_S^2 \simeq \frac{1}{(16 \pi^2)^2} \left(\frac{7}{5} \Delta_H^2 - \frac{1}{5} (16 g_3^2 + 6g_2^2 +\frac{10}{3}g_1^2) \Delta_H -4\kappa^2 \Delta_H \right) M_{SUSY}^2\ , \eeq where we have neglected the $SU(5)$ breaking among the Yukawas at the scale $M$. We tried to reproduce the three phenomenologically viable regions in parameter space studied in \cite{dgs}: region I where $\xi_U \ll 1$, region III where $0.6 \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \xi_U \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 1.1$, and region II where $1.3 \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \xi_U \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 2$. We observe, however, that for $\xi_U \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 0.7$ (or $\Delta_H \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 1.5$ after taking the running of $\xi_U$ between $M_{GUT}$ and $M$ into account) the generated value for $\left|A_\kappa\right|$ from (\ref{3.6}) exceeds $\sim 5$ TeV at $M$ (still $\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 2$ TeV at the weak scale), which we interpret as a certain amount of fine tuning between the remaining parameters of the Higgs potential. We will not consider region II below.
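To make the size of these terms explicit, the following small Python sketch (ours) evaluates the relation (\ref{3.6}) and, for illustrative values of the gauge couplings at the scale $M$ and of $\kappa$, the singlet mass (\ref{3.7}); for the parameters of point P2 below ($\Delta_H=1.46$, $M_{SUSY}=1.72\times 10^5$ GeV) the first relation indeed gives $A_\kappa$ close to $-4.8$ TeV.
\begin{verbatim}
# Sketch (ours) of eqs. (3.6) and (3.7).  The gauge couplings at the
# scale M and the value of kappa are illustrative assumptions.
import math

LOOP = 16.0 * math.pi**2

def A_kappa(Delta_H, M_SUSY):                         # eq. (3.6)
    return -3.0 / LOOP * Delta_H * M_SUSY

def m_S2(Delta_H, M_SUSY, g1sq, g2sq, g3sq, kappa):   # eq. (3.7)
    bracket = (7.0/5.0 * Delta_H**2
               - 1.0/5.0 * (16.0*g3sq + 6.0*g2sq + 10.0/3.0*g1sq) * Delta_H
               - 4.0 * kappa**2 * Delta_H)
    return bracket * M_SUSY**2 / LOOP**2

print(f"A_kappa(P2) ~ {A_kappa(1.46, 1.72e5):.0f} GeV")
\end{verbatim}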
Note that, as in \cite{dgs}, we obtain $\kappa$ as an output (from the minimization equations of the Higgs potential with $M_Z$ as input), which can hide the fine tuning required. Limiting ourselves to $\Delta_H \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 1.5$ ($\left|A_\kappa\right| \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 5$ TeV), we were able to confirm the region~I. In Table~1 we show the Higgs spectrum, and in Table~2 the essential features of the corresponding sparticle spectrum for a representative point P1 in region~I, where $A_\kappa = -160$~GeV, $\Delta_H = 0.1$, $\lambda = 0.02$ and $\tan\beta = 6.6$ (leading to $m_S^2 \sim -2.8 \times 10^5$~GeV$^2$ in agreement with (\ref{3.7})). The point P2 in Tables~1 and 2 is in the region III of \cite{dgs}: there one has $A_\kappa = -4.77$~TeV, $\Delta_H = 1.46$, $\lambda = 0.5$ and $\tan\beta = 1.64$ ($m_S^2 \sim -5.3 \times 10^6$~GeV$^2$). We see that, in spite of stop masses in the 2 TeV region, $m_{h_1}$ is not far above the LEP bound. On the other hand these results confirm the phenomenological viability of the scenario proposed in \cite{gr,dgs}. (However, due to the very heavy sparticle spectrum the supersymmetric contribution to the muon anomalous magnetic moment is still too small to account for the presently observed deviation w.r.t. the Standard Model.) \begin{table}[!ht] \caption{Input parameters and Higgs masses for five specific points.} \vspace*{-5mm} \label{table1} \vspace{3mm} \footnotesize \begin{center} \begin{tabular}{|l|r|r|r|r|r|} \hline {\bf Point} & P1 & P2 & P3 & P4 & P5 \\\hline {\bf Input parameters } \\\hline Messenger scale $M$ (GeV) & $10^{13}$ & $10^{13}$ & $4\times 10^{8}$ & $3\times 10^{7}$ & $5\times 10^{14}$ \\\hline $M_{SUSY} = m^2/M$ (GeV) & $1.72\times 10^{5}$ & $1.72\times 10^{5}$ & $3.2\times 10^{4}$ & $3.5\times 10^{4}$ & $7.5\times 10^{4}$ \\\hline $\tan\beta$ & 6.6 & 1.64 & 1.6 & 1.9 & 40 \\\hline $n_5$ & 2 & 2 & 2 & 2 & 2 \\\hline $\lambda$ & 0.02 & 0.5 & 0.6 & 0.6 & 0.01 \\\hline $A_\kappa$ (GeV) & -160 &-4770 & 0 & 0 & 0 \\\hline $\Delta_H$ & 0.1 & 1.46 & 0 & 0 & 0 \\\hline $m_S^2$ (GeV$^2$) & $-2.8\times 10^5$ & $-5.3\times 10^6$ & $-4.3\times 10^5$ & $-2.1\times 10^{5}$ & $-5.0\times 10^{3}$ \\\hline\hline {\bf CP even Higgs masses} \\\hline $m_{h^0_1}$ (GeV) & 116.1 & 115.8 & 115.5 & 96.1 &94.5 \\\hline $m_{h^0_2}$ (GeV) & 794 & 2830 & 607 & 514 & 120 \\\hline $m_{h^0_3}$ (GeV) & 1762 & 3411 & 717 & 579 & 603 \\\hline\hline {\bf CP odd Higgs masses} \\\hline $m_{a^0_1}$ (GeV) & 448 & 2842 & 40.5 & 11.5 & 1.1 \\\hline $m_{a^0_2}$ (GeV) & 1761& 3662 & 628 & 546 & 603 \\\hline\hline {\bf Charged Higgs mass} \\\hline\hline $m_{h^\pm}$ (GeV) & 1764 & 2862 & 619 & 535 & 613 \\\hline \end{tabular}\end{center} \end{table} \begin{table}[!ht] \caption{Some sparticle masses and components for the five specific points of Table~1. 
The chargino masses are close to the wino/higgsino-like neutralino masses, the right-handed/left-handed slepton masses close to the stau$_1$/stau$_2$ masses, and the remaining squark masses are of the order of the gluino mass.} \vspace*{-5mm} \label{table2} \vspace{3mm} \footnotesize \begin{center} \begin{tabular}{|l|r|r|r|r|r|} \hline {\bf Point} & P1 & P2 & P3 & P4 & P5 \\\hline {\bf Neutralinos } \\\hline $\chi_1$ mass (GeV) & 467 & $469$ & $80.5$ & $88.3$ & $101$ \\\hline Dominant component & bino & bino & bino & bino & singlino \\\hline $\chi_2$ mass (GeV) & 839 & $890$ & $152$ & $166$ & $200$ \\\hline Dominant component & singlino & wino & wino & wino & bino \\\hline $\chi_3$ mass (GeV) & 882 & $2322$ & $463$ & $428$ & $380$ \\\hline Dominant component & wino & higgsino & higgsino & higgsino & wino \\\hline $\chi_4$ mass (GeV) & 1432& $2325$ & $476$ & $438$ & $675$ \\\hline Dominant component & higgsino & higgsino & higgsino & higgsino & higgsino \\\hline $\chi_5$ mass (GeV) & 1440 & $4019$ & $721$ & $572$ & $685$ \\\hline Dominant component & higgsino & singlino & singlino & singlino & higgsino \\\hline\hline Stau$_1$ mass (GeV) & 692 & $693$ & $100$ & $103$ & $260$ \\\hline Stau$_2$ mass (GeV) & 1100 & $1096$ & $188$ & $198$ & $514$ \\\hline Stop$_1$ mass (GeV) & 1931 & $1819$ & $376$ & $459$ & $872$ \\\hline Gluino mass (GeV) & 2389 & $2386$ & $522$ & $569$ & $1117$ \\\hline \end{tabular}\end{center} \end{table} \subsection{Scenarios without tadpole and $A$-terms} The scenario discussed in the previous subsection belongs to those where many (actually most) of the operators (\ref{2.7}) -- (\ref{2.12}) and (\ref{2.22}) -- (\ref{2.23}) are forbidden by a discrete $Z_N$ symmetry, which is left unbroken in the messenger/seques\-tered sector, but under which $\widehat{S}$ carries a non-vanishing charge. In the above case -- where $Z_N$ is {\it not} an $R$-symmetry -- all soft terms $m_S^2$, $A_\kappa = 3 A_\lambda$ and the parameter $\Delta_H$ in (\ref{2.24}) will in general be non-vanishing (all others being forbidden). The fate of $R$-symmetries in the context of gauge mediation has recently been reviewed in \cite{adjk}. In the case of spontaneous breaking within the messenger/sequestered sector \cite{adjk1}, $R$-symmetry violating terms in the effective low energy theory will be suppressed relative to $R$-symmetry conserving terms. Then, the trilinear terms $A_\kappa = 3 A_\lambda$ (\ref{2.19}) will be negligibly small. Although the $R$-symmetry breaking gaugino masses will typically also be smaller than the scalar masses at the messenger scale \cite{adjk}, we will consider in this subsection an illustrative scenario which is just a limiting case of the one previously discussed. We will investigate the case where the trilinear terms vanish, and where only $m_S^2$ (which can never be forbidden by symmetries) assumes natural values at the messenger scale $M$. For simplicity, we will allow for standard gaugino masses (and the usual scalar masses) as given in appendix A. Now, the scalar sector of the NMSSM has an exact $R$-symmetry at the scale $M$, with identical charges for all superfields. Given that gaugino masses break this $R$-symmetry, radiative corrections (the RG running between $M$ and the weak scale) induce $R$-symmetry violating trilinear terms in the scalar sector. If $M$ is not too large or if $\lambda$, $\kappa$ are small, these trilinear terms remain numerically small, and the $R$-symmetry in the scalar sector is only weakly broken. 
Given that this approximate $R$-symmetry is spontaneously broken at the weak scale by the VEVs of $H_u$, $H_d$ and $S$, a pseudo Goldstone boson (a pseudo $R$-axion \cite{raxion}) appears in the spectrum. Light pseudoscalars can lead to a reduction of the LEP constraints on $m_{h_1}$, and have recently been the subject of various investigations \cite{lighthiggs}. In what follows we study the phenomenological viability of such scenarios, which are defined by having all terms (\ref{2.16}) -- (\ref{2.21}) vanish except for $m_S^2$ (but vanishing $A_\kappa$, $A_\lambda$). For simplicity we will also assume that $\Delta_H$ in (\ref{2.24}) is negligibly small. Then, the model is completely specified by $\lambda$, $\tan\beta$ and the scales $M$ and $M_{SUSY}$ (recall that $\kappa$ and $m_S^2$ can be obtained from the minimization equations in terms of $M_Z$ and of the other parameters). Again we found that two completely different regions in parameter space are phenomenologically viable. As before, the first region is characterized by small values of $\tan\beta$ ($\tan\beta \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 2$) and large values of $\lambda$. Relatively large negative values for the soft mass $m_S^2$ for the singlet of the order $m_S^2 \sim -(600\ \mathrm{GeV})^2$ are required at the scale $M$ in order to generate the required VEV of~$S$. The mass $m_{a_1}$ of the lightest CP-odd scalar varies in the range $0 < m_{a_1} < 50$~GeV, where the larger values are obtained for larger messenger scales $M \sim 10^9$~GeV: then the RG evolution generates relatively large values $A_\lambda \sim 25$ GeV at the weak scale (whereas $A_\kappa$ remains very small), and this breaking of the $R$-symmetry induces a relatively large mass for the pseudo $R$-axion. On the other hand, arbitrarily small values for $A_\lambda$ and hence for $m_{a_1}$ can be obtained without any fine tuning for lower messenger scales $M$. In all cases we find that the lightest CP-even (SM like) scalar $h_1$ dominantly decays (with branching ratios of $\sim$ 80\%) into $h_1 \to a_1 a_1$, which allows for $m_{h_1} < 114$~GeV consistent with LEP constraints. For given $\lambda$, $m_{h_1}$ is nearly independent of the scales $M$ and $M_{SUSY}$, but decreases with $\tan\beta$. In Fig.~6 we show a scatterplot for $m_{h_1}$ as a function of $\tan\beta$, which is obtained for $\lambda = 0.6$, varying $M$ in the range $10^7\ \mathrm{GeV} < M < 5\times 10^9\ \mathrm{GeV}$ and $M_{SUSY}$ in the range $3.3 \times 10^4\ \mathrm{GeV} < M_{SUSY} < 4.3\times 10^4\ \mathrm{GeV}$. All points displayed satisfy LEP and $B$-physics constraints. (We have chosen $n_5 = 2$ messenger multiplets, but similar results can be obtained -- for slightly different ranges of $M$ and $M_{SUSY}$ -- for $n_5 = 1$.) \begin{figure}[ht] \begin{center} \includegraphics[angle=-90,scale=0.5]{fig6.ps} \caption{$m_{h_1}$ as a function of $\tan\beta$ for $\lambda = 0.6$, $10^7\ \mathrm{GeV} < M < 5\times 10^9\ \mathrm{GeV}$ and $3.3 \times 10^4\ \mathrm{GeV} < M_{SUSY} < 4.3\times 10^4\ \mathrm{GeV}$. 
Points where, in addition to all LEP and $B$-physics constraints, the SUSY contribution to the muon anomalous magnetic moment can (fails to) account for the presently observed deviation with respect to the Standard Model are denoted in blue/darker (gray/lighter) color.} \label{1.6f} \end{center} \end{figure} In the region $\tan\beta \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 1.7$ (where $m_{h_1} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 108$~GeV) LEP constraints are satisfied only for $m_{a_1} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 11$~GeV, so that $a_1 \to bb$ decays are forbidden and the dominant decays of $h_1$ are $h_1 \to a_1 a_1 \to 4\ \tau$ (still requiring $m_{h_1} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 88$~GeV \cite{lhg}). For $\tan\beta \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 1.7$, the dominant decays of $h_1$ are $h_1 \to a_1 a_1 \to 4\ b$, in which case LEP constraints allow for $m_{h_1}$ as low as $\sim 108$~GeV. The complete theoretically possible range for $m_{a_1}$ is now allowed by LEP. (Fixing, e.g., $M = 10^8\ \mathrm{GeV}$, the complete range $1.2 \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \tan\beta \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 1.7$ is compatible with LEP constraints on the Higgs sector within the above range of $M_{SUSY}$. For smaller $\tan\beta$, however, the hidden fine tuning becomes quite large.) Now, in some regions in parameter space, the supersymmetric contribution to the muon anomalous magnetic moment is $\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^{-9}$, which accounts for the presently observed deviation with respect to the Standard Model. The blue (darker) points in Fig.~6 (which appear only for $\tan\beta \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 1.5$) satisfy this condition. In Tables~1 and 2 we present the Higgs and sparticle spectrum for points P3 (with $\tan\beta = 1.6$) and P4 (with $\tan\beta = 1.9$), which are inside the blue region of Fig.~6. Another interesting region in parameter space is characterized by large values of $\tan\beta$ ($\tan\beta \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 30$) and small values of $\lambda$ ($\lambda \sim 10^{-2}$), associated with small values of $\kappa$ ($\kappa \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^{-3}$). In this case, comparatively small negative values for the soft mass $m_S^2$ for the singlet of the order $m_S^2 \sim -(70\ \mathrm{GeV})^2$ are required to generate the required VEV of $S$. Due to the small values of $\lambda$ and $\kappa$, $A_\lambda$ and $A_\kappa$ remain small after the RG evolution from $M$ down to the weak scale, leading to a pseudo $R$-axion with a mass $m_{a_1} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 1$~GeV. Now $a_1$ is particularly light since, for small $\kappa$, it simultaneously plays the r\^ole of a Peccei-Quinn pseudo Goldstone boson. However, due to the small value of $\lambda$, the couplings of $a_1$ (with doublet components $\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^{-3}$) are tiny, and this CP-odd scalar would be very hard to detect; the branching ratios $h_i \to a_1 a_1$ are practically vanishing.
The CP-even Higgs sector is still compatible with LEP constraints if $M$ is very large (and $M_{SUSY}$ somewhat larger than above), leading to a sparticle spectrum (and $A_{t}$) in the 1~TeV range such that top/stop induced radiative corrections lift up the CP-even Higgs masses. Interestingly, in spite of $\lambda \sim 10^{-2}$, large values for $\mu_{eff} = \lambda\left< S\right>$ still generate a large singlet/doublet mixing for the two lightest CP-even scalars. As an example, point P5 (which gives a satisfactory supersymmetric contribution to the muon anomalous magnetic moment) is shown in Tables~1 and 2. $m_{h_1} \sim 94$~GeV is well below 114~GeV, but the singlet component of $h_1$ is $\sim$ 88\% implying reduced couplings to gauge bosons. The state $h_2$ with a mass $m_{h_2} \sim 120$~GeV has still a singlet component $\sim$ 48\%. With the help of its nonsinglet components, the detection of both states seems feasible at the ILC \cite{ilc}. Also, the lightest neutralino is a nearly pure singlino (with nonsinglet components $\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 3\times 10^{-3}$), which would appear at the end of sparticle decay cascades \cite{casc}. \vspace{7mm} Throughout this paper we have not addressed the issue of dark matter. Clearly, within GMSB models the LSP is the gravitino, but heavy remnants from the messenger sector can also contribute to the relic density \cite{dimgp,messdm}. Its evaluation would require assumptions on the messenger/sequestered sector and the reheating temperature after inflation, and is beyond the scope of the present work. On the other hand, general considerations can possibly help to constrain the large variety of different scenarios found here. \section{Conclusions} We have seen in this analysis that the NMSSM can solve the $\mu$-problem in GMSB models in a phenomenologically acceptable way. Our starting point was a derivation of the magnitude of all possible supersymmetric and soft terms in a generalized NMSSM, that can be radiatively generated by integrating out a sequestered/messenger sector with couplings to the singlet superfield $\widehat{S}$. For the phenomenological analysis, we confined ourselves to scenarios where most of these terms are negligibly small. Nevertheless we found a large variety of very different viable scenarios. Scenarios with singlet tadpole terms $\it are$ acceptable, if the linear terms in $\widehat{S}$ (or $S$) are generated to higher loop order only, or if at least one small Yukawa coupling is involved. A simple concrete model \cite{ue1995} with a direct coupling of $\widehat{S}$ to the messengers is viable for a Yukawa coupling $\eta \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^{-5}$. In the case of models with forbidden tadpole terms, as those proposed in \cite{gr} and analysed in \cite{dgs}, we confirmed the phenomenological viability observed in \cite{dgs} (at least for the regions in parameter space without uncomfortably large values of $A_\kappa$). Quite interesting from the phenomenological point of view are the scenarios with vanishing $A$-terms at the messenger scale: these automatically lead to a light CP-odd Higgs scalar as studied in \cite{raxion,lighthiggs}, which plays the r\^ole of a pseudo $R$-axion. In view of the simplicity with which these scenarios can satisfy LEP constraints, it would be very desirable to develop concrete models which generate this structure for the effective NMSSM at the scale $M$. 
Finally we recall that the Fortran routine NMGMSB, which was used to obtain the results above, will be available on the website \cite{nmssmtools}. With the help of corresponding input and output files, further properties of the points P1 to P5, such as sparticle masses, couplings and branching ratios, can be obtained. \section*{Note added} After the completion of this paper another viable scenario was proposed in \cite{wagner}, in which the singlet does not couple to the messenger/sequestered sector, but where the source of supersymmetry breaking in the messenger sector is not SU(5) invariant. \section*{Acknowledgements} We are grateful to A. Djouadi and F. Domingo for helpful discussions, and acknowledge support from the French ANR project PHYS@COL\&COS. \section*{Appendix A} In this appendix we summarize the expressions for the gaugino and scalar masses (at the scale $M$), which are generated by gauge mediation under the assumptions that the messenger sector involves $n_5$ $(5 + \bar{5})$ representations under $SU(5)$ (additional $(10 +\overline{10})$ representations can be taken care of by adding three units to $n_5$) with a common SUSY mass $M$, and $F$-type mass splittings $m^2$ among the scalars and pseudoscalars. The $U(1)_Y$ coupling $\alpha_1$ is defined in the SM normalization (not in the GUT normalization). For convenience we define the scale $M_{SUSY} = m^2/M$ and the parameter $x = M_{SUSY}/M$ (typically $\ll 1$). The required one loop and two loop functions are \cite{dimgp,loopf} \begin{eqnarray}} \def\eea{\end{eqnarray} f_1(x)&=&\frac{1}{x^2}\left((1+x)\ln(1+x)+(1-x)\ln(1-x)\right),\nonumber\\ f_2(x)&=&\frac{1+x}{x^2}\left(\ln(1+x)-2Li_2\left(\frac{x}{1+x}\right) +\frac{1}{2}Li_2\left(\frac{2x}{1+x}\right)\right) + (x\to -x)\ ,\nonumber \eea which satisfy $f_1(x\to 0) = f_2(x\to 0) = 1$. Then the gaugino masses are given by \begin{eqnarray}} \def\eea{\end{eqnarray} M_1 &=& \frac{\alpha_1}{4\pi} M_{SUSY} f_1(x) \frac{5}{3}n_5\ ,\nonumber\\ M_2 &=& \frac{\alpha_2}{4\pi} M_{SUSY} f_1(x) n_5\ ,\nonumber\\ M_3 &=& \frac{\alpha_3}{4\pi} M_{SUSY} f_1(x) n_5\ ,\nonumber \eea and the scalar masses squared by \begin{eqnarray}} \def\eea{\end{eqnarray} m^2 &=& \frac{M_{SUSY}^2}{16\pi^2}\left( \frac{10}{3}Y^2 \alpha_1^2 +\frac{3}{2} \alpha_2^2{^{(1)}} +\frac{8}{3} \alpha_3^2{^{(2)}} \right) f_2(x) n_5\ .\nonumber \eea The terms $^{(1)}$ are present for $SU(2)$ doublets only, and the terms $^{(2)}$ for $SU(3)$ triplets only.
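For convenience, a small Python transcription (ours) of these formulas is given below; the numerical values of the gauge couplings $\alpha_i$ at the messenger scale are purely illustrative assumptions, the dilogarithm in $f_2$ is taken from the mpmath library, and the hypercharge assignments used are those listed below.
\begin{verbatim}
# Sketch (ours) of the GMSB gaugino and scalar mass formulas of appendix A.
# alpha_1 is SM normalized; the numerical alpha_i below are illustrative only.
import math
from mpmath import polylog

def Li2(z):
    return float(polylog(2, z))

def f1(x):
    return ((1 + x) * math.log(1 + x) + (1 - x) * math.log(1 - x)) / x**2

def f2(x):
    def half(y):
        return (1 + y) / y**2 * (math.log(1 + y)
                                 - 2.0 * Li2(y / (1 + y))
                                 + 0.5 * Li2(2 * y / (1 + y)))
    return half(x) + half(-x)

def gaugino_masses(M_SUSY, x, n5, a1, a2, a3):
    common = M_SUSY * f1(x) * n5 / (4 * math.pi)
    return 5.0 / 3.0 * a1 * common, a2 * common, a3 * common

def scalar_mass2(M_SUSY, x, n5, a1, a2, a3, Y, doublet, triplet):
    term = 10.0 / 3.0 * Y**2 * a1**2
    if doublet:
        term += 1.5 * a2**2
    if triplet:
        term += 8.0 / 3.0 * a3**2
    return M_SUSY**2 / (16 * math.pi**2) * term * f2(x) * n5

# illustrative inputs (couplings at M are rough guesses, not fitted values)
M, M_SUSY, n5 = 1.0e6, 8.0e4, 1
a1, a2, a3 = 0.010, 0.033, 0.075
x = M_SUSY / M
M1, M2, M3 = gaugino_masses(M_SUSY, x, n5, a1, a2, a3)
print(f"f1={f1(x):.4f}, f2={f2(x):.4f} (both -> 1 for x -> 0)")
print(f"M1 ~ {M1:.0f} GeV, M2 ~ {M2:.0f} GeV, M3 ~ {M3:.0f} GeV")
msq = scalar_mass2(M_SUSY, x, n5, a1, a2, a3, Y=1.0/6.0, doublet=True, triplet=True)
print(f"left handed squark mass ~ {math.sqrt(msq):.0f} GeV")
\end{verbatim}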
The hypercharges $Y$ are \begin{table}[!ht] \begin{center} \begin{tabular}{|l|r|r|r|r|r|r|} \hline & $u_L/d_L$ & $u_R$ & $d_R$ & $\nu_L/e_L$ & $e_R$ & $H_u,H_d$\\ \hline $Y$ & $\frac{1}{6}$\phantom{x\Large$\frac{1}{1}$} & $\frac{2}{3}$ & $-\frac{1}{3}$ & $ \phantom{x}-\frac{1}{2}$ \phantom{x} & $-1$ & $\pm\frac{1}{2} \phantom{xx}$\\ \hline \end{tabular} \end{center} \end{table} \section*{Appendix B} In this appendix we give the one loop $\beta$-functions for the parameters in the Higgs sector of the general NMSSM, defined by a superpotential \begin{eqnarray}} \def\eea{\end{eqnarray}\nonumber W &=& \lambda \widehat{S} \widehat{H_u} \widehat{H_d} + \frac{\kappa}{3} \widehat{S}^3 +\mu\widehat{H_u} \widehat{H_d} +\mu'\widehat{S}^2 +\xi_F\widehat{S} \\ \nonumber &&+h_t \widehat{Q}_3\widehat{H}_u \widehat{T}^c_R -h_b \widehat{Q}_3\widehat{H}_d \widehat{B}^c_R -h_\tau \widehat{L}_3\widehat{H}_d \widehat{L}^c_R\nonumber \eea and soft terms \begin{eqnarray}} \def\eea{\end{eqnarray}\nonumber V_{soft} &=& m_u^2 |H_u|^2 +m_d^2 |H_d|^2 +m_S^2|S|^2 +(\lambda A_\lambda S H_u H_d +\frac{\kappa}{3} A_\kappa S^3 + m_3^2 H_u H_d +m_s'^2 S^2 +\xi_S S \\ \nonumber && + h_tA_t Q_3H_u T^c_R -h_bA_b Q_3H_d B^c_R -h_\tau A_\tau L_3H_d L^c_R + h.c.)\ , \eea under the assumption $\sum_i Y_i m_i^2 = 0$, which is always satisfied for GMSB models. \begin{eqnarray}} \def\eea{\end{eqnarray}\nonumber \frac{d\lambda^2}{d\ln Q^2} &=& \frac{\lambda^2}{16\pi^2} \left(4\lambda^2 +2 \kappa^2 + 3 (h_t^2 +h_b^2) +h_\tau^2 -g_1^2 -3g_2^2 \right)\\ \nonumber \frac{d\kappa^2}{d\ln Q^2} &=& \frac{\kappa^2}{16\pi^2} \left(6\lambda^2 +6\kappa^2\right)\\ \nonumber \frac{d h_t^2}{d\ln Q^2} &=& \frac{ h_t^2}{16\pi^2} \left(\lambda^2 +6h_t^2 +h_b^2-\frac{16}{3}g_3^2 -3g_2^2 -\frac{13}{9}g_1^2\right)\\ \nonumber \frac{d h_b^2}{d\ln Q^2} &=& \frac{h_b^2}{16\pi^2} \left(\lambda^2 +6h_b^2 +h_t^2 +h_\tau^2 -\frac{16}{3}g_3^2 -3g_2^2 -\frac{7}{9}g_1^2\right)\\ \nonumber \frac{d h_\tau^2}{d\ln Q^2} &=& \frac{h_\tau^2}{16\pi^2} \left(\lambda^2 +3h_b^2 +4h_\tau^2 -3g_2^2 -3g_1^2\right)\\ \nonumber \frac{d\mu}{d\ln Q^2} &=& \frac{\mu}{16\pi^2} \left(\lambda^2 +\frac{3}{2}(h_t^2 +h_b^2) +\frac{1}{2}h_\tau^2 -\frac{1}{2}(g_1^2+3g_2^2)\right)\\ \nonumber \frac{d\mu'}{d\ln Q^2} &=& \frac{\mu'}{16\pi^2} \left(2\lambda^2 +2\kappa^2\right)\\ \nonumber \frac{d\xi_F}{d\ln Q^2} &=& \frac{\xi_F}{16\pi^2} \left(\lambda^2 +\kappa^2\right)\\ \nonumber \frac{d A_\lambda}{d\ln Q^2} &=& \frac{1}{16\pi^2} \left(4\lambda^2 A_\lambda +2\kappa^2A_\kappa +3(h_t^2A_t+h_b^2A_b) +h_\tau^2A_\tau+g_1^2M_1+3g_2^2M_2\right)\\ \nonumber \frac{d A_\kappa}{d\ln Q^2} &=& \frac{1}{16\pi^2} \left(6\lambda^2 A_\lambda +6\kappa^2A_\kappa\right)\\ \nonumber \frac{d A_t}{d\ln Q^2} &=& \frac{1}{16\pi^2} \left(\lambda^2 A_\lambda +6h_t^2 A_t +h_b^2A_b +\frac{13}{9}g_1^2M_1+3g_2^2M_2 +\frac{16}{3}g_3^2M_3\right)\\ \nonumber \frac{d A_b}{d\ln Q^2} &=& \frac{1}{16\pi^2} \left(\lambda^2 A_\lambda +6h_b^2 A_b +h_t^2A_t +h_\tau^2A_\tau +\frac{7}{9}g_1^2M_1+3g_2^2M_2 +\frac{16}{3}g_3^2M_3\right)\\ \nonumber \frac{d A_\tau}{d\ln Q^2} &=& \frac{1}{16\pi^2} \left(\lambda^2 A_\lambda +3h_b^2 A_b +4h_\tau^2A_\tau +3g_1^2M_1+3g_2^2M_2 \right)\\ \nonumber \frac{d m_u^2}{d\ln Q^2} &=& \frac{1}{16\pi^2} \left(\lambda^2 (m_u^2+m_d^2+m_S^2+A_\lambda^2) +3h_t^2 (m_u^2+m_T^2+m_Q^2+A_t^2) +\frac{g_1^2}{2}(m_u^2-m_d^2) \right. \\ \nonumber &&- \left. 
g_1^2M_1^2-3g_2^2M_2^2 \right)\\ \nonumber \frac{d m_d^2}{d\ln Q^2} &=& \frac{1}{16\pi^2}\left(\lambda^2 (m_u^2+m_d^2+m_S^2+A_\lambda^2) +3h_b^2 (m_d^2+m_B^2+m_Q^2+A_b^2) \right. \\ \nonumber && \left. +h_\tau^2(m_d^2+m_\tau^2+m_L^2+A_\tau^2) -\frac{g_1^2}{2}(m_u^2-m_d^2) -g_1^2M_1^2-3g_2^2M_2^2 \right) \\ \nonumber \frac{d m_S^2}{d\ln Q^2} &=& \frac{1}{16\pi^2}\left(2\lambda^2 (m_u^2+m_d^2+m_S^2+A_\lambda^2) +\kappa^2 (6m_S^2+2A_\kappa^2) \right) \\ \nonumber \frac{d m_3^2}{d\ln Q^2} &=& \frac{1}{16\pi^2}\left(\frac{m_3^2}{2}(6\lambda^2+3h_t^2+3h_b^2 +h_\tau^2-g_1^2-3g_2^2) +2\lambda\kappa m_S'^2 \right. \\ \nonumber &&\left. + \mu(2\lambda^2A_\lambda -3h_t^2A_t -3h_b^2A_b -h_\tau^2A_\tau +g_1^2M_1 +3g_2^2M_2) \right) \\ \nonumber \frac{d m_S'^2}{d\ln Q^2} &=& \frac{1}{16\pi^2}\left(m_S'^2(2\lambda^2+4\kappa^2) +2\lambda\kappa m_3^2 +4\mu'(\lambda^2A_\lambda +\kappa^2A_\kappa) \right) \\ \nonumber \frac{d \xi_S}{d\ln Q^2} &=& \frac{1}{16\pi^2}\left(\xi_S(\lambda^2+\kappa^2) +2\lambda\mu(m_u^2+m_d^2) +4\kappa \mu' m_S^2 +2\lambda m_3^2(A_\lambda+2\mu') \right. \\ \nonumber &&\left. +2\xi_F(\lambda^2A_\lambda +\kappa^2A_\kappa) +2\kappa m_S'^2(A_\kappa+2\mu') \right) \eea
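As a simple illustration of how these $\beta$-functions are used, the following Python fragment (ours) integrates the one loop equations for $\lambda^2$ and $\kappa^2$ from the messenger scale down to the SUSY scale with an elementary Euler step, keeping the gauge and third generation Yukawa couplings frozen at illustrative values; the actual analysis in NMGMSB of course integrates the full coupled system.
\begin{verbatim}
# Minimal sketch (ours) of the one loop running of lambda^2 and kappa^2;
# gauge and Yukawa couplings are frozen at illustrative values.
import math

LOOP = 16.0 * math.pi**2

def run_lambda_kappa(lam2, kap2, M, Q_susy,
                     ht2=0.9, hb2=1e-3, htau2=1e-4,
                     g1sq=0.12, g2sq=0.42, steps=2000):
    t = math.log(M**2)
    dt = (math.log(Q_susy**2) - t) / steps     # negative: running downwards
    for _ in range(steps):
        dlam2 = lam2 / LOOP * (4*lam2 + 2*kap2 + 3*(ht2 + hb2) + htau2
                               - g1sq - 3*g2sq)
        dkap2 = kap2 / LOOP * (6*lam2 + 6*kap2)
        lam2 += dlam2 * dt
        kap2 += dkap2 * dt
    return lam2, kap2

lam2, kap2 = run_lambda_kappa(lam2=0.36, kap2=0.09, M=1.0e6, Q_susy=1.0e3)
print(f"lambda(Q_SUSY) ~ {math.sqrt(lam2):.2f}, kappa(Q_SUSY) ~ {math.sqrt(kap2):.2f}")
\end{verbatim}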
\usepackage{graphicx} \usepackage{times} \usepackage{ifpdf} \usepackage{amsthm} \usepackage{algorithm} \usepackage[algo2e, ruled, vlined]{algorithm2e} \newcommand{\ignore}[1]{} \newcommand{\notinproc}[1]{#1} \newcommand{\onlyinproc}[1]{} \newtheorem{thm}{Theorem}[section] \newtheorem{theorem}[thm]{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{lemma}[thm]{Lemma} \newtheorem{conjecture}[thm]{Conjecture} \newtheorem{prop}[thm]{Proposition} \newtheorem{claim}[thm]{Claim} \newtheorem{goal}[thm]{Goal} \newtheorem{infprob}[thm]{Informal\,\,Problem} \newtheorem{problem}[thm]{Problem} \newtheorem{fact}[thm]{Fact} \newtheorem{example}[thm]{Example} \newtheorem{definition}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{corollary}[thm]{Corollary} \usepackage{xcolor} \newcommand\mycommfont[1]{{\scriptsize\ttfamily\textcolor{blue}{#1}}} \SetCommentSty{mycommfont} \renewcommand{\epsilon}{\varepsilon} \title{Differentially Private Weighted Sampling} \author{Edith Cohen \\ Google Research \\ Tel Aviv University \and Ofir Geri \\ Stanford University \and Tamas Sarlos \\ Google Research \and Uri Stemmer \\ Ben-Gurion University \\ Google Research } \date{} \begin{document} \maketitle \begin{abstract} Common datasets have the form of {\em elements} with {\em keys} (e.g., transactions and products) and the goal is to perform analytics on the aggregated form of {\em key} and {\em frequency} pairs. A weighted sample of keys by (a function of) frequency is a highly versatile summary that provides a sparse set of representative keys and supports approximate evaluations of query statistics. We propose {\em private weighted sampling} (PWS): a method that sanitizes a weighted sample so as to ensure element-level differential privacy, while retaining its utility to the maximum extent possible. PWS maximizes the reporting probabilities of keys and the estimation quality of a broad family of statistics. PWS improves over the state of the art even for the well-studied special case of {\em private histograms}, when no sampling is performed. We empirically observe significant performance gains: a 20\%-300\% increase in key reporting for common Zipfian frequency distributions and accurate estimation for frequencies that are $\times 2$-$\times 8$ lower. PWS is applied as a post-processing step to a non-private sample, without requiring the original data. Therefore, it can be a seamless addition to existing implementations, such as those optimized for distributed or streamed data. We believe that, due to its practicality and performance, PWS may become a method of choice in applications where privacy is desired. \end{abstract} \section{Introduction} {\em Weighted sampling schemes} are often used to obtain versatile summaries of large datasets. The sample constitutes a representation of the data and also facilitates efficient estimation of many statistics.
Motivated by the increasing awareness and demand for data privacy, in this work we construct {\em privacy preserving} weighted sampling schemes. The privacy notion that we work with is that of {\em differential privacy}~\cite{DMNS06}, a strong privacy notion that is considered by many researchers to be a gold-standard for privacy preserving data analysis. Before describing our new results, we define our setting more precisely. Consider an input dataset containing $m$ elements, where each element contains a key $x$ from some domain $\mathcal{X}$. For every key $x\in \mathcal{X}$ we write $w_x$ to denote the multiplicity of $x$ in the input dataset. (We also refer to $w_x$ as the {\em frequency} of $x$ in the data.) With this notation, it is convenient to represent the input dataset in its {\em aggregated} form $D=\{(x,w_x) \}$ containing pairs of a key and its frequency $w_x\geq1$ in the data. Examples of such datasets are plentiful: Keys are search query strings and elements are search requests, keys are products and elements are transactions for the products, keys are locations and elements are visits by individuals, or keys are training examples and elements are activities that generate them. We aim here to protect the privacy of data elements. These example datasets tend to be very sparse, where the number of distinct keys in the data is much smaller than the size $|\mathcal{X}|$ of the domain. Yet, the number of distinct keys can be very large and samples serve as small summaries that can be efficiently stored, computed, and transmitted. We therefore aim for our private sample to retain this property and in particular only include keys that are in the dataset. The (non-private) sampling schemes we consider are specified by a (non-decreasing) sequence $(q_i)_{i\geq 0}$ of probabilities $q_i\in [0,1]$, where $q_0 := 0$. Such a sampling scheme takes an input dataset $D=\{(x,w_x) \}$ and returns a {\em sample} $S\subseteq D$, where each pair $(x,w_x)$ is included in $S$ independently, with probability $q_{w_x}$. Loosely speaking, given a (non-private) sampling scheme $A$, we aim in this paper to design a {\em privacy preserving} variant of $A$ with the goal of preserving its ``utility'' to the extent possible under privacy constraints. We remark that an immediate consequence of the definition of differential privacy is that keys $x\in \mathcal{X}$ with very low frequencies cannot be included in the private sample $S$ (except with very small probability). On the other hand, keys with high frequencies can be included with probability (close to) $1$. Private sampling schemes can therefore retain more utility when the dataset has many keys with higher frequencies or for tasks that are less sensitive to low frequency keys. \begin{infprob}\label{infprob} Given a (non-private) sampling scheme $A$, specified by a sampling function $q$, design a {\em private} sampling scheme that takes a dataset $D=\{(x,w_x)\}$ and outputs a ``sanitized'' sample $S^* =\{(x,w^*_x)\}$. Informally, the goals are: \begin{enumerate} \item Each pair $(x,w_x)\in D$ is sampled with probability ``as close as possible'' to the non-private sampling probability $q_{w_x}$. \item The sanitized sample $S^*$ provides utility that is ``as close as possible'' to that of a corresponding non-private sample $S$. In our constructions, the sanitized frequencies $w^*_x$ would be random variables from which we can estimate ordinal and linear statistics with (functions of) the frequency $w_x$. 
\end{enumerate} \end{infprob} Informal Problem~\ref{infprob} generalizes one of the most basic tasks in the literature of differential privacy -- {\em privately computing histograms}. Informally, algorithms for private histograms take a dataset $D=\{(x,w_x)\}$ as input, and return, in a differentially private manner, a ``sanitized'' dataset $D^* =\{(x,w^*_x)\}$. It is often required that the output $D^*$ is {\em sparse}, in the sense that if $w_x=0$ then $w^*_x=0$. Commonly, we seek to minimize the expected or maximum error, over statistics on $\boldsymbol{w}$, of estimators applied to $\boldsymbol{w}^*$. One well-studied objective is to minimize $\max_{x\in \mathcal{X}} | w_x-w^*_x|$. The work on private histograms dates all the way back to the paper that introduced differential privacy~\cite{DMNS06}, and it has received a lot of attention since then, e.g.,~\cite{KorolovaKMN09,HardtT10,BeimelBKN14,BeimelNS16,BunS16,BunNS19,BalcerV18,BunDRS18}. Observe that the private histogram problem is a special case of Informal Problem~\ref{infprob}, where $q\equiv1$. At first glance, one might try to solve Informal Problem~\ref{infprob} by a reduction to the private histogram problem. Specifically, we consider the baseline where the data is first ``sanitized'' using an algorithm for private histograms, and then a (non-private) weighted sampling algorithm is applied to the sanitized data (treating the sanitized frequencies as actual frequencies). This framework, of first sanitizing the data and then sampling it, was also considered in \cite{CormodePST:ICDT2012}. We show that this baseline is sub-optimal, and improve upon it along several axes. \subsection{Our Contributions} Our proposed framework, {\em Private Weighted Sampling} (PWS), takes as input a non-private weighted sample $S$ that is produced by a (non-private) weighted sampling scheme. We apply a ``sanitizer'' to the sample $S$ to obtain a respective privacy-preserving sample $S^*$. Our proposed solution has the following advantages. \paragraph{Practicality.} The private version is generated from the sample $S$ as a post-processing step without the need to revisit the original dataset, which might be massive or unavailable. This means that we can augment existing implementations of non-private sampling schemes and retain their scalability and efficiency. This is particularly appealing for sampling schemes designed for massive distributed or streamed data that use small sketches and avoid a resource-heavy aggregation of the data~\cite{GM:sigmod98,EV:ATAP02,flowsketch:JCSS2014,AndoniKO:FOCS11,CCD:sigmetrics12,freqCapfill:TALG2018,JayaramW:Focs2018,CohenGeri:NeurIPS2019,CohenGeriPagh:ICML2020}. Our code is available at {\tiny \url{github.com/google-research/google-research/tree/master/private_sampling}}. \paragraph{Benefits of end-to-end privacy analysis.} PWS achieves better utility compared to the baseline of first sanitizing the data and then sampling. In spirit, our gains follow from a well-known result in the literature of differential privacy stating that applying a differentially private algorithm to a random sample from the original data has the effect of boosting the privacy guarantees of the algorithm \cite{KamalikaMishra:Crypto2006,KasiviswanathanLNRS11,BunNSV15}. Our solution is derived from a precise {\em end-to-end} formulation of the privacy constraints that accounts for the benefits of the random sampling in our privacy analysis.
\paragraph{Optimal reporting probabilities.} PWS is optimal in that it maximizes the probability that each key $x$ is included in the private sample. The private reporting probability of a key $x$ depends on the privacy parameters, frequency, and sampling rate and is at most the non-private sampling probability $q_{w_x}$. The derivation is provided in Section~\ref{basic:sec}. \paragraph{Estimation of linear statistics.} Linear statistics with respect to a function of frequency have the form: \onlyinproc{ $\sum_x L(x) g(w_x)$, } \notinproc{ \begin{equation} s := \sum_x L(x) g(w_x)\enspace , \end{equation} } where $g(w_x) \geq 0$ is a non-decreasing function of frequency with $g(0):=0$. The most common use case is when $L(x)$ is a predicate and $g(w) := w$, so the statistic is the sum of frequencies of keys that satisfy the selection $L$. Our PWS sanitizer in Section~\ref{refined:sec} maintains optimal reporting probabilities and provides private information on frequencies of keys. We show that, in general, differential privacy does not allow for unbiased estimators of such statistics without a significant increase in variance. We propose biased but nonnegative and low-variance estimators. \paragraph{Estimation of ordinal statistics.} Ordinal statistics, such as (approximate) quantiles and top-$k$ sets, are derived from the order of keys that is induced by their frequencies. This order can be approximated by the order induced by PWS sanitized frequencies. We show that PWS is optimal, over all DP sanitization schemes, for a broad class of ordinal statistics. In particular, PWS maximizes the probability that {\em any} pair is concordant and maximizes the expected Kendall-$\tau$ rank correlation between the order induced by sanitized and true frequencies. \paragraph{Improvement over prior baselines.} We show analytically and empirically in Section~\ref{experiments:sec} that we obtain an orders-of-magnitude increase in reporting probability in low-frequency regimes. For estimation tasks, both PWS and prior schemes have lower error for higher frequencies, but PWS achieves accurate estimation for frequencies that are $\times 2$-$\times 8$ lower than with prior schemes. This is particularly helpful for datasets/selections with many low-to-medium frequency keys. \paragraph{Improvement for private histograms.} As an important special case of our results, we improve upon the state-of-the-art constructions for private (sparse) histograms~\cite{KorolovaKMN09,BunNS19}. These existing constructions obtain their privacy properties by adding Laplace or Gaussian noise to the frequencies of the keys, whereas we directly formulate and solve elementary constraints. Let $\pi^*_i$ denote the PWS reporting probability of a key with frequency $i$, when applied to the special case of private histograms. Let $\phi_i$ denote the reporting probability of the state-of-the-art solution for private (sparse) histograms of~\cite{KorolovaKMN09,BunNS19}. Clearly $\pi^*_i$ is always at least $\phi_i$. We show that in low-frequency regimes we have $\pi^*_i/\phi_i \approx 2 i$. Similarly, for estimation tasks, PWS provides more accurate estimates in these regimes. Qualitatively, both PWS and prior private-histogram methods have high reporting probabilities and low estimation error for high frequencies, but PWS significantly improves on low to medium frequencies, which is important for distributions with long tails. We empirically show gains of 20\%-300\% in overall key reporting for Zipf-distributed frequencies.
As private histograms are one of the most important building blocks in the literature of differential privacy, we believe that our improvement is significant (both in theory and in practice). \onlyinproc{Due to space limitations, we refer the reader to \cite{PWS:arxiv2020} for proofs and further details.} \section{Related Work} The suboptimality of the Laplace mechanism for anonymization was noted by \cite{GhoshRS:sicomp2012}. In our language, Ghosh et al.\ studied the non-sparse case, where all values, including $0$ values, can be reported with added noise. They did not consider sampling, and studied pure differential privacy. Instead of Laplace noise, they propose the use of a symmetric Geometric distribution and establish that it is optimal for certain estimation tasks. Their setting can be viewed as a limiting case of ours: our schemes converge to theirs when there is no sampling, pure differential privacy is used, and frequencies are large (so that the effect of the sparsity constraint dissipates). Ghosh et al.\ establish the optimality of unbiased estimators for some frequency statistics when the loss is symmetric. We show that bias is necessary in the sparse case and propose estimators that control the bias and variance. Key reporting was formulated and studied as the {\em differentially private set union} problem \cite{GopiGJSSY:ICML2020}. They studied it without sampling, in a more general user-privacy setting, and proposed a truncated Laplace noise mechanism similar to~\cite{KorolovaKMN09,BunNS19}. Recent independent work by \cite{desfontainesVG:2020} derived the optimal scheme for key reporting for sparse private histograms, a special case of our solution when there is no sampling. \section{Preliminaries} We consider data in the form of a set of elements $\mathcal{E}$, where each element $e\in\mathcal{E}$ has a key $e.\mathop{\mathrm{key}} \in \mathcal{X}$. The {\em frequency} of a key $x$, $w_x := \left|\{ e\in\mathcal{E} \mid e.\mathop{\mathrm{key}}=x\}\right|$, is defined as the number of elements with $e.\mathop{\mathrm{key}} = x$. The {\em aggregated form} of the data, known in the DP literature as its {\em histogram}, is the set of key and frequency pairs $\{(x,w_x)\}$. We use the vector notation $\boldsymbol{w}$ for the aggregated form. We will use $m:= |\mathcal{E}|$ for the number of elements and $n$ for the number of distinct keys in the data. \subsection{Weighted Sampling} \label{weightedsamplingprelim:sec} We consider a very general class of without-replacement sampling schemes. Each scheme is specified by non-decreasing probabilities $(q_i)_{i\geq 1}$. The probability that a key is sampled depends on its frequency -- a key with frequency $i$ is sampled independently with probability $q_i$. Our proposed methods apply with any non-decreasing $(q_i)$. {\em Threshold sampling} is a popular class of weighted sampling schemes. We review it for concreteness and motivation and use it in our empirical evaluation of PWS. A threshold sampling scheme (see Algorithm~\ref{alg:threshold}) is specified by $(\mathcal{D},f,\tau)$, where $\mathcal{D}$ is a distribution, $f$ is a function of frequency, and $\tau$ is a numeric threshold value that specifies the sampling rate. For each key we draw i.i.d.\ $u_x \sim \mathcal{D}$.
The two common choices are $\mathcal{D} = \textsf{Exp}[1]$ for a probability proportional to size without replacement (ppswor) sample~\cite{Rosen1972:successive} and $\mathcal{D} = U[0,1]$ for a Poisson Probability Proportional to Size (PPS) sample~\cite{Ohlsson_SPS:1990,Ohlsson_SPS:1998,DLT:jacm07}. A key $x$ is included in the sample if $u_x \leq \tau f(w_x)$. The probability that a key with frequency $i$ is sampled is \begin{equation} \label{qdef:eq} q_i := \Pr_{u\sim \mathcal{D}}[u < f(i)\tau]\enspace . \end{equation} \ignore{ A weighted sample can be used to estimate linear statistics of the form \[ s := \sum_x L(x) g(w_x)\enspace . \] The per-key inverse-probability estimator~\cite{HT52} of $g(w_x)$ is defined as follows: \begin{equation} \label{invprob:eq} a_{w_x} := \begin{cases} \frac{g(w_x)}{q_{w_x}} & x\text{ is included in the sample}\\ 0 & \text{otherwise} \end{cases}\enspace . \end{equation} Our estimate of a query statistics will be the sum \[ \hat{s} := \sum_{(x,w_x)\in S} L(x) a_{w_x}\enspace . \] The estimate is unbiased, i.e., $\textsf{E}[\hat{s}] = s$. } Threshold sampling is related to bottom-$k$ (order) sampling~\cite{Rosen1997a,Ohlsson_SPS:1990,DLT:jacm07,bottomk07:ds,bottomk:VLDB2008} but instead of specifying the sample size $k$ we specify an inclusion threshold $\tau$. Ppswor is equivalent to drawing keys sequentially with probability proportional to $f(w_x)$. The bottom-$k$ version stops after $k$ keys and the threshold version has a stopping rule that corresponds to the threshold. The bottom-$k$ version of Poisson PPS sampling is known as sequential Poisson or Priority sampling. \begin{algorithm2e}\caption{Threshold Sampling}\label{alg:threshold} {\scriptsize \DontPrintSemicolon \tcp{{\bf Threshold Sampler:}} \KwIn{Dataset $\boldsymbol{w}$ of key frequency pairs $(x,w_x)$; distribution $\mathcal{D}$, function $f$, threshold $\tau$} \KwOut{Sample $S$ of key-frequency pairs from $\boldsymbol{w}$} \Begin{ $S \gets \emptyset$\; \ForEach{$(x,w_x) \in \boldsymbol{w}$} { Draw independent $u_x\sim \mathcal{D}$\; \If{$u_x < f(w_x)\tau$}{$S \gets S\cup\{(x,w_x)\}$ } } \Return{S} } } \end{algorithm2e} Since PWS applies a sanitizer to a sample, it inherits the efficiency of the base sampling scheme. Threshold sampling (via the respective bottom-$k$ schemes) can be implemented efficiently using small sketches (of size expected sample size) on aggregated data that can be distributed or streamed \cite{DLT:jacm07,Rosen1997a,Ohlsson_SPS:1998,bottomk07:ds}. On unaggregated datasets, it can be implemented using small sketches for some functions of frequency including the moments $f(w)=w^p$ for $p\in [0,2]$ \cite{CCD:sigmetrics12,freqCapfill:TALG2018,CohenGeri:NeurIPS2019,CohenPW:NeurIPS2020}. Our methods apply with a fixed threshold $\tau$. But the treatment extends to when the threshold is privately determined from the data. If we have a private approximation of the total count $\|f(\boldsymbol{w})\|_1 := \sum_x f(w_x)$ we can set $\tau \approx k/\|f(\boldsymbol{w})\|_1$. This provides (from the non-private sample that corresponds to the threshold) estimates with additive error $\|f(\boldsymbol{w})\|_1/\sqrt{k}$ for statistics with function of frequency $g=f$ and when $L$ is a predicate. \subsection{Differential Privacy} The privacy requirement we consider is {\em element-level} differential privacy. 
Two datasets with aggregated forms $\boldsymbol{w}$ and $\boldsymbol{w}'$ are neighbors if $\| \boldsymbol{w}-\boldsymbol{w}' \|_1 = 1$, that is, the frequencies of all keys but one are the same, and the frequency of that one key differs by exactly $1$. The privacy requirements are specified using two parameters $\epsilon, \delta \geq 0$. \begin{definition}[\cite{DMNS06}] A mechanism $M$ is $(\varepsilon,\delta)$-differentially private if for any two neighboring inputs $\boldsymbol{w}$, $\boldsymbol{w}'$ and set of potential outputs $T$, \begin{equation} \label{DP:eq} \Pr[M(\boldsymbol{w}) \in T] \leq e^\varepsilon \Pr[M(\boldsymbol{w}') \in T] + \delta \enspace . \end{equation} \end{definition} \begin{algorithm2e}\caption{Private Weighted Samples}\label{alg:sanitizer} {\scriptsize \DontPrintSemicolon \tcp{{\bf Sanitized Keys:}} \KwIn{$(\epsilon,\delta)$, weighted sample $S$, taken with non-decreasing probabilities $(q_i)_{i\geq 1}$} \KwOut{Private sample of keys $S^*$} Compute $(p_i)_{i\geq 1}$ \tcp*{Reporting probabilities per freq.} \Begin(\tcp*[h]{Sanitize using scheme}){ $S^* \gets \emptyset$\; \ForEach{$(x,w_x) \in S$} { With probability $p_{w_x}$, $S^* \gets S^* \cup \{x\}$ } \Return{$S^*$} } \tcp{{\bf Sanitized keys and frequencies:}} \KwIn{$(\epsilon,\delta)$, weighted sample $S$, taken with non-decreasing probabilities $(q_i)_{i\geq 1}$} \KwOut{Sanitized sample $S^*$} Compute probability vectors $(p_{i\bullet})_{i\geq 1}$ \tcp*{Reported values} \Begin(\tcp*[h]{Sanitize using scheme}){ $S^* \gets \emptyset$\; \ForEach{$(x,w_x) \in S$} { Draw $j \sim p_{w_x \bullet}$ \tcp*{By probability vector} \If{$j>0$}{$S^* \gets S^* \cup \{(x,j)\}$} } \Return{$S^*$} } \tcp{{\bf Estimator:}} \KwIn{Sanitized sample $S^*=\{(x,j_x)\}$, $\{\pi_{i,j}\}$ (where $\pi_{ij} := p_{ij} q_i$), functions $g(i)$, $L(x)$} \KwOut{Estimate of the linear statistics $\sum_x L(x) g(w_x)$} \Begin{ Compute $(a_j)_{j\geq 1}$ using $\{\pi_{ij}\}$ and $g(i)$ \tcp*{Per-key estimates for $g()$} \Return{$\sum_{(x,j_x)\in S^*} L(x) a_{j_x}$} } } \end{algorithm2e} \subsection{Private Weighted Samples} Given a (non-private) weighted sample $S$ of the data in the form of key and frequency pairs and (a representation of) the sampling probabilities $(q_i)_{i\geq 1}$ that guided the sampling, our goal is to release as much of $S$ as we can without violating element-level differential privacy. We consider two utility objectives. The basic objective, {\em sanitized keys}, is to maximize the reporting probabilities of keys in $S$. The private sample in this case is simply a subset of the keys in $S$. The refined objective is to facilitate estimates of linear frequency statistics and ordinal statistics. The private sample includes sanitized keys from $S$ together with information on their frequencies. The formats of the sanitizers and estimators are provided as Algorithm~\ref{alg:sanitizer}. \section{Sanitized Keys} \label{basic:sec} \begin{algorithm2e}\caption{Compute $\pi_{i}$ for Sanitizing Keys}\label{alg:reportkeys} {\scriptsize \DontPrintSemicolon \KwIn{$(\epsilon,\delta)$, non-decreasing sampling probabilities $(q_i)_{i\geq 1}$, $\text{Max$\_$Frequency}$} $\pi_0 \gets 0$\; \ForEach{$i=1,\ldots,\text{Max$\_$Frequency}$} { $\pi_{i} \gets \min\{q_{i}, e^\varepsilon \pi_{i-1} + \delta, 1+ e^{-\varepsilon}(\pi_{i-1}+\delta-1) \}$ } \Return{$(\pi_{i})_{i=1}^{\text{Max$\_$Frequency}}$} } \end{algorithm2e} A sanitizer $C$ uses a representation of the non-decreasing $(q_i)_{i\geq 1}$ and computes respective probabilities $(p_i)_{i\geq 1}$.
A non-private sample $S$ can then be sanitized by considering each pair $(x,w_x)\in S$ and reporting the key $x$ independently with probability $p_{w_x}$. We find it convenient to express constraints on $(p_i)_{i\geq 1}$ in terms of the {\em end-to-end} reporting probability of a key $x$ with frequency $i$ (probability that $x$ is sampled and then reported): \[\pi_i := p_i q_i =\Pr[x\in C(A(\boldsymbol{w}))] \enspace .\] Keys of frequency $0$ are not sampled or reported and we have $q_0=0$ and $\pi_0 := 0$. The objective of maximizing $p_i$ corresponds to maximizing $\pi_i$. We establish the following\notinproc{ (The proof is provided in Appendix~\ref{proofkeybasic:sec})}: \begin{lemma} \label{basicpi:lemma} Consider weighted sampling scheme $A$ where keys are sampled independently according to a non-decreasing $(q_i)_{i\geq 1}$ and a key sanitizer $C$ (Algorithm~\ref{alg:sanitizer}) is applied to the sample. Then the probabilities $p_i \gets \pi_i/q_i$, where $\pi_i$ are the iterates computed in Algorithm~\ref{alg:reportkeys}, are each at the maximum under the DP constraints for $C(A())$. Moreover, $(\pi_i)_{i\geq 1}$ is non-decreasing. \end{lemma} \subsection{Structure and Properties of ${\boldsymbol{(\pi_i)_{i\geq 1}}}$} \label{keybasic:sec} The solution as computed in Algorithm~\ref{alg:reportkeys} applies with any non-decreasing $q_i$. We explore properties of the solution that allow us to compute and store it more efficiently and understand the reporting loss (reduction in reporting probabilities) that is due to the privacy requirement. \notinproc{Proofs are provided in Appendix~\ref{proofkeybasic:sec}.} We provide closed-form expressions of the solution $\pi^*_i$ that corresponds to $q_i=1$ for all $i$ (aka the private histogram problem). We will use the following definition of $L(\epsilon,\delta)$. To simplify the presentation, we assume that $\epsilon$ and $\delta$ are such that $L$ is an integer (this assumption can be removed). \begin{equation}\label{Ldef:eq} L(\epsilon,\delta) := \frac{1}{\varepsilon} \ln\left( \frac{e^\varepsilon -1 +2\delta}{\delta(e^\varepsilon +1)} \right) \approx \frac{1}{\varepsilon} \ln\left(\frac{\min\{1,\varepsilon/2\}}{\delta} \right) \end{equation} \begin{lemma} \label{privatehist:lemma} When $q_i=1$ for all $i$, the sequence $(\pi_i)_{i\geq 1}$ computed by Algorithm~\ref{alg:reportkeys} has the form: \begin{equation} \label{pistar:eq} \pi^*_i = \left\{ \begin{array}{ll} \delta \frac{e^{\varepsilon i}-1 }{e^\varepsilon -1} &\; \text{\small $i\leq L+1$ }\\ 1- \delta \frac{e^{\varepsilon (2L+2-i)} -1}{e^{\varepsilon} -1} &\; \text{\small $L+1 \leq i \leq 2L+1$ }\\ 1 &\; \text{\small $i \geq 2L+2$ } \end{array}\right. \end{equation} \end{lemma} For the general case where the $q_i$'s can be smaller than 1, we bound the number of frequency values for which $\pi_i < q_i$. On these frequencies, the private reporting probability is strictly lower than that of the original non-private sample, and hence there is reporting loss due to privacy. \begin{lemma} \label{totaltwothree:lemma} There are at most $2L(\varepsilon,\delta)+1$ values $i$ such that $\pi_i < q_i$, where $L$ is as defined in \eqref{Ldef:eq}. \end{lemma} We now consider the structure of the solution for threshold sampling. The solution has a particularly simple form that can be efficiently computed and represented. 
\begin{lemma} \label{niceq:lemma} When the sampling probabilities $(q_i)_{i\geq 1}$ are those of threshold ppswor sampling with $f(i)=i$, then the solution has the form $\pi_i = \pi^*_i$ for $i< \ell$ and $\pi_i = q_i$ for $i\geq \ell$, where $\ell = \min\{i : \pi^*_i > q_i\}$ is the lowest position with $\pi^*_i > q_i$ and $\pi^*_i$ is as defined in \eqref{pistar:eq}. \end{lemma} \section{Sanitized Keys and Frequencies} \label{refined:sec} \begin{algorithm2e}\caption{Compute $(\pi_{i,j})$ for Sanitized Frequencies}\label{alg:reportfreq} {\scriptsize \DontPrintSemicolon \KwIn{$(\epsilon,\delta)$, non-decreasing $(q_i)_{i\geq 1}$, $\text{Max\_Frequency}$} \KwOut{$(\pi_{i,j})$ for $0\leq j\leq i \leq \text{Max\_Frequency}$} $\pi_{0,0} \gets 1$, $\pi_0=0$\; \ForEach(\tcp*[h]{Iterate over rows}){$i= 1,\ldots, \textrm{Max\_Frequency}$}{ $\pi_{i} \gets \min\{q_{i}, e^\varepsilon \pi_{i-1} + \delta, 1+ e^{-\varepsilon}(\pi_{i-1}+\delta-1) \}$\tcp*{End-to-end probability to output a key with frequency $i$} $\pi_{i,0} \gets 1- \pi_{i}$\; \ForEach(\tcp*[h]{Set lower bound values; use $[a]_+ :=\max\{a,0\}$}){$j=1,\ldots,i-1$} { $\pi_{i,j} \gets e^{-\varepsilon} \left(\sum_{h=1}^j \pi_{i-1,h} - \delta\right) - \sum_{h=1}^{j-1} \pi_{i,h}$ \\ $+ \left[ e^{-\varepsilon} \pi_{i-1,0}-\pi_{i,0}\right]_+$ \; $\pi_{i,j} \gets [\pi_{i,j}]_+$\; } $R \gets \pi_i - \sum_{h=1}^{i-1} \pi_{i,h}$ \tcp*{Remaining probability to assign}\; \ForEach(\tcp*[h]{Set final values for $\pi_{i,j}$}){$j=i,\ldots,1$} { \lIf{$R=0$}{{\bf Break}} $U \gets e^\varepsilon \sum_{h=j}^{i-1}\pi_{i-1,h} + \delta -\sum_{h=j+1}^i \pi_{i,h}$ \tcp*{Max value allowed for $\pi_{i,j}$}\; \eIf{$U-\pi_{i,j} \leq R$}{ $R\gets R- (U-\pi_{i,j})$\; $\pi_{i,j} \gets U$ } { $\pi_{i,j} \gets \pi_{i,j}+R$\; $R \gets 0$ } } } \Return{$(\pi_{i,j})$} } \end{algorithm2e} A frequency sanitizer $C$ returns keys $x$ together with sanitized information on their frequency. We use $p_{i,j}$ for the probability that $C$ reports $j\in[t]$ for a sampled key that has frequency $i$, with $p_{i,0}$ being the probability that the sampled key is not reported. We have that $\sum_{j=1}^{t} p_{i,j}$ is the total probability that a sampled key with frequency $i$ is reported by $C$. We use \[\pi_{i,j} \gets q_i p_{i,j}\] for the end-to-end probability that a key with frequency $i$ is sampled and reported in the private sample with sanitized value $j$. For notational convenience, we use $\pi_{i,0} := 1-\sum_{j=1}^t \pi_{i,j}$ for the probability that a key is not reported, making $\pi_{i,\bullet}$ probability vectors. The reader can interpret the returned value $j$ as a token from an ordered domain. The estimators we propose depend only on the order of tokens and not their values and hence are invariant to a mapping of the domain that preserves the order. We express constraints on $(\pi_{i,j})_{i\geq 0,j\geq 0}$. For a solution to be {\em realizable}, we must have end-to-end reporting probabilities that do not exceed the sampling probabilities: \begin{equation} \label{qbound2:eq} \forall i,\ \sum_{j=1}^{t} \pi_{i,j} \leq q_i\enspace . \end{equation} The DP constraints are provided in the sequel.
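Before stating the DP constraints for frequency sanitizers, we note that the key-reporting component reused from Section~\ref{basic:sec} is straightforward to implement. The following minimal Python sketch (parameter values are illustrative; threshold ppswor with $f(i)=i$ is assumed, so $q_i=1-e^{-i\tau}$) computes the probabilities of Algorithm~\ref{alg:reportkeys} and applies the sanitized-keys scheme of Algorithm~\ref{alg:sanitizer} to a given sample.
\begin{verbatim}
# Minimal sketch of the sanitized-keys pipeline (illustrative values only).
# Assumes threshold ppswor with f(i) = i, i.e. q_i = 1 - exp(-i*tau); the
# scheme itself applies to any non-decreasing (q_i).
import math, random

def sampling_probs(max_freq, tau):
    return [0.0] + [1.0 - math.exp(-i * tau) for i in range(1, max_freq + 1)]

def reporting_probs(q, eps, delta):
    # Recurrence of Algorithm "Compute pi_i for Sanitizing Keys".
    pi = [0.0]
    for i in range(1, len(q)):
        pi.append(min(q[i],
                      math.exp(eps) * pi[i - 1] + delta,
                      1.0 + math.exp(-eps) * (pi[i - 1] + delta - 1.0)))
    return pi

def sanitize_keys(sample, q, pi):
    # Report a sampled key with frequency w with probability p_w = pi_w / q_w.
    return [x for x, w in sample
            if q[w] > 0 and random.random() < pi[w] / q[w]]

if __name__ == "__main__":
    eps, delta, tau = 0.1, 1e-6, 0.01            # hypothetical parameters
    q = sampling_probs(1000, tau)
    pi = reporting_probs(q, eps, delta)
    sample = [("a", 3), ("b", 40), ("c", 700)]   # toy (key, frequency) pairs
    print(sanitize_keys(sample, q, pi))
\end{verbatim}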
Note that we must have $\sum_{j=1}^{t} \pi_{i,j}\leq \pi_i$, where $(\pi_i)_{i\geq 1}$ is the solution for sanitized keys (Algorithm~\ref{alg:reportkeys}), this because the sanitized frequencies DP constraints are a superset of the sanitized keys constraints -- we obtain the latter in the former by considering outputs that group together all outputs with a key $x$ with all possible values of $j >0$. For optimality, we seek solutions that (informally) {\em maximally separate} the distributions of different frequencies (minimize the overlap), over all possible DP frequency reporting schemes. We will see that maximum separation can (i)~always be achieved by a discrete distribution (when the maximum frequency is bounded) and (ii)~can be simultaneously achieved between any pair of frequencies. In particular, we maintain optimal reporting, that is, $\sum_{j=1}^{t} \pi_{i,j} = \pi_i$ and $\pi_{i,0} = \overline{\pi}_i$. The solutions we express are such that for $i_1 > i_2$, $\pi_{i_1,\bullet}$ (first-order) stochastically dominates $\pi_{i_2,\bullet}$: That is, for any $h$, the probability of a token $j\geq h$ is non-decreasing with frequency. We present two algorithms that express $(\pi_{i,j})$. Algorithm~\ref{alg:reportfreq} provides a simplified construction, where $t$ is equal to the maximum frequency and we always report $j\leq i$ for a key with frequency $i$. The sanitizer satisfies realizability and DP and has optimal key reporting but attains maximum separation only under some restrictions. The values $\pi_{i,j}$ are specified in order of increasing $i$, where the row $\pi_{i,\bullet}$ is set so that the probability mass of $\pi_i$ is pushed to the extent possible to higher $j$ values. Algorithm~\ref{alg:Creport} specifies PDFs $(f_i)$ for a frequency sanitizer. The PDFs have a discrete point mass at $0$ (that corresponds to the probability of not reporting) and are piecewise constant elsewhere. The scheme is a refinement of the scheme of Algorithm~\ref{alg:reportfreq} and, as we shall see, for any $(q_i)$ and $(\varepsilon,\delta)$, it maximally separates sanitized values for different frequencies. The construction introduces at most $3m$ distinct breakpoints for frequencies up to $m$ and can be discretized to have an equivalent $(\pi_{i,j})$ form with $j\in [3m]$. \notinproc{ (More details and proofs are provided in Appendix~\ref{general:sec}.)} \begin{theorem}\label{theorem:reported-freq-dist} The sanitizer with $(\pi_{i,j})$ expressed in Algorithm~\ref{alg:Creport} satisfies: \begin{enumerate} \item $\forall i,\, \sum_{j=1}^i \pi_{i,j} = \pi_i$, and in particular, \eqref{qbound2:eq} holds and the sanitizer is realizable. \item $(\varepsilon,\delta)$-DP \item \label{separation:cond} Maximum separation: For each $i$, there is an index $c_i$ so that subject to the above and to row $\pi_{i-1,\bullet}$, for all $j'\leq c_i$, the sum $\sum_{j=1}^{j'} \pi_{ij}$ is at a minimum and for all $j'\geq c_i$, the sum $\sum_{j=j'}^{i} \pi_{ij}$ is at a maximum. \end{enumerate} \end{theorem} The $(\pi_{i,j})$ expressed by Algorithm~\ref{alg:reportfreq} satisfy maximum separation (Property~\ref{separation:cond}) under the particular restrictions on the reported values (that only $i$ different outputs are possible for frequencies up to $i$). 
\notinproc{ (The proofs are provided in Appendix~\ref{proofssanitizedKF:sec})} The $(\pi_{i,j})$ expressed by Algorithm~\ref{alg:Creport} and then discretized satisfy maximum separation (Property~\ref{separation:cond}) unconditionally, over all DP frequency sanitization schemes. For the special case where $q_i=1$ for all $i$, Algorithm~\ref{alg:reportfreq} provides maximum separation. We provide a closed-form expression for the solution $\pi^*_{i,j}$. \begin{lemma} \label{formij:lemma} Let the DP parameters $(\varepsilon,\delta)$ be such that $L(\varepsilon,\delta)$ as in \eqref{Ldef:eq} is integral. Let $(\pi^*_{ij})$ be the solution computed in Algorithm~\ref{alg:reportfreq} for $q_i=1$ for all $i$. Then the matrix with entries $\pi^*_{ij}$ for $i,j\geq 1$ has a lower triangular form, with the non-zero entries as follows: \[ \text{For } j\in \{\max\{1, i-2L\},\ldots, i\},\ \pi^*_{ij} = \pi^*_{i-j+1} - \pi^*_{i-j}\enspace . \] Equivalently, $ \pi^*_{i,j} = \begin{cases} \delta e^{(i-j)\varepsilon} &\, \text{if $0\leq i-j\leq L$}\\ \delta e^{(2L-(i-j))\varepsilon} &\, \text{if $L+1\leq i-j \leq 2L$ .} \end{cases} $ \end{lemma} \begin{algorithm2e}\caption{Compute $(f_i)$ for Sanitizing Frequencies}\label{alg:Creport} {\scriptsize \DontPrintSemicolon \KwIn{$(\epsilon,\delta)$, non-decreasing sampling probabilities $(q_i)_{i\geq 1}$, $\text{Max$\_$Frequency}$ } \KwOut{$(f_i)_{i=0}^{\text{Max$\_$Frequency}} $, where $f_i:[0,i]$ \tcp*{PDF of sanitized frequency for frequency $i$: discrete mass at $f_i(0)$ (probability of not reporting) and density on $(0,i]$}} $f_0(0) \gets 1$; $\pi_i \gets 0$ \tcp*{Keys with frequency $0$ are never reported} \For(\tcp*[h]{Specify $f_i:[0,i]$}){$i\gets 1$ \KwTo $\text{Max$\_$Frequency}$} { $\pi_{i} \gets \min\{q_{i}, e^\varepsilon \pi_{i-1} + \delta, 1+ e^{-\varepsilon}(\pi_{i-1}+\delta-1) \}$; $f_i(0) \gets 1-\pi_i$ \tcp*{Reporting probability for $i$} $f_{i}(i-1,i] \gets \min\{\pi_i,\delta\}$\; \tcp{Represent a function $f_L:(0,i-1]$ that "lower bounds" $f_i$} \If {$\max\{0,e^{-\varepsilon} f_{i-1}(0) - f_i(0)\} + \int_{0^+}^{i-1} f_{i-1}(x) dx \leq \delta$} {$f_L(0,i-1]\gets 0$} \Else { $\mathit{b}_i \gets$ $z$ that solves $ \max\{0,e^{-\varepsilon} f_{i-1}(0) - f_i(0)\}+\int_{0^+}^z f_{i-1}(x) dx = \delta$ \tcp*{Well defined, as from DP, we always have $\max\{0,e^{-\varepsilon} f_{i-1}(0) - f_i(0)\}\leq \delta$} $f_L(0,b_i]\gets 0$\; \lFor{$x\in (\mathit{b}_i,i-1]$}{$f_L(x) \gets e^{-\varepsilon}f_{i-1}(x)$ }} \tcp{Point where $f_i(x)-f_{i-1}(x)$ switches sign} $c_i \gets$ $z$ that solves $\int_{0}^{z} f_L(x)dx + e^\varepsilon \int_z^{i-1} f_{i-1}(x) dx = \pi_i -\min\{\pi_i,\delta\}$ \tcp*{Any solution $z\in (0,i-1]$} \lFor{$x\in (c_i,i-1]$}{$f_i(x)= e^\varepsilon f_{i-1}(x)$} \lFor{$x \in (0,c_i]$}{$f_{i}(x) \gets f_L(x)$} } } \Return{$(f_i)_{i=0}^{\text{Max$\_$Frequency}}$} \end{algorithm2e} \section{Estimation of Ordinal Statistics} The sanitized frequencies can be used for estimation of statistics specified with respect to the actual frequencies. In this section we consider {\em ordinal} statistics, that only depend on the order of frequencies but not on their nominal values. Ordinal statistics include (approximate) top-$k$ set, quantiles, rank of a key, set of keys with a higher (or lower) rank than a specified key, and more. We approximate ordinal statistics from the ordering of keys that is induced by sanitized frequencies. 
The quality of estimated ordinal statistics is determined by the match between the order induced by exact frequencies and the order induced by sanitized frequencies. We say that the two orders are {\em concordant} on a subset of keys $\{x_i\}$, when they match on that subset. Since the output of our sanitizer is stochastic, we consider the probability of a subset being concordant. When sanitized values are discrete and two keys have the same sanitized value, we use probability of $0.5$ that two keys are concordant. We define \notinproc{(see Appendix~\ref{maxseparation:sec})} a measure of separation between distributions $f_{i_1}$ and $f_{i_2}$ at a certain quantile value $\alpha$ and show that the $(f_i)$ constructed by Algorithm~\ref{alg:Creport} maximize it pointwise for any $i_1,i_2,\alpha$. This measure generalizes and follows from Property~\ref{separation:cond} stated in Theorem~\ref{theorem:reported-freq-dist}. As a corollary we show\notinproc{ (The proof is provided in Appendix~\ref{maxseparation:sec})}: \begin{corollary} \label{ordinalstat:coro} The sanitizing scheme specified by the $(f_i)$ computed by Algorithm~\ref{alg:Creport} maximizes the following: The probability that a subset of keys is concordant, the probability that a key is correctly ordered with respect to all other keys, and the expected Kendall-$\tau$ rank correlation. \end{corollary} Note that we get optimality in a strong sense -- there is no Pareto front where concordant probability on some pairs of frequencies needs to be reduced in order to get a higher value for other pairs. \section{Estimation of Linear Frequency Statistics} The objective is to estimate statistics of the form \begin{equation} \label{qstat:eq} s := \sum_x L(x) g(w_x)\enspace . \end{equation} We briefly review estimators for the non-private setting where the sample consists of pairs $(x,w_x)$ of keys and their frequency. We use the per-key inverse-probability estimators~\cite{HT52} (also known as importance sampling). The estimate $\widehat{g(w_x)}$ of $g(w_x)$ is $0$ if key $x$ is not included in the sample and otherwise the estimate is \onlyinproc{$a_{w_x} := \frac{g(w_x)}{q_{w_x}}$.} \notinproc{ \begin{equation} \label{invprob:est} a_{w_x} := \frac{g(w_x)}{q_{w_x}} \enspace . \end{equation} } These estimates are nonnegative, a desired property for nonnegative values, and are also unbiased when $q_{w_x}>0$. The estimate for the query statistics \eqref{qstat:eq} is \begin{equation} \label{sumest:eq} \hat{s} := \sum_{(x,w_x)} L(x) \widehat{g(w_x)} = \sum_{(x,w_x)\in S} L(x) a_{w_x}\enspace . \end{equation} \notinproc{Since the estimate is $0$ for keys not represented in the sample, it can be computed from the sample. The variance of a per-key estimate for a key with frequency $i$ is $g(i)^2(\frac{1}{q_i} -1)$ and the variance of the sum estimator \eqref{sumest:eq} is \[ \textsf{Var}[\hat{s}] = \sum_x L(x)^2 g(w_x)^2(\frac{1}{q_{w_x}} -1) \enspace .\] These inverse-probability estimates are optimal for the sampling scheme in that they minimize the sum of per-key variance under unbiasedness and non-negativity constraints. We note that the quality of the estimates depends on the match between $g(i)$ and $q_i$: Probability Proportional to Size (PPS), where $q_i\propto g(i)$ is most effective and minimizes the sum of per-key variance for the sample size. 
Our aim here is to optimize what we can do privately when $q$ and $g$ are given.} \subsection{Estimation with Sanitized Samples} \label{estssanitized:sec} We now consider estimation from sanitized samples $S^*$. We specify our estimators $(a_j)_{j\geq 1}$ in terms of the reported sanitized frequencies $j$. The estimate is $0$ for keys that are not reported and are $a_j$ when reported with value $j$. The estimate of the statistics is \begin{equation} \label{partialfest:eq} \widehat{s} := \sum_{(x,j)\in S^*} L(x) a_{j}\ . \end{equation} As for choosing $(a_j)_{j\geq 1}$, a first attempt is the unique unbiased estimator: The unbiasedness constraints \onlyinproc{$\forall i,\ \sum_{j=1}^i \pi_{ij} a_j = g(i)$}\notinproc{ \[ \forall i,\ \sum_{j=1}^i \pi_{ij} a_j = g(i)\enspace \]} form a triangular system with a unique solution $(a_j)_{j\geq 1}$\onlyinproc{.}\notinproc{: \[a_i \gets \frac{g(i) - \sum_{j=1}^{i-1} \pi_{i,j} a_j}{\pi_{i,i}} \enspace .\]} However, $(a_j)_{j\geq 1}$ may include negative values and estimates have high variance. We argue that bias is unavoidable with privacy: First, the inclusion probability of keys with frequency $w_x=1$ can not exceed $\delta$. Therefore, the variance contribution of the key to any unbiased estimate is at least $1/\delta$. Typically, $\delta$ is chosen so that $1/\delta \gg n k$, where $n$ is the support size and $k$ the sample size, so this error can not be mitigated. Second, we show \onlyinproc{in the full version}\notinproc{in Appendix~\ref{mustnegative:sec}} that even for the special case of $q=1$, {\em any} unbiased estimator applied to the output of {\em any} sanitized keys and frequencies scheme with optimal reporting probabilities must assume negative values. That is, DP schemes do not admit unbiased nonnegative estimators without compromising reporting probabilities. We therefore seek estimators that are biased but balance bias and variance and are nonnegative. In our evaluation we use the following Maximum Likelihood estimator (MLE): \begin{equation} \label{MLest:eq} a_j \gets \frac{g(i)}{\pi_i}\text{, where } i = \arg\max_h \pi_{hj} \enspace . \end{equation} This estimate is "right" for the frequency $i$ for which the probability of reporting $j$ is maximized.\notinproc{ The estimate can be biased up or down.} \notinproc{Another estimator with desirable properties is proposed in Appendix~\ref{biaseddown:sec}.} We express the expected value, bias, Mean Squared Error (MSE), and variance of the per-key estimate for a key with frequency $i$: \onlyinproc{ $\textsf{E}_i := \sum_{j=1}^i \pi_{i,j} a_j$, $\textsf{Bias}_i := \textsf{E}_i - g(i)$, $\textsf{MSE}_i := \overline{\pi}_i g(i)^2 + \sum_{j=1}^i \pi_{i,j} (a_j - g(i))^2$, $\textsf{Var}_i := \textsf{MSE}_i - \textsf{Bias}_i^2$. For the sum estimate \eqref{partialfest:eq} we get: $\textsf{Bias}[\widehat{s}] = \sum_x L(x) \textsf{Bias}_{w_x}$, $\textsf{Var}[\widehat{s}] = \sum_x L(x)^2 \textsf{Var}_{w_x}$, $\textsf{MSE}[\widehat{s}] = \textsf{Var}[\widehat{s}] + \textsf{Bias}[\widehat{s}]^2$, and $\textsf{NRMSE}[\widehat{s}] = \frac{\sqrt{\textsf{MSE}[\widehat{s}]}}{s}$. } \notinproc{ {\small \begin{align*} \textsf{E}_i :=&\, \sum_{j=1}^i \pi_{i,j} a_j\\ \textsf{Bias}_i :=&\, \textsf{E}_i - g(i)\\ \textsf{MSE}_i :=&\, \overline{\pi}_i g(i)^2 + \sum_{j=1}^i \pi_{i,j} (a_j - g(i))^2 \\ \textsf{Var}_i :=&\, \textsf{MSE}_i - \textsf{Bias}_i^2 \enspace . 
\end{align*} } For the sum estimate \eqref{partialfest:eq} we get: {\small \begin{align} \textsf{Bias}[\widehat{s}] =&\, \sum_x L(x) \textsf{Bias}_{w_x}\nonumber\\ \textsf{Var}[\widehat{s}] =&\, \sum_x L(x)^2 \textsf{Var}_{w_x}\nonumber\\ \textsf{MSE}[\widehat{s}] =&\, \textsf{Var}[\widehat{s}] + \textsf{Bias}[\widehat{s}]^2\nonumber\\ \textsf{NRMSE}[\widehat{s}] =& \frac{\sqrt{\textsf{MSE}[\widehat{s}]}}{s}\enspace . \label{NRMSE:eq} \end{align} }} Note that the variance component of the normalized squared error $\textsf{MSE}[\widehat{s}]/s^2$ decreases linearly with support size whereas the bias component may not. We therefore consider both the variance and bias of the per-key estimators and qualitatively seek low bias and ``bounded'' variance. We measure quality of statistics estimators using the Normalized Root Mean Squared Error (NRMSE). \section{Performance Analysis} \label{experiments:sec} We study the performance of PWS on the key reporting and estimation objectives and compare with a baseline method that provides the same privacy guarantees. We use precise expressions (not simulations) to compute probabilities, bias, variance, and MSE of the different methods. \subsection{Private Histograms Baseline} \label{SbH:sec} We review the {\em Stability-based Histograms} (SbH) method of \cite{BunNS19,KorolovaKMN09,VadhanDPsurvey:2017}, which we use as a baseline. SbH, provided as Algorithm~\ref{alg:SBH}, is designed for the special case when $q_i=1$ for all frequencies. The input $S$ is the full data of pairs of keys and positive frequencies $(x,w_x)$. The private output $S^*$ is a subset of the keys\notinproc{ in the data} with positive sanitized frequencies $(x,w^*_x)$. \begin{algorithm2e}\caption{\small Stability-based Histograms (SbH) }\label{alg:SBH} {\small \DontPrintSemicolon \KwIn{$(\epsilon,\delta)$, $S=\{(x,w_x)\}$ where $w_x>0$} \KwOut{Key-value pairs $S^*$} $S^*\gets \emptyset$\tcp*{Initialize private histogram} $T \gets (1/\varepsilon)\ln(1/\delta)+1$\tcp*{Threshold} \ForEach{$(x,w_x)\in S$} { $w^*_x \gets w_x + \textsf{Lap}[\frac{1}{\varepsilon}]$\tcp*{Add Laplace random variable} \If{$w^*_x \geq T$}{$S^*\gets S^* \cup \{(x,w^*_x)\}$ } } \Return{$S^*$} } \end{algorithm2e} The SbH method is considered the state of the art for sparse histograms (only keys with $w_x > 0$ can be reported). The method reports only positive sanitized frequencies $w_x^*>0$. For the case of no sampling, we compare PWS (with $q\equiv 1$) with SbH. We use the SbH sanitized frequencies directly for estimation. For sampling, our baseline is {\em sampled-SbH}: The data is first sanitized using SbH and then sampled, using a weighted sampling algorithm with $q$, while treating the sanitized frequencies as actual frequencies. For estimation, we apply the estimator \eqref{sumest:eq} (which in this context is biased). \notinproc{ To facilitate comparison with SbH and sampled-SbH we express the reporting probabilities, bias, and variance in Appendix~\ref{expressSbH:sec}.} \notinproc{\subsection{Reporting Probabilities: No Sampling}} \onlyinproc{\paragraph{Reporting Probabilities: No Sampling}} We start with the case of no sampling and the objective of maximizing the number of privately reported keys. We compare the PWS (optimal) probabilities $\pi^*$ \eqref{pistar:eq} to the baseline SbH \notinproc{\cite{BunNS19,KorolovaKMN09,VadhanDPsurvey:2017}} reporting probabilities $\phi$\notinproc{ \eqref{SbHphi:eq}}. Figure~\ref{probfreqhist:plot} shows reporting probability per frequency for selected DP parameters.
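Both sets of reporting probabilities are simple to compute exactly; the following minimal Python sketch (illustrative parameters) evaluates $\pi^*_i$ via the recurrence of Algorithm~\ref{alg:reportkeys} with $q\equiv 1$ and $\phi_i$ from the Laplace tail implied by Algorithm~\ref{alg:SBH}.
\begin{verbatim}
# Sketch: per-frequency reporting probabilities of PWS (q = 1) vs. SbH.
# phi_i = Pr[i + Lap(1/eps) >= T] with T = (1/eps) ln(1/delta) + 1;
# pi_i follows the recurrence of Algorithm "Compute pi_i".
import math

def pws_report_probs(max_freq, eps, delta):
    pi = [0.0]
    for _ in range(max_freq):
        prev = pi[-1]
        pi.append(min(1.0,
                      math.exp(eps) * prev + delta,
                      1.0 + math.exp(-eps) * (prev + delta - 1.0)))
    return pi

def sbh_report_prob(i, eps, delta):
    t = (1.0 / eps) * math.log(1.0 / delta) + 1.0 - i   # required noise
    if t >= 0:
        return 0.5 * math.exp(-eps * t)                  # Laplace upper tail
    return 1.0 - 0.5 * math.exp(eps * t)

if __name__ == "__main__":
    eps, delta = 0.1, 1e-2                               # illustrative values
    pi = pws_report_probs(200, eps, delta)
    for i in (1, 10, 50, 100, 200):
        print(i, pi[i], sbh_report_prob(i, eps, delta))
\end{verbatim}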
We can see that with both private methods the reporting probability reaches $1$ for high frequencies but PWS (Opt) reaches the maximum earlier and is significantly higher than $\phi$ along the way. Analytically from the expressions we can see that for $i\leq L(\varepsilon,\delta)$, $\pi^*_i/\phi_i \in [2,2/\varepsilon]$ with $\pi^*_i/\phi_i \approx 2i$ for lower $i$. We can also see that $\pi^*_i =1$ for $i=2L+2 \approx \frac{2}{\varepsilon}\ln(\varepsilon/\delta)$ whereas $\phi_i > 1-\delta$ for $i\approx \frac{2}{\varepsilon} \ln(1/\delta)$. The ratio between the frequency values when maximum reporting is reached is $\approx\ln(1/\delta)/\ln(\varepsilon/\delta)$.\notinproc{ } Figure~\ref{Zipfreport:plot} shows the expected numbers of reported keys with PWS (Opt) and SbH for frequency distributions that are $\textsf{Zipf}[\alpha]$ with $\alpha=1,2$ as we sweep the privacy parameter $\delta$. Overall we see that PWS gains 20\%-300\% in the number of keys reported over baseline. Note that as expected, the optimal PWS reports {\em all} keys when $\delta=1$ (i.e., no privacy guarantees) but SbH incurs reporting loss. \onlyinproc{In the full version, we show similar results on two real-world datasets.} \notinproc{ We additionally evaluate PWS and compare it to SbH on two real-world datasets: \begin{enumerate} \item ABC: The words of news headlines from the Australian Broadcasting Corporation. The keys are words and the frequency is the respective number of occurrences \cite{abcnews}. \item SO: The multi-graph of Stack Overflow where keys are nodes in the graph and frequencies are undirected degrees \cite{StackOverflow}. \end{enumerate} Figure~\ref{RealWorldNoSampling:plot} shows the expected numbers of reported keys on these datasets. We note that the results are similar to what was observed on the synthetic Zipf datasets. } \begin{figure}[ht] \centering \includegraphics[width=0.44\textwidth]{plots/probability-frequency-10-100.pdf} \includegraphics[width=0.44\textwidth]{plots/probability-frequency-100-1000000.pdf} \caption{Key reporting probability for frequency. No sampling ($q=1$) with PWS (Opt) and SbH for $(\varepsilon,\delta)=(0.1,0.01),(0.01,10^{-6})$} \label{probfreqhist:plot} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.30\textwidth]{plots/optsbh-zipf-ratio-ieps10.pdf} \includegraphics[width=0.30\textwidth]{plots/optsbh-zipf-gains-a1-ieps10.pdf} \includegraphics[width=0.30\textwidth]{plots/optsbh-zipf-gains-a2-ieps10.pdf} \caption{Expected fraction of keys that are privately reported with PWS (Opt) and SbH for $\textsf{Zipf}[\alpha]$ frequency distributions. For $\alpha=1,2$, \notinproc{privacy parameters }$\varepsilon=0.1$ and sweeping $\delta$ between $1$ and $10^{-8}$. Left: The respective ratio of PWS to SbH.} \label{Zipfreport:plot} \end{figure} \notinproc{ \begin{figure}[ht] \centering \includegraphics[width=0.30\textwidth]{plots/real_world_datasets_gain_ratio.pdf} \includegraphics[width=0.30\textwidth]{plots/gains_ABC.pdf} \includegraphics[width=0.30\textwidth]{plots/gains_SO.pdf} \caption{Evaluation of PWS on real-world datasets (without sampling). Left: The ratio of reported keys with PWS to SbH. 
Center and Right: Fraction of total keys reported with sampled-SbH and PWS as we sweep the parameter $\delta$.} \label{RealWorldNoSampling:plot} \end{figure} } \notinproc{\subsection{Reporting Probabilities with Sampling}} \onlyinproc{\paragraph{Reporting Probabilities with Sampling}} Figure~\ref{reporting_sampling:fig} shows reporting probabilities with PWS (optimal reporting probabilities), sampled-SbH, and non-private sampling, for representative sampling rates and privacy parameters. \begin{figure}[ht] \centering \includegraphics[width=0.30\textwidth]{plots/probability-frequency-tau1000-10-1000000.pdf} \includegraphics[width=0.30\textwidth]{plots/probability-frequency-tau1000-100-1000000.pdf} \includegraphics[width=0.30\textwidth]{plots/probability-frequency-tau10000-10-1000000.pdf} \caption{Reporting probability as a function of frequency. For ppswor sampling with threshold $\tau$ ($q$), PWS private samples (Opt), and sampled-SbH private samples.} \label{reporting_sampling:fig} \end{figure} As expected, for sufficiently large frequencies both private methods have reporting probabilities that match the sampling probabilities $q$ of the non-private scheme. But PWS reaches $q$ at a lower frequency than sampled-SbH and has significantly higher reporting probabilities for lower frequencies. Figure~\ref{sweepsamplingZipf:fig} shows the fraction of keys reported for $\textsf{Zipf}$ distributions as we sweep the sampling rate (threshold $\tau$). PWS reports more keys than sampled-SbH and the gain persists also with low sampling rates. We can see that with PWS, thanks to end-to-end privacy analysis, the reporting loss due to sampling mitigates the reporting loss needed for privacy -- reporting approaches that of the non-private sampling when the sampling rate $\tau$ approaches $\delta$. Sampled-SbH, on the other hand, incurs reporting loss due to privacy on top of the reporting loss due to sampling. \notinproc{Figures \ref{RealWorldWithSampling1:plot} and \ref{RealWorldWithSampling2:plot} show the expected fraction of reported keys on the real-world datasets ABC and SO.} \begin{figure}[ht] \centering \includegraphics[width=0.30\textwidth]{plots/optsbh-tau-zipf-gains-a05-10-1000.pdf} \includegraphics[width=0.30\textwidth]{plots/optsbh-tau-zipf-gains-a1-10-1000.pdf} \includegraphics[width=0.30\textwidth]{plots/optsbh-tau-zipf-gains-a2-10-1000.pdf} \caption{Fraction of total keys reported with threshold-ppswor, sampled-SbH, and PWS (Opt), as we sweep the sampling rate $\tau$. 
For $\textsf{Zipf}[\alpha]$, $\varepsilon=0.1$ and $\delta=0.001$ the gains in reporting of PWS over Sampled-SbH are at least 230\% ($\alpha=0.5$), 97\% ($\alpha=1$) and 37\% ($\alpha=2$).} \label{sweepsamplingZipf:fig} \end{figure} \notinproc{ \begin{figure}[ht] \centering \includegraphics[width=0.44\textwidth]{plots/gains_with_ppswor_ABC.pdf} \includegraphics[width=0.44\textwidth]{plots/gains_with_ppswor_SO.pdf} \caption{Evaluation on real-world datasets (with sampling): fraction of total keys reported with threshold-ppswor (non-private), sampled-SbH, and PWS, as we sweep the sampling rate $\tau$.} \label{RealWorldWithSampling1:plot} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.44\textwidth]{plots/real_world_gain_with_sampling_01_3.pdf} \includegraphics[width=0.44\textwidth]{plots/real_world_gain_with_sampling_01_5.pdf} \caption{Ratio of expected number of keys that are privately reported with PWS to SbH for the ABC and SO datasets, as we sweep the sampling rate $\tau$.} \label{RealWorldWithSampling2:plot} \end{figure} } \notinproc{\subsection{Estimation of Linear Statistics}} \onlyinproc{\paragraph{Estimation of Linear Statistics}} We evaluate estimation quality for linear statistics \eqref{qstat:eq} when $g(w)=w$ and $L(x)$ is a selection predicate. The statistic is simply the sum of the frequencies of the selected keys. We compare the performance of PWS with the MLE estimator \eqref{MLest:eq}, the baseline sampled-SbH, and for reference, the estimator of the respective non-private sample \eqref{sumest:eq}. Figure~\ref{bias_error:fig} (top) shows normalized bias $\textsf{Bias}_i/i$ as a function of the frequency $i$ for the two private methods (the non-private estimator is unbiased and not shown). With both methods, the bias decreases with frequency and diminishes for $i \gg 2\varepsilon^{-1}\ln(1/\delta)$. PWS has lower bias at lower frequencies than SbH, allowing for more accurate estimates on a broader range. We can see that with PWS, the bias decreases when the sampling rate ($\tau$) decreases and diminishes when $\tau$ approaches $\delta$. This is a benefit of the end-to-end privacy analysis. The bias of the baseline method does not change with sampling rate. Figure~\ref{variance:fig} shows the normalized variance $\textsf{Var}_i/i^2$ per frequency $i$ for representative parameter settings. The private methods PWS and sampled-SbH maintain low variance across frequencies: The value is fractional with no sampling and is of the order of that of the non-private unbiased estimator with sampling. In particular, this means that the bias is a good proxy for performance and that the improvement in bias of PWS with respect to the baseline does not come with a hidden cost in variance. For high frequencies (not shown), keys are included by all methods with probability (close to) $1$. The non-private method that reports exact frequencies has $0$ variance, whereas the private methods maintain a low variance, but the normalized variance diminishes for all methods. For statistics estimation, the per-key performance suggests that when the selection has many high-frequency keys, the private methods perform well and are similar to non-private sampling. When the selection is dominated by very low frequencies, the private methods perform poorly, well below the respective non-private sample. But for low to medium frequencies, PWS can provide drastic improvements over SbH and the gain increases with lower sampling rates.
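The exact (non-simulated) bias, variance, and NRMSE values follow directly from the per-key expressions and \eqref{NRMSE:eq}. A minimal Python sketch is given here, where the end-to-end probabilities $\pi_{i,j}$ (from Algorithm~\ref{alg:reportfreq} or Algorithm~\ref{alg:Creport}) and the per-token estimates $a_j$ (e.g., the MLE values of \eqref{MLest:eq}) are assumed to be precomputed, and $g(w)=w$.
\begin{verbatim}
# Sketch: exact bias, variance, and NRMSE of the sum estimator, given
# end-to-end probabilities pi[i] = {j: pi_{i,j}} over reported tokens j >= 1
# (assumed precomputed) and per-token estimates a[j]; here g(w) = w.
import math

def per_key_moments(i, pi_i, a, g=lambda w: w):
    reported = sum(pi_i.values())                 # probability the key is reported
    e_i = sum(p * a[j] for j, p in pi_i.items())  # expected per-key estimate
    bias = e_i - g(i)
    mse = (1.0 - reported) * g(i) ** 2 + sum(p * (a[j] - g(i)) ** 2
                                             for j, p in pi_i.items())
    return bias, mse - bias ** 2                  # (Bias_i, Var_i)

def nrmse_of_sum(freqs, pi, a, g=lambda w: w):
    # freqs: frequencies of the selected keys (L(x) = 1 on the selection).
    s = sum(g(w) for w in freqs)
    moments = [per_key_moments(w, pi[w], a, g) for w in freqs]
    bias = sum(b for b, _ in moments)
    var = sum(v for _, v in moments)
    return math.sqrt(var + bias ** 2) / s
\end{verbatim}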
Figure~\ref{bias_error:fig} (bottom) shows the $\textsf{NRMSE}$ as a function of sampling rate for estimating the sum of frequencies on a selection of $2\times 10^5$ keys with frequencies uniformly drawn between $1$ and $200$. We can see that the error of non-private sampling and of sampled-SbH decreases with higher sampling rate. Note the perhaps counter-intuitive phenomenon that PWS (MLE) hits its sweet spot midway: This is due to a balance of the two components of the error, the variance which increases and the bias that decreases when the sampling rate decreases. Also note that PWS significantly improves over SbH also with no sampling ($\tau=1$). \begin{figure}[ht] \centering \includegraphics[width=0.44\textwidth]{plots/norm_bias_e0_1_full.pdf} \includegraphics[width=0.44\textwidth]{plots/norm_bias_e0_5_full.pdf} \includegraphics[width=0.44\textwidth]{plots/nrmse_on_uniform_range200_mult1000_e0_1.pdf} \includegraphics[width=0.44\textwidth]{plots/nrmse_on_uniform_range200_mult1000_e0_5.pdf} \caption{ Top: Normalized bias for PWS (MLE) and sampled-SbH as a function of frequency, for different sampling rates. The bias of the sampled-SbH estimates (shown once) does not change with sampling rate. Bottom: NRMSE as a function of sampling rate for a selection of $2\times 10^5$ keys with frequencies drawn uniformly $[1,200]$. } \label{bias_error:fig} \end{figure} \begin{figure}[!pht] \centering \includegraphics[width=0.44\textwidth]{plots/norm_variance_AlwaysIncludeSamplingMethod_t1_0_e0_1.pdf} \notinproc{ \includegraphics[width=0.44\textwidth]{plots/norm_variance_AlwaysIncludeSamplingMethod_t1_0_e0_5.pdf}} \includegraphics[width=0.44\textwidth]{plots/norm_variance_PrioritySamplingMethod_t0_001_e0_1.pdf} \notinproc{ \includegraphics[width=0.44\textwidth]{plots/norm_variance_PrioritySamplingMethod_t0_001_e0_5.pdf} \includegraphics[width=0.44\textwidth]{plots/variance_PrioritySamplingMethod_t0_001_e0_1.pdf} \includegraphics[width=0.44\textwidth]{plots/variance_PrioritySamplingMethod_t0_001_e0_5.pdf} } \caption{ Normalized variance $\textsf{Var}_i/i^2$ \notinproc{and variance $\textsf{Var}_i$} for PWS (MLE) and sampled-SbH as a function of the frequency $i$. } \label{variance:fig} \end{figure} \vspace{-0.24cm} \section*{Conclusion} We presented Private Weighted Sampling (PWS), a method to post-process a weighted sample and produce a version that is differentially private. Our private samples maximize the number of reported keys subject to the privacy constraints and support estimation of linear and ordinal statistics. We demonstrate significant improvement over prior methods for both reporting and estimation tasks, even for the well studied special case of private histograms (when there is no sampling). An appealing direction for future work is to explore the use of PWS to design \emph{composable} private sketches, e.g., in the context of coordinated samples. Threshold and bottom-$k$ samples of different datasets are {\em coordinated} when using consistent $\{u_x\}$. Coordinated samples generalize MinHash sketches and support estimation of similarity measures and statistics over multiple datasets \cite{BrEaJo:1972,Saavedra:1995,ECohen6f,Broder:CPM00,sdiff:KDD2014,sorder:PODC2014}. \subsubsection*{Acknowledgments} Part of this work was done while Ofir Geri was an intern at Google Research. This work was partially supported by Moses Charikar's Google Faculty Research Award. We also thank Chinmoy Mandayam for bringing the related paper \cite{GhoshRS:sicomp2012} to our attention. 
\bibliographystyle{plain}
\section{Introduction} The main difference between open and isolated systems is the lack of conservation laws in the former, the most common one being energy conservation. For open quantum systems~\cite{Petruccione,Weiss,Schaller}, another peculiar but less uniquely defined quantity, quantum coherence, is lost as well. In more formal terms, if the system, when isolated, is governed by some time-independent Hamiltonian $\bm{H}$, and if $\bm{O}_1,\ldots,\bm{O}_q$ are a set of $q$ independent operators that commute with $\bm{H}$, and commute with each other, the quantum mechanical averages of these operators, including $\bm{H}$, provide a set of $q+1$ constants of motion. If instead the system interacts with some environment, in general, none of these operators is a constant of motion. Nevertheless, if the system-environment interaction can be reduced to one of the operators $\bm{O}_1,\ldots,\bm{O}_q$, say $\bm{O}_1$, then, even if the system loses both energy and quantum coherence, $\bm{O}_1$ remains conserved during what we can call a partial thermalization process. This is what happens in the Lipkin-Meshkov-Glick (LMG) model~\cite{LMG} when put in contact with a thermal reservoir constituted by a blackbody radiation at thermal equilibrium. It has been proven in~\cite{GK1976} that the reduced density matrix of a system interacting with a chaotic bath of bosons, which is well approximated by a blackbody radiation, obeys a Lindblad equation (see for example~\cite{Petruccione,Weiss,Schaller} and references therein). Here, by using a Lindblad-based approach (LBA)~\cite{OP,OPprl}, we analyze the thermalization process of the LMG model embedded in a blackbody radiation. The analysis suggests that complete thermal equilibrium can be reached only at high enough density, while a partial thermalization takes place at low density. In the latter case, along the thermalization process, the total angular momentum remains a conserved quantum number. We then specialize the analysis of the thermalization to this low-density regime, where the total spin is conserved. In the isotropic case, we provide a comprehensive picture of the characteristic thermalization times, as functions of the Hamiltonian parameters and of the system size $N$. Quite importantly, we find that these characteristic times diverge with $N$ only at the critical point and in the ferromagnetic phase, linearly at high temperatures, and quadratically at zero temperature. The latter result is to be compared with the time estimated for reaching the ground state of this model by a quantum adiabatic algorithm, which is known to diverge with $N^3$~\cite{QAD}. The LMG model describes fully connected quantum spins and, in the thermodynamic limit, is exactly solvable. It has been the subject of many works, at equilibrium~\cite{LMG_Botet,LMG_Ribeiro}, for the dynamics following a fast quench~\cite{LMG_Das,LMG_Campbell}, for adiabatic dynamics~\cite{LMG_Santoro}, and in the microcanonical framework \cite{LMG_Mori}. The LMG model has also been used to represent an environment of interacting spins in contact with a system made of a single spin or two spins, by mean-field approximations~\cite{Paganelli,Paganelli1}, and also by exact numerical analysis of the reduced density matrix~\cite{LMG_Petruccione,LMG_Quan}. The LMG model finds an approximate experimental realization in certain ferroelectrics, ferromagnets~\cite{Abragam,Wolf}, and magnetic molecules~\cite{Ziolo}.
In more recent years, the model has attracted renewed attention due to the possibility of simulating it with trapped ions~\cite{Cirac}, as well as with Bose-Einstein condensates of ultracold atoms~\cite{Cirac98}. Indeed, it has been studied experimentally on several platforms: with trapped ions \cite{monz11,bohnet16}, with Bose-Einstein condensates via atom-atom elastic collisions \cite{gross10,riedel10}, and via off-resonance atom-light interaction in an optical cavity \cite{leroux10,zhang}. The LMG model also emerges as the fully blockaded limit of Rydberg-dressed atoms \cite{henkel10} in lattices \cite{macri14,jau16,zeiher16}, which could have interesting applications to quantum metrology \cite{kitagawa93,gil14,macri16} as well as to the simulation of magnetic Hamiltonians \cite{glaetzle15,bijnen15}. As we discuss in more detail below, the LMG model can also appear as a coarse-grained model for electric or magnetic quantum dipoles \cite{lahaye08}. In the present work, we assume that the components of the LMG system in interaction with a blackbody radiation are actual spins, as in the ferromagnetic compounds, whereas trapped ions and ultracold condensates, even if they behave as effective spins, can interact with a blackbody radiation via other degrees of freedom. The paper is organized as follows. In Sec. \ref{LE}, we briefly describe our LBA approach to thermalization. The LBA scheme is then specialized in Sec. \ref{BB}, where the environment is chosen to be a black-body radiation. In Sec. \ref{LMGS}, we recall the definition of the LMG model. In Sec. \ref{PR}, we investigate under which conditions a description via an LMG model of spins interacting with a black-body environment is correct, and when the fully coherent limit is or is not valid, by tuning the particle density. In Sec. \ref{SR}, we derive a simple selection rule that applies when the fully coherent limit is realized. In Sec. \ref{TLD}, we analyze the isotropic LMG model. Here, we specialize to the fully coherent limit, where the total angular momentum remains conserved, and derive analytically all the elements necessary to evaluate the thermalization times. For the latter, we first provide simple analytical evaluations of both the decoherence and dissipation times, which are then confirmed in Sec. \ref{numerical}, where we provide a complete numerical analysis, allowing also for a clear picture of the finite-size effects, which are particularly strong near the critical point. Finally, several crucial conclusions are drawn. \section{Thermalization via Lindblad Equation} \label{LE} Let us consider a system described by a Hamiltonian operator $\bm{H}$ acting on a Hilbert space $\mathscr{H}$ of dimension $M$. We assume that the eigenproblem, $\bm{H} \ket{m} = E_m \ket{m}$, has discrete nondegenerate eigenvalues and that the eigenstates $\{\ket{m}\}$ form an orthonormal system in $\mathscr{H}$. We arrange the eigenvalues in ascending order $E_1< E_2 < \dots < E_M$. In the following we briefly summarize our recently proposed LBA to the thermalization of many-body systems with nondegenerate spectra, which allows for an unambiguous definition of the thermalization times, also for compounds of, possibly identical, noninteracting systems~\cite{OPprl}. The Lindblad equation (LE) represents the most general class of evolution equations of the reduced density matrix operator $\bm{\rho}(t)$ of a system interacting with an environment under the assumptions that this evolution is a semigroup and preserves hermiticity, positivity, and the trace of $\bm{\rho}(t)$ at all times.
The generic LE can be written as \begin{align} \label{GEN_LIN}% \frac{\mathrm{d} \bm{\rho}}{\mathrm{d} t} = -\frac{\mathrm{i}}{\hbar} \left[\bm{H}',\bm{\rho}\right] +\sum_{\alpha} \left( \bm{L}_{\alpha}\bm{\rho}\bm{L}^{\dag}_{\alpha} -\frac{1}{2}\left\{ \bm{L}^{\dag}_{\alpha}\bm{L}_{\alpha},\bm{\rho} \right\} \right). \end{align} In this equation, the coherent part of the evolution is represented by the Hermitian operator $\bm{H}'$ which, in general, differs from the isolated-system Hamiltonian $\bm{H}$. The Lindblad, or quantum jump, operators $\bm{L}_\alpha$ are, for the moment, completely arbitrary operators. Even their number is arbitrary, but it can always be reduced to $M^2-1$. If $\bm{H}$ has a nondegenerate spectrum, one can represent the most general set of these operators by dyadic products of eigenstates of $\bm{H}$, namely, $\ell_{m,n} \ket{m}\bra{n}$. The meaning of the coefficients $\ell_{m,n}$ is obtained by further developing the theory. When it is imposed that the stationary state of the system coincides with the Gibbs state, $\bm{\rho}_G\propto \exp(-\beta \bm{H})$, for a given inverse temperature $\beta$, the Lindblad equation, projected onto the eigenstates of $\bm{H}$, benefits from a decoupling between the $M$ diagonal terms, $\rho_{n,n}$, and the $M(M-1)$ off-diagonal terms, $\rho_{m,n}$, $m\neq n$; furthermore, the latter terms are decoupled from each other. \textit{Diagonal terms (Pauli Equation).} The diagonal terms obey the following master equation \begin{align} \label{Pauli} \frac{\mathrm{d} p_{m}(t)}{\mathrm{d} t} = \sum_{n} \left[ p_{n}(t) W_{n \to m}-p_{m}(t)W_{m \to n} \right], \end{align} where $W_{m \to n}=|\ell_{n,m}|^2$ is the probability rate at which, due to the interaction with the environment, a transition $|m\rangle\to|n\rangle$ occurs. In the weak coupling limit, these rates can be calculated by using time-dependent perturbation theory. The above Pauli equation can be written in vector form as follows \begin{align} \label{0PPt} \frac{\mathrm{d} \bm{p}(t)}{\mathrm{d} t}= -\bm{A}\ \bm{p}(t), \end{align} where $p_n=\rho_{n,n}$ and \begin{align} \label{0Amn} A_{m,n} = \left\{ \begin{array}{ll} -W_{n \to m}, &\qquad m\neq n, \\ \sum_{k \neq m} W_{m \to k}, &\qquad m= n. \end{array} \right. \end{align} \textit{Off-Diagonal terms (decoherences).} The $M(M-1)$ elements $\rho_{m,n}$, $m\neq n$, behave as normal modes which relax to zero according to \begin{align} \label{0rhomn} |\rho_{m ,n }(t)|=|\rho_{m ,n }(0)|e^{-t/\tau_{m,n}}, \end{align} where \begin{align} \label{taumn} \tau_{m,n} = \left[ \frac{1}{2}\sum_k \left(W_{m \to k}+W_{n \to k}\right) \right]^{-1}. \end{align} The environment is supposed to remain in its own thermal equilibrium at inverse temperature $\beta$. Mathematically, this information is encoded in the fact that the matrix $W_{m \to n}$ is similar to a symmetric matrix $C_{m,n}$ having nonnegative elements via the square root of the Boltzmann factors $\exp(-\beta E_m)$ and $\exp(-\beta E_n)$, namely, \begin{align} \label{WC} e^{-\frac{\beta}{2} E_{m }} W_{m \to n} e^{\frac{\beta}{2} E_{n }} = C_{m ,n }. \end{align} If the transition rates $W_{m \to n}$ satisfy Eq.~(\ref{WC}) for some matrix $C_{m,n}$ with $C_{m,n}=C_{n,m}\geq 0$, then the stationary state of the LE coincides with the Gibbs state $\bm{\rho}_G$, i.e., the stationary solution of the Pauli Eq.~(\ref{Pauli}) is $p_m=e^{-\beta E_m}/Z$, where $Z=\sum_k e^{-\beta E_k}$, and $\rho_{m,n}=0$, $m\neq n$.
The characteristic time $\tau$ by which the system reaches the stationary state is thus due to two different processes: \begin{subequations} \label{0tau} \begin{align} \label{01tau} &\tau=\max\left\{\tau^{(P)},\tau^{(Q)}\right\}, &&\quad \mathrm{thermalization~time},\\ \label{0tauP} &\tau^{(P)}=\frac{1}{\mu_2(\bm{A})}, &&\quad \mathrm{dissipation~time},\\ \label{0tauQ} &\tau^{(Q)}=\max_{m\neq n}\tau_{m,n}, &&\quad \mathrm{decoherence~time}. \end{align} \end{subequations} The matrix $\bm{A}$ associated with the Pauli Eq.~(\ref{Pauli}) has a unique zero eigenvalue and $M-1$ positive eigenvalues~\cite{OP}. In the above definition of $\tau^{(P)}$, $\mu_2(\bm{A})$ is the smallest nonzero eigenvalue of $\bm{A}$. The natural interpretation of $\tau^{(P)}$ is that it represents a characteristic time by which the system loses or gains energy, whereas $\tau^{(Q)}$ represents a characteristic time by which the system loses quantum coherence. The above LBA satisfies a series of minimal physical requirements, as becomes evident when it is applied to the case in which the environment is a blackbody radiation, which will be briefly illustrated in the next Section. We stress that the remarkable simplicity of our equations is not due to some heuristic approach: they originate uniquely from the Lindblad class when the Gibbs stationary state is imposed. The LBA is equivalent to the popular quantum optical master equation (QOME)~\cite{Petruccione} only when there is no degeneracy in the energy levels as well as in the energy gaps of $\bm{H}$~\cite{OPprl}. As we shall show later, when we consider the subspace where the total angular momentum $\bm{J}^2$ is fixed, in the LMG model the energy levels as well as the energy gaps are nondegenerate (see Eq.~(\ref{HISO})). Therefore, in the subspace where $\bm{J}^2$ is fixed, all the results that we obtain could be equally derived from the QOME. \section{Blackbody radiation} \label{BB} In the case in which the environment is a blackbody radiation at inverse temperature $\beta$, time-dependent perturbation theory combined with the Planck law yields (this result can be reached by treating the electromagnetic field classically, provided the contribution due to the spontaneous emission is added at the end) \begin{align} \label{0Bm_explicit} A_{m,m}=& \sum_{k:\,E_k<E_m} D_{k ,m } \frac{(E_{m}-E_{k})^3}{1-e^{-\beta\left(E_{m}-E_{k}\right)}} \nonumber \\ &+ \sum_{k:\,E_k>E_m} D_{k ,m } \frac{(E_{k}-E_{m})^3}{e^{\beta\left(E_{k}-E_{m}\right)}-1}, \end{align} whereas the off-diagonal terms $m\neq n$ of $\bm{A}$ are \begin{align} \label{0Amn_explicit} A_{m,n} = \left\{ \begin{array}{ll} -D_{m,n}\frac{\left(E_{m}-E_{n}\right)^3} {e^{\beta\left(E_{m}-E_{n}\right)}-1}, &\qquad E_m>E_n, \\ 0, &\qquad E_m=E_n, \\ -D_{m,n}\frac{\left(E_{n}-E_{m}\right)^3} {1-e^{-\beta\left(E_{n}-E_{m}\right)}}, &\qquad E_m<E_n. \end{array} \right. \end{align} The coefficients $D_{m,n}$ are magnetic or electric dipole matrix elements, whose values depend on the properties of the system embedded in the blackbody radiation as follows. In the following, we focus on the case in which the system interacts with the electromagnetic (EM) field through magnetic dipole operators $\mu\bm{\sigma}_i=(\mu\sigma_i^x,\mu\sigma_i^y,\mu\sigma_i^z)$, where the index $i$ labels the individual elements of the system located at position $\bm{r}_i$. All the dynamics is encoded in the internal degrees of freedom; therefore, all the particles are considered fixed in space.
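For illustration purposes, the following minimal sketch (our own, not part of the original treatment) assembles the matrix $\bm{A}$ of Eqs.~(\ref{0Bm_explicit}) and (\ref{0Amn_explicit}) from a placeholder spectrum $\{E_m\}$ and a placeholder symmetric dipole matrix $D_{m,n}$, verifies that the Gibbs state is stationary, and extracts the dissipation and decoherence times of Eqs.~(\ref{0tau}) and (\ref{taumn}); units are such that $\hbar=k_B=1$, and all numerical values are arbitrary.
\begin{verbatim}
import numpy as np

def blackbody_A(E, D, beta):
    """Matrix A of Eqs. (0Bm_explicit)-(0Amn_explicit) from energies E and a
    symmetric dipole matrix D (placeholders); hbar = k_B = 1."""
    M = len(E)
    A = np.zeros((M, M))
    for m in range(M):
        for n in range(M):
            if m == n or E[m] == E[n]:
                continue
            dE = abs(E[m] - E[n])
            if E[m] > E[n]:   # upward jump n -> m (absorption)
                A[m, n] = -D[m, n] * dE**3 / np.expm1(beta * dE)
            else:             # downward jump n -> m (stimulated + spontaneous emission)
                A[m, n] = -D[m, n] * dE**3 / (1.0 - np.exp(-beta * dE))
    # Diagonal: A_mm = sum_{k != m} W_{m -> k}, with W_{m -> k} = -A_{k,m} (Eq. (0Amn))
    A[np.diag_indices(M)] = -A.sum(axis=0)
    return A

rng = np.random.default_rng(0)
E = np.sort(rng.uniform(0.0, 1.0, 6))        # placeholder spectrum
D = rng.uniform(0.0, 1.0, (6, 6))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)                     # placeholder dipole couplings
beta = 2.0

A = blackbody_A(E, D, beta)
p_gibbs = np.exp(-beta * E); p_gibbs /= p_gibbs.sum()
assert np.allclose(A @ p_gibbs, 0.0)         # Gibbs state is stationary

mu = np.sort(np.linalg.eigvals(A).real)
tau_P = 1.0 / mu[1]                          # dissipation time, Eq. (0tauP)
diag = np.diag(A)
tau_Q = max(2.0 / (diag[m] + diag[n])        # decoherence times, Eq. (taumn)
            for m in range(len(E)) for n in range(len(E)) if m != n)
print(tau_P, tau_Q)
\end{verbatim}
Since the columns of $\bm{A}$ sum to zero, the total probability is conserved by Eq.~(\ref{0PPt}).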
Based on the analysis of \cite{FGR} the thermalization dynamics is characterized by three regimes: \\ \textit{Fully coherent regime.}~ If the following condition holds \begin{align} \label{coherent} \left|E_n-E_m\right| \ll hc/\ell, \qquad \ell=\max_{i\neq j} \left|\bm{r}_i-\bm{r}_j\right|, \end{align} then the following formula applies \begin{align} \label{Dnm.coherent} D_{n,m} &= \gamma\sum_{h=x,y,z} \left|\bra{n} \sum_{i=1}^{N} \sigma_i^{h} \ket{m}\right|^2 , \end{align} where the coupling constant $\gamma$, in Gaussian units, can be expressed in terms of the magnetic dipole operator and fundamental constants as: \begin{align} \label{gamma} \gamma=\frac{4\mu^2}{3\hbar^4 c^3}. \end{align} For $N=1$, Eq.~(\ref{Dnm.coherent}) equals the standard textbook formula based on the long wavelength approximation \cite{DAV}. \\ \textit{Fully incoherent regime.}~ If the following condition holds \begin{align} \label{incoherent} \left|E_n-E_m\right| \gg hc/a, \qquad a=\min_{i\neq j} \left|\bm{r}_i-\bm{r}_j\right|. \end{align} then the following formula applies \begin{align} \label{Dnm.incoherent} D_{n,m} &= \gamma \sum_{i=1}^{N} \sum_{h=x,y,z} \left|\bra{n} \sigma_i^{h} \ket{m}\right|^2 . \end{align} Since $hc = 1.23\ \mbox{eV $\mu$m}$, we have that for atomic or molecular systems in which $|E_n-E_m|$ is typically of a few eV and $\ell$ is not larger than a few tens of \AA, condition~(\ref{coherent}) is well satisfied. Instead, for microscopic systems in which $a$ is $1~\mu\mbox{m}$ and the energy-level separations $|E_n-E_m|$ are much larger than the atomic eV scale, condition (\ref{incoherent}) applies. Concerning the incoherent limit, from Eqs.~(\ref{0Bm_explicit}) and (\ref{0Amn_explicit}) we see that, even if for some pairs of states $\ket{m},\ket{n}$, the condition~(\ref{incoherent}) is not satisfied, the contribution corresponding to such pairs can be neglected if $\beta\Delta E \ll 1$, where $\Delta E$ is the largest of the values $|E_n-E_m|$ for which the condition (\ref{incoherent}) does not hold. From Eq.~(\ref{incoherent}) we see that a sufficient condition for this to occur is \begin{align} \label{incobeta} \beta hc/a \ll 1. \end{align} \\ \textit{Intermediate regime.}~ When none of the above inequalities (\ref{coherent}), (\ref{incoherent}) and (\ref{incobeta}) hold, there is no simple formula to be applied, and one should include contributions with mixed dipole matrix elements. These contributions originate from the general formula for the transition probabilities of a many-body system perturbed by the presence of the black-body radiation \cite{FGR}: \begin{align} \label{Pnm.general} P^\pm_{n,m} &= \frac{\mu^2}{2\pi\hbar c^3}\ \frac{\omega_{n,m}^3}{e^{\hbar\omega_{n,m}/k_B T}-1} \nonumber \\ &\quad\times \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{h=1}^{3} \sum_{l=1}^{3} Q_{n,m}^{i,j;h,l} \nonumber \\ &\quad\times \bra{E_n} \sigma_i^{h} \ket{E_m} \overline{\bra{E_n} \sigma_j^{l} \ket{E_m}} , \end{align} where \begin{align} \label{omeganm} \omega_{n,m}=|E_n-E_m|/\hbar, \end{align} and \begin{align} \label{Qnm} Q_{n,m}^{i,j;h,l} = \int_{0}^{\pi} \!\! \sin\theta \mathrm{d}{\theta} \int_{0}^{2\pi} \!\!\! \mathrm{d}{\phi}\ e^{\mathrm{i}\bm{u}\cdot(\bm{r}_i-\bm{r}_j)\omega_{n,m}/c} \left(\delta_{h,l}-u_hu_l\right). \end{align} with $\bm{u}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$. Equation~(\ref{Pnm.general}), interpolates between the fully coherent and fully incoherent limits. 
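As a small self-contained check (again our own illustration, not taken from Ref.~\cite{FGR}), the sketch below evaluates the angular factor of Eq.~(\ref{Qnm}) numerically, taking $\bm{r}_i-\bm{r}_j$ along the $z$ axis. For $\omega_{n,m}|\bm{r}_i-\bm{r}_j|/c\ll 1$ the factor approaches $(8\pi/3)\delta_{h,l}$, so that all pairs $i,j$ contribute equally and the coherent formula (\ref{Dnm.coherent}) is recovered, whereas for $\omega_{n,m}|\bm{r}_i-\bm{r}_j|/c\gg 1$ the $i\neq j$ terms are suppressed and only the diagonal contributions of the incoherent formula (\ref{Dnm.incoherent}) survive.
\begin{verbatim}
import numpy as np

def Q_factor(kr, h, l, n_theta=1000, n_phi=400):
    """Midpoint-rule estimate of Eq. (Qnm) for r_i - r_j along z,
    with kr = omega_{n,m} |r_i - r_j| / c; h, l in {0, 1, 2} = {x, y, z}."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    u = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)])
    integrand = np.exp(1j * kr * u[2]) * ((h == l) - u[h] * u[l]) * np.sin(T)
    dA = (np.pi / n_theta) * (2.0 * np.pi / n_phi)
    return integrand.sum() * dA

print("8*pi/3 =", 8.0 * np.pi / 3.0)
for kr in (0.0, 0.1, 1.0, 10.0, 100.0):
    print(f"kr = {kr:6.1f}  Q_zz = {Q_factor(kr, 2, 2).real:8.4f}  "
          f"Q_xx = {Q_factor(kr, 0, 0).real:8.4f}  |Q_xz| = {abs(Q_factor(kr, 0, 2)):8.4f}")
\end{verbatim}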
Later on, we shall make use of Eq.~(\ref{Pnm.general}) to show that, in the LMG model, as soon as condition~(\ref{coherent}) is not satisfied, $\bm{J}^2$ is not conserved. \section{The Lipkin-Meshkov-Glick model} \label{LMGS} Let us consider the Hilbert space $\mathcal{H}$ of $N$ spins $\bm{S} = \bm{\sigma} \hbar/2$, where $\bm{\sigma}=(\sigma^x,\sigma^y,\sigma^z)$ are the standard Pauli matrices. The dimension of $\mathcal{H}$ is $M=2^N$. The LMG model is defined in $\mathcal{H}$ through the Hamiltonian \begin{align} \label{LMG} \bm{H}=-\frac{\mathcal{J} \hbar^2}{4N} \sum_{i\neq j}^{N} \left(\sigma^{x}_i\sigma^{x}_j+\gamma_y\sigma^{y}_i\sigma^{y}_j\right)- \frac{\Gamma \hbar}{2}\sum_{i=1}^{N} \sigma^{z}_i, \end{align} where $\mathcal{J}$ is the spin-spin coupling, $\Gamma$ the strength of a transverse field, and $\gamma_y$ the so called anisotropy parameter. The model is known to provide an exactly solvable mean-field like behavior in the limit $N\to\infty$~\cite{LMG}. Let us introduce the components $h=x,y,z$ of the total spin operator $\bm{J}$ \begin{align} \label{LMG1} J_h=\frac{\hbar}{2}\sum_{i=1}^{N} \sigma^{h}_i. \end{align} Up to the additive constant $\mathcal{J}\hbar^2(1+\gamma_y)/4$, we can rewrite the Hamiltonian as \begin{align} \label{LMG2} \bm{H}=-\frac{\mathcal{J}}{N}\left(J_x^2 +\gamma_y J_y^2\right)-\Gamma J_z. \end{align} It follows that $[\bm{H},\bm{J}^2]=0$ and $[\bm{H},\prod_i\sigma_i^z]=0$. These two relations imply, whenever the system of $N$ spins is isolated, the conservation of the total spin $\bm{J}^2$, and the conservation of the parity along the $z$ direction. As a consequence, the eigenstates $|m\rangle$ of $\bm{H}$ can be chosen as (the label $m$ here should not be confused with the eigenvalues of $J_z$, for which we shall use the symbol $m_z$) \begin{eqnarray} \label{LMG3} | m \rangle=|j,p,\alpha\rangle \in \mathcal{H}_j\cap\mathcal{H}^{(p)}, \end{eqnarray} where $j$ is the quantum number associated to $\bm{J}^2$, i.e., $\bm{J}^2|j,p,\alpha\rangle= \hbar^2 j(j+1)|j,p,\alpha\rangle$, and $p=\pm 1$ is the parity, i.e., $\prod_i\sigma_i^z \ket{j,p,\alpha}= p \ket{j,p,\alpha}$. The Greek symbol $\alpha$ stands for a suitable set of quantum numbers that allow the state $|j,p,\alpha\rangle$ to span the intersection between the $2j+1$ dimensional Hilbert space $\mathcal{H}_j$, where $j$ is fixed, and the $2^N/2$ dimensional Hilbert space $\mathcal{H}^{(p)}$, known as the ``half space'' of $\mathcal{H}$ in which $p$ is fixed. According to the rules for the addition of angular momenta, for $N$ spins 1/2 we have \begin{subequations} \label{jrange} \begin{align} & N~\mathrm{odd}\Rightarrow j\in\{1/2,3/2,\ldots,N/2\}, \\ & N~\mathrm{even}\Rightarrow j\in\{0,1,\ldots N/2\}. \end{align} \end{subequations} In the symmetric case, $\gamma_y=1$, we also have $[\bm{H},J_z]=0$, and the index pair $(p,\alpha)$ coincides with $(p,m_z)$, where $m_z$ is the eigenvalue of $J_z/\hbar$, taking the values $-j,-j+1,\ldots,j$ restricted to either $p=1$ or $p=-1$ (if two values $m_z$ and $m_z'$ have the same parity, then $|m_z'-m_z|$ can be either 0 or 2). \section{Implementations of LMG with magnetic systems in a blackbody environment} \label{PR} In this Section, we want to analyze which regime, fully coherent, fully incoherent, or intermediate, takes place in realistic models characterized by an effective LMG description. We restrict to two magnetic systems with permanent magnetic moment. 
A more general analysis devoted to the study of atomic/molecular systems with electric dipole moments will be presented elsewhere. In general, the conditions (\ref{coherent}) or (\ref{incoherent}) must be checked for all those pairs $(m,n)$ of eigenstates contributing with nonzero dipole elements (\ref{Dnm.coherent}) and (\ref{Dnm.incoherent}). However, in the LMG model, as well as in models characterized by a smooth energy landscape, nearby states correspond to nearby energies and, moreover, since the operators associated with the dipole matrix elements are sums of Pauli matrices, the dipole matrix elements can connect only states that differ by single spin flips. Therefore, the pairs $(m,n)$ for which we have to check the conditions (\ref{coherent}) or (\ref{incoherent}), with respect to possible dependencies on $N$, always have $|E_n-E_m|\sim \mathcal{J} \hbar^2$. The first realistic model of interest is provided by the so-called high-spin molecules. These are large molecules having a large total spin $j$ (which defines the eigenvalues of $\bm{J}^2$), well described by the LMG Hamiltonian~(\ref{LMG2}). According to Ref.~\cite{Ziolo}, in the high-spin molecule Mn$_{12}$, we have $j=10$ and $hc/(\mathcal{J}\hbar^2) \simeq 2~\mathrm{cm}$. Substituting the latter value in Eq.~(\ref{coherent}), we see that the fully coherent condition becomes $\ell\ll 2~\mathrm{cm}$, which is certainly satisfied (the diameter of the molecule cannot exceed a few tens of \r{A}ngstr\"om). The other class of realistic models concerns magnetic ions in a crystalline environment, such as $\mathrm{Dy(C_2H_5SO_4)_3\cdot 9H_2O}$ and $\mathrm{DyPO_4}$ among others~\cite{Abragam,Wolf}, and ultracold atoms with a permanent dipole moment~\cite{lahaye08}. Here, the dipole-dipole interaction decays with the cube of the distance between two neighboring ions and is anisotropic. As a consequence, unless the temperature is sufficiently high, as prescribed by Eq.~(\ref{incobeta}), there is no way to stay in the fully incoherent regime. This becomes clear from the following argument. Two neighboring spins $S_i$ and $S_j$ interact via the dipole-dipole Hamiltonian: \begin{align} \label{Hmm} H_{i,j}={\displaystyle -{\frac {\mu _{0}}{4\pi |{\mathbf {r}}|^{3}}}\left[3({\mathbf {m}}_{1}\cdot {\mathbf {r}})({\mathbf {m}}_{2}\cdot {\mathbf {r}}){\dfrac {1}{|{\mathbf {r}}|^{2}}}-{\mathbf {m}}_{1}\cdot {\mathbf {m}}_{2}\right]}, \end{align} where $\mu_0$ is the vacuum permeability, $\mathbf {m}_{1}$ and $\mathbf {m}_{2}$ the magnetic moments of the two spins, and $|\mathbf {r}|=a$ their distance. Eq.~(\ref{Hmm}) allows us to estimate the coupling constant $\mathcal{J}$ in a coarse-grained Ising-like Hamiltonian for spin $1/2$ particles $H=-\sum_{(i,j)}\mathcal{J} S_i S_j$. In fact, if each magnetic moment has an electronic origin, we have $|\mathbf {m}_{i}|\sim \mu_B$, where $\mu_B$ is the Bohr magneton. By comparison between $\mathcal{J} S_i S_j$ and $H_{i,j}$, we can rewrite the Ising coupling in terms of fundamental constants as \begin{align} \label{JEST} \mathcal{J}\hbar^2 \sim \alpha^3 a_0^2 \frac{\pi\hbar c}{a^3}, \end{align} where $\alpha$ is the fine~structure~constant, and $a_0$ is the Bohr~radius.
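As a rough numerical illustration (our own back-of-the-envelope estimate, with placeholder lattice spacings), the snippet below evaluates Eq.~(\ref{JEST}) and the corresponding wavelength $hc/(\mathcal{J}\hbar^2)$ entering condition (\ref{coherent}); for interatomic distances of a few \r{A}ngstr\"om the relevant wavelength is of the order of centimeters, so the fully coherent condition $\ell\ll hc/(\mathcal{J}\hbar^2)$ requires the sample size $\ell$ to be well below the centimeter scale.
\begin{verbatim}
# Rough numerical illustration of Eq. (JEST): order of magnitude of the dipolar
# coupling J*hbar^2 and of the wavelength hc/(J*hbar^2) in condition (coherent).
import math

alpha = 1.0 / 137.036          # fine-structure constant
a0 = 0.5292                    # Bohr radius [Angstrom]
hbar_c = 1973.27               # hbar*c [eV*Angstrom]

for a in (3.0, 5.0, 10.0):     # nearest-neighbour distance [Angstrom], placeholders
    J_hbar2 = alpha**3 * a0**2 * math.pi * hbar_c / a**3      # Eq. (JEST), in eV
    wavelength_cm = 2.0 * math.pi * hbar_c / J_hbar2 * 1e-8   # hc/(J*hbar^2), in cm
    print(f"a = {a:4.1f} A:  J*hbar^2 ~ {J_hbar2:.1e} eV,"
          f"  hc/(J*hbar^2) ~ {wavelength_cm:.0f} cm")
\end{verbatim}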
We can now apply Eq.~(\ref{JEST}) to condition (\ref{coherent}) and find that the fully coherent condition amounts to: \begin{align} \label{APPFC} \alpha^3 a_0^2 \ll \frac{a^3}{\ell}, \end{align} while applying Eq.~(\ref{JEST}) to condition (\ref{incoherent}) we see that the fully incoherent condition amounts to: \begin{align} \label{APPFI} \alpha^3 a_0^2 \gg a^2. \end{align} Since $a\geq a_0$ and $\alpha\simeq 1/137$, we see that Eq.~(\ref{APPFI}) is never satisfied. Eq.~(\ref{APPFC}) can instead be satisfied at sufficiently low densities. In fact, since $\ell\sim a_0 N^{1/d}$, where $d$ is the dimension (real or effective) of the system, we see that Eq.~(\ref{APPFC}) is satisfied if $a$ grows with $N$ at least as $a\sim a_0 N^{1/(2d)}$. Whereas for finite $d$ such a condition amounts, in the thermodynamic limit, to infinitely small densities, for $d=\infty$, as occurs in a fully connected model, Eq.~(\ref{APPFC}) is certainly satisfied for any finite density. However, the fully connected interaction is a theoretical extrapolation, as the actual $d$ will remain finite. In this sense, we can consider the LMG model as a mean-field approximation of the finite-dimensional case. As a consequence, we expect that some trade-off will take place, with the fully coherent limit being satisfied only for densities lower than some threshold. The numerical value of this threshold could be calculated on the basis of the specific limiting procedure $d\to\infty$ chosen for defining the LMG model, which is beyond the aim of the present work. In any case, a threshold exists, and for densities higher than the threshold, neither Eq.~(\ref{APPFC}) nor Eq.~(\ref{APPFI}) is satisfied, and the general formula (\ref{Pnm.general}) should be applied instead. In the next Section, we will make use of the general formula (\ref{Pnm.general}) to show that, along the thermalization, whenever the fully coherent regime does not apply, as occurs at high densities, the total angular momentum is not conserved, while, for the rest of the paper, we will perform a comprehensive analysis of the thermalization by assuming the fully coherent regime, as is expected to take place at low densities. \section{Selection rules for the thermalization of the LMG model} \label{SR} To determine the matrix elements of $\bm{A}$ from Eqs.~(\ref{0Bm_explicit}) and (\ref{0Amn_explicit}), one must evaluate the dipole matrix elements $D_{m,n}$. Let us indicate by $|m\rangle=|j,p,\alpha\rangle$ and $|n\rangle=|j',p',\alpha'\rangle$ two generic eigenstates of $\bm{H}$. Since $[\bm{J}^2,J_h]=0$, for any $h=x,~y,~z$, we clearly have \begin{align} \label{JJ} J_h|m\rangle =J_h|j,p,\alpha\rangle \in \mathcal{H}_j, \end{align} so that, if we assume the fully coherent regime, from Eq.~(\ref{Dnm.coherent}), for the dipole matrix elements we have \begin{align} \label{FC} D_{m;n} &= D_{j,p,\alpha;j',p',\alpha'} = 0 \qquad \mbox{if $j\neq j'$}. \end{align} Furthermore, whereas $J_z$ conserves the parity, this is not true for $J_x$ and $J_y$, so that, in general, we also have \begin{align} \label{FC1} D_{m;n} &= D_{j,p,\alpha;j,p',\alpha'} \neq 0. \end{align} Let us now consider the fully incoherent regime. Consider, for example, the symmetric case $\gamma_y=1$, where $|j,p,\alpha\rangle=|j,p,m_z\rangle$, and choose $N=2$. The basis is spanned by the singlet state $j=0$, $m_z=0$ and the triplet states $j=1$, $m_z=-1,0,+1$.
From Eq.~(\ref{Dnm.incoherent}), we see that the dipole matrix elements contain, for example, contributions proportional to: \begin{align} \label{ARGO} &|\langle j=1,p,m_z=1|\sigma^z_1|j=0,p',m_z=0\rangle| = 0, \\ &|\langle j=1,p,m_z=1|\sigma^x_1|j=0,p',m_z=0\rangle| = \nonumber \\ \label{ARGOb} &|\langle j=1,p,m_z=1|\sigma^y_1|j=0,p',m_z=0\rangle|\neq 0, \end{align} (and similarly for $\sigma_2^h$, $h=x,~y,~z$) which give \begin{align} \label{FI} D_{m;n} &= D_{j,p,\alpha;j',p',\alpha'} \neq 0, \qquad \mbox{even if $j \neq j'$}. \end{align} Finally, let us consider the intermediate regime, and for simplicity let us again consider a system with $N=2$. From the rhs of the general formula (\ref{Pnm.general}), we see that, in particular, the contributions corresponding to the case $i=j$ and $h=l$, are proportional to the terms (\ref{ARGO})-(\ref{ARGOb}) and alike. Eqs.~(\ref{FC}), (\ref{FC1}), and (\ref{FI}), show that, whereas the thermalization process is always able to connect states with different parity, in the fully coherent regime the thermalization process does not connect states with different total spin, whereas it is able to do so outside of this regime. In the following, we will disregard the description of the states in terms of $p$ and we shall use the notation $|j,\alpha\rangle$ since, regardless of the regime, the parity of the state does not provide any useful selection rule. \section{Thermalization in the fully coherent regime for isotropic LMG models} \label{TLD} In the fully coherent limit, if the system is initially prepared in a mixture, $\bm{\rho}_j(t=0)$, of eigenstates of $\bm{J}^2$, all with eigenvalues $j$, it will remain in the subspace $\mathcal{H}_j$ for all times. In other words, the system will undergo a partial thermalization, reaching asymptotically the following thermal state \begin{align} \label{TS} \lim_{t\to\infty}\bm{\rho}_j(t)=\frac{\exp(-\beta \bm{H})\bm{P}_j}{Z_j}, \end{align} where $\bm{P}_j$ is the projector onto $\mathcal{H}_j$, and $Z_j=\mathrm{tr}(\exp(-\beta\bm{H})\bm{P}_j)$. We now briefly review the properties of the isotropic LMG model and discuss in detail its thermalization properties. If $\gamma_y=1$, the Hamiltonian (\ref{LMG2}) simplifies as \begin{align} \label{HISOa} \bm{H}=-\frac{\mathcal{J}}{N}J^2 +\frac{\mathcal{J}}{N}J_z^2 -\Gamma J_z, \end{align} where, as long as we are confined in the subspace $\mathcal{H}_j$, the first term is a constant. Note that, whereas in the full Hilbert space $\mathcal{H}$ the Hamiltonian Eq.~(\ref{HISOa}) leads to a ferromagnetic phase, in $\mathcal{H}_j$, if $\mathcal{J}>0$, as usually assumed in the LMG models, Eq.~(\ref{HISOa}) represents the classical Hamiltonian of a fully connected Ising model with an anti-ferromagnetic coupling, a highly frustrated system with no ordinary finite temperature phase-transition. Therefore, a phase transition can occur in the LMG model only at zero temperature, and the order parameter must be properly defined~\cite{LMG_Botet}. In order to have some magnetization in $\mathcal{H}_j$ with a finite temperature phase-transition, one must allow $\gamma_y$ to be different from $1$. An explicit classical analysis of the finite temperature phase transition can be found in~\cite{LMG_Das}. We stress that, even if, for $\gamma_y=1$, the Hamiltonian~(\ref{LMG2}) is somehow classical, its thermalization is governed by genuine quantum processes. More precisely, the interaction with the surrounding EM field is not trivial since all the three components of the total spin participate. 
Below we provide an exact analysis of the thermalization of the LMG model for $\gamma_y=1$. We first analyze the static and equilibrium properties, and then calculate the dipole matrix elements which, in turn, allow us to evaluate the thermalization times by using the equations discussed in Secs. II and III. \subsection{Energy levels, gap, and critical point} In the following we will work in units where $\hbar=1$. If $\gamma_y=1$, the eigenstates of the Hamiltonian $\bm{H}$ are simply given by $|m\rangle=|j,m_z\rangle$, with eigenvalues \begin{align} \label{HISO} E(j,m_z)=-\frac{\mathcal{J} j(j+1)}{N}+m_z\left(\frac{\mathcal{J}m_z}{N}-\Gamma\right), \end{align} with $m_z\in\{-j,-j+1\ldots,j\}$. We assume $N\geq 2$. Furthermore, we consider $j>0$, otherwise there exists only one state $\ket{j=0,m_z=0}$. As a consequence, we have $j\geq 1$ integer if $N$ is even or semi-integer if $N$ is odd. From Eq.~(\ref{HISO}) we have (hereafter, since $j$ is fixed, we use the shorter notation $E_{m_z}=E(j,m_z)$) \begin{subequations} \label{DEISO} \begin{align} & E_{m_z}-E_{m_z-1}=\frac{(2m_z-1) \mathcal{J}-\Gamma N}{N}, \qquad m_z\geq -j+1,\\ & E_{m_z}-E_{m_z+1}=\frac{\Gamma N-(2m_z+1)\mathcal{J}}{N}, \qquad m_z\leq j-1. \end{align} \end{subequations} In the following, we indicate by $m^{(1)}_z$ the ground state (GS), and by $m^{(2)}_z$ the first excited state (FES). Let us suppose for the moment being that $\Gamma N/(2\mathcal{J})$ is not an half integer for $j$ even (is not an integer for $j$ odd) so that, even for $N$ finite, the gaps do not close. For the GS, we have \begin{widetext} \begin{align} \label{GSISO} E_{m^{(1)}_z}=\min_{m_z} E_{m_z}, \qquad m^{(1)}_z=\mathrm{sgn}(\Gamma) \min\left\{\left[\frac{|\Gamma|N}{2\mathcal{J}}\right]_j,j\right\}, \end{align} where we have defined \begin{align} \label{NINT} [x]_j= \left\{ \begin{array}{ll} \mbox{integer closest to $x$}, &\qquad\mbox{$j$ even}, \\ \mbox{semi-integer closest to $x$}, &\qquad\mbox{$j$ odd}. \end{array} \right. \end{align} It is convenient to introduce \begin{align} \label{deltaISO} \delta=\left[\frac{\Gamma N}{2\mathcal{J}}\right]_j-\frac{\Gamma N}{2\mathcal{J}}. \end{align} By using Eqs.~(\ref{HISO}) and (\ref{GSISO}), and the definition of $\delta$, for the GS and FES levels we obtain \begin{align} \label{GSISOEXPL} E_{m^{(1)}_z}=\left\{ \begin{aligned} &-\frac{\mathcal{J} j(j+1)}{N}-\frac{\Gamma^2 N} {4\mathcal{J}}+\frac{\mathcal{J}\delta^2}{N}, &&\qquad \Gamma/\mathcal{J}\in \left[-\frac{2(j-\delta)}{N}, \frac{2(j-\delta)}{N}\right], \\ &-\frac{\mathcal{J} j(j+1)}{N}+\frac{\mathcal{J}j^2}{N}-j|\Gamma|, &&\qquad \Gamma/\mathcal{J}\notin \left[-\frac{2(j-\delta)}{N},\frac{2(j-\delta)}{N}\right], \end{aligned} \right. \end{align} \begin{align} \label{FELISO} E_{m^{(2)}_z}=\min_{m_z\neq m^{(1)}_z} E_{m_z}=\left\{ \begin{aligned} &E_{m^{(1)}_z-\mathrm{sgn}(\delta)}, &&\qquad |m^{(1)}_z-\mathrm{sgn}(\delta)|\leq j, \\ &E_{m^{(1)}_z+\mathrm{sgn}(\delta)}, &&\qquad |m^{(1)}_z-\mathrm{sgn}(\delta)|> j~\mathrm{and}~|m^{(1)}_z+\mathrm{sgn}(\delta)|\leq j, \\ &E_{\mathrm{sgn}(\Gamma)(j-1)}, &&\qquad m^{(1)}_z=\mathrm{sgn}(\Gamma)j. \end{aligned} \right. 
\end{align} From Eqs.~(\ref{DEISO})-(\ref{FELISO}) we evaluate the first gap $\Delta$ \begin{align} \label{GAPISO} \Delta=E_{m^{(2)}_z}-E_{m^{(1)}_z}=\left\{ \begin{aligned} &|\Gamma|-\mathcal{J}\frac{2j-1}{N}, &&\qquad \frac{\Gamma}{\mathcal{J}}\notin \left[-\frac{2(j-\delta)}{N},\frac{2(j-\delta)}{N}\right], \\ &\mathcal{J}\frac{1+2|\delta|}{N}, &&\qquad \frac{\Gamma}{\mathcal{J}}\in \left[-\frac{2(j-\delta)}{N},-\frac{2(j-r(\delta)-\delta)}{N}\right] \cup \left[\frac{2(j-r(\delta)-\delta)}{N},\frac{2(j-\delta)}{N}\right],\\ &\mathcal{J}\frac{1-2|\delta|}{N}, &&\qquad \frac{\Gamma}{\mathcal{J}}\in \left[-\frac{2(j-r(\delta)-\delta)}{N}, \frac{2(j-r(\delta)-\delta)}{N}\right], \end{aligned} \right. \end{align} \end{widetext} where $r(\delta) =1$ if $\delta\cdot\Gamma <0$ and $r(\delta)=0$ otherwise. If $r(\delta)=0$, the intermediate intervals in the second line of Eq.~(\ref{GAPISO}) are empty sets. Equation~(\ref{GAPISO}) shows that, for $N$ finite, we can define two ``exact critical points'', ${\Gamma}_c^+$ and ${\Gamma}_c^-$, as solutions, respectively, of the equations: \begin{align} \label{Gammacex} \frac{\Gamma_c^{\pm}}{\mathcal{J}}=\pm 2\frac{(j-\delta)}{N}. \end{align} By using the definition of $\delta$, it is easy to check that, for any $N$, Eqs.~(\ref{Gammacex}) are solved for $\Gamma$ such that $\delta=0$, \textit{i.e.} \begin{eqnarray} \label{Gammac} \frac{\Gamma_c^{\pm}}{\mathcal{J}}= \pm \frac{\Gamma_c}{\mathcal{J}}=\pm \frac{2j}{N}. \end{eqnarray} Notice that, for $j$ even (odd), the function $[x]_j$ has two values for $x$ semi-integer (integer). For $j$ even this reflects on the fact that, whenever $\Gamma N/(2\mathcal{J})=k/2$, for some odd (even, if $j$ is odd) integer $k$ such that $|k/2\pm 1/2|<j$, the GS level can be two-fold degenerate, with the states $m^{(1a)}_z=k/2-1/2$ and $m^{(1b)}_z=k/2+1/2$. The general expression of the GS, as well as of the FES, for the case in which $\Gamma N/(2\mathcal{J})$ is semi-integer for $j$ even (or integer for $j$ odd) is cumbersome. It is however clear that such a condition on the external field $\Gamma$, is of no physical interest, since one can approach an integer or a semi-integer by an infinite sequence of real numbers that are neither integer nor half-integer. Equation~(\ref{GAPISO}) shows that there is an inner region in $\Gamma$ where the gap closes to zero as $\Delta= (1-2|\delta|)\mathcal{J}/N$, a paramagnetic external region where $\Delta$ remains finite, and a transient region, whose size tends to zero as $1/N$ and $\Delta= (1+2|\delta|)\mathcal{J}/N$. Finally, we point out that analogous formulas hold for the successive gaps. For example, for the difference between the third and the second energy level, $\Delta'$, there is an interval in $\Gamma$ where $\Delta'$ goes to zero as $1/N$, and, for $N$ large, such interval and gap differ for negligible terms from, respectively, the interval and gap between GS and FES. \subsection{Partition function} For later use, we also calculate the partition function $Z_j$ for $j$ large of the type $j=\alpha N$, with $\alpha$ constant. From Eq.~(\ref{HISO}) we have \begin{align} \label{ZjISO} Z_j &= e^{\frac{\beta\mathcal{J} j(j+1)}{N}} \sum_{m_z\in[-j,-(j-1),\ldots,j]} e^{\beta N m_z \left(\frac{\mathcal{J}m_z}{N}-\Gamma\right)} \nonumber \\ &= e^{\frac{\beta\mathcal{J} j(j+1)}{N}}\sum_{x\in[-1,-(j-1)/j,\ldots,1]} e^{\beta \alpha x N \left(\mathcal{J}\alpha x-\Gamma\right)}. 
\end{align} For large $N$, the above sum can be approximated by an integral over the range $(-1,1)$, and we get \begin{align} \label{ZjISO1} Z_j=\sqrt{\frac{2\pi N}{\beta J \alpha^2}}e^{\frac{\beta\mathcal{J} j(j+1)}{N}}e^{\beta\Gamma \frac{\Gamma N}{2\mathcal{J}}}\left[1+O\left(\frac{1}{N}\right)\right]. \end{align} Notice the absence of the constant $\alpha$ in the second exponential. \subsection{Dipole matrix elements} In order to evaluate the dipole matrix elements, we shall make use of the ladder operators $J_{\pm}=J_x\pm\mathrm{i}J_y$. Let us consider two generic eigenstates $|m\rangle=|j,m_z\rangle$ and $|n\rangle=|j,n_z\rangle$, with $m_z,n_z\in\{-j,-j+1\ldots,j\}$. From Eq.~(\ref{Dnm.coherent}), by using $D_{m,n}=\gamma\sum_{h}|\langle j,m_z|2 J^h|j,n_z\rangle|^2$, we have \begin{widetext} \begin{align} \label{DISO} D_{m_z,n_z} = 2\gamma \left[(j-n_z)(j+n_z+1)\delta_{m_z,n_z+1}+ (j+n_z)(j-n_z+1)\delta_{m_z,n_z-1}\right]. \end{align} By plugging Eq.~(\ref{DISO}) into Eqs.~(\ref{0Bm_explicit}) and (\ref{0Amn_explicit}), with $A_{m,n} =A_{m_z,n_z}$, we get \begin{align} \label{BISO} A_{m_z,m_z} = 2\gamma(j-m_z+1)(j+m_z)f(E_{m_z-1}-E_{m_z})+ 2\gamma(j+m_z+1)(j-m_z)f(E_{m_z+1}-E_{m_z}), \end{align} and \begin{subequations} \label{AISO} \begin{align} & A_{m_z,m_z-1} = -2\gamma(j-m_z+1)(j+m_z)f(E_{m_z}-E_{m_z-1}), \\ & A_{m_z,m_z+1}=-2\gamma(j+m_z+1)(j-m_z)f(E_{m_z}-E_{m_z+1}), \\ & A_{m_z,n_z}=0, \qquad n_z\neq m_z,~m_z-1,~m_z+1, \end{align} \end{subequations} where we have introduced the function $f(E_m)$: \begin{align} \label{fISO} f(E_{m_z}-E_{n_z})= \frac{(E_{m_z}-E_{n_z})^3}{e^{\beta\left(E_{m_z}-E_{n_z}\right)}-1} \theta(E_{m_z}-E_{n_z})+ \frac{(E_{n_z}-E_{m_z})^3}{1-e^{-\beta\left(E_{n_z}-E_{m_z}\right)}} \theta(E_{n_z}-E_{m_z}), \end{align} $\theta(x)$ being the Heaviside step function. Plugging Eqs.~(\ref{BISO}) into Eqs.~(\ref{0Amn}) and (\ref{taumn}), we calculate the decoherence times as \begin{align} \label{taumnISO} \tau_{m_z,n_z} =&\ \left[2\gamma(j-m_z+1)(j+m_z)f(E_{m_z-1}-E_{m_z})+ 2\gamma(j+m_z+1)(j-m_z)f(E_{m_z+1}-E_{m_z})\right. \nonumber \\ &+ \left. 2\gamma(j-n_z+1)(j+n_z)f(E_{n_z-1}-E_{n_z})+ 2\gamma(j+n_z+1)(j-n_z)f(E_{n_z+1}-E_{n_z})\right]^{-1}, \qquad m_z\neq n_z. \end{align} In this framework, $j$ is fixed, but it can be chosen to be any value in agreement with Eqs.~(\ref{jrange}). Notice that, since in Eq.~(\ref{taumnISO}) $m_z\neq n_z$, the values $j=0$ (for $N$ even) and $j=1/2$ (for $N$ odd), are not allowed (obviously, for such fixed values of $j$ we have no dynamics at all). The decoherence times~(\ref{taumnISO}) can be easily evaluated numerically for any choice of the allowed $j$, $m_z$, and $n_z$. Depending on the particular value of $\Gamma$, which determines the energy gap $\Delta$ via Eq.~(\ref{GAPISO}), we can have different thermalization regimes. Below we provide analytical evaluations corroborated by exact numerical results. \subsection{Decoherence for $\Gamma/\mathcal{J}\notin \left[-\frac{2(j-\delta)}{N},\frac{2(j-\delta)}{N}\right]$} In this case, $\Delta$ is finite and, if $\beta\Gamma=O(1)$, from Eqs.~(\ref{DEISO}) we have \begin{align} \label{fISO1} f(E_{m_z\pm 1}-E_{m_z}) \sim O\left( \left| \Gamma-\mathcal{J}\frac{2m_z\pm 1}{N} \right|^3\right). 
\end{align} By using Eqs.~(\ref{fISO1}) in Eqs.~(\ref{taumnISO}), we get the two following possible scaling laws with respect to $j$ \begin{subequations} \label{taumnISO1} \begin{align} &\tau_{m_z,n_z} = O\left(\frac{1}{\gamma \left|\Gamma\right|^3 j^2}\right), \qquad |m_z|,~\mathrm{or}~|n_z| \ll j, \\ &\tau_{m_z,n_z} = O\left(\frac{1}{\gamma \left|\Gamma- \mathcal{J}\frac{2j}{N}\right|^3 j}\right), \qquad m_z\sim n_z \sim j, \\ &\tau_{m_z,n_z} = O\left(\frac{1}{\gamma \left||\Gamma|+\mathcal{J}\frac{2j}{N}\right|^3 j}\right), \qquad m_z\sim j,~ n_z \sim -j, \quad m_z\sim -j,~n_z\sim j, \\ &\tau_{m_z,n_z} = O\left(\frac{1}{\gamma \left|\Gamma+ \mathcal{J}\frac{2j}{N}\right|^3 j}\right), \qquad m_z\sim n_z \sim -j. \end{align} \end{subequations} Equations~(\ref{taumnISO1}) show that, for a given $j$, the states which remain coherent for a longer time are those with $m_z \sim n_z \sim \mathrm{sgn}(\Gamma) j$. Quite importantly, Eqs.~(\ref{taumnISO1}) implies that, if $j$ is fixed and independent of $N$, the decoherence times do not scale with $N$ at all. Consider in particular the states with $j=0$. For $N$ even, these states are the sum of all the $N!$ permutations of spin-flips with alternate signs, i.e., the $N$-particle analogous of singlet 2-particle state, an intrinsically entangled state. Equations~(\ref{taumnISO1}) tell us, if one is able to initially prepare the system with a small value of $j$, $N$-entangled states will show a strong resilience to decoherence. From the point of view of thermalization, this reflects on the overall thermalization time $\tau^{(Q)}$, which, from Eqs.~(\ref{taumnISO1}) becomes \begin{align} \label{tauQISO} \tau^{(Q)} = \max_{m_z\neq n_z}\tau_{m_z,n_z}=O\left(\frac{1}{\gamma \left||\Gamma|-\mathcal{J}\frac{2j}{N}\right|^3 j}\right). \end{align} In the limit of zero temperature $\beta\to\infty$, we can exploit \begin{align} \label{fbeta} \lim_{\beta\to \infty} f(E_{m_z}-E_{n_z})= \left\{ \begin{aligned} &0, &&\qquad E_{m_z}> E_{n_z}, \\ &\left(E_{n_z}-E_{m_z}\right)^3, &&\qquad E_{m_z}<E_{n_z}. \end{aligned} \right. \end{align} By applying Eqs.~(\ref{fbeta}) and (\ref{taumnISO}), we achieve, roughly, the same overall behavior as Eq.~(\ref{tauQISO}). \subsection{Decoherence for $\Gamma/\mathcal{J}\in \left[-\frac{2(j-\delta)}{N},\frac{2(j-\delta)}{N}\right]$} In this case, $\Delta\sim \Delta'\sim \Delta''\ldots \sim 1/N$. If $\beta\mathcal{J}=O(1)$ and $\beta|\Gamma|=O(1)$, Eqs.~(\ref{FELISO}) and (\ref{GAPISO}) and their generalization for the successive gaps (whose details are not important here) show that \begin{align} \label{fISO2} f(E_{m_z\pm 1}-E_{m_z}) \sim O\left(\frac{|\Gamma|\mathcal{J}^2}{N^2}\right). \end{align} The interval in $\Gamma$ where Eq.~(\ref{fISO2}) can be applied to the arbitrary state $m_z$ is not trivial. However, observing that Eq.~(\ref{fISO2}) can be applied to the GS and to the FES is enough to claim that, for $\Gamma/\mathcal{J}\in \left[-\frac{2(j-\delta)}{N},\frac{2(j-\delta)}{N}\right]$, \begin{align} \label{tauQISO2} \tau^{(Q)} = O\left(\frac{N^2 }{\gamma |\Gamma| \mathcal{J}^2 \left(j^2+j-\left(\frac{\Gamma N}{2\mathcal{J}}\right)^2+\frac{|\Gamma| N}{2\mathcal{J}}\right)}\right), \end{align} where we have used Eq.~(\ref{GSISO}) for the explicit form of the GS. 
From Eq.~(\ref{tauQISO2}) it follows that, if $j=O(N)$, then \begin{align} \label{tauQISO3} \tau^{(Q)}|_{\frac{|\Gamma|}{\mathcal{J}}= \frac{2j}{N} } = O\left(\frac{N }{\gamma |\Gamma|^2 \mathcal{J} }\right), \end{align} whereas \begin{align} \label{tauQISO4} \tau^{(Q)}|_{\frac{|\Gamma|}{\mathcal{J}}\ll \frac{2j}{N} } = O\left(\frac{1 }{\gamma |\Gamma| \mathcal{J}^2 }\right). \end{align} Equations~(\ref{tauQISO3}) and (\ref{tauQISO4}) show that, even though the gap closes to zero in the whole interval $\left[-\frac{2j}{N},\frac{2j}{N}\right]$, the slowing down of the dynamics takes place only at the critical points $\Gamma_c^{\pm}/\mathcal{J}=\pm 2j/N$, and the decoherence time scales only linearly in $N$. On the other hand, we find it remarkable that, at the critical point, the decoherence time turns out to be a growing function of $N$. This observation confirms and strengthens the general idea that phase transitions could be exploited to generate resilience to decoherence and large entangled states~\cite{Paganelli,Paganelli1}. Notice that Eq.~(\ref{fISO2}) is valid also for $\beta$ large, provided $N$ is sufficiently large too. However, in general, the limits $\beta\to\infty$ and $N\to\infty$ cannot be interchanged. If we are interested in $\lim_{N\to\infty}\lim_{\beta\to\infty}\tau_{m_z,n_z}$ we can simply use Eq.~(\ref{fbeta}) applied to Eq.~(\ref{taumnISO}). The special case at $\Gamma=\Gamma_c$ will be analyzed later. If instead we are interested in $\lim_{\beta\to\infty}\lim_{N\to\infty}\tau_{m_z,n_z}$ we can use Eqs.~(\ref{fISO2})-(\ref{tauQISO4}) by replacing everywhere one factor of $|\Gamma|$ with $1/\beta$. This shows that, in the thermodynamic limit, the thermalization time diverges at least linearly in $\beta$. \subsection{Dissipation} In order to evaluate the dissipation time $\tau^{(P)}$, we must find the eigenvalue $\mu_2(\bm{A})$ of the $(2j+1)\times (2j+1)$ matrix $\bm{A}$ given in Eqs.~(\ref{BISO})-(\ref{AISO}). In general, this can be done only numerically. In the present case, this task is largely simplified because $\bm{A}$ is a tridiagonal matrix. From an analytical point of view, we can apply the general rule that, for $\beta$ finite, $\lim_{N\to\infty}\tau^{(P)}\geq \lim_{N\to\infty}\tau^{(Q)}$, with $\tau^{(Q)}$ given by Eqs.~(\ref{tauQISO}), (\ref{tauQISO3}), and (\ref{tauQISO4}). Eq. (\ref{tauQISO3}), in particular, implies that the thermalization time $\tau=\max\{\tau^{(P)},\tau^{(Q)}\}$, at the critical point and for fixed $\beta$, diverges linearly in $N$. Actually, the rule $\lim_{N\to\infty}\tau^{(P)}\geq \lim_{N\to\infty}\tau^{(Q)}$ applies if \cite{OP} \begin{align} \label{check} \lim_{N\to\infty}\frac{e^{-\beta E(j,m^{(1)}_z)}}{Z_j}=0. \end{align} Comparing Eq.~(\ref{GSISOEXPL}) with Eq.~(\ref{ZjISO1}), we see that the condition (\ref{check}) is verified for any value of $\Gamma$ (with a factor that decays exponentially in $N$). Notice that the inequality $\lim_{N\to\infty}\tau^{(P)}\geq \lim_{N\to\infty}\tau^{(Q)}$ holds for any $\beta$, so that we also have $\lim_{\beta\to\infty}\lim_{N\to\infty}\tau^{(P)}\geq \lim_{\beta\to\infty}\lim_{N\to\infty}\tau^{(Q)}$. However, $\lim_{N\to\infty}\lim_{\beta\to\infty}\tau^{(Q)}=2 \lim_{N\to\infty}\lim_{\beta\to\infty}\tau^{(P)}$, since, in general, $\lim_{\beta\to\infty}\tau^{(Q)}=2 \lim_{\beta\to\infty}\tau^{(P)}$~\cite{OP}. \subsection{Dissipation and decoherence at the critical point at zero temperature} The critical point at vanishing temperatures is intriguing.
Indeed, if we choose $N$ even and $j=N/2$, this setup coincides with the one used to investigate the quantum adiabatic algorithm~\cite{LMG_Santoro}. From Eq.~(\ref{GAPISO}), for $N$ large enough we have $\Gamma_c=\pm j$ and the GS is $m^{(1)}_z=\mathrm{sgn}(\Gamma)j$. By using Eq.~(\ref{fbeta}), from Eqs.~(\ref{AISO}) we see that, for any finite $N$, in the limit $\beta\to\infty$ the matrix $\bm{A}$ becomes triangular and, as a consequence, from Eq.~(\ref{BISO}) for its lowest non zero eigenvalue $\mu_2$ we obtain \begin{align} \label{mu2crit} \lim_{\beta\to\infty}\mu_2(\bm{A})=2\gamma(2j-1)\Delta^3, \end{align} where $\Delta$ is given by Eq.~(\ref{GAPISO}) evaluated at $|\Gamma|\leq |\Gamma_c|=j$. For $N$ large enough, we thus have: \begin{align} \label{tauPcrit} \lim_{\beta\to\infty}\tau^{(P)}=\frac{N^2}{2\gamma\mathcal{J}^3}. \end{align} Moreover, for the property $\lim_{\beta\to\infty}\tau^{(Q)}=2 \lim_{\beta\to\infty}\tau^{(P)}$, we have also: \begin{align} \label{tauQcrit} \lim_{\beta\to\infty}\tau^{(Q)}=\frac{N^2}{\gamma\mathcal{J}^3}, \end{align} and therefore: \begin{align} \label{taucrit} \lim_{\beta\to\infty}\tau=\frac{N^2}{\gamma\mathcal{J}^3}. \end{align} The present thermalization time $\tau$, which grows as $N^2$, is to be compared with the characteristic time to perform the quantum adiabatic algorithm~\cite{QAD}, which grows as $\tau_{ad} \sim N/\Delta^2=O(N^3)$. This difference must be attributed to the spontaneous emission process, the only mechanism at $T=0$ by which the system, when in contact with the blackbody radiation, delivers its energy to the environment. Apparently, this mechanism provides a convergence toward the GS more efficient than that obtained in a slow transformation of the Hamiltonian parameters without dissipative effects. \begin{figure} \begin{center} {\includegraphics[width=0.45\textwidth,clip]{L_log_tau_ad_N_100_j_50_beta_1_Gamma_2.pdf}} {\includegraphics[width=0.45\textwidth,clip]{L_log_tau_ad_N_100_j_50_beta_10_Gamma_2.pdf}} {\includegraphics[width=0.45\textwidth,clip]{L_log_tau_ad_N_100_j_50_beta_1_Gamma_1.pdf}} {\includegraphics[width=0.45\textwidth,clip]{L_log_tau_ad_N_100_j_50_beta_10_Gamma_1.pdf}} {\includegraphics[width=0.45\textwidth,clip]{L_log_tau_ad_N_100_j_50_beta_1_Gamma_05.pdf}} {\includegraphics[width=0.45\textwidth,clip]{L_log_tau_ad_N_100_j_50_beta_10_Gamma_05.pdf}} \caption{(Color online) Log plots of the dimensionless quantities $b\tau_{m,n}$, where $b=2\gamma\mathcal{J}^3$, as a function of $m$ and $n$, with $m\neq n$, obtained from Eq.~(\ref{taumnISO}) with $N=100$, $j=N/2$, and, from top to bottom, $\Gamma=2\Gamma_c$ (paramagnetic), $\Gamma=\Gamma_c$ (critical point), and $\Gamma=0.5\Gamma_c$ (ferromagnetic), each evaluated at the dimensionless inverse temperatures $\beta\mathcal{J}=1$ (left) and $\beta\mathcal{J}=10$ (right). 
Notice that the left and right panels are different in each case.} \label{fig.taumn} \end{center} \end{figure} \begin{figure}[htb] \begin{center} {\includegraphics[width=0.44\textwidth,clip]{log_log_first_enlargment_no_guide_lines_tau_j_j_minus_1_vs_n_Gamma_more_Gammac_beta_1.pdf}} {\includegraphics[width=0.44\textwidth,clip]{log_log_first_enlargment_no_guide_lines_tau_j_j_minus_1_vs_n_Gamma_less_Gammac_beta_1.pdf}} {\includegraphics[width=0.44\textwidth,clip]{log_log_first_enlargment_no_guide_lines_tau_j_j_minus_1_vs_n_Gamma_more_Gammac_beta_10.pdf}} {\includegraphics[width=0.44\textwidth,clip]{log_log_first_enlargment_no_guide_lines_tau_j_j_minus_1_vs_n_Gamma_less_Gammac_beta_10.pdf}} {\includegraphics[width=0.44\textwidth,clip]{log_log_first_enlargment_no_guide_lines_tau_j_j_minus_1_vs_n_Gamma_more_Gammac_beta_100.pdf}} {\includegraphics[width=0.44\textwidth,clip]{log_log_first_enlargment_no_guide_lines_tau_j_j_minus_1_vs_n_Gamma_less_Gammac_beta_100.pdf}} {\includegraphics[width=0.44\textwidth,clip]{log_log_first_enlargment_no_guide_lines_tau_j_j_minus_1_vs_n_Gamma_more_Gammac_beta_1000.pdf}} {\includegraphics[width=0.44\textwidth,clip]{log_log_first_enlargment_no_guide_lines_tau_j_j_minus_1_vs_n_Gamma_less_Gammac_beta_1000.pdf}} \caption{(Color online) Log-log plots of the dimensionless quantities $b\tau_{m=j,n=j-1}$, where $b=2\gamma\mathcal{J}^3$, obtained from Eq.~(\ref{taumnISO}), as a function of $N$ even, calculated for $j=N/2$, and several values of $\Gamma>0$ approaching $\Gamma_c>0$, Eq.~(\ref{Gammac}), from above, i.e., in the paramagnetic region (left panels), and from below, i.e., in the ferromagnetic region (right panels). Different dimensionless inverse temperatures are considered, from top to bottom: $\beta\mathcal{J}=1$, $\beta\mathcal{J}=10$, $\beta\mathcal{J}=100$, and $\beta\mathcal{J}=1000$. The function $\lim_{\beta\to\infty}\tau^{(Q)}$ is obtained from Eq.~(\ref{tauQcrit}). Notice however that, by definition, $\tau^{(Q)}=\max_{m\neq n}\tau_{m,n}\geq \tau_{j,j-1}$ (compare Fig. \ref{fig.taumn}).} \label{fig.loglog} \end{center} \end{figure} \begin{figure}[htb] \begin{center} {\includegraphics[width=0.45\textwidth,clip]{tauP_beta1.pdf}} {\includegraphics[width=0.45\textwidth,clip]{tauP_beta10.pdf}} {\includegraphics[width=0.45\textwidth,clip]{tauP_beta100.pdf}} {\includegraphics[width=0.45\textwidth,clip]{tauP_beta1000.pdf}} \caption{(Color online) Log-log plots of the dimensionless quantity $b\tau^{(P)}=b/\mu_2(\bm{A})$, where $b=2\gamma\mathcal{J}^3$ and $\mu_2(\bm{A})$ is the smallest non zero eigenvalue of $\bm{A}$, the matrix given by Eqs.~(\ref{BISO})-(\ref{AISO}), as a function of $N$ even, evaluated for $j=N/2$ and several values of $\beta$ and $\Gamma$, above and below the critical point $\Gamma_c$. Top left panel $\beta\mathcal{J}=1$; top right panel $\beta\mathcal{J}=10$; bottom left panel $\beta\mathcal{J}=100$; bottom right panel $\beta\mathcal{J}=1000$. The function $\lim_{\beta\to\infty}\tau^{(P)}$ is given by Eq.~(\ref{tauPcrit}) evaluated at $0<\Gamma\leq\Gamma_c$ (it provides the same limit in all the ferromagnetic region). For the present values of $\beta$, $\lim_{\beta\to\infty}\tau^{(P)}$ matches well with the data corresponding to $\Gamma=\Gamma_c$ and $\beta\mathcal{J}=1000$ when $N\leq 10^3$. For larger values of $\beta$ the agreement extends to greater values of $N$ and also to data obtained for $\Gamma<\Gamma_c$. 
} \label{fig.tauP} \end{center} \end{figure} \end{widetext} \section{Numerical analysis of isotropic LMG models} \label{numerical} We performed an exact numerical analysis of Eq.~(\ref{taumnISO}) and of the eigenvalues of the matrix $\bm{A}$ provided by Eqs.~(\ref{BISO})-(\ref{AISO}). The numerical analysis confirms our analytical formulas and, in addition, makes evident the finite-size effects, which are a fingerprint of the phase transition. Figure~\ref{fig.taumn} provides 3D plots of $\tau_{m,n}$, as a function of $m$ and $n$, calculated for a few choices of $\Gamma$ and $\beta$. In agreement with Eqs.~(\ref{taumnISO1}), the maximum of $\tau_{m,n}$ occurs at $m\simeq n \simeq j/2$. Figure~\ref{fig.loglog} shows the behavior of $\tau_{m=j,n=j-1}$ (i.e., one of the components of the decoherence times $\tau_{m,n}$ close to $\tau^{(Q)}=\max_{m\neq n}\tau_{m,n}$) as a function of the system size $N$ at different temperatures and for several values of $\Gamma$ approaching the critical point $\Gamma_c$ in both the paramagnetic and ferromagnetic regions. These plots confirm, in particular, that, for $\beta$ finite, $\tau_{m=j,n=j-1}$ diverges only at the critical point. More precisely, the divergence is linear in $N$ for $\beta$ sufficiently small, i.e., for $\beta\sim O(1/\Gamma)\sim O(1/\mathcal{J})$, in agreement with Eq.~(\ref{tauQISO3}). A different situation occurs instead for $\beta\to\infty$, where the divergence is quadratic in $N$ and takes place for any $\Gamma$ in the ferromagnetic region, in agreement with Eq.~(\ref{tauQcrit}). Fig.~\ref{fig.loglog} also provides clear evidence of finite-size effects in the proximity of the critical point, which are particularly important in the ferromagnetic region and at low temperatures. Beyond some threshold $N_s(\beta,\Gamma)$, these finite-size effects decay approximately as power laws in $N$ (notice that Fig.~\ref{fig.loglog} is in log-log scale). In general, $N_s(\beta,\Gamma)$ turns out to be a non-increasing function of $\beta$, whereas, for a given $\beta$, it grows as $\Gamma$ approaches $\Gamma_c$. Figure~\ref{fig.tauP} shows $\tau^{(P)}$ as a function of the system size $N$. Unlike $\tau^{(Q)}$, $\tau^{(P)}$ decays as a power law in the paramagnetic region, $\Gamma>\Gamma_c$, whereas in the ferromagnetic region, $\Gamma<\Gamma_c$, it grows approximately as a power law even for finite $\beta$. Actually, the behavior of $\tau^{(P)}$ in the ferromagnetic region is not as smooth as shown in Fig.~\ref{fig.tauP}: by varying $N$ we find periodic oscillations among three smooth curves associated with different sequences of even $N$. The data shown in Fig.~\ref{fig.tauP} correspond to one of these sequences; for the others we find a power-law growth with a similar exponent but a different prefactor. Another fingerprint of the phase transition that takes place in the LMG model can be seen in Fig.~\ref{fig.tauP} by observing the agreement between Eq.~(\ref{tauPcrit}) and $\tau^{(P)}$ when the latter is evaluated at larger and larger values of $\beta$ (see bottom panels). More precisely, it turns out that, for $N$ sufficiently large, $\tau^{(P)}(\Gamma_c)\geq \tau^{(P)}(\Gamma)$ for any $\Gamma$, and that, for $\beta$ sufficiently small, i.e., $\beta \sim O(1/\Gamma)\sim O(1/\mathcal{J})$, $\tau^{(P)}$ grows no more than linearly with $N$, while it grows no more than quadratically in $N$ for $\beta$ large.
\section{Conclusions} \label{conclusions} We have addressed the thermalization of the LMG model in contact with a blackbody radiation. The analysis is done within LBA, a general mathematical set up developed in~\cite{OP} which allows us to analyze the thermalization processes of extensive many-body systems. When applied to the LMG model embedded in a blackbody radiation, the LBA equations (which, in the fully coherent regime, coincide with the QOME) are relatively simple and can be studied analytically in great detail. A series of novel results emerge. First, by analyzing the involved dipole-matrix elements, we find that, according to the conditions~(\ref{coherent}) and (\ref{incoherent}), in the general LMG model, i.e., independently of the anisotropy parameter $\gamma_y$, a full thermalization can take place only if the density is sufficiently high, while, in the limit of low density, the system thermalizes partially, namely, within the Hilbert subspaces $\mathcal{H}_j$ where the total spin has a fixed value. Second, in the fully coherent regime, and for the isotropic case $\gamma_y=1$, we are able to perform a comprehensive analysis of the thermalization. We evaluate the characteristic thermalization time $\tau$ almost analytically, as a function of the Hamiltonian parameters and of the system size $N$. Third, we show that, as a function of $N$, $\tau$ diverges only at the critical point and in the ferromagnetic region. This divergence is no more than linear in $N$ for $\beta$ small, and no more than quadratic in $N$ for $\beta$ large. In particular, in the ferromagnetic region and at zero temperature, we prove that $\tau$ diverges just quadratically with $N$, while quantum adiabatic algorithms lead to an adiabatic time that diverges with the cube of $N$. The latter result sheds new light on the problem of preparing a quantum system in a target state. If the target state is the GS of a subspace of the Hilbert space of the system, cooling the system at sufficiently small temperatures and ensuring, at the same time, that the system remains sufficiently confined in the chosen subspace, may produce an arbitrarily accurate result. This procedure, at least for the present LMG model coupled to a blackbody radiation, outperforms the procedure suggested by quantum adiabatic algorithms, where counterproductive costly efforts are spent to avoid dissipative effects. For more general many-body systems, it could be appropriate to consider cooling processes induced by different, possibly engineered, thermal reservoirs. The no-go theorem for exact ground-state cooling~\cite{nogotheorem}, which apparently prohibits the application of this idea, can be effectively evaded as discussed in~\cite{Viola}.
\section{Introduction}\label{intro} The motion of a compressible, viscous, heat-conductive, isotropic Newtonian fluid is described by the system of equations \begin{equation}\label{1.1} \left\{ \begin{aligned} & \rho_t+\nabla\cdot(\rho\bm{u})=0 \\ & \rho\bm{u}_t+\rho\bm{u}\cdot\nabla\bm{u}+\nabla p-\Big(\mu'-\frac{2}{3}\mu\Big)\nabla(\nabla\cdot\bm{u})-\nabla\cdot\big(\mu(\nabla\bm{u}+(\nabla\bm{u})^T)\big)=0 \\ & \Big(\rho\Big(\frac{|\bm{u}|^2}{2}+e\Big)\Big)_t+\nabla\cdot\Big(\rho\bm{u}\Big(\frac{|\bm{u}|^2}{2}+e\Big)+p\bm{u}\Big)-\nabla\cdot\Big(\mu(\nabla\bm{u}+(\nabla\bm{u})^T)\bm{u} \\ & +\Big(\mu'-\frac{2}{3}\mu\Big)\bm{u}(\nabla\cdot\bm{u})\Big)=\nabla\cdot(\kappa\nabla T), \end{aligned} \right. \end{equation} where $t\geqslant 0$ and $x=(x_1,x_2,x_3)\in\mathbb{R}^3$; $\rho>0$ denotes the density, $\bm{u}=(u_1,u_2,u_3)$ the fluid velocity, $T>0$ the absolute temperature, $e$ the internal energy, and $p$ the pressure. The constants $\mu$ and $\mu'$, satisfying $$\mu>0,\quad \mu'+\frac{2\mu}{3}\geqslant 0,$$ describe the viscosity. There is a rich literature on the compressible Navier-Stokes system, including small classical solutions with finite energy by Matsumura-Nishida \cite{Matsumura-Nishida} (see also Huang-Li \cite{Huang-Li} for the case of vacuum), weak finite-energy solutions by Lions \cite{Lions}, variational solutions by Feireisl \cite{Feireisl} and Feireisl-Novotn$\acute{y}$-Petzeltov$\acute{a}$ \cite{Feireisl-Novotny-Petzeltov}, solutions in Besov spaces with interpolation index one by Chikami-Danchin \cite{Chikami-Danchin} and Danchin \cite{Danchin}, and self-similar solutions by Guo-Jiang \cite{Guo-Jiang}, Li-Chen-Xie \cite{Li-Chen-Xie} (density-dependent viscosity) and Germain-Iwabuchi \cite{Germain-Iwabuchi}. There is also some literature related to the vacuum. Xin \cite{Xin} proved the non-existence of smooth solutions for initial density with compact support. Hoff and Smoller \cite{Hoff-Smoller} considered the 1-D barotropic Navier-Stokes equations and showed that the persistence of the almost-everywhere positivity of the density can prevent the formation of a vacuum state. Jang and Masmoudi \cite{Jang-Masmoudi} obtained local solutions of the 3D compressible Euler equations under the barotropic condition with a physical vacuum, see also \cite{Jang-Masmoudi2} for related problems on the vacuum state. Recently, Lai, Liu and Tarfulea \cite{Lai-Liu-Tarfulea} studied the derivation of some non-isothermal hydrodynamic models (including the non-isothermal ideal gas) and established the corresponding maximum principle. In the classical paper \cite{Matsumura-Nishida}, Matsumura and Nishida proved the global existence of classical solutions with small data of $O(\varepsilon)$ in $H^s$, where $\varepsilon$ depends on $\mu$, $\mu'$ and $\kappa$. The main purpose of this paper is to improve the result of \cite{Matsumura-Nishida} in the radially symmetric case. In this case we can take the small constant $\varepsilon$ independent of $\mu$ and $\mu'$, depending only on $\kappa$. More precisely, we can set $\mu=\mu'=0$ and \eqref{1.1} will thus reduce to the following system \begin{equation}\label{Heat-Hydro} \left\{ \begin{aligned} & \rho_t+\nabla\cdot(\rho\bm{u})=0 \\ & (\rho\bm{u})_t+\nabla\cdot(\rho\bm{u}\otimes\bm{u})+\nabla p=0 \\ & \left(\rho\left(\frac{1}{2}|\bm{u}|^2+e\right)\right)_t+\nabla\cdot\left(\rho\bm{u}\left(\frac{1}{2}|\bm{u}|^2+e\right)+p\bm{u}\right)=\nabla\cdot(\kappa\nabla T), \end{aligned} \right.
\end{equation} We assume the following conditions on \eqref{Heat-Hydro}: \begin{itemize} \item[1.] The gas is ideal: $p=RT\rho$, where $R$ is a positive constant; \item[2.] The gas is polytropic: $e=c_VT$, where $c_V$ is a positive constant which denotes the specific heat at constant volume. \end{itemize} Assume that the positive constants $R$, $c_V$, $\kappa$ are all equal to $1$; then the system \eqref{Heat-Hydro} can be written in the following form \begin{equation}\label{Heat-Hydro-2} \left\{ \begin{aligned} & \rho_t+\nabla\cdot(\rho\bm{u})=0 \\ & \bm{u}_t+\bm{u}\cdot\nabla\bm{u}+\frac{1}{\rho}\nabla(\rho T)=0 \\ & T_t+\bm{u}\cdot\nabla T+T(\nabla\cdot\bm{u})=\frac{\Delta T}{\rho}. \end{aligned} \right. \end{equation} Suppose that the initial data \begin{equation}\label{data small} \rho(0,x)=1+a_0(r),\quad T(0,x)=1+\theta_0(r),\quad\bm{u}(0,x)=\bm{u}_0(r)=u_0(r)\bm{\omega} \end{equation} satisfy $$\|a_0\|_{H^s}^2+\|\bm{u}_0\|_{H^s}^2+\|\theta_0\|_{H^s}^2\leqslant\varepsilon^2,$$ where $s>5$ is an integer, $r=|x|$ and $\bm{\omega}=\frac{x}{|x|}$, and $\varepsilon>0$ is a small constant. By the uniqueness of classical solutions, the solutions must have the following form $$\rho=1+a(t,r),\quad T=1+\theta(t,r),\quad \bm{u}=u(t,r)\bm{\omega},$$ as a result, we obtain $$\nabla\times\bm{u}\equiv 0.$$ So we may consider the following system \begin{equation}\label{Heat-Hydro-3} \left\{ \begin{aligned} & a_t+\bm{u}\cdot\nabla a+(1+a)(\nabla\cdot\bm{u})=0 \\ & \bm{u}_t+\bm{u}\cdot\nabla\bm{u}+\nabla\theta+\frac{1+\theta}{1+a}\nabla a=0 \\ & \theta_t+\bm{u}\cdot\nabla\theta+(1+\theta)(\nabla\cdot\bm{u})=\frac{\Delta\theta}{1+a} \end{aligned} \right. \end{equation} with the condition \begin{equation}\label{irrotational} \nabla\times\bm{u}\equiv 0. \end{equation} Our main result can be stated as follows. \begin{theorem}\label{main} Consider the Cauchy problem of the three dimensional system \eqref{Heat-Hydro-2}-\eqref{irrotational} (or \eqref{Heat-Hydro-3}-\eqref{irrotational}) with data \eqref{data small}. Then there exists a constant $\varepsilon_0>0$ such that for all $\varepsilon<\varepsilon_0$, the system \eqref{Heat-Hydro-2}-\eqref{irrotational} (or \eqref{Heat-Hydro-3}-\eqref{irrotational}) admits a global solution $$(a,\bm{u})\in L^\infty(\mathbb{R}_+;H^s(\mathbb{R}^3))\cap L^2(\mathbb{R}_+;H^s(\mathbb{R}^3)),$$ and $$\theta\in L^\infty(\mathbb{R}_+;H^s(\mathbb{R}^3))\cap L^2(\mathbb{R}_+;H^{s+1}(\mathbb{R}^3)).$$ \end{theorem} As Theorem \ref{main} shows, the heat conduction effect alone can prevent the formation of shocks despite the lack of viscosity.
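\begin{remark} For the reader's convenience, we sketch how the temperature equation in \eqref{Heat-Hydro-2} follows from \eqref{Heat-Hydro}; this is only an outline of the classical computation, under the normalization $R=c_V=\kappa=1$. Taking the inner product of the momentum equation with $\bm{u}$ gives the kinetic-energy balance; subtracting it from the energy equation and using the continuity equation, one finds $$\rho\big(e_t+\bm{u}\cdot\nabla e\big)+p\,(\nabla\cdot\bm{u})=\Delta T.$$ Since $e=T$ and $p=\rho T$, dividing by $\rho$ yields $$T_t+\bm{u}\cdot\nabla T+T(\nabla\cdot\bm{u})=\frac{\Delta T}{\rho},$$ which is the third equation of \eqref{Heat-Hydro-2}. \end{remark}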
\begin{remark} It is clear that the solutions of \eqref{Heat-Hydro} satisfy the following conservation laws $$\frac{d}{dt}\int a dx\equiv 0, \qquad \frac{d}{dt}\int (a+1)\bm{u}dx\equiv 0,$$ and $$\frac{d}{dt}\int\left(\frac{|\bm{u}|^2}{2}+\frac{a|\bm{u}|^2}{2}+a\theta+\theta\right)dx\equiv 0.$$ \end{remark} We set $$ \begin{aligned} E_{k,1}(t) & \triangleq\sum\limits_{|\alpha|\leqslant k}\sup\limits_{\tau\in[0,t]}\big(\|\partial^\alpha a(\tau)\|_{L^2}^2+\|\partial^\alpha\bm{u}(\tau)\|_{L^2}^2+\|\partial^\alpha \theta(\tau)\|_{L^2}^2\big) \\ & +\sum\limits_{|\alpha|\leqslant k}\int_0^t\|\nabla\partial^\alpha\theta(\tau)\|_{L^2}^2d\tau \end{aligned} $$ for $0\leqslant k\leqslant s$, and $$E_{k,2}(t)\triangleq\sum\limits_{|\alpha|\leqslant k-1}\int_0^t\left(\|\nabla\partial^\alpha a(\tau)\|_{L^2}^2+\|\nabla\partial^\alpha\bm{u}(\tau)\|_{L^2}^2\right)d\tau$$ for $1\leqslant k\leqslant s$, where $$\partial=(\partial_t,\partial_{x_1},\partial_{x_2},\partial_{x_3}).$$ According to \eqref{data small}, it is clear that there exists $M>0$ such that $$E_{s,1}(0)+E_{s,2}(0)\leqslant M^2\varepsilon^2.$$ Due to the local existence result, there exists a positive time $t_*\leqslant +\infty$ such that \begin{equation}\label{energy small assumption} t_*=\max\big\{t\geqslant 0 ~ \big| ~ E_{s,1}(\tau)+E_{s,2}(\tau)\leqslant\varepsilon, ~ \forall ~ \tau\in[0,t)\big\}. \end{equation} We have the following lemma. \begin{lemma}\label{entropy S lemma} Let \begin{equation}\label{entropy S} S=\ln\left(\frac{T}{\rho}\right)=\ln\left(\frac{1+\theta}{1+a}\right) \end{equation} denote the entropy per unit mass; then the total entropy of the system is non-decreasing. \end{lemma} \begin{proof} The entropy per unit volume is $$\rho S=\rho\ln\left(\frac{T}{\rho}\right)=\rho\ln T-\rho\ln\rho,$$ and we can establish the evolution equation of $\rho S$: $$\partial_t(\rho S)=\rho_tS+\rho S_t=-S\nabla\cdot(\rho\bm{u})+\nabla\cdot(\rho\bm{u})+\frac{\rho T_t}{T},$$ then the third equation of \eqref{Heat-Hydro-2} gives the result $$ \begin{aligned} \frac{d}{dt}\int\rho Sdx & =\int\rho\bm{u}\cdot\nabla S+\frac{\rho T_t}{T}dx \\ & =\int\frac{\rho\bm{u}\cdot\nabla T}{T}-\bm{u}\cdot\nabla\rho+\frac{\rho}{T}\left(\frac{\Delta T}{\rho}-\bm{u}\cdot\nabla T-T(\nabla\cdot\bm{u})\right)dx \\ & =\int\frac{\Delta T}{T}dx=\int\frac{|\nabla T|^2}{T^2}dx\geqslant 0. \end{aligned} $$ This completes the proof of Lemma \ref{entropy S lemma}. \end{proof} \section{Basic Energy Estimate} By Lemma \ref{entropy S lemma}, we have \begin{equation}\label{Entropy increase} \frac{d}{dt}\int(1+a)\ln\left(\frac{1+a}{1+\theta}\right)dx+\int\frac{|\nabla\theta|^2}{(1+\theta)^2}dx=0. \end{equation} Taking a linear combination of \eqref{Entropy increase} and the conserved quantities, we obtain \begin{equation}\label{Basic L2 estimate} \frac{d}{dt}\int(1+a)\ln\left(\frac{1+a}{1+\theta}\right)+\left(\frac{|\bm{u}|^2}{2}+\frac{a|\bm{u}|^2}{2}-a+a\theta+\theta\right)dx+\int\frac{|\nabla\theta|^2}{(1+\theta)^2}dx=0.
\end{equation} Making a Taylor expansion of \eqref{entropy S} with respect to $a$ and $\theta$, we have $$\ln\left(\frac{1+a}{1+\theta}\right)=a-\theta-\frac{a^2}{2}+\frac{\theta^2}{2}+r(a,\theta),$$ where the remainder $r(a,\theta)$ satisfies $$r(a,\theta)=O(a^3+\theta^3),\quad |a|+|\theta|\to 0.$$ Going back to \eqref{Basic L2 estimate}, we get $$ \begin{aligned} & \|a(t)\|_{L^2}^2+\|\bm{u}(t)\|_{L^2}^2+\|\theta(t)\|_{L^2}^2+2\int_0^t\|\nabla\theta(\tau)\|_{L^2}^2d\tau \\ = & ~ E_{0,1}(0)+\int a_0(\theta_0^2+|\bm{u}_0|^2-a_0^2)+(1+a_0)r(a_0,\theta_0)dx \\ - & ~ \int a(\theta^2+|\bm{u}|^2-a^2)+(1+a)r(a,\theta)dx+2\int_0^t\int\theta|\nabla\theta|^2\frac{(2+\theta)}{(1+\theta)^2}dxd\tau. \end{aligned} $$ By \eqref{energy small assumption}, we have $$\|a\|_{L^\infty}+\|\theta\|_{L^\infty}\leqslant C\sqrt{E_{2,1}(t)}\leqslant C\varepsilon^{\frac{1}{2}},$$ which gives \begin{equation}\label{Key Energy estimate 0} E_{0,1}(t)\lesssim E_{0,1}(0)+E_{2,1}^{3/2}(0)+E_{2,1}^{3/2}(t), \end{equation} here and hereafter $A\lesssim B$ means $A\leqslant CB$ with a positive constant $C$. \section{The Estimate of $E_k$} First, we write the equations of $a$ and $\bm{u}$ in \eqref{Heat-Hydro-3} as the following symmetric hyperbolic system: \begin{equation}\label{Heat-Hydro-Hyperbolic system} A_0(\bm{U},\theta)\bm{U}_t+\sum\limits_{j=1}^3A_j(\bm{U},\theta)\partial_j\bm{U}+\bm{F}=0, \end{equation} where $$ \bm{U}=\left( \begin{array}{c} a \\ u_1 \\ u_2 \\ u_3 \end{array} \right),\quad A_0=\left( \begin{array}{cccc} \frac{1+\theta}{1+a} & 0 & 0 & 0 \\ 0 & 1+a & 0 & 0 \\ 0 & 0 & 1+a & 0 \\ 0 & 0 & 0 & 1+a \\ \end{array} \right), $$ and $$ A_j=\left( \begin{array}{cccc} \frac{1+\theta}{1+a}u_j & (1+\theta)\delta_{1j} & (1+\theta)\delta_{2j} & (1+\theta)\delta_{3j} \\ (1+\theta)\delta_{1j} & (1+a)u_j & 0 & 0 \\ (1+\theta)\delta_{2j} & 0 & (1+a)u_j & 0 \\ (1+\theta)\delta_{3j} & 0 & 0 & (1+a)u_j \\ \end{array} \right),\quad \bm{F}=\left( \begin{array}{c} 0 \\ (1+a)\theta_{x_1} \\ (1+a)\theta_{x_2} \\ (1+a)\theta_{x_3} \end{array} \right). $$ Applying $\partial^\alpha$ to \eqref{Heat-Hydro-Hyperbolic system}, where the multi-index $\alpha$ satisfies $0<|\alpha|\leqslant k$ and the positive integer $k\leqslant s$, we obtain $$A_0\partial^\alpha\bm{U}_t+\sum\limits_{j=1}^3A_j\partial_j\partial^\alpha\bm{U}=\left(A_0\partial^\alpha\bm{U}_t-\partial^\alpha(A_0\bm{U}_t)\right)+\sum\limits_{j=1}^3\big( A_j\partial_j\partial^\alpha\bm{U}-\partial^\alpha\left(A_j\partial_j\bm{U}\right)\big)-\partial^\alpha\bm{F}.$$ Then we take the $L^2$ inner product of the above equation with $\partial^\alpha\bm{U}$ and integrate with respect to $t$.
By the symmetry of $A_j$ and $A_0$, we have the following energy estimate \begin{equation}\label{Key Energy estimate k-1-1} \begin{aligned} & \int\partial^\alpha\bm{U}^TA_0\partial^\alpha\bm{U}dx \\ = & ~ 2\int_0^t\int\partial^\alpha\bm{U}^T\Big(\left(A_0\partial^\alpha\bm{U}_t-\partial^\alpha(A_0\bm{U}_t)\right)+\sum\limits_{j=1}^3\big(A_j\partial_j\partial^\alpha\bm{U}- \partial^\alpha\left(A_j\partial_j\bm{U}\right)\big)\Big)dxd\tau \\ + & \int\partial^\alpha\bm{U}_0^TA_0(\bm{U}_0,\theta_0)\partial^\alpha\bm{U}_0dx+\int_0^t\int\partial^\alpha\bm{U}^T\left(\partial_tA_0+\sum\limits_{j=1}^3\partial_jA_j\right) \partial^\alpha\bm{U}dxd\tau \\ - & ~ 2\int_0^t\int \partial^\alpha\bm{F}\cdot\partial^\alpha\bm{U}dxd\tau, \end{aligned} \end{equation} where $$-2\int_0^t\int\partial^\alpha\bm{F}\cdot\partial^\alpha\bm{U}dxd\tau=-2\int_0^t\int\partial^\alpha\bm{u}\cdot\nabla\partial^\alpha\theta+\partial^\alpha\bm{u}\cdot\partial^\alpha (a\nabla\theta)dxd\tau.$$ On the other hand, we make energy estimate of $\theta$ to obtain \begin{equation}\label{Key Energy estimate k-1-2} \begin{aligned} & \int|\partial^\alpha\theta|^2dx+2\int_0^t\int|\nabla\partial^\alpha\theta|^2dxd\tau \\ = & ~ \int|\partial^\alpha\theta_0|^2dx+2\int_0^t\int\partial^\alpha\bm{u}\cdot\nabla\partial^\alpha\theta dx+2\int_0^t\int\partial^\alpha(\theta\bm{u})\cdot\nabla\partial^\alpha \theta dxd\tau \\ + & ~ 2\int_0^t\int\frac{\partial^\alpha\theta\nabla\partial^\alpha\theta\cdot\nabla a}{(1+a)^2}+\frac{a|\nabla\partial^\alpha\theta|^2}{1+a}+\partial^\alpha\theta\left( \partial^\alpha\left(\frac{\Delta\theta}{1+a}\right)-\frac{\Delta\partial^\alpha\theta}{1+a}\right)dxd\tau. \\ \end{aligned} \end{equation} Adding \eqref{Key Energy estimate k-1-1} to \eqref{Key Energy estimate k-1-2}, we get $$ \begin{aligned} & \int\partial^\alpha\bm{U}^TA_0\partial^\alpha\bm{U}dx+\int|\partial^\alpha\theta|^2dx+2\int_0^t\int|\nabla\partial^\alpha\theta|^2dxd\tau \\ = & ~ \int\partial^\alpha\bm{U}_0^TA_0(\bm{U}_0,\theta_0)\partial^\alpha\bm{U}_0dx+\int|\partial^\alpha\theta_0|^2dx \\ + & ~ 2\int_0^t\int\partial^\alpha(\theta\bm{u})\cdot\nabla\partial^\alpha\theta-\partial^\alpha\bm{u}\cdot\partial^\alpha(a\nabla\theta)dxd\tau \\ + & ~ 2\int_0^t\int\frac{\partial^\alpha\theta\nabla\partial^\alpha\theta\cdot\nabla a}{(1+a)^2}+\frac{a|\nabla\partial^\alpha\theta|^2}{1+a}+\partial^\alpha\theta\left( \partial^\alpha\left(\frac{\Delta\theta}{1+a}\right)-\frac{\Delta\partial^\alpha\theta}{1+a}\right)dxd\tau \\ + & ~ 2\int_0^t\int\partial^\alpha\bm{U}^T\Big(\left(A_0\partial^\alpha\bm{U}_t-\partial^\alpha(A_0\bm{U}_t)\right)+\sum\limits_{j=1}^3\big(A_j\partial_j\partial^\alpha\bm{U} -\partial^\alpha\left(A_j\partial_j\bm{U}\right)\big)\Big)dxd\tau \\ + & ~ \int_0^t\int\partial^\alpha\bm{U}^T\left(\partial_tA_0+\sum\limits_{j=1}^3\partial_jA_j\right)\partial^\alpha\bm{U}dxd\tau. \end{aligned} $$ We have the following lemma from \cite{Li-Zhou} to deal with the nonlinear terms. \begin{lemma}\label{nonlinear lemma} For $\forall ~ N\in\mathbb{N}_+$, we have $$ \begin{aligned} \|fg\|_{H^N} & \lesssim\bigg(\sum\limits_{|\alpha_1|\leqslant\lfloor\frac{N-1}{2}\rfloor}\|\partial^{\alpha_1}f\|_{L^\infty}\bigg)\bigg(\sum\limits_{|\alpha_3|\leqslant N}\| \partial^{\alpha_3}g\|_{L^2}\bigg) \\ & +\bigg(\sum\limits_{|\alpha_2|\leqslant\lfloor\frac{N-1}{2}\rfloor}\|\partial^{\alpha_2}g\|_{L^\infty}\bigg)\bigg(\sum\limits_{|\alpha_4|\leqslant N}\|\partial^{\alpha_4}f \|_{L^2}\bigg). 
\end{aligned} $$ For any multi-index $\beta$ satisfying $|\beta|=N>0$, we have $$ \begin{aligned} \big\|\partial^\beta(fg)-f\partial^\beta g\big\|_{L^2} & \lesssim\bigg(\sum\limits_{|\beta_1|\leqslant\lfloor\frac{N}{2}\rfloor}\|\partial^{\beta_1}f\|_{L^\infty}\bigg) \bigg(\sum\limits_{|\beta_3|\leqslant N-1}\|\partial^{\beta_3}g\|_{L^2}\bigg) \\ & +\bigg(\sum\limits_{|\beta_2|\leqslant\lfloor\frac{N-1}{2}\rfloor}\|\partial^{\beta_2}g\|_{L^\infty}\bigg)\bigg(\sum\limits_{|\beta_4|\leqslant N}\|\partial^{\beta_4}f \|_{L^2}\bigg). \end{aligned} $$ \end{lemma} Recall that $A_0$ is a positive definite matrix. By Lemma \ref{nonlinear lemma} and the Sobolev imbedding theorems, we obtain $$ \begin{aligned} & (1-C\varepsilon)\int|\partial^\alpha\bm{U}|^2dx+\int|\partial^\alpha\theta|^2dx+2\int_0^t\int|\nabla\partial^\alpha\theta|^2dxd\tau \\ \lesssim & ~ E_{k,1}(0)+E_{\lfloor k/2+5/2\rfloor,1}^{1/2}(t)\big(E_{k,1}(t)+E_{k,2}(t)\big). \end{aligned} $$ Thus we get \begin{equation}\label{Key Energy estimate k-1} E_{k,1}(t)\lesssim E_{k,1}(0)+E_{\lfloor k/2+5/2\rfloor,1}^{1/2}(t)\big(E_{k,1}(t)+E_{k,2}(t)\big). \end{equation} To estimate $E_{k,2}(t)$, we set $$B_i(\bm{U},\theta)=A_i(\bm{U},\theta)-A_i(\bm{0},0),\quad 0\leqslant i\leqslant 3,$$ then we can rewrite \eqref{Heat-Hydro-Hyperbolic system}, and apply $\partial^\beta ~ (|\beta|\leqslant k-1)$ to get \begin{equation}\label{Heat-Hydro-Hyperbolic system-2} \begin{aligned} & \partial^\beta\bm{U}_t+\sum\limits_{j=1}^3A_j(\bm{0},0)\partial^\beta\partial_j\bm{U} \\ = & \left( \begin{array}{c} \partial^\beta a_t+\partial^\beta(\nabla\cdot\bm{u}) \\ \partial^\beta\bm{u}_t+\nabla\partial^\beta a \end{array} \right) =-\partial^\beta\left(B_0\bm{U}_t+\sum\limits_{j=1}^3B_j\partial_j\bm{U}+\bm{F}\right). \end{aligned} \end{equation} Taking inner product of \eqref{Heat-Hydro-Hyperbolic system-2} with the following vector $$\partial^\beta\bm{V}\triangleq\big(-\partial^\beta(\nabla\cdot\bm{u}),\nabla\partial^\beta a\big),$$ and integrate with respect to $t$, we get \begin{equation}\label{Key Energy estimate k-2-1} \begin{aligned} & \int\partial^\beta\bm{u}\cdot\nabla\partial^\beta adx+\int_0^t\int|\nabla\partial^\beta a|^2-|\partial^\beta(\nabla\cdot\bm{u})|^2+\nabla\partial^\beta a\cdot\nabla\partial^\beta \theta dxd\tau \\ = & ~ \int\partial^\beta\bm{u}_0\cdot\nabla\partial^\beta a_0dx-\int_0^t\int\nabla\partial^\beta a\cdot\partial^\beta(a\nabla\theta)dxd\tau \\ - & ~ \int_0^t\int\partial^\beta\bm{V}\cdot\partial^\beta\left(B_0\bm{U}_t+\sum\limits_{j=1}^3B_j\partial_j\bm{U}\right)dxd\tau. \end{aligned} \end{equation} Taking inner product of \eqref{Heat-Hydro-Hyperbolic system-2} with the following vector $$\partial^\beta\bm{W}\triangleq\big(\bm{0},-\nabla\partial^\beta\theta\big),$$ we have \begin{equation}\label{Key Energy estimate k-2-2-1} \begin{aligned} - & ~ \int\partial^\beta\bm{u}_t\cdot\nabla\partial^\beta\theta dx-\int\partial^\beta a\cdot\nabla\partial^\beta\theta+|\nabla\partial^\beta\theta|^2dx \\ = & ~ \int\nabla\partial^\beta\theta\cdot\partial^\beta\left(\bm{u}\cdot\nabla\bm{u}+\frac{\theta-a}{1+a}\nabla a\right)dx. 
\end{aligned} \end{equation} Then we take inner product of the equation of $\partial^\beta\theta$, which is $$\partial^\beta\theta_t+\partial^\beta(\nabla\cdot\bm{u})-\Delta\partial^\beta\theta=-\partial^\beta\left(\frac{a\Delta\theta}{1+a}+\nabla\cdot(\theta\bm{u})\right),$$ with $\partial^\beta(\nabla\cdot\bm{u})$ to obtain \begin{equation}\label{Key Energy estimate k-2-2-2} \begin{aligned} - & ~ \int\partial^\beta\bm{u}\cdot\nabla\partial^\beta\theta_tdx+\int|\partial^\beta(\nabla\cdot\bm{u})|^2dx-\int\partial^\beta(\nabla\cdot\bm{u})\Delta\partial^\beta\theta dx \\ = & ~ -\int\partial^\beta(\nabla\cdot\bm{u})\partial^\beta\left(\frac{a\Delta\theta}{1+a}+\big(\nabla\cdot(\theta\bm{u})\big)\right)dx. \end{aligned} \end{equation} Adding \eqref{Key Energy estimate k-2-2-1} to \eqref{Key Energy estimate k-2-2-2} and integrating with respect to $t$, we get \begin{equation}\label{Key Energy estimate k-2-2} \begin{aligned} & -\int\partial^\beta\bm{u}\cdot\nabla\partial^\beta\theta dx+\int_0^t\int|\partial^\beta(\nabla\cdot\bm{u})|^2-|\nabla\partial^\beta\theta|^2dxd\tau \\ - & ~ \int_0^t\int\nabla\partial^\beta a\cdot\nabla\partial^\beta\theta dxd\tau-\int_0^t\int\partial^\beta(\nabla\cdot\bm{u})\Delta\partial^\beta\theta dxd\tau \\ = & ~ -\int\partial^\beta\bm{u}_0\cdot\nabla\partial^\beta\theta_0dx+\int_0^t\int\nabla\partial^\beta\theta\cdot\partial^\beta\left(\bm{u}\cdot\nabla\bm{u}+\frac{\theta-a}{1+a} \nabla a\right)dxd\tau \\ - & ~ \int_0^t\int\partial^\beta(\nabla\cdot\bm{u})\partial^\beta\left(\frac{a\Delta\theta}{1+a}+\nabla\cdot(\theta\bm{u})\right)dxd\tau. \end{aligned} \end{equation} Adding \eqref{Key Energy estimate k-2-1} to \eqref{Key Energy estimate k-2-2}, we have \begin{equation}\label{Key Energy estimate k-2-3} \begin{aligned} & \int\partial^\beta\bm{u}\cdot\nabla\partial^\beta(a-\theta)dx+\int_0^t\int|\nabla\partial^\beta a|^2-|\nabla\partial^\beta\theta|^2-\partial^\beta(\nabla\cdot\bm{u})\Delta \partial^\beta\theta dxd\tau \\ = & ~ \int\partial^\beta\bm{u}_0\cdot\nabla\partial^\beta(a_0-\theta_0)dx-\int_0^t\int\nabla\partial^\beta a\cdot\partial^\beta(a\nabla\theta)dxd\tau \\ - & ~ \int_0^t\int\partial^\beta\bm{V}\cdot\partial^\beta\left(B_0\bm{U}_t+\sum\limits_{j=1}^3B_j\partial_j\bm{U}\right)dxd\tau \\ + & ~ \int_0^t\int\nabla\partial^\beta\theta\cdot\partial^\beta\left(\bm{u}\cdot\nabla\bm{u}+\frac{\theta-a}{1+a}\nabla a\right)dxd\tau \\ - & ~ \int_0^t\int\partial^\beta(\nabla\cdot\bm{u})\partial^\beta\left(\frac{a\Delta\theta}{1+a}+\nabla\cdot(\theta\bm{u})\right)dxd\tau. \end{aligned} \end{equation} Thus we have \begin{equation}\label{Key Energy estimate k-2-4} \begin{aligned} \int_0^t\int|\nabla\partial^\beta a|^2dxd\tau\leqslant & ~ \int_0^t\int|\nabla\partial^\beta\theta|^2+\frac{1}{2}|\partial^\beta(\nabla\cdot\bm{u})|^2+\frac{1}{2}|\Delta \partial^\beta\theta|^2dxd\tau \\ + & ~ \int\partial^\beta\bm{u}_0\cdot\nabla\partial^\beta(a_0-\theta_0)dx-\int\partial^\beta\bm{u}\cdot\nabla\partial^\beta(a-\theta)dx \\ - & ~ \int_0^t\int\partial^\beta\bm{V}\cdot\partial^\beta\left(B_0\bm{U}_t+\sum\limits_{j=1}^3B_j\partial_j\bm{U}\right)dxd\tau \\ - & ~ \int_0^t\int\partial^\beta(\nabla\cdot\bm{u})\partial^\beta\left(\frac{a\Delta\theta}{1+a}+\nabla\cdot(\theta\bm{u})\right)dxd\tau \\ + & ~ \int_0^t\int\nabla\partial^\beta\theta\cdot\partial^\beta\left(\bm{u}\cdot\nabla\bm{u}+\frac{\theta-a}{1+a}\nabla a\right)dxd\tau \\ - & ~ \int_0^t\int\nabla\partial^\beta a\cdot\partial^\beta(a\nabla\theta)dxd\tau. 
\end{aligned} \end{equation} Going back to \eqref{Key Energy estimate k-2-1}, we have \begin{equation}\label{Key Energy estimate k-2-5} \begin{aligned} \frac{1}{2}\int_0^t\int|\partial^\beta(\nabla\cdot\bm{u})|^2dxd\tau\leqslant & ~ \int_0^t\int\frac{3}{4}|\nabla\partial^\beta a|^2+\frac{1}{4}|\nabla\partial^\beta\theta|^2dxd\tau \\ + & ~ \frac{1}{2}\int_0^t\int\partial^\beta\bm{V}\cdot\partial^\beta\left(B_0\bm{U}_t+\sum\limits_{j=1}^3B_j\partial_j\bm{U}\right)dxd\tau \\ + & ~ \frac{1}{2}\int\partial^\beta\bm{u}\cdot\nabla\partial^\beta adx-\int\partial^\beta\bm{u}_0\cdot\nabla\partial^\beta a_0dx \\ + & ~ \frac{1}{2}\int_0^t\int\nabla\partial^\beta a\cdot\partial^\beta(a\nabla\theta)dxd\tau. \end{aligned} \end{equation} Substituting \eqref{Key Energy estimate k-2-5} into \eqref{Key Energy estimate k-2-4}, by \eqref{Key Energy estimate k-1} we have \begin{equation}\label{Key Energy estimate k-2} \begin{aligned} & \sum\limits_{|\beta|\leqslant k-1}\int_0^t\int|\nabla\partial^\beta a|^2dxd\tau \\ \lesssim & ~ E_{k,1}(t)+E_{k,1}(0)+E_{\lfloor k/2+5/2\rfloor,1}^{1/2}(t)\big(E_{k,1}(t)+E_{k,2}(t) \big) \\ \lesssim & ~ E_{k,1}(0)+E_{\lfloor k/2+5/2\rfloor,1}^{1/2}(t)\big(E_{k,1}(t)+E_{k,2}(t)\big). \end{aligned} \end{equation} Substituting \eqref{Key Energy estimate k-2} into \eqref{Key Energy estimate k-2-5}, similarly by \eqref{Key Energy estimate k-1} we have \begin{equation}\label{Key Energy estimate k-3} \begin{aligned} & \sum\limits_{|\beta|\leqslant k-1}\int_0^t\int|\partial^\beta(\nabla\cdot \bm{u})|^2dxd\tau \\ \lesssim & ~ E_{k,1}(t)+E_{k,1}(0)+E_{\lfloor k/2+5/2\rfloor,1}^{1/2}(t)\big(E_{k,1}(t)+E_{k,2}(t)\big) \\ \lesssim & ~ E_{k,1}(0)+E_{\lfloor k/2+5/2\rfloor,1}^{1/2}(t)\big(E_{k,1}(t)+E_{k,2}(t)\big). \end{aligned} \end{equation} Note that by \eqref{irrotational} and the Hodge decomposition, we have $$\sum\limits_{|\beta|\leqslant k-1}\int_0^t\int|\partial^\beta(\nabla\cdot\bm{u})|^2dxd\tau=\sum\limits_{|\beta|\leqslant k-1}\int_0^t\int|\partial^\beta(\nabla\bm{u})|^2dxd\tau.$$ Adding \eqref{Key Energy estimate k-1}, \eqref{Key Energy estimate k-2} and \eqref{Key Energy estimate k-3}, one has \begin{equation}\label{Key Energy estimate k} E_{k,1}(t)+E_{k,2}(t)\lesssim E_{k,1}(0)+E_{\lfloor k/2+5/2\rfloor,1}^{1/2}(t)\big(E_{k,1}(t)+E_{k,2}(t)\big),\quad 1\leqslant k\leqslant s. \end{equation} Since $s>5$, by \eqref{Key Energy estimate 0} and \eqref{Key Energy estimate k} we arrive at \begin{equation}\label{Energy estimate s} E_s(t)\triangleq E_{s,1}(t)+E_{s,2}(t)\lesssim E_s(0)+E^{3/2}_s(0)+E^{3/2}_s(t)\leqslant C\big(M^2\varepsilon^2+M^3\varepsilon^3+\varepsilon^{\frac{3}{2}}\big). \end{equation} Now we give the proof of Theorem \ref{main}. \begin{proof} Assume that $t_*<\infty$ in \eqref{energy small assumption}. Taking $\varepsilon>0$ small enough, \eqref{Energy estimate s} gives $$E_s(t_*)\leqslant 2C\varepsilon^{\frac{3}{2}}<\varepsilon,$$ which contradicts our assumption \eqref{energy small assumption}. Thus we have $$E_s(t)\leqslant\varepsilon, \quad \forall ~ t\geqslant 0,$$ which completes the proof of Theorem \ref{main}. \end{proof} \par{\bf Acknowledgements.} Yi Zhou was supported by the Key Laboratory of Mathematics for Nonlinear Sciences, Ministry of Education of China, the Shanghai Key Laboratory for Contemporary Applied Mathematics, School of Mathematical Sciences, Fudan University, P.R. China, and NSFC (grant No. 11421061). We sincerely thank Dr. Yi Zhu for her kind help.
\section{Introduction} The importance of the corrections due to the mass of the heavy quark in jet production in $e^+e^-$ collisions was already seen in the early tests of the flavour independence of the strong coupling constant \cite{delphi,lep}. The final high precision of the LEP/SLC experiments required an accurate account of the bottom-quark mass in the theoretical predictions. If quark-mass effects are neglected, the ratio $\alpha_s^b / \alpha_s^{uds}$ measured from the analysis of different three-jet event-shape observables is shifted away from unity by up to $8\%$ \cite{opal98} (see also \cite{delphi97}). The sensitivity of the three-jet observables to the value of the heavy-quark mass made it possible to consider \cite{juan,brs95} the determination of the b-quark mass from LEP data, assuming the universality of $\alpha_s$. In a recent analysis of three-jet events, DELPHI measured the mass of the b-quark, $m_b$, for the first time far above the production threshold \cite{delphi97}. This result is in good agreement with low-energy determinations of $m_b$ using QCD sum rules and lattice QCD from $\Upsilon$ and $B$-meson spectra (for recent results see e.g. \cite{lowenergy}). The agreement between high- and low-energy determinations of the quark mass is rather impressive, as the non-perturbative parts are very different in the two cases. In this contribution we will discuss some aspects of the next-to-leading order (NLO) calculation of the decay $Z \rightarrow 3~{\rm jets}$ with massive quarks, necessary for the measurements of the bottom-quark mass at the $Z$-peak. Recently such calculations were performed independently by three groups \cite{rsb97,bbu97,no97}. The first question we would like to answer is whether it is at all surprising that LEP/SLC observables are sensitive to $m_b$, since the main scale involved is the mass of the $Z$-boson, $M_Z \gg m_b$. Indeed, the quark-mass effects for an inclusive observable such as the total width $Z \rightarrow b \bar{b}$ are negligible. Due to the Kinoshita-Lee-Nauenberg theorem, such an observable does not contain mass singularities and the quark mass appears only through the ratio $\overline{m}_q^2(M_Z)/M_Z^2 \approx 10^{-3}$, where using the $\overline{MS}$ running mass at the $M_Z$-scale takes into account the bulk of the NLO QCD corrections \cite{dkz90,brs95}. However, the situation with more exclusive observables is different. Let us consider the simplest process, $Z \rightarrow \overline{b}b g$, which contributes to the three-jet final state at leading order (LO). When the energy of the radiated gluon approaches zero, the process has an infrared (IR) divergence, and in order to have an IR-finite prediction, some kinematical restriction should be introduced in the phase-space integration to cut out the troublesome region. In $e^+e^-$-annihilation that is usually done by applying a so-called jet-clustering algorithm with a jet-resolution parameter, $y_c$ (see \cite{mls98} for a recent discussion of jet algorithms in $e^+e^-$). Then the transition probability in the three-jet part of the phase-space will have contributions as large as $1/y_c \cdot (m_b^2/M_Z^2)$, where $y_c$ can be rather small, in the range $10^{-2} - 10^{-3}$. One can thus expect a significant enhancement of the quark-mass effects, which can reach several percent.
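As a rough orientation (a back-of-the-envelope estimate on our part, which ignores the $O(1)$ coefficients of the full matrix element), for $\overline{m}_b(M_Z)\simeq 2.83~GeV$ one has $m_b^2/M_Z^2\simeq 10^{-3}$, so that at $y_c=10^{-2}$ $$\frac{1}{y_c}\,\frac{m_b^2}{M_Z^2}\simeq\frac{9.6\times 10^{-4}}{10^{-2}}\simeq 0.1,$$ and the enhancement is even larger for the pole mass or for smaller $y_c$; even a modest fraction of this maximal enhancement therefore brings the quark-mass effects to the level of a few percent.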
A convenient observable for studies of the mass effects in the three-jet final state, proposed some time ago \cite{delphi,brs95}, is defined as follows \begin{eqnarray} \label{r3bd} & & R_3^{bd}=\frac{\Gamma^b_{3j}(y_c)/\Gamma^b}{\Gamma^d_{3j}(y_c)/\Gamma^d} \\ \nonumber & &=1+r_b \left( b_0(y_c,r_b) + \frac{\alpha_s}{\pi} b_1(y_c,r_b ) \right) \end{eqnarray} where $\Gamma^q_{3j}$ and $\Gamma^q$ are the three-jet and total decay widths of the $Z$-boson into a quark pair of flavour $q$, and $r_b=m_b^2/M_Z^2$. Note that the above expression is not an expansion in $r_b$. The LO function, $b_0$, is plotted in fig.~1 for four different jet algorithms. \vskip -2.cm \myfigure{0.cm}{0.cm}{8.2cm}{b0.ps}{-1.cm} {\it LO contribution to the ratio $R_3^{bd}$ as a function of $y_c$ (see eq.(\ref{r3bd}) for the definition) for $m_b=3 GeV$ (dashed curve) and $m_b=5 GeV$ (solid curve).}{fig:b0} \vskip -8mm Together with the well-known JADE, E and DURHAM schemes we consider the so-called EM algorithm \cite{brs95}, with resolution parameter $y_{ij}=2p_i p_j/s$, which was used for analytical calculations in the massive case \cite{brs95}. The main observation from fig.~1 is that for $ y_c>0.05$, $b_0$ is almost independent of the value of $m_b$ for all schemes. Although this remains true also for smaller $y_c$ in the DURHAM and E schemes, there is a noticeable mass dependence in the JADE and EM schemes. Note that $b_0$ is positive for the E-scheme. That contradicts the intuitive expectation that a heavy quark should radiate less than a light one. This unusual behavior is due to the definition of the resolution parameter in the E-scheme, $y_{ij}=(p_i+p_j)^2/s$, which takes significantly different values for partons with the same momenta in the massive and massless cases, and it can be used as a consistency check of the data. In what follows we restrict ourselves to the DURHAM scheme, the one used in the experimental analysis \cite{delphi97}, and $b_0$ can be interpolated as: $b_0=b_0^{(0)}+b_0^{(1)} \ln y_c+b_0^{(2)} \ln^2 y_c$. At LO we cannot specify which value of the b-quark mass should be used in the calculations: all quark-mass definitions are equivalent (the differences are of higher order in $\alpha_s$). One can use, for example, the pole mass $M_b \approx 4.6 GeV$ or the $\overline{MS}$-running mass $\overline{m}_b(\mu)$ at any scale relevant to the problem, $m_b \le \mu \le M_Z$, with $\overline{m}_b(m_b) \approx 4.13 GeV$ and $\overline{m}_b(M_Z) \approx 2.83 GeV$. As a result, the spread in LO predictions for different values of the b-quark mass is significant: the LO prediction is not accurate enough and the NLO calculation has to be done. At NLO there are two different contributions: the one-loop corrections to the three-parton decay, $Z \rightarrow b \overline{b} g$, and the tree-level four-parton decays, $Z \rightarrow b \overline{b} g g$ and $ Z \rightarrow b \overline{b} q \overline{q},~q=u,d,s,c,b$, integrated over the three-jet region of the four-parton phase-space. In the NLO calculation one has to deal with divergences, both ultraviolet \footnote{The ultraviolet divergences in the loop-contribution are cancelled after the renormalization of the parameters of the QCD Lagrangian.} and infrared, which appear at the intermediate stages. The sum of the one-loop and tree-level contributions is, however, IR finite. We would like to stress that the structure of the NLO corrections in the massive case is completely different from that in the massless case \cite{ert81}.
This is due to the fact that in the massive case part of the collinear divergences, those associated with gluon radiation from the quarks, is softened into $\ln r_b$, and only the collinear divergences associated with gluon-gluon splitting remain. In the NLO calculation one should specify the quark-mass definition. It turned out to be technically simpler to use a mixed renormalization scheme which uses the on-shell definition for the quark mass and the $\overline{MS}$ definition for the strong coupling. Therefore, physical quantities are originally expressed in terms of the pole mass. The pole mass can be used consistently in perturbation theory; however, in contrast to the pole mass in QED, the quark pole mass is not a physical parameter. The non-perturbative corrections to the quark self-energy bring an ambiguity of order $\approx 300 MeV$ (hadron size) to the physical position of the pole of the quark propagator. Above the quark production threshold, it is natural to use the running mass definition (we use $\overline{MS}$). The advantage of this definition is that $\overline{m}_b(\mu)$ can be used for $\mu \gg m_b$. \vskip -2.cm \myfigure{0.cm}{0.cm}{6cm}{b1.ps}{-1.cm} {\it NLO function $\overline{b}_1$ for different $m_b$ (see eqs.(\ref{r3bd}),(\ref{r3bdrun}) for the definition and text for details). The errors are due to numerical integrations.}{fig:b1} \vskip -8mm The pole mass, $M_b$, and the running mass of the quark are perturbatively related, \begin{equation} M_b=\overline{m}_b(\mu)\left[1+\frac{\alpha_s}{\pi}\left(\frac{4}{3} -\ln\frac{m_b^2}{\mu^2}\right)\right]~. \label{pole-run} \end{equation} We use this one-loop relation to pass from the pole mass to the running one, which is consistent with our NLO calculations. To match the needed precision we have to use this equation for values of $\mu$ around $m_b$. Then we can use the one-loop renormalization-group equation to define the quark mass at higher scales. Substituting eq.(\ref{pole-run}) into the definition eq.(\ref{r3bd}) we have \begin{eqnarray} & &R_3^{bd}(y_c,\overline{m}_b(\mu),\mu)= \nonumber \\ & &1+\overline{r}_b(\mu)\left[b_0 + \frac{\alpha_s(\mu)}{\pi} \left(\overline{b}_1 -2b_0\ln\frac{M_Z^2}{\mu^2}\right)\right] \label{r3bdrun} \end{eqnarray} with $\overline{b}_1=b_1+b_0(8/3-2\ln r_b)$ and $\overline{r}_b=\overline{m}_b^2/M_Z^2$. In fig.~2 we show the NLO function $\overline{b}_1(y_c,r_b)$ calculated for three different values of the quark mass: $3~GeV$ (open circles), $4~GeV$ (squares) and $5~GeV$ (triangles). \vskip -2.cm \myfigure{0.in}{-0.5cm}{6cm}{r3fit.ps}{-1.cm} {\it The ratio $R_3^{bd}$ (eq. (\ref{r3bd})). Solid curves - LO predictions, dashed curves give the NLO results (see text for details).} {fig:r3bd} \vskip -8mm In contrast to $b_0$, one sees a significant residual mass dependence in $\overline{b}_1$, which cannot be neglected. The solid lines in fig.~2 represent a fit by the function $\overline{b}_1=\overline{b}_1^{(0)} +\overline{b}_1^{(1)} \ln y_c +\overline{b}_1^{(2)} \ln r_b $ performed in the range $ 0.01 \le y_c \le 0.1$. The quality of this interpolation is very good and the main residual $m_b$ dependence in $\overline{b}_1$ is taken into account by the $\ln r_b$ term. Inclusion of higher powers of $\ln r_b$ does not improve the fit. Fig.~3 presents theoretical predictions in the DURHAM scheme for the $R_3^{bd}$ observable measured by DELPHI \cite{delphi97}. The solid lines are LO predictions for the b-quark mass, $m_b=\overline{m}_b(M_Z)=2.83 GeV$ (upper curve) and $m_b=M_b=4.6 GeV$ (lower curve).
The dashed curves give NLO results for different values of the scale $\mu:~10,~30,~91~GeV$. One sees that the NLO curve for the large scale is naturally closer to the LO curve for $\overline{m}_b(M_Z)$, while for the smaller scale it is closer to the LO one with $m_b=M_b$. Fig.~4 illustrates the scale dependence of $R_3^{bd}$ for $y_c=0.02$. By studying the scale dependence, which is a reflection of the truncation of the fixed-order calculation, we can estimate the uncertainty of the predictions. The dashed-dotted curve gives the $\mu$-dependence when eq.~(\ref{r3bd}) is used, i.e., the $\mu$-dependence due to the renormalization of the strong coupling constant, $\alpha_s$. \vskip -2.cm \myfigure{-1.in}{-0.5cm}{6cm}{r3mu.ps}{-1.cm} {\it The ratio $R_3^{bd}$ as a function of the scale $\mu$ for $y_c=0.02$.}{fig:r3bdmu} \vskip -8mm The other curves show the $\mu$-dependence when $R_3^{bd}$ is parameterized in terms of the running mass, $\overline{m}_b(M_Z)$, eq.(\ref{r3bdrun}), but different mass definitions have been used in the logarithms. A conservative estimate of the theoretical error on $R_3^{bd}$ is to take the whole spread given by the curves. The uncertainty in $R_3^{bd}$ induces an error in the measured mass of the b-quark, $\Delta R_3^{bd}=0.004 \rightarrow \Delta m_b \simeq 0.23 GeV$. This theoretical uncertainty is, however, below current experimental errors, which are dominated by fragmentation. To conclude, the NLO calculation is necessary for an accurate description of the three-jet final state with massive quarks in $e^+e^-$-annihilation. Further studies of different observables and different jet algorithms could be very useful for reducing the uncertainty of such calculations. \noindent{\bf Acknowledgments.} We are indebted to S. Cabrera, J. Fuster and S. Mart\'{\i} for an enjoyable collaboration. M.B. is grateful to Laboratoire de Physique Math\'ematique et Th\'eorique for the warm hospitality during his stay at Montpellier.
\section{Introduction} \label{sec:int} Located at a distance of $6.98\Mpc$ \citep{Tully2013}, NGC 5474 is a local star-forming galaxy, classified as SAcd pec, belonging to the M 101 Group. With an absolute blue magnitude $\Magn_B\simeq-18.4$, it is among the most luminous satellites of M 101, also known as the Pinwheel Galaxy, and it is also relatively close to it, with an angular separation smaller than $1^{\circ}$ \citep[corresponding to $\sim120\kpc$, in projection;][]{Tikhonov2015}. Due to its various asymmetries \citep{Kornreich1998}, NGC 5474 was recognized early on as peculiar \citep{Huchtmeier1979}. The $\HI$ disc is distorted at radial distances $R$ larger than $5\kpc$, resulting in a change of the position angle (PA) of $\simeq50^{\circ}$, twisting from $155^{\circ}$ for $R<5\kpc$ to $105^{\circ}$ for $R>8.5\kpc$ \citep[hereafter R04]{Rownd1994}. The change in the PA is thought to be associated with a warp in the $\HI$ disc, which connects the gaseous component to the south-western edge of M 101. This $\HI$ bridge \citep{Huchtmeier1979,vanderHulst1979} has often been considered as tidal debris formed during a recent fly-by of NGC 5474 close to M 101 \citep{Mihos2012}. The $1\kpc$ off-set between the kinematic centre of its $\HI$ disc and the optical centre of what has always been interpreted as the galaxy's bulge is another fascinating peculiarity of NGC 5474. As an example, Fig.~\ref{fig:ngc5474} shows a zoomed-in view of the central region of NGC 5474 in the F814W band obtained from the LEGUS photometric catalogue \citep{Calzetti2015}: the kinematic centre is marked with a blue dot, while the off-set bulge is clearly visible to the north of the kinematic centre. First observed by \cite{vanderHulst1979}, then validated by \citetalias{Rownd1994} and \citet[hereafter K00]{Kornreich2000} looking at the $\HI$ emission, the off-set has also been confirmed from the ${\rm H}\alpha$ kinematics \citep[hereafter E08]{Epinat2008}, making the picture even more puzzling. This oddly large discrepancy has repeatedly raised the question of the true nature of such a stellar component and of what mechanisms may have produced it. \begin{figure} \centering \includegraphics[width=1\hsize]{figures/NGC5474.pdf} \caption{Image of NGC 5474 obtained using observations in the F814W band from the LEGUS photometric catalogue (\citealt{Calzetti2015}). The blue dot shows the position of the $\HI$ kinematic centre from \citetalias{Rownd1994}; the orange circle is centred on the $\HI$ kinematic centre and has a radius of $90\asec$, corresponding to $3\kpc$ at a distance $D=6.98\Mpc$. The white contours are separated by $10^{k/3-1}\Imax$, where $\Imax$ is approximately the surface brightness of the PB centre, while $k=0,1,2,3$. The orange arrow points to the PB, while the SW over-density is to the south. North is up and East is to the left.}\label{fig:ngc5474} \end{figure} Throughout this work we will refer to the galaxy's \lapex central\rapex stellar component as the putative bulge (hereafter PB), an unbiased name that reflects our ignorance about its nature: it may be an off-set pseudo-bulge; it may have formed in situ, in an asymmetric burst of star formation; it may be the remnant of an externally accreted dwarf galaxy; it may be an external galaxy crossing the line of sight (\citetalias{Rownd1994}; \citealt{Mihos2013}), bound or unbound to NGC 5474. Any of these explanations would require calling the PB by a different name.
According to \cite{Fisher2010} this component has properties more similar to a pseudo-bulge than to a classical bulge (see \citealt{Kormendy2004,Fisher2008}), while \citet[hereafter B20]{Bellazzini2020} showed that its structural parameters are also consistent with the scaling relations of dwarf galaxies. For instance, the PB is similar to the dwarf elliptical galaxy (dE) NGC 205 in terms of stellar mass, V-band absolute magnitude and projected half-mass radius (\citetalias{Rownd1994}; \citealt{McConnachie2012}; \citetalias{Bellazzini2020}). \citetalias{Bellazzini2020} showed that the stellar populations of the PB are very similar to those dominating the stellar mass budget in the disc of NGC 5474, and that the color-magnitude diagrams are fully compatible with systems lying at a similar distance from us. Moreover, they constrained the maximum difference in radial velocity between the disc of NGC 5474 and the PB to $\sim50\kms$. Hence, if the PB is not a substructure of NGC 5474, it should be a satellite of it or, at least, another member of the M 101 group. The presence of a stellar over-density to the South-West of the PB, a structure that extends over almost $1\amin^2$, adds to the oddities of NGC 5474 \citepalias[see also Fig.~\ref{fig:ngc5474}]{Bellazzini2020}. The main population dominating the over-density is older than $2\Gyr$, very similar to the stellar population dominating the PB. Young stars in NGC 5474 appear not to be correlated with the over-density, and instead dominate the spiral arms, which extend $8\kpc$ out from the centre. The spiral pattern seen in the optical is also traced by the $\HI$ distribution \citepalias{Rownd1994}. Among the possible explanations, it does seem plausible that such an over-density may have been caused by a recent or on-going interaction between NGC 5474 and the off-centre PB, which may also be the cause of the galaxy's large-scale, asymmetric recent star formation \citepalias{Bellazzini2020}. In this work, we study the dynamical properties of NGC 5474 and investigate what the true nature of the PB may be by making use of realistic $N$-body hydrodynamical simulations, supported by analytic models. In Section~\ref{sec:setup} we describe the dynamical model built to match NGC 5474 and the PB: its properties, the observational data, and the approach used to constrain the model. In Section~\ref{sec:model} we estimate the gravitational effects felt by NGC 5474 due to the PB, to put limits on the large parameter space to be investigated with numerical simulations. In Sections~\ref{sec:equ} and \ref{sec:sims} we focus on simulations. After describing the method used to sample the initial conditions, we explore two different scenarios: i) a purely stellar system (without dark matter) orbiting within the plane of the disc of NGC 5474, and ii) a compact early-type dwarf galaxy (with its own dark-matter halo) moving on a polar orbit around NGC 5474. The latter case is obviously intended to explore the possibility that the PB is a satellite of NGC 5474 that is seen near the centre of its disc only in projection. On the other hand, the former case is the means by which we explore the effects of an off-centred bulge on the underlying gaseous and stellar disc. In particular, we are interested in answering questions such as: is the off-set position of the PB compatible with some kind of long-standing quasi-equilibrium configuration and/or a relatively regular rotation curve? Can a stellar PB resist the drag of dynamical friction and/or the tidal strain from the dark-matter halo of its parent galaxy?
Is our understanding of the galaxy driven by projection effects? Could a possible interaction between NGC 5474 and the PB explain some of the other peculiarities of NGC 5474? Section~\ref{sec:concl} concludes. \section{Setting the model} \label{sec:setup} Throughout this work we assume that the main structures that comprise the target are a dominant disk galaxy similar to NGC 5474 and the PB, a compact stellar component that can be either embedded in a dark-matter halo or not. \subsection{NGC 5474} \label{sec:modelscomp} We describe NGC 5474 as a multi-component galaxy comprising a dark-matter halo, a gaseous disc and a stellar disc. We assume that the dark halo is spherical with \citet*[hereafter NFW]{NavarroFrenkWhite1996} density distribution \begin{equation}\label{for:dm} \rhodm^{\rm NFW}(r) =\displaystyle\frac{4\rhos}{\displaystyle\left(\frac{r}{\rs}\right) \left(1+\frac{r}{\rs}\right)^2}, \end{equation} where $\rs$ is the halo scale radius and $\rhos\equiv\rhodm^{\rm NFW}(\rs)$. We assume that both the gaseous and the stellar discs are razor-thin exponential discs. The $\HI$ surface number density is given by \begin{equation}\label{for:expgas} n(R) = \frac{\Mgas}{2\pi \hgas^2\mprot}\exp\biggl(-\frac{R}{\hgas}\biggr), \end{equation} where $\Mgas$ is the gas total mass, $\mprot$ is the proton mass (assuming, for simplicity, that the disc is fully composed by hydrogen atoms), and $\hgas$ is the $\HI$ disc scale length. The stellar surface density is given by \begin{equation}\label{for:expstar} \Sigmastar(R) = \frac{\Mstar}{2\pi \hstar^2}\exp\biggl(-\frac{R}{\hstar}\biggr), \end{equation} where $\Mstar$ is the disc total stellar mass and $\hstar$ is the stellar disc scale length. In our analysis, a model of NGC 5474 is fully determined by the free parameter vector $\bbxi \equiv \{\rhos,\rs,\hgas,\Mgas,\hstar,\Mstar\}$. To determine $\bbxi$, we fit a dataset of observations of NGC 5474 with our galaxy model. We anticipate that, in the subsequent analysis of NGC 5474 through hydrodynamical $N$-body simulations, we will drop the approximation of razor-thin discs for gas and stars in favour of realistic discs with non-negligible thickness. \subsubsection{The dataset} \label{sec:hd} The first $\HI$ observations from \cite{vanderHulst1979} have too low velocity resolution ($\sim27\kms$) to allow for any detailed kinematic study. The $\HI$ data collected through VLA observations by \citetalias{Rownd1994} provide an $\HI$ velocity field relatively smooth and symmetric in the galaxy's central, unwarped region, with a rotation curve peaking at $\sim14\kms$. This is inconsistent with \citetalias{Kornreich2000} whose rotation curve is $7\kms$ systematically lower, even though the authors use the very same $\HI$ observations. Also, both $\HI$ rotation curves hardly agree with the ${\rm H}\alpha$ emission, tracing the gas kinematics of the innermost galaxy's $3\kpc$ region. According to \citetalias{Epinat2008}, the ${\rm H}\alpha$ rotation curve sharply rises up to $22\kms$ at $R\simeq2\kpc$ and it is barely consistent with \citetalias{Rownd1994} further out. Nonetheless, all the aforementioned studies agree and report the same off-set between the PB and the $\HI$/${\rm H}\alpha$ kinematic centres. For our study we rely on the rotation curve of \citetalias{Rownd1994} which provides a detailed description of the data reduction and has a large radial coverage. 
The dataset used to constrain the NGC 5474 galaxy model then consists of: i) the $\HI$ rotation curves derived starting from the analysis of \citetalias{Rownd1994} for the approaching and receding arms using a tilted ring model; ii) the observed $\HI$ column density profile of \citetalias{Rownd1994}; iii) the stellar disc parameters resulting from the stellar disc/bulge decomposition of \cite{Fisher2010}. \citetalias{Rownd1994} provides a collection of $\Nv=18$ points $\{\Rk,\vk^{\rm a}, \vk^{\rm r},\vk^{\rm t}\}$, with $k=0,...,\Nv$, where $\Rk$ is the distance of the observed point from the $\HI$ disc's kinematic centre, $\vk^{\rm a}$ ($\vk^{\rm r}$) is the corresponding velocity of the approaching (receding) arm as a result of the fit with the tilted ring model using half ring, while $\vk^{\rm t}$ is obtained using a complete ring. To rederive the $\HI$ rotation curve and determine a reliable error $\delta\vk$, accounting for the asymmetries of the two arms and the uncertain and low inclination we proceed as follows. For each radial bin $k$ \begin{itemize} \item[i)] we compute $\sigma_k\equiv|\vk^{\rm a}-\vk^{\rm r}|/2$, as a measure of the velocity asymmetry between the two arms. \item[ii)] With a Monte Carlo approach, we sample $M=20000$ new velocities $\vk^j$, with $j=1,...,M$, from a Gaussian distribution with mean and standard deviation equal to $\vk^{\rm t}$ and $\sigma_k$, respectively. \item[iii)] Each velocity $\vk^j$ is deprojected assuming a different inclination $i$, drawn from a uniform distribution over the range of inclinations $[17^{\circ},25^{\circ}]$, consistent with the estimate of \citetalias{Rownd1994}. \item[iv)] We now have $M$ realizations of the intrinsic rotation velocity which we use to build the probability distribution of the rotation velocity of the considered bin. For each bin, we adopt the $50^\tth$ percentile of the distribution as rotation velocity and the minimum distance between the $84^{\tth}$ and $50^{\tth}$, and the $50^{\tth}$ and $16^{\tth}$ percentiles as error. \end{itemize} The upper panel of Fig.~\ref{fig:modelvsdata} shows the $\HI$ rotation curve that we obtained. We mark the separate contributions of our reference NGC 5474 galaxy model with different colors, as we shall discuss in details in the following sections. At least for $R<8.5\kpc$, the rotation curves of the two arms from \citetalias{Rownd1994} are quite similar, with $\sigma\simeq2\kms$, so the velocity distribution is relatively smooth and symmetric. The deprojected rotation curve peaks at $R\sim3-4\kpc$, where $v\simeq42\kms$, and it slowly decreases out to $6\kpc$. For distances larger than $6\kpc$, the rotation curve rises up to $v\simeq50\kms$. As pointed out by \citetalias{Rownd1994}, the rise in the outermost regions is likely due to the $\HI$ disc's warp. We have no reason to believe that it would rise again in the outer parts, especially if this is a warped region. Since we are interested in getting a well motivated dynamical mass for NGC 5474, which can be used as a starting point for our simulations, we impose to our derived rotation curve to remain flat beyond $6\kpc$. \begin{figure} \centering \includegraphics[width=.9\hsize]{model_vs_data.pdf} \caption{Top panel: $\HI$ rotation curve derived in Section~\ref{sec:hd} (squares with error bars), superimposed to the model rotation curve. 
We mark with different colors and line types the contributions to the total circular speed (black dotted curve) of the halo (red dash-dotted curve), the stellar disc (orange dashed curve) and the gaseous disc (blue solid curve). For comparison, we also show the uncorrected rotation curve (triangles with error bars). Bottom panel: observed $\HI$ column density profile as derived in Section \ref{sec:hd} (squares with error bars) superimposed on the model profile (blue solid curve, equation \ref{for:expgas}).}\label{fig:modelvsdata} \end{figure} The $\HI$ column density profile has been derived from the $\HI$ column density map of \citetalias{Rownd1994}. We first plot the column density of each pixel as a function of the distance from the disc's kinematic centre. We build 15 radial bins, each containing the same number of pixels (approximately 100). For each bin, we compute the distribution of the $\HI$ surface density. We take the $50^{\tth}$ percentile of the distribution as a measure of the bin's $\HI$ column density, and the maximum interval between the $95^{\tth}$ and $50^{\tth}$, and $50^{\tth}$ and $5^{\tth}$ percentiles as its error. We assume that the disc is razor-thin, and correct for the inclination according to the relation $\nint = \nobs \cos i$, where $\nint$ and $\nobs$ are, respectively, the intrinsic and observed surface densities, and $i=21^{\circ}$ is the mean inclination of the range used to estimate the rotation curve. We exclude the innermost bin, corresponding to a galactocentric distance of $R=1\kpc$, for which we do not have any kinematic information. At the end of the procedure, the $\HI$ column density profile consists of $\Ns=14$ triplets $\{\Rk,\nnk,\delta\nnk\}$, with $k=0,...,\Ns$, which denote, respectively, the galactocentric distance ($\Rk$), the observed profile ($\nnk$) and the associated error ($\delta\nnk$). To derive a motivated range of stellar scale lengths and masses, we start from the stellar disc/bulge decomposition of \cite{Fisher2010}, who find that the best-fitting stellar disc model has $\log_{10}(\hstar/{\rm pc})=3.09\pm0.06$, assuming a distance $d=5.03\Mpc$\footnote{\cite{Fisher2010} model the stellar disc of NGC 5474 with a razor-thin disc model, as in equation (\ref{for:expstar}).}. This value, converted to our distance $d=6.98\Mpc$ \citep{Tully2013}, gives $\hstar=1.71\pm0.23\kpc$. The estimate of \cite{Fisher2010} is based on data in the 3.6 $\mu$m band, a good tracer of stellar mass, with a weak dependence on age and metallicity. \citetalias{Bellazzini2020} found that the bulk of the stellar mass in the disc should be provided by intermediate- to old-age populations. According to the theoretical models by \cite{Rock2015}, the 3.6 $\mu$m mass-to-light ratio $\Upsilon$ for a Kroupa IMF and in the metallicity range relevant for NGC 5474 (Z$\in[0.0006,0.006]$; \citetalias{Bellazzini2020}) is $\Upsilon\sim0.5$ for a $5\Gyr$ old population and $\Upsilon\sim0.8$ for a $10\Gyr$ old population. In the following we take these two values as our reference. Starting from the absolute magnitude in the 3.6 $\mu$m band $\Magn^{\disc}_{3.6}=-9.18\pm0.24$ as in \cite{Fisher2010}, for the adopted distance $d=6.98\Mpc$ and assuming $\Magn_{3.6,\odot}=2.24$ from \cite{Oh2008}, we convert the disc stellar luminosity into mass, obtaining \begin{equation}\begin{split} & \Mstar = (2.96\pm0.87)\times10^8\Msun\quad \text{and} \\ & \Mstar = (4.73\pm1.39)\times10^8\Msun, \end{split} \end{equation} for $\Upsilon\sim0.5$ and $\Upsilon\sim0.8$, respectively.
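Returning to the rotation-curve deprojection of points i)--iv) above, the procedure can be summarized with a short numerical sketch. The following minimal \texttt{Python} example is only an illustration of the algorithm (it is not the pipeline actually used in this work, and the input values are placeholders); the inclination range $[17^{\circ},25^{\circ}]$ and the percentile convention follow the text.
\begin{verbatim}
import numpy as np

def deproject_bin(v_t, sigma_k, M=20000, i_range=(17.0, 25.0), seed=1):
    """Monte Carlo deprojection of one radial bin of the HI rotation curve.

    v_t     : tilted-ring velocity of the bin (full ring), km/s
    sigma_k : |v_a - v_r| / 2, half-difference between the two arms, km/s
    """
    rng = np.random.default_rng(seed)
    # step ii): sample M projected velocities around the full-ring value
    v_proj = rng.normal(loc=v_t, scale=sigma_k, size=M)
    # step iii): sample M inclinations (deg) and deproject
    inc = np.deg2rad(rng.uniform(*i_range, size=M))
    v_int = v_proj / np.sin(inc)
    # step iv): 50th percentile as rotation velocity; the smaller of the
    # (84th-50th) and (50th-16th) percentile distances as its error
    p16, p50, p84 = np.percentile(v_int, [16, 50, 84])
    return p50, min(p84 - p50, p50 - p16)

# example call with placeholder numbers (not the actual data of the paper)
v_rot, dv_rot = deproject_bin(v_t=15.0, sigma_k=2.0)
\end{verbatim}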
We conclude that, for the stellar disc, a plausible range of total stellar mass, accounting for all the uncertainties in the estimates provided above, is $\Mstar\in[2.1,6.1]\times10^8\Msun$, given a stellar disc scale length of $\hstar=1.71\pm0.23\kpc$. We note that our estimate of $\Mstar$ is consistent with \cite{Skibba2011}, who measure a total stellar mass $\Mstar=5\times10^8\Msun$ using far-infrared imaging from the Herschel Space Observatory for galaxies in the KINGFISH project. \subsubsection{Fitting procedure} \label{sec:method} The log-likelihood of a galaxy model $\ln\LL(\bbxi|\data)$, defined by the parameter vector $\bbxi$, given the data $\data$ is \begin{equation}\label{for:chi2} \ln\LL = \ln\LL_v + \ln\LL_n . \end{equation} The first term on the r.h.s. is \begin{equation}\label{for:chiv} \ln\LL_v = -\frac{1}{2}\sum_{k=0}^{\Nv}\biggl(\frac{\vc(\Rk;\bbxi) - \vk} {\delta \vk}\biggr)^2, \end{equation} where $\vc$ is the model circular speed given by \begin{equation}\label{for:vv} \vc^2 = \vh^2 + \vgas^2 + \vstar^2, \end{equation} with \begin{equation}\label{for:vh} \vh^2\equiv 16\pi G\rhos\rs^2\biggl[\frac{\ln(1+r/\rs)}{r/\rs} - \frac{1}{1+r/\rs}\biggr] \end{equation} the contribution to the circular speed due to the halo, and \begin{equation}\begin{split}\label{for:vd} & \vany^2 = \frac{2G\Many}{\hany}\yany^2\times \\ & [I_0(\yany)K_0(\yany) - I_1(\yany)K_1(\yany)] \end{split}\end{equation} the contribution to the circular speed of either disc, measured in the disc plane, where $\yany\equiv R/(2\hany)$, with $\comp=\star$ for the stellar disc and $\comp=\gas$ for the gaseous disc. In equations (\ref{for:vh}) and (\ref{for:vd}), $G$ is the gravitational constant and $I_n$ and $K_n$ are the modified Bessel functions of the first and second kind of order $n$. The second term on the r.h.s. of equation (\ref{for:chi2}) is \begin{equation}\label{for:chiS} \ln\LL_n = - \frac{1}{2}\sum_{k=0}^{\Ns}\biggl(\frac{n(\Rk;\bbxi) - \nnk} {\delta \nnk}\biggr)^2, \end{equation} where $n$ and $\nnk$ are the model (equation \ref{for:expgas}) and observed $\HI$ surface density profiles. Since the discs' and halo contributions may be highly degenerate, we fix the stellar disc parameters to observationally motivated values, consistent with the aforementioned estimates. We adopt $\hstar=1.71\kpc$ as the stellar disc scale length and the average value $\Mstar=4.1\times10^8\Msun$ as the stellar mass. \begin{figure*} \centering \includegraphics[width=1\hsize]{posterior.pdf} \caption{One- and two-dimensional marginalized posterior distributions over the model free parameters $(\rhos,\rs,\hgas,\Mgas)$. The black curves in the two-dimensional marginalized distributions correspond to regions enclosing, respectively, 68\%, 95\% and 99\% of the total probability. The orange vertical lines in the one-dimensional marginalized distributions correspond to the 16$^{\tth}$, 50$^{\tth}$ and 84$^{\tth}$ percentiles, used to estimate the uncertainties on the model free parameters. The vertical grey lines in the marginalized one-dimensional distributions, and the squares in the marginalized two-dimensional distributions mark the position of the reference model (see also Table~\ref{tab:param}).}\label{fig:posterior} \end{figure*} \begin{table*} \begin{center} \caption{Main parameters of the analytic NGC 5474 models of halo, $\HI$ and stellar discs.
$\rhos$ and $\rs$: reference density and scale radius of the NFW dark-matter density profile (equation~\ref{for:dm}); $\Mgas$: total mass of the $\HI$ disc; $\hgas$: $\HI$ disc's scale length (equation~\ref{for:expgas}); $\Mstar$: total mass of the stellar disc; $\hstar$: stellar disc scale length (equation~\ref{for:expstar}). The parameters ($\rhos,\rs,\hgas,\Mgas$) are determined as described in Section~\ref{sec:method}, while the parameters of the stellar disc are fixed to the ones derived in Section~\ref{sec:hd}. The middle row lists the $1\sigma$ uncertainties on the model free parameters, while the bottom row lists the parameters of the reference model adopted throughout this work.}\label{tab:param} \begin{tabular}{lcccccc} \hline\hline NGC 5474 & \multicolumn{2}{c}{DM halo} & \multicolumn{2}{c}{HI disc} & \multicolumn{2}{c}{Stellar disc} \\ \hline\hline parameter & $\rhos$ [$10^6\Msun\kpc^{-3}$] & $\rs$ [$\kpc$] & $\Mgas$ [$10^9\Msun$] & $\hgas$ [$\kpc$] & $\Mstar$ [$10^8\Msun$] & $\hstar$ [$\kpc$] \\ value & $9.77^{+45.18}_{-7.14}$ & $1.55^{+1.47}_{-0.84}$ & $2.00^{+2.58}_{-0.71}$ & $7.41^{+6.08}_{-2.28}$ & $[2.1,6.1]$ & $1.71^{+0.23}_{-0.23}$ \\ reference model & $9.95$ & $1.51$ & $1.82$ & $6.76$ & $4.1$ & $1.71$ \\ \hline\hline \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{Main parameters of the PB relevant for this work. $\Reff$: PB effective radius from \citetalias{Bellazzini2020}; $\Mb$: PB total stellar mass as derived in Section~\ref{sec:PB}; $m$: PB S\'{e}rsic index (equation \ref{for:ser1}) from \citetalias{Bellazzini2020}; $\rhosPB/\Mb$: PB dark-matter halo scale density-to-stellar mass ratio (equation \ref{for:PBhalo}); $\rsPB$ and $\rtPB$: PB dark-matter halo scale and truncation radii, respectively (see equation \ref{for:PBhalo}).}\label{tab:paramPB} \begin{tabular}{lcccccc} \hline\hline Putative bulge & \multicolumn{3}{c}{Stars} & \multicolumn{3}{c}{Dark-matter } \\ \hline\hline parameter & $\Reff$ [$\kpc$] & $\Mb$ [$10^8\Msun$] & $m$ & $\rhosPB/\Mb$ [$\kpc^{-3}]$ & $\rsPB$ [$\kpc$] & $\rtPB$ $[\kpc$] \\ value & $0.484$ & $[0.5,2]$ & $0.79$ & $5.30\times10^6$ & 2.5 & 15\\ \hline\hline \end{tabular} \end{center} \end{table*} We perform a parameter space search using a Markov Chain Monte Carlo (MCMC) method. We run 16 chains, each evolved for 7000 steps, using a classical Metropolis-Hastings sampler \citep{Metropolis1953,Hastings1970} to sample from the posterior. We adopt flat priors on the free parameters. After a burn-in of 3000 steps (which we eliminate as a conservative choice), we use the remaining steps to build the posterior distributions over $\bbxi$. Fig.~\ref{fig:posterior} shows the marginalized one- and two-dimensional posterior distributions over the models' parameters. We estimate the uncertainties on the models' free parameters using the 16$^{\tth}$, 50$^{\tth}$ and 84$^{\tth}$ percentiles of the corresponding marginalized one-dimensional distributions. We note that, since the PB is off-set with respect to the disc kinematic centre, we cannot include it in our axisymmetric model of the rotation curve. However, after computing the PB mass from its stellar population, in Section~\ref{sec:model} we try to estimate the possible effects of the PB on the gas kinematics. Figure~\ref{fig:modelvsdata} shows the newly derived $\HI$ rotation curve superimposed on that of the model. We highlight with different colors the contributions of the different components.
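As an illustration of the fitting machinery of Section~\ref{sec:method}, the short \texttt{Python} sketch below evaluates the model circular speed of equations (\ref{for:vv})--(\ref{for:vd}) and the velocity term of the log-likelihood (equation \ref{for:chiv}). It is a minimal, self-contained example rather than the code used for the actual MCMC fit; the data arrays are placeholders, while the parameter values are those of the reference model of Table~\ref{tab:param}.
\begin{verbatim}
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_halo2(r, rho_s, r_s):
    # NFW halo contribution (equation for:vh), with rho_s = rho_NFW(r_s)
    x = r / r_s
    return 16.0 * np.pi * G * rho_s * r_s**2 * (np.log(1.0 + x) / x
                                                - 1.0 / (1.0 + x))

def v_disc2(R, M, h):
    # razor-thin exponential disc (equation for:vd), with y = R / (2h)
    y = R / (2.0 * h)
    return 2.0 * G * M / h * y**2 * (i0(y) * k0(y) - i1(y) * k1(y))

def lnL_v(params, R, v_obs, dv_obs):
    # Gaussian log-likelihood of the rotation curve (equation for:chiv)
    rho_s, r_s, h_gas, M_gas, h_star, M_star = params
    v_mod = np.sqrt(v_halo2(R, rho_s, r_s) + v_disc2(R, M_gas, h_gas)
                    + v_disc2(R, M_star, h_star))
    return -0.5 * np.sum(((v_mod - v_obs) / dv_obs) ** 2)

# reference-model parameters (Table tab:param) and illustrative data points
params = (9.95e6, 1.51, 6.76, 1.82e9, 1.71, 4.1e8)
R = np.array([2.0, 4.0, 6.0, 8.0])                                    # kpc
v_obs, dv_obs = np.array([35.0, 42.0, 41.0, 42.0]), np.full(4, 3.0)   # km/s
print(lnL_v(params, R, v_obs, dv_obs))
\end{verbatim}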
The dark-matter halo dominates over the stellar and gaseous components at all the radii covered by the kinematic data. Even if the total stellar mass is lower than the total $\HI$ mass, in the central regions the stars provide a significant contribution, dominant over the $\HI$ disc for $R\le6\kpc$, due to the very different gas and stellar scale lengths. As a reference, we estimate $\Mstar/\Mgas|_{2\kpc}\simeq 2$ within $2\kpc$, and $\Mstar/\Mgas|_{9\kpc}\simeq0.55$ at the larger distance $9\kpc$. Also, we measure an $\HI$ mass $\Mgas|_{9\kpc}\simeq7\times10^8\Msun$ within $R=9\kpc$, consistent with the estimate of \citetalias{Rownd1994}. The bottom panel of Fig.~\ref{fig:modelvsdata} shows the $\HI$ disc column density as a function of the galactocentric distance, superimposed on the model. \citetalias{Rownd1994} reports that the disc surface density flattens in the central parts, which is consistent with an exponential disc model with a large scale length (equation~\ref{for:expgas}, Fig.~\ref{fig:posterior}). If the inner $\HI$ surface density were constant and equal to the innermost point of the observed profile, our exponential model would overestimate the $\HI$ mass within $2\kpc$ by only 10\%, which, given the uncertainties on the observed profile and the model assumptions, we consider of negligible impact. We define our reference model as the model (i.e. the set of parameters $\bbxi$) with the maximum a posteriori probability which, given the flat priors, corresponds to the maximum of the likelihood (equation \ref{for:chi2}). Table~\ref{tab:param} lists the main parameters of disc and halo derived here, and the parameters of the reference NGC 5474 model. \subsection{Putative bulge} \label{sec:PB} In our analysis the PB can be either embedded in a dark-matter halo or not. We represent its stellar component with a spherical \cite{Sersic1968} model, whose surface brightness profile is \begin{equation}\label{for:ser1} I(R)=\Ie\exp\biggl[-\bm\biggl(\frac{R}{\Reff}\biggr)^{\frac{1}{m}}\biggr], \end{equation} where \begin{equation}\label{for:ser2} \Ie = \frac{\bm^{2m}}{2\pi m\Gamma(2m)}\frac{\Ltot}{\Reff^2}. \end{equation} In equations (\ref{for:ser1}) and (\ref{for:ser2}), $\Gamma$ is the Gamma function, $\Ltot$ is the total PB luminosity, $m$ is the S\'{e}rsic index, related to $\bm$ as in equation 18 of \cite{Ciotti1999}, and $\Reff$ the effective radius (i.e. the distance on the plane of the sky from the PB's centre that contains half of the total PB's luminosity $\Ltot$). We adopt $m=0.79$ and $\Reff=0.484\kpc$ from \citetalias{Bellazzini2020} (see also Table~\ref{tab:paramPB}) and we infer the stellar mass from the total 3.6$\mu$m luminosity by \cite{Fisher2010} using the same mass-to-light ratios we adopted for the disc. The absolute magnitude in the 3.6$\mu$m band estimated by \cite{Fisher2010} is $\Magn_{3.6}=-16.44\pm0.22$, which, converted assuming a distance $d=6.98\Mpc$, gives $\Magn_{3.6}=-17.15\pm0.22$. We follow \cite{Forbes2017} and, assuming the $1\sigma$ limits in the luminosity and adopting $\Magn_{3.6,\odot}=2.24$ from \cite{Oh2008}, we get \begin{equation}\begin{split} & \Mb=0.70^{+0.18}_{-0.12}\times10^8\Msun \quad\text{and} \\ & \Mb=1.14^{+0.26}_{-0.21}\times10^8\Msun, \end{split}\end{equation} for $\Upsilon=0.5$ and $\Upsilon=0.8$, respectively\footnote{As for the stellar disc, the mass-to-light ratios $\Upsilon$ are in the 3.6 $\mu$m band.}. Based on the aforementioned estimates, we take the slightly wider $\Mb\in[0.5,2]\times10^8\Msun$ as a reasonable range of stellar masses to explore for the PB.
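For reference, the dimensionless constant $\bm$ entering equations (\ref{for:ser1}) and (\ref{for:ser2}) is defined by the half-light condition $\gamma(2m,\bm)=\Gamma(2m)/2$, of which equation 18 of \cite{Ciotti1999} is an accurate asymptotic approximation. The minimal \texttt{Python} sketch below (an illustration only, not part of our analysis pipeline) solves this condition numerically and evaluates the \Sersic profile for the PB parameters of Table~\ref{tab:paramPB}, with the total luminosity left as a placeholder.
\begin{verbatim}
import numpy as np
from scipy.special import gammaincinv, gamma

def sersic_profile(R, m, Reff, Ltot):
    # b_m from the half-light condition gamma(2m, b_m) = Gamma(2m) / 2
    b = gammaincinv(2.0 * m, 0.5)
    # central surface brightness as in equation (for:ser2)
    Ie = b**(2.0 * m) / (2.0 * np.pi * m * gamma(2.0 * m)) * Ltot / Reff**2
    return Ie * np.exp(-b * (R / Reff) ** (1.0 / m))

# PB of NGC 5474: m = 0.79, Reff = 0.484 kpc; Ltot = 1 is a placeholder
R = np.linspace(0.05, 2.0, 50)                 # kpc
I_R = sersic_profile(R, m=0.79, Reff=0.484, Ltot=1.0)
\end{verbatim}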
The main parameters $(m,\Reff,\Mb)$ relevant to this work are listed in Table~\ref{tab:paramPB}. At any rate, we should recall that any derivation of the PB and disc masses from $\Upsilon$ depends on the adopted IMF, and that switching, e.g., from a Kroupa to a Salpeter IMF can change the resulting masses by a factor of 3 (see Figs 13 and 14 of \citealt{Rock2015}). When present, the PB dark-matter halo has density distribution \begin{equation}\label{for:PBhalo} \rhodmPB(r) = \displaystyle\frac{\rhosPB}{\displaystyle\frac{r}{\rsPB}\left(1+\displaystyle\frac{r}{\rsPB}\right)^2}e^{-\left(\displaystyle\frac{r}{\rtPB}\right)^2} \end{equation} i.e., a truncated NFW model, where $\rhosPB$ and $\rsPB$ are, respectively, the halo scale density and scale radius, while $\rtPB$ is the halo truncation radius. According to estimates of the stellar-to-halo mass relation \citep{Read2017}, for a galaxy with stellar mass $\Mb\in[0.5,2]\times10^8\Msun$, one would expect a virial-to-stellar-mass ratio $\MvirPB/\Mb\simeq100$. However, since we consider the PB as an external galaxy orbiting around NGC 5474, we expect the PB halo to be less massive than estimated and to be truncated well before its nominal virial radius because of tidal interactions. Also, the structural properties of its stellar component resemble those of a typical dE galaxy (see also \citetalias{Bellazzini2020}), which we do not expect to be significantly dominated by dark matter \citep{McConnachie2012}. As such, we use the above $\MvirPB/\Mb$ only as a reference value to estimate $\rsPB$. Adopting the halo mass-concentration relation of \cite{MunozCuartas2011}, we estimate a concentration $\log_{10} c=1.19-1.22$ and a halo scale radius $\rsPB=2.17-2.91\kpc$. We set $\rsPB=2.5\kpc$, but we do not expect our results to depend substantially on $\rsPB$. To set the PB halo mass scale, we impose that the dynamical-to-stellar-mass ratio $\MdynPB/\Mb$, evaluated at the stellar half-mass radius $\rh$, is \begin{equation}\label{for:mdynmb} \frac{\MdynPB}{\Mb}\biggr|_{\rh} \sim 2, \end{equation} which is approximately the ratio expected for a dE of size and structure similar to the PB \citep{McConnachie2012}. We truncate the PB halo at $\rtPB=15\kpc$. As we will later discuss, such a value is slightly less than the initial distance we set between the PB and the NGC 5474 centres in the simulations of Section~\ref{sec:resPBDM}, and it prevents the PB halo from being unreasonably massive. With these choices, the PB has a dark-matter halo 20 times as massive as its stellar component. As a reference, the most massive PB halo, corresponding to a stellar mass $\Mb=2\times10^8\Msun$, has a total dark-matter mass $\simeq4\times10^9\Msun$, approximately the same as the halo virial mass of NGC 5474. Table \ref{tab:paramPB} lists the relevant parameters of the PB used throughout this work. \section{Constraints from observations} \label{sec:model} \begin{figure*} \centering \includegraphics[width=.85\hsize]{potential_map.pdf} \caption{Left-hand panel: total (halo, stellar disc, $\HI$ disc and PB) gravitational potential map in the discs' equatorial plane when $\Mb=0.5 \times10^8\Msun$ (model\_0.5). The shades of blue, from light to dark, mark regions of increasing potential. The PB is located at $(x,y)=(1\kpc,0)$ (black point). The orange circles show distances corresponding to $\Reff$ (inner) and $3\Reff$ (outer), with $\Reff$ the PB's effective radius (Table~\ref{tab:paramPB}).
Small inset at the bottom of the left panel: \lapex circular speed\rapex as a function of the distance from the $\HI$ disc's kinematic centre of NGC 5474 without (black curve) and with (orange curve) the PB. In the latter case the circular speed is computed following equation (\ref{for:vc}) along the principal axis $y=0$. The black points with errorbars show the rotation curve derived in Section~\ref{sec:hd}. Right-hand panels: same as the left-hand panels, but for $\Mb=2\times10^8\Msun$ (model\_2). As a further comparison, in the inset of the left-hand panel we also show the circular speed obtained in the intermediate case of a PB with $\Mb=10^8\Msun$ placed $1\kpc$ away from the galaxy kinematic centre (blue curve).}\label{fig:potmap} \end{figure*} By means of the analytic model derived in Sections~\ref{sec:modelscomp} and \ref{sec:PB}, we quantify the mutual effects that the galaxy and the PB may have on each other when the latter is placed within the discs' equatorial plane. This allows us to put further constraints on the large parameter space we will explore with hydrodynamic $N$-body simulations in the following sections. \subsection{Effects of the presence of the PB} We start by analyzing the two scenarios of a PB with $\Mb=0.5\times10^8\Msun$ and $\Mb=2\times10^8\Msun$, respectively the lower and upper limits of the mass range derived in Section~\ref{sec:PB}. We assume no dark matter since we do not consider the PB as an external galaxy, and we first place it on the discs' plane, $1\kpc$ away from the kinematic centre, with its current size and \Sersic index ($\Reff=0.484\kpc$, $m=0.79$, see Table~\ref{tab:paramPB}). We refer to the two analytic models as, respectively, model\_0.5 and model\_2, where the number indicates the PB mass, in units of $10^8\Msun$. Using these models, we roughly estimate the minimum and maximum distortions we may expect to see in the $\HI$ disc's velocity field due to the presence of the PB in the disc plane. Figure~\ref{fig:potmap} shows the total gravitational potential map of model\_0.5 (left panel) and model\_2 (right panel) on a portion of the equatorial plane. The total gravitational potential has been computed by summing the separate contributions of the discs, halo and PB. The PB of model\_2 contributes to the total gravitational potential so intensely that the potential well shifts towards the PB centre. In this circumstance, it is hard to imagine an equilibrium configuration in which the PB and the $\HI$ disc kinematic centre are off-set. A different perspective is given by the small insets of Fig.~\ref{fig:potmap}, where we show the circular speed of the NGC 5474 model (discs and halo), superimposed on the circular speed computed from model\_0.5 (left panel) and model\_2 (right panel). Of course, these systems have lost their cylindrical symmetry, so the concept of circular speed makes no strict sense, but, at least in model\_0.5, where the PB contribution to the potential is sub-dominant, this exercise still helps to quantify the magnitude of the $\HI$ disc's velocity field perturbations. Calling the $(x,y)$-plane the equatorial plane, and $\Phi_{\rm tot}$ the model's total gravitational potential, we define the \lapex circular speed\rapex \begin{equation}\label{for:vc} \vc \equiv \sqrt{x\biggl|\frac{\DD\Phi_{\rm tot}}{\DD x}\biggr|_{y=0}}. \end{equation} The latter is computed for the models with and without the PB, along the line $y=0$, which is aligned with the PB centre when the PB is present. Then we measure $\delta\vcmax$, i.e.
the maximum difference in circular speed between the models with and without the PB, in a region of approximately $3\Reff\simeq1.5\kpc$ around the PB's centre. In model\_2 (Fig.~\ref{fig:potmap}, right panel) the PB produces distortions as high as $\delta\vcmax\simeq28\kms$, which is inconsistent with the rotation curve derived in Section~\ref{sec:hd} from the $\HI$ kinematics. This is not surprising since the PB centre is very close to the minimum of the gravitational potential. While it is highly unlikely that the PB as in model\_2 can be located on the $\HI$ disc, given the large distortion it generates over the wide area covering approximately $3\Reff$, this is not excluded for model\_0.5, especially due to the lack of kinematic information within $R\sim2\kpc$ (Figs~\ref{fig:modelvsdata} and \ref{fig:potmap}). In the inset in the left panel of Fig.~\ref{fig:potmap} we also show the circular speed obtained when we place a PB with $\Mb=10^8\Msun$ at $1\kpc$ from the galaxy kinematic centre. The maximum distortions the PB generates in this case are as high as $\delta\vcmax\simeq11\kms$, but the overall profile is still marginally consistent with the observed one within the errorbars. On this basis, we will restrict the range of possible PB stellar masses to $\Mb\in[0.5,1]\times10^8\Msun$ in any further analysis, since we expect more massive PBs to have critical effects on the $\HI$ disc of NGC 5474. \subsection{The tidal radius of the PB} We now focus on cases in which the PB moves along orbits co-planar with the galaxy discs. When the PB is in the plane, we expect dynamical friction and the tidal force field of NGC 5474 to be the main drivers of the PB evolution. While the former makes the PB sink towards the galaxy centre on a relatively short timescale (for details, see Section~\ref{sec:sim1pb}), the latter is responsible for the PB mass loss and the development of possible non-equilibrium features. We quantify the effects of the tidal force field of NGC 5474 on the PB by computing its tidal radius $\rt$. The tidal radius $\rt$ is often estimated as (\citealt{BinneyTremaine2008}, equation 8.91) \begin{equation}\label{for:rt} \rt = \RPB\biggl(\frac{\Mb}{3\Mtot(\RPB)}\biggr)^{1/3}, \end{equation} assuming that the size of the PB is negligible with respect to its distance from NGC 5474. In the above equation $\Mtot(\RPB)$ is the total mass of NGC 5474 enclosed within $\RPB$, and $\RPB$ is the distance of the PB from the centre of NGC 5474. For a PB with mass $\Mb=0.5\times10^8\Msun$ and $\Reff=0.484\kpc$, the tidal radius is $\rt\simeq2\rh$ at $\RPB=7\kpc$. Considering that the PB mass enclosed within $2\rh$ is $\simeq92\%$ of its total mass, we expect the tidal field of NGC 5474 to have a small impact on the PB structural properties at any $\RPB\gtrsim7\kpc$. In this on-plane scenario it then seems legitimate to take $\RPB=7\kpc$ as the PB initial position and $\vc(\RPB)\simeq42\kms$ in the azimuthal direction as its initial velocity, i.e. a circular orbit. Even if we considered a larger $\RPB$, dynamical friction would make the PB sink within the disc anyway, eventually reaching $7\kpc$ with negligible or minimal mass loss because of the larger $\rt$ at those distances.
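As an illustration, the short \texttt{Python} sketch below evaluates the classical estimate of equation (\ref{for:rt}) using the enclosed mass of the reference NGC 5474 model (NFW halo plus the two exponential discs, treated spherically for simplicity). It is only a rough illustration, not the exact calculation behind Fig.~\ref{fig:rt}; with the reference-model parameters it returns $\rt$ of the order of $1\kpc$ for a $0.5\times10^8\Msun$ PB at $\RPB=7\kpc$, in line with the $\rt\simeq2\rh$ quoted above.
\begin{verbatim}
import numpy as np

def M_nfw(r, rho_s=9.95e6, r_s=1.51):
    # enclosed mass of the NFW halo of equation (for:dm), Msun
    x = r / r_s
    return 16.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

def M_exp_disc(R, M_tot, h):
    # enclosed mass of a razor-thin exponential disc, Msun
    return M_tot * (1.0 - (1.0 + R / h) * np.exp(-R / h))

def M_enclosed(R):
    # reference-model parameters of Table tab:param
    return M_nfw(R) + M_exp_disc(R, 1.82e9, 6.76) + M_exp_disc(R, 4.1e8, 1.71)

def r_tidal(R_pb, M_b):
    # classical tidal (Jacobi) radius, equation (for:rt)
    return R_pb * (M_b / (3.0 * M_enclosed(R_pb))) ** (1.0 / 3.0)

print(r_tidal(7.0, 0.5e8))   # kpc, for a PB of 0.5e8 Msun at R_pb = 7 kpc
\end{verbatim}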
Whether the PB is the galaxy's pseudo-bulge, or it is the remnant of an external galaxy, or it has formed in-situ in a burst of star formation that happened at least $2\Gyr$ ago (consistent with its dominant stellar population, \citetalias{Bellazzini2020}), the observed, present-day PB would be the end-state of the orbital decay of any of these configurations. Given that the systems are extended, to follow the evolution of the PB tidal radius from $\RPB=7\kpc$ to the central regions we rely on a more realistic approach. In a reference frame where the PB and the galaxy kinematic centre are aligned along the $x$-axis, we estimate the PB's tidal radius $\rt$ by computing the position $\bbx=(\rt,0,0)$, with $\rt<\RPB$ (i.e. on the equatorial plane and along the $x$-axis), where the effective potential \begin{equation}\label{for:rt2} \Phieff(\bbx) = \Phi_{\rm tot}(\bbx) + \PhiPB(\bbx-\bbRPB) - \frac{1}{2}(\Omega\rt)^2 \end{equation} has a saddle point. Here, $\bbRPB\equiv(\RPB,0,0)$ and $\Omega\equiv\vc(\RPB)/\RPB$, i.e. the angular speed obtained from the NGC 5474 model circular speed (Fig.~\ref{fig:modelvsdata}) at a distance $\RPB$. Figure~\ref{fig:rt} shows the PB tidal radius as a function of the distance from the discs' centre. We compute $\rt$ according to equation (\ref{for:rt2}) and, as a comparison, we also show $\rt$ computed according to the classical equation (\ref{for:rt}). In addition to $\Mb=0.5\times10^8\Msun$, we also examine the case in which $\Mb=10^8\Msun$. For each mass value, we consider three cases with $\Reff=0.484\kpc$, $\Reff=0.320\kpc$, and $\Reff=0.161\kpc$, because, as an effect of the tidal force field, the PB may become more extended (see, e.g. \citealt{Iorio2019}). We notice that equations~(\ref{for:rt}) and (\ref{for:rt2}) do not account for the PB mass loss, so even if $\rt$ decreases along the orbit, the PB total mass is the same as the initial one. According to equation (\ref{for:rt2}), while the tidal radius of the least extended PBs is always larger than three half-mass radii, in the remaining cases the tidal radius quickly shrinks to less than two PB half-mass radii at $3\kpc$. We note that the tidal radius computed as in its classical formulation (equation \ref{for:rt}) and as in equation (\ref{for:rt2}) gives approximately the same results when the PB is sufficiently far from the galaxy's centre, while equation (\ref{for:rt2}) provides an appreciably lower estimate of $\rt$ when the PB is close to the galaxy centre. For the least massive ($\Mb=0.5\times10^8\Msun$) and most extended PB ($\Reff=0.484\kpc$), the effective potential (\ref{for:rt2}) does not even have a saddle point, meaning that the tidal radius is formally zero. We do not show predictions for $\RPB<3\kpc$ since we expect the PB to have lost so much mass that even equation (\ref{for:rt2}) becomes unreliable. For the most extended and least massive PB, at least, we expect the effects of the tidal force field of NGC 5474 to be extremely strong. \begin{figure} \centering \includegraphics[width=1\hsize]{truncation_radius.pdf} \caption{Left panel: ratio between the tidal radius $\rt$ and the half-mass radius $\rh$ of a PB with total stellar mass $\Mb=0.5\times10^8\Msun$ (and no dark matter) as a function of the distance from the system's kinematic centre. The tidal radius is computed following equation (\ref{for:rt}) (dashed curves) and equation (\ref{for:rt2}) (solid curves).
Each color refers to PBs with $\Reff=0.161\kpc$ (black curves), $\Reff=0.320\kpc$ (orange curves) and $\Reff=0.484\kpc$ (purple curves). Right panel: same as the left panel, but for a PB with mass $\Mb=10^8\Msun$.} \label{fig:rt} \end{figure} To conclude, on the grounds of these analytic models, it seems very unlikely that the PB would be as massive as $2\times10^8\Msun$, if placed on the discs' plane of NGC 5474. If that were the case, we should be able to see strong distortions in the $\HI$ velocity field map, which are instead not observed. Moreover, although we have considered the PB as only made of stars, we can interpret this result as an upper limit on the total PB dynamical mass since the effects on the $\HI$ disc on which we have focused are purely gravitational. Even if we may expect the chance of survival of large off-centred PBs to be low, due to the galaxy's intense tidal force field, we can neither exclude nor prove that more favorable initial conditions, such as a less extended initial PB, may produce the required off-set, a smooth $\HI$ velocity field and a regular PB spatial distribution. Starting from the predictions of the analytic models we just examined, we consider these scenarios more quantitatively through our hydrodynamic $N$-body simulations, which are described in the following sections. \section{Set-up of the simulations} \label{sec:equ} \subsection{The \AREPO code} \label{sec:arepo} All our simulations are performed using the moving-mesh hydrodynamic code \AREPO \citep{Springel2010}, as implemented in its publicly-released version\footnote{\url{https://arepo-code.org/}.} \citep{Weinberger2020}. \AREPO combines the advantages of both Lagrangian smoothed particle hydrodynamics (SPH) and Eulerian hydrodynamics on an unstructured mesh with adaptive mesh refinement (AMR). The mesh is constructed from a Voronoi tessellation of a set of discrete points, is used to solve the hyperbolic hydrodynamic equations with a finite-volume technique, and is free to move with the fluid flow. In our case, the mesh is refined so as to ensure that each gas cell has approximately constant mass, allowing high-density regions to be sampled with a large number of cells. The moving gas mesh is coupled to a particle-mesh algorithm and an oct-tree approach \citep{Barnes1986} to solve the Poisson equation and compute the accelerations of both collisional and collisionless particles. \AREPO has been extensively employed to deal with a large number of astrophysical problems, such as AGN winds and feedback \citep{Costa2020}, stellar evolution and interstellar medium enrichment processes \citep{vandeVoort2020}, and spiral-arm formation mechanisms \citep{Smith2014}, and to run state-of-the-art large-volume cosmological simulations of galaxy formation, such as the latest IllustrisTNG simulations \citep{Illustris2018a,Illustris2018b,Illustris2018c,Illustris2018d,Illustris2018e}, or zoom-in cosmological magneto-hydrodynamical simulations such as those from the Auriga project \citep{Grand2017}. For a detailed review see \cite{Weinberger2020} and references therein. \subsection{Realization of NGC 5474} \label{sec:ic} \subsubsection{Density distributions} \label{sec:HDICs} We sample the halo's and discs' initial conditions (hereafter ICs) following \cite{Springel2005}.
The dark-matter halo density follows the \cite{Hernquist1990} model \begin{equation}\label{for:hern1} \rhodm(r) = \frac{\Mtot}{2\pi}\frac{a}{r}\frac{1}{(r+a)^3}, \end{equation} whose mass profile is given by \begin{equation}\label{for:hern2} \Mdm(r) = \Mtot\frac{r^2}{(a+r)^2}, \end{equation} where $a$ is the Hernquist scale radius and $\Mtot$ the halo total mass. Dark-matter haloes are often represented with NFW models (equation \ref{for:dm}). However, the use of a Hernquist model (which has the same inner slope as the NFW model) is motivated by the fact that it has finite mass and an analytic, ergodic distribution function (hereafter DF), which simplifies the sampling of the halo particle velocities. To link the Hernquist model to the NFW profile we require that the Hernquist total mass is equal to the NFW halo virial mass $\Mvir$ (i.e. the enclosed mass within the virial radius), and we impose that the two profiles share the same normalization in the central parts. As a consequence, provided, for instance, $\Mvir$ and $\rs$ for the NFW halo, the corresponding Hernquist halo is fixed with parameters \begin{equation}\begin{split}\label{for:hern3} & \Mtot = \Mvir \\ & a = \rs \sqrt{2[\ln(1+c)-c/(1+c)]} \end{split}\end{equation} (see Fig.~1 of \citealt{Springel2005}). The radial density profiles of the gaseous and stellar discs follow from equations (\ref{for:expgas}) and (\ref{for:expstar}), respectively, where for the gas we have switched from particle number density to mass density. We drop the razor-thin disc approximation and let the discs have a non-negligible thickness. For the stellar disc, the vertical profile stratifies with radially constant scale height $\zstar$, so that its full three-dimensional density distribution is given by \begin{equation}\label{for:fullstar} \rhostar(R,z) = \frac{\Mstar}{4\pi\hstar^2\zstar}e^{-\bigl(\frac{R}{\hstar}\bigr)}\sech^2\biggl(\frac{z}{\zstar}\biggr). \end{equation} The vertical profile of the $\HI$ disc is determined from the vertical hydrostatic equilibrium \begin{equation}\label{for:idroeq} \frac{\DD\Phi_{\rm tot}}{\DD z} = - \frac{1}{\rhogas}\frac{\DD P}{\DD z}. \end{equation} In the above equation $\Phi_{\rm tot}$ is the total gravitational potential (stellar disc, $\HI$ disc and dark matter) and $P$ is the thermal pressure of the gas, which is assumed to be isothermal. For any chosen $\Phi_{\rm tot}$, $\rhogas$ is constrained by requiring \begin{equation} \Sigmagas(R) = \int_{-\infty}^{+\infty} \rhogas(R,z)\dd z, \end{equation} where $\Sigmagas$ is an exponential disc model as in equation (\ref{for:expgas}), expressed in terms of surface density. The total potential is determined iteratively, following the scheme of \cite{Springel2005}. \subsubsection{Velocity distributions} For simplicity, for the dark-matter halo only, we assume $\Phi_{\rm tot}=\Phidm$ (i.e. the dark-matter potential) and we draw the halo phase-space coordinates directly from the Hernquist analytic isotropic DF. We assume that the stellar disc DF depends only on the energy $E$ and the third component of the angular momentum $\Lz$. Hence, the only non-vanishing moments of the stellar disc DF are $\hvRii = \hvzii \equiv \sigma^2$, $\hvphiii$ and $\hvphi$. We compute $\sigma$ and $\hvphiii$ from the Jeans equations, and, to sample the radial and vertical components of the stellar particle velocities, we assume that the velocity distributions in these two directions are Gaussian, with dispersion equal to $\sigma$.
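Returning for a moment to the halo set-up, the NFW-to-Hernquist matching of equation (\ref{for:hern3}) amounts to a one-line conversion, sketched below in \texttt{Python}. This is only an illustration: $\Mvir$ and $\rs$ are taken from the reference model, while the concentration $c$ is a placeholder value, not a number quoted in the text.
\begin{verbatim}
import numpy as np

def hernquist_from_nfw(M_vir, r_s, c):
    """Hernquist halo matched to an NFW halo (equation for:hern3).

    M_vir : NFW virial mass (Msun); r_s : NFW scale radius (kpc);
    c     : concentration r_vir / r_s.
    Returns the Hernquist total mass and scale radius a (kpc).
    """
    a = r_s * np.sqrt(2.0 * (np.log(1.0 + c) - c / (1.0 + c)))
    return M_vir, a

# reference-model Mvir and rs; c = 20 is purely a placeholder
M_tot_hern, a_hern = hernquist_from_nfw(M_vir=3.77e9, r_s=1.51, c=20.0)
\end{verbatim}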
To compute the streaming velocity $\hvphi$ we rely on the epicyclic approximation, so \begin{equation} \hvphi = \sqrt{\hvphiii - \frac{\hvRii}{\eta^2}}, \end{equation} where \begin{equation} \eta^2 = \frac{4}{R}\frac{\DD\Phi_{\rm tot}}{\DD R}\biggl(\frac{3}{R}\frac{\DD\Phi_{\rm tot}}{\DD R} +\frac{\DD^2\Phi_{\rm tot}}{\DD R^2}\biggr)^{-1}. \end{equation} The azimuthal components of the stellar disc particle velocities are then sampled from a Gaussian with mean $\hvphi$ and standard deviation equal to the r.m.s. velocity $\sqrt{\hvphiii-\hvphi^2}$. The gas velocity field is instead composed only of the azimuthal component $\vphigas$, which satisfies the stationary Euler equation \begin{equation} \vphigas^2 = R\biggl(\frac{\DD\Phi_{\rm tot}}{\DD R} + \frac{1}{\rhogas}\frac{\DD P}{\DD R}\biggr). \end{equation} \subsubsection{Model parameters and equilibrium configuration} \label{sec:comp} The reference model derived in Section~\ref{sec:modelscomp} (see Table~\ref{tab:param}) fixes almost all the degrees of freedom needed to set up the NGC 5474-like model of our simulations. We further adopt a stellar scale height $\zstar=0.15\hstar$ \citep{Kregel2002,Oh2015}, while the gas is monatomic, with $T/\mu=3400\K$ ($T$ the gas temperature and $\mu$ the gas mean molecular weight). As a reference, for neutral hydrogen gas this corresponds to $T=3400\K$. We require all particles to have the same mass $\mpart=5000\Msun$. With this choice, given \begin{equation}\begin{split} & \Mvir = 3.77\times10^9\Msun, \\ & \Mgas = 1.82\times10^9\Msun, \\ & \Mstar = 4.11\times10^8\Msun, \end{split}\end{equation} it follows that \begin{equation}\begin{split} & \Nh = 708750, \\ & \Ngas = 364000, \\ & \Nstar = 82200, \end{split}\end{equation} where $\Nh$, $\Ngas$ and $\Nstar$ indicate, respectively, the number of particles of the halo, the $\HI$ disc and the stellar disc\footnote{The dark halo of NGC 5474 is not sampled at radii larger than $3\rvir$, so the total mass represented with particles is $\simeq 0.94\Mvir$.}. Since the dark-matter halo particles of NGC 5474 have been sampled as if the halo were in isolation (i.e. not accounting for the contribution of the discs to the total gravitational potential), the stellar disc has been built in the Maxwellian approximation, and the discs provide a non-negligible contribution to the total gravitational potential, we expect all the model components to be close to, but not exactly in, equilibrium. To check how they respond to the presence of each other, and to let them shift towards an equilibrium state, we first run a simulation where NGC 5474 is evolved in isolation. The main features and the results of this simulation are described in Appendix~\ref{sec:DHeq}. In all the following hydrodynamic $N$-body models, we will take as ICs of NGC 5474 the output of the simulation of Appendix~\ref{sec:DHeq} after $0.98\Gyr$. \begin{table*} \centering \caption{Main input parameters of the set of simulations of Section~\ref{sec:sim1}. From the left-hand to the right-hand column: name of the model (model's name); PB effective radius ($\Reff$); PB total stellar mass ($\Mb$); PB number of particles ($\Nb$); softening used for the PB particles ($\lPB$). The softening is computed by requiring that the maximum force between the PB's particles is not larger than the PB's mean-field strength (\citealt{Dehnen2011}). We notice that all the model components (halo, stellar disc, gas disc, PB) have different softenings. The PB particles have mass $\mpart=1667\Msun$.
In each simulation the initial position of the PB centre of mass is at $\RPB=7\kpc$, with initial streaming velocity $\vc(\RPB)=42\kms$. The ICs of NGC 5474 correspond to the configuration of Appendix~\ref{sec:DHeq} taken after $0.98\Gyr$ (for further details see Tables~\ref{tab:param} and \ref{tab:siminput}).}\label{tab:simimput1} \begin{tabular}{lcccc} \hline\hline Models' name & $\Reff$ [$\kpc$] & $\Mb$ [$10^8\Msun$] & $\Nb$ & $\lPB$ [$\kpc$] \\ \hline\hline PB\_Re484\_M1.5 & 0.484 & 1.5 & 90000 & 0.029 \\ PB\_Re484\_M1 & 0.484 & 1 & 60000 & 0.033 \\ PB\_Re484\_M0.5 & 0.484 & 0.5 & 30000 & 0.042 \\ PB\_Re320\_M1.5 & 0.320 & 1.5 & 90000 & 0.019 \\ PB\_Re320\_M1 & 0.320 & 1 & 60000 & 0.022 \\ PB\_Re320\_M0.5 & 0.320 & 0.5 & 30000 & 0.028 \\ PB\_Re161\_M1.5 & 0.161 & 1.5 & 90000 & 0.0096 \\ PB\_Re161\_M1 & 0.161 & 1 & 60000 & 0.011 \\ PB\_Re161\_M0.5 & 0.161 & 0.5 & 30000 & 0.014 \\ \hline\hline \end{tabular} \end{table*} \subsection{Realization of the PB} \label{sec:PBICs} The phase-space positions of the PB stellar and dark-matter particles are drawn directly from the components' ergodic DFs. Starting from equation (\ref{for:ser1}), through an Abel inversion we retrieve the intrinsic density distribution $\rhostarPB$ of the stellar component. We complete the stellar and dark-matter density-potential pairs ($\rhostarPB,\PhistarPB$) and ($\rhodmPB,\PhidmPB$) by solving the Poisson equation $\nabla^2\PhiastPB=4\pi G\rhoastPB$, where $*\in\{\star,\dm\}$. By means of an Eddington inversion \citep{BinneyTremaine2008} we compute numerically the ergodic DFs of stars ($\fstarPB$) and dark matter ($\fdmPB$) as \begin{equation} \fastPB(E) = \frac{1}{\sqrt{8}\pi^2}\frac{\dd}{\dd E}\int_0^E \frac{\dd \PhiPB}{\sqrt{E-\PhiPB}}\frac{\dd \rhoastPB}{\dd \PhiPB}, \end{equation} where $*\in\{\star,{\rm dm}\}$, while $\PhiPB$ is the total potential $\PhiPB=\PhistarPB+\PhidmPB$. If the PB is made only of stars, $\PhiPB=\PhistarPB$ and $*=\star$. Since in our simulations we will consider PBs with and without dark matter, and with stellar components ranging over different sizes and masses, for clarity we will separately list in Sections \ref{sec:sim1pb} and \ref{sec:sim2pb} the PB parameters and the number of particles used in each simulation. \section{Results} \label{sec:sims} \subsection{First hypothesis: the PB within the discs' plane} \label{sec:sim1} \subsubsection{Setting the PB parameters} \label{sec:sim1pb} \begin{figure*} \centering \includegraphics[width=.9\hsize]{figures/orbits_M05.pdf} \includegraphics[width=.9\hsize]{figures/orbits_M1.pdf} \includegraphics[width=.9\hsize]{figures/orbits_M15.pdf} \caption{Trajectories of the centre of mass of the PBs (red curves) in all the hydrodynamical $N$-body models considered in Section~\ref{sec:sim1}. The top, middle and bottom rows of panels refer to models whose PB has an initial stellar mass $\Mb=0.5\times10^8\Msun$, $\Mb=10^8\Msun$ and $\Mb=1.5\times10^8\Msun$, respectively. The initial PB effective radius decreases from the left to the right column of panels. In each panel we also show the PB spatial density distribution taken at a few representative snapshots along its orbit, projected along the symmetry axis (so the $(X,Y)$-plane is the equatorial plane). In most cases, the trajectory is drawn until the PB reaches a distance of $\sim1\kpc$ from the centre (black circle). The PB centre is determined using the shrinking sphere method (\citealt{Power2003}).
Details on simulation parameters in Tables~\ref{tab:param}, \ref{tab:simimput1} and \ref{tab:siminput}.} \label{fig:evolvedPB} \end{figure*} In our first set of simulations we explore cases in which the PB moves within the galaxy discs' plane, and we check whether the off-set can be reproduced as an outcome of the simulations while keeping the shape of the PB smooth and regular and the kinematics of the galaxy's gaseous component as unperturbed as observed. Starting from the conclusions of Section~\ref{sec:model}: \begin{itemize} \item we consider a PB made only of stars, without a dark-matter halo; \item the PB centre of mass is at an initial distance $\RPB=7\kpc$ from the galaxy centre, on a circular orbit with an initial azimuthal velocity $\vc(\RPB)$, co-rotating with the $\HI$ and stellar discs. We expect the PB orbit to shrink because of dynamical friction and thus to reach $\RPB\approx 1\kpc$; \item we focus on a PB with total initial stellar mass $\Mb=0.5\times10^8\Msun$, $\Mb=10^8\Msun$ and $\Mb=1.5\times10^8\Msun$. For each mass, we consider PBs with initial $\Reff=0.484\kpc$, $\Reff=0.320\kpc$ and $\Reff=0.161\kpc$. The case of $\Mb=1.5\times10^8\Msun$ is intended to account for the fact that, due to mass loss, the PB can reach $1\kpc$ with a mass below the upper limit of $10^8\Msun$ that we estimated in Section~\ref{sec:model}. \end{itemize} While we have required that the particles of all the components of NGC 5474 must have $\mpart=5000\Msun$, we relax this condition for the PB and set its particles to be three times less massive. This allows us to sample the PB's phase space with a sufficiently large number of particles and to avoid an overwhelmingly high number of particles per simulation, which would just be computationally expensive with no particular gain in terms of accuracy. The PBs with $\Mb=0.5\times10^8\Msun$, $\Mb=10^8\Msun$ and $\Mb=1.5\times10^8\Msun$ are sampled with $\Nb=30000$, $\Nb=60000$ and $\Nb=90000$ particles, respectively, following the scheme of Section~\ref{sec:PBICs}. When these PBs are evolved in isolation for $10\Gyr$ they keep their equilibrium configuration. We expect the PB to sink towards the system centre due to dynamical friction on a timescale $\tdf$, which we estimate as \citep{BinneyTremaine2008} \begin{equation}\label{for:tdf} \tdf=\frac{1.17}{\ln\Lambda}\frac{\Mtot(\RPB)}{\Mb}\tc, \end{equation} where $\RPB$ is the PB distance from the centre, $\ln\Lambda$ is the Coulomb logarithm, $\tc\equiv2\RPB/\vc(\RPB)$ is the crossing time at $\RPB$ and $\Mtot$ is the total mass enclosed within $\RPB$. At $\RPB=7\kpc$, with $\vc(\RPB)\simeq42\kms$, we get \begin{equation}\begin{split}\label{for:tdf2} & \tdf = 1.4-3.5\Gyr \text{\quad when\quad} \Mb=0.5\times10^8\Msun, \\ & \tdf = 0.7-1.67\Gyr\text{\quad when\quad} \Mb=10^8\Msun, \\ & \tdf = 0.43-1.15\Gyr\text{\quad when\quad} \Mb=1.5\times10^8\Msun. \\ \end{split}\end{equation} The lower and upper limits on $\tdf$ are obtained assuming the typical values $\ln\Lambda\sim15$ and $\ln\Lambda\sim6$, respectively. According to these estimates we may expect the PB to decay towards the centre of NGC 5474 on a very short timescale. We run a total of nine simulations and, in each of them, the ICs of NGC 5474 correspond to the equilibrium simulation of Appendix~\ref{sec:DHeq} after $0.98\Gyr$. After $0.98\Gyr$ the centre of mass of NGC 5474 is at the origin of the system reference frame. Details on simulations and NGC 5474 parameters (e.g.
softenings, number of particles) are listed in Tables~\ref{tab:param} and \ref{tab:siminput}. We will refer to these simulations as PB\_Re$X$\_M$Y$, where $X=484,320,161$ indicates the PB effective radius in pc, and $Y=0.5,1,1.5$ is the PB mass in units of $10^8\Msun$ (see also Table~\ref{tab:simimput1}). The simulations run for $4.1\Gyr$ and we use an adaptive timestep refinement with typical timestep values of $\simeq0.5\Myr$. \subsubsection{Results} \label{sec:res} \begin{figure} \centering \includegraphics[width=1\hsize]{orbits_remnants.pdf} \caption{Top two panels: distance between the centre of mass of the PB and the galaxy's centre as a function of time (top) and fraction of mass enclosed within $3\rh$ as a function of time (bottom), with $\rh$ the PB half-mass radius as in the ICs. The green curves refer to PBs with initial $\Reff=0.484\kpc$. Middle two panels: same as the top panels, but for PBs with initial $\Reff=0.320\kpc$ (orange curves). Bottom two panels: same as the top panels, but for PBs with initial $\Reff=0.161\kpc$ (purple curves). The solid, dashed and dotted curves refer, respectively, to the PBs with initial mass $\Mb=0.5\times10^8\Msun$, $\Mb=10^8\Msun$ and $\Mb=1.5\times10^8\Msun$. The curves lighten when the PBs have reached a distance from the system's centre of $\approx 1\kpc$ (colored circles).}\label{fig:orbitsevo} \end{figure} Figure~\ref{fig:evolvedPB} shows the trajectories of the PBs in each of the nine simulations. Each orbit (red curve) is obtained by connecting the centre of mass of the PB in consecutive snapshots and, alongside the orbit, each panel also shows the projected density distribution of the PB taken at a few representative snapshots. The $(X,Y)$-plane is the plane of the orbit, co-planar with the discs' plane. We point out that in Fig.~\ref{fig:evolvedPB} we have assumed the galaxy's symmetry axis as line of sight, but, as long as the PB evolves in the discs' plane, the term $\cos i$ provides a negligible correction for $i=21^{\circ}$, and we can anyway project the galaxy in such a way as to align the off-set with one of the galaxy's principal axes. As expected, the PBs with the largest initial $\Reff$ are distorted the most by the galactic tidal field. At $R\simeq2\kpc$, the PB of model PB\_Re484\_M0.5 has: i) lost approximately 60\% of its mass; ii) lost its spherical symmetry in favour of the formation of an elongated structure that has just started to wrap around the galaxy centre; iii) developed extended and massive tidal debris, formed from the very beginning of the simulations. We find a similar outcome also in models PB\_Re484\_M1 and PB\_Re484\_M1.5, notwithstanding the higher PB mass which should make, in principle, the PB more resistant against the galaxy tidal force field. To estimate the mass loss we consider as particles belonging to the PB those that remain within 3$\rh$ ($\rh$ is the PB stellar half-mass radius in the ICs). As anticipated in Section~\ref{sec:model}, and looking at Fig.~\ref{fig:rt}, this result is not surprising given that the PB tidal radius is less than its initial half-mass radius at best. \begin{figure*} \centering \includegraphics[width=1\hsize]{PB_star_zoom_v2.pdf} \caption{Top left panel: total (disc and PB) stellar projected density map computed from the configuration corresponding to the orbit's end-point of the hydrodynamical $N$-body model PB\_Re161\_M0.5 as in Fig.~\ref{fig:evolvedPB}. The system has been projected assuming an inclination of $i=21^{\circ}$, as in \citetalias{Rownd1994}.
We show intensity contours equal to $\Smax/2^n$ with $\Smax$ the map's densest peak and $n=1,...,6$. The blue and white dots show, respectively, the discs' kinematic centre and the PB centre, and the full projected orbit is shown as a red line. We notice that the disc kinematic centre also corresponds to the centre of the $(\xi,\eta)$ plane. The small inset shows the $\HI$ line-of-sight velocity map, derived over the same region as the main panel. The $\HI$ velocity map has been obtained as in \citetalias{Rownd1994}, binning with pixels $0.33\kpc\times0.33\kpc$ wide, once we have assumed the distance $d=6.98\Mpc$, and with velocity contours separated by $3\kms$. The approaching arm is shown with red colors and solid black curves, while the receding arm with blue colors and dashed-black curves. Bottom left panel: $\HI$ circular speed as a function of the distance from the galaxy kinematic centre (blue points with error bars) from the same snapshot as in the top left panel, compared to the galaxy's deprojected rotation curve computed in Section~\ref{sec:hd} (black points with error bars). The orange curve shows the circular speed of the analytic model of NGC 5474. Middle panels: same as the left panels but for the $N$-body model PB\_Re161\_M1. Right panels: same as the left panels but for the $N$-body model PB\_Re320\_M1.5. Details on simulation parameters in Tables~\ref{tab:param}, \ref{tab:simimput1} and \ref{tab:siminput}.} \label{fig:stellarzooms} \end{figure*} \begin{figure*} \begin{subfigure}[b]{.325\textwidth} \includegraphics[width=1\hsize]{model_PB_Re320_M15.pdf} \end{subfigure}\quad \begin{subfigure}[b]{.575\textwidth} \includegraphics[width=1\hsize]{PB_star_ML.pdf} \end{subfigure} \caption{Left panel: projected density profile of the PB computed from the density map of the right panel of Fig.~\ref{fig:stellarzooms} (black points with error bars), superimposed on the best-fitting reference model, obtained as in Section~\ref{sec:res} (red dashed curve). As a comparison, we show the \Sersic model resulting from the fit of \citetalias{Bellazzini2020} (blue curve), i.e. $m=0.79$, $\Reff=0.484\kpc$, while we have imposed the same total mass as the best model ($\Mb\simeq10^8\Msun$, see Table~\ref{tab:pbsim}). The vertical black-dashed line marks the effective radius of the reference model. Middle panel: total stellar (disc and PB) surface brightness map as in the right-hand panel of Fig.~\ref{fig:stellarzooms}, where we have assumed a different mass-to-light ratio ($\Upsilon$) for stars belonging to the stellar disc and to the PB. The PB is twice as luminous as the stellar disc. The blue dot, also the centre of the $(\xi,\eta)$ plane, shows the kinematic centre of the stellar and $\HI$ discs. Right panel: same as the middle panel, but the PB particles are three times as luminous as the stellar disc ones.}\label{fig:PBpost} \end{figure*} For each simulation, Fig.~\ref{fig:orbitsevo} shows the projected distance between the PB centre of mass and the galaxy centre, and the PB bound stellar mass fraction as a function of time. The systems are projected as in Fig.~\ref{fig:evolvedPB}, assuming as line of sight the symmetry axis, so $R\equiv\sqrt{X^2+Y^2}$. The main driver of the PB evolution is dynamical friction: in a very short time-scale (less than $0.8\Gyr$; see also equation~\ref{for:tdf2}) all the PBs reach $R\sim2\kpc$ and, as expected, the most massive PBs sink faster than the least massive ones (equation \ref{for:tdf}; top panel of Fig.~\ref{fig:orbitsevo}).
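As a rough cross-check of these sinking times, the \texttt{Python} sketch below evaluates equation (\ref{for:tdf}) with an approximate enclosed mass for the reference model (treating the discs spherically, as in the tidal-radius sketch above). It is an order-of-magnitude illustration only, with the enclosed mass entered as a placeholder, and it returns timescales of the same order as those in equation (\ref{for:tdf2}).
\begin{verbatim}
KPC_PER_KMS_IN_GYR = 0.978  # 1 kpc / (1 km/s) expressed in Gyr

def t_df(R_pb, M_b, v_c, M_enc, ln_Lambda):
    """Dynamical-friction timescale of equation (for:tdf).

    R_pb (kpc), M_b and M_enc (Msun), v_c (km/s); returns t_df in Gyr.
    """
    t_c = 2.0 * R_pb / v_c * KPC_PER_KMS_IN_GYR   # crossing time at R_pb
    return 1.17 / ln_Lambda * (M_enc / M_b) * t_c

# PB of 1e8 Msun at 7 kpc, vc ~ 42 km/s; M_enc ~ 2.4e9 Msun is an approximate
# placeholder; Coulomb logarithm between ~6 and ~15 as in the text
for lnL in (6.0, 15.0):
    print(lnL, t_df(R_pb=7.0, M_b=1e8, v_c=42.0, M_enc=2.4e9, ln_Lambda=lnL))
\end{verbatim}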
Among the PBs with initial $\Reff=0.320\kpc$, those of models PB\_Re320\_M1 and PB\_Re320\_M1.5 reach $R=1\kpc$ losing only 20\% of their original mass, while that of model PB\_Re320\_M0.5 has experienced substantial mass loss already at $R\simeq1.54\kpc$. The most compact PBs (PB\_Re161\_M0.5, PB\_Re161\_M1 and PB\_Re161\_M1.5), instead, offer great resistance to the galaxy tidal force field ($\rt\simeq7\Reff$, see Fig.~\ref{fig:rt}), and reach the galaxy centre losing from 10\% to 30\% of their total initial mass, without developing significant tidal tails. Based on the aforementioned features, we exclude from any further analysis the PBs with initial $\Reff=0.484\kpc$ and the one from model PB\_Re320\_M0.5, which immediately depart from equilibrium and are severely distorted by the strong tidal force field of NGC 5474. We notice that when the initial mass is $\Mb=1.5\times10^8\Msun$, in some cases the PB reaches $1\kpc$ with less than $10^8\Msun$ (the mass upper limit estimated in Section~\ref{sec:model}). For this reason, we ran additional simulations with a PB initial mass as high as $\Mb=2\times10^8\Msun$. Even though these configurations reach a galactocentric distance of $1\kpc$ with a bound mass of $10^8\Msun$ or more, when $\Reff=0.484\kpc$ the PB still develops pronounced tidal features, while when $\Reff=0.320\kpc$ and $\Reff=0.161\kpc$ it presents the same structural properties as its $1.5\times10^8\Msun$ analogues; these runs are not shown here for brevity. To make the comparison between simulations and observations consistent, in Fig.~\ref{fig:stellarzooms} we add the contribution due to the stellar disc of NGC 5474 to the PB projected density maps of some of the remaining models, taken when they are $1\kpc$ away from the galaxy centre (i.e. the orbits' end-points of Fig.~\ref{fig:evolvedPB}). As prototypical cases, we selected models PB\_Re161\_M0.5, PB\_Re161\_M1 and PB\_Re320\_M1.5. The greyscale extends over two orders of magnitude from the highest density peak, which would correspond to a difference of five magnitudes from the map's brightest point if we assumed the same mass-to-light ratio for all the particles. In this case, the systems have been projected assuming $i=21^{\circ}$, and we have called the plane of the sky the $(\xi,\eta)$-plane, with $\xi\equiv X$. Such an image displays patterns that are somewhat reminiscent of the galaxy stellar map of Fig.~\ref{fig:ngc5474} or Fig.~5 of \citetalias{Bellazzini2020}, both obtained through the LEGUS \citep{Calzetti2015} photometric catalogue based on HST ACS images. In all cases, the perturbation provided by the PB induces the formation of a loose spiral structure in a region of $2-3\kpc$ around the discs' centre. The spiral arms are similar to the observed pattern of NGC 5474 \citepalias{Bellazzini2020}; they are not necessarily symmetric with respect to the discs' centre, and the gaseous component also develops a spiral structure that follows the optical stellar disc \citepalias{Rownd1994}. In the small insets of Fig.~\ref{fig:stellarzooms} we show the corresponding $\HI$ line-of-sight velocity field maps as in the main panels. The velocity maps are derived using the same spatial and velocity resolutions as those of the \citetalias{Rownd1994} map: pixels $0.33\kpc\times 0.33\kpc$ wide and velocity contours separated by $3\kms$. The solid black lines and redder colors mark the disc's approaching arm, while the black dashed curves and bluer colors indicate the receding arm.
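The mock velocity maps in the insets are essentially first-moment (mean line-of-sight velocity) maps of the simulated gas, computed on the same $0.33\kpc$ pixels as the observations. The minimal \texttt{Python} sketch below illustrates this kind of binning under the simplifying assumption of equal-mass gas cells (which holds approximately in our runs); the input arrays are random placeholders and the snippet is not the map-making code actually used here.
\begin{verbatim}
import numpy as np

def mock_velocity_map(xi, eta, v_los, pixel=0.33, half_size=8.0):
    """Mean line-of-sight velocity on a regular grid of 0.33 kpc pixels.

    xi, eta : sky-plane coordinates of the gas cells (kpc)
    v_los   : line-of-sight velocities (km/s); equal cell masses assumed
    """
    edges = np.arange(-half_size, half_size + pixel, pixel)
    counts, _, _ = np.histogram2d(xi, eta, bins=[edges, edges])
    vsum, _, _ = np.histogram2d(xi, eta, bins=[edges, edges], weights=v_los)
    with np.errstate(invalid="ignore"):
        return vsum / counts   # NaN where a pixel contains no gas cell

# placeholder input instead of actual simulation snapshots
rng = np.random.default_rng(0)
xi, eta = rng.uniform(-8, 8, 1000), rng.uniform(-8, 8, 1000)
vmap = mock_velocity_map(xi, eta, v_los=rng.normal(0.0, 40.0, 1000))
\end{verbatim}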
Once the resolution has been degraded to that of the observations, almost all the $\HI$ velocity maps appear smooth and regular, and consistent with \citetalias{Rownd1994}. A different perspective is given by the $\HI$ rotation curves shown in the bottom panels of Fig.~\ref{fig:stellarzooms}. Most of the differences that are clearly visible between the rotation curves from the simulations and the analytic model are confined within $2-3\kpc$, where we barely have kinematic information. However, as predicted in Section~\ref{sec:model}, the rotation curves start to show appreciable differences with respect to the de-projected rotation curve we derived when the PB mass is close to $\simeq10^8\Msun$. We rederive some of the structural parameters of the PBs of Fig.~\ref{fig:stellarzooms} by fitting their projected density distribution with a \Sersic model, as in \cite{Fisher2010} and \citetalias{Bellazzini2020}, and by computing their average axis ratio $q$ through elliptical isodensity contours. We measure the axis ratio $q\equiv c/a$, with $c$ and $a$ the semi-minor and semi-major axes, respectively \citep[see][]{Lau2012}, by computing the inertia tensor of the projected density maps within an ellipse with semi-major axis $2\Reff$, centred on the map's densest point. To produce the projected density distribution: i) we evaluate the PB centre on the plane of the sky as the densest/brightest point; ii) we bin the particles in 50 circular annuli, linearly equally spaced, and extending out to $4\Reff$, with $\Reff$ as in the models' ICs; iii) we subtract the stellar disc contribution, estimated from a wider and more distant concentric circular annulus. A \Sersic model is specified by the parameters $\{\Reff,m,\Mb\}$, as in equation (\ref{for:ser1}) of Section~\ref{sec:PB}, replacing surface brightness with stellar surface mass density and assuming a constant mass-to-light ratio. The model's log-likelihood is \begin{equation}\label{for:chi2PB} \ln\LL = -\frac{1}{2}\sum_i\biggl(\frac{\Sigma(\Rsnapi) - \Sigsnapi}{\delta\Sigsnapi}\biggr)^2, \end{equation} where the points $\{\Rsnapi,\Sigsnapi,\delta\Sigsnapi\}$ are the PB projected density profile, and the sum extends over the total number of radial bins. The fit is performed using a Bayesian approach relying on MCMC, adopting the same scheme as in Section~\ref{sec:method} and using flat priors on the model parameters. We list the parameters resulting from the fit in Table~\ref{tab:pbsim}. On the plane of the sky, the PB of NGC 5474 appears round and regular ($q\simeq1$). In our analysis, the only PB that keeps its circular symmetry is that of model PB\_Re161\_M1 ($q\simeq0.9$), while we estimate $q\simeq0.67$ and $q\simeq0.76$ for models PB\_Re161\_M0.5 and PB\_Re320\_M1.5, respectively. According to the \Sersic fit, the PBs with initial $\Reff=0.161\kpc$ are more extended, on average, by a factor of $\sim1.3$ with respect to the initial system, while the PB with initial $\Reff=0.320\kpc$ is more extended by a factor of 1.15 (see Table~\ref{tab:pbsim}). As a reference, the left panel of Fig.~\ref{fig:PBpost} shows the projected density profile of the PB from model PB\_Re320\_M1.5, together with the best-fitting \Sersic model. In this case, the large errorbars are caused by having imposed circular symmetry in the derivation of the profile, even though the system is not strictly circularly symmetric. \begin{table} \begin{center} \caption{Models' parameters as a result of the fit with the \Sersic model of Section~\ref{sec:res}.
The columns, from left to right, list: the model's name; the \Sersic effective radius ($\Reff$); the \Sersic index ($m$); the PB total mass ($\Mb$); the axis ratio ($q$). For each parameter of the \Sersic model, we take as its measured value the $50^{\tth}$ percentile of its marginalized one-dimensional distribution, while the uncertainties are estimated from the 16$^{\tth}$, 50$^{\tth}$ and 84$^{\tth}$ percentiles of the corresponding marginalized one-dimensional distribution.}\label{tab:pbsim} \begin{tabular}{lccc} \hline\hline Model & PB\_Re161\_M0.5 & PB\_Re161\_M1 & PB\_Re320\_M1.5 \\ \hline\hline $\Reff/\kpc$ & $0.22\pm0.01$ & $0.21\pm0.01$ & $0.37\pm0.02$ \\ $m$ & $0.64^{+0.05}_{-0.04}$ & $0.64\pm0.02$ & $0.78\pm0.04$ \\ $\log_{10}\frac{\Mb}{\Msun}$& $7.53\pm0.03$ & $7.93\pm0.01$ & $8.14\pm0.2$ \\ $q$ & $0.67$ & $0.90$ & $0.76$ \\ \hline\hline \end{tabular} \end{center} \end{table} The case of model PB\_Re320\_M1.5 is probably the most intriguing one. Even if its PB is too flattened to look like the observed stellar system of NGC 5474 ($q\simeq0.76$), the fit with the \Sersic model provides $\Reff=0.37\pm0.02\kpc$, the closest to $\Reff=0.484\kpc$ among the cases explored. Since in Fig.~\ref{fig:stellarzooms} we have assumed the same mass-to-light ratio for the stellar disc and PB particles, it may be worth asking how the PB would look if we made its particles more luminous. Assuming different mass-to-light ratios $\UpsilonPB$ and $\Upsilondisc$ for the stars of the PB and of the galaxy's stellar disc, respectively, the middle and right panels of Fig.~\ref{fig:PBpost} show the surface brightness maps obtained from a zoom of the right panel of Fig.~\ref{fig:stellarzooms}. In the 3.6 $\mu$m band, if $\UpsilonPB$ ranges between 0.5 and 0.8, then $\UpsilonPB/\Upsilondisc=2$ (middle panel) implies a disc stellar population $\simeq1-3\Gyr$ old (see also Sections~\ref{sec:modelscomp} and \ref{sec:PB}). Although it is rather extreme, we also examined the case with $\UpsilonPB/\Upsilondisc=3$ (right panel). The shades of colours are the same as in Fig.~\ref{fig:stellarzooms}. For these PBs we have rederived the axis ratios and the structural parameters resulting from a \Sersic fit, but we do not find any significant difference in axis ratio, size or concentration with respect to the case with uniform $\Upsilon$. To summarize, we find that a system with a large initial size departs from its equilibrium state on a timescale so short that it is very unlikely to represent the observed PB of NGC 5474. It seems implausible that these systems could produce the observed off-set as the result of orbital decay, and it seems equally implausible that the PB, if displaced from the galaxy centre, would survive long enough without developing visible debris or tidal tails. Furthermore, in this latter case, the mechanism that may have caused the off-set would still be an open question. The most compact PBs, at least over the range of masses explored, move towards the centre without producing detectable perturbations in the $\HI$ velocity field but, in spite of this, none of them reproduces the observed properties of the PB of NGC 5474. Not even the compromise of a massive, intermediate-size system (which can in principle resist the tidal force field and expand to the required size during its evolution) comes close to the desired outcome.
In fact, we find that a stellar component with these properties flattens to an extent inconsistent with the observations, independently of the luminosity of its stellar population. We do not expect an increase of the initial mass to improve the chances of survival of the PB: even though it would make the PB more resilient against the gravitational tidal force field, according to Section~\ref{sec:model} the smoothness of the $\HI$ velocity field would start to be compromised. Instead, objects less massive than $0.5\times10^8\Msun$ would have a final mass (measured when the system is $1\kpc$ off-centred) lower than the one we would expect on the basis of the luminosity of the PB stellar population, as discussed in Section~\ref{sec:PB}. \begin{figure*} \centering \includegraphics[width=.95\hsize]{SW_overdensity.pdf} \caption{Left panel: stellar projected density map (stellar disc and PB) from the hydrodynamical $N$-body model PB\_Re484\_M1 taken after $\tint=0.83\Gyr$. The system has been projected as in Fig.~\ref{fig:stellarzooms}, assuming $i=21^{\circ}$, with $\xi\equiv X$. The centre of the $(\xi,\eta)$ plane is the kinematic centre of the gaseous disc (blue dot), while the intensity contours are at $\Smax/2^n$, with $\Smax$ the map's densest peak and $n=1,...,4$. The orange ellipse shows the position of the over-density. Middle panel: $\HI$ projected density map. The colour scale is such that yellow corresponds to high-density regions and purple to low-density regions. As a comparison, the white curves show the stellar isodensity contours as in the left panel. For clarity, we only show the stellar isodensities $\Smax/2^n$ corresponding to $n=1,4$. Right panel: $\HI$ velocity field map. The solid curves and the redder colours mark the disc's approaching arm, while the dashed curves and the bluer colours mark the disc's receding arm. The contours are separated by $3\kms$ and each pixel is $0.33\kpc\times0.33\kpc$ wide, as in \citetalias{Rownd1994}.} \label{fig:SW} \end{figure*} Even if we limited ourselves to studying only a few specific orbits, as long as the PB moves within the galaxy's equatorial plane it seems likely that different orbits (for instance, more radial ones, or ones with low inclination) would not behave very differently from those we have considered, given the dominant effects of dynamical friction and the strong tidal force field of NGC 5474. \subsubsection{The SW over-density} \label{sec:sw} The left panel of Fig.~\ref{fig:SW} shows the projected stellar density map from model PB\_Re484\_M1 after $\simeq1\Gyr$. We show a zoomed-in view of the discs' centre, the line of sight is inclined by $i=21^{\circ}$ with respect to the symmetry axis, and we have used the same colour scheme as in Fig.~\ref{fig:stellarzooms}. We have shown that the PB of the hydrodynamical $N$-body model PB\_Re484\_M1 is disrupted by the tidal force field of NGC 5474 and perturbs the central stellar and gas distribution of NGC 5474 while sinking towards the centre. In the left panel of Fig.~\ref{fig:SW}, we mark with an orange ellipse what is left of it after $1\Gyr$, when its centroid is $\sim1\kpc$ away from the galaxy's centre. We cannot help pointing out the similarities between this structure and the SW over-density of NGC 5474.
We mentioned the SW over-density as one of the peculiarities of NGC 5474: it is a large substructure extending to the South-West of the PB of NGC 5474 (see Fig.~\ref{fig:ngc5474}), mostly dominated by old and intermediate-age stars, whose structure is not associated with the overall spiral pattern \citepalias{Bellazzini2020}. When the PB of model PB\_Re484\_M1 is dismembered, its remnants settle into an elongated and wide structure that appears denser than the stellar disc's centre. The over-dense region partially brightens the galaxy's spiral structure with its tidal tails, and the spiral arms are also slightly traced by the $\HI$ distribution (middle panel). As noted by \citetalias{Bellazzini2020}, the SW over-density seems to be correlated with a local minimum of the $\HI$ distribution \citepalias{Kornreich2000}, which is partially consistent with Fig.~\ref{fig:SW}. We find a similar configuration also in model PB\_Re320\_M0.5. Reproducing the observed properties of the SW over-density of old and intermediate-age stars of NGC 5474 is beyond the scope of this work, and it would require a systematic search of the wide parameter space (different orbital inclinations, eccentricities, initial velocities, etc.). However, as a by-product, our simulations have produced configurations that are worth commenting on as a viable channel for the formation of this substructure and, in general, of the substructures traced by old and intermediate-age stars in the disc of NGC 5474. An interaction with M 101 may not be enough to explain the SW over-density (\citetalias{Bellazzini2020}; \citealt{Mihos2013}), and we showed that the hypothesis that the SW over-density can be the remnant of a disrupted system, unrelated to the PB and to M 101, as proposed, for instance, by \citetalias{Bellazzini2020}, is plausible and compatible with the smooth $\HI$ velocity field map (Fig.~\ref{fig:SW}, right panel). \subsection{Second hypothesis: the PB as a satellite} \label{sec:sim2} \subsubsection{Setting the PB and the halo} \label{sec:sim2pb} In this second set of simulations, we consider the scenario in which the off-set is only apparent, with the position of the PB in the disc resulting from projection effects of an external system (an unbound galaxy or a bound satellite) crossing the line of sight (\citetalias{Rownd1994}; \citealt{Mihos2013}; \citetalias{Bellazzini2020}). By means of radial velocity measurements from emission (absorption) lines of stars from the disc (PB), \citetalias{Bellazzini2020} constrained the maximum line-of-sight velocity difference between the two to $\sim50\kms$, less than the circular speed of NGC 5474. This feature implies a similar distance and, together with the fact that the disc and the PB have stellar populations of comparable age, probably implies a common history as well. As such, we explore cases where the PB is an external satellite galaxy, moving around NGC 5474 on motivated orbits, rather than an unrelated foreground galaxy. We also explore the possibility that some of the other peculiarities observed in NGC 5474 (the $\HI$ warp, \citealt{vanderHulst1979}; the SW over-density, \citetalias{Bellazzini2020}) may be caused by the gravitational interaction with such a satellite and, more generally, whether the observed properties of NGC 5474 are consistent with the presence of an orbiting satellite.
\begin{figure*} \centering \includegraphics[width=.49\hsize]{orbit_PBDM_05.pdf} \includegraphics[width=.49\hsize]{orbit_PBDM_1.pdf} \caption{Left panel: trajectory around NGC 5474 (black curve) of the PB of the hydrodynamical $N$-body model PBwithDM\_M0.5\_d20. The PB centre of mass starts from $(\xii,\yi,\zi) = (0,0,20\kpc)$, with initial velocity $\vc(\zi)=33\kms$ in the $y$-direction, in a Cartesian reference frame whose $(x,y)$-plane is the galaxy's disc plane. The orbit is shown for $\simeq6\Gyr$. The system has been projected assuming as line of sight the $x$-axis, so $Y\equiv y$ and $Z\equiv z$. The edges of the two orange bands show the different lines of sight ($i=21^{\circ}$) that would produce an off-set of $1\kpc$ on the plane of the discs, compatible with observations, when the PB centre crosses them. The stars of the PB are embedded in a dark-matter halo, whose isodensity contours are marked in red. We show two different PBs, one corresponding to the ICs and one corresponding to a crossing of one of the lines of sight. As a comparison, we also show the projected distribution of the stellar disc of NGC 5474 (black points) and the isodensities of the NGC 5474 dark-matter halo (blue curves). Right panel: same as the left panel, but for the $N$-body model PBwithDM\_M1\_d20. In the latter case, the orbit is shown for $\simeq3\Gyr$.} \label{fig:orbitDM} \end{figure*} We focus on orbits that start right above the galaxy, on its symmetry axis, from $(\xii,\yi,\zi)=(0,0,20\kpc)$, with $x,y,z$ the axes of a Cartesian reference frame whose $(x,y)$-plane is the plane of the galaxy's discs. The only non-zero component of the initial velocity of the PB centre of mass is in the $y$-direction, with modulus $\vc(\zi)=33\kms$. Since the PB moves far from the discs' plane, we relax the constraint on its total mass and embed it in a realistic dark-matter halo, following the scheme described in Section~\ref{sec:PB}. We focus on the two cases of a PB with a stellar mass of $\Mb=0.5\times10^8\Msun$ or $\Mb=10^8\Msun$. The PB halo parameters are chosen as in Section~\ref{sec:PB}, and we highlight that the requirement in equation (\ref{for:mdynmb}) implies that, when the PB stellar mass is doubled, the dark-matter mass is doubled as well, at fixed truncation radius. Since at $\zi=20\kpc$ the effects of dynamical friction are still non-negligible, we expect the PB to sink slowly towards the galaxy centre in a few $\Gyr$ (equation \ref{for:tdf}). The PB ICs have been generated as described in Section~\ref{sec:PBICs}, considering its separate luminous and dark components. We verified that the PB is in equilibrium by evolving it for $10\Gyr$ in isolation. The only numerical effect is a slight decrease of the central density, which is negligible for the purposes of our investigation. For clarity, we will refer to the two hydrodynamical $N$-body models as PBwithDM\_M0.5\_d20 and PBwithDM\_M1\_d20, where the terms 0.5 and 1 indicate the PB stellar mass in units of $10^8\Msun$, while d20 indicates that the PB centre of mass is located at an initial distance of $z=20\kpc$ (see Table~\ref{tab:sim2input}). As in the previous section, the ICs of NGC 5474 correspond to the equilibrium configuration of Appendix~\ref{sec:DHeq} after $\simeq0.98\Gyr$. Details on the simulations and on the NGC 5474 parameters are listed in Tables~\ref{tab:param} and \ref{tab:siminput}. The simulations run for $7\Gyr$ and we use an adaptive timestep refinement with typical timestep values of $0.2\Myr$.
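To convey the kind of orbit involved, the following drastically simplified sketch integrates a test particle with the above initial conditions in a static spherical potential with a crude velocity-proportional drag mimicking dynamical friction. It is purely illustrative: the actual runs are live hydrodynamical $N$-body simulations, and the potential and drag parameters below are placeholders, not the parameters of our NGC 5474 model.
\begin{verbatim}
import numpy as np

# Units: kpc and km/s, so the time unit is 1 kpc/(km/s) ~ 0.98 Gyr.
V0, RC = 35.0, 1.0   # placeholder circular speed [km/s] and core radius [kpc]
T_DF = 4.0           # placeholder drag timescale (~4 Gyr)

def accel(x, v):
    grav = -V0**2 * x / (RC**2 + np.dot(x, x))   # logarithmic halo
    drag = -v / T_DF                             # crude dynamical friction
    return grav + drag

x = np.array([0.0, 0.0, 20.0])   # start on the symmetry axis, 20 kpc away
v = np.array([0.0, 33.0, 0.0])   # initial velocity along y, 33 km/s
dt, n_steps = 1.0e-3, 6000       # ~6 Gyr in total
orbit = np.empty((n_steps, 3))
for k in range(n_steps):
    v += 0.5 * dt * accel(x, v)  # kick-drift-kick stepping
    x += dt * v
    v += 0.5 * dt * accel(x, v)
    orbit[k] = x
\end{verbatim}
Even this toy model reproduces the qualitative behaviour shown in Fig.~\ref{fig:orbitDM}: a slowly decaying, nearly polar orbit that completes a few windings around the galaxy over several $\Gyr$.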
\begin{table} \centering \caption{Main input parameters of the set of simulations of Section~\ref{sec:sim2}. From top to bottom: name of the model (model's name); PB total stellar mass ($\Mb$); PB total dark-matter mass ($\MdmPB$); number of stellar particles used for the PB ($\Nb$); number of dark-matter particles used for the PB ($\Nbdm$); softening used for the PB stellar component particles ($\lPB$); softening used for the PB dark-matter component particles ($\lPBdm$). The softening is computed as in Section~\ref{sec:sim1pb} and all the simulation components (NGC 5474 dark halo, stellar disc, gas disc, PB stellar component and PB dark-matter halo) have different softenings. The PB stellar particles have $\mpart=1667\Msun$, while the dark-matter particles have $\mpart=5000\Msun$. Details on how the PB dark-matter halo parameters have been fixed are given in Section~\ref{sec:PB}, while details on the PB initial position and velocity are given in Section~\ref{sec:sim2pb}. The ICs of NGC 5474 correspond to the configuration of Appendix \ref{sec:DHeq} taken after $0.98\Gyr$ (see also Tables~\ref{tab:param} and \ref{tab:siminput}).}\label{tab:sim2input} \begin{tabular}{lcc} \hline\hline Models' name & PBwithDM\_M0.5\_d20 & PBwithDM\_M1\_d20 \\ \hline\hline $\Mb$ [$10^8\Msun$] & 0.5 & 1 \\ $\MdmPB$ [$10^9\Msun$] & 0.98 & 1.95 \\ $\Nb$ & 30000 & 60000 \\ $\Nbdm$ & 195160 & 390320 \\ $\lPB$ [$\kpc$] & 0.042 & 0.033 \\ $\lPBdm$ [$\kpc$] & 0.16 & 0.16 \\ \hline\hline \end{tabular} \end{table} \subsubsection{Results} \label{sec:resPBDM} \begin{figure*} \centering \includegraphics[width=1.\hsize]{PBDM_PBoffset.pdf} \caption{Total (stellar disc and PB) stellar surface density map taken at three snapshots of model PBwithDM\_M0.5\_d20, assuming an inclination $i = 21^{\circ}$. In each snapshot the PB produces, as a perspective effect, an off-set of $1\kpc$ on the discs' plane. The centre of the $(\xi,\eta)$-plane is the kinematic centre of the stellar disc (blue dot), and the shades of grey extend over two orders of magnitude from the stellar density peak, corresponding, assuming a constant mass-to-light ratio for both populations (stellar disc and PB), to a depth of five magnitudes. Indicating with $\Sigma_{\rm max}$ the densest peak, the contours are at $\Sigma_{\rm max}/2^n$, with $n=1,..., 6$. The three snapshots have been taken after the system has evolved for $\tint=3.43\Gyr$, $4.75\Gyr$ and $5.28\Gyr$, from the left to the right panel, respectively. In the small insets we show the full PB orbit as in Fig.~\ref{fig:orbitDM}, marking with a blue dot the position of the PB along its orbit corresponding to the main panel, and with the dashed orange curves the galaxy's possible lines of sight.} \label{fig:PBoffset} \end{figure*} Figure~\ref{fig:orbitDM} shows the trajectories of the PBs of models PBwithDM\_M0.5\_d20 and PBwithDM\_M1\_d20, alongside their projected spatial distribution taken at two representative snapshots. The systems have been projected assuming as line of sight the galaxy's $x$-axis, i.e. the axis perpendicular to the plane of the orbit. The trajectories of the satellites differ the most in the number of windings around NGC 5474. This different behaviour is completely driven by dynamical friction: since the PB of model PBwithDM\_M1\_d20 has twice the dynamical mass of that of model PBwithDM\_M0.5\_d20, its orbital decay time is approximately halved (equation \ref{for:tdf}).
While the lighter PB (left panel) completes approximately four excursions above and below the equatorial plane in $\simeq6\Gyr$, the stellar component of the most massive PB (right panel) develops tidal tails after $\simeq3\Gyr$, when it has completed less than three windings. In Fig.~\ref{fig:orbitDM} we have marked the galaxy's possible inclinations ($i=21^{\circ}$, as in \citetalias{Rownd1994}) with two orange bands. The edges of these bands are such that, when crossed by the PB centre, they produce an off-set of $1\kpc$ as an effect of projection on the stellar disc of NGC 5474. In both panels the orbits are interrupted when the systems have reached $r\sim5-6\kpc$ from the centre since, at shorter distances, the stellar components of the PBs develop non-equilibrium features (tidal tails, elongated structures) apparently inconsistent with observations; we therefore do not include these phases in any following analysis. The panels of Fig.~\ref{fig:PBoffset} show the total stellar (PB and disc) density map from three configurations of model PBwithDM\_M0.5\_d20, projected assuming an inclination $i=21^{\circ}$. The panels, from left to right, are projections taken after $\tint=3.43\Gyr$, $4.75\Gyr$ and $5.28\Gyr$, that is, after one and a half, two and a half and three full excursions above and below the equatorial plane, respectively. The gradient of contours is as in Fig.~\ref{fig:stellarzooms} and the small insets show the corresponding position of the PB along its orbit. When the PB crosses the galaxy's equatorial plane, the gravitational perturbation causes the development of a long-lived and distinct spiral pattern in the stellar disc, made up of two symmetric arms that penetrate the stellar disc down to $2\kpc$ from the centre and extend out to $5-6\kpc$. It is worth noticing that the spiral pattern does not form when the NGC 5474 galaxy model is evolved in isolation (see Appendix~\ref{sec:DHeq}, right panels of Fig.~\ref{fig:discmapeq}). \begin{figure} \centering \includegraphics[width=.9\hsize]{crossing_with_M05.pdf} \caption{Same as the middle panel of Fig.~\ref{fig:PBoffset}, but showing the $\HI$ disc surface density map. The density decreases from yellow to blue. The white contours show the stellar spatial distribution of the configuration of the middle panel of Fig.~\ref{fig:PBoffset}, where, for clarity, we have removed the lowest isodensity contour. The blue dot shows the kinematic centre of the $\HI$ disc. Small inset: $\HI$ circular speed as a function of the galactocentric distance (blue points with error bars) from the same snapshot as in the main panel, compared to the galaxy's deprojected rotation velocity computed in Section~\ref{sec:hd} (black points with error bars). The orange curve shows the circular speed of the analytic model of NGC 5474.} \label{fig:PBoffsetgas} \end{figure} Similar spiral arms form also in the $\HI$ disc. Although it is much more structured and extended (we recall that $\hgas/\hstar\simeq4$), the pattern of the $\HI$ follows that of the stellar component. As an example, Fig.~\ref{fig:PBoffsetgas} shows the $\HI$ projected density map corresponding to the middle panel of Fig.~\ref{fig:PBoffset}. As a reference, we have superimposed, as white contours, the projected stellar density map of the middle panel of Fig.~\ref{fig:PBoffset}. A spiral pattern forms also in model PBwithDM\_M1\_d20. In both models the arms develop after the satellite has crossed the discs' plane at distances of at least $\sim7\kpc$.
This happens sooner in model PBwithDM\_M1\_d20, but it lasts for a shorter time because of the shorter dynamical friction timescale. At this distance, at least for model PBwithDM\_M0.5\_d20, the crossings of the equatorial plane do not appreciably perturb the kinematics of the $\HI$ disc, whose rotation curve (inset in Fig.~\ref{fig:PBoffsetgas}) still looks very similar to the initial one and to the measured rotation curve of NGC 5474. When comparing with the observed morphology of NGC 5474, it is important to bear in mind that our models do not include star formation. For example, in the real galaxy, star-forming and $\HII$ regions trace a different spiral pattern with respect to old and intermediate-age stars. Our simulations can approximately trace the latter but not the former. \begin{figure} \centering \includegraphics[width=.9\hsize]{mock_observables.pdf} \caption{Projected density distribution of the PB as a function of the distance $R$ from the PB centre (blue circles with errorbars), superimposed on the analytic \Sersic model (yellow dashed line) from which the ICs have been sampled (i.e. with the same $m$ and $\Reff$ as the observed PB, \citetalias{Bellazzini2020}, but with a total initial mass $\Mb=0.5\times10^8\Msun$). The PB corresponds to the configuration shown in the middle panel of Fig.~\ref{fig:PBoffset} and in Fig.~\ref{fig:PBoffsetgas}.} \label{fig:mock} \end{figure} Figure~\ref{fig:mock} shows the PB stellar projected density profile obtained from the middle panel of Fig.~\ref{fig:PBoffset}. The profile has been computed as in the previous section by binning with 50 circular annuli, equally spaced in $R$, out to $1.13\kpc$ (corresponding to $33.5\asec$, as in \citetalias{Bellazzini2020}). The background has been evaluated from a wider and more distant annulus, and subtracted from the main profile. For comparison, the yellow curve shows the PB \Sersic model as in the analytic model from which the ICs have been sampled (corresponding to the \Sersic model of \citetalias{Bellazzini2020}, with a total stellar mass $\Mb=10^8\Msun$). The two profiles differ the most in the central parts, even though, as mentioned and discussed in the previous section, we find the very same difference also when the system is evolved in isolation. \begin{figure} \centering \includegraphics[width=.8\hsize]{HI_warp.pdf} \caption{Top panel: $\HI$ projected density distribution when the galaxy is viewed edge-on. The configuration is taken from model PBwithDM\_M1\_d20 after $\tint=1.86\Gyr$ from the beginning of the simulation. The shades of colours, from yellow to black, show regions of decreasing density; the dashed curve shows the $(X,Y)$-plane, while the solid curve is inclined by $5^{\circ}$, following the warped $\HI$ disc. Middle panel: $\HI$ velocity field map from the same snapshot as in the top panel. The system is projected with an inclination of $i=21^{\circ}$ and the spatial and velocity resolutions are as in Figs~\ref{fig:stellarzooms} and~\ref{fig:SW}. The small inset shows with a blue dot the corresponding position of the PB along its orbit (black circle) and with a red dot the galaxy's centre. The two orange lines mark the galaxy's possible inclinations.
Bottom panel: projected $\HI$ column density profile computed from the same snapshot as in the top and middle panels (blue dots with errorbars), superimposed on the observed profile (black squares with errorbars), as derived in Section~\ref{sec:model}, and on the analytic model from which the ICs have been sampled (dashed red curve).} \label{fig:warp} \end{figure} As a second, interesting feature, we find that the satellite mildly warps the $\HI$ disc when it crosses the galaxy's equatorial plane. The top panel of Fig.~\ref{fig:warp} shows the projected density distribution of the $\HI$ disc from model PBwithDM\_M1\_d20, when the galaxy is viewed edge-on, after the system has evolved for $\tint=1.86\Gyr$ and the PB has completed a full oscillation in the vertical direction, crossing the equatorial plane twice. We selected a snapshot in which the PB is also $1\kpc$ off-centred from the discs' centre, similarly to Fig.~\ref{fig:PBoffset} (see the small inset in the middle panel of Fig.~\ref{fig:warp}). The $\HI$ disc bends by approximately $5^{\circ}$ and the warp survives for another complete vertical oscillation of the PB around NGC 5474. However, the $\HI$ velocity field strongly constrains the minimum distance that a system with such a dynamical mass can reach. The middle panel shows the $\HI$ velocity field map from the same configuration as the top panel, but when the galaxy is seen inclined by $i=21^{\circ}$ (the resolution of the map is as in Figs~\ref{fig:stellarzooms} and~\ref{fig:SW}). The iso-velocity contours are regular enough to be consistent with those of \citetalias{Rownd1994}, even though the approaching and receding arms are not symmetric over the full map: the approaching arm reaches a $21-24\kms$ amplitude at $2-3\kpc$, while on the opposite side the amplitude peaks at $12-15\kms$ at the same distance. The maximum velocity difference between the approaching and receding arms reported by \citetalias{Rownd1994} is only $2-3\kms$. This configuration is the last one still compatible and in good agreement with the observations: at later times, further interactions between the two systems erase any sign of regularity and differential rotation from the $\HI$ velocity field. The PB of model PBwithDM\_M0.5\_d20 does not warp or distort the $\HI$ disc when it crosses through it, keeping the $\HI$ velocity field regular, thanks to its lower dynamical mass and to the fact that most of the crossings happen at distances larger than $7\kpc$. We recall that in model PBwithDM\_M1\_d20 the PB dynamical mass is considerably high, roughly comparable to that of NGC 5474. It therefore seems plausible that a system with an intermediate mass (between those of the two models) could, at the same time, warp the $\HI$ disc without significantly distorting the $\HI$ velocity field map. The bottom panel of Fig.~\ref{fig:warp} shows the $\HI$ column density distribution corresponding to the middle panel, compared with the $\HI$ column density we derived in Section~\ref{sec:model} from observations. The density profile of the $\HI$ changes and bends at $\sim5-6\kpc$, which corresponds approximately to the distance of the latest crossing of the PB, but the overall shape is consistent with the observed one, apart from the centre, where the two profiles differed the most already at the beginning of the simulation (see Appendix~\ref{sec:DHeq}, Fig.~\ref{fig:discseq}).
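A warp amplitude of this kind can be quantified in several ways; the sketch below shows one crude option (not the procedure adopted for Fig.~\ref{fig:warp}, and purely illustrative), namely the orientation of the principal axis of the mass-weighted second moments of the outer gas in the edge-on view.
\begin{verbatim}
import numpy as np

def outer_tilt_deg(X, Z, mass, r_min=5.0):
    # X, Z: edge-on coordinates [kpc] of the gas particles, with the disc
    # centred at the origin; mass: particle masses. The tilt is measured
    # from the original mid-plane using only gas at |X| > r_min.
    sel = np.abs(X) > r_min
    x, z, m = X[sel], Z[sel], mass[sel]
    s_xx = np.sum(m * x * x)
    s_zz = np.sum(m * z * z)
    s_xz = np.sum(m * x * z)
    theta = 0.5 * np.arctan2(2.0 * s_xz, s_xx - s_zz)   # radians
    return np.degrees(theta)
\end{verbatim}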
\section{Conclusions} \label{sec:concl} Since NGC 5474 is a member of the M 101 Group and appears in projection so close to its giant central galaxy, a tumultuous past has always been invoked as the main driver of all of its peculiarities. However, while the hypothesis of a gravitational interaction with M 101 may explain, for instance, the warped $\HI$ distribution \citepalias{Rownd1994}, it does not provide an explanation for its off-centred bulge. Off-set bars are observed in Magellanic spirals \citep{Odewahn1989}, even though the mechanism that can induce the misplacement is still unknown. NGC 5474 is, however, not a Magellanic spiral: i) it does not possess a bar, but rather a very round and regular stellar component; ii) it has two spiral arms and not one; iii) off-centred bars in Magellanic spirals are observed mostly in binary systems. To complicate things, the only available $\HI$ observations of NGC 5474 date back to the early 1990s, and only trace the large-scale structure of the galaxy, while the more recent ${\rm H}\alpha$ observations, tracing the inner kinematics, seem hard to reconcile with the $\HI$ data \citepalias{Epinat2008,Bellazzini2020}, though the galaxy's low inclination does not allow us to draw robust conclusions on the disc's kinematics. Following the work of \citetalias{Bellazzini2020}, who renewed the interest in this galaxy with a detailed study of its stellar populations, we have produced state-of-the-art hydrodynamical $N$-body models of NGC 5474, aimed at investigating the nature of the galaxy's central and compact stellar component, usually interpreted as an off-set bulge. Using analytic models we have argued that, if the PB really lies within the galaxy's disc plane, it is implausible that its dynamical mass is more than $10^8\Msun$, because such a system would: i) shift the entire galaxy's gravitational potential minimum, making the kinematic centre of the discs coincide with the centre of the bulge; ii) induce strong distortions in the $\HI$ velocity field map, inconsistent with observations. Through hydrodynamical $N$-body simulations, we tried to reproduce configurations where the PB appears off-centred as a result of orbital decay due to dynamical friction, while it moves within the plane of the galaxy's discs. We explored PBs of different masses and sizes but in none of the considered scenarios were we able to reproduce the observations: a system with a large initial size ($\Reff\geq0.320\kpc$), while evolving in the strong tidal force field of NGC 5474, develops massive tidal tails and gets flattened and elongated (in some cases destroyed) after less than $0.8\Gyr$. The very short time needed to develop non-equilibrium features makes it very unlikely that a PB with these characteristics can either come from the galaxy's outer regions or be an off-centred pseudo-bulge. A compact system reaches the required $1\kpc$ distance from the centre but, on the basis of our structural analysis, it either remains too compact or becomes too flattened to resemble the observed one. Through $N$-body simulations, \cite{Levine1998} showed that an off-set between a stellar disc and the gravitational potential minimum of its host galaxy can persist for a sufficiently long time if the stellar disc spins in a sense retrograde to its orbit about the halo centre. According to the authors, we should then interpret the discs as off-centred with respect to the bulge, and not the other way around.
We believe that this is not the case for NGC 5474, in which the off-set is present in the gas kinematics as well (\citealt{Levine1998} considered collisionless simulations with no gas). If we imagine the gas to behave similarly to the stellar counterpart, according to \cite{Levine1998} the $\HI$ velocity field should be clearly and strongly asymmetric, which is not the case for NGC 5474. As different authors proposed (\citetalias{Rownd1994}; \citealt{Mihos2013}; \citetalias{Bellazzini2020}), we have explored the hypothesis that the off-set is produced by projection effects, with the PB orbiting around NGC 5474. Given the structural homology between the PB and a dE galaxy, we have embedded it in a dark-matter halo. We have shown reference cases of polar orbits where, in projection, the PB looks off-centred by $1\kpc$, just as observed. We exploit the gravitational interaction between the satellite PB and NGC 5474 to show that it may: i) explain the formation of the galaxy's loose spiral pattern, formed by two symmetric arms, together with a very similar structure in the $\HI$ distribution; ii) partially account for the formation of the warped $\HI$ disc, at least in the case of a sufficiently massive PB. Of course, the large parameter space, the lack of tight observational constraints and the degeneracy induced by projection would allow hundreds of orbits to reproduce the observed, present-day configuration. As such, we do not expect to have solved all the mysteries behind NGC 5474, but rather to have shown in a quantitative manner that its PB is probably not the bulge or the pseudo-bulge of NGC 5474. We have also presented a possible, intriguing alternative scenario in which the PB is a satellite galaxy of NGC 5474, moving on a polar orbit, which has the advantage of explaining some of the other peculiarities of NGC 5474. While our study is not sufficient to ascertain the real nature of the PB, it provides for the first time a sound way out of the main puzzle of the structure of NGC 5474: the odd off-centred \lapex bulge \rapex is likely not a bulge at all, but a satellite dwarf galaxy projected near the centre of an M33-like bulge-less spiral \citep{Boker2002,Das2012,Grossi2018}. \section*{Acknowledgments} We thank the anonymous referee for his/her comments and suggestions that considerably improved the quality of this work. We acknowledge the use of computational resources from the parallel computing cluster of the Open Physics Hub (\url{https://site.unibo.it/openphysicshub/en}) at the Physics and Astronomy Department in Bologna. We acknowledge funding from the INAF Main Stream program SSH 1.05.01.86.28. FM is supported by the Program \lapex Rita Levi Montalcini\rapex of the Italian MIUR. We thank F. Fraternali and G. Iorio for very helpful discussions. RP acknowledges G. Sabatini for useful suggestions and comments. \section*{Data availability} The rotation curves and the $\HI$ column density map of NGC 5474 are available at \url{https://dx.doi.org/10.1086/117185}. The rotation curve and the $\HI$ density distribution rederived in this article will be shared on request to the corresponding author.
\section{The maximal complex subbundle of the tangent bundle}\label{mcs} Let $\bar{M}$ be a K\"{a}hler manifold with K\"{a}hler structure $J$ and K\"{a}hler metric $g$. We always assume $n = \dim_\mathbb{C}(\bar{M}) \geq 2$. Let $M$ be a real hypersurface in $\bar{M}$. We will denote the induced Riemannian metric on $M$ also by $g$. The Levi Civita covariant derivative of $\bar{M}$ and $M$ is denoted by $\bar\nabla$ and $\nabla$, respectively. The Lie algebra of smooth vector fields on $M$ is denoted by ${\mathfrak{X}}(M)$. Let $\zeta$ be a (local) unit normal vector field on $M$. We denote by $A = A_\zeta$ the shape operator of $M$ with respect to $\zeta$. The unit vector field \[ \xi = -J\zeta \] is the Reeb vector field on $M$. The flow of the Reeb vector field $\xi$ is the Reeb flow on $M$. We define a $1$-form $\eta$ on $M$ by \[ \eta(X) = g(X,\xi) \] for all $X \in {\mathfrak{X}}(M)$ and a skew-symmetric tensor field $\phi$ on $M$ by decomposing $JX$ into its tangential component $\phi X$ and its normal component $g(JX,\zeta)\zeta$, that is, \[ JX = \phi X + g(JX,\zeta)\zeta = \phi X + \eta(X)\zeta \] for all $X \in {\mathfrak{X}}(M)$. The $1$-form $\eta$ is the almost contact form on $M$ and the skew-symmetric tensor field $\phi$ is the structure tensor field on $M$. The quadruple $(\phi,\xi,\eta,g)$ is the induced almost contact metric structure on $M$. Note that \[ \eta(\xi) = 1,\ \phi\xi = 0 \mbox{ and } \phi^2 X = -X + \eta(X)\xi \] for all $X \in {\mathfrak{X}}(M)$. Using the K\"{a}hler property $\bar\nabla J = 0$ and the Weingarten formula we obtain \[ 0 = (\bar\nabla_XJ)\zeta = \bar\nabla_X J\zeta - J\bar\nabla_X\zeta = -\bar\nabla_X\xi + JAX \] for all $X \in {\mathfrak{X}}(M)$. The tangential component of this equation induces the useful equation \[ \nabla_X\xi = \phi AX \] for all $X \in {\mathfrak{X}}(M)$. The subbundle \[ {\mathcal{C}} = \ker(\eta) = TM \cap J(TM) \] of the tangent bundle $TM$ of $M$ is the maximal complex subbundle of $TM$. We denote by $\Gamma(\calC)$ the set of all vector fields $X$ on $M$ with values in ${\mathcal{C}}$, that is, \begin{align*} \Gamma(\calC) & = \{ X \in {\mathfrak{X}}(M) : X_p \in {\mathcal{C}}_p \mbox{ for all } p \in M\} \\ & = \{ X \in {\mathfrak{X}}(M) : \eta(X) = 0 \} . \end{align*} The real hypersurface $M$ is called a Hopf hypersurface if the Reeb flow on $M$ is a geodesic flow, that is, if the integral curves of the Reeb vector field $\xi$ are geodesics in $M$. We have the following characterization of Hopf hypersurfaces. \begin{prop} Let $M$ be a real hypersurface in a K\"{a}hler manifold $\bar{M}$ with induced almost contact metric structure $(\phi,\xi,\eta,g)$. The following statements are equivalent: \begin{enumerate} \item[\rm (i)] $M$ is a Hopf hypersurface in $\bar{M}$; \item[\rm (ii)] $\nabla_\xi\xi = 0$; \item[\rm (iii)] The Reeb vector field $\xi$ is a principal curvature vector of $M$ at every point; \item[\rm (iv)] The maximal complex subbundle ${\mathcal{C}}$ of $TM$ is invariant under the shape operator $A$ of $M$, that is, $A{\mathcal{C}} \subseteq {\mathcal{C}}$. \end{enumerate} \end{prop} \begin{proof} Let $p \in M$ and $c : I \to M$ be an integral curve of the Reeb vector field $\xi$ with $0 \in I$ and $c(0) = p$. Then we have $\nabla_{\xi_p}\xi = \nabla_{\dot{c}(0)}\xi = (\xi \circ c)'(0) = \dot{c}'(0)$. If $M$ is a Hopf hypersurface, then we have $\dot{c}'(0) = 0$ by definition and therefore $\nabla_{\xi_p}\xi = 0$. Since this holds at any point $p \in M$, we obtain $\nabla_\xi\xi = 0$. 
Conversely, if $\nabla_\xi\xi = 0$, then $\dot{c}' = \nabla_{\dot{c}}\xi = \nabla_{\xi \circ c}\xi = 0$ for any integral curve $c$ of $\xi$. Thus any integral curve of $\xi$ is a geodesic in $M$ and hence $M$ is a Hopf hypersurface. This establishes the equivalence of (i) and (ii). The kernel $\ker(\phi)$ of the structure tensor field $\phi$ is spanned by the Reeb vector field, that is, $\ker(\phi) = \mathbb{R}\xi$. Since $\nabla_\xi\xi = \phi A\xi$, we therefore see that $\nabla_\xi\xi = 0$ if and only if $A\xi \in \mathbb{R}\xi$, which shows that (ii) and (iii) are equivalent. We have the orthogonal decomposition $TM = {\mathcal{C}} \oplus \mathbb{R}\xi$. Since the shape operator $A$ is self-adjoint, the equivalence of (iii) and (iv) is obvious. \end{proof} The next result provides a characterization of the real hypersurfaces obtained by formally taking the limit $f \to 0$ in Okumura's characterization $A\phi + \phi A = 2f\phi$ of contact hypersurfaces in K\"{a}hler manifolds. \begin{prop} \label{HopfCint} Let $M$ be a real hypersurface in a K\"{a}hler manifold $\bar{M}$ with induced almost contact metric structure $(\phi,\xi,\eta,g)$. The following statements are equivalent: \begin{enumerate} \item[\rm (i)] The almost contact form $\eta$ is closed, that is, $d\eta = 0$. \item[\rm (ii)] The shape operator $A$ of $M$ and the structure tensor field $\phi$ satisfy \[ A \phi + \phi A = 0. \] \item[\rm (iii)] The real hypersurface $M$ is a Hopf hypersurface and the maximal complex subbundle ${\mathcal{C}}$ of $TM$ is integrable. \end{enumerate} \end{prop} \begin{proof} Using the equation $\nabla_X\xi = \phi AX$, the exterior derivative $d\eta$ of $\eta$ is \begin{align*} d\eta(X,Y) & = d(\eta(Y))(X) - d(\eta(X))(Y) - \eta([X,Y]) \\ & = Xg(Y,\xi) - Yg(X,\xi) - g([X,Y],\xi) \\ & = g(\nabla_XY,\xi) + g(Y,\nabla_X\xi) - g(\nabla_YX,\xi) - g(X,\nabla_Y\xi) - g([X,Y],\xi) \\ & = g(Y,\phi A X) - g(X, \phi A Y) \\ & = g((A\phi + \phi A)X,Y) \end{align*} for all $X,Y \in {\mathfrak{X}}(M)$. It follows that $\eta$ is closed if and only if $A\phi + \phi A = 0$, which shows that (i) and (ii) are equivalent. The above calculations imply that for $X,Y \in \Gamma(\calC)$ we have \[ \eta([X,Y]) = - d\eta(X,Y) = -g((A\phi + \phi A)X,Y). \] It follows that the distribution ${\mathcal{C}}$ is involutive if and only if $g((A\phi + \phi A)X,Y) = 0$ holds for all $X,Y \in \Gamma(\calC)$. We have $g((A\phi + \phi A)\xi,Y) = g(\phi A\xi, Y) = 0$ for all $Y \in \Gamma(\calC)$ if and only if $A\xi \in \mathbb{R}\xi$, that is, if and only if $M$ is a Hopf hypersurface. We always have $g((A\phi + \phi A)\xi,\xi) = 0$. Since $TM = {\mathcal{C}} \oplus \mathbb{R}\xi$, it follows that $A\phi + \phi A = 0$ holds if and only if $M$ is a Hopf hypersurface and the distribution ${\mathcal{C}}$ is involutive. By the Frobenius theorem the latter condition is equivalent to the integrability of ${\mathcal{C}}$, which establishes the equivalence of (ii) and (iii). \end{proof} \section{The complex hyperbolic quadric}\label{tchq} The complex hyperbolic quadric is the Riemannian symmetric space \[ {Q^n}^* = SO^o_{2,n}/SO_2SO_n,\ n \geq 1. \] It is the non-compact dual symmetric space of the complex quadric $Q^n = SO_{2+n}/SO_2SO_n$. We denote by $M_{2,n}(\mathbb{R})$ the real vector space of $(2 \times n)$-matrices with real coefficients.
Let \[ {\mathfrak{g}} = {\mathfrak{s}}{\mathfrak{o}}_{2,n} = \left\{ \begin{pmatrix} A_1 & B \\ B^\top & A_2 \end{pmatrix} : A_1 \in {\mathfrak{s}}{\mathfrak{o}}_2,\ A_2\in{\mathfrak{s}}{\mathfrak{o}}_n,\ B \in M_{2,n}(\mathbb{R}) \right\} \] be the Lie algebra of the indefinite special orthogonal group $G = SO^o_{2,n}$ and \[ {\mathfrak{k}} = {\mathfrak{s}}{\mathfrak{o}}_2 \oplus {\mathfrak{s}}{\mathfrak{o}}_n = \left\{ \begin{pmatrix} A_1 & 0_{2,n} \\ 0_{n,2} & A_2 \end{pmatrix} : A_1 \in {\mathfrak{s}}{\mathfrak{o}}_2,\ A_2\in{\mathfrak{s}}{\mathfrak{o}}_n \right\} \] be the Lie algebra of the isotropy group $K = SO_2SO_n$ of $G$ at $o \in {Q^n}^*$. Let \[ B : {\mathfrak{g}} \times {\mathfrak{g}} \to \mathbb{R}\ ,\ (X,Y) \mapsto {\rm {tr}}({\rm {ad}}(X){\rm {ad}}(Y)) = n{\rm {tr}}(XY) \] be the Killing form of ${\mathfrak{g}}$ and \[ {\mathfrak{p}} = \left\{ \begin{pmatrix} 0_{2,2} & B \\ B^\top & 0_{n,n} \end{pmatrix} : B \in M_{2,n}(\mathbb{R}) \right\} \] be the orthogonal complement of ${\mathfrak{k}}$ in ${\mathfrak{g}}$ with respect to $B$. The resulting decomposition ${\mathfrak{g}} = {\mathfrak{k}} \oplus {\mathfrak{p}}$ is a Cartan decomposition of ${\mathfrak{g}}$. We identify the tangent space $T_o{Q^n}^*$ of ${Q^n}^*$ at $o$ with ${\mathfrak{p}}$ in the usual way. The Cartan involution $\theta \in {\rm {Aut}}({\mathfrak{g}})$ on ${\mathfrak{g}}$ is given by \[ \theta(X) = I_{2,n} X I_{2,n} \mbox{ with } I_{2,n} = \begin{pmatrix} -I_2 & 0_{2,n} \\ 0_{n,2} & I_n \end{pmatrix}, \] where $I_2$ and $I_n$ are the identity $(2 \times 2)$-matrix and $(n \times n)$-matrix, respectively. Then \[ B_\theta : {\mathfrak{g}} \times {\mathfrak{g}} \to \mathbb{R}\ ,\ (X,Y) \mapsto -B(X,\theta(Y)) \] is a positive definite ${\rm {Ad}}(K)$-invariant inner product on ${\mathfrak{g}}$. The Cartan decomposition ${\mathfrak{g}} = {\mathfrak{k}} \oplus {\mathfrak{p}}$ is orthogonal with respect to $B_\theta$. The restriction of $B_\theta$ to ${\mathfrak{p}} \times {\mathfrak{p}}$ induces a $G$-invariant Riemannian metric $g_{B_\theta}$ on ${Q^n}^*$, which is often referred to as the standard homogeneous metric on ${Q^n}^*$. The complex hyperbolic quadric $({Q^n}^*,g_{B_\theta})$ is an Einstein manifold with Einstein constant $-\frac{1}{2}$ (see \cite{WZ85} and use duality between Riemannian symmetric spaces of compact type and of non-compact type). We renormalize the standard homogeneous metric $g_{B_\theta}$ so that the Einstein constant of the renormalized Riemannian metric $g$ is equal to $-2n$, that is, \[ g_{B_\theta} = 4ng. \] This renormalization implies that the minimum of the sectional curvature of $({Q^n}^*,g)$ is equal to $-4$. Note that $({Q^1}^*,g)$ is isometric to the complex hyperbolic line $\mathbb{C} H^1(-4)$ and $({Q^2}^*,g)$ is isometric to the Riemannian product $\mathbb{C} H^1(-4) \times \mathbb{C} H^1(-4)$ of two complex hyperbolic lines. For $n \geq 3$, $({Q^n}^*,g)$ is an irreducible Riemannian symmetric space of non-compact type and rank $2$. We assume $n \geq 3$ in the following. The Lie algebra ${\mathfrak{k}}$ decomposes orthogonally into ${\mathfrak{k}} = {\mathfrak{s}}{\mathfrak{o}}_2 \oplus {\mathfrak{s}}{\mathfrak{o}}_n$. The first factor ${\mathfrak{s}}{\mathfrak{o}}_2$ is the $1$-dimensional center of ${\mathfrak{k}}$.
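For later computations it is convenient to make the renormalized metric on ${\mathfrak{p}}$ explicit. If $X \in {\mathfrak{p}}$ corresponds to the matrix $B \in M_{2,n}(\mathbb{R})$ as above, then $\theta(X) = -X$ and a direct computation gives \[ B_\theta(X,X) = B(X,X) = n\,{\rm {tr}}(X^2) = 2n\,{\rm {tr}}(B^\top B), \qquad\mbox{hence}\qquad g(X,X) = \frac{1}{2}\,{\rm {tr}}(B^\top B) \] by the renormalization $g_{B_\theta} = 4ng$.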
The adjoint action of \[ Z = \begin{pmatrix} 0 & -1 & 0 & \cdots & 0 \\ 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1 \\ \end{pmatrix} \in SO_2 \subset SO_2SO_n = K \] on ${\mathfrak{p}}$ induces a K\"{a}hler structure $J$ on ${Q^n}^*$. In this way $({Q^n}^*,g,J)$ becomes a Hermitian symmetric space. We define \[ c_0 = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & -1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1 \\ \end{pmatrix} \in O_2SO_n . \] Note that $c_0 \not\in K$, but $c_0$ is in the isotropy group at $o$ of the full isometry group of $({Q^n}^*,g)$. The adjoint transformation ${\rm {Ad}}(c_0)$ leaves ${\mathfrak{p}}$ invariant and $C_0 = {\rm {Ad}}(c_0)|_{\mathfrak{p}}$ is an anti-linear involution on ${\mathfrak{p}} \cong T_o{Q^n}^*$ satisfying $C_0J + JC_0 = 0$. In other words, $C_0$ is a real structure on $T_o{Q^n}^*$. The involution $C_0$ commutes with ${\rm {Ad}}(g)$ for all $g \in SO_n \subset K$ but not for all $g \in K$. More precisely, for $g = (g_1,g_2) \in K$ with $g_1 \in SO_2$ and $g_2 \in SO_n$, say $g_1 = \left( \begin{smallmatrix} \cos(t) & -\sin(t) \\ \sin(t) & \cos(t) \end{smallmatrix} \right)$ with $t \in \mathbb{R}$, so that ${\rm {Ad}}(g_1)$ corresponds to multiplication with the complex number $\mu = e^{it}$, we have \[ C_0 \circ {\rm {Ad}}(g) = \mu^{-2} {\rm {Ad}}(g) \circ C_0 . \] It follows that we have a circle of real structures \[ \{\cos(\varphi)C_0 + \sin(\varphi)JC_0 : \varphi \in \mathbb{R} \} . \] This set is ${\rm {Ad}}(K)$-invariant and therefore generates an ${\rm {Ad}}(G)$-invariant $S^1$-subbundle ${\mathfrak{A}}_0$ of the endomorphism bundle ${\rm {End}}(T{Q^n}^*)$, consisting of real structures (or conjugations) on the tangent spaces of ${Q^n}^*$. This $S^1$-bundle naturally extends to an ${\rm {Ad}}(G)$-invariant vector subbundle ${\mathfrak{A}}$ of ${\rm {End}}(T{Q^n}^*)$ with ${\rm {rk}}({\mathfrak{A}}) = 2$, which is parallel with respect to the induced connection on ${\rm {End}}(T{Q^n}^*)$. For any real structure $C \in {\mathfrak{A}}_0$ the tangent line to the fibre of ${\mathfrak{A}}$ through $C$ is spanned by $JC$. For every $p \in {Q^n}^*$ and real structure $C \in {\mathfrak{A}}_p$ we have an orthogonal decomposition \[ T_p{Q^n}^* = V(C) \oplus JV(C) \] into two totally real subspaces of $T_p{Q^n}^*$. Here $V(C)$ and $JV(C)$ are the $(+1)$- and $(-1)$-eigenspaces of $C$, respectively. By construction, we have \[ V(C_0) = \left\{ \begin{pmatrix} 0 & 0 & u_1 & \cdots & u_n \\ 0 & 0 & 0 & \cdots & 0 \\ u_1 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ u_n & 0 & 0 & \cdots & 0 \end{pmatrix} : u \in \mathbb{R}^n \right\} \] and \[ JV(C_0) = \left\{ \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & v_1 & \cdots & v_n \\ 0 & v_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & v_n & 0 & \cdots & 0 \end{pmatrix} : v \in \mathbb{R}^n \right\}. \] For \[ C = \cos(\varphi)C_0 + \sin(\varphi)JC_0 \] and $u \in V(C_0)$ we have \begin{align*} & C(\cos(\varphi/2)u + \sin(\varphi/2)Ju) \\ & = \cos(\varphi/2)Cu + \sin(\varphi/2)CJu \\ & = \cos(\varphi/2)Cu - \sin(\varphi/2)JCu \\ & = \cos(\varphi/2)(\cos(\varphi)C_0 + \sin(\varphi)JC_0)u - \sin(\varphi/2)J(\cos(\varphi)C_0 + \sin(\varphi)JC_0)u \\ & = (\cos(\varphi/2)\cos(\varphi) + \sin(\varphi/2)\sin(\varphi))u + (\cos(\varphi/2)\sin(\varphi) - \sin(\varphi/2)\cos(\varphi))Ju \\ & = \cos(\varphi/2)u + \sin(\varphi/2)Ju.
\end{align*} It follows that \[ V(C) = \{ \cos(\varphi/2)u + \sin(\varphi/2)Ju : u \in V(C_0) \}. \] Geometrically this tells us that, if we rotate a real structure by angle $\varphi$, then the $\pm 1$-eigenspaces rotate by angle $\varphi/2$. The Riemannian metric $g$, the K\"{a}hler structure $J$ and a real structure $C$ on ${Q^n}^*$ can be used to give an explicit expression of the Riemannian curvature tensor $\bar{R}$ of $({Q^n}^*,g)$ (see \cite{Re96} and use duality). More precisely, we have \begin{align*} \bar{R}(X,Y)Z & = g(X,Z)Y - g(Y,Z)X + g(JX,Z)JY - g(JY,Z)JX + 2g(JX,Y)JZ \\ & \qquad + g(CX,Z)CY - g(CY,Z)CX + g(JCX,Z)JCY - g(JCY,Z)JCX \end{align*} for all $X,Y,Z \in {\mathfrak{X}}({Q^n}^*)$, where $C$ is an arbitrary real structure in ${\mathfrak{A}}_0$. For every non-zero tangent vector $v \in {\mathfrak{p}} \cong T_o{Q^n}^*$ there exists a maximal abelian subspace ${\mathfrak{a}} \subset {\mathfrak{p}}$ with $v \in {\mathfrak{a}}$. If ${\mathfrak{a}}$ is unique, then $v$ is said to be a regular tangent vector, otherwise $v$ is said to be a singular tangent vector. From the explicit expression of the Riemannian curvature tensor it is straightforward to find the singular tangent vectors of ${Q^n}^*$. There are exactly two types of singular tangent vectors $v \in T_o{Q^n}^*$, which can be characterized as follows: \begin{enumerate} \item[(i)] If there exists a real structure $C \in {\mathfrak{A}}_0$ such that $v \in V(C)$, then $v$ is singular. Such a singular tangent vector is called ${\mathfrak{A}}$-principal. \item[(ii)] If there exist a real structure $C \in {\mathfrak{A}}_0$ and orthonormal vectors $u,w \in V(C)$ such that $\frac{v}{||v||} = \frac{1}{\sqrt{2}}(u+Jw)$, then $v$ is singular. Such a singular tangent vector is called ${\mathfrak{A}}$-isotropic. \end{enumerate} For every unit tangent vector $v \in T_o{Q^n}^*$ there exist a real structure $C \in {\mathfrak{A}}_0$ and orthonormal vectors $u,w \in V(C)$ such that \[ v = \cos(t)u + \sin(t)Jw \] for some $t \in [0,\frac{\pi}{4}]$. The singular tangent vectors correspond to the boundary values $t = 0$ and $t = \frac{\pi}{4}$. Let $v$ be a unit tangent vector of ${Q^n}^*$ and consider the Jacobi operator $\bar{R}_v$ defined by \[ \bar{R}_v X = \bar{R}(X,v)v. \] We have \[ \bar{R}_v X = -X + g(X,v)v - 3g(X,Jv)Jv + g(X,Cv)Cv - g(Cv,v)CX + g(X,JCv)JCv . \] By a straightforward computation we obtain the eigenvalues and eigenspaces of $\bar{R}_v$ (see also \cite{Re96}). The eigenvalues are \[ 0,-1+\cos(2t),-1-\cos(2t),-2+2\sin(2t),-2-2\sin(2t) \] with corresponding eigenspaces \begin{align*} E_0 & = \mathbb{R} u \oplus \mathbb{R} w \cong \mathbb{R}^2, \\ E_{-1+\cos(2t)} & = V(C) \ominus (\mathbb{R} u \oplus \mathbb{R} w) \cong \mathbb{R}^{n-2}, \\ E_{-1-\cos(2t)} & = JV(C) \ominus J(\mathbb{R} u \oplus \mathbb{R} w) \cong \mathbb{R}^{n-2} ,\\ E_{-2+2\sin(2t)} & = \mathbb{R}(Ju+w) \cong \mathbb{R}, \\ E_{-2-2\sin(2t)} & = \mathbb{R}(Ju-w) \cong \mathbb{R} , \end{align*} where $C$ is a suitable real structure and $u,w \in V(C)$ are orthonormal vectors such that \[ v = \cos(t)u + \sin(t)Jw \] for some $t \in [0,\frac{\pi}{4}]$. The five eigenvalues are distinct unless $t \in \{0,\tan^{-1}(\frac{1}{2}),\frac{\pi}{4}\}$. If $t = 0$, then $Cv = v$ and hence $v$ is ${\mathfrak{A}}$-principal. 
In this case $\bar{R}_v$ has two eigenvalues $0,-2$ with corresponding eigenspaces \begin{align*} E_0 & = \mathbb{R} v \oplus J(V(C)\ominus \mathbb{R} v) \cong \mathbb{R}^n,\\ E_{-2} & = \mathbb{R} Jv \oplus (V(C)\ominus \mathbb{R} v) \cong \mathbb{R}^n. \end{align*} If $t = \frac{\pi}{4}$, then $v = \frac{1}{\sqrt{2}}(u+Jw)$ and hence $v$ is ${\mathfrak{A}}$-isotropic. In this case $\bar{R}_v$ has three eigenvalues $0,-1,-4$ with corresponding eigenspaces \begin{align*} E_0 & = \mathbb{R} v \oplus \mathbb{R} Cv \oplus \mathbb{R} JCv = \mathbb{R} v \oplus \mathbb{C} Cv \cong \mathbb{R} \oplus \mathbb{C},\\ E_{-1} & = {\mathfrak{p}} \ominus (\mathbb{C} v \oplus \mathbb{C} Cv) \cong \mathbb{C}^{n-2}, \\ E_{-4} & = \mathbb{R} Jv \cong \mathbb{R}. \end{align*} If $t = \tan^{-1}(\frac{1}{2})$, then $\cos(t) = \frac{2}{\sqrt{5}}$, $\sin(t) = \frac{1}{\sqrt{5}}$, and hence $\cos(2t) = \frac{3}{5}$ and $\sin(2t) = \frac{4}{5}$. In this case $\bar{R}_v$ has four eigenvalues $0,-\frac{2}{5},-\frac{8}{5},-\frac{18}{5}$. Let ${\mathfrak{a}}$ be a maximal abelian subspace of ${\mathfrak{p}}$ and ${\mathfrak{a}}^*$ be the dual vector space of ${\mathfrak{a}}$. For each $\alpha \in {\mathfrak{a}}^*$ we define \[ {\mathfrak{g}}_\alpha = \{ X \in {\mathfrak{g}} : {\rm {ad}}(H)X = \alpha(H)X \mbox{ for all } H \in {\mathfrak{a}}\}. \] If $\alpha \neq 0$ and ${\mathfrak{g}}_\alpha \neq \{0\}$, then $\alpha$ is a restricted root and ${\mathfrak{g}}_\alpha$ is a restricted root space. Let $\Sigma \subset {\mathfrak{a}}^*$ be the set of restricted roots. The restricted root spaces provide a restricted root space decomposition \[ {\mathfrak{g}} = {\mathfrak{g}}_0 \oplus \left( \bigoplus_{\alpha \in \Sigma} {\mathfrak{g}}_\alpha \right) \] of ${\mathfrak{g}}$, where ${\mathfrak{g}}_0 = {\mathfrak{k}}_0 \oplus {\mathfrak{a}}$ and ${\mathfrak{k}}_0 \cong {\mathfrak{s}}{\mathfrak{o}}_{n-2}$ is the centralizer of ${\mathfrak{a}}$ in ${\mathfrak{k}}$. The restricted root spaces ${\mathfrak{g}}_\alpha$ and ${\mathfrak{g}}_0$ are pairwise orthogonal with respect to $B_\theta$. The corresponding restricted root system is of type $B_2$. We choose a set $\Lambda = \{\alpha_1,\alpha_2\}$ of simple roots of $\Sigma$ such that $\alpha_1$ is the longer root of the two simple roots, and denote by $\Sigma^+$ the resulting set of positive restricted roots. If we write, as usual, $\alpha_1 = \epsilon_1 - \epsilon_2$ and $\alpha_2 = \epsilon_2$, the positive restricted roots are \[ \alpha_1 = \epsilon_1 - \epsilon_2,\ \alpha_2 = \epsilon_2,\ \alpha_1 + \alpha_2 = \epsilon_1,\ \alpha_1 + 2\alpha_2 = \epsilon_1 + \epsilon_2. \] The multiplicities of the two long roots $\alpha_1$ and $\alpha_1 + 2\alpha_2$ are equal to $1$, and the multiplicities of the two short roots $\alpha_2$ and $\alpha_1 + \alpha_2$ are equal to $n-2$, respectively. 
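As a cross-check of the $B_2$ structure, one can compute the lengths and angles of the simple roots with respect to the inner product on ${\mathfrak{a}}^*$ induced by $\langle \cdot , \cdot \rangle$, that is, $\langle \alpha , \beta \rangle := \langle H_\alpha , H_\beta \rangle$ with the root vectors $H_\alpha$ determined at the end of this section. One finds \[ |\alpha_1|^2 = |\alpha_1+2\alpha_2|^2 = 4, \qquad |\alpha_2|^2 = |\alpha_1+\alpha_2|^2 = 2, \qquad \langle \alpha_1 , \alpha_2 \rangle = -2, \] so the two simple roots have length ratio $\sqrt{2}$ and enclose an angle of $\frac{3\pi}{4}$, as expected for a root system of type $B_2$.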
Explicitly, the positive restricted root spaces and ${\mathfrak{g}}_0$ are: \begin{eqnarray*} {\mathfrak{g}}_0 & = & \left\{ \begin{pmatrix} 0 & 0 & a_1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & a_2 & 0 & \cdots & 0 \\ a_1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & a_2 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & & & \\ \vdots & \vdots & \vdots & \vdots & & B & \\ 0 & 0 & 0 & 0 & & & \end{pmatrix} : a_1,a_2 \in \mathbb{R},\ B \in {\mathfrak{s}}{\mathfrak{o}}_{n-2} \right\} \cong \mathbb{R}^2 \oplus {\mathfrak{s}}{\mathfrak{o}}_{n-2},\\ {\mathfrak{g}}_{\alpha_1+\alpha_2} & = & \left\{ \begin{pmatrix} 0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ v_1 & 0 & -v_1 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ v_{n-2} & 0 & -v_{n-2} & 0 & 0 & \cdots & 0 \end{pmatrix} : v \in \mathbb{R}^{n-2} \right\} \cong \mathbb{R}^{n-2}, \\ {\mathfrak{g}}_{\alpha_2} & = & \left\{ \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & w_1 & \cdots & w_{n-2} \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & w_1 & \cdots & w_{n-2} \\ 0 & w_1 & 0 & -w_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & w_{n-2} & 0 & -w_{n-2} & 0 & \cdots & 0 \end{pmatrix} : w \in \mathbb{R}^{n-2} \right\} \cong \mathbb{R}^{n-2}, \\ {\mathfrak{g}}_{\alpha_1} & = & \left\{ \begin{pmatrix} 0 & x & 0 & x & 0 & \cdots & 0 \\ -x & 0 & x & 0 & 0 & \cdots & 0 \\ 0 & x & 0 & x & 0 & \cdots & 0 \\ x & 0 & -x & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} : x \in \mathbb{R} \right\} \cong \mathbb{R} ,\\ {\mathfrak{g}}_{\alpha_1+2\alpha_2} & = & \left\{ \begin{pmatrix} 0 & y & 0 & -y & 0 & \cdots & 0 \\ -y & 0 & y & 0 & 0 & \cdots & 0 \\ 0 & y & 0 & -y & 0 & \cdots & 0 \\ -y & 0 & y & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} : y \in \mathbb{R} \right\} \cong \mathbb{R}. \end{eqnarray*} The negative restricted root spaces can be computed easily from the positive restricted root spaces using the fact that ${\mathfrak{g}}_{-\alpha} = \theta({\mathfrak{g}}_\alpha)$. For each $\alpha \in \Sigma$ we define \begin{equation*} {\mathfrak{k}}_\alpha = {\mathfrak{k}} \cap ({\mathfrak{g}}_\alpha \oplus {\mathfrak{g}}_{-\alpha}) ,\ {\mathfrak{p}}_\alpha = {\mathfrak{p}} \cap ({\mathfrak{g}}_\alpha \oplus {\mathfrak{g}}_{-\alpha}). \end{equation*} Then we have ${\mathfrak{p}}_\alpha = {\mathfrak{p}}_{-\alpha}$, ${\mathfrak{k}}_\alpha = {\mathfrak{k}}_{-\alpha}$ and ${\mathfrak{p}}_\alpha \oplus {\mathfrak{k}}_\alpha = {\mathfrak{g}}_\alpha \oplus {\mathfrak{g}}_{-\alpha}$ for all $\alpha \in \Sigma$. 
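As a consistency check, note that the multiplicities add up correctly: since the map $\hat{X} \mapsto \frac{1}{2}(\hat{X} - \theta(\hat{X}))$ is an isomorphism from ${\mathfrak{g}}_\alpha$ onto ${\mathfrak{p}}_\alpha$ and ${\mathfrak{p}} = {\mathfrak{a}} \oplus \bigl( \bigoplus_{\alpha \in \Sigma^+} {\mathfrak{p}}_\alpha \bigr)$, we get \[ \dim {\mathfrak{p}} = 2 + 1 + 1 + (n-2) + (n-2) = 2n = \dim_{\mathbb{R}} T_o{Q^n}^*, \] as it must be.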
We define a nilpotent subalgebra ${\mathfrak{n}}$ of ${\mathfrak{g}}$ by \begin{align*} {\mathfrak{n}} & = {\mathfrak{g}}_{\alpha_1} \oplus {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1 + \alpha_2} \oplus {\mathfrak{g}}_{\alpha_1 + 2\alpha_2} \\ & = \left\{ \begin{pmatrix} 0 & x+y & 0 & x-y & v_1 & \cdots & v_{n-2} \\ -x-y & 0 & x+y & 0 & w_1 & \cdots & w_{n-2} \\ 0 & x+y & 0 & x-y & v_1 & \cdots & v_{n-2} \\ x-y & 0 & -x+y & 0 & w_1 & \cdots & w_{n-2} \\ v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0 \end{pmatrix} : \begin{array}{l} x,y \in \mathbb{R},\\ v,w \in \mathbb{R}^{n-2} \end{array} \right\}. \end{align*} Then ${\mathfrak{g}} = {\mathfrak{k}} \oplus {\mathfrak{a}} \oplus {\mathfrak{n}}$ is an Iwasawa decomposition of ${\mathfrak{g}}$, which induces a corresponding Iwasawa decomposition $G = KAN$ of $G$. The subalgebra \[ {\mathfrak{a}} \oplus {\mathfrak{n}} = \left\{ \begin{pmatrix} 0 & x+y & a_1 & x-y & v_1 & \cdots & v_{n-2} \\ -x-y & 0 & x+y & a_2 & w_1 & \cdots & w_{n-2} \\ a_1 & x+y & 0 & x-y & v_1 & \cdots & v_{n-2} \\ x-y & a_2 & -x+y & 0 & w_1 & \cdots & w_{n-2} \\ v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0 \end{pmatrix} : \begin{array}{l} a_1,a_2,x,y \in \mathbb{R},\\ v,w \in \mathbb{R}^{n-2} \end{array} \right\} \] of ${\mathfrak{g}}$ is solvable and the corresponding connected closed subgroup $AN$ of $G$ with Lie algebra ${\mathfrak{a}} \oplus {\mathfrak{n}}$ is solvable, simply connected, and acts simply transitively on ${Q^n}^*$. Then $({Q^n}^*,g)$ is isometric to the solvable Lie group $AN$ equipped with the left-invariant Riemannian metric $\langle \cdot , \cdot \rangle$ defined by \begin{align*} \langle H_1 + \hat{X}_1 , H_2 + \hat{X}_2 \rangle & = -\frac{1}{4n}B(H_1,\theta(H_2)) - \frac{1}{8n}B(\hat{X}_1,\theta(\hat{X}_2)) \\ & = -\frac{1}{4}{\rm {tr}}(H_1\theta(H_2)) - \frac{1}{8}{\rm {tr}}(\hat{X}_1\theta(\hat{X}_2)) \\ & = \frac{1}{4}{\rm {tr}}(H_1H_2) - \frac{1}{8}{\rm {tr}}(\hat{X}_1\theta(\hat{X}_2)) \end{align*} with $H_1,H_2 \in {\mathfrak{a}}$ and $\hat{X}_1,\hat{X}_2 \in {\mathfrak{n}}$. For each $\hat{X} \in {\mathfrak{n}}$, the orthogonal projection $X$ onto ${\mathfrak{p}}$ with respect to $B_\theta$ is \[ X = \frac{1}{2}(\hat{X} - \theta(\hat{X})) \in {\mathfrak{p}}. \] By construction, we have $\langle \hat{X} , \hat{X} \rangle = g(X,X)$ and \[ \langle H_1 + \hat{X}_1 , H_2 + \hat{X}_2 \rangle = g(H_1 + X_1,H_2 + X_2). \] Let $H^1,H^2 \in {\mathfrak{a}}$ be the dual basis of $\alpha_1,\alpha_2 \in {\mathfrak{a}}^\ast$ defined by $\alpha_\nu(H^\mu) = \delta_{\nu\mu}$. Since $\alpha_1 = \epsilon_1 - \epsilon_2$ and $\alpha_2 = \epsilon_2$, we have \[ H^1 = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \end{pmatrix} ,\ H^2 = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & 0 & \cdots & 0 \\ 1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \end{pmatrix}. 
\] Note that \[ \langle H^1 , H^1 \rangle = \frac{1}{4}{\rm {tr}}(H^1H^1) = \frac{1}{2},\ \langle H^2 , H^2 \rangle = \frac{1}{4}{\rm {tr}}(H^2H^2) = 1. \] For each $\alpha$ in $\Sigma$ we define the root vector $H_\alpha \in {\mathfrak{a}}$ of $\alpha$ by $\langle H_\alpha , H \rangle = \alpha(H)$ for all $H \in {\mathfrak{a}}$. Note that \[ [H,X_\alpha] = {\rm {ad}}(H)X_\alpha = \alpha(H)X_\alpha = \langle H_\alpha , H \rangle X_\alpha \] for all $H \in {\mathfrak{a}}$ and $X_\alpha \in {\mathfrak{g}}_\alpha$. If we put \[ H_{\alpha} = \begin{pmatrix} 0 & 0 & x_1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & x_2 & 0 & \cdots & 0 \\ x_1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & x_2 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \end{pmatrix} ,\ H = \begin{pmatrix} 0 & 0 & a_1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & a_2 & 0 & \cdots & 0 \\ a_1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & a_2 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \end{pmatrix}, \] then \[ \langle H_\alpha , H \rangle = \frac{1}{4}{\rm {tr}}(H_\alpha H) = \frac{1}{2}(x_1a_1 + x_2a_2). \] It follows that \[ \begin{matrix} H_{\alpha_1} & = & \begin{pmatrix} 0 & 0 & 2 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & -2 & 0 & \cdots & 0 \\ 2 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & -2 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} & , & H_{\alpha_1+2\alpha_2} & = & \begin{pmatrix} 0 & 0 & 2 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 2 & 0 & \cdots & 0 \\ 2 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 2 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} & ,\\ & & & & & & & \\ H_{\alpha_2} & = & \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 2 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 2 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} & , & H_{\alpha_1+\alpha_2} & = & \begin{pmatrix} 0 & 0 & 2 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 2 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} & . \end{matrix} \] We have \begin{align*} \langle H_{\alpha_1} , H_{\alpha_1} \rangle & = \frac{1}{4}{\rm {tr}}(H_{\alpha_1}H_{\alpha_1}) = 4,\\ \langle H_{\alpha_1+2\alpha_2} , H_{\alpha_1+2\alpha_2} \rangle & = \frac{1}{4}{\rm {tr}}(H_{\alpha_1+2\alpha_2}H_{\alpha_1+2\alpha_2}) = 4,\\ \langle H_{\alpha_2} , H_{\alpha_2} \rangle & = \frac{1}{4}{\rm {tr}}(H_{\alpha_2}H_{\alpha_2}) = 2,\\ \langle H_{\alpha_1+\alpha_2} , H_{\alpha_1+\alpha_2} \rangle & = \frac{1}{4}{\rm {tr}}(H_{\alpha_1+\alpha_2} H_{\alpha_1+\alpha_2} ) = 2, \end{align*} and \[ 2H^1 = H_{\alpha_1+\alpha_2} \qquad \mbox{and} \qquad 2H^2 = H_{\alpha_1+2\alpha_2}. \] \section{The homogeneous complex hypersurface} \label{hcs} In this section we construct a homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ in $({Q^n}^*,g)$ and compute its shape operator. 
We define \[ {\mathfrak{h}}^{2n-3} = {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2}. \] It is easy to verify that ${\mathfrak{h}}^{2n-3}$ is a nilpotent subalgebra of ${\mathfrak{n}}$ and isomorphic to the $(2n-3)$-dimensional Heisenberg algebra with $1$-dimensional center. We have \[ [H^2,\hat{X}] = \begin{cases} \hat{X} &, \mbox{ if } \hat{X} \in {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+\alpha_2}, \\ 2\hat{X} &, \mbox{ if } \hat{X} \in {\mathfrak{g}}_{\alpha_1+2\alpha_2}. \end{cases} \] It follows that \[ {\mathfrak{d}} = \mathbb{R} H^2 \oplus {\mathfrak{h}}^{2n-3} = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2} \] is a solvable subalgebra of ${\mathfrak{a}} \oplus {\mathfrak{n}}$. (Note that $\mathbb{R} H^2$ denotes here the real span of $H^2$ and not the real hyperbolic plane!) In fact, this subalgebra is the standard solvable extension of the Heisenberg algebra ${\mathfrak{h}}^{2n-3}$ and isomorphic to the solvable Lie algebra of the solvable part of the Iwasawa decomposition of the isometry group of the complex hyperbolic space $\mathbb{C} H^{n-1}(-4)$ (see e.g.\ \cite{BTV95} or \cite{Ta08}). This construction leads to an isometric embedding $\hat{P}^{n-1}$ of the $(n-1)$-dimensional complex hyperbolic space $\mathbb{C} H^{n-1}(-4)$ with constant holomorphic sectional curvature $-4$ into $(AN,\langle \cdot , \cdot \rangle)$. By construction, $\hat{P}^{n-1}$ is a homogeneous submanifold of $(AN,\langle \cdot , \cdot \rangle)$. Let $\hat{J}$ be the complex structure on $(AN,\langle \cdot , \cdot \rangle)$ corresponding to the complex structure $J$ on $({Q^n}^*,g)$. We have $\hat{J}{\mathfrak{g}}_{\alpha_2} = {\mathfrak{g}}_{\alpha_1+\alpha_2}$ and $\hat{J}H^2 \in {\mathfrak{g}}_{\alpha_1+2\alpha_2}$, which shows that the tangent space \[ T_o\hat{P}^{n-1} = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2} \] is a complex subspace of $T_oAN$. Since $AN$ is contained in the identity component $SO^o_{2,n}$ of the full isometry group of ${Q^n}^*$, it consists of holomorphic isometries, which implies that $\hat{P}^{n-1}$ is a complex submanifold of $(AN,\langle \cdot , \cdot \rangle)$. Altogether we conclude that the solvable subalgebra \[ {\mathfrak{d}} = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2} \] of ${\mathfrak{a}} \oplus {\mathfrak{n}}$ induces an isometric embedding $\hat{P}^{n-1}$ of the complex hyperbolic space $\mathbb{C} H^{n-1}(-4)$ with constant holomorphic sectional curvature $-4$ into $(AN,\langle \cdot , \cdot \rangle)$ as a homogeneous complex hypersurface. This induces an isometric embedding $P^{n-1}$ of the complex hyperbolic space $\mathbb{C} H^{n-1}(-4)$ with constant holomorphic sectional curvature $-4$ into $({Q^n}^*,g)$ as a homogeneous complex hypersurface. \begin{re} \rm Smyth \cite{Sm68} proved that every homogeneous complex hypersurface in the complex hyperbolic space $\mathbb{C} H^n$ is a complex hyperbolic hyperplane $\mathbb{C} H^{n-1}$ embedded in $\mathbb{C} H^n$ as a totally geodesic submanifold. 
As we have just seen, up to congruency, there are at least two homogeneous complex hypersurfaces in the complex hyperbolic quadric ${Q^n}^*$, namely the complex hyperbolic quadric ${Q^{n-1}}^*$ and the complex hyperbolic space $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$. The first one is totally geodesic (see \cite{CN77} or \cite{Kl08} and use duality between Riemannian symmetric spaces of compact type and of non-compact type), the second one is not. The classification of the homogeneous complex hypersurfaces in the complex hyperbolic quadric ${Q^n}^*$ remains an open problem. \end{re} \medskip We now compute the shape operator $\hat{A}$ of $\hat{P}^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ in $(AN,\langle \cdot , \cdot \rangle)$. Let \[ \hat\zeta \in ({\mathfrak{a}} \ominus \mathbb{R} H^2) \oplus {\mathfrak{g}}_{\alpha_1} = \mathbb{R} H_{\alpha_1} \oplus {\mathfrak{g}}_{\alpha_1} \] be a unit normal vector of $\hat{P}^{n-1}$ at $o$. The Weingarten equation tells us that \[ \langle \hat{A}_{\hat{\zeta}} \hat{X} , \hat{Y} \rangle = - \langle \hat\nabla_{\hat{X}}\hat{\zeta} , \hat{Y} \rangle, \] where $\hat\nabla$ is the Levi-Civita covariant derivative of $(AN,\langle \cdot , \cdot \rangle)$ and $\hat{X},\hat{Y} \in {\mathfrak{d}}$. We consider $\hat{\zeta},\hat{X},\hat{Y}$ as left-invariant vector fields. Since $\langle \cdot , \cdot \rangle$ is a left-invariant Riemannian metric, the Koszul formula for $\hat\nabla$ implies \[ 2\langle \hat{A}_{\hat{\zeta}}\hat{X} , \hat{Y} \rangle = 2\langle \hat\nabla_{\hat{X}} \hat{Y}, \hat{\zeta} \rangle = \langle [\hat{X},\hat{Y}],\hat{\zeta} \rangle + \langle [\hat{\zeta},\hat{X}],\hat{Y} \rangle + \langle [\hat{\zeta},\hat{Y}],\hat{X} \rangle . \] Since ${\mathfrak{d}}$ is a subalgebra of ${\mathfrak{a}} \oplus {\mathfrak{n}}$, we have $[\hat{X},\hat{Y}] \in {\mathfrak{d}}$ and hence $\langle [\hat{X},\hat{Y}],\hat{\zeta} \rangle = 0$. Moreover, since ${\rm {ad}}(\hat{\zeta})^* = -{\rm {ad}}(\theta(\hat{\zeta}))$, we have \[ \langle [\hat{\zeta},\hat{Y}],\hat{X} \rangle = -\langle [\theta(\hat{\zeta}),\hat{X}],\hat{Y} \rangle. \] Altogether this implies \[ 2\langle \hat{A}_{\hat{\zeta}} \hat{X} , \hat{Y} \rangle = \langle [\hat{\zeta}-\theta(\hat{\zeta}),\hat{X}],\hat{Y} \rangle. \] Thus, the shape operator $\hat{A}_{\hat{\zeta}}$ of $\hat{P}^{n-1}$ is given by \[ \hat{A}_{\hat{\zeta}} \hat{X} = [\zeta,\hat{X}]_{{\mathfrak{d}}}, \] where \[ \zeta = \frac{1}{2}(\hat\zeta - \theta(\hat\zeta)) \in {\mathfrak{p}} \] is the orthogonal projection of $\hat{\zeta}$ onto ${\mathfrak{p}}$ and $[\, \cdot\ ]_{\mathfrak{d}}$ is the orthogonal projection onto ${\mathfrak{d}}$.
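The formula $\hat{A}_{\hat{\zeta}}\hat{X} = [\zeta,\hat{X}]_{{\mathfrak{d}}}$ is easy to evaluate explicitly. The following minimal numerical sketch (for $n = 4$, assuming Python with numpy is available) anticipates the unit normal vector $\hat\zeta = \frac{1}{2}H_{\alpha_1}$ and the explicit description of ${\mathfrak{d}}$ given in the next paragraph, and confirms the principal curvatures $0$, $1$, $-1$ derived below; in this case the bracket already lies in ${\mathfrak{d}}$, so no projection is needed. It is included only as a sanity check.
\begin{verbatim}
# Sanity check (not used below): evaluate the bracket formula for the shape
# operator of \hat{P}^{n-1} for n = 4, with zeta = (1/2) H_{alpha_1} and the
# generic element of d made explicit in the next paragraph.
import numpy as np

N = 6                               # n + 2 with n = 4
b, y = 0.8, -0.5                    # H_{alpha_1+2alpha_2} and g_{alpha_1+2alpha_2} parts
v = np.array([1.0, 2.0])            # g_{alpha_1+alpha_2} part
w = np.array([3.0, -1.0])           # g_{alpha_2} part

X = np.zeros((N, N))                # generic element of d
X[0, 1], X[0, 3], X[1, 0], X[1, 2] = y, -y, -y, y
X[2, 1], X[2, 3], X[3, 0], X[3, 2] = y, -y, -y, y
X[0, 2] = X[2, 0] = X[1, 3] = X[3, 1] = b
X[0, 4:] = X[2, 4:] = v
X[1, 4:] = X[3, 4:] = w
X[4:, 0], X[4:, 1], X[4:, 2], X[4:, 3] = v, w, -v, -w

Z = np.zeros((N, N))                # zeta = (1/2) H_{alpha_1}
Z[0, 2] = Z[2, 0] = 1.0
Z[1, 3] = Z[3, 1] = -1.0

AX = Z @ X - X @ Z                  # candidate for the shape operator image

E = np.zeros((N, N))                # expected image: +1 on the v-part, -1 on the
E[0, 4:] = E[2, 4:] = v             # w-part, 0 on the b- and y-parts
E[1, 4:] = E[3, 4:] = -w
E[4:, 0], E[4:, 1], E[4:, 2], E[4:, 3] = v, -w, -v, w

assert np.allclose(AX, E)
print("principal curvatures 0, +1, -1 of P^{n-1} confirmed for n = 4")
\end{verbatim}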
The normal space $\nu_o\hat{P}^{n-1}$ of $\hat{P}^{n-1}$ at the point $o$ is given by
\[ \nu_o\hat{P}^{n-1} = \mathbb{R} H_{\alpha_1} \oplus {\mathfrak{g}}_{\alpha_1} = \left\{ \begin{pmatrix}
0 & x & a & x & 0 & \cdots & 0 \\
-x & 0 & x & -a & 0 & \cdots & 0 \\
a & x & 0 & x & 0 & \cdots & 0 \\
x & -a & -x & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & 0 & 0 & \cdots & 0
\end{pmatrix} : a,x \in \mathbb{R} \right\} , \]
and the tangent space $T_o\hat{P}^{n-1}$ of $\hat{P}^{n-1}$ at the point $o$ is given by
\[ T_o\hat{P}^{n-1} = {\mathfrak{d}} = \left\{ \begin{pmatrix}
0 & y & b & -y & v_1 & \cdots & v_{n-2} \\
-y & 0 & y & b & w_1 & \cdots & w_{n-2} \\
b & y & 0 & -y & v_1 & \cdots & v_{n-2} \\
-y & b & y & 0 & w_1 & \cdots & w_{n-2} \\
v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0
\end{pmatrix} : \begin{array}{l} b,y \in \mathbb{R},\\ v,w \in \mathbb{R}^{n-2} \end{array} \right\} . \]
The vector $\hat\zeta = \frac{1}{2}H_{\alpha_1} \in {\mathfrak{a}}$ is a unit normal vector of $\hat{P}^{n-1}$ at $o$. We have
\[ \theta(\hat\zeta) = \frac{1}{2}\theta(H_{\alpha_1}) = -\frac{1}{2}H_{\alpha_1} = - \hat\zeta \]
and thus
\[ \zeta = \frac{1}{2}(\hat\zeta - \theta(\hat\zeta)) = \hat\zeta. \]
A straightforward matrix computation gives
\begin{eqnarray*}
& & \left[ \left( \begin{smallmatrix}
0 & 0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & -1 & 0 & \cdots & 0 \\
1 & 0 & 0 & 0 & 0 & \cdots & 0 \\
0 & -1 & 0 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & 0 & 0 & \cdots & 0
\end{smallmatrix} \right), \left( \begin{smallmatrix}
0 & y & b & -y & v_1 & \cdots & v_{n-2} \\
-y & 0 & y & b & w_1 & \cdots & w_{n-2} \\
b & y & 0 & -y & v_1 & \cdots & v_{n-2} \\
-y & b & y & 0 & w_1 & \cdots & w_{n-2} \\
v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0
\end{smallmatrix} \right) \right] \\
& & = \left( \begin{smallmatrix}
0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\
0 & 0 & 0 & 0 & -w_1 & \cdots & -w_{n-2} \\
0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\
0 & 0 & 0 & 0 & -w_1 & \cdots & -w_{n-2} \\
v_1 & -w_1 & -v_1 & w_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
v_{n-2} & -w_{n-2} & -v_{n-2} & w_{n-2} & 0 & \cdots & 0
\end{smallmatrix} \right) \in {\mathfrak{d}}.
\end{eqnarray*}
Since the latter matrix is in ${\mathfrak{d}}$, we conclude that
\[ \hat{A}_{\hat{\zeta}} \hat{X} = \begin{pmatrix}
0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\
0 & 0 & 0 & 0 & -w_1 & \cdots & -w_{n-2} \\
0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\
0 & 0 & 0 & 0 & -w_1 & \cdots & -w_{n-2} \\
v_1 & -w_1 & -v_1 & w_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
v_{n-2} & -w_{n-2} & -v_{n-2} & w_{n-2} & 0 & \cdots & 0
\end{pmatrix} \]
with
\[ \hat{X} = \begin{pmatrix}
0 & y & b & -y & v_1 & \cdots & v_{n-2} \\
-y & 0 & y & b & w_1 & \cdots & w_{n-2} \\
b & y & 0 & -y & v_1 & \cdots & v_{n-2} \\
-y & b & y & 0 & w_1 & \cdots & w_{n-2} \\
v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0
\end{pmatrix} \in T_o\hat{P}^{n-1}.
\] It follows that the principal curvatures of $\hat{P}^{n-1}$ with respect to the unit normal vector $\hat\zeta$ are $0$, $1$ and $-1$, with corresponding principal curvature spaces
\[ \hat{T}^{\hat\zeta}_0 = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2}\ ,\ \hat{T}^{\hat\zeta}_1 = {\mathfrak{g}}_{\alpha_1+\alpha_2}\ ,\ \hat{T}^{\hat\zeta}_{-1} = {\mathfrak{g}}_{\alpha_2}. \]
We now compute the shape operator of $\hat{P}^{n-1}$ at $o$ for other unit normal vectors. Since $\nu_o\hat{P}^{n-1}$ is $\hat{J}$-invariant, the vector $\hat{J}\hat\zeta \in {\mathfrak{g}}_{\alpha_1}$ is a unit normal vector of $\hat{P}^{n-1}$ at $o$. Moreover, $\hat\zeta,\hat{J}\hat\zeta$ is an orthonormal basis of the normal space $\nu_o\hat{P}^{n-1}$. Using a well-known formula for the shape operator of a complex submanifold of a K\"{a}hler manifold (see e.g.\ \cite{CR15}, Lemma 7.4), we have
\[ \hat{A}_{\hat{J}\hat\zeta} = \hat{J}\hat{A}_{\hat\zeta}. \]
Since every unit normal vector of $\hat{P}^{n-1}$ at $o$ is of the form
\[ \cos(\varphi)\hat\zeta + \sin(\varphi)\hat{J}\hat\zeta, \]
the shape operator $\hat{A}_{\hat{\zeta}}$ therefore completely determines the shape operator for every other unit normal vector of $\hat{P}^{n-1}$ at $o$. More precisely, we have
\[ \hat{A}_{\cos(\varphi)\hat\zeta + \sin(\varphi)\hat{J}\hat\zeta} = \cos(\varphi)\hat{A}_{\hat\zeta} + \sin(\varphi)\hat{J}\hat{A}_{\hat\zeta}. \]
This readily implies that the principal curvatures of $\hat{P}^{n-1}$ with respect to the unit normal vector $\cos(\varphi)\hat\zeta + \sin(\varphi)\hat{J}\hat\zeta$ are $0$, $1$ and $-1$, with corresponding principal curvature spaces
\begin{align*}
\hat{T}^{\cos(\varphi)\hat\zeta + \sin(\varphi)\hat{J}\hat\zeta}_0 & = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2},\\
\hat{T}^{\cos(\varphi)\hat\zeta + \sin(\varphi)\hat{J}\hat\zeta}_1 & = \{ \cos\left(\textstyle{\frac{\varphi}{2}}\right)\hat{X} + \sin\left(\textstyle{\frac{\varphi}{2}}\right)\hat{J}\hat{X} : \hat{X} \in {\mathfrak{g}}_{\alpha_1+\alpha_2}\},\\
\hat{T}^{\cos(\varphi)\hat\zeta + \sin(\varphi)\hat{J}\hat\zeta}_{-1} & = \{\sin\left(\textstyle{\frac{\varphi}{2}}\right)\hat{X} - \cos\left(\textstyle{\frac{\varphi}{2}}\right)\hat{J}\hat{X} : \hat{X} \in {\mathfrak{g}}_{\alpha_1+\alpha_2}\}.
\end{align*}
Using orthogonal projections onto ${\mathfrak{p}}$ we obtain the corresponding description of the shape operator $A$ of $P^{n-1}$ at $o$. Recall that
\[ \nu_o\hat{P}^{n-1} = \mathbb{R} H_{\alpha_1} \oplus {\mathfrak{g}}_{\alpha_1} . \]
The orthogonal projection of $\nu_o\hat{P}^{n-1}$ onto ${\mathfrak{p}}$ is
\[ \nu_oP^{n-1} = \mathbb{C} H_{\alpha_1} = \mathbb{R} H_{\alpha_1} \oplus {\mathfrak{p}}_{\alpha_1} = \left\{ \begin{pmatrix}
0 & 0 & a & x & 0 & \cdots & 0 \\
0 & 0 & x & -a & 0 & \cdots & 0 \\
a & x & 0 & 0 & 0 & \cdots & 0 \\
x & -a & 0 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & 0 & 0 & \cdots & 0
\end{pmatrix} : a,x \in \mathbb{R} \right\} . \]
The complex line $\mathbb{C} H_{\alpha_1}$ is a Lie triple system in ${\mathfrak{p}}$ and therefore determines a totally geodesic complex submanifold $B_1$ of ${Q^n}^*$. The (non-zero) tangent vectors of $B_1$ are ${\mathfrak{A}}$-isotropic, which implies that the sectional curvature of $B_1$ is equal to $-4$. Thus $B_1$ is isometric to the complex hyperbolic line $\mathbb{C} H^1(-4)$ of constant (holomorphic) sectional curvature $-4$.
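The Lie triple system property of $\mathbb{C} H_{\alpha_1}$, and of the analogous subspaces appearing later in this paper, can be confirmed by checking that all triple brackets of basis elements remain in the subspace. The following minimal numerical sketch (for $n = 4$, assuming Python with numpy is available) is included only as a sanity check.
\begin{verbatim}
# Sanity check (not used below): verify that C H_{alpha_1} = R H_{alpha_1} + p_{alpha_1}
# is a Lie triple system, i.e. that [[X,Y],Z] stays in the span, for n = 4.
import itertools
import numpy as np

N = 6                                             # n + 2 with n = 4
H1 = np.zeros((N, N))                             # H_{alpha_1}
H1[0, 2] = H1[2, 0] = 2.0
H1[1, 3] = H1[3, 1] = -2.0
P = np.zeros((N, N))                              # generator of p_{alpha_1}
P[0, 3] = P[3, 0] = P[1, 2] = P[2, 1] = 1.0

def br(A, B):
    return A @ B - B @ A

span = np.stack([H1.ravel(), P.ravel()], axis=1)  # basis of C H_{alpha_1}, as columns
for X, Y, Z in itertools.product([H1, P], repeat=3):
    T = br(br(X, Y), Z).ravel()
    coeff = np.linalg.lstsq(span, T, rcond=None)[0]
    assert np.allclose(span @ coeff, T)           # the triple bracket lies in the span
print("C H_{alpha_1} is a Lie triple system (checked for n = 4)")
\end{verbatim}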
We will encounter $B_1$ again later, where it appears in a horospherical decomposition of the complex hyperbolic quadric. We now apply the standard real structure $C_0$ to the normal space $\nu_oP^{n-1} = T_oB_1$, \begin{align*} C_0(T_oB_1) & = \left\{ \begin{pmatrix} 0 & 0 & a & x & 0 & \cdots & 0 \\ 0 & 0 & -x & a & 0 & \cdots & 0 \\ a & -x & 0 & 0 & 0 & \cdots & 0 \\ x & a & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} : a,x \in \mathbb{R} \right\} \\ & = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{p}}_{\alpha_1+2\alpha_2} = \mathbb{C} H_{\alpha_1+2\alpha_2} . \end{align*} Note that $C_0(T_oB_1) = C(T_oB_1)$ for any real structure $C$ at $o$ and therefore the construction is independent of the choice of real structure. The complex line $\mathbb{C} H_{\alpha_1+2\alpha_2}$ is also a Lie triple system in ${\mathfrak{p}}$ and determines a totally geodesic complex submanifold $\Sigma_1$ of ${Q^n}^*$. The (non-zero) tangent vectors of $\Sigma_1$ are also ${\mathfrak{A}}$-isotropic, which implies that the sectional curvature of $\Sigma_1$ is equal to $-4$. Thus $\Sigma_1$ is isometric to the complex hyperbolic line $\mathbb{C} H^1(-4)$ of constant (holomorphic) sectional curvature $-4$. The tangent space $T_o\Sigma_1$ is the kernel of the shape operator of the homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$. Since $\mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2}$ is a subalgebra of ${\mathfrak{d}}$, this implies geometrically that $\hat{P}^{n-1}$, and hence also $P^{n-1}$, is foliated by totally geodesic complex hyperbolic lines $\mathbb{C} H^1(-4)$ whose tangent spaces are obtained by rotating the normal spaces of $\hat{P}^{n-1}$ (resp.\ $P^{n-1}$) via a real structure $\hat{C}$ (resp.\ $C$). The Riemannian product $B_1 \times \Sigma_1 \cong \mathbb{C} H^1(-4) \times \mathbb{C} H^1(-4)$ is isometric to the complex hyperbolic quadric ${Q^2}^*$ and describes the standard isometric embedding of ${Q^2}^*$ into ${Q^n}^*$. We have \[ [{\mathfrak{p}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{p}}_{\alpha_2},{\mathfrak{p}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{p}}_{\alpha_2}] \subset {\mathfrak{k}}_0 \oplus {\mathfrak{k}}_{\alpha_1} \oplus {\mathfrak{k}}_{\alpha_1+2\alpha_2} \] and \begin{align*} [{\mathfrak{k}}_0,{\mathfrak{p}}_{\alpha_1+\alpha_2}] & \subset {\mathfrak{p}}_{\alpha_1+\alpha_2},\\ [{\mathfrak{k}}_{\alpha_1},{\mathfrak{p}}_{\alpha_1+\alpha_2}] & \subset {\mathfrak{p}}_{\alpha_2},\\ [{\mathfrak{k}}_{\alpha_1+2\alpha_2},{\mathfrak{p}}_{\alpha_1+\alpha_2}] & \subset {\mathfrak{p}}_{\alpha_2},\\ [{\mathfrak{k}}_0,{\mathfrak{p}}_{\alpha_2}] & \subset {\mathfrak{p}}_{\alpha_2},\\ [{\mathfrak{k}}_{\alpha_1},{\mathfrak{p}}_{\alpha_2}] & \subset {\mathfrak{p}}_{\alpha_1+\alpha_2},\\ [{\mathfrak{k}}_{\alpha_1+2\alpha_2},{\mathfrak{p}}_{\alpha_2}] & \subset {\mathfrak{p}}_{\alpha_1+\alpha_2}. \end{align*} Altogether we conclude that ${\mathfrak{p}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{p}}_{\alpha_2}$ is a Lie triple system. It is easy to see that this Lie triple system is $J$-invariant. The complex totally geodesic submanifold of ${Q^n}^*$ generated by this Lie triple system is isometric to ${Q^{n-2}}^*$. However, the only complex totally geodesic submanifolds of a complex hyperbolic space are again complex hyperbolic spaces (see \cite{Wo63} and use duality). 
It follows that there exists a totally geodesic submanifold $\Sigma^{n-2} \cong \mathbb{C} H^{n-2}(-4)$ of $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ with $T_o\Sigma^{n-2} = {\mathfrak{p}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{p}}_{\alpha_2}$. We have \[ T_oP^{n-1} = T_o\Sigma_1 \oplus T_o\Sigma^{n-2},\ \nu_oP^{n-1} = T_oB_1. \] The tangent space $T_o\Sigma^{n-2} = {\mathfrak{p}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{p}}_{\alpha_2}$ and the normal space $\nu_o\Sigma^{n-2} = {\mathfrak{a}} \oplus {\mathfrak{p}}_{\alpha_1+2\alpha_2} \oplus {\mathfrak{p}}_{\alpha_1}$ are Lie triple systems in ${\mathfrak{p}}$. We summarize the previous discussion in the following theorem. \begin{thm} \label{ghchP} There exists a homogeneous complex hypersurface $P^{n-1}$ in $({Q^n}^*,g)$ which is isometric to the complex hyperbolic space $\mathbb{C} H^{n-1}(-4)$ of constant holomorphic sectional curvature $-4$. In terms of root spaces and root vectors, the tangent space and normal space of $P^{n-1}$ at $o$ are \[ T_oP^{n-1} = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{p}}_{\alpha_1+2\alpha_2} \oplus {\mathfrak{p}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{p}}_{\alpha_2},\ \nu_oP^{n-1} = \mathbb{R} H_{\alpha_1} \oplus {\mathfrak{p}}_{\alpha_1}. \] The normal space $\nu_oP^{n-1}$ is a Lie triple system and the totally geodesic submanifold $B_1$ of ${Q^n}^*$ generated by this Lie triple system is isometric to a complex hyperbolic line $\mathbb{C} H^1(-4)$ of constant (holomorphic) sectional curvature $-4$. The (non-zero) tangent vectors of $B_1$ are ${\mathfrak{A}}$-isotropic. In particular, the (non-zero) normal vectors of $P^{n-1}$ are ${\mathfrak{A}}$-isotropic singular tangent vectors of ${Q^n}^*$. The tangent space $T_oP^{n-1}$ decomposes orthogonally into \[ T_oP^{n-1} = C(\nu_oP^{n-1}) \oplus ({\mathfrak{p}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{p}}_{\alpha_2}), \] where $C$ is any real structure in ${\mathfrak{A}}_0$ at $o$. The subspace $C(\nu_oP^{n-1})$ is a Lie triple system and the totally geodesic submanifold $\Sigma_1$ of ${Q^n}^*$ generated by this Lie triple system is isometric to a complex hyperbolic line $\mathbb{C} H^1(-4)$ of constant (holomorphic) sectional curvature $-4$. The (non-zero) tangent vectors of $\Sigma_1$ are ${\mathfrak{A}}$-isotropic. The subspace ${\mathfrak{p}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{p}}_{\alpha_2}$ is a Lie triple system in ${\mathfrak{p}}$ and a complex subspace of $T_oP^{n-1}$. The totally geodesic submanifold of ${Q^n}^*$ generated by this Lie triple system is isometric to the complex hyperbolic quadric ${Q^{n-2}}^*$ and the totally geodesic submanifold of $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ generated by this complex subspace is isometric to the complex hyperbolic space $\mathbb{C} H^{n-2}(-4)$. Let $\zeta \in \nu_oP^{n-1}$ be a unit normal vector of $P^{n-1}$.
Then $\zeta$ is of the form \[ \zeta = \frac{1}{2}\cos(\varphi)H_{\alpha_1} + \frac{1}{2}\sin(\varphi)JH_{\alpha_1} \] and the principal curvatures of $P^{n-1}$ with respect to $\zeta$ are $0,1,-1$ with corresponding principal curvature spaces \begin{align*} T^\zeta_0 & = C(\nu_oP^{n-1}) = T_o\Sigma_1,\\ T^\zeta_1 & = \{ \cos\left(\textstyle{\frac{\varphi}{2}}\right)X + \sin\left(\textstyle{\frac{\varphi}{2}}\right)JX : X \in {\mathfrak{p}}_{\alpha_1+\alpha_2}\} \subset V(C), \\ T^\zeta_{-1} & = \{\sin\left(\textstyle{\frac{\varphi}{2}}\right)X- \cos\left(\textstyle{\frac{\varphi}{2}}\right)JX : X \in {\mathfrak{p}}_{\alpha_1+\alpha_2}\} \subset JV(C), \end{align*} where $C = \cos(\varphi)C_0 + \sin(\varphi)JC_0$. The $0$-eigenspace is independent of the choice of unit normal vector $\zeta$ and coincides with the kernel $T_0$ of the shape operator of $P^{n-1}$. \end{thm} Let $M$ be a submanifold of a Riemannian manifold $\bar{M}$ and $\zeta \in \nu_pM$ be a normal vector of $M$. Consider the Jacobi operator $\bar{R}_\zeta = \bar{R}(\cdot,\zeta)\zeta : T_p\bar{M} \to T_p\bar{M}$. If $\bar{R}_\zeta(T_pM) \subseteq T_pM$, then the restriction $K_\zeta$ of $\bar{R}_\zeta$ to $T_pM$ is a self-adjoint endomorphism of $T_pM$, the so-called normal Jacobi operator of $M$ with respect to $\zeta$. The family $K = (K_\zeta)_{\zeta \in \nu M}$ is called the normal Jacobi operator of $M$. A submanifold $M$ of a Riemannian manifold $\bar{M}$ is curvature-adapted if for every normal vector $\zeta \in \nu_pM$, $p \in M$, the following two conditions are satisfied: \begin{enumerate} \item[(i)] $\bar{R}_\zeta(T_pM) \subseteq T_pM$; \item[(ii)] the normal Jacobi operator $K_\zeta$ and the shape operator $A_\zeta$ of $M$ are simultaneously diagonalizable, that is, \[ K_\zeta A_\zeta = A_\zeta K_\zeta. \] \end{enumerate} Since $\bar{R}_{\lambda\zeta} = \lambda^2 \bar{R}_\zeta$ for all $\lambda > 0$, it suffices to check conditions (i) and (ii) only for unit normal vectors. Curvature-adapted submanifolds were introduced in \cite{BV92}. They were also studied by Gray in \cite{G04} using the notion of compatible submanifolds. Curvature-adapted submanifolds form a very useful class of submanifolds in the context of focal sets and tubes. \begin{cor} \label{Pcurvad} The homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ in $({Q^n}^*,g)$ is curvature-adapted. \end{cor} \begin{proof} Let $\zeta$ be a unit normal vector of $P^{n-1}$ at $o$. Then $\zeta$ is an ${\mathfrak{A}}$-isotropic singular tangent vector of ${Q^n}^*$. We already computed the eigenvalues and eigenspaces of the Jacobi operator $\bar{R}_\zeta$ in Section \ref{tchq}. It follows from this that $\bar{R}_\zeta$ has three eigenvalues $0,-1,-4$ with corresponding eigenspaces \[ E^\zeta_0 = \mathbb{R} \zeta \oplus \mathbb{C} C_0\zeta\ ,\ E^\zeta_{-1} = {\mathfrak{p}} \ominus (\mathbb{C} \zeta \oplus \mathbb{C} C_0\zeta)\ ,\ E^\zeta_{-4} = \mathbb{R} J\zeta. \] Note that $E^\zeta_{-1}$ is independent of the choice of the unit normal vector $\zeta$ and hence we can denote this space by $E_{-1}$. The tangent space $T_oP^{n-1}$ is given by \[ T_oP^{n-1} = C_0(\nu_oP^{n-1}) \oplus E_{-1}. \] From Theorem \ref{ghchP} we see that $T_0^\zeta = T_0 = C_0(\nu_oP^{n-1}) = \mathbb{C} C_0\zeta \subset E^\zeta_0$ and $E_{-1} = T^\zeta_1 \oplus T^\zeta_{-1}$, which implies that $K_\zeta$ and $A_\zeta$ commute. Since this holds for all unit normal vectors $\zeta$, it follows that $P^{n-1}$ is curvature-adapted. 
\end{proof} \section{Tubes around the homogeneous complex hypersurface} \label{thch} In this section we discuss the geometry of the tubes around the homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ in $({Q^n}^*,g)$. We first observe that the tubes around $P^{n-1}$ are homogeneous real hypersurfaces in ${Q^n}^*$. In fact, the connected closed subgroup $H$ of $G = SO^0_{2,n}$ with Lie algebra \[ {\mathfrak{h}} = {\mathfrak{k}}_{\alpha_1} \oplus {\mathfrak{d}} = {\mathfrak{k}}_{\alpha_1} \oplus \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2} \] acts on ${Q^n}^*$ with cohomogeneity one (see \cite{BD15}, Theorem 8). By construction, the orbit of $H$ containing $o$ is the homogeneous complex hypersurface $P^{n-1}$ and the principal orbits are the tubes around $P^{n-1}$. We denote by $P^{2n-1}_r$ the tube with radius $r \in \mathbb{R}_+$ around $P^{n-1}$ in ${Q^n}^*$. Note that $P^{2n-1}_r$ is a homogeneous real hypersurface in ${Q^n}^*$ and hence $\dim_\mathbb{R}(P^{2n-1}_r) = 2n-1$. By Corollary \ref{Pcurvad}, the homogeneous complex hypersurface $P^{n-1}$ is curvature-adapted. Since tubes around curvature-adapted submanifolds in Riemannian symmetric spaces are again curvature-adapted (see \cite{G04}, Theorem 6.14, or \cite{BV92}, Theorem 6), Corollary \ref{Pcurvad} implies: \begin{prop} The tube $P^{2n-1}_r$ with radius $r \in \mathbb{R}_+$ around the homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ in $({Q^n}^*,g)$ is curvature-adapted. \end{prop} We can therefore use Jacobi field theory to compute the principal curvatures and principal curvature spaces of $P^{2n-1}_r$ (see e.g.\ \cite{BCO16}, Section 10.2.3, for a detailed description of the methodology). Since $P^{2n-1}_r$ is a homogeneous real hypersurface in ${Q^n}^*$, it suffices to compute the principal curvatures and principal curvature spaces at one point. Let $\zeta \in \nu_oP^{n-1}$ be a unit normal vector and $\gamma : \mathbb{R} \to {Q^n}^*$ the geodesic in ${Q^n}^*$ with $\gamma(0) = o$ and $\dot\gamma(0) = \zeta$. Then $p = \gamma(r) \in P^{2n-1}_r$ and $\zeta_r = \dot\gamma(r)$ is a unit normal vector of $P^{2n-1}_r$ at $p$. Since $\zeta$ is ${\mathfrak{A}}$-isotropic, $\zeta_r$ is also ${\mathfrak{A}}$-isotropic. Thus the normal bundle of $P^{2n-1}_r$ consists of ${\mathfrak{A}}$-isotropic singular tangent vectors of ${Q^n}^*$. We denote by $\gamma^\perp$ the parallel subbundle of the tangent bundle of ${Q^n}^*$ along $\gamma$ that is defined by the orthogonal complements of $\mathbb{R}\dot{\gamma}(t)$ in $T_{\gamma(t)}{Q^n}^*$, $t \in \mathbb{R}$, and put \[ \bar{R}^\perp_\gamma = \bar{R}_\gamma|_{\gamma^\perp} = \bar{R}(\cdot,\dot{\gamma})\dot{\gamma}|_{\gamma^\perp}. \] Let $D$ be the ${\rm End}(\gamma^\perp)$-valued tensor field along $\gamma$ solving the Jacobi equation \[ D^{\prime\prime} + \bar{R}^\perp_\gamma \circ D = 0,\ D(0) = \begin{pmatrix} {\rm {id}}_{T_oP^{n-1}} & 0 \\ 0 & 0 \end{pmatrix},\ D^\prime(0) = \begin{pmatrix} -A_\zeta & 0 \\ 0 & {\rm {id}}_{\mathbb{R} J\zeta} \end{pmatrix} , \] where the decomposition of the matrices is with respect to the decomposition $\gamma^\perp(0) = T_oP^{n-1} \oplus \mathbb{R} J\zeta$ and $A_\zeta$ is the shape operator of $P^{n-1}$ with respect to $\zeta$.
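Since the Jacobi operator $\bar{R}^\perp_\gamma$ is parallel along $\gamma$ and diagonalizable with constant eigenvalues, as explained below, the initial value problem for $D$ decouples into scalar equations $f'' + \kappa f = 0$. The following minimal symbolic sketch (assuming Python with sympy is available) recovers the entries of $D(r)$ and the resulting principal curvatures $-f'(r)/f(r)$ of the tubes; it is included only as a sanity check and plays no role in the argument.
\begin{verbatim}
# Sanity check (not used below): the initial value problem for D decouples into
# scalar equations f'' + kappa f = 0 along the eigenspaces of the Jacobi operator
# (made explicit in the next paragraph).
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')

def solve_scalar(kappa, f0, df0):
    sol = sp.dsolve(f(r).diff(r, 2) + kappa * f(r), f(r),
                    ics={f(0): f0, f(r).diff(r).subs(r, 0): df0}).rhs
    return sp.simplify(sol), sp.simplify(-sol.diff(r) / sol)

# (kappa, f(0), f'(0)) on T^zeta_0, T^zeta_1, T^zeta_{-1} and R J zeta, in this
# order; expected entries of D(r): 1, exp(-r), exp(r), sinh(2r)/2, with principal
# curvatures 0, 1, -1, -2coth(2r) (the sign of the last one is adjusted below by
# the choice of orientation of the unit normal vector field).
for kappa, f0, df0 in [(0, 1, 0), (-1, 1, -1), (-1, 1, 1), (-4, 0, 1)]:
    print(solve_scalar(kappa, f0, df0))
\end{verbatim}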
If $v \in T_oP^{n-1}$ and $B_v$ is the parallel vector field along $\gamma$ with $B_v(0) = v$, then $Z_v = DB_v$ is the Jacobi field along $\gamma$ with initial values $Z_v(0) = v$ and $Z_v^\prime(0) = -A_\zeta v$. If $v \in \mathbb{R} J\zeta$ and $B_v$ is the parallel vector field along $\gamma$ with $B_v(0) = v$, then $Z_v = DB_v$ is the Jacobi field along $\gamma$ with initial values $Z_v(0) = 0$ and $Z_v^\prime(0) = v$. We decompose $T_oP^{n-1}$ orthogonally into $T_oP^{n-1} = T^\zeta_0 \oplus T^\zeta_1 \oplus T^\zeta_{-1}$ (see Theorem \ref{ghchP}). Since $\zeta$ is ${\mathfrak{A}}$-isotropic, the Jacobi operator $\bar{R}^\perp_\gamma$ at $o$ is of matrix form \[ \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -4 \end{pmatrix} \] with respect to the decomposition $T^\zeta_0 \oplus T^\zeta_1 \oplus T^\zeta_{-1} \oplus \mathbb{R} J\zeta$. Since $({Q^n}^*,g)$ is a Riemannian symmetric space, the Jacobi operator $\bar{R}^\perp_\gamma$ is parallel along $\gamma$. By solving the above second order initial value problem explicitly we obtain \[ D(r) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & e^{-r} & 0 & 0 \\ 0 & 0 & e^r & 0 \\ 0 & 0 & 0 & \frac{1}{2}\sinh(2r) \end{pmatrix} \] with respect to the parallel translate of the decomposition $T^\zeta_0 \oplus T^\zeta_1 \oplus T^\zeta_{-1} \oplus \mathbb{R} J\zeta$ along $\gamma$ from $o$ to $\gamma(r)$. The shape operator $A^r_{\zeta_r}$ of $P^{2n-1}_r$ with respect to the unit normal vector $\zeta_r = \dot{\gamma}(r)$ satisfies the equation \[ A^r_{\zeta_r} = - D^\prime(r) \circ D^{-1}(r). \] The matrix representation of $A^r_{\zeta_r}$ with respect to the parallel translate of the decomposition $T^\zeta_0 \oplus T^\zeta_1 \oplus T^\zeta_{-1} \oplus \mathbb{R} J\zeta$ along $\gamma$ from $o$ to $\gamma(r)$ therefore is \[ A^r_{\zeta_r} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -2\coth(2r) \end{pmatrix}. \] It is remarkable that the principal curvatures of the tubes $P^{2n-1}_r$, corresponding to the maximal complex subbundle ${\mathcal{C}}$, are the same as those for the focal set $P^{n-1}$. The only additional principal curvature comes from the circles in $P^{2n-1}_r$ generated by the unit normal bundle of $P^{n-1}$, which in fact is the Hopf principal curvature function $\alpha$. We change the orientation of the unit normal vector field of $P^{2n-1}_r$ so that $\alpha$ becomes positive, that is, $\alpha = 2\coth(2r)$. Since the K\"{a}hler structure $J$ is parallel along $\gamma$, the condition $JT^\zeta_1 = T^\zeta_{-1}$ is preserved by parallel translation along $\gamma$. From this we easily see that the shape operator $A^r_{\zeta_r}$ of $P^{2n-1}_r$ satisfies $A^r_{\zeta_r}\phi + \phi A^r_{\zeta_r} = 0$. We summarize the previous discussion in the following result. \begin{thm} Let $P^{2n-1}_r$ be the tube with radius $r \in \mathbb{R}_+$ around the homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ in $({Q^n}^*,g)$. The normal bundle of $P^{2n-1}_r$ consists of ${\mathfrak{A}}$-isotropic singular tangent vectors of $({Q^n}^*,g)$. The homogeneous real hypersurface $P^{2n-1}_r$ has four distinct constant principal curvatures \[ 0,1,-1,2\coth(2r) \] with multiplicities $2,n-2,n-2,1$, respectively, with respect to a suitable orientation of the unit normal vector field $\zeta_r$ of $P^{2n-1}_r$. In particular, the mean curvature of $P^{2n-1}_r$ is equal to $2\coth(2r)$. 
The corresponding principal curvature spaces are \[ T^{\zeta_r}_0 = \mathbb{C} C\zeta_r = {\mathcal{C}} \ominus {\mathcal{Q}}\ ,\ T^{\zeta_r}_{2\coth(2r)} = \mathbb{R} J\zeta_r, \] where $C$ is an arbitrary real structure on ${Q^n}^*$. The principal curvature spaces $T^{\zeta_r}_1$ and $T^{\zeta_r}_{-1}$ are mapped into each other by the complex structure $J$ (or equivalently, by the structure tensor field $\phi$) and are contained in the $\pm 1$-eigenspaces of a suitable real structure $C$. Moreover, the shape operator $A^r$ and the structure tensor field $\phi$ of $P^{2n-1}_r$ satisfy \[ A^r \phi + \phi A^r = 0. \] \end{thm} To put this into the context of Theorem \ref{mainthm}, we define $M_\alpha^{2n-1} = P_r^{2n-1}$ with $\alpha = 2\coth(2r)$. Recall that the normal space $\nu_oP^{n-1}$ is a Lie triple system and the totally geodesic submanifold $B_1$ of ${Q^n}^*$ generated by this Lie triple system is isometric to a complex hyperbolic line $\mathbb{C} H^1(-4)$ of constant holomorphic sectional curvature $-4$. The same is true for all the other normal spaces of $P^{n-1}$. It follows that, by construction, the integral curves of the Reeb vector field $\xi = -J\zeta$ are circles of radius $r$ in a complex hyperbolic line $\mathbb{C} H^1(-4)$ of constant sectional curvature $-4$; such a circle has constant geodesic curvature $\alpha = 2\coth(2r)$. This clarifies the geometric construction discussed in the introduction. \section{The minimal homogeneous Hopf hypersurface} \label{mhHs} In this section we construct the minimal homogeneous real hypersurface $M^{2n-1}_0$ in $({Q^n}^*,g)$. The construction is a special case of the canonical extension technique developed by the author and Tamaru in \cite{BT13}. We start by defining the reductive subalgebra \[ {\mathfrak{l}}_1 = {\mathfrak{g}}_{-\alpha_1} \oplus {\mathfrak{g}}_0 \oplus {\mathfrak{g}}_{\alpha_1} \cong {\mathfrak{s}}{\mathfrak{u}}_{1,1} \oplus \mathbb{R} \oplus {\mathfrak{s}}{\mathfrak{o}}_{n-2} \] and the nilpotent subalgebra \[ {\mathfrak{n}}_1 = {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1 + \alpha_2} \oplus {\mathfrak{g}}_{\alpha_1 + 2\alpha_2} \cong {\mathfrak{h}}^{2n-3} \] of ${\mathfrak{g}} = {\mathfrak{s}}{\mathfrak{o}}_{2,n}$. Here, ${\mathfrak{h}}^{2n-3}$ is the $(2n-3)$-dimensional Heisenberg algebra with $1$-dimensional center. Note that ${\mathfrak{n}}_1$ already appeared in the construction of the homogeneous complex hypersurface $P^{n-1}$ in Section \ref{hcs} as part of the subalgebra ${\mathfrak{d}} = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{n}}_1$. We define \begin{equation*} {\mathfrak{a}}_1 = \ker(\alpha_1) = \mathbb{R} H_{\alpha_1+2\alpha_2} ,\ {\mathfrak{a}}^1 = \mathbb{R} H_{\alpha_1}, \end{equation*} which gives an orthogonal decomposition of ${\mathfrak{a}}$ into ${\mathfrak{a}} = {\mathfrak{a}}_1\oplus {\mathfrak{a}}^1$. The reductive subalgebra ${\mathfrak{l}}_1$ is the centralizer and the normalizer of ${\mathfrak{a}}_1$ in ${\mathfrak{g}}$. Since $[{\mathfrak{l}}_1,{\mathfrak{n}}_1] \subseteq {\mathfrak{n}}_1$, \begin{equation*} {\mathfrak{q}}_1 = {\mathfrak{l}}_1 \oplus {\mathfrak{n}}_1 \end{equation*} is a subalgebra of ${\mathfrak{g}}$, the so-called parabolic subalgebra of ${\mathfrak{g}}$ associated with the simple root $\alpha_1$.
The subalgebra ${\mathfrak{l}}_1 = {\mathfrak{q}}_1 \cap \theta({\mathfrak{q}}_1)$ is a reductive Levi subalgebra of ${\mathfrak{q}}_1$ and ${\mathfrak{n}}_1$ is the unipotent radical of ${\mathfrak{q}}_1$. Therefore the decomposition ${\mathfrak{q}}_1 = {\mathfrak{l}}_1 \oplus {\mathfrak{n}}_1$ is a semidirect sum of the Lie algebras ${\mathfrak{l}}_1$ and ${\mathfrak{n}}_1$. The decomposition ${\mathfrak{q}}_1 = {\mathfrak{l}}_1 \oplus {\mathfrak{n}}_1$ is the Chevalley decomposition of the parabolic subalgebra ${\mathfrak{q}}_1$. Next, we define a reductive subalgebra ${\mathfrak{m}}_1$ of ${\mathfrak{g}}$ by \begin{equation*} {\mathfrak{m}}_1 = {\mathfrak{g}}_{-\alpha_1} \oplus {\mathfrak{a}}^1 \oplus {\mathfrak{g}}_{\alpha_1} \oplus {\mathfrak{k}}_0 \cong {\mathfrak{s}}{\mathfrak{u}}_{1,1} \oplus {\mathfrak{s}}{\mathfrak{o}}_{n-2}. \end{equation*} The subalgebra ${\mathfrak{m}}_1$ normalizes ${\mathfrak{a}}_1 \oplus {\mathfrak{n}}_1$. The decomposition \begin{equation*} {\mathfrak{q}}_1 = {\mathfrak{m}}_1 \oplus {\mathfrak{a}}_1 \oplus {\mathfrak{n}}_1 \end{equation*} is the Langlands decomposition of the parabolic subalgebra ${\mathfrak{q}}_1$. We define a subalgebra ${\mathfrak{k}}_1$ of ${\mathfrak{k}}$ by \begin{equation*} {\mathfrak{k}}_1 = {\mathfrak{q}}_1 \cap {\mathfrak{k}} = {\mathfrak{l}}_1 \cap {\mathfrak{k}} = {\mathfrak{m}}_1 \cap {\mathfrak{k}} = {\mathfrak{k}}_{\alpha_1} \oplus {\mathfrak{k}}_0 \cong {\mathfrak{s}}{\mathfrak{o}}_2 \oplus {\mathfrak{s}}{\mathfrak{o}}_{n-2} . \end{equation*} Next, we define the semisimple subalgebra \[ {\mathfrak{g}}_1 = {\mathfrak{g}}_{-\alpha_1} \oplus {\mathfrak{a}}^1 \oplus {\mathfrak{g}}_{\alpha_1} \cong {\mathfrak{s}}{\mathfrak{u}}_{1,1}. \] It is easy to see that the subspaces \begin{equation*} {\mathfrak{a}} \oplus {\mathfrak{p}}_{\alpha_1} = {\mathfrak{l}}_1\cap {\mathfrak{p}} ,\ {\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1} = {\mathfrak{m}}_1 \cap {\mathfrak{p}} = {\mathfrak{g}}_1 \cap {\mathfrak{p}} \end{equation*} are Lie triple systems in ${\mathfrak p}$. Then ${\mathfrak{g}}_1 = {\mathfrak{k}}_{\alpha_1} \oplus ({\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1})$ is a Cartan decomposition of the semisimple subalgebra ${\mathfrak{g}}_1$ of ${\mathfrak{g}}$ and ${\mathfrak{a}}^1$ is a maximal abelian subspace of ${\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1}$. Moreover, ${\mathfrak{g}}_1 = ({\mathfrak{k}}_{\alpha_1} \oplus {\mathfrak{a}}^1) \oplus {\mathfrak{g}}_{-\alpha_1} \oplus {\mathfrak{g}}_{\alpha_1}$ is the restricted root space decomposition of ${\mathfrak{g}}_1$ with respect to ${\mathfrak{a}}^1$ and $\{\pm\alpha_1\}$ is the corresponding set of restricted roots. We now relate these algebraic constructions to the geometry of the complex hyperbolic quadric ${Q^n}^*$. We denote by $A_1 \cong \mathbb{R}$ the connected abelian subgroup of $G$ with Lie algebra ${\mathfrak{a}}_1$ and by $N_1 \cong H^{2n-3}$ the connected nilpotent subgroup of $G$ with Lie algebra ${\mathfrak{n}}_1 \cong {\mathfrak{h}}^{2n-3}$. Here, $H^{2n-3}$ is the $(2n-3)$-dimensional Heisenberg group with $1$-dimensional center. The centralizer $L_1 = Z_G({\mathfrak{a}}_1) \cong SU_{1,1} \times \mathbb{R} \times SO_{n-2}$ of ${\mathfrak{a}}_1$ in $G$ is a reductive subgroup of $G$ with Lie algebra ${\mathfrak{l}}_1$. The subgroup $A_1$ is contained in the center of $L_1$. The subgroup $L_1$ normalizes $N_1$ and $Q_1 = L_1 N_1$ is a subgroup of $G$ with Lie algebra ${\mathfrak{q}}_1$. 
The subgroup $Q_1$ coincides with the normalizer $N_G({\mathfrak{l}}_1 \oplus {\mathfrak{n}}_1)$ of ${\mathfrak{l}}_1 \oplus {\mathfrak{n}}_1$ in $G$, and hence $Q_1$ is a closed subgroup of $G$. The subgroup $Q_1$ is the parabolic subgroup of $G$ associated with the simple root $\alpha_1$. Let $G_1 \cong SU_{1,1}$ be the connected subgroup of $G$ with Lie algebra ${\mathfrak{g}}_1 \cong {\mathfrak{s}}{\mathfrak{u}}_{1,1}$. The intersection $K_1$ of $L_1$ and $K$, i.e. $K_1 = L_1 \cap K \cong SO_2 SO_{n-2}$, is a maximal compact subgroup of $L_1$ and ${\mathfrak{k}}_1$ is the Lie algebra of $K_1$. The adjoint group ${\rm {Ad}}(L_1)$ normalizes ${\mathfrak{g}}_1$, and consequently $M_1 = K_1 G_1 \cong SU_{1,1} \times SO_{n-2}$ is a subgroup of $L_1$. The Lie algebra of $M_1$ is ${\mathfrak{m}}_1$ and $L_1$ is isomorphic to the Lie group direct product $M_1 \times A_1$, i.e. $L_1 = M_1 \times A_1 \cong (SU_{1,1} \times SO_{n-2}) \times \mathbb{R}$. The parabolic subgroup $Q_1$ acts transitively on ${Q^n}^*$ and the isotropy subgroup at $o$ is $K_1$, that is, ${Q^n}^* \cong Q_1/K_1$. Since ${\mathfrak{g}}_1 = {\mathfrak{k}}_{\alpha_1} \oplus ({\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1})$ is a Cartan decomposition of the semisimple subalgebra ${\mathfrak{g}}_1$, we have $[{\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1},{\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1}] = {\mathfrak{k}}_{\alpha_1}$. Thus $G_1 \cong SU_{1,1}$ is the connected closed subgroup of $G$ with Lie algebra $[{\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1},{\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1}] \oplus ({\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1})$. Since ${\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1}$ is a Lie triple system in ${\mathfrak{p}}$, the orbit $B_1 = G_1 \cdot o$ of the $G_1$-action on ${Q^n}^*$ containing $o$ is a connected totally geodesic submanifold of ${Q^n}^*$ with $T_oB_1 = {\mathfrak{a}}^1 \oplus {\mathfrak{p}}_{\alpha_1}$. Moreover, $B_1$ is a Riemannian symmetric space of non-compact type and rank $1$, and \begin{equation*} B_1 = G_1 \cdot o = G_1/(G_1\cap K_1) \cong SU_{1,1}/SO_2 \cong \mathbb{C} H^1(-4), \end{equation*} where $\mathbb{C} H^1(-4)$ is a complex hyperbolic line of constant (holomorphic) sectional curvature $-4$. The submanifold $B_1$ is a boundary component of ${Q^n}^*$ in the context of the maximal Satake compactification of ${Q^n}^*$. This boundary component coincides with the totally geodesic submanifold $B_1$ that we constructed in Section \ref{hcs}. Clearly, ${\mathfrak{a}}_1$ is a Lie triple system and the corresponding totally geodesic submanifold is a Euclidean line $\mathbb{R} = A_1 \cdot o$. Since the action of $A_1$ on ${Q^n}^*$ is free and $A_1$ is simply connected, we can identify $\mathbb{R}$ and $A_1$ canonically. Finally, ${\mathfrak{f}}_1 = {\mathfrak{a}} \oplus {\mathfrak{p}}_{\alpha_1}$ is a Lie triple system and the corresponding totally geodesic submanifold $F_1$ is the symmetric space \begin{equation*} F_1 = L_1 \cdot o = L_1/K_1= (M_1 \times A_1)/K_1 = B_1 \times \mathbb{R} \cong \mathbb{C} H^1(-4) \times \mathbb{R}. \end{equation*} The submanifolds $F_1$ and $B_1$ have a natural geometric interpretation. Denote by $\bar{C}^+(\Lambda) \subset {\mathfrak{a}}$ the closed positive Weyl chamber that is determined by the two simple roots $\alpha_1$ and $\alpha_2$.
Let $Z$ be a non-zero vector in $\bar{C}^+(\Lambda)$ such that $\alpha_1(Z) = 0$ and $\alpha_2(Z) > 0$, and consider the geodesic $\gamma_Z(t) = {\rm {Exp}}(tZ) \cdot o$ in ${Q^n}^*$ with $\gamma_Z(0) = o$ and $\dot{\gamma}_Z(0) = Z$. The totally geodesic submanifold $F_1$ is the union of all geodesics in ${Q^n}^*$ parallel to $\gamma_Z$, and $B_1$ is the semisimple part of $F_1$ in the de Rham decomposition of $F_1$ (see e.g.\ \cite{Eb96}, Proposition 2.11.4 and Proposition 2.20.10). The parabolic group $Q_1$ is diffeomorphic to the product $M_1 \times A_1 \times N_1$. This analytic diffeomorphism induces an analytic diffeomorphism between \[ B_1 \times \mathbb{R} \times N_1 \cong \mathbb{C} H^1(-4) \times \mathbb{R} \times H^{2n-3} \] and ${Q^n}^*$, giving a horospherical decomposition of the complex hyperbolic quadric ${Q^n}^*$, \[ \mathbb{C} H^1(-4) \times \mathbb{R} \times H^{2n-3} \cong {Q^n}^*. \] The factor $\mathbb{R} \times H^{2n-3}$ corresponds to the homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ that we discussed in Section \ref{hcs}. We have $\mathbb{R} H_{\alpha_1} = {\mathfrak{a}}^1 \subset {\mathfrak{g}}_1$ and $G_1 \cdot o = B_1$. It follows from Theorem \ref{ghchP} that ${\mathfrak{a}}^1$ consists of ${\mathfrak{A}}$-isotropic tangent vectors of ${Q^n}^*$. Let $A^1 \cong \mathbb{R}$ be the connected abelian subgroup of $G$ with Lie algebra ${\mathfrak{a}}^1$. Then the orbit $A^1 \cdot o$ is the path of an ${\mathfrak{A}}$-isotropic geodesic $\gamma$ (determined by the root vector $H_{\alpha_1}$) in the complex hyperbolic quadric ${Q^n}^*$. Moreover, by construction, this geodesic is contained in the boundary component $B_1 \cong \mathbb{C} H^1(-4)$. The action of $A^1$ on $\mathbb{C} H^1(-4)$ is of cohomogeneity one. The orbit containing $o$ is the geodesic $\gamma$, and the other orbits are the equidistant curves to $\gamma$. The canonical extension of the cohomogeneity one action of $A^1$ on the boundary component $B_1 \cong \mathbb{C} H^1(-4)$ is defined as follows. We first define the solvable subalgebra
\begin{align*} {\mathfrak{s}}_1 & = {\mathfrak{a}}^1 \oplus {\mathfrak{a}}_1 \oplus {\mathfrak{n}}_1 = {\mathfrak{a}} \oplus {\mathfrak{n}}_1 \\ & = \left\{ \begin{pmatrix}
0 & y & a_1 & -y & v_1 & \cdots & v_{n-2}\\
-y & 0 & y & a_2 & w_1 & \cdots & w_{n-2} \\
a_1 & y & 0 & -y & v_1 & \cdots & v_{n-2}\\
-y & a_2 & y & 0 & w_1 & \cdots & w_{n-2} \\
v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0
\end{pmatrix} : \begin{array}{l} a_1,a_2,y \in \mathbb{R},\\ v,w \in \mathbb{R}^{n-2} \end{array} \right\} \end{align*}
of ${\mathfrak{a}} \oplus {\mathfrak{n}}$. Let $S_1$ be the connected solvable subgroup of $AN$ with Lie algebra ${\mathfrak{s}}_1$. Then the action of $S_1$ on $AN$ (resp.\ ${Q^n}^*$) is of cohomogeneity one (see \cite{BT13}). By construction, all orbits of the $S_1$-action on $AN$ (resp.\ ${Q^n}^*$) are homogeneous real hypersurfaces in $(AN,\langle \cdot , \cdot \rangle)$ (resp.\ $({Q^n}^*,g)$). Let $\hat{M}^{2n-1}_0$ (resp.\ $M^{2n-1}_0$) be the orbit containing the point $o$. Geometrically, we can describe this orbit as the canonical extension of an ${\mathfrak{A}}$-isotropic geodesic in the boundary component $B_1 \cong \mathbb{C} H^1(-4)$. We will now compute the shape operator of the homogeneous real hypersurface $\hat{M}^{2n-1}_0$ in $(AN,\langle \cdot , \cdot \rangle)$.
Since $\hat{M}^{2n-1}_0$ is homogeneous, it suffices to make the computations at the point $o$. We define \[ \hat\zeta = \begin{pmatrix} 0 & 1 & 0 & 1 & 0 & \cdots & 0 \\ -1 & 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 1 & 0 & \cdots & 0 \\ 1 & 0 & -1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} \in {\mathfrak{g}}_{\alpha_1} \subset {\mathfrak{n}}. \] Then \[ \theta(\hat\zeta) = \begin{pmatrix} 0 & 1 & 0 & -1 & 0 & \cdots & 0 \\ -1 & 0 & -1 & 0 & 0 & \cdots & 0 \\ 0 & -1 & 0 & 1 & 0 & \cdots & 0 \\ -1 & 0 & -1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} \in {\mathfrak{g}}_{-\alpha_1} \] and \begin{align*} \langle \hat\zeta , \hat\zeta \rangle & = - \frac{1}{8}{\rm {tr}}(\hat\zeta\theta(\hat\zeta)) = 1. \end{align*} Thus $\hat\zeta$ is a unit normal vector of $\hat{M}^{2n-1}_0$ at $o$. Let $\hat{A}$ be the shape operator of $\hat{M}^{2n-1}_0$ in $(AN,\langle \cdot , \cdot \rangle)$ with respect to $\hat{\zeta}$. As in Section \ref{hcs}, using arguments involving the Weingarten and Koszul formulas, we can show that \[ \hat{A} \hat{X} = [\zeta,\hat{X}]_{{\mathfrak{s}}_1} \] for all $\hat{X} \in {\mathfrak{s}}_1$, where $\zeta = \frac{1}{2}(\hat\zeta - \theta(\hat\zeta))$ is the orthogonal projection of $\hat{\zeta}$ onto ${\mathfrak{p}}$ and $[\, \cdot\ ]_{{\mathfrak{s}}_1}$ is the orthogonal projection onto ${\mathfrak{s}}_1$. We have \[ \zeta = \frac{1}{2}(\hat\zeta - \theta(\hat\zeta)) = \begin{pmatrix} 0 & 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & 0 & \cdots & 0 \\ 1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} \in {\mathfrak{p}}_{\alpha_1}. \] For \[ \hat{X} = \begin{pmatrix} 0 & y & a_1 & -y & v_1 & \cdots & v_{n-2}\\ -y & 0 & y & a_2 & w_1 & \cdots & w_{n-2} \\ a_1 & y & 0 & -y & v_1 & \cdots & v_{n-2}\\ -y & a_2 & y & 0 & w_1 & \cdots & w_{n-2} \\ v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0 \end{pmatrix} \in {\mathfrak{s}}_1 \] we then compute \[ [\zeta,\hat{X}] = \begin{pmatrix} 0 & a_2-a_1 & 0 & 0 & w_1 & \cdots & w_{n-2} \\ a_1-a_2 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\ 0 & 0 & 0 & a_2-a_1 & w_1 & \cdots & w_{n-2}\\ 0 & 0 & a_1-a_2 & 0 & v_1 & \cdots & v_{n-2} \\ w_1 & v_1 & -w_1 & -v_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ w_{n-2} & v_{n-2} & -w_{n-2} & -v_{n-2} & 0 & \cdots & 0 \end{pmatrix}. \] The orthogonal projection of $[\zeta,\hat{X}]$ onto ${\mathfrak{s}}_1$ is \[ [\zeta,\hat{X}]_{{\mathfrak{s}}_1} = \begin{pmatrix} 0 & 0 & 0 & 0 & w_1 & \cdots & w_{n-2} \\ 0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\ 0 & 0 & 0 & 0 & w_1 & \cdots & w_{n-2}\\ 0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\ w_1 & v_1 & -w_1 & -v_1 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ w_{n-2} & v_{n-2} & -w_{n-2} & -v_{n-2} & 0 & \cdots & 0 \end{pmatrix}. 
\] We conclude that the shape operator $\hat{A}$ of $\hat{M}^{2n-1}_0$ in $(AN,\langle \cdot , \cdot \rangle)$ with respect to $\hat{\zeta}$ is given by
\[ \hat{A} \hat{X} = \begin{pmatrix}
0 & 0 & 0 & 0 & w_1 & \cdots & w_{n-2} \\
0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\
0 & 0 & 0 & 0 & w_1 & \cdots & w_{n-2}\\
0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\
w_1 & v_1 & -w_1 & -v_1 & 0 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_{n-2} & v_{n-2} & -w_{n-2} & -v_{n-2} & 0 & \cdots & 0
\end{pmatrix} \]
with
\[ \hat{X} = \begin{pmatrix}
0 & y & a_1 & -y & v_1 & \cdots & v_{n-2}\\
-y & 0 & y & a_2 & w_1 & \cdots & w_{n-2} \\
a_1 & y & 0 & -y & v_1 & \cdots & v_{n-2}\\
-y & a_2 & y & 0 & w_1 & \cdots & w_{n-2} \\
v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0
\end{pmatrix} \in {\mathfrak{s}}_1. \]
From this we deduce that $0$ is a principal curvature of $\hat{M}^{2n-1}_0$ with multiplicity $3$ and corresponding principal curvature space
\begin{align*} \hat{T}_0 & = {\mathfrak{a}} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2}. \end{align*}
On the orthogonal complement ${\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_2}$ the shape operator is of the form
\[ \hat{A} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \]
with respect to the orthogonal decomposition ${\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_2}$. The characteristic polynomial of this matrix is $x^2 - 1$, and hence the eigenvalues of $\hat{A}$ restricted to ${\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_2}$ are $1$ and $-1$. The corresponding eigenspaces are
\[ \hat{T}_1 = \left\{ \begin{pmatrix}
0 & 0 & 0 & 0 & u_1 & \cdots & u_{n-2}\\
0 & 0 & 0 & 0 & u_1 & \cdots & u_{n-2} \\
0 & 0 & 0 & 0 & u_1 & \cdots & u_{n-2}\\
0 & 0 & 0 & 0 & u_1 & \cdots & u_{n-2} \\
u_1 & u_1 & -u_1 & -u_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
u_{n-2} & u_{n-2} & -u_{n-2} & -u_{n-2} & 0 & \cdots & 0
\end{pmatrix} \right\} \cong \mathbb{R}^{n-2} \]
and
\[ \hat{T}_{-1} = \left\{ \begin{pmatrix}
0 & 0 & 0 & 0 & u_1 & \cdots & u_{n-2}\\
0 & 0 & 0 & 0 & -u_1 & \cdots & -u_{n-2} \\
0 & 0 & 0 & 0 & u_1 & \cdots & u_{n-2}\\
0 & 0 & 0 & 0 & -u_1 & \cdots & -u_{n-2} \\
u_1 & -u_1 & -u_1 & u_1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
u_{n-2} & -u_{n-2} & -u_{n-2} & u_{n-2} & 0 & \cdots & 0
\end{pmatrix} \right\} \cong \mathbb{R}^{n-2}. \]
All of the above calculations are with respect to the metric $\langle \cdot , \cdot \rangle$ on $AN$. We now switch to the Riemannian metric $g$ on ${Q^n}^*$ and the Cartan decomposition ${\mathfrak{g}} = {\mathfrak{k}} \oplus {\mathfrak{p}}$. Recall that, by construction, $(AN,\langle \cdot , \cdot \rangle)$ and $({Q^n}^*,g)$ are isometric and the metrics are related by
\[ \langle H_1 + \hat{X}_1 , H_2 + \hat{X}_2 \rangle = g(H_1,H_2) + g(X_1,X_2) \]
with $H_1,H_2 \in {\mathfrak{a}}$ and $\hat{X}_1,\hat{X}_2 \in {\mathfrak{n}}$. Since $\hat{\zeta}$ is a unit vector in ${\mathfrak{g}}_{\alpha_1}$, the vector $\zeta = \frac{1}{2}(\hat\zeta - \theta(\hat\zeta))$ is a unit vector in ${\mathfrak{p}}_{\alpha_1}$.
Since ${\mathfrak{p}}_{\alpha_1} \subset T_oB_1$ and all (non-zero) tangent vectors of the boundary component $B_1$ are ${\mathfrak{A}}$-isotropic (see Theorem \ref{ghchP}), we conclude that the normal bundle of $M^{2n-1}_0$ consists of ${\mathfrak{A}}$-isotropic singular tangent vectors of $({Q^n}^*,g)$. Let $A$ be the shape operator of $M^{2n-1}_0$ in $({Q^n}^*,g)$ with respect to $\zeta$. The above calculations imply that
\[ A X = \begin{pmatrix}
0 & 0 & 0 & 0 & w_1 & \cdots & w_{n-2} \\
0 & 0 & 0 & 0 & v_1 & \cdots & v_{n-2} \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
w_1 & v_1 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_{n-2} & v_{n-2} & 0 & 0 & 0 & \cdots & 0
\end{pmatrix} \]
with
\[ X = \begin{pmatrix}
0 & 0 & a_1 & -y & v_1 & \cdots & v_{n-2}\\
0 & 0 & y & a_2 & w_1 & \cdots & w_{n-2} \\
a_1 & y & 0 & 0 & 0 & \cdots & 0 \\
-y & a_2 & 0 & 0 & 0 & \cdots & 0 \\
v_1 & w_1 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
v_{n-2} & w_{n-2} & 0 & 0 & 0 & \cdots & 0
\end{pmatrix} \in T_oM^{2n-1}_0 \subset {\mathfrak{p}}. \]
From this we easily deduce the following result. \begin{thm} \label{gEso} Let $M^{2n-1}_0$ be the homogeneous real hypersurface in $({Q^n}^*,g)$ obtained by canonical extension of the geodesic that is tangent to the root vector $H_{\alpha_1}$ in the boundary component $B_1 \cong \mathbb{C} H^1(-4)$ of $({Q^n}^*,g)$. The normal bundle of $M^{2n-1}_0$ consists of ${\mathfrak{A}}$-isotropic singular tangent vectors of $({Q^n}^*,g)$ and $M^{2n-1}_0$ has three distinct constant principal curvatures $0$, $1$, $-1$ with multiplicities $3$, $n-2$, $n-2$, respectively. The principal curvature spaces $T_0$, $T_1$ and $T_{-1}$ are \begin{align*} T_0 &= {\mathfrak{a}} \oplus {\mathfrak{p}}_{\alpha_1+2\alpha_2} = ({\mathcal{C}} \ominus {\mathcal{Q}}) \oplus \mathbb{R}\xi,\\ T_1 & = \{ X-JX : X \in {\mathfrak{p}}_{\alpha_2}\} = \{ X+JX : X \in {\mathfrak{p}}_{\alpha_1+\alpha_2}\},\\ T_{-1} & = \{ X+JX : X \in {\mathfrak{p}}_{\alpha_2}\} = \{ X-JX : X \in {\mathfrak{p}}_{\alpha_1+\alpha_2}\}. \end{align*} We have $T_1 \oplus T_{-1} = {\mathcal{Q}}$ and $JT_1 = T_{-1}$. The shape operator $A$ of $M^{2n-1}_0$ satisfies \[ A\phi + \phi A = 0. \] \end{thm} Note that \[ T_1 \subset V\left( \frac{1}{\sqrt{2}}(C_0 + JC_0) \right),\ T_{-1} \subset JV\left( \frac{1}{\sqrt{2}}(C_0 + JC_0) \right). \] We immediately see from Theorem \ref{gEso} that ${\rm {tr}}(A) = 0$. \begin{cor} The homogeneous Hopf hypersurface $M^{2n-1}_0$ in $({Q^n}^*,g)$ is minimal. \end{cor} The eigenspaces $T_0$, $T_1$ and $T_{-1}$ of the shape operator $A$ and the eigenspaces $E_0$, $E_{-1}$ and $E_{-4}$ of the normal Jacobi operator $K_\zeta$ satisfy \[ T_0 = E_0 \oplus E_{-4}\ ,\ T_{-1} \oplus T_1 = E_{-1}. \] It follows that $A$ and $K = K_\zeta$ are simultaneously diagonalizable and hence $AK = KA$. This implies that $M^{2n-1}_0$ is curvature-adapted. \begin{cor} \label{gEca} The homogeneous Hopf hypersurface $M^{2n-1}_0$ in $({Q^n}^*,g)$ is curvature-adapted. \end{cor} We finally relate this construction to the discussion in the introduction.
The subalgebra \[ {\mathfrak{s}}_1 = {\mathfrak{a}} \oplus {\mathfrak{n}}_1 = {\mathfrak{a}} \oplus {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1 + \alpha_2} \oplus {\mathfrak{g}}_{\alpha_1 + 2\alpha_2} \] of ${\mathfrak{a}} \oplus {\mathfrak{n}}$ contains the subalgebra \[ {\mathfrak{d}} = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{g}}_{\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+\alpha_2} \oplus {\mathfrak{g}}_{\alpha_1+2\alpha_2}. \] The subalgebra ${\mathfrak{d}}$ induces the homogeneous complex hypersurface $\hat{P}^{n-1} \cong \mathbb{C} H^{n-1}(-4)$, as discussed in Section \ref{hcs}. Since the construction is left-invariant, it follows that the homogeneous real hypersurface $\hat{M}^{2n-1}_0$ in $(AN,\langle \cdot , \cdot \rangle)$ is foliated by isometric copies of the homogeneous complex hypersurface $\hat{P}^{n-1} \cong \mathbb{C} H^{n-1}(-4)$. This implies that the homogeneous real hypersurface $M^{2n-1}_0$ in $({Q^n}^*,g)$ is foliated by isometric copies of the homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$. The normal space $\nu_oP^{n-1}$ is a Lie triple system and the totally geodesic submanifold $B_1$ of ${Q^n}^*$ generated by this Lie triple system is a complex hyperbolic line $\mathbb{C} H^1(-4)$ of constant holomorphic sectional curvature $-4$. The same is true for all the normal spaces of $P^{n-1}$ at other points. It follows that, by construction, the integral curves of the Reeb vector field $\xi = -J\zeta$ are geodesics in a complex hyperbolic line of constant (holomorphic) sectional curvature $-4$. Such a geodesic has constant geodesic curvature $0$. This clarifies the geometric construction explained in the introduction. \section{Equidistant real hypersurfaces} \label{eqdihyp} In this section we compute the shape operator of the other orbits of the cohomogeneity one action on $({Q^n}^*,g)$ that we discussed in Section \ref{mhHs}. Recall that $M^{2n-1}_0$ is the orbit of this action containing $o$. Since the action is isometric, the other orbits are the equidistant real hypersurfaces to $M^{2n-1}_0$. For $r \in \mathbb{R}_+$ we denote by $M^{2n-1}_\alpha$ the equidistant real hypersurface to $M^{2n-1}_0$ at oriented distance $r$, where we put $\alpha = 2\tanh(2r)$. From Corollary \ref{gEca} we know that $M^{2n-1}_0$ is a curvature-adapted real hypersurface in ${Q^n}^*$. We can therefore use Jacobi field theory to compute the principal curvatures and principal curvature spaces of $M^{2n-1}_\alpha$ (see e.g.\ \cite{BCO16}, Section 10.2.2). Since $M^{2n-1}_\alpha$ is a homogeneous real hypersurface in ${Q^n}^*$, it suffices to compute the principal curvatures and principal curvature spaces at one point. Let $\zeta \in \nu_oM^{2n-1}_0$ be the unit normal vector of $M^{2n-1}_0$ as defined in Section \ref{mhHs} and $A_\zeta$ be the shape operator of $M^{2n-1}_0$ at $o$ with respect to $\zeta$. We denote by $T_0$, $T_1$ and $T_{-1}$ the principal curvature spaces as in Theorem \ref{gEso}. Let $\gamma : \mathbb{R} \to {Q^n}^*$ be the geodesic in ${Q^n}^*$ with $\gamma(0) = o$ and $\dot\gamma(0) = \zeta$. Then $p = \gamma(r) \in M^{2n-1}_\alpha$ and $\zeta_r = \dot\gamma(r)$ is a unit normal vector of $M^{2n-1}_\alpha$ at $p$.
We denote by $\gamma^\perp$ the parallel subbundle of the tangent bundle of ${Q^n}^*$ along $\gamma$ that is defined by the orthogonal complements of $\mathbb{R}\dot{\gamma}(t)$ in $T_{\gamma(t)}{Q^n}^*$, and put \[ \bar{R}^\perp_\gamma = \bar{R}_\gamma|_{\gamma^\perp} = \bar{R}(\cdot,\dot{\gamma})\dot{\gamma}|_{\gamma^\perp}. \] Let $D$ be the ${\rm End}(\gamma^\perp)$-valued tensor field along $\gamma$ solving the Jacobi equation \[ D^{\prime\prime} + \bar{R}^\perp_\gamma \circ D = 0 ,\ D(0) = {\rm {id}}_{T_oM^{2n-1}_0} ,\ D^\prime(0) = -A_\zeta. \] If $v \in T_oM^{2n-1}_0$ and $B_v$ is the parallel vector field along $\gamma$ with $B_v(0) = v$, then $Y = DB_v$ is the Jacobi field along $\gamma$ with initial values $Y(0) = v$ and $Y^\prime(0) = -A_\zeta v$. Since $\zeta$ is ${\mathfrak{A}}$-isotropic, the Jacobi operator $\bar{R}^\perp_\gamma$ at $o$ is of matrix form \[ \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -4 \end{pmatrix} \] with respect to the decomposition $\mathbb{C} C\zeta \oplus T_1 \oplus T_{-1} \oplus \mathbb{R} J\zeta$. Note that $T_0 = \mathbb{C} C\zeta \oplus \mathbb{R} J\zeta$. Since $({Q^n}^*,g)$ is a Riemannian symmetric space, the Jacobi operator $\bar{R}^\perp_\gamma$ is parallel along $\gamma$. By solving the above second order initial value problem explicitly we obtain \[ D(r) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & e^{-r} & 0 & 0 \\ 0 & 0 & e^r & 0 \\ 0 & 0 & 0 & \cosh(2r) \end{pmatrix} \] with respect to the parallel translate of the decomposition $\mathbb{C} C\zeta \oplus T_1 \oplus T_{-1} \oplus \mathbb{R} J\zeta$ along $\gamma$ from $o$ to $\gamma(r)$. The shape operator $A^\alpha_{\zeta_r}$ of $M^{2n-1}_\alpha$ with respect to $\zeta_r = \dot{\gamma}(r)$ satisfies the equation \[ A^\alpha_{\zeta_r} = - D^\prime(r) \circ D^{-1}(r). \] The matrix representation of $A^\alpha_{\zeta_r}$ with respect to the parallel translate of the decomposition $\mathbb{C} C\zeta \oplus T_1 \oplus T_{-1} \oplus \mathbb{R} J\zeta$ along $\gamma$ from $o$ to $\gamma(r)$ therefore is \[ A^\alpha_{\zeta_r} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -2\tanh(2r) \end{pmatrix}. \] It is remarkable that the principal curvatures of the equidistant real hypersurfaces to $M^{2n-1}_0$ are preserved along the parallel translate of the maximal complex subspace ${\mathcal{C}}_o \subset T_oM_0^{2n-1}$. The only additional principal curvature arises in the direction of the Reeb vector field, which is the Hopf principal curvature. We change the orientation of the unit normal vector field $\zeta_r$ so that the Hopf principal curvature is positive, that is, it is equal to $\alpha$. Thus we have proved: \begin{thm} Let $M^{2n-1}_0$ be the minimal homogeneous Hopf hypersurface in $({Q^n}^*,g)$ as in Section \ref{mhHs} and $M^{2n-1}_\alpha$ be the equidistant real hypersurface at oriented distance $r \in \mathbb{R}_+$ from $M^{2n-1}_0$, where $\alpha = 2\tanh(2r)$. Then $M^{2n-1}_\alpha$ is a homogeneous Hopf hypersurface with four distinct constant principal curvatures $0$, $1$, $-1$, $2\tanh(2r)$ with multiplicities $2$, $n-2$, $n-2$, $1$, respectively. The principal curvature spaces $T_0$, $T_1$, $T_{-1}$, $T_{2\tanh(2r)}$ satisfy \begin{align*} T_0 &= {\mathcal{C}} \ominus {\mathcal{Q}},\\ T_{2\tanh(2r)} & = \mathbb{R} \xi = {\mathcal{C}}^\perp,\\ T_1 \oplus T_{-1} & = {\mathcal{Q}} \mbox{ and } JT_1 = T_{-1}.
\end{align*} Moreover, the shape operator $A^\alpha$ and the structure tensor field $\phi$ of $M^{2n-1}_\alpha$ satisfy \[ A^\alpha \phi + \phi A^\alpha = 0. \] \end{thm} The principal curvature spaces $T_{2\tanh(2r)}$, $T_0$, $T_1$ and $T_{-1}$ of the shape operator $A^\alpha$ and the eigenspaces $E_0$, $E_{-1}$ and $E_{-4}$ of the normal Jacobi operator $K^\alpha = K_{\zeta_r}$ satisfy \[ T_{2\tanh(2r)} = E_{-4}\ ,\ T_0 = E_0 \ ,\ T_{-1} \oplus T_1 = E_{-1}. \] It follows that $A^\alpha$ and $K^\alpha$ are simultaneously diagonalizable and hence $A^\alpha K^\alpha = K^\alpha A^\alpha$. This implies: \begin{cor} The equidistant real hypersurface $M^{2n-1}_\alpha$, $0 < \alpha < 2$, to the minimal homogeneous Hopf hypersurface $M^{2n-1}_0$ in $({Q^n}^*,g)$ is curvature-adapted. \end{cor} By construction, the integral curves of the Reeb vector field on $M^{2n-1}_\alpha$ are congruent to an equidistant curve at distance $r = \frac{1}{2}\tanh^{-1}(\frac{\alpha}{2})$ to a geodesic in a complex hyperbolic line $\mathbb{C} H^1(-4)$. Such an equidistant curve has constant geodesic curvature $2\tanh(2r)$. As in previous cases, this leads to the geometric interpretation of $M^{2n-1}_\alpha$ given by attaching copies of the homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ to such an equidistant curve to a geodesic in the boundary component $B_1 \cong \mathbb{C} H^1(-4)$. Equivalently, $M^{2n-1}_\alpha$ is the canonical extension of an equidistant curve at distance $r = \frac{1}{2}\tanh^{-1}(\frac{\alpha}{2})$ to a geodesic in the boundary component $B_1 \cong \mathbb{C} H^1(-4)$. \section{The homogeneous Hopf hypersurface of horocyclic type} \label{canexthoro} In this section we discuss the canonical extension of a horocycle in the boundary component $B_1 \cong \mathbb{C} H^1(-4)$, which leads to the homogeneous real hypersurface $M^{2n-1}_2$ in Theorem \ref{mainthm}. We first define the solvable subalgebra \[ {\mathfrak{h}}_1 = {\mathfrak{a}}_1 \oplus {\mathfrak{n}} = \left\{ \begin{pmatrix} 0 & x+y & a & x-y & v_1 & \cdots & v_{n-2} \\ -x-y & 0 & x+y & a & w_1 & \cdots & w_{n-2} \\ a & x+y & 0 & x-y & v_1 & \cdots & v_{n-2} \\ x-y & a & -x+y & 0 & w_1 & \cdots & w_{n-2} \\ v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0 \end{pmatrix} : \begin{array}{l} a,x,y \in \mathbb{R},\\ v,w \in \mathbb{R}^{n-2} \end{array} \right\} \] of ${\mathfrak{a}} \oplus {\mathfrak{n}}$. Recall that ${\mathfrak{a}}^1 \oplus {\mathfrak{g}}_{\alpha_1} = \mathbb{R} H_{\alpha_1} \oplus {\mathfrak{g}}_{\alpha_1}$ generates the boundary component $B_1 \cong \mathbb{C} H^1(-4)$. The orbit containing $o$ of the $1$-dimensional Lie group generated by ${\mathfrak{g}}_{\alpha_1}$ is a horocycle in the boundary component $B_1$. Since the tangent vectors of $B_1$ are ${\mathfrak{A}}$-isotropic, the horocycle is ${\mathfrak{A}}$-isotropic. The canonical extension of this cohomogeneity one action on $B_1$ is the cohomogeneity one action on ${Q^n}^*$ by the subgroup $H_1$ of $AN$ with Lie algebra ${\mathfrak{h}}_1$. Let $\hat{M}^{2n-1}_2 = H_1 \cdot o \cong H_1$ be the orbit of the $H_1$-action on $(AN,\langle \cdot , \cdot \rangle)$ containing $o$ and $M^{2n-1}_2 = H_1 \cdot o$ be the orbit of the $H_1$-action on $({Q^n}^*,g)$ containing $o$. The normal space $\nu_o\hat{M}^{2n-1}_2$ of $\hat{M}^{2n-1}_2$ at $o$ is \[ \nu_o\hat{M}^{2n-1}_2 = {\mathfrak{a}}^1 = \mathbb{R} H_{\alpha_1}. 
\] Since $\langle H_{\alpha_1} , H_{\alpha_1} \rangle = 4$, the vector $\hat\zeta = \frac{1}{2}H_{\alpha_1} \in {\mathfrak{a}}$ is a unit normal vector of $\hat{M}^{2n-1}_2$ at $o$. Let $\hat{A}$ be the shape operator of $\hat{M}^{2n-1}_2$ in $(AN,\langle \cdot , \cdot \rangle)$ with respect to $\hat{\zeta}$. As in previous sections, we can show that the shape operater $\hat{A}$ of $\hat{M}^{2n-1}_2$ is given by \[ \hat{A} \hat{X} = [\zeta,\hat{X}]_{{\mathfrak{h}}_1} \] for all $\hat{X} \in {\mathfrak{h}}_1$, where \[ \zeta = \frac{1}{2}(\hat\zeta - \theta(\hat\zeta)) = \hat\zeta = \frac{1}{2}H_{\alpha_1} \] and $[\, \cdot\ ]_{{\mathfrak{h}}_1}$ is the orthogonal projection onto ${\mathfrak{h}}_1$. For \[ \hat{X} = \begin{pmatrix} 0 & x+y & a & x-y & v_1 & \cdots & v_{n-2} \\ -x-y & 0 & x+y & a & w_1 & \cdots & w_{n-2} \\ a & x+y & 0 & x-y & v_1 & \cdots & v_{n-2} \\ x-y & a & -x+y & 0 & w_1 & \cdots & w_{n-2} \\ v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0 \end{pmatrix} \in {\mathfrak{h}}_1 \] we then compute \[ [\zeta,\hat{X}] = \begin{pmatrix} 0 & 2x & 0 & 2x & v_1 & \cdots & v_{n-2} \\ -2x & 0 & 2x & 0 & -w_1 & \cdots & -w_{n-2} \\ 0 & 2x & 0 & 2x & v_1 & \cdots & v_{n-2}\\ 2x & 0 & -2x & 0 & -w_1 & \cdots & -w_{n-2} \\ v_1 & -w_1 & -v_1 & w_1 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ v_{n-2} & -w_{n-2} & -v_{n-2} & w_{n-2} & 0 & \cdots & 0 \end{pmatrix}. \] Since the last matrix is in ${\mathfrak{h}}_1$, the orthogonal projection of $[\zeta,\hat{X}]$ onto ${\mathfrak{h}}_1$ is $[\zeta,\hat{X}]$. We conclude that the shape operator $\hat{A}$ of $\hat{M}^{2n-1}_2$ in $(AN,\langle \cdot , \cdot \rangle)$ is given by \[ \hat{A} \hat{X} = \begin{pmatrix} 0 & 2x & 0 & 2x & v_1 & \cdots & v_{n-2} \\ -2x & 0 & 2x & 0 & -w_1 & \cdots & -w_{n-2} \\ 0 & 2x & 0 & 2x & v_1 & \cdots & v_{n-2}\\ 2x & 0 & -2x & 0 & -w_1 & \cdots & -w_{n-2} \\ v_1 & -w_1 & -v_1 & w_1 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ v_{n-2} & -w_{n-2} & -v_{n-2} & w_{n-2} & 0 & \cdots & 0 \end{pmatrix} \] with \[ \hat{X} = \begin{pmatrix} 0 & x+y & a & x-y & v_1 & \cdots & v_{n-2} \\ -x-y & 0 & x+y & a & w_1 & \cdots & w_{n-2} \\ a & x+y & 0 & x-y & v_1 & \cdots & v_{n-2} \\ x-y & a & -x+y & 0 & w_1 & \cdots & w_{n-2} \\ v_1 & w_1 & -v_1 & -w_1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ v_{n-2} & w_{n-2} & -v_{n-2} & -w_{n-2} & 0 & \cdots & 0 \end{pmatrix} \in {\mathfrak{h}}_1. \] From this we deduce that the principal curvatures of $\hat{M}^{2n-1}_2$ are $2$, $0$, $1$, $-1$ with corresponding multiplicities $1$, $2$, $n-2$, $n-2$, respectively. The corresponding principal curvature spaces are \[ \hat{T}_2 = {\mathfrak{g}}_{\alpha_1}\ ,\ \hat{T}_0 = {\mathfrak{a}}_1 \oplus {\mathfrak{g}}_{\alpha_1\oplus 2\alpha_2} \ ,\ \hat{T}_1 = {\mathfrak{g}}_{\alpha_1+\alpha_2}\ ,\ \hat{T}_{-1} = {\mathfrak{g}}_{\alpha_2}. \] All of the above calculations are with respect to the metric $\langle \cdot , \cdot \rangle$ on $AN$. We now switch to the Riemannian metric $g$ on ${Q^n}^*$ and the Cartan decomposition ${\mathfrak{g}} = {\mathfrak{k}} \oplus {\mathfrak{p}}$. 
Recall that, by construction, $(AN,\langle \cdot , \cdot \rangle)$ and $({Q^n}^*,g)$ are isometric and the metrics are related by \[ \langle H_1 + \hat{X}_1 , H_2 + \hat{X}_2 \rangle = g(H_1,H_2) + g(X_1,X_2) \] with $H_1,H_2 \in {\mathfrak{a}}$ and $\hat{X}_1,\hat{X}_2 \in {\mathfrak{n}}$. Let $A$ be the shape operator of $M^{2n-1}_2$ in $({Q^n}^*,g)$ with respect to $\zeta$. The above calculations then imply \[ A X = \begin{pmatrix} 0 & 0 & 0 & 2x & v_1 & \cdots & v_{n-2} \\ 0 & 0 & 2x & 0 & -w_1 & \cdots & -w_{n-2} \\ 0 & 2x & 0 & 0 & 0 & \cdots & 0\\ 2x & 0 & 0 & 0 & 0 & \cdots & 0 \\ v_1 & -w_1 & 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ v_{n-2} & -w_{n-2} & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} \] with \[ X = \begin{pmatrix} 0 & 0 & a & x-y & v_1 & \cdots & v_{n-2} \\ 0 & 0 & x+y & a & w_1 & \cdots & w_{n-2} \\ a & x+y & 0 & 0 & 0 & \cdots & 0 \\ x-y & a & 0 & 0 & 0 & \cdots & 0 \\ v_1 & w_1 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ v_{n-2} & w_{n-2} & 0 & 0 & 0 & \cdots & 0 \end{pmatrix} \in T_oM^{2n-1}_2 \subset {\mathfrak{p}}. \] From this we deduce the following result. \begin{thm} The homogeneous real hypersurface $M^{2n-1}_2$ in $({Q^n}^*,g)$ has four distinct constant principal curvatures $2$, $0$, $1$, $-1$ with multiplicities $1$, $2$, $n-2$, $n-2$, respectively. The principal curvature spaces $T_2$, $T_0$, $T_1$ and $T_{-1}$ are \[ T_2 = {\mathfrak{p}}_{\alpha_1} = \mathbb{R}\xi,\ T_0 = \mathbb{R} H_{\alpha_1+2\alpha_2} \oplus {\mathfrak{p}}_{\alpha_1+2\alpha_2} = {\mathcal{C}} \ominus {\mathcal{Q}} ,\ T_1 = {\mathfrak{p}}_{\alpha_1+\alpha_2},\ T_{-1} = {\mathfrak{p}}_{\alpha_2}. \] In particular, $T_1$ and $T_{-1}$ are mapped into each other by the structure tensor field $\phi$. Moreover, the shape operator $A$ and the structure tensor field $\phi$ of $M^{2n-1}_2$ satisfy \[ A \phi + \phi A = 0. \] \end{thm} Note that $T_1 \subset V(C_0)$ and $T_{-1} \subset JV(C_0)$. The eigenspaces $T_2$, $T_0$, $T_1$ and $T_{-1}$ of the shape operator $A$ and the eigenspaces $E_0$, $E_{-1}$ and $E_{-4}$ of the normal Jacobi operator $K = K_\zeta$ satisfy \[ T_2 = E_{-4} ,\ T_0 = E_0 ,\ T_{-1} \oplus T_1 = E_{-1}. \] It follows that $A$ and $K$ are simultaneously diagonalizable and hence $AK = KA$. Thus we have proved the following. \begin{cor} The homogeneous Hopf hypersurface $M^{2n-1}_2$ in $({Q^n}^*,g)$ is curvature-adapted. \end{cor} By construction, the integral curves of the Reeb vector field $\xi$ are congruent to a horocycle in a complex hyperbolic line $\mathbb{C} H^1(-4)$. Such a horocycle has constant geodesic curvature $2$. As in previous cases, this leads to the geometric interpretation of $M^{2n-1}_2$ being obtained by attaching isometric copies of the homogeneous complex hypersurface $P^{n-1} \cong \mathbb{C} H^{n-1}(-4)$ to the horocycle in a suitable way. Equivalently, $M^{2n-1}_2$ is the canonical extension of a horocycle in the boundary component $B_1 \cong \mathbb{C} H^1(-4)$. \section{Curvature} \label{curvature} In this section we compute the Ricci tensor ${\rm {Ric}}_\alpha$ and the scalar curvature $s_\alpha$ of the homogeneous Hopf hypersurface $M^{2n-1}_\alpha$ in $({Q^n}^*,g)$. Let $R_\alpha$, ${\rm {Ric}}_\alpha$, $s_\alpha$ be the Riemannian curvature tensor, the Ricci tensor, and the scalar curvature of $M^{2n-1}_\alpha$, respectively.
Let $A_\alpha$ and $K_\alpha$ be the shape operator and normal Jacobi operator of $M^{2n-1}_\alpha$ with respect to the unit normal vector $\zeta_\alpha$, respectively. The Gauss equation tells us that \[ g(\bar{R}(X,Y)Z, W) = g(R_\alpha(X, Y)Z,W) - g(A_\alpha Y,Z)g(A_\alpha X,W) + g(A_\alpha X,Z)g(A_\alpha Y,W) \] for all $X,Y,Z,W \in {\mathfrak{X}}(M^{2n-1}_\alpha)$. Contracting the Gauss equation gives, after some straightforward computations, the expression \[ {\rm {Ric}}_\alpha X = -2nX - K_\alpha X + \alpha A_\alpha X - A_\alpha^2X, \] where we used the fact that the Ricci tensor of $({Q^n}^*,g)$ is equal to $-2ng$ and ${\rm {tr}}(A_\alpha) = \alpha$ by Theorem \ref{mainthm}. Since the unit normal vector $\zeta_\alpha$ of $M^{2n-1}_\alpha$ is ${\mathfrak{A}}$-isotropic, the normal Jacobi operator $K_\alpha$ of $M^{2n-1}_\alpha$ satisfies \[ K_\alpha X = \begin{cases} 0 & ,\mbox{ if } X \in {\mathcal{C}} \ominus {\mathcal{Q}} = T_0, \\ -X & ,\mbox{ if } X \in {\mathcal{Q}} = T_{-1} \oplus T_1 ,\\ -4X & ,\mbox{ if } X \in {\mathcal{C}}^\perp = \mathbb{R}\xi = T_\alpha \end{cases} \] by Theorem \ref{mainthm} and the description of the Jacobi operator in Section \ref{tchq}. It follows that \[ {\rm {Ric}}_\alpha X = \begin{cases} -2nX & ,\mbox{ if } X \in {\mathcal{C}} \ominus {\mathcal{Q}} = T_0, \\ (-2n-\alpha)X & ,\mbox{ if } X \in T_{-1} , \\ (-2n+\alpha)X & ,\mbox{ if } X \in T_1 , \\ (-2n+4)X & ,\mbox{ if } X \in {\mathcal{C}}^\perp = \mathbb{R}\xi = T_\alpha. \end{cases} \] It follows that the Ricci tensor of $M^{2n-1}_\alpha$ has two (if $\alpha = 0$), three (if $\alpha = 4$) or four (if $\alpha \notin \{0,4\}$) constant eigenvalues. More specifically, for $\alpha = 0$ we obtain \[ {\rm {Ric}}_0 X = -2n X + 4\eta(X)\xi, \] which means that $M^{2n-1}_0$ is pseudo-Einstein (see \cite{Ko79}). \begin{prop} The minimal homogeneous real hypersurface $M^{2n-1}_0$ is a pseudo-\break Einstein Hopf hypersurface in $({Q^n}^*,g)$. In particular, the Ricci tensor ${\rm {Ric}}_0$ of $M^{2n-1}_0$ is $\phi$-invariant, that is, ${\rm {Ric}}_0 \circ \phi = \phi \circ {\rm {Ric}}_0$. \end{prop} We also see that \[ {\rm {Ric}}_\alpha \circ \phi + \phi \circ {\rm {Ric}}_\alpha = -4n \phi . \] This equation is motivated by Ricci solitons (see \cite{BS22}, Lemma 3.3.11). However, none of the homogeneous Hopf hypersurfaces $M^{2n-1}_\alpha$ is a Ricci soliton. By contracting the Ricci tensor we see that the scalar curvature of $M^{2n-1}_\alpha$ is independent of $\alpha$. \begin{prop} The scalar curvature $s_\alpha$ of the homogeneous Hopf hypersurface $M^{2n-1}_\alpha$ in $({Q^n}^*,g)$ does not depend on $\alpha$ and satisfies \[ s_\alpha = 4-2n(2n-1). \] \end{prop}
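For the reader's convenience, we also record the elementary trace computation behind the last proposition, which is not spelled out above: summing the eigenvalues of ${\rm {Ric}}_\alpha$ with their multiplicities $2$, $n-2$, $n-2$ and $1$ gives \[ s_\alpha = 2(-2n) + (n-2)(-2n-\alpha) + (n-2)(-2n+\alpha) + (-2n+4) = 4 - 2n(2n-1), \] and the two terms containing $\alpha$ cancel, which is precisely why $s_\alpha$ does not depend on $\alpha$.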
\section{Introduction} \label{sec:introduction} Single stars with main-sequence masses $M \lesssim 100\, M_\odot$ end their nuclear-burning lives as electron-degenerate objects or with central electron-degenerate cores. More specifically, the end state is a carbon-oxygen or oxygen-neon white dwarf (WD) in the case of low-mass stars (\textit{i.e.,}~ with $M \lesssim 6-8\, M_\odot$), or a degenerate oxygen-neon or iron core embedded in an extended non-degenerate stellar envelope in the case of more massive stars (\textit{i.e.,}~ $6-8\, M_\odot \lesssim M \lesssim 100\,M_\odot$) (see, \textit{e.g.,}~ \cite{herwig:05,woosley_02_a,poelarends:08} and references therein). Electron-degenerate spherically-symmetric objects become unstable to radial contraction once their mass exceeds the Chandrasekhar mass which, assuming zero temperature and no rotation, is given by $M_{\mathrm{Ch}} = 1.4575\, (Y_e/0.5)^2\,M_\odot\,$, where $Y_e$ is the number of electrons per baryon, or ``electron fraction''~\cite{chandrasekhar:38,shapteu:83}. The \emph{effective} Chandrasekhar mass $M_{\mathrm{Ch,eff}}$ of a WD or a stellar core increases somewhat with WD/core entropy (\textit{e.g.,}~ \cite{woosley_02_a}) and can grow considerably by rotation, in which case it is limited only by the onset of nonaxisymmetric instability (\textit{e.g.,}~ \cite{ostriker_68_b,yoon_04,shapteu:83}). The iron core of a massive star is pushed over its Chandrasekhar limit by the ashes of silicon shell burning and undergoes collapse to a protoneutron star (PNS), accelerated by photodisintegration of heavy nuclei and electron capture \cite{bethe:90}. In a high-density sub-$M_\mathrm{ch}$ oxygen-neon core of a less massive star, electron capture may decrease $M_\mathrm{Ch,eff}$, also leading to collapse \cite{nomoto_84_a,gutierrez_05_a}. In both cases, if an explosion results, the observational display is associated with a Type II/Ibc supernova (SN). On the other hand, a carbon-oxygen WD can be pushed over its stability limit through merger with or accretion from another WD (double-degenerate scenario) or by accretion from a non-degenerate companion star (single-degenerate scenario). Here, the WD generally experiences carbon ignition and thermonuclear runaway, leading to a Type Ia SN and leaving no compact remnant~\cite{livio_99_a}. However, at least theoretically, it is possible that massive oxygen-neon WDs\footnote{Previously, such WDs were expected to have a significant central $^{24}{\mathrm{Mg}}$ mass fraction, hence were referred to as oxygen-neon-magnesium WDs. Recent work based on up-do-date input physics and modern stellar evolution codes suggests that the mass fraction of $^{24}{\mathrm{Mg}}$ is much smaller than previously thought (\textit{e.g.,}~~\cite{siess:06,iben:97,ritossa:96}).} formed by accretion or merger, and, depending on initial mass, temperature, and accretion rate, also carbon-oxygen WDs, may grow to reach their $M_{\mathrm{Ch,eff}}$ or reach central densities sufficiently high ($\gtrsim 10^{9.7} - 10^{10}\,{\mathrm{g\,cm^{-3}}}$) for rapid electron capture to take place, triggering collapse to a PNS rather than thermonuclear explosion~\cite{canal:76,saio:85,nomoto:86,mochkovitch_89_a,nomoto_91,saio:98, uenishi_03_a,saio_04,yoon_04,yoon_05,yoon_07_a,kalogera_00_a, gutierrez_05_a}. This may result in a peculiar, in most cases probably sub-energetic, low-nickel-yield and short-lived transient~\cite{nomoto:86,woosley_92_a,fryer_99_a,dessart_06_a,dessart_07_a,metzger:09b}. 
This alternative to the Type Ia SN scenario is called \textit{``accretion-induced collapse''} (AIC) and will be the focus of this paper. The details of the progenitor WD structure and formation and the fraction of all WDs that evolve to AICs are presently uncertain. Binary population synthesis models \cite{yungelson_98_a,belczynski_05_a, kalogera_00_a} and constraints on $r$-process nucleosynthetic yields from previous AIC simulations \cite{fryer_99_a,qian:07} predict AIC to occur in the Milky Way at a frequency of $\sim 10^{-5}$ to $\sim10^{-8}\,{\mathrm{yr}}^{-1}$ which is $\sim 20-50$ times less frequent than the expected rate of standard Type Ia SNe~(\textit{e.g.,}~~\cite{vdb:91,madau:98,scannapieco:05,mannucci:05}). In part as a consequence of their rarity, but probably also due to their short duration and potentially weak electromagnetic display, AIC events have not been directly observed (but see \cite{perets:09}, who discovered a peculiar type-Ib SN in NGC 1032 that can be interpreted as resulting from an AIC). The chances of seeing a rare galactic AIC are dramatically boosted by the possibility of guiding electromagnetic observations by the detection of neutrinos and gravitational waves (GWs) emitted during the AIC process and a subsequent SN explosion. GWs, similar to neutrinos, are extremely difficult to observe, but can carry ``live'' dynamical information from deep inside electromagnetically-opaque regions. The inherent multi-D nature of GWs (they are lowest-order quadrupole waves) makes them ideal messengers for probing multi-D dynamics such as rotation, turbulence, or NS pulsations~\cite{thorne:87,andersson:03,ott:09rev}. The detection prospects for a GW burst from an AIC are significantly enhanced if theoretical knowledge of the expected GW signature of such an event is provided by computational modelling. In reverse, once a detection is made, detailed model predictions will make it possible to extract physical information on the AIC dynamics and the properties of the progenitor WD and, hence, will allow ``parameter-estimation'' of the source. Early spherically-symmetric (1D) simulations of AIC~\cite{mayle_88_a,baron_87_b,woosley_92_a} and more recent axisymmetric (2D) ones~\cite{fryer_99_a,dessart_06_a, dessart_07_a} have demonstrated that the dynamics of AIC is quite similar to standard massive star core collapse: During collapse, the WD separates into a subsonically and homologously collapsing ($v \propto r$) inner core and a supersonically collapsing outer core. Collapse is halted by the stiffening of the equation of state (EOS) at densities near nuclear matter density and the inner core rebounds into the still infalling outer core. An unshocked low-entropy PNS of inner-core material is formed. At its edge, a bounce shock is launched and initially propagates rapidly outward in mass and radius, but loses energy to the dissociation of heavy nuclei as well as to neutrinos that stream out from the optically thin postshock region. The shock stalls and, in the AIC case (but also in the case of the oxygen-neon core collapse in super-AGB stars \cite{kitaura_06_a,burrows:07c}), is successfully revived by the deposition of energy by neutrinos in the postshock region (\textit{i.e.,}~ the ``delayed-neutrino mechanism''~\cite{bethewilson:85,bethe:90}) or by a combination of neutrino energy deposition and magnetorotational effects in very rapidly rotating WDs~\cite{dessart_07_a}. 
But even without shock revival, explosion would occur when the WD surface layer is eventually accreted through the shock. Following the onset of explosion, a strong long-lasting neutrino-driven wind blows off the PNS surface, adding to the total explosion energy and establishing favorable conditions for $r$-process nucleosynthesis~\cite{woosley_92_a,fryer_99_a, dessart_06_a,dessart_07_a,arcones:07}. If the progenitor WD was rotating rapidly (and had a rotationally-enhanced $M_\mathrm{Ch,eff}$), a quasi-Keplerian accretion disk of outer-core material may be left after the explosion~\cite{dessart_06_a}. Metzger~et~al.~\cite{metzger:09,metzger:09b} recently proposed that this may lead to nickel-rich outflows that could significantly enhance the AIC observational display. Rotating iron core collapse and bounce is the most extensively studied and best understood GW emission process in the massive star collapse context (see, \textit{e.g.,}~~\cite{dimmelmeier_08_a} and the historical overview in~\cite{ott:09rev}). However, most massive stars (perhaps up to $\sim 99\%$ in the local universe) are likely to be rather slow rotators that develop little asphericity during collapse and in the early postbounce phase~\cite{heger:05,woosley:06b,ott:06spin} and produce PNSs that cool and contract to neutron stars with periods above $\sim 10\,\mathrm{ms}$ and parameter $\beta = E_\mathrm{rot}/|W|\lesssim 0.1\%$ \cite{ott:06spin}, where $E_\mathrm{rot}$ is the rotational kinetic energy and $|W|$ is the gravitational binding energy. This does not only reduce the overall relevance of this emission process, but also diminishes the chances for postbounce gravito-rotational nonaxisymmetric deformation of the PNS which could boost the overall GW emission~\cite{ott:09rev}. Axisymmetric rapidly rotating stars become unstable to nonaxisymmetric deformations if a nonaxisymmetric configuration with a lower total energy exists at a given $ \beta $ (see~\cite{stergioulas:03} for a review). The classical high-$\beta$ instability develops in Newtonian stars on a dynamical timescale at $\beta \gtrsim \beta_\mathrm{dyn} \simeq 27\%$ (the general-relativistic value is $\beta \gtrsim 25\%$ \cite{baiotti_07_a,Manca07}). A ``secular'' instability, driven by fluid viscosity or GW backreaction, can develop already at $ \beta \gtrsim \beta_{\mathrm{sec}} \simeq 14\% $ \cite{stergioulas:03}. Slower, but strongly differentially rotating stars may also be subject to a nonaxisymmetric dynamical instability at $ \beta $ as small as $ \sim 1\% $. This instability at low $\beta$ was observed in a number of recent 3D simulations (\textit{e.g.,}~ \cite{centrella_01_a, shibata:04a, ott_05_a, ou_06_a, ott:06phd, cerda_07_b, ott_07_a, ott_07_b,scheidegger:08}), and may be related to corotation instabilities in disks, but its nature and the precise conditions for its onset are presently not understood \cite{watts:05,saijo:06}. Stellar evolution theory and pulsar birth spin estimates suggest that most massive stars are rotating rather slowly (\textit{e.g.,}~ \cite{heger:05, ott:06spin}, but also \cite{cantiello:07,woosley:06b} for exceptions). Hence, rotating collapse and bounce and nonaxisymmetric rotational instabilities are unlikely to be the dominant GW emission mechanisms in most massive star collapse events \cite{ott:09rev}. 
The situation may be radically different in AIC: Independent of the details of their formation scenario, AIC progenitors are expected to accrete significant amounts of mass and angular momentum in their pre-AIC evolutions \cite{yoon_04,saio_04,yoon_05,uenishi_03_a, piersanti_03_a}. They may reach values of $\beta$ of up to $\sim 10\%$ \emph{prior} to collapse, according to the recent work of Yoon \& Langer~\cite{yoon_04,yoon_05}, who studied the precollapse stellar structure and rotational configuration of WDs with sequences of 2D rotational equilibria. Depending on the distribution of angular momentum in the WD, rotational effects may significantly affect the collapse and bounce dynamics and lead to a large time-varying quadrupole moment of the inner core, resulting in a strong burst of GWs emitted at core bounce. In addition, the postbounce PNS may be subject to the high-$\beta$ rotational instability (see \cite{liu_01_a,liu_02_a} for an investigation via equilibrium sequences of PNSs formed in AIC) or to the recently discovered low-$\beta$ instability. Most previous (radiation-)hydrodynamic studies of AIC have either been limited to 1D~\cite{mayle_88_a,baron_87_b,woosley_92_a} or were 2D, but did not use consistent 2D progenitor models in rotational equilibrium~\cite{fryer_99_a}. Fryer, Hughes and Holz~\cite{fryer:02} presented the first estimates for the GW signal emitted by AIC based on one model of~\cite{fryer_99_a}. Drawing from the Yoon \& Langer AIC progenitors~\cite{yoon_04,yoon_05}, Dessart~et~al.~\cite{dessart_06_a,dessart_07_a} have recently performed 2D Newtonian AIC simulations with the multi-group flux-limited diffusion (MGFLD) neutrino radiation-(M)HD code VULCAN/2D \cite{livne:93,livne:07,burrows:07a}. They chose two representative WD configurations for slow and rapid rotation with central densities of $ 5 \times 10^{10} \ {\mathrm{g\,cm}}^{-3} $ and total masses of $ 1.46 M_\odot $ and $ 1.92 M_\odot $. Both models were set up with the differential rotation law of \cite{yoon_04,yoon_05}. The $1.46 M_\odot$ model had zero rotation in the inner core and rapid outer core rotation while the $1.92 M_\odot$ was rapidly rotating throughout (ratio $\Omega_\mathrm{max,initial} / \Omega_\mathrm{center,initial} \sim 1.5$). Dessart~et~al.~\cite{dessart_06_a,dessart_07_a} found that rapid electron capture in the central regions of both models led to collapse to a PNS within only a few tens of milliseconds and reported successful neutrino-driven~\cite{dessart_06_a} and magnetorotational explosions~\cite{dessart_07_a} with final values of $\beta$ (\textit{i.e.,}~ a few hundred milliseconds after core bounce) of $\sim 6\%$ and $\sim 26\%$, for the $1.46 M_\odot$ and $1.92 M_\odot$ models, respectively\footnote{These numbers are for the non-MHD simulations of~\cite{dessart_06_a}. In the MHD models of ~\cite{dessart_07_a}, an $\Omega$-dynamo builds up toroidal magnetic field, reducing the overall rotational energy and $\beta$.}. The analysis in \cite{dessart_06_a,ott:06phd,ott:09rev} of the GW signal of the Dessart~et~al.\ models showed that the morphology of the AIC rotating collapse and bounce gravitational waveform is reminiscent of the so-called Type~III signal first discussed by Zwerger \& M\"uller~\cite{zwerger_97_a} and associated with small inner core masses and a large pressure reduction at the onset of collapse in the latter's polytropic models. In this paper, we follow a different approach than Dessart~et~al.~\cite{dessart_06_a,dessart_07_a}. 
We omit their detailed and computationally-expensive treatment of neutrino radiation transport in favor of a simple, yet effective deleptonization scheme for the collapse phase \cite{liebendoerfer_05_a}. This simplification, while limiting the accuracy of our models at postbounce times $\gtrsim 5-10\,\mathrm{ms}$, (i) enables us to study a very large set of precollapse WD configurations and their resulting AIC dynamics and GW signals and, importantly, (ii) allows us to perform these AIC simulations in \emph{general relativity}, which is a crucial ingredient for the accurate modeling of dynamics in regions of strong gravity inside and near the PNS. Furthermore, as demonstrated by \cite{dimmelmeier_02_b,dimmelmeier_07_a,dimmelmeier_08_a}, general relativity is required for qualitatively and quantitatively correct predictions of the GW signal of rotating core collapse. We focus on the collapse and immediate postbounce phase of AIC and perform an extensive set of 114\ 2D general-relativistic hydrodynamics simulations. We analyze systematically the AIC dynamics and the properties of the resulting GW signal. We explore the dependence of nonrotating and rotating AIC on the precollapse WD rotational setup, central density, core temperature, and core deleptonization, and study the resulting PNS's susceptibility to rotational nonaxisymmetric deformation. Furthermore, motivated by the recent work of Metzger~et~al.\ \cite{metzger:09,metzger:09b}, who discussed the possible enhancement of the AIC observational signature by outflows from PNS accretion disks, we study the dependence of disk mass and morphology on WD progenitor characteristics and rotational setup. We employ the general-relativistic hydrodynamics code \textsc{CoCoNuT} ~\cite{dimmelmeier_02_b,dimmelmeier_05} and neglect MHD effects since they were shown to be small in the considered phases unless the precollapse magnetic field strength is extremely large ($B \gtrsim 10^{12}\, {\mathrm G}$, \textit{e.g.,}~ \cite{obergaulinger_06_a,dessart_07_a,burrows_07_b}). We employ a finite-temperature microphysical nuclear EOS in combination with the aforementioned deleptonization treatment of \cite{liebendoerfer_05_a}. The precollapse 2D rotational-equilibrium WDs are generated according to the prescription of Yoon \& Langer~\cite{yoon_04,yoon_05}. The plan of the paper is as follows. In Sec.~\ref{sec:methods}, we introduce the numerical methods employed and discuss the generation of our 2D rotational-equilibrium precollapse WD models as well as the parameter space of WD structure and rotational configuration investigated. In Sec.~\ref{sec:colldyn}, we discuss the overall AIC dynamics and the properties of the quasi-Keplerian accretion disks seen in many models. Sec.~\ref{sec:GW} is devoted to a detailed analysis of the GW signal from rotating AIC. There, we also assess the detectability by current and future GW observatories and carry out a comparison of the GW signals of AIC and massive star iron core collapse. In Sec.~\ref{sec:rotinst}, we study the postbounce rotational configurations of the PNSs in our models and assess the possibility for nonaxisymmetric rotational instabilities. In Sec.~\ref{sec:summary}, we present a critical summary and outlook. 
\section{Methods and Initial Models} \label{sec:methods} \subsection{The General-Relativistic Hydrodynamics Code} \label{sec:evolution_code} We perform our simulations in $ 2 + 1 $ dimensions using the \textsc{CoCoNuT}\ code~\cite{dimmelmeier_02_a, dimmelmeier_05} which adopts the conformally-flat approximation of general relativity~\cite{isenberg:08}. This has been shown to be a very good approximation in the context of stellar collapse to PNSs \cite{ott_07_a,ott_07_b,cerda:05}. \textsc{CoCoNuT}\ solves the metric equations as formulated in \cite{CorderoCarrion:2008nf} using spectral methods as described in \cite{dimmelmeier_05}. The relativistic hydrodynamics equations are solved via a finite-volume approach, piecewise parabolic reconstruction, and the HLLE approximate Riemann solver \cite{einfeldt:88}. \textsc{CoCoNuT}\ uses Eulerian spherical coordinates $ \{r, \theta\} $ and for our purposes assumes axisymmetry. For the computational grid, we choose 250 logarithmically-spaced, centrally-condensed radial zones with a central resolution of $ 250 $ m and 45 equidistant angular zones covering $ 90^\circ $. We have performed test calculations with different grid resolutions to ascertain that the grid setup specified above is appropriate for our simulations. The space between the surface of the star and outer boundary of the finite difference grid is filled with an artificial atmosphere. We assume a constant density and stationary atmosphere in all zones where density drops below a prescribed threshold of $ 7 \times 10^5 \, {\mathrm{g \, cm}^{-3}} $, a value marginally larger than the lowest density value in the EOS table employed in our calculations (cf. Sec.~\ref{sec:eos}). The atmosphere is reset after each timestep in order to ensure that it adapts to the time-dependent shape of the stellar surface. For further details of the formulations of the hydrodynamics and metric equations as well as their numerical implementation in \textsc{CoCoNuT}, the reader is referred to~\cite{dimmelmeier_08_a,dimmelmeier_05,CorderoCarrion:2008nf}. The version of \textsc{CoCoNuT}\ employed in this study does not include a nuclear reaction network. Hence, we, like Dessart~et~al.~\cite{dessart_06_a,dessart_07_a}, ignore nuclear burning which may be relevant in the outer core of AIC progenitors where material is not in nuclear statistical equilibrium (NSE), but still sufficiently hot for oxygen/neon/magnesium burning to occur. This approximation is justified by results from previous work of \cite{woosley_92_a,kitaura_06_a} who included nuclear burning and did not observe a strong dynamical effect. \subsubsection{Equation of State} \label{sec:eos} We make use of the finite-temperature nuclear EOS of Shen et al. (``Shen-et-al EOS'' in the following,~\cite{shen_98_a,shen_98_b}) which is based on a relativistic mean-field model and is extended with the Thomas-Fermi approximation to describe the homogeneous phase of matter as well as the inhomogeneous matter composition. The parameter for the incompressibility of nuclear matter is $ 281 $ MeV and the symmetry energy has a value of $ 36.9 $ MeV. The Shen-et-al EOS is used in tabulated fashion and in our version (equivalent to that used in~\cite{marek_05_a,dimmelmeier_08_a}) includes contributions from baryons, electrons, positrons and photons. The Shen-et-al EOS table used in our simulation has 180, 120, and 50 equidistant points in $\log_{10} \rho$, $\log_{10} T$, and $Y_e$, respectively. 
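To make the use of such a tabulated EOS concrete, the following minimal sketch shows one way to query a three-dimensional $(\log_{10}\rho,\,\log_{10}T,\,Y_e)$ table by multilinear interpolation. The axis ranges and the table contents below are placeholders rather than the actual Shen-et-al data, and a production code reads the table from file instead of allocating it by hand.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Table axes with the dimensions quoted above (180 x 120 x 50 equidistant
# points); the ranges here are illustrative placeholders only.
log_rho = np.linspace(5.8, 15.1, 180)    # log10(rho / g cm^-3)
log_T   = np.linspace(-1.0, 2.0, 120)    # log10(T / MeV)
y_e     = np.linspace(0.015, 0.56, 50)   # electron fraction

# Placeholder table of log10(P); in practice this is read from the EOS file.
log_p_table = np.zeros((180, 120, 50))

interp_log_p = RegularGridInterpolator((log_rho, log_T, y_e), log_p_table)

# Query a single fluid state: rho = 1e10 g cm^-3, T = 0.5 MeV, Y_e = 0.45.
state = np.array([np.log10(1.0e10), np.log10(0.5), 0.45])
print(interp_log_p(state))               # interpolated log10(P)
\end{verbatim}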
The table ranges are $6.4 \times 10^{5}\, {\mathrm{g \, cm^{-3}}} \le \rho \le 1.1 \times 10^{15}\,{\mathrm{g \, cm^{-3}}}$, $0.1\,{\mathrm{MeV}} \le T \le 100.0\,{\mathrm{MeV}}$, and $0.015 \le Y_e \le 0.56$. Our variant of the Shen-et-al EOS assumes that NSE holds throughout the entire $\{\rho,T,Y_e\}$ domain. In reality, NSE generally holds only at $T \gtrsim 0.5 $ MeV. At lower temperatures, a nuclear reaction network and, the advection of multiple chemical species and accounting for their individual ideal-gas contributions to the EOS is necessary for a correct thermodynamic description of the baryonic component of the fluid. However, since the electron component of the EOS is vastly dominant in the central regions of AIC progenitors (and also in the central regions of iron cores), the incorrect assumption of NSE at low temperatures can lead to only a small error in the overall (thermo)dynamics of the collapse and early postbounce phase. \subsubsection{Deleptonization during Collapse and Neutrino Pressure} \label{sec:deleptonization} To account for the dynamically highly important change of the electron fraction $Y_e$ by electron capture during collapse, we employ the approximate prescription proposed by Liebend\"orfer~\cite{liebendoerfer_05_a}. Liebend\"orfer's scheme is based on the observation that the local $Y_e$ of each fluid element during the contraction phase can be rather accurately parametrized from full radiation-hydrodynamics simulations as a function of density alone. Liebend\"orfer demonstrated the effectiveness of this parametrization in the case of spherical symmetry, but also argued that it should still be reliable to employ a parametrization $\overline{Y_e\!}(\rho)$ obtained from a 1D radiation-hydrodynamics calculation in a 2D or 3D simulation, since electron capture depends more on local matter properties and less on the global dynamics of the collapsing core. On the basis of this argument, a $\overline{Y_e\!}(\rho)$ parametrization was applied in the rotating iron core collapse calculations of \cite{ott_07_a,ott_07_b,dimmelmeier_07_a,dimmelmeier_08_a,scheidegger:08}. Here, we use the same implementation as discussed in \cite{dimmelmeier_08_a} and track the changes in $Y_e$ up to the point of core bounce. After bounce, we simply advect $Y_e$. Furthermore, as in \cite{dimmelmeier_08_a}, we approximate the pressure contribution due to neutrinos in the optically-thick regime ($\rho \gtrsim 2 \times 10^{12}\, {\mathrm{g\,cm^{-3}}}$) by an ideal Fermi gas, following the prescription of \cite{liebendoerfer_05_a}. This pressure contribution and the energy of the trapped neutrino radiation field are included in the matter stress-energy tensor and coupled with the hydrodynamics equations via the energy and momentum source terms specified in~\cite{ott_07_b}. \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f01.eps}} \caption{Average electron fraction $ Y_e $ in the innermost $2$ km in the collapsing WD as a function of density obtained from 2D MGFLD simulations with the VULCAN/2D code for models $ 1.46 M_\odot $ and $ 1.92 M_\odot $ of Dessart~et~al.~\cite{dessart_06_a}. 
Both models were set up with the same initial dependence of temperature on density and a temperature $T_{\mathrm 0} = 1.0 \times 10^{10}$ K (see Sec.~\ref{sec:initial_model} for details).} \label{fig:ye_vs_rho_VULCAN} \end{figure} The deleptonization scheme described here is applicable only until core bounce and can neither track the postbounce neutrino burst (see, \textit{e.g.,}~ \cite{thompson_03_a}) nor neutrino cooling/heating and the postbounce deleptonization of the PNS. The dynamics in the very early postbounce evolution (up to $\sim 10\,\mathrm{ms}$) are unlikely to be dramatically affected by this limitation, but it should be kept in mind when interpreting results from later postbounce times. For our AIC simulations we obtain $\overline{Y_e\!}(\rho)$ data from the 2D Newtonian radiation-hydrodynamics simulations carried out by Dessart~et~al.~\cite{dessart_06_a} with the VULCAN/2D code~\cite{burrows_07_b,livne:93,livne:07} in its MGFLD variant. We use these data because of their ready availability, but point out that the microphysics \cite{brt:06} used in VULCAN/2D does not yet include the updated electron capture rates of \cite{langanke_00_a}. Moreover, VULCAN/2D presently does not treat velocity-dependent terms in the transport equation and neglects neutrino--electron scattering, both of which may have some impact on the evolution of $Y_e$ in the collapse phase \cite{thompson_03_a,buras_06_a}. In Fig.~\ref{fig:ye_vs_rho_VULCAN} we plot representative $\overline{Y_e\!}(\rho)$ trajectories obtained from VULCAN/2D AIC simulations. At nuclear density, these data predict $Y_e \sim 0.18$ which is low compared to $Y_e \gtrsim 0.22-0.26$ seen in simulations of iron core collapse \cite{buras_06_a,thompson_03_a, liebendoerfer_05_a,dimmelmeier_08_a} and oxygen-neon core collapse \cite{kitaura_06_a}. This difference is not fully understood, but ({\it i}) could be physical and due to the WD initial data used here and in \cite{dessart_06_a} or ({\it ii}) may be related to the radiation transport approximations and microphysics treatment in VULCAN/2D. To measure the importance of these uncertainties in $\overline{Y_e\!}(\rho)$, we perform calculations with systematic variations of $\overline{Y_e\!}(\rho)$ due either to changes in the precollapse WD temperature or to an ad-hoc scaling (see Sec.~\ref{sec:tempye}). Since AIC progenitors may be extremely rapidly rotating, it is not clear that the $\overline{Y_e\!}(\rho)$ parametrization is indeed independent of the specific model and rotational setup. The $\overline{Y_e\!}(\rho)$ trajectories shown in Fig.~\ref{fig:ye_vs_rho_VULCAN} result from the collapse simulations of the slowly rotating $1.46\, M_\odot $ model and of the rapidly rotating $1.92\, M_\odot$ model of Dessart~et~al. The very close agreement of the two curves suggests that rotational effects have only a small influence on the prebounce deleptonization and confirms the supposition of~\cite{liebendoerfer_05_a} at the level of the MGFLD and microphysics treatment in VULCAN/2D. All $\overline{Y_e\!}(\rho)$ data used in this study are available from~\cite{stellarcollapseAIC}. \subsection{Precollapse White Dwarf Models} \label{sec:initial_model} For constructing 2D WD models in rotational equilibrium with a given rotation law, we follow~\cite{yoon_05} and employ the self-consistent field (SCF) method~\cite{ostriker_68_a, ostriker_68_b, hachisu_86, komatsu_89_a} in Newtonian gravity. For the purpose of the SCF method, we assume that the WD is cold and has a constant $Y_e$ of $0.5$.
After finding the 2D equilibrium configuration, we impose a temperature and $Y_e$ distribution motivated by previous work \cite{dessart_06_a, woosley_92_a}. Ideally, the WD initial model should be evolved in a multi-dimensional stellar evolution code with a finite-temperature EOS, accounting for weak processes such as neutrino cooling and electron capture. Due to the unavailability of such self-consistent AIC progenitors, we resort to the treatment that we discuss in detail in the remainder of this section. \subsubsection{Implementation of the Self-Consistent Field Method} Our implementation of the Newtonian SCF method has been tested by reproducing the WD models presented in~\cite{hachisu_86, yoon_05}, finding excellent agreement. The compactness parameter $GM/Rc^2$ of the highest-density WD models considered here reaches $\sim 5 \times 10^{-3}$, hence general-relativistic effects at the precollapse stage are small and the error introduced by Newtonian WD models is therefore negligible. Hereafter we will assume that the \textit{Newtonian mass} of the equilibrium model represents the \textit{baryon mass} accounted for when solving the general-relativistic equations. The equation governing the stellar equilibrium is given by \begin{equation} \label{eqq:wd_equilibrium} \int \rho^{-1} \ d P + \Phi - \int \Omega^2 \ \varpi \ d \varpi = C \ , \end{equation} where $ \Phi $ is the gravitational potential, $ \Omega $ is the angular velocity, $ \varpi $ is the radial cylindrical coordinate and $ C $ is a constant that will be determined from boundary conditions using the SCF iterations as discussed below. White dwarfs are stabilized against gravity by electron degeneracy pressure. For constructing precollapse WDs, we assume complete degeneracy for which the WD EOS (\textit{e.g.,}~ \cite{ostriker_68_b}) is given by \begin{equation} \label{eqq:eos_el_gas} P = A[x(2x^2-3)(x^2+1)^{1/2}+3\sinh^{-1}x]\,;\, ~ x = \left (\rho / B \right)^{1/3}, \end{equation} where $A = 6.01 \times 10^{22} ~ {\mathrm{dyn ~ cm^{-2}}} $ and $B = 9.82 \times 10^5\, Y^{-1}_e ~ {\mathrm{g ~ cm^{-3}}} $. We set $Y_e = 0.5$, assuming at this stage that no electron capture has taken place. The integral $ \int \rho^{-1} d P $ in Eq.~(\ref{eqq:wd_equilibrium}) is the enthalpy $H$ which, given our choice of WD EOS, can be expressed analytically as \begin{equation} \label{eqq:enthalpy_wd} H = \frac{8A}{B} \left[ 1 + \left( \frac{\rho}{B} \right)^{2 / 3} \right]^{1/2}\,\,. \end{equation} With this, Eq.~(\ref{eqq:wd_equilibrium}) trivially becomes \begin{equation} \label{eqq:wd_equilibrium_2} H = C - \Phi + \int \Omega^2 \ \varpi \ d \varpi \ . \end{equation} Following the SCF method, we proceed to first produce a trial density distribution $ \rho(r, \theta) $ and impose a rotation law (discussed in the following Sec.~\ref{sec:rotlaw}). We then calculate $ C $ by using the value for the maximum density and the angular velocity at the center of the star $ \Omega( \varpi = 0) = \Omega_{\mathrm{c,i}} $. Based on the trial density distribution, we calculate $ H $ via Eq.~(\ref{eqq:wd_equilibrium_2}) and then update the density distribution based on $H$ using the analytic expression (\ref{eqq:enthalpy_wd}). This updated density distribution in turn results in a new value for $H$. We iterate this procedure until the maximum absolute values of the relative changes of $H$, $\Omega$, and $\rho$ between successive iterations all become less than $10^{-3}$.
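To illustrate the structure of this iteration, we include a minimal, purely illustrative one-dimensional (nonrotating) sketch of the SCF fixed-point loop. The radial grid, the trial profile and the crude quadrature are assumptions made for brevity; the actual models are two-dimensional and include the centrifugal term.
\begin{verbatim}
import numpy as np

G = 6.674e-8                      # cgs
A = 6.01e22                       # dyn cm^-2
B = 9.82e5 / 0.5                  # g cm^-3, for Y_e = 0.5

def enthalpy(rho):
    # H = (8A/B) [1 + (rho/B)^(2/3)]^(1/2)
    return 8.0 * A / B * np.sqrt(1.0 + (rho / B) ** (2.0 / 3.0))

def rho_of_enthalpy(H):
    # inverse of the relation above; zero outside the star
    x2 = (B * H / (8.0 * A)) ** 2 - 1.0
    return B * np.clip(x2, 0.0, None) ** 1.5

r = np.linspace(1.0e5, 4.0e8, 4000)          # radial grid [cm]
dr = r[1] - r[0]
rho_c = 4.0e9                                # central density [g cm^-3]
rho = rho_c * np.exp(-(r / 1.0e8) ** 2)      # trial density distribution

for it in range(200):
    # Newtonian potential of a spherical distribution:
    # Phi(r) = -G [ M(r)/r + 4 pi int_r^R rho(r') r' dr' ]
    m_enc = 4.0 * np.pi * np.cumsum(rho * r ** 2) * dr
    outer = 4.0 * np.pi * np.cumsum((rho * r)[::-1])[::-1] * dr
    phi = -G * (m_enc / r + outer)

    C = enthalpy(rho_c) + phi[0]             # constant fixed at the centre
    rho_new = rho_of_enthalpy(np.clip(C - phi, 0.0, None))

    change = np.max(np.abs(rho_new - rho)) / rho_c
    rho = rho_new
    if change < 1.0e-3:                      # convergence criterion as in the text
        break

mass = 4.0 * np.pi * np.sum(rho * r ** 2) * dr
print(it, change, mass / 1.989e33)           # iterations, residual, mass in M_sun
\end{verbatim}
For the chosen central density of $4 \times 10^9\,{\mathrm{g\,cm^{-3}}}$ this toy version should settle close to the $\simeq 1.39\,M_\odot$ expected for a cold, nonrotating WD at that central density.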
\subsubsection{Progenitor Rotational Configuration} \label{sec:rotlaw} Our axisymmetric progenitor WD models are assumed to be either in \emph{uniform} rotation or to follow the \emph{differential} rotation law proposed by Yoon \& Langer \cite{yoon_05}. The latter argued that the rotation law of a WD that accretes matter at high rates ($ > 10^{-7}\, M_\odot \, {\mathrm{yr}}^{-1}$) is strongly affected by angular momentum transport via the dynamical shear instability (DSI) in the inner region, and due to the secular shear instability (as well as Eddington-Sweet circulations \cite{tassoul_00}) in the outer layers. According to their results, the shear rate in the core remains near the threshold value for the onset of the DSI. This results in a characteristic rotation law which has an absolute maximum in the angular velocity just above the shear-unstable core. We define $ \varpi_{\mathrm p} $ as the position of this maximum. This position is linked to layers with a density as low as several percent of the WD central density so that \begin{equation} \rho_\mathrm{i}(\varpi = \varpi_{\mathrm p}, \ z = 0) = f_{\mathrm p} \rho_{\mathrm{c,i}}, \end{equation} and where, following~\cite{dessart_06_a, yoon_05}, we choose $ f_{\mathrm p} =\{ 0.05, \ 0.1 \}$ in our models. (Note that the differential rotation law adopted for the models of~\cite{dessart_06_a} had $f_{\mathrm p} = 0.05 $). In the inner regions with $ \varpi < \varpi_{\mathrm p} $, we have \begin{equation} \label{eqq:omega_inner_core} \Omega(\varpi) = \Omega_{\mathrm{c,i}} + \int_0^\varpi \frac{f_{\mathrm{sh}} \sigma_{\mathrm{DSI, crit}}}{\varpi'}{d \varpi'}, \end{equation} where $ \Omega_{\mathrm{c,i}} $ is the angular velocity at the center and $ \sigma_{\mathrm{DSI, crit}} $ is the threshold value of the shear rate in the inner core for the onset of the DSI. $f_{\mathrm{sh}}$ is a dimensionless parameter ($\le 1.0$) describing the deviation of the shear rate from $ \sigma_{\mathrm{DSI, crit}} $. We compute $ \sigma_{\mathrm{DSI, crit}} $ assuming homogeneous chemical composition and constant temperature, in which case $ \sigma_{\mathrm{DSI, crit}} $ can be estimated as (\textit{cf.}~ Eq.(7) of~\cite{yoon_04}): \begin{eqnarray} \label{eqq:sigma_dsi} \sigma_{\mathrm{DSI, crit}}^2 & \simeq & 0.2 \left( \frac{g}{10^9 ~ {\mathrm{cm ~ s^{-2}}}} \right) \\ \nonumber & \times & \left( \frac{\delta}{0.01} \right) \left( \frac{H_\mathrm{p}}{8 \times 10^7 ~ {\mathrm{cm}} } \right)^{-1} \left( \frac{ \nabla_{\mathrm{ad}} }{ 0.4 } \right) \ , \end{eqnarray} where $ g $ is the free-fall acceleration, $ H_\mathrm{p} $ is the pressure scale height ($ =-dr / d \ln P $), $ \nabla_{\mathrm{ad}} $ is the adiabatic temperature gradient ($ = -(\partial \ln T / \partial \ln P)_s $ where $s$ is the specific entropy) and $ \delta = (\partial \ln \rho / \partial \ln T)_P $. The quantities $ \delta $, $ H_\mathrm{p}$ and $ \nabla_{\mathrm{ad}} $ are computed using the routines of Blinnikov~et~al.~\cite{blinnikov_96}. At the equatorial surface, the WD is assumed to rotate at a certain fraction $ f_{\mathrm K} $ of the local Keplerian angular velocity $ \Omega_{\mathrm K}$: \begin{equation} \label{eqq:omega_surface} \Omega(R_\mathrm{e}) = f_{\mathrm K} \Omega_{\mathrm K}(R_\mathrm{e}), \end{equation} where $ R_\mathrm{e} $ is the equatorial radius of the WD and where we have set $ f_{\mathrm K} = 0.95$. 
In the region between $ \varpi_{\mathrm p} $ and $ R_\mathrm{e} $, we again follow~\cite{yoon_05} and adopt the following rotation law: \begin{equation} \label{eqq:omega_outer_core} \Omega(\varpi) / \Omega_{\mathrm K} = \Omega(\varpi_{\mathrm p}) / \Omega_{\mathrm K} (\varpi_{\mathrm p}) + {\cal C} (\varpi - \varpi_{\mathrm p})^a\,, \end{equation} where the constant $ {\cal C} $ is determined for a given value of $ a $ as \begin{equation} {\cal C} = \frac{f_{\mathrm K} - \Omega(\varpi_{\mathrm p}) / \Omega_{\mathrm K} (\varpi_{\mathrm p})} {(R_\mathrm{e} - \varpi_{\mathrm p})^a}\,. \end{equation} The choice of the exponent $ a $ does not have a strong impact on the WD structure because of the constraints imposed by $ \Omega(\varpi_{\mathrm p}) $ and $ \Omega(R_\mathrm{e})$ at each boundary. In our study, we adopt $ a = 1.2 $. For further details, we refer the reader to Sec.~2.2 of \cite{yoon_05}. Saio \& Nomoto~\cite{saio_04} argued that turbulent viscosity resulting from a combination of a baroclinic instability (see, \textit{e.g.,}~ \cite{pedlosky_87}; neglected by Yoon \& Langer \cite{yoon_04,yoon_05}) and the DSI is so efficient in transporting angular momentum that the angular velocity becomes nearly uniform in the WD interior, while only surface layers with mass $ \lesssim 0.01 M_\odot $ rotate differentially~\cite{saio_04}. Piro~\cite{piro_08_a}, who also considered angular momentum transport by magnetic stresses, confirmed these results. Hence, in order to study the suggested case of uniform precollapse WD rotation, we complement our differentially rotating WD models with a set of uniformly rotating AIC progenitors. \subsubsection{Initial Temperature Profile} \label{sec:temp} Because our initial models are constructed by imposing hydrostatic equilibrium (Eq.~(\ref{eqq:wd_equilibrium})) with a barotropic EOS (Eq.~(\ref{eqq:eos_el_gas})), the WD structure is independent of temperature. However, the latter is needed as input for the finite-temperature nuclear EOS used in our AIC simulations. We follow Dessart~et~al.~\cite{dessart_06_a} and impose a scaling of the temperature with density according to \begin{equation} \label{eqq:temp_profile} T (\varpi, z) = T_{\mathrm 0} \left[ \rho (\varpi, z) / \rho_{\mathrm 0} \right]^{0.35} , \end{equation} where $(\varpi, z)$ are cylindrical coordinates and $\rho_\mathrm{0}$ is the density at which the stellar temperature equals $T_\mathrm{0}$. \subsubsection{Initial Electron Fraction Profile} \label{IEFP} For the purpose of constructing AIC progenitor WDs in rotational equilibrium we assume that no electron capture has yet taken place and set $Y_e = 0.5$. A real AIC progenitor, however, will have seen some electron captures on Ne/Mg/Na nuclei (\textit{e.g.,}~~\cite{gutierrez_05_a}) before the onset of dynamical collapse. In addition, electrons will be captured easily by free protons that are abundant at the temperatures of the WD models considered here. Hence, a $Y_e$ of $0.5$ is rather inconsistent with real WD evolution. Dessart~et~al.~\cite{dessart_06_a}, who started their simulations with $Y_e = 0.5$ models, observed an early burst of electron capture. This led to a significant initial drop of $Y_e$ that leveled off after $5-10$ ms, beyond which the $Y_e$ profile evolved in a qualitatively similar fashion to what is known from iron core collapse (see Fig.~\ref{fig:ye_vs_rho_VULCAN} which depicts this drop of $Y_e$ at low densities).
To account for this, we adopt as initial $\overline{Y}_e(\rho)$ a parametrization obtained from the equatorial plane of the models of Dessart~et~al.~\cite{dessart_06_a} at $\sim 7$ ms into their evolution when the initial electron capture burst has subsided. We use these $\overline{Y}_e(\rho)$ data for the $Y_e$ evolution of the low-density ($\rho < \rho_\mathrm{c,i}$) part of the WD during collapse. \subsection{Parameter Space} \label{sec:parameter_space} The structure and thermodynamics of the AIC progenitor and the resulting dynamics of the collapse depend on a variety of parameters that are constrained only weakly by theory and observation (\textit{e.g.,}~ \cite{nomoto_91,yoon_04,yoon_05}). Here, we study the dependence on the central density, the rotational configuration and the core temperature. In the following we lay out our parameter choices and discuss the nomenclature of our initial models whose key properties we summarize in Table~\ref{tab:initial_models}. \subsubsection{Progenitor White Dwarf Central Density} \label{sec:parameter_space:rhoc} In order to investigate the impact of the precollapse central WD density $\rho_\mathrm{c,i}$ on the collapse dynamics, we consider sequences of WD models with central densities in the range from $ 4 \times 10^9 \, {\mathrm{g \, cm}^{-3}} $ to $ 5 \times 10^{10} \, {\mathrm{g \, cm^{-3}}} $. This range of densities is motivated by previous studies arguing that WDs in this range of $\rho_\mathrm{c,i}$ may experience AIC \cite{nomoto_91,yoon_04,dessart_06_a}. We therefore choose a set of four central densities, \textit{i.e.,}~ $4 \times 10^9, \ 1 \times 10^{10}, \ 2 \times 10^{10}, \ 5 \times 10^{10}\,~{\mathrm{g\,cm}}^{-3}$, and correspondingly begin our model names with letters ${\rm A,~B,~C,~D}$. We perform AIC simulations of nonrotating (spherically-symmetric) WDs with central density choices A-D and restrict the rotating models to the limiting central density choices A and D. In Fig.~\ref{fig:rho_e_non_rotating} we plot radial density profiles of our nonrotating WD models to show the strong dependence of the WD compactness on the choice of central density. This aspect will prove important for the understanding of the collapse dynamics of rapidly rotating models. \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f02.eps}} \caption{The radial profile of the rest-mass density for nonrotating white dwarf models AU0, BU0, CU0 and DU0.} \label{fig:rho_e_non_rotating} \end{figure} \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f03.eps}} \caption{Upper panels: angular velocity as a function of equatorial radius (left panel) and enclosed mass coordinate (right panel) for three representative precollapse WD models AD3, AD5 and AD10. Lower panels: Angular velocity normalized to the local Keplerian value as a function of equatorial radius and enclosed mass for the same models.} \label{fig:precollapse_omega} \end{figure} \begin{figure*}[t] \centerline{\epsfxsize = 6.5 cm \epsfbox{Figures/f04a.eps} \hspace{-0.8cm} \epsfxsize = 6.5 cm \epsfbox{Figures/f04b.eps} \hspace{-0.8cm} \epsfxsize = 6.5 cm \epsfbox{Figures/f04c.eps}} \caption{Colormap of the rest mass density for the precollapse white dwarf models AD1 (left panel), AD5 (center panel), and AD10 (right panel). The apparent ruggedness of the WD surface layers is a result of the finite resolution of our computational grid and the mapping procedure in the visualization tool.
The ruggedness has no influence on the collapse and postbounce dynamics of the inner core.} \label{fig:contour_rho} \end{figure*} \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f05.eps}} \caption{Angular velocity as a function of equatorial radius for model AD10 and varying values of the dimensionless shear parameter $f_{\mathrm{sh}}$ that controls the rate at which the angular velocity increases with $ \varpi $ in the region $ \varpi < \varpi_{\mathrm p}$.} \label{fig:omega_vs_fsh} \end{figure} \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f06.eps}} \caption{Parameter $\beta_\mathrm{i}$ versus central angular velocity $ \Omega_{\mathrm{c,i}} $ for our AIC progenitor WD model set.} \label{fig:beta_vs_omega_precollapse} \end{figure} \subsubsection{Progenitor White Dwarf Rotational Configuration} \label{sec:progenitor_rotation} Since the rotational configuration of AIC progenitor WDs is only poorly constrained, we consider uniformly rotating configurations ($ \Omega_\mathrm{i} = \Omega_{\mathrm{c,i}} $ everywhere) as well as a variety of differentially rotating WD configurations. To denote the general rotation type, we use the letter U (D) for uniform (differential) rotation as the second letter in each model name. The low-density uniformly rotating model sequence AU\{1-5\} is set up with initial angular velocities $ \Omega_{\mathrm{c,i}} $ from $1$ to $3.5\,{\mathrm{rad\,s}}^{-1}$, where the latter value corresponds to rotation very close to the mass-shedding limit. The more compact uniformly-rotating sequence DU\{1-7\} is set up with precollapse $\Omega_\mathrm{c,i}$ from $2$ to $9.5\,{\mathrm{rad\,s}}^{-1}$, where, again, the latter value corresponds to near-mass-shedding rotation. Model sequences AD\{1-10\} and DD\{1-7\} rotate differentially according to the rotation law discussed in Sec.~\ref{sec:rotlaw} and specified by Eqs.~(\ref{eqq:omega_inner_core}) and (\ref{eqq:omega_outer_core}), with the parameter choices $f_{\mathrm{sh}} = 1$ and $f_{\mathrm p} = 0.1$ for the AD sequence and $f_{\mathrm{sh}} = 1$ and $f_{\mathrm p} = 0.05$ for the DD sequence. We recall that the angular velocity reaches its global maximum at the cylindrical radius $\varpi_{\mathrm p}$ where the density has dropped to the fraction $f_{\mathrm p}$ of the central density. While $f_{\mathrm p} = 0.1$ is the standard choice of \cite{yoon_05}, we adopt $f_{\mathrm p} = 0.05$ for the high-density sequence DD to be in line with the parameter choices made for the models of Dessart~et~al.~\cite{dessart_06_a}. Test calculations with AD models show that varying $f_{\mathrm p}$ between $0.05$ and $0.10$ affects only the rotational configuration of the outer WD layers and does not have any appreciable effect on the AIC dynamics. For the AD\{1-10\} sequence, we choose $\Omega_{\mathrm{c,i}}$ in the range from $0$ to $5.6 \ {\mathrm{rad\,s}}^{-1}$, resulting in maximum angular velocities $\Omega_{\mathrm{max,i}}$ in the range of $ 2.88 $ to $ 8.49 \ {\mathrm{rad\,s}}^{-1}$. The higher-density DD\{1-7\} sequence rotates with $\Omega_{\mathrm{c,i}} $ in the range from $0$ to $18\,{\mathrm{rad\,s}}^{-1}$, corresponding to maximum $\Omega$ in the range of $ 7.69 $ to $ 25.84 \,{\mathrm{rad \, s}}^{-1}$. The values of $\Omega_{\mathrm{c,i}}$ and $\Omega_{\mathrm{max,i}}$ for the individual AD and DD models are given in Table~\ref{tab:initial_models}.
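For concreteness, the outer-core branch of the rotation law, Eq.~(\ref{eqq:omega_outer_core}), can be evaluated as in the following minimal Python sketch; the Keplerian profile $\Omega_{\mathrm K}(\varpi)$, the surface fraction $f_{\mathrm K}$, and the numbers in the usage example are purely illustrative assumptions and not the parameters of our actual models.
\begin{verbatim}
import numpy as np

def omega_outer(varpi, varpi_p, R_e, omega_p, omega_K, f_K=0.95, a=1.2):
    # Outer-core branch of the rotation law:
    #   Omega(varpi)/Omega_K(varpi) = Omega(varpi_p)/Omega_K(varpi_p)
    #                                 + C * (varpi - varpi_p)**a,
    # with C fixed by demanding Omega/Omega_K = f_K at varpi = R_e.
    ratio_p = omega_p / omega_K(varpi_p)
    C = (f_K - ratio_p) / (R_e - varpi_p)**a
    return omega_K(varpi) * (ratio_p + C * (varpi - varpi_p)**a)

# Illustrative usage in cgs units; the point-mass Keplerian profile below is
# only a crude stand-in for the self-consistent Omega_K of the WD model.
G, Msun = 6.674e-8, 1.989e33
omega_K = lambda w: np.sqrt(G * 1.4 * Msun / w**3)
varpi = np.linspace(2.0e7, 2.0e8, 200)   # varpi_p = 200 km to R_e = 2000 km
Omega = omega_outer(varpi, varpi[0], varpi[-1], omega_p=6.0, omega_K=omega_K)
\end{verbatim}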
As representative examples resulting from our assumed rotation law, we plot in Fig.~\ref{fig:precollapse_omega} for models AD3, AD5, and AD10 the angular velocity and the ratio of the angular velocity to the local Keplerian value as a function of cylindrical radius and of the enclosed rest mass. In Fig.~\ref{fig:contour_rho}, we plot the colormaps of the rest-mass density on the $ r-\theta $ plane for the representative precollapse WD models AD1, AD5 and AD10. In order to study the effect of variations in the degree of differential rotation, we vary the dimensionless shear parameter $f_{\mathrm{sh}}$ for a subsequence of AD models and append suffixes $f\{1-4\}$ to their names, corresponding to $f_{\mathrm{sh}} = \{0.8,0.6,0.4,0.2\}$, respectively. Figure~\ref{fig:omega_vs_fsh} shows the behavior of the initial angular velocity distribution with decreasing $f_{\mathrm{sh}}$ in the rapidly differentially rotating model AD10. An important point to mention is the large range of precollapse WD masses covered by our models. Depending on the initial central density and the rotational setup, our WD masses range from a sub-$M_\mathrm{Ch}$ value of $1.39\,M_\odot$ in the nonrotating low-$\rho_\mathrm{c,i}$ model AU0 to a rotationally-supported super-$M_\mathrm{Ch}$ mass of $2.05\,M_\odot$ in the rapidly differentially rotating model AD13f4. The maximum mass in our sequence of uniformly rotating WDs is $1.462\,M_\odot$ and is obtained in model DU7. To conclude the discussion of our initial rotational configurations, we present in Fig.~\ref{fig:beta_vs_omega_precollapse} for all models the initial values ($\beta_\mathrm{i}$) of the parameter $\beta$ as a function of their precollapse central angular velocity $\Omega_{\mathrm{c,i}}$. Differentially rotating WD models can reach $\beta_\mathrm{i}$ of up to $\sim 10\%$ while staying below the mass-shedding limit. This number is more than a factor of $2$ larger than what seems possible in massive star iron core collapse (see, \textit{e.g.,}~ \cite{dimmelmeier_08_a}), making these rapidly rotating AIC progenitor models potential candidates for a dynamical nonaxisymmetric rotational instability during their postbounce AIC evolution (see Sec.~\ref{sec:rotinst}). \subsubsection{Progenitor White Dwarf Core Temperature and $\overline{Y_e}({\rho})$ Parametrization} \label{sec:tempye} We use Eq.~(\ref{eqq:temp_profile}) to set up the initial temperature distribution as a function of density. Dessart~et~al.\ chose $\rho_\mathrm{0} = \rho_\mathrm{c,i}$ ($= 5\times 10^{10}\,\mathrm{g\,cm}^{-3}$ in their models) and $ T_{\mathrm 0} = 10^{10} $ K for their $ 1.46 M_\odot$ model, and $ T_{\mathrm 0} = 1.3 \times 10^{10}$ K for their $ 1.92 M_\odot$ model. These values (i) are similar to what was used in the earlier work of Woosley \& Baron~\cite{woosley_92_a} and (ii) work well with the tabulated EOS employed and the assumption of NSE, but may be higher than the temperatures prevailing in accreting precollapse WDs in nature~(see, \textit{e.g.,}~ \cite{gutierrez_05_a, saio_04, yoon_04}). While the fluid pressure is affected very little by different temperature distributions, this is not the case for the free proton fraction, which increases strongly with $T$ in the range from $10^9$ to $ 10^{10}$ K at precollapse core densities. This increase of the proton fraction can lead to enhanced electron capture during AIC and in this way may have a significant influence on the AIC dynamics.
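The temperature setup is easy to make concrete; the minimal Python sketch below evaluates Eq.~(\ref{eqq:temp_profile}) for the two reference temperatures considered in the following paragraph, with purely illustrative sample densities.
\begin{verbatim}
import numpy as np

def wd_temperature(rho, T0, rho0=5.0e10):
    # Initial temperature profile, Eq. (temp_profile):
    #   T(rho) = T0 * (rho / rho0)**0.35,
    # where T0 is the temperature at the reference density rho0 (g/cm^3).
    return T0 * (rho / rho0)**0.35

rho    = np.logspace(8.0, 10.7, 4)        # illustrative densities in g/cm^3
T_high = wd_temperature(rho, T0=1.0e10)   # T0 = 10^10 K setup
T_low  = wd_temperature(rho, T0=5.0e9)    # T0 = 5 x 10^9 K setup
\end{verbatim}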
In order to test the sensitivity of our AIC simulations to the assumed $ T_{\mathrm 0} $, we not only study models with $ T_{\mathrm 0} = 10^{10}$ K (at $\rho_\mathrm{0} = 5\times10^{10}\,\mathrm{g\,cm}^{-3}$, hereafter the \textit{``high-$T$'' models}), but also perform simulations for models set up with $ T_{\mathrm 0} = 5 \times 10^{9}$ K (at $\rho_\mathrm{0} = 5\times10^{10}\,\mathrm{g\,cm}^{-3}$, hereafter the \textit{``low-$T$'' models}). To obtain the $\overline{Y_e}(\rho)$ parametrization (see Sec.~\ref{sec:deleptonization}) for the latter temperature, we re-ran the $1.46 M_\odot$ AIC model of Dessart~et~al.\ with VULCAN/2D up to core bounce with the same setup as discussed in~\cite{dessart_06_a}, but using the lower value of $T_\mathrm{0}$. We do not indicate the two different initial temperatures in the model names, but list the results obtained in the two cases side-by-side in Table~\ref{tab:collapse_models}. In addition to variations in deleptonization due to differences in the precollapse WD thermodynamics, we must also consider the possibility of unknown systematic biases that lead to small values of $Y_e$ in the inner core at bounce (see Sec.~\ref{sec:deleptonization}). In order to study the effect that larger values of $Y_e$ in the inner core have on the AIC dynamics, we perform a set of test calculations with scaled $\overline{Y_e}(\rho)$ trajectories. We implement this by making use of the fact that $Y_e(\rho)$ is to a good approximation a linear function of $\log (\rho)$ (see Fig.~\ref{fig:ye_vs_rho_VULCAN}). We change the slope of this function between $\rho = 5\times 10^{10}\, \mathrm{g\,cm}^{-3}$ and $\rho = 2.5\times 10^{14}\, \mathrm{g\,cm}^{-3}$ by increasing $Y_e(\rho = 2.5\times 10^{14}\, \mathrm{g\,cm}^{-3})$ by $10\%$ and $20\%$. We pick these particular scalings since the $20\%$ increase yields inner-core values of $Y_e$ at bounce that are very close to those obtained in 1D Boltzmann neutrino transport simulations of oxygen-neon core collapse \cite{kitaura_06_a,mueller:09phd}. The $10\%$ scaling yields values in between those of \cite{dessart_06_a} and \cite{kitaura_06_a,mueller:09phd} and, hence, allows us to study trends in AIC dynamics with variations in deleptonization between the constraints provided by simulations. We will not list the results of these tests in our summary tables, but discuss them wherever the context requires their consideration (\textit{i.e.,}~ Secs.~\ref{sec:nonrotating_collapse_dynamics},~\ref{sec:rotating_collapse_dynamics},~\ref{sec:GW}, and~\ref{sec:peak_amplitude}). \begin{table*} \small \centering \caption{Summary of the initial WD models: $ \Omega_{\mathrm{c,i}} $ is the central angular velocity, $ \Omega_{\mathrm{max,i}} = \Omega (\varpi_{\mathrm p}) $ is the maximum angular velocity, $ M_0 $ is the total rest mass, and $ J $ is the total angular momentum. $ |W_\mathrm{i}| $ and $ E_\mathrm{rot, i} $ are the gravitational energy and the rotational kinetic energy of the WD, respectively.
$R_\mathrm{e}$ and $R_\mathrm{p}$ are the equatorial and polar radii.} \label{tab:initial_models} \begin{tabular}{@{}l@{~~~}c@{~~~}c@{~~~}c@{~~~}c@{~~~}c@{~~~}c@{~~~}c@{~~~}c@{~~~}c@{~~~}c@{}} \hline \\ [-1 em] Initial & $ \Omega_{\mathrm{c,i}} $ & $ \Omega_{\mathrm{max,i}} $ & $ \rho_{\mathrm{c,i}} $ & $ M_0 $ & $ J $ & $ |W_{\mathrm i}| $ & $ E_{\mathrm{rot, i}} $ & $ \beta_{\mathrm i} $ & $ R_\mathrm{e} $ & $ R_{\mathrm e} / R_{\mathrm p} $ \\ model & [rad/s] & [rad/s] & [$ 10^{10} {\mathrm{\ g\ cm}}^{-3}$ ] & [$ M_\odot $] & [$ 10^{50} {\mathrm{ \ ergs}} $] & [$ 10^{50} {\mathrm{ \ ergs}} $] & [$ 10^{50} {\mathrm{ \ ergs}} $] & [\%]& [km] & \\ \hline \\ [-0.5 em] AU0 & 0.000 & 0.000 & 0.4 & 1.390 & 0.00 & \z37.32 & \zz0.00 & 0.00 & 1692 & 1.000\\ AU1 & 1.000 & 1.000 & 0.4 & 1.394 & 0.09 & \z37.50 & \zz0.04 & 0.11 & 1710 & 0.988\\ AU2 & 1.800 & 1.800 & 0.4 & 1.405 & 0.16 & \z37.90 & \zz0.15 & 0.38 & 1757 & 0.953\\ AU3 & 2.000 & 2.000 & 0.4 & 1.409 & 0.18 & \z38.04 & \zz0.18 & 0.48 & 1775 & 0.943\\ AU4 & 3.000 & 3.000 & 0.4 & 1.437 & 0.29 & \z39.04 & \zz0.44 & 1.12 & 1938 & 0.848\\ AU5 & 3.500 & 3.500 & 0.4 & 1.458 & 0.36 & \z39.78 & \zz0.64 & 1.60 & 2172 & 0.748\\ [0.3 em] BU0 & 0.000 & 0.000 & 1.0 & 1.407 & 0.00 & \z51.44 & \z0.00 & 0.00 & 1307 & 1.000\\ [0.3 em] CU0 & 0.000 & 0.000 & 2.0 & 1.415 & 0.00 & \z65.28 & \z0.00 & 0.00 & 1069 & 1.000\\ [0.3 em] DU0 & 0.000 & 0.000 & 5.0 & 1.421 & 0.00 & \z89.11 & \z0.00 & 0.00 &\z813 & 1.000\\ DU1 & 2.000 & 2.000 & 5.0 & 1.423 & 0.03 & \z89.25 & \z0.03 & 0.04 &\z817 & 0.995\\ DU2 & 3.000 & 3.000 & 5.0 & 1.425 & 0.05 & \z89.42 & \z0.08 & 0.09 &\z822 & 0.988\\ DU3 & 3.500 & 3.500 & 5.0 & 1.426 & 0.06 & \z89.53 & \z0.11 & 0.12 &\z825 & 0.983\\ DU4 & 5.000 & 5.000 & 5.0 & 1.432 & 0.09 & \z90.00 & \z0.22 & 0.24 &\z840 & 0.963\\ DU5 & 7.000 & 7.000 & 5.0 & 1.442 & 0.13 & \z90.87 & \z0.44 & 0.49 &\z871 & 0.920\\ DU6 & 9.000 & 9.000 & 5.0 & 1.458 & 0.17 & \z92.12 & \z0.77 & 0.83 &\z931 & 0.853\\ DU7 & 9.500 & 9.500 & 5.0 & 1.462 & 0.18 & \z92.50 & \z0.86 & 0.94 &\z956 & 0.828\\ [0.3 em] AD1 & 0.000 & 2.881 & 0.4 & 1.434 & 0.28 & \z38.74 & \z0.41 & 1.07 & 2344 & 0.71\\ AD2 & 0.327 & 3.204 & 0.4 & 1.443 & 0.31 & \z39.04 & \z0.49 & 1.26 & 2382 & 0.69\\ AD3 & 1.307 & 4.198 & 0.4 & 1.477 & 0.42 & \z40.25 & \z0.81 & 2.01 & 2521 & 0.64\\ AD4 & 2.287 & 5.174 & 0.4 & 1.526 & 0.56 & \z42.01 & \z1.27 & 3.01 & 2707 & 0.58\\ AD5 & 3.000 & 5.903 & 0.4 & 1.575 & 0.69 & \z43.77 & \z1.74 & 3.97 & 2888 & 0.54\\ AD6 & 3.267 & 6.173 & 0.4 & 1.595 & 0.75 & \z44.47 & \z1.95 & 4.38 & 2964 & 0.53\\ AD7 & 3.920 & 6.833 & 0.4 & 1.659 & 0.93 & \z46.77 & \z2.59 & 5.55 & 3200 & 0.47\\ AD8 & 4.247 & 7,161 & 0.4 & 1.706 & 1.05 & \z48.45 & \z3.02 & 6.24 & 3366 & 0.44\\ AD9 & 5.227 & 8.155 & 0.4 & 1.884 & 1.58 & \z54.69 & \z4.88 & 8.92 & 4008 & 0.35\\ AD10& 5.554 & 8.485 & 0.4 & 1.974 & 1.87 & \z57.80 & \z5.85 &10.13 & 4338 & 0.313\\ [0.3 em] DD1 & 0.000 & 7.688 & 5.0 & 1.446 & 0.13 & \z90.85 & \z0.51 & 0.60 & 1097 & 0.73\\ DD2 & 3.000 & 10.70 & 5.0 & 1.467 & 0.19 & \z92.52 & \z0.95 & 1.00 & 1156 & 0.69\\ DD3 & 6.000 & 13.73 & 5.0 & 1.498 & 0.26 & \z95.05 & \z1.61 & 1.70 & 1238 & 0.63\\ DD4 & 9.000 & 16.74 & 5.0 & 1.544 & 0.35 & \z98.70 & \z2.57 & 2.60 & 1353 & 0.56\\ DD5 & 12.00 & 19.77 & 5.0 & 1.612 & 0.48 & 104.08 & \z4.01 & 3.90 & 1528 & 0.48\\ DD6 & 15.00 & 22.81 & 5.0 & 1.716 & 0.68 & 111.95 & \z6.31 & 5.60 & 1819 & 0.39\\ DD7 & 18.00 & 25.84 & 5.0 & 1.922 & 1.10 & 126.69 & 10.77 & 8.50 & 2430 & 0.28\\ [0.3 em] AD1f1 & 0.000 & 2.305 & 0.4 & 1.422 & 0.23 & \z38.33 & \z0.30 & 
0.79 & 2283 & 0.730\\ AD1f2 & 0.000 & 1.723 & 0.4 & 1.413 & 0.19 & \z38.03 & \z0.22 & 0.58 & 2233 & 0.753\\ AD1f3 & 0.000 & 1.152 & 0.4 & 1.406 & 0.15 & \z37.79 & \z0.16 & 0.41 & 2188 & 0.770\\ AD1f4 & 0.000 & 0.576 & 0.4 & 1.401 & 0.11 & \z37.64 & \z0.11 & 0.29 & 2151 & 0.785\\ [0.3 em] AD3f1 & 1.307 & 3.610 & 0.4 & 1.457 & 0.36 & \z39.56 & \z0.62 & 1.58 & 2434 & 0.673\\ AD3f2 & 1.307 & 3.032 & 0.4 & 1.441 & 0.31 & \z39.01 & \z0.47 & 1.22 & 2361 & 0.700\\ AD3f3 & 1.307 & 2.457 & 0.4 & 1.428 & 0.26 & \z38.56 & \z0.36 & 0.90 & 2298 & 0.723\\ AD3f4 & 1.307 & 1.883 & 0.4 & 1.417 & 0.21 & \z38.20 & \z0.26 & 0.70 & 2243 & 0.745\\ [0.3 em] AD6f1 & 3.267 & 5.574 & 0.4 & 1.555 & 0.64 & \z43.13 & \z1.54 & 3.57 & 2798 & 0.555\\ AD6f2 & 3.267 & 5.003 & 0.4 & 1.522 & 0.55 & \z41.96 & \z1.23 & 2.93 & 2666 & 0.590\\ AD6f3 & 3.267 & 4.423 & 0.4 & 1.494 & 0.47 & \z41.00 & \z0.97 & 2.37 & 2554 & 0.625\\ AD6f4 & 3.267 & 3.842 & 0.4 & 1.472 & 0.41 & \z40.20 & \z0.76 & 1.89 & 2462 & 0.655\\ [0.3 em] AD9f1 & 5.227 & 7.564 & 0.4 & 1.772 & 1.23 & \z50.92 & \z3.71 & 7.28 & 3574 & 0.400\\ AD9f2 & 5.227 & 6.978 & 0.4 & 1.691 & 1.00 & \z48.13 & \z2.89 & 6.01 & 3264 & 0.448\\ AD9f3 & 5.227 & 6.392 & 0.4 & 1.630 & 0.84 & \z46.00 & \z2.29 & 4.98 & 3029 & 0.493\\ AD9f4 & 5.227 & 5.808 & 0.4 & 1.584 & 0.71 & \z44.38 & \z1.83 & 4.12 & 2851 & 0.533\\ [0.3 em] AD10f1 & 5.554 & 7.896 & 0.4 & 1.833 & 1.41 & \z53.06 & \z4.35 & 8.20 & 3793 & 0.370\\ AD10f2 & 5.554 & 7.305 & 0.4 & 1.741 & 1.13 & \z49.93 & \z3.37 & 6.74 & 3434 & 0.420\\ AD10f3 & 5.554 & 7.721 & 0.4 & 1.665 & 0.93 & \z47.28 & \z2.64 & 5.58 & 3149 & 0.468\\ AD10f4 & 5.554 & 6.134 & 0.4 & 1.611 & 0.78 & \z44.40 & \z2.10 & 4.62 & 2942 & 0.510\\ [0.3 em] AD11f2 & 6.000 & 7.756 & 0.4 & 1.815 & 1.35 & \z52.60 & \z4.16 & 7.90 & 3696 & 0.380\\ [0.3 em] AD12f3 & 7.000 & 8.175 & 0.4 & 1.914 & 1.65 & \z56.27 & \z4.23 & 9.30 & 4010 & 0.340\\ AD12f4 & 7.000 & 7.586 & 0.4 & 1.798 & 1.29 & \z52.29 & \z3.99 & 7.64 & 3574 & 0.393\\ [0.3 em] AD13f4 & 8.000 & 8.585 & 0.4 & 2.049 & 2.09 & \z61.30 & \z6.75 &11.01 & 4436 & 0.295\\ \hline \end{tabular} \end{table*} \subsection{Gravitational Wave Extraction} \label{sec:wave_extraction} We employ the Newtonian quadrupole formula in the first-moment of momentum density formulation as discussed, \textit{e.g.,}~ in~\cite{dimmelmeier_02_a, shibata_03_a, dimmelmeier_05}. In essence, we compute the quadrupole wave amplitude $ A_{20}^{\mathrm{E2}} $ of the $\l=2, m=0$ mode in a multipole expansion of the radiation field into pure-spin tensor harmonics \cite{thorne_80_a}. In axisymmetric AIC, this quadrupole mode provides by far the largest contribution to the GW emission and other modes are at least one or more orders of magnitude smaller. Of course, should nonaxisymmetric instabilities develop (which we cannot track in our current 2D models), these would then provide a considerable nonaxisymmetric contribution to the GW signal. The GW amplitude is related to the dimensionless GW strain $ h $ in the equatorial plane by \begin{equation} h = \frac{1}{8} \sqrt{\frac{15}{\pi}} \left(\frac{A^{\mathrm E2}_{20}}{r}\right) = 8.8524 \times 10^{-21} \left(\frac{A^{\mathrm{E2}}_{20}}{10^3 {\mathrm{\ cm}}}\right) \left(\frac{10 {\mathrm{\ kpc}}}{r}\right), \end{equation} where $ r $ is the distance to the emitting source. 
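The conversion above is simple enough to verify numerically; a minimal Python sketch (the kpc-to-cm conversion is the only external constant):
\begin{verbatim}
import numpy as np

KPC_IN_CM = 3.0857e21

def strain_from_A20(A20_cm, r_kpc):
    # h = (1/8) * sqrt(15/pi) * A20 / r for an equatorial observer.
    return 0.125 * np.sqrt(15.0 / np.pi) * A20_cm / (r_kpc * KPC_IN_CM)

h = strain_from_A20(A20_cm=1.0e3, r_kpc=10.0)   # ~8.85e-21, as quoted above
\end{verbatim}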
We point out that although the quadrupole formula is not gauge invariant and is only valid in the weak-field, slow-motion limit, it yields results that agree very well in phase and to $ \sim 10 \mbox{\,--\,} 20\% $ in amplitude with more sophisticated methods \cite{shibata_03_a,Nagar2007,baiotti_08_a}. In order to assess the prospects for detection by current and planned interferometric detectors, we calculate characteristic quantities for the GW signal following~\cite{thorne:87}. Performing a Fourier transform of the dimensionless GW strain $ h $, \begin{equation} \hat{h} = \int_{-\infty}^\infty \!\! e^{2 \pi i f t} h \, dt\,, \label{eq:waveform_fourier_tranform} \end{equation} we can compute the (detector-dependent) integrated characteristic frequency \begin{equation} f_{\mathrm c} = \left( \int_0^\infty \! \frac{\langle \hat{h}^2 \rangle}{S_h} f \, df \right) \left( \int_0^\infty \! \frac{\langle \hat{h}^2 \rangle}{S_h} df \right)^{-1}\!\!\!\!\!\!\!, \label{eq:characteristic_frequency} \end{equation} and the dimensionless integrated characteristic strain \begin{equation} h_{\mathrm c} = \left( 3 \int_0^\infty \! \frac{S_{h,{\,\mathrm c}}}{S_h} \langle \hat{h}^2 \rangle f \, df \right)^{1/2}\!\!\!\!\!\!\!\!, \label{eq:characteristic_amplitude} \end{equation} where $ S_h $ is the power spectral density of the detector and $ S_{h,\,{\mathrm c}} = S_h (f_{\mathrm c}) $. We approximate the average $ \langle \hat{h}^2 \rangle $ over randomly distributed angles by $ ( 3 / 2) \, \hat{h}^2 $. From Eqs.~(\ref{eq:characteristic_frequency}) and (\ref{eq:characteristic_amplitude}), the optimal single-detector signal-to-noise ratio (SNR) can be calculated as \begin{equation} \mathrm{SNR} \equiv \frac{h_{\mathrm c}}{h_{\mathrm{rms}} (f_{\mathrm c})}, \end{equation} where $ h_{\mathrm{rms}} = \sqrt{f S_h} $ is the value of the root-mean-square strain noise for the detector. \begin{table*} \footnotesize \centering \caption{Summary of key quantitative results from our AIC simulations. $ \rho_{\mathrm{max, b}} $ is the maximum density in the core at the time of bounce $ t_{\mathrm{b}} $, $ | h |_{\mathrm{max}} $ is the peak value of the GW signal amplitude, while $ \beta_{\mathrm{ic, b}} $ is the inner core parameter $ \beta $ at bounce. Models marked by unfilled/filled circles ($\mbox{\large $ \circ $}/\mbox{\large $ \bullet $}$) undergo a pressure-dominated bounce with/without significant early postbounce convection. Models marked with the cross sign ($\mbox{\footnotesize $ \times $}$) undergo centrifugal bounce at subnuclear densities. The values left/right of the vertical separator ($ | $) are for the models with low/high temperature profiles.} \label{tab:collapse_models} \begin{tabular}{@{}l@{~}@{~}c@{~}c@{~}c@{~}c@{~}c@{~~~~}c@{~}@{}l@{~}@{~}c@{~}c@{~}c@{~}c@{~}c@{~}c@{~}} \hline \\ [-1 em] Collapse & & $ \rho_{\mathrm{max,b}} $ & $ t_{\mathrm b}$ & $ |h|_{\mathrm{max}} $ & $ \beta_{\mathrm{ic,b}} $ & Collapse & & $ \rho_{\mathrm{max,b}} $ & $ t_{\mathrm b}$ & $ |h|_{\mathrm{max}} $ & $ \beta_{\mathrm{ic,b}} $ & \\ \raiseentry{model} & & [$10^{14} {\mathrm{\ g\ cm }}^{-3} $] & [ms] & $ \displaystyle \left[ \!\!\! \begin{array}{c} 10^{-21} \\ [-0.2 em] {\mathrm{\ at\ 10\ kpc}} \end{array} \! \right] $ & [$ \% $ ] & \raiseentry{model} & & [$10^{14} {\mathrm{\ g\ cm}}^{-3} $] & [ms] & $ \displaystyle \left[ \!\!\! \begin{array}{c} 10^{-21} \\ [-0.2 em] {\mathrm{\ at\ 10\ kpc}} \end{array} \!
\right] $ & [$ \% $ ] & \\ \hline \\ [- 0.0 em] AU0 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.782|2.807$ & $214.1|204.9$ & $0.27|0.20$ & $0.00|0.00$ & AD1f1 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.718|2.733$ & $217.1|207.2$ & $1.57|1.24$ & $2.06|1.80$ \\ AU1 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.697|2.719$ & $214.9|205.5$ & $0.78|0.62$ & $1.27|1.20$ & AD1f2 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.759|2.767$ & $215.8|206.2$ & $0.96|0.75$ & $1.26|1.10$ \\ AU2 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.628|2.629$ & $216.7|206.9$ & $2.33|1.90$ & $3.75|3.56$ & AD1f3 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.787|2.795$ & $214.9|205.5$ & $0.47|0.36$ & $0.59|0.49$ \\ AU3 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.610|2.613$ & $217.3|207.4$ & $2.79|2.29$ & $4.49|4.28$ & AD1f4 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.803|2.803$ & $214.4|205.0$ & $0.26|0.18$ & $0.14|0.12$ \\ AU4 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.525|2.508$ & $221.4|210.8$ & $5.01|4.23$ & $8.48|8.19$ & & & & & & \\ AU5 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.461|2.444$ & $224.2|212.9$ & $5.90|5.05$ & $10.46|10.17$ & AD3f1 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.560|2.564$ & $222.5|211.5$ & $4.09|3.41$ & $6.26|5.81$ \\ & & & & & & AD3f2 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.583|2.597$ & $220.1|209.6$ & $3.46|2.85$ & $5.23|4.85$ \\ BU0 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.805|2.800$ & $100.9|95.3$ & $0.19|0.16$ & $0.00|0.00$ & AD3f3 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.622|2.631$ & $218.2|208.1$ & $2.76|2.25$ & $4.17|3.87$ \\ & & & & & & AD3f4 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.656|2.662$ & $216.7|207.0$ & $2.03|1.63$ & $3.10|2.90$ \\ CU0 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.782|2.788$ & $56.3|51.4$ & $0.20|0.24$ & $0.00|0.00$ & & & & & & \\ & & & & & & AD6f1 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.356|2.377$ & $237.6|222.9$ & $6.47|5.77$ & $13.20|12.52$ \\ DU0 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.797|2.781$ & $27.5|25.2$ & $0.34|0.42$ & $0.00|0.00$ & AD6f2 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.395|2.399$ & $233.1|219.7$ & $6.41|5.63$ & $12.31|11.79$ \\ DU1 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.786|2.777$ & $27.6|25.2$ & $0.23|0.27$ & $0.18|0.17$ & AD6f3 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.421|2.424$ & $229.1|216.7$ & $6.28|5.43$ & $11.47|11.06$ \\ DU2 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.774|2.763$ & $27.6|25.2$ & $0.24|0.31$ & $0.39|0.38$ & AD6f4 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.463|2.444$ & $225.7|214.1$ & $5.95|5.12$ & $10.56|10.20$ \\ DU3 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.756|2.758$ & $27.6|25.2$ & $0.31|0.27$ & $0.53|0.51$ & & & & & & \\ DU4 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.728|2.719$ & $27.6|25.3$ & $0.64|0.55$ & $1.07|1.03$ & AD9f1 & $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ & $1.913|1.994$ & $282.0|250.9$ & $3.88|5.77$ & $20.80|18.84$ \\ DU5 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.669|2.658$ & $27.7|25.3$ & $1.21|1.07$ & $2.04|1.97$ & AD9f2 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.000|2.073$ & $265.1|241.7$ & $5.25|6.17$ & $20.10|18.09$ 
\\ DU6 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.642|2.640$ & $27.8|25.4$ & $1.97|1.73$ & $3.27|3.17$ & AD9f3 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.092|2.115$ & $253.9|234.6$ & $6.96|6.41$ & $18.44|17.45$ \\ DU7 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.627|2.619$ & $27.9|25.5$ & $2.17|1.92$ & $3.61|3.50$ & AD9f4 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.159|2.166$ & $244.7|228.3$ & $7.30|6.74$ & $17.67|16.74$ \\ [0.5 em] AD1 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.661|2.679$ & $218.7|208.5$ & $2.24|1.75$ & $2.96|2.62$ & AD10f1 & $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ & $1.797|1.899$ & $303.4|261.3$ & $3.99|4.14$ & $23.71|21.04$ \\ AD2 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.620|2.616$ & $220.1|209.5$ & $2.85|2.30$ & $3.93|3.55$ & AD10f2 & $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ & $1.901|1.980$ & $272.4|245.2$ & $4.15|5.91$ & $21.07|19.30$ \\ AD3 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.547|2.547$ & $225.5|213.7$ & $4.62|3.93$ & $7.20|6.71$ & AD10f3 & $\mbox{\footnotesize $ \times $}|\mbox{\large $ \bullet $}$ & $1.984|2.042$ & $261.2|239.3$ & $5.52|6.39$ & $20.40|18.53$ \\ AD4 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.447|2.442$ & $232.9|219.4$ & $5.89|5.19$ & $10.58|10.01$ & AD10f4 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.096|2.098$ & $249.8|231.7$ & $7.32|6.67$ & $18.70|17.83$ \\ AD5 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.361|2.389$ & $241.1|225.5$ & $6.37|5.68$ & $13.09|12.33$ & & & & & & \\ AD6 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.324|2.355$ & $246.7|229.7$ & $6.39|5.78$ & $14.03|13.18$ & AD11f2 & $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ & $1.734|1.845$ & $296.9|257.6$ & $3.78|4.13$ & $24.23|21.65$ \\ AD7 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.226|2.228$ & $260.0|238.8$ & $6.00|5.73$ & $16.34|15.23$ & & & & & & \\ AD8 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.145|2.167$ & $264.3|240.9$ & $6.01|5.70$ & $17.48|16.30$ & AD12f3 & $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ & $0.319|1.555$ & $372.3|249.4$ & $1.61|4.08$ & $22.88|23.08$ \\ AD9 & $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ & $1.817|1.911$ & $319.7|267.3$ & $3.40|4.00$ & $23.38|20.69$ & AD12f4 & $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ & $1.432|1.677$ & $298.1|257.9$ & $3.14|4.44$ & $24.58|22.97$ \\ AD10& $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ & $1.629|1.790$ & $393.6|284.5$ & $2.36|3.54$ & $24.41|21.25$ & & & & & & \\ & & & & & & AD13f4 & $\mbox{\footnotesize $ \times $}|\mbox{\footnotesize $ \times $}$ &$7\!\times\!10^{-4}\!|0.312$\phantom{0}&$331.8|322.9$&$0.40|2.01$&$15.39|24.02$\\ DD1 & $\mbox{\large $ \circ $}|\mbox{\large $ \circ $}$ & $2.779|2.772$ & $27.5|25.2$ & $0.46|0.37$ & $0.70|0.58$ &&&&&&\\ DD2 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.684|2.686$ & $27.7|25.3$ & $1.24|1.06$ & $1.95|1.81$ &&&&&&\\ DD3 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.642|2.638$ & $27.8|25.5$ & $2.41|2.08$ & $3.76|3.56$ &&&&&&\\ DD4 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.586|2.571$ & $28.1|25.7$ & $3.78|3.29$ & $5.93|5.71$ &&&&&&\\ DD5 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.526|2.498$ & $28.4|25.9$ & $5.22|4.61$ & 
$8.27|8.09$ &&&&&&\\ DD6 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.457|2.425$ & $28.8|26.3$ & $6.52|5.82$ & $10.57|10.51$ &&&&&&\\ DD7 & $\mbox{\large $ \bullet $}|\mbox{\large $ \bullet $}$ & $2.389|2.352$ & $29.2|26.5$ & $7.58|6.81$ & $12.75|12.79$ &&&&&&\\ \hline \end{tabular} \end{table*} \section{Collapse Dynamics} \label{sec:colldyn} The AIC starts when the progenitor WD reaches its effective Chandrasekhar mass and pressure support is reduced due to electron capture in the core. Similar to the case of massive star iron core collapse (\textit{e.g.,}~~\cite{ott_04_a, moenchmeyer_91_a, dimmelmeier_02_a, zwerger_97_a} and references therein), the collapse evolution can be divided into three phases: {\it Infall:} This is the longest phase of collapse and, depending on model parameters, lasts between $ \sim 25 ~ \mathrm{ms} $ and $ \sim 300 ~ \mathrm{ms} $. The inner part of the WD core (the ``inner core''), which is in sonic contact, contracts homologously ($ v_r \propto r $), while the ``outer core'' collapses supersonically. Fig.~\ref{fig:rho_max_t_non_rotating} shows the time evolution of the central density for the nonrotating high-$T$ AIC models. In the infall phase, the core contracts slowly, which is reflected in the slow increase of $ \rho_{\mathrm c} $. {\it Plunge and bounce}: The short dynamical ``plunge'' phase sets in when $\rho_\mathrm{c}$ reaches $ \sim 10^{12} \, {\mathrm{g \, cm^{-3}} } $, and the peak radial infall velocity is $ \sim 0.1 c $. At this point, neutrinos begin to be trapped in the inner core. The latter rapidly contracts to reach nuclear densities ($ \rho_{\rm nuc} \simeq 2.7 \times 10^{14} {\rm\ g\ cm}^{-3} $) at which the nuclear EOS stiffens, decelerating and eventually reversing the infall of the inner core on a millisecond timescale. Because of its large inertia and kinetic energy, the inner core does not come to rest immediately. It overshoots its equilibrium configuration, then bounces back, launching a shock wave at its outer edge into the still infalling outer core. The bounce and the re-expansion of the inner core is also evident in the time evolution of $\rho_\mathrm{c}$ shown in Fig.~\ref{fig:rho_max_t_non_rotating} which, at core bounce, reaches a value of $\sim 2.8 \times 10^{14} \, {\mathrm{g \, cm^{-3}}} $ in the nonrotating AIC models, after which the core slightly re-expands and settles down at $\sim 2.5 \times 10^{14} \, {\mathrm{g \, cm^{-3}}} $. As pointed out by extensive previous work (see, \textit{e.g.,}~ \cite{bethe:90, goldreich_80_a, yahil_83_a, van_riper_82_a, dimmelmeier_08_a} and references therein), the extent of the inner core at bounce determines the initial kinetic energy imparted to the bounce shock, the mass cut for the material that remains to be dissociated, and the amount of angular momentum that may become dynamically relevant at bounce. {\it Ringdown}: Following bounce, the inner core oscillates with a superposition of various damped oscillation modes with frequencies in the range of $ 500 - 800 \, \mathrm{Hz} $, exhibiting weak low-amplitude variations in $ \rho_{\mathrm c} $ (Fig.~\ref{fig:rho_max_t_non_rotating}). These oscillations experience rapid damping on a timescale of $10 ~ {\mathrm{ms}} $ due to the emission of strong sound waves into the postshock region which steepen into shocks. The newly born PNS thus \emph{rings} down to its new equilibrium state. 
The ringdown phase is coincident with the burst of neutrinos that is emitted when the bounce shock breaks out of the energy-dependent neutrinospheres (see, \textit{e.g.,}~ \cite{thompson_03_a,dessart_06_a}). The neutrino burst removes energy from the postshock regions and enhances the damping of the PNS ringdown oscillations (\textit{e.g.,}~ \cite{ott:06spin}), but, due to the limitations of our present scheme (see Sec.~\ref{sec:deleptonization}), is not accounted for in our models. \begin{figure} \centerline{\includegraphics[width = 86 mm]{Figures/f07.eps}} \caption{Evolution of the maximum (central) density for the nonrotating low-$T$ models AU0, BU0, CU0 and DU0. The inset plot displays a zoomed-in view of the maximum density around the time of core bounce on a linear scale. As clearly discernible from this figure, the collapse dynamics in the plunge and bounce phase are essentially independent of the initial WD central density. Time is normalized to the time of bounce $ t_\mathrm{b}$.} \label{fig:rho_max_t_non_rotating} \end{figure} \subsection{Nonrotating AIC} \label{sec:nonrotating_collapse_dynamics} The set of nonrotating AIC models that we consider here consists of models AU0, BU0, CU0, and DU0. As noted in Section~\ref{sec:parameter_space:rhoc}, these models have different central densities with values in the range from $ 4 \times 10^{9} $ to $ 5 \times 10^{10} \ {\mathrm{g \, cm^{-3}}}$, which, because of the strong dependence of the WD compactness on the central density, corresponds to a range of WD radii from $ 1692 $ to $ 813 $ km (see Fig.~\ref{fig:rho_e_non_rotating}). Once mapped onto our computational grid and after the initial $\overline{Y}_e(\rho)$ parametrization is applied (see Sec.~\ref{IEFP}), all WD models start to collapse by themselves and no additional artificial pressure reduction is necessary. This is in contrast to previous work that employed a simple analytic EOS and required an explicit and global change of the adiabatic exponent to initiate collapse (\textit{e.g.,}~ \cite{zwerger_97_a,dimmelmeier_02_b}). The free-fall collapse time $\tau_{\mathrm{ff}}$ of a Newtonian self-gravitating object of mean density $\rho_\mathrm{mean}$ is proportional to $\rho_\mathrm{mean}^{-1/2}$. For our set of spherically-symmetric AIC models, we find that the collapse time instead scales as $\propto \rho_{\mathrm c}^{-0.87}$, where $\rho_{\mathrm c}$ is the precollapse central density of the WD. This steeper scaling is due to the fact that WD cores are not constant-density objects and that the collapse is not pressureless. Furthermore, the pressure reduction initiating and accelerating collapse is due primarily to electron capture, which scales roughly with $\rho^{5/3}$ (\textit{e.g.,}~ \cite{bethe:90}). Hence, lower-density WDs collapse only slowly, spending much of their collapse time near their initial equilibrium states. In Fig.~\ref{fig:rho_max_t_non_rotating}, we plot the evolution of the central densities of the nonrotating high-$T$ models. Despite the strong dependence of the collapse times on the initial central densities, the evolution of $ \rho_{\mathrm c} $ around bounce does not exhibit a dependence on the initial central density. Moreover, the mass and the size of the inner core are rather insensitive to the initial value of $ \rho_{\mathrm c}$.
These features, somewhat surprising in the light of the strong dependence of the collapse times on the initial value of $ \rho_{\mathrm c} $, are a consequence of the fact that the inner core mass is determined primarily not by hydrodynamics, but by the thermodynamic and compositional structure of the inner core set by nuclear and neutrino physics~\cite{bethe:90}. However, an important role is played also by the fact that an increase (decrease) of the central density of an equilibrium WD leads to a practically exact homologous\footnote{For a discussion of homology in the stellar structure context, see~\cite{kippenhahn_90}.} contraction (expansion) of the WD structure in the inner regions ($ m(\varpi) \lesssim 1\, M_\odot $) in the nonrotating case (this can be seen in Fig.~\ref{fig:m_vs_r}), at least in the range of central densities considered in this paper (as we shall see in Section~\ref{sec:rotating_collapse_dynamics}, this feature also holds to good accuracy in the case of rotating WDs). These aspects, in combination with the homologous nature of WD inner-core collapse, make the size and dynamics of the inner core in the bounce phase practically independent of the central density of the initial equilibrium WDs. Early analytical work \cite{goldreich_80_a, yahil_83_a,bethe:90} demonstrated (neglecting thermal corrections \cite{burrows:83} and rotation) that the mass $M_{\mathrm{ic}} $ of the inner core is proportional to $Y_e^2$ in the infall phase during which the fluid pressure is dominated by the contribution of degenerate electrons. Around bounce, at densities near nuclear matter density, the nuclear component dominates and the simple $Y_e^2$ dependence does not hold exactly any longer. As discussed in Sec.~\ref{IEFP}, we adopt the parametrization $\overline{Y_e}(\rho)$ as extracted from the simulations of Dessart~et~al.~\cite{dessart_06_a} which predict very efficient electron capture, resulting in an average inner-core $Y_e$ at bounce of $\sim 0.18$ in the high-$T$ models. This is significantly lower than in standard iron core collapse where the inner-core $Y_e$ at bounce is expected to be around $\sim 0.25-0.30$ \cite{thompson_03_a,liebendoerfer_05_b,buras_06_a}. In our nonrotating AIC models, we find inner core masses at bounce $ M_{\mathrm{ic,b}} \sim 0.27 M_\odot $ \footnote{We define the inner core as the region which is in sonic contact at the time of bounce, \textit{i.e.,}~ \begin{equation} M_{\mathrm{ic,b}} \equiv \int_{|v_r| < c_{\mathrm s}} \rho W d V \ , \end{equation} where $ W $ is the Lorentz factor and $dV$ is the invariant 3-volume element. The bounce time is defined as the time when the radial velocity of the outer edge of the inner core becomes positive. Note that such a measure of the inner core is strictly valid only at the time of bounce.} (see Fig.~\ref{fig:mic_vs_beta_ic}) which are, as expected, significantly smaller than in iron core collapse (where $ M_{\mathrm{ic,b}} \sim 0.5 M_\odot $~\cite{buras_06_a, buras_06_b, liebendoerfer_05_b}). Due to their small mass, our AIC inner cores have less kinetic energy at bounce and reach lower densities than their iron core counterparts. For example, the nonrotating AIC models exhibit central densities at bounce of $ \sim 2.8 \times 10^{14} ~ {\mathrm{g ~ cm^{-3}}} $, while in nonrotating iron core collapse, maximum densities of $ \gtrsim 3 \times 10^{14} ~ {\mathrm{g ~ cm^{-3}}} $ are generally reached at bounce in simulations (\textit{e.g.,}~ \cite{dimmelmeier_08_a}). 
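The inner-core mass measure defined in the footnote above translates directly into a simple post-processing diagnostic. A minimal Python sketch, assuming the hydrodynamic fields at bounce are available as flat arrays on the grid (the variable names are hypothetical):
\begin{verbatim}
import numpy as np

def inner_core_mass(rho, v_r, c_s, W, dV):
    # M_ic = sum of rho * W * dV over all cells in sonic contact,
    # i.e. cells with |v_r| < c_s, evaluated at the time of core bounce.
    #   rho : rest-mass density,  v_r : radial velocity,
    #   c_s : sound speed,        W   : Lorentz factor,
    #   dV  : proper 3-volume of each grid cell.
    subsonic = np.abs(v_r) < c_s
    return np.sum(rho[subsonic] * W[subsonic] * dV[subsonic])
\end{verbatim}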
In addition to $M_\mathrm{ic,b}$, the bounce density depends also on the stiffness of the nuclear EOS whose variation we do not explore here (see, \textit{e.g.,}~ \cite{dimmelmeier_08_a, sumiyoshi:05}). Since the free proton fraction at precollapse and early collapse densities grows rather rapidly with temperature in the range from $ \sim 10^9 $ to $ \sim 10^{10} $ K (\textit{e.g.,}~~\cite{bruenn_85_a}), the efficiency of electron capture is sensitive to the temperature of WD matter. For example, in the low-$T$ models, the value of $ Y_e $ drops to $ \simeq 0.32 $ when the density reaches $ 10^{12} \, {\mathrm{g \, cm^{-3}}} $, while in the high-$T$ models, we obtain $ Y_e \simeq 0.3 $ at that time. Due to this dependence of $ Y_e $ on $ T $, the inner core masses of low-$T$ models are larger by $ \sim 10 \% $ compared to those of high-$ T $ models. Moreover, since the electron degeneracy pressure is proportional to $ \left (Y_e \rho \right)^{4/3} $ \cite{bethe:90}, the collapse times of the low-$T$ models are longer by $ \sim 5 \% $. We find similar systematics in test calculations in which we modify the $\overline{Y_e}(\rho)$ trajectories of low-$T$ models to yield larger $Y_e$ at bounce (see Sec.~\ref{sec:tempye}). An increase of the inner-core $Y_e$ by $10\%$ ($20\%$) leads to an increase of $M_{\mathrm{ic,b}}$ by $\sim 11\%$ ($\sim 25\%$). It is important to note that the nonrotating AIC models discussed above as well as all of the other models considered in this study, experience prompt hydrodynamic explosions. The bounce shock, once formed, does slow down, but never stalls and steadily propagates outwards. While the shock propagation is insensitive to the initial WD temperature profile, it shows significant dependence on the initial WD central density: Owing to the greater initial compactness and the steeper density gradient of the higher-density models, the shock propagation in those models is faster and the shock remains stronger when it reaches the WD surface. For example, in the lowest-density progenitor model AU0, the shock reaches the surface within $ \sim 120$ ms after its formation, while in the highest density model DU0, it needs only $ \sim 80$ ms. We point out that Dessart~et~al.~\cite{dessart_06_a, dessart_07_a} and previous AIC studies~\cite{woosley_92_a, fryer_99_a} reported significant shock stagnation in the postbounce phase of AIC due to the dissociation of infalling material and neutrino losses from the postshock region. Our present computational approach includes dissociation (through the EOS, see Sec. \ref{sec:eos}), but does not account for neutrino losses in the postbounce phase. Hence, the "prompt" explosions in our models are most likely an artifact of our incomplete treatment of the postbounce physics. \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f08.eps}} \caption{Times to core bounce from the onset of collapse as a function of initial central angular velocity $\Omega_{\mathrm{c,i}}$. Shown are the results of the high-$T$ model sequence (the low-$T$ models exhibit identical systematics). Models denoted by an unfilled (filled) circle undergo a pressure-dominated bounce with (without) significant prompt postbounce convection. Models marked by a cross undergo centrifugal bounce at subnuclear densities and models marked with a small (large) symbol are of set A (D). The colors correspond to various precollapse rotational configurations (see the legend in Fig.~\ref{fig:rhomaxb_vs_betaicb}). 
Note that due to their much higher initial compactness, the high-density D models (shown in the inset plot) have much shorter collapse times than their lower-density A counterparts.} \label{fig:collapse_times} \end{figure} \begin{figure} \centerline{\includegraphics[width = 86 mm]{Figures/f09.eps}} \caption{The maximum density $ \rho_{\mathrm{max,b}} $ at bounce as a function of the inner core parameter $ \beta_{\mathrm{ic,b}} $ at bounce for the entire set of high-$T$ AIC models. Due to the increasing role of centrifugal support, $ \rho_{\mathrm{max,b}} $ decreases monotonically with increasing rotation (see the main text for details). The symbol convention for the various sets is explained in the caption of Fig.~\ref{fig:collapse_times}. } \label{fig:rhomaxb_vs_betaicb} \end{figure} \subsection{Rotating AIC} \label{sec:rotating_collapse_dynamics} The AIC of rotating models proceeds through the same stages as AIC without rotation and exhibits similar general features, including the well defined split of the WD into an inner core that is in sonic contact and collapses quasi-homologously\footnote{The collapse is \emph{quasi}-homologous because in this case the relation between the infall velocity $v_r$ depends on both the radial coordinate and on the polar coordinate~\cite{zwerger_97_a}.}, and a supersonically infalling outer core. Conservation of angular momentum leads to an increase of the angular velocity $\Omega \propto \varpi^{-2}$ and of the centrifugal acceleration $a_{\mathrm{cent}} = \Omega^2 \varpi \propto \varpi^{-3}$. The latter has opposite sign to gravitational acceleration, hence provides increasing \emph{centrifugal support} during collapse, slowing down the contraction and, if sufficiently strong, leading to centrifugally-induced core bounce only slightly above nuclear density or even at subnuclear density~\cite{tohline_84_a,moenchmeyer_91_a}. \begin{figure} \centerline{\includegraphics[width = 86 mm]{Figures/f10.eps}} \caption{The inner core parameter $ \beta_{\mathrm{ic,b}} $ at the time of bounce for all low-$T$ AIC models plotted as a function of the precollapse central angular velocity $ \Omega_{\mathrm{c,i}} $. For models with slow to moderately-rapid rotation, $ \beta_{\mathrm{ic, b}} $ increases roughly linearly with $ \Omega_{\mathrm{c,i}} $ (at fixed $ \rho_{\mathrm{c,i}} $ and rotation law). In more rapidly rotating models, the growth of $ \beta_{\mathrm{ic, b}} $ saturates at $ \sim 24.5 \, \% $, and further increase of the progenitor rotation results in a decrease of $ \beta_{\mathrm{ic, b}} $. Since D models experience less spin-up during collapse than A models, an increase of $ \rho_{\mathrm{c,i}} $ at fixed $ \Omega_{\mathrm{c,i}} $ and rotation law results in a decrease of $ \beta_{\mathrm{ic,b}} $. A uniformly rotating model with a given $ \Omega_{\mathrm{c,i}} $ and $ \rho_{\mathrm{c,i}} $ reaches smaller $ \beta_{\mathrm{ic,b}} $ than the differentially rotating model with the same $ \Omega_{\mathrm{c,i}} $ and $ \rho_{\mathrm{c,i}} $. The symbol convention for the various sets is explained in the caption of Fig.~\ref{fig:collapse_times}.} \label{fig:betaic_vs_omega} \end{figure} Just as in the case of nonrotating AIC, models of set A collapse more slowly than D models because of the dependence of the collapse times on the initial central densities. However, due to centrifugal support, the collapse times grow with increasing precollapse rotation. 
This is visualized in Fig.~\ref{fig:collapse_times} in which we plot the time to core bounce as a function of the initial central angular velocity $\Omega_{\mathrm{c,i}}$. The maximum angular velocity of uniformly rotating models is limited by the WD surface mass-shedding limit and is $\sim 3.5\,\mathrm{rad\,s}^{-1}$ ($\sim 9.5\,\mathrm{rad\,s}^{-1}$) in model AU5 (DU7). The effect of rotation on the collapse time of uniformly rotating models (dark-green symbols in Fig.~\ref{fig:collapse_times}) is small and the time to core bounce increases by $\sim5\%$ from zero to maximum precollapse rotation in model set A. The more compact D models collapse much faster than their lower initial density A counterparts and, in addition, experience a smaller spin-up of their more compact inner cores. Hence, uniformly rotating D models are less affected by rotation and their collapse times vary by only $\sim 0.8\%$ from zero to maximum rotation. As mentioned in Sec.~\ref{sec:rotlaw}, WD models that rotate differentially according to the rotation law of Yoon \& Langer~\cite{yoon_05, yoon_04} have an angular velocity that increases from its central value $\Omega_{\mathrm{c,i}}$ with $\varpi$ up to a maximum $\Omega_{\mathrm{max,i}}$ at the cylindrical radius $\varpi_{\mathrm p}$, beyond which $\Omega$ decreases to sub-Keplerian values at the surface (see Fig.~\ref{fig:precollapse_omega}). The rate at which $ \Omega $ increases in the WD core is controlled by the shear parameter $ f_{\mathrm{sh}} $, which we choose in the range from $ 0.2 $ to $ 1 $. The case $ f_{\mathrm{sh}} = 0.2 $ corresponds to a \emph{nearly uniformly} rotating inner region, while $ f_{\mathrm{sh}} = 1 $ corresponds to strong differential rotation with $ \Omega (\varpi_{\mathrm p}) / \Omega_{\mathrm{c,i}} \sim 2-3 $. In mass coordinate, this corresponds to a ratio $ \Omega (M_{\mathrm{ic,b}}) / \Omega_{\mathrm{c,i}}$ of $ \sim 1.4 - 2.4 $, where $M_{\mathrm{ic,b}} \simeq 0.3\,M_\odot$ is the approximate mass that constitutes the inner core at bounce in a nonrotating WD model. In contrast to uniformly rotating models, differentially rotating WDs are \emph{not limited} by the mass shedding limit at the surface. As a result, $\Omega_{\mathrm{c,i}}$ can in principle be increased up to the point beyond which the precollapse WD inner core becomes fully centrifugally supported and does not collapse at all. For model set AD, this maximum of $\Omega_{\mathrm{c,i}}$ is $\sim 8\,{\mathrm{rad\,s}}^{-1}$ (the low-$T$ model AD13f4, which becomes centrifugally supported already at a central density of $\sim 7\times10^{10}\,\mathrm{g\,cm}^{-3}$) while the more compact DD models still collapse rapidly at $\Omega_{\mathrm{c,i}} \sim 18\,{\mathrm{rad\,s}}^{-1}$ (model DD7). As shown in Fig.~\ref{fig:collapse_times}, the most rapidly rotating AD model (AD13f4) reaches core bounce after a time which is $\sim 55\%$ larger than a nonrotating A model. For the most rapidly rotating DD model this difference is only $\sim 5\%$. In Fig.~\ref{fig:rhomaxb_vs_betaicb} we plot the maximum density $\rho_{\mathrm{max,b}}$ at bounce as a function of the inner core parameter $\beta_{\mathrm{ic,b}}$ at bounce. Slowly to moderately-rapidly rotating WDs that reach $\beta_{\mathrm{ic,b}} \lesssim 15\%$ are only mildly affected by rotation and their $\rho_{\mathrm{max,b}}$ decrease roughly linearly with increasing $\beta_{\mathrm{ic,b}}$, but stay close to $\rho_{\mathrm{nuc}}$. The effect of rotation becomes nonlinear in more rapidly rotating WDs. 
Models of our set that reach $\beta_{\mathrm{ic,b}} \gtrsim 18\%$ (\textit{i.e.,}~ AD models with $\Omega_{\mathrm{c,i}} \gtrsim 5\,{\mathrm{rad\, s}}^{-1}$) undergo core bounce induced partly or completely centrifugally at subnuclear densities. \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f11.eps}} \caption{Mass $M_0 (r_\mathrm{e})$ in units of solar masses of the WD inner region plotted as a function of the equatorial radial coordinate $ r_\mathrm{e} $ for a number of AIC models with slow and rapid rotation as well as high and small central densities. The inset plot shows the same on a linear radial scale. The mass distribution of the inner $ M_0 (r_\mathrm{e}) \lesssim 0.5 M_\odot $ region is largely independent of the rotational configuration, while an increase (decrease) of the central density leads to homologous contraction (expansion) of the inner regions. See Sec.~\ref{sec:parameter_space} for details of the model setups.} \label{fig:m_vs_r} \end{figure} \begin{figure}[t] \centerline{\includegraphics[width = 86 mm]{Figures/f12.eps}} \caption{Angular velocity profiles in the equatorial plane at the time of core bounce and time-evolution of the central density $ \rho_{\mathrm c} $ (in the inset) for models AU2 and DU7. The initial angular velocity of model DU7 is larger by a factor of $ \sim 5.3 $ than that of model AU2, but the latter experiences a $\sim 5.3$ greater spin-up during collapse. As a result, these models produce inner cores with almost identical rotational configurations and similar masses in the bounce phase. This is reflected in an identical evolution of the central densities at bounce.} \label{fig:omega_AU2_DU7} \end{figure} As shown in Fig.~\ref{fig:betaic_vs_omega}, $\beta_{\mathrm{ic,b}}$ is a monotonic function of $\Omega_{\mathrm{c,i}}$, but is very sensitive to both the rotation law and the initial WD compactness. Our most rapidly uniformly rotating models AU5 and DU7 (both near the mass-shedding limit) reach $\beta_{\mathrm{ic,b}}$ of $\sim 10.5\%$ and $\sim 3.6\%$, respectively. Hence, uniformly rotating WDs always undergo core bounce due to the stiffening of the nuclear EOS and with little influence of rotation on the dynamics. In models where centrifugal effects remain subdominant during collapse, $\beta_{\mathrm{ic,b}}$ grows practically linearly with $\Omega_{\mathrm{c,i}}$. This relationship flattens off for models that become partially or completely centrifugally supported near bounce. $\beta_{\mathrm{ic,b}}$ grows with increasing rotation up to $\sim 24.5\%$ (model AD13f4), beyond which any further increase in precollapse rotation leads to a \emph{decrease} of $\beta_{\mathrm{ic,b}}$, since the inner core becomes fully centrifugally supported before reaching high compactness and spin-up. In other words, there exists a \textit{``centrifugal limit''} beyond which centrifugal forces dominate and, as a result, increasing precollapse rotation leads to a decreasing $\beta_\mathrm{ic,b}$ at core bounce. This result is analogous to what previous studies \cite{dimmelmeier_08_a,ott_04_a} found in the rotating core collapse of massive stars and has consequences for the appearance of nonaxisymmetric rotational instabilities in PNSs. This will be discussed in more detail in Sec.~\ref{sec:rotinst}. The influence of the precollapse compactness on the dynamics of rotating AIC can also easily be appreciated from Fig.~\ref{fig:betaic_vs_omega}. 
The higher-density, more-compact WDs of set D spin up much less than their A counterparts since their inner cores are already very compact at the onset of collapse. Hence, a higher-density WD that reaches a given value of $\beta_{\mathrm{ic,b}}$ must have started out with a larger $\Omega_{\mathrm{c,i}}$ than a lower-density WD reaching the same $\beta_{\mathrm{ic,b}}$. For the particular choice of initial central densities represented by D and A models and in the case of uniform or near-uniform rotation, the ratio between the $\Omega_{\mathrm{c,i}}$ of a D and A model required to reach the same $\beta_{\mathrm{ic,b}}$ is $\sim 5.3$. This factor can be understood by considering Fig.~\ref{fig:m_vs_r} in which we plot the enclosed mass as a function of equatorial radial coordinate $ r_\mathrm{e} $ of selected A and D initial WD configurations with slow and rapid rotation. The important thing to notice is that the WD core structure ($M \lesssim 0.5 M_\odot$) is insensitive to the rotational configuration and obeys a homology relation. Stated differently, for a model of set D, a homologous expansion in the radial direction by a factor of $\sim 2.3$ yields an object whose inner part is very similar to a lower-density A model. In turn, the collapse of A models corresponds to a $\sim 2.3 $ times greater contraction of the WD core compared to their D model counterparts and a spin up that is greater by a factor of $\sim (2.3)^2 \simeq 5.3$. This explains the strong dependence of the inner-core angular velocity and $\beta_{\mathrm{ic,b}}$ on the initial central density observed in Fig.~\ref{fig:betaic_vs_omega}. Furthermore, it suggests that one can find A$-$D model pairs that differ greatly in their precollapse angular velocities, but yield the same rotational configuration at bounce. An example for this is shown in Fig.~\ref{fig:omega_AU2_DU7} in which we plot for the uniformly rotating model pair AU2-DU7 the equatorial angular velocity profile at the time of bounce as well as the evolution of the central density around the time of bounce. AU2 and DU7 have practically identical angular velocity profiles and their core structure, core mass and $\beta_{\mathrm{ic,b}}$ agree very closely. As can be seen in the inset plot of Fig.~\ref{fig:omega_AU2_DU7}, this results in nearly identical $\rho_\mathrm{c}$ time evolutions around bounce and demonstrates that WDs with quite different precollapse structure and rotational setup can produce identical bounce and postbounce dynamics. This can also occur for pairs of differentially rotating models and is an important aspect to keep in mind when interpreting the GW signal from AIC discussed in Sec.~\ref{sec:GW}. \begin{figure} \centerline{\includegraphics[width = 86 mm]{Figures/f13.eps}} \caption{Mass $ M_{\mathrm{ic,b}} $ of the inner core at bounce for all high-$T$ models versus the parameter $ \beta_{\mathrm{ic,b}} $ of the inner core at bounce. The systematics of $ M_{\mathrm{ic,b}} $ with $\beta_{\mathrm{ic,b}}$ are identical for the set of low-$T$ models, but their $ M_{\mathrm{ic,b}} $ are generally $\sim 10\%$ larger. 
The symbol convention for the various sets is explained in the caption of Fig.~\ref{fig:collapse_times}.} \label{fig:mic_vs_beta_ic} \end{figure} \begin{figure*}[t] \vspace{0.1cm} \centerline{\epsfxsize = 7.8 cm \epsfbox{Figures/f14a.eps} \hspace{0.5cm} \epsfxsize = 7.8 cm \epsfbox{Figures/f14b.eps}} \vspace{0.5cm} \centerline{\epsfxsize = 7.8 cm \epsfbox{Figures/f14c.eps} \hspace{0.5cm} \epsfxsize = 7.8 cm \epsfbox{Figures/f14d.eps}} \caption{Profiles of radial velocity (top panels) and specific entropy per baryon (bottom panels) at different postbounce times for AIC model DD7.} \label{fig:shock_propagation_DD} \end{figure*} Figure~\ref{fig:mic_vs_beta_ic} shows the mapping between $\beta_{\mathrm{ic,b}}$ and the inner core mass $M_{\mathrm{ic,b}}$ at bounce for all high-$T$ models. Rapid (differential) rotation not only increases the equilibrium mass of WDs (see Table~\ref{tab:initial_models}), but rotational support also increases the extent of the region in sonic contact during collapse. Hence, it may be expected that $M_{\mathrm{ic,b}}$ grows with increasing rotation. However, for WDs with $\beta_{\mathrm{ic,b}} \lesssim 13\%$, $M_{\mathrm{ic,b}}$ is essentially unaffected by rotation and stays within $0.02\,M_\odot$ of the nonrotating value of $0.28\,M_\odot$. Only when the effects of rotation become strong at $\beta_{\mathrm{ic,b}} \gtrsim 13-18\%$ does $M_{\mathrm{ic,b}}$ increase roughly linearly with $\beta_{\mathrm{ic,b}}$. WDs that undergo centrifugal bounce have $\beta_{\mathrm{ic,b}} \gtrsim 18\%$ and correspondingly large inner cores that are more massive than $\sim 0.5\,M_\odot$. Such values of $M_{\mathrm{ic,b}}$ are accessible only to differentially rotating WDs. Also for rotating models, the dependence of the AIC dynamics on the initial temperature of AIC progenitor WDs is simple and straightforwardly understood from the nonrotating results discussed in Sec.~\ref{sec:nonrotating_collapse_dynamics}. These show that the low-$T$ models yield inner cores that are $\sim 10\%$ larger in mass than those of their twice-as-hot high-$T$ counterparts. Due to their larger mass, the inner cores of collapsing low-$T$ AIC progenitors also contain a larger amount of angular momentum. At fixed rotation law and $\Omega_{\mathrm{c,i}}$, they reach values of $\beta_{\mathrm{ic,b}}$ that are larger by up to $ \sim 5\%$ (in absolute value). Hence, lower-$T$ WDs become affected by centrifugal support, bounce centrifugally, and reach the centrifugal limit at lower $\Omega_{\mathrm{c,i}}$ than their higher-$T$ counterparts. Test calculations in which we impose increased inner-core values of $Y_e$ (see Secs.~\ref{sec:deleptonization} and \ref{sec:tempye}) behave along the same lines. The increased $Y_e$ leads to more massive and more extended inner cores, which, in turn, are more likely to experience centrifugal support.
To conclude our discussion of rotating AIC, we summarize for the reader that the PNSs born from the set of differentially (uniformly) rotating AIC models considered here have average angular velocities\footnote{The average angular velocity $ \bar{\Omega} $ of the differentially rotating models considered here is computed using the approximation $ \bar{\Omega} = J_{\mathrm{ic}} / I_{\mathrm{ic}} $, where $ J_{\mathrm{ic}} $ is the inner core angular momentum and $ I_{\mathrm{ic}} $ is the (Newtonian) inner core moment of inertia.} in the range from $ 0 $ to $ \sim 5\ {\mathrm{rad \, ms}}^{-1}$ ($ \sim 3.3\, {\mathrm{rad \, ms}}^{-1}$), while their pole to equator axis ratios vary from $ 1 $ to $ \sim 0.4 $ ($ \sim 0.6 $)\footnote{The PNS formed in the AIC of WDs is surrounded by hot low density material in the early postbounce phase, making it hard to define the boundary of the PNS unambiguously. For the present rough estimate of the axis ratio, we assume a density threshold of $ 10^{12} \, {\mathrm{g \, cm^{-3}}} $ to mark the boundary of the PNS.}. Some of the rapidly rotating WDs produce PNSs with a slightly off-center maximum in density, though the density distribution of the inner regions does not exhibit a pronounced toroidal geometry. The clearest deviation from a centrally-peaked density distribution is produced in the case of model AD10, which reaches $\beta_{\mathrm{ic,b}} \simeq 21.3 ~\% $ ($ \beta_{\mathrm{ic,b}} \simeq 24.4 ~ \% $) in its high-$T$ (low-$T$) variant. In this model, the point of highest density after bounce is located at $ r \simeq 0.94 $ km, but the maximum value is larger than the central density value by only $ \sim 0.3 \, \% $. For models with less rapid rotation, the off-center maximum is much less pronounced, and completely disappears for $ \beta_{\mathrm{ic,b}} $ below $ \sim 20 \, \% $. \subsection{Shock Propagation and the Formation of Quasi-Keplerian Disks} \label{sec:shock_disc} As pointed out earlier (see Sec.~\ref{sec:nonrotating_collapse_dynamics}), all AIC models considered in this study undergo weak hydrodynamic explosions. This is an artifact of our approach that neglects postbounce neutrino emission, but is unlikely to strongly affect the results presented in this section, since in the MGFLD simulations of \cite{dessart_06_a}, the shock stalls only for a very short period and a weak explosion is quickly initiated by neutrino heating. In moderately-rapidly and rapidly rotating AIC (with $ \beta_{\mathrm{ic, b}} \gtrsim 5 \, \% $), the shock propagation is significantly affected by centrifugal effects. The material near the equatorial plane of rotating WDs experiences considerable centrifugal support, and its collapse dynamics is slowed down. As a consequence, the bounce is less violent and the bounce shock starts out weaker near the equatorial plane than along the poles. Centrifugal support of low-latitude material also leads to reduced postbounce mass accretion rates near the equatorial plane, facilitating steady propagation of the shock at low latitudes. In the polar direction, where centrifugal support is absent, the shock propagates even faster due to the steeper density gradient and smaller polar radius of the WD. This quickly leads to a prolate deformation of the shock front in all rotating models and the shock hits the polar WD surface much before it breaks out of the equatorial envelope. 
This is shown in Fig.~\ref{fig:shock_propagation_DD}, where we plot the equatorial and polar profiles of the radial velocity and specific entropy per baryon for model DD7 at various postbounce times. Due to the prolateness of the shock front, it breaks out of the polar surface $ \sim 130 $ ms before reaching the WD's equatorial surface. Moreover, due to the anisotropy of the density gradient and the initial shock strength, the specific entropy of the shock-heated material is larger by a factor of $ \sim 2 - 3 $ along the polar direction. The asphericity of the shock front and the anisotropy of the shock strength become more pronounced in AIC with increasing rotation \cite{dimmelmeier_02_b, ott:06spin}. As pointed out in Section~\ref{sec:nonrotating_collapse_dynamics}, due to their greater initial compactness and thus steeper density gradients, the shock propagates faster in D models: In model DD1, for example, the shock reaches the surface in the equatorial plane within $ \sim 88 $ ms, while for model AD1, the corresponding time is $ \sim 143 $ ms. Rapid rotation and, in particular, rapid differential rotation increase the maximum allowable WD mass. The most rapidly uniformly rotating WDs in our model set (\textit{i.e.,}~ models DU7 and AU5) have an equilibrium mass of $\sim 1.46 M_\odot$, which is only slightly above $M_\mathrm{Ch}$ in the nonrotating limit. Our most rapidly differentially rotating WDs (models AD13f2 and DD7), on the other hand, reach equilibrium masses of up to $\sim 2\,M_\odot$. Much of the rotationally supported material is situated at low latitudes in the outer WD core, falls in only slightly during collapse, and forms a quasi-Keplerian disk-like structure. The equatorial bounce shock is not sufficiently strong to eject much of the disk material and ``wraps'' around the disk structure, producing only a small outflow of outer disk material at $ v_r \lesssim 0.025 c $. This is in agreement with Dessart et al.~\cite{dessart_06_a}, who first pointed out that rapidly rotating AIC produces PNSs surrounded by massive quasi-Keplerian disk-like structures in the early postbounce phase. As recently investigated by Metzger~et~al.~\cite{metzger:09} (but not simulated here), the hot disk will experience neutrino-cooling on a timescale of $ \sim 0.1$ s, driving the disk composition neutron-rich, with $ Y_e $ reaching $ \sim 0.1 $ \cite{metzger:09, dessart_06_a}, depleting the pressure support and leading to limited contraction of the inner parts of the disk. The outer and higher-latitude regions expand with a neutrino-driven wind \cite{dessart_06_a}. As discussed by~\cite{metzger:09}, subsequent irradiation of the disk by neutrinos from the PNS increases its proton-to-neutron ratio, and $ Y_e $ may reach values as high as $ \sim 0.5 $ by the time the weak interactions in the disk freeze out. The disk becomes radiatively inefficient, $ \alpha $-particles begin to recombine, and a powerful disk wind develops, blowing off most of the disk's remaining material. Metzger~et~al.~\cite{metzger:09} argue that, depending on disk mass, the outflows synthesize of the order of $10^{-3} - 10^{-2}\,M_\odot$ of ${}^{56} {\mathrm{Ni}} $, but very small amounts of intermediate-mass isotopes, making such AIC explosions spectroscopically distinct from ${}^{56} {\mathrm{Ni}} $ outflows in standard core-collapse and thermonuclear SNe. \begin{table} \small \centering \caption{Summary of properties of the quasi-Keplerian disks formed in the set of AIC models AD, AU, DD and DU.
$ H_{\mathrm{disk}} $ is the thickness and $ R_{\mathrm e} $ is the equatorial radius of the disk, while $ M_{\mathrm{disk}} $ is its mass. These quantities are computed at the time when the shock reaches the WD surface in the equatorial plane. The disk parameters do not vary significantly between the two choices of WD temperature considered in this study.} \label{tab:remnant_disc} \begin{tabular}{@{~~}l@{~~~~~}c@{~~~~~}c@{~~~~~}c@{~~~~~}} \hline \\ [-1 em] Collapse & $ R_{\mathrm e} $ & $ H_{\mathrm{disk}} / R_{\mathrm e} $ & $ M_{\mathrm{disk}} $ \\ model & [km] & & [$ M_\odot $] \\ \hline \\ [-0.5 em] AU1 & $347$ & $0.928$ & $\lesssim\!10^{-3}$\\ AU2 & $401$ & $0.903$ & $\lesssim\!10^{-3}$\\ AU3 & $447$ & $0.848$ & $\lesssim\!10^{-3}$\\ AU4 & $732$ & $0.577$ & $0.002$\\ AU5 & $907$ & $0.484$ & $0.030$\\ [0.5 em] DU1 & $248$ & $0.980$ & $\lesssim\!10^{-3}$\\ DU2 & $249$ & $0.971$ & $\lesssim\!10^{-3}$\\ DU3 & $249$ & $0.952$ & $\lesssim\!10^{-3}$\\ DU4 & $261$ & $0.916$ & $\lesssim\!10^{-3}$\\ DU5 & $291$ & $0.801$ & $\lesssim\!10^{-3}$\\ DU6 & $332$ & $0.701$ & $\lesssim\!10^{-3}$\\ DU7 & $350$ & $0.671$ & $0.007$\\ [0.5 em] AD1 & $866$ & $0.479$ & $0.030$\\ AD2 & $935$ & $0.452$ & $0.038$\\ AD3 & $1118$ & $0.437$ & $0.093$\\ AD4 & $1321$ & $0.410$ & $0.222$\\ AD5 & $1558$ & $0.374$ & $0.323$\\ AD6 & $1638$ & $0.370$ & $0.356$\\ AD7 & $1784$ & $0.377$ & $0.470$\\ AD8 & $1912$ & $0.382$ & $0.507$\\ AD9 & $2278$ & $0.342$ & $0.607$\\ AD10 & $2700$ & $0.296$ & $0.805$\\ [0.5 em] DD1 & $360$ & $0.669$ & $0.005$\\ DD2 & $402$ & $0.597$ & $0.008$\\ DD3 & $461$ & $0.525$ & $0.019$\\ DD4 & $554$ & $0.466$ & $0.054$\\ DD5 & $670$ & $0.436$ & $0.161$\\ DD6 & $853$ & $0.374$ & $0.279$\\ DD7 & $1313$ & $0.255$ & $0.507$\\ \hline \end{tabular} \end{table} Our results, summarized in Tab.~\ref{tab:remnant_disc}, show that the masses and the geometry of the disks produced in AIC are sensitive to the angular momentum distribution in the precollapse WDs. In models with uniform rotation below the mass-shedding limit, only a very small amount of low-latitude material rotates at near-Keplerian angular velocities. Therefore, most of the outer core material of such models undergoes significant infall, so that uniformly rotating WDs will generally produce small disks. The largest disk mass for uniform rotation is $ M_{\mathrm{disk}} \sim 0.03 M_\odot $\footnote{We point out that because the disks do not settle down to exact equilibrium right after bounce or not even after shock passage, it is hard to introduce an unambiguous definition of the disk mass. In the present study, we define the disk as the structure that surrounds the PNS at $ \varpi > 20 $ km with densities below $ 10^{11} ~ {\mathrm{g ~ cm}^{-3} } $ and angular velocity $ \Omega > 0.58\, \Omega_{\mathrm K} $. The latter condition ensures that the disk cannot contract by more than a factor of $ \sim 3 $ as a result of cooling.} and is produced in model AU5 which rotates near the mass-shedding limit. Since the angular velocity of the outer ($ \varpi > \varpi_{\mathrm p} $) core of differentially rotating models is set to reach nearly-Keplerian values (cf.\ Eq.~(\ref{eqq:omega_outer_core})), most of the outer WD envelope has substantial centrifugal support and thus the differentially rotating models yield significantly larger $ M_{\mathrm{disk}} $. For example, model AD4 which has $ \beta_{\mathrm{ic,b}} $ and total angular momentum comparable to model AU5 yields a disk mass of $\sim 0.2 M_\odot $. 
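A minimal sketch of the contraction bound behind the $ \Omega > 0.58\, \Omega_{\mathrm K} $ criterion adopted in the disk definition above (assuming that a fluid ring conserves its specific angular momentum $ j = \Omega \varpi^2 $ and can contract at most to its circularization radius in the monopole potential of the enclosed mass $M$): with $ \Omega_{\mathrm K} = \sqrt{GM/\varpi^3} $,
\[
\varpi_{\mathrm{circ}} = \frac{j^2}{GM} = \left(\frac{\Omega}{\Omega_{\mathrm K}}\right)^{2} \varpi
\quad \Longrightarrow \quad
\frac{\varpi}{\varpi_{\mathrm{circ}}} \le (0.58)^{-2} \simeq 3\,,
\]
which recovers the factor of $ \sim 3 $ quoted in the footnote.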
The total mass of the disk and its equatorial radius (the disk thickness $ H_{\mathrm{disk}} $) grow with increasing rotation (see Tab.~\ref{tab:remnant_disc}). Slowly rotating models such as AD1 have little centrifugally supported material and acquire spheroidal shape soon after bounce, resulting in a disk mass as small as $ \sim 0.03 M_\odot $. More rapidly rotating models such as AD10 produce significantly more strongly flattened disks, with $ R_\mathrm{e} \sim 2700 $ km, $ H_{\mathrm{disk}} \sim 800 $ km and a disk mass $ M_{\mathrm{disk}} $ of $\sim 0.8 M_\odot $. Due to the greater initial compactness of the higher-density D models, their disks are less massive and have smaller equatorial radii when compared to A models. Hence, when considering two WDs of set A and D with the same total angular momentum, the mass and equatorial extent of the disk in the D model will be smaller by a factor of $ \sim 1.5 - 2 $. These results indicate that massive disks of $ M_{\mathrm{disk}} \gtrsim 0.1 M_\odot $ are unlikely to be compatible with the assumption of uniformly rotating accreting WDs argued for by~\cite{saio_04, piro_08_a}. In order to produce disks of appreciable mass and significant ${}^{56}$Ni-outflows in AIC, the progenitor must either be an accreting WD obeying a differential rotation law similar to that proposed by~\cite{yoon_04,yoon_05}, or may be the remnant of a binary-WD merger event. However, for the latter, the differential rotation law is unknown and may be very different from what we consider here (for a discussion of binary-WD merger simulations, see, \textit{e.g.,}~ \cite{rosswog:09} and references therein). \section{Gravitational wave emission} \label{sec:GW} \begin{figure} \centerline{\includegraphics[width = 86 mm]{Figures/f15.eps}} \caption{Evolution of the dimensionless GW strain $ h $ (in units of $ 10^{-21} $ at a source distance of $ 10 \ \mathrm{kpc} $) as a function of postbounce time for representative models with different precollapse rotation profiles, central densities and temperatures (low-$T$ models with solid black lines and high-$T$ with dashed red lines). Models with slow and (almost) uniform precollapse rotation (\textit{e.g.,}~ AD1f4 or DU3) develop considerable prompt postbounce convection visible as a dominating lower-frequency contribution in the waveform. Centrifugal effects damp this prompt convection and the waveforms of models with moderately-rapid rotation (\textit{e.g.,}~ AD1, AU4, DD7 and AD8) and of rapidly rotating models (\textit{e.g.,}~ AD9 or AD10) exhibit no such contribution to the signal.} \label{fig:gw_strain_representative_models} \end{figure} In Fig.~\ref{fig:gw_strain_representative_models}, we present the time evolution of the GW strain $ h $ at an assumed source distance of $ 10\,\mathrm{kpc} $ for a representative set of AIC models evolved with the $\overline{Y_e}(\rho)$ parametrization obtained from \cite{dessart_06_a}. The GW signals of all models have the same overall morphology. This general AIC GW signal shape bears strong resemblance to GW signals that have been classified as ``type~III'' in the past~\cite{zwerger_97_a, dimmelmeier_02_b, ott:09rev}, but also has features in common with the GW signals predicted for rotating iron core collapse (``type~I'', \cite{dimmelmeier_08_a}). The GW strain $ h $ in our AIC models is positive in the infall phase and increases monotonically with time, reaching its peak value in the plunge phase, just $ \sim 0.1\,{\mathrm{ms}}$ before bounce. 
Then, $ h $ rapidly decreases, reaching a negative peak value within $ \sim 1-2\,{\mathrm{ms}} $. While the first positive peak is produced by rapid infall of the inner core, the first negative peak is caused by the reversal of the infall velocities at bounce. Following the large negative peak, $ h $ oscillates with smaller amplitude with a damping time of $ \sim 10 $ ms, reflecting the hydrodynamical ringdown oscillations of the PNS. Although all AIC models of our baseline set produce type~III signals, we can introduce three subtypes whose individual occurrence depends on the parameter $\beta_{\mathrm{ic,b}}$ of the inner core at bounce: \textbf{Type IIIa}. In slowly rotating WDs (that reach $ \beta_{\mathrm{ic,b}} \lesssim 0.7 \, \% $), strong prompt convective overturn develops in the early postbounce phase, adding a lower-frequency contribution to the regular ringdown signal (\textit{e.g.,}~ models AD1f4, DU3). The largest-amplitude part of this GW signal type comes from the prompt convection. Nevertheless, the GW signal produced by the bouncing centrifugally-deformed inner core is still discernible, with the first positive peak being generally larger by a factor of $ \sim 2 $ than the first negative peak. Subsequent ringdown peaks are smaller by a factor of $ \gtrsim 3 $. We point out that the observed prompt postbounce convection is most likely overestimated in our approach, since we do not take into account neutrino losses and energy deposition by neutrinos in the immediate postshock region, whose effect will quickly smooth out the negative entropy gradient left behind by the shock and thus significantly damp this early convective instability in full postbounce radiation-hydrodynamics calculations (see, \textit{e.g.,}~~\cite{mueller_04_a, buras_06_a, ott:09rev}). \textbf{Type IIIb}. In moderately rapidly rotating WDs that reach $ 0.7 \, \% \lesssim \beta_{\mathrm{ic,b}} \lesssim 18 \, \% $ and still experience a pressure-dominated bounce, convection is effectively suppressed due to a sufficiently large positive specific angular momentum gradient~(\textit{e.g.,}~ \cite{ott:08sn}). Hence, there is no noticeable convective contribution to the postbounce GW signal (see, \textit{e.g.,}~ models AD1, AU4, DD7, AD8). For this signal subtype, the peak GW strain $ |h|_{\mathrm{max}} $ is associated with the first positive peak, while the relative amplitudes of the first several peaks are similar to those of type~IIIa. \textbf{Type IIIc}. If rotation is sufficiently rapid to lead to $ \beta_{\mathrm{ic,b}} \gtrsim 18 \, \% $, the core bounces at subnuclear densities due to strong centrifugal support. This is reflected in the GW signal by an overall lower-frequency emission and a significant widening of the bounce peak of the waveform (see, \textit{e.g.,}~ models AD9, AD10). In some models of this subtype, the negative peak can be comparable to or slightly exceed the first positive peak in the waveform. This reflects the fact that the plunge acceleration is apparently reduced more significantly by rotation than is the re-expansion acceleration at core bounce. The postbounce ringdown peaks in all type~IIIc models are smaller by a factor of $ \gtrsim 2 $ compared to the bounce signal. As pointed out in Sec.~\ref{sec:rotating_collapse_dynamics}, uniformly rotating models do not rotate sufficiently rapidly to experience centrifugal bounce. Hence, they do not produce a type~IIIc signal.
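To make the subtype boundaries explicit, the following minimal Python sketch (illustrative only, not part of our analysis pipeline; the function name and the example values of $\beta_{\mathrm{ic,b}}$ are hypothetical) assigns the signal subtype from $\beta_{\mathrm{ic,b}}$ using the thresholds quoted above:
\begin{verbatim}
def aic_gw_signal_subtype(beta_icb_percent):
    """Return the type III subtype for a given beta_ic,b (in percent)."""
    if beta_icb_percent < 0.7:
        return "IIIa"   # slow rotation; prompt convection shapes the signal
    elif beta_icb_percent < 18.0:
        return "IIIb"   # pressure-dominated bounce, convection suppressed
    else:
        return "IIIc"   # centrifugal bounce at subnuclear density

# Hypothetical example values of beta_ic,b in percent:
for beta in (0.4, 10.0, 21.0):
    print(beta, aic_gw_signal_subtype(beta))
\end{verbatim}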
\vskip.3cm The AIC GW signal morphology is affected only slightly by variations in WD temperature and their resulting changes in the inner core $Y_e$ that are on the few-percent level for the range of precollapse temperatures considered here. In test calculations with more substantially increased inner-core values of $Y_e$ (see Secs.~\ref{sec:deleptonization} and \ref{sec:tempye}) and, in turn, significantly larger values of $M_\mathrm{ic,b}$, we find signals that are intermediate between type~III and type~I. Key quantitative results from our model simulations are summarized in Tables~\ref{tab:collapse_models} and \ref{tab:detectability}. The waveform data for all models are available for download from \cite{stellarcollapseGW}. \subsection{Peak Gravitational Wave Amplitude} \label{sec:peak_amplitude} \begin{figure} \centerline{\includegraphics[width = 86 mm]{Figures/f16.eps}} \caption{Peak value $ |h|_{\mathrm{max}} $ of the GW amplitude at a source distance of $ 10 $ kpc for the \textit{burst} signal of all models versus the parameter $ \beta_{\mathrm{ic,b}} $ of the inner core at the time of bounce. At slow to moderately rapid rotation, $|h|_{\mathrm{max}} $ scales almost linearly with $ \beta_{\mathrm{ic,b}} $ (marked by the dotted straight line), while for $ \beta_{\mathrm{ic,b}} \gtrsim 16\% $ centrifugal effects reduce $ |h|_{\mathrm{max}} $. The symbol convention for the various sets is explained in the caption of Fig.~\ref{fig:collapse_times}.} \label{fig:h_max_vs_beta} \end{figure} Across our entire model set, the peak GW amplitude $ |h|_{\mathrm{max}} $ covers a range of almost two orders of magnitude, from $\sim 10^{-22}$ to $\sim 10^{-20}$ (at a source distance of $10\,\mathrm{kpc}$; see Tab.~\ref{tab:collapse_models}). $|h|_\mathrm{max}$ depends on various parameters, and it is difficult to provide a simple description of its systematics that encompasses all cases. In order to gain insight into how $|h|_\mathrm{max}$ depends on $\Omega_{\mathrm{c,i}} $, on differential rotation, on the initial $\rho_{\mathrm c}$, on the precollapse WD temperature, and on the degree of deleptonization in collapse, we describe below the effects of variations in one of these parameters while holding all others fixed. \textit{(i)} In a sequence of precollapse WDs with fixed differential rotation, $\rho_\mathrm{c}$, and $T_\mathrm{0}$, the peak GW amplitude $ |h|_{\mathrm{max}} $ increases steeply with $\Omega_{\mathrm{c,i}} $ in slowly rotating models that do not come close to being centrifugally supported. When centrifugal effects become dynamically important, $ |h|_{\mathrm{max}} $ saturates at $ \sim 7 \times 10^{-21} $ (at $ 10\,\mathrm{kpc}$) and then decreases with increasing $ \Omega_{\mathrm{c,i}} $. This reflects the fact that such rapidly spinning inner cores produced by AIC cannot reach high densities and high compactness and that the slowed-down collapse decreases the deceleration at bounce, thus reducing $ |h|_{\mathrm{max}} $ and pushing the GW emission to lower frequencies. \begin{figure} \centerline{\includegraphics[width = 86 mm]{Figures/f17.eps}} \caption{Spectral energy density of the GW signal for representative AIC models AD1f3 (top panel), AD4 (center panel), and AD10 (bottom panel).
$f_\mathrm{max}$ is the peak frequency of the GWs emitted at core bounce.} \label{fig:gw_spectrum_representative_models} \end{figure} \textit{(ii)} In a sequence of precollapse WDs with fixed $ \Omega_{\mathrm{c, i}} $, $T_\mathrm{0}$, and $\rho_\mathrm{c}$, an increase in the degree of WD differential rotation leads to an increase in the amount of angular momentum present in the WD inner core at bounce. This translates into an increase of $ |h|_{\mathrm{max}} $ in models that do not become centrifugally supported and experience a pressure-dominated bounce. The transition to centrifugal bounce is now reached at lower values of $\Omega_{\mathrm{c, i}} $ (see Sec.~\ref{sec:rotating_collapse_dynamics}), so that the centrifugal saturation of $ |h|_{\mathrm{max}} $ described above in \emph{(i)} is reached at much smaller values of $\Omega_{\mathrm{c, i}} $. \textit{(iii)} In a sequence of precollapse WDs with fixed $ \Omega_{\mathrm{c, i}} $, $T_\mathrm{0}$, and differential rotation and varying $\rho_{\mathrm c}$, models with lower (higher) $\rho_\mathrm{c}$ yield larger (smaller) values of $ |h|_{\mathrm{max}} $. This is because models that are initially less compact spin up more during collapse (\textit{cf.}~ the discussion in Sec.~\ref{sec:rotating_collapse_dynamics}). However, these systematics hold only as long as the model does not become centrifugally supported, which happens for lower (higher) $\rho_c$ WDs at smaller (greater) $ \Omega_{\mathrm{c, i}} $. \begin{figure} \centerline{\includegraphics[width = 86 mm]{Figures/f18.eps}} \caption{Frequency $ f_{\mathrm{max}} $ at the maximum of the GW spectral energy density in pressure-dominated bounce models and in a subset of centrifugal bounce models versus the parameter $ \beta_{\mathrm{ic,b}} $ of the inner core at bounce. The symbol convention for the various sets is explained in the caption of Fig.~\ref{fig:collapse_times}. } \label{fig:fmax_vs_beta} \end{figure} \textit{(iv)} When only the WD temperature is varied, we find that for slowly to moderately-rapidly rotating WDs, high-$T$ models generally reach smaller $ |h|_{\mathrm{max}} $ than their low-$T$ counterparts. This is because high-$T$ WDs yield smaller inner cores at bounce, which hold less angular momentum and, as a consequence, are less centrifugally deformed (see Tab.~\ref{tab:collapse_models} and Fig.~\ref{fig:gw_strain_representative_models}). However, this behavior reverses in rapidly rotating WDs for which low-$T$ models are more centrifugally affected and, hence, yield a smaller $ |h|_{\mathrm{max}} $ than their high-$T$ counterparts. \textit{(v)} If the degree of deleptonization is decreased by an ad-hoc increase of inner-core $Y_e$ (see Secs.~\ref{sec:deleptonization} and \ref{sec:tempye}) and all else is kept fixed, $M_\mathrm{ic,b}$ increases and, for slowly to moderately rapidly rotating WDs, $ |h|_{\mathrm{max}} $ increases. As for the low-$T$ case discussed above, this behavior reverses in rapidly rotating WDs for which high-$Y_e$ models are more centrifugally affected and yield smaller $ |h|_{\mathrm{max}} $ than their lower-$Y_e$ counterparts. To demonstrate the dependence of $ |h|_{\mathrm{max}} $ on the overall rotation of the inner core at bounce, we plot in Fig.~\ref{fig:h_max_vs_beta} $ |h|_{\mathrm{max}} $ as a function of the inner core parameter $ \beta_{\mathrm{ic,b}} $ at bounce for our high-$T$ models.
$|h|_{\mathrm{max}}$ depends primarily on $\beta_{\mathrm{ic,b}}$ and is rather independent of the particular precollapse configuration that leads to a given $\beta_{\mathrm{ic,b}}$. For small $\beta_{\mathrm{ic,b}}$ far away from the centrifugal limit, we find $|h|_{\mathrm{max}} \propto \beta_\mathrm{ic,b}^{0.74}$, where we have obtained the exponent by a power-law fit of high-$T$ models with $1\% \lesssim \beta_\mathrm{ic,b} \lesssim 13\%$. This finding is in qualitative agreement with what~\cite{dimmelmeier_08_a} saw for iron core collapse. The overall maximum of $|h|_{\mathrm{max}}$ is reached in WDs that yield $\beta_{\mathrm{ic,b}} \sim 16\%$, beyond which $ |h|_{\mathrm{max}} $ decreases with increasing $\beta_{\mathrm{ic,b}}$. \subsection{Gravitational-Wave Energy Spectrum} \label{sec:gw_spectrum} \begin{figure*}[t] \begin{center} \hspace*{-.4cm}\includegraphics[width = 63 mm]{Figures/f19a.ps} \hspace*{-.25cm}\includegraphics[width = 63 mm]{Figures/f19b.ps} \hspace*{-.25cm}\includegraphics[width = 63 mm]{Figures/f19c.ps}\hspace*{-.3cm} \end{center} \caption{Time-frequency colormaps of the GW signals of models AD1f3 (type~IIIa), AD4 (type~IIIb), and AD10 (type~IIIc, see also Fig.~\ref{fig:gw_spectrum_representative_models}). Plotted is the ``instantaneous'' spectral GW energy density $dE_{\mathrm{GW}} / df$ in a $2$-ms Gaussian window as a function of postbounce time. Note the prebounce low-frequency contribution in the moderately-rapidly rotating models (model AD4, center panel) and rapidly rotating models (model AD10, right panel). The range of the colormap of the left panel (model AD1f3) is smaller by one dex than those of the other panels.} \label{fig:gw_tf} \end{figure*} The total energy emitted in GWs is in the range of $ \sim 10^{-10} M_\odot c^2 \lesssim E_{\mathrm{GW}} \lesssim 2 \times 10^{-8} M_\odot c^2 $ in the entire set of models considered in this article. In Fig.~\ref{fig:gw_spectrum_representative_models}, we plot the GW spectral energy density $ d E_{\mathrm{GW}} / d f $ of the three models AD1f3, AD4 and AD10 as representative examples of the three signal subtypes IIIa--IIIc. The top panel shows model AD1f3 as a representative pressure-dominated bounce model with prompt convection. In such models, there is a strong, structured, but broad contribution to the spectrum at low frequencies. The integral of such a contribution (which is present in all models with slow rotation) can exceed that from core bounce in these models. This is not the case in model AD1f3, whose GW burst from bounce is the one leading to the peak at $f_{\mathrm{max}} = 720\,\mathrm{Hz}$. The central panel of Fig.~\ref{fig:gw_spectrum_representative_models} depicts $ d E_{\mathrm{GW}} / d f $ of model AD4 as a representative pressure-dominated bounce model in which no significant postbounce convection occurs. The spectrum of this model exhibits a distinct and narrow high-frequency peak at $ f_{\mathrm{max}} \sim 805 $ Hz. Finally, the bottom panel of Fig.~\ref{fig:gw_spectrum_representative_models} refers to model AD10, which experiences centrifugal bounce. In this model, the dynamics is dominated by centrifugal effects, leading to low-frequency emission and $f_{\mathrm{max}} = 310\,\mathrm{Hz}$, but higher-frequency components are still discernible and are most likely related to prolonged higher-frequency GW emission from the PNS ringdown.
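As a concrete illustration of how these spectral quantities can be read off a model spectrum, the following minimal Python sketch uses a placeholder Gaussian standing in for an actual model's $dE_{\mathrm{GW}}/df$ and adopts one plausible reading of $\Delta f_{50}$ (the interval around the peak containing half of $E_{\mathrm{GW}}$, listed later in Tab.~\ref{tab:detectability}) as a symmetric window about $f_{\mathrm{max}}$:
\begin{verbatim}
import numpy as np

# Illustrative sketch only: locate the peak frequency f_max of a GW spectral
# energy density dE_GW/df and the interval around it containing 50% of the
# emitted energy.  The spectrum below is a placeholder, not simulation data.
f = np.linspace(10.0, 2000.0, 2000)                  # frequency grid [Hz]
dEdf = np.exp(-0.5 * ((f - 800.0) / 40.0) ** 2)      # placeholder, peaks at 800 Hz

i_max = np.argmax(dEdf)
E_tot = np.trapz(dEdf, f)

# Grow a symmetric index window about the peak until it holds 50% of E_tot
# (one plausible reading of the Delta f_50 definition).
lo = hi = i_max
while np.trapz(dEdf[lo:hi + 1], f[lo:hi + 1]) < 0.5 * E_tot:
    lo, hi = max(lo - 1, 0), min(hi + 1, len(f) - 1)

print("f_max = %.0f Hz, Delta f_50 = %.0f Hz" % (f[i_max], f[hi] - f[lo]))
\end{verbatim}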
In Fig.~\ref{fig:fmax_vs_beta}, we plot the peak frequencies $ f_{\mathrm{max}} $ of the GW energy spectrum as a function of the inner core parameter $ \beta_{\mathrm{ic,b}} $ for high-$T$ AIC models (the low-$T$ and higher-$Y_e$ models show the same overall systematics). In models that undergo pressure-dominated bounce, $ f_{\mathrm{max}} $ increases nearly linearly with $ \beta_{\mathrm{ic,b}} $ in the region $ \beta_{\mathrm{ic,b}} \lesssim 10 \ \% $, while at $ \beta_{\mathrm{ic,b}} $ in the range of $ 10 \ \% \lesssim \beta_{\mathrm{ic,b}} \lesssim 20 \ \% $, the growth of $ f_{\mathrm{max}} $ saturates at $ \sim 800 $ Hz and $f_{\mathrm{max}}$ does not change significantly with further increase of rotation. For very rapid rotation ($ \beta_{\mathrm{ic,b}} \gtrsim 20 \ \% $), $ f_{\mathrm{max}} $ decreases steeply with $ \beta_{\mathrm{ic,b}} $, reaching a value of $ \sim 400 $ Hz at $\beta_{\mathrm{ic,b}} \simeq 23 \, \% $ (not shown in the figure, see Table~\ref{tab:detectability}). While it is straightforward to understand the systematics of $f_{\mathrm{max}}$ at high $ \beta_{\mathrm{ic,b}} $ where centrifugal effects slow down collapse and thus naturally push the GW emission to low frequencies, the increase of $f_{\mathrm{max}}$ with rotation at low to intermediate $ \beta_{\mathrm{ic,b}} $ is less intuitive. If one assumes that the dominant GW emission at core bounce in all models is due to the quadrupole component of the fundamental quasi-radial mode of the inner core, one would expect a monotonic decrease of $f_\mathrm{max}$ with increasing rotation and, hence, decreasing mean core density (see, \textit{e.g.,}~ \cite{andersson:03}). A possible explanation for the increase of $f_\mathrm{max}$ at slow to moderately-rapid rotation is that the primary GW emission in these models is due to the fundamental quadrupole $^2\! f$-mode, whose frequency may increase with rotation. This has been demonstrated by Dimmelmeier~et~al.~\cite{dimmelmeier:06} who studied oscillation modes of sequences of $\gamma = 2$ polytropes. To confirm this interpretation, and following the technique of mode-recycling outlined in~\cite{dimmelmeier:06}, we perturb a subset of our postbounce cores with the eigenfunction of the $^2 \! f$-mode of a Newtonian nonrotating neutron star. As expected, we find that the resulting dynamics of the postbounce core is dominated by a single oscillation mode with a frequency that matches within $\lesssim 10\%$ the peak frequency $ f_{\mathrm{max}} $ of $dE_{\mathrm{GW}}/df$ of the corresponding slowly or moderately rapidly rotating AIC model. The interesting details of the mode structure of the inner cores of AIC and iron core collapse will receive further scrutiny in a subsequent publication. Finally, in Fig.~\ref{fig:gw_tf}, we provide time-frequency analyses of the GW signals of the same representative models shown in Fig.~\ref{fig:gw_spectrum_representative_models}. The analysis is carried out with a short-time Fourier transform employing a Gaussian window with a width of $2\,{\mathrm{ms}}$ and a sampling interval of $0.2\,{\mathrm{ms}}$. In all three cases, the core bounce is clearly visible and marked by a broadband increase of the emitted energy. The slowly-rotating model AD1f3 emits its strongest burst at $600-800\,\mathrm{Hz}$ ($f_{\mathrm{max}} = 720\,\mathrm{Hz}$) and subsequently exhibits broadband emission with significant power at lower frequencies due to prompt convection. 
Model AD4 is more rapidly rotating and shows significant pre-bounce low-frequency emission due to its increased rotational deformation. At bounce, a strong burst is emitted, again with power at all frequencies, but primarily at frequencies around its $f_{\mathrm{max}} = 805\,\mathrm{Hz}$. Much of the postbounce $E_{\mathrm{GW}}$ is emitted through ringdown oscillations at $f_{\mathrm{max}}$ that may be related to the $^2\! f$ mode of this model's PNS. Finally, in the rapidly rotating and centrifugally bouncing model AD10, we observe again low-frequency emission before bounce, but only a small increase of the primary emission band at bounce to $\sim 200 - 400\,\mathrm{Hz}$. Nevertheless, an appreciable amount of energy is still emitted by higher-frequency components of the dynamics at bounce and postbounce times. \subsection{Comparing GW signals from AIC \\ and Iron Core Collapse} \label{sec:gw_types_comparison} Recent studies~\cite{dimmelmeier_07_a, ott_07_a, ott_07_b, dimmelmeier_08_a} have shown that the collapse of rotating iron cores (ICC) produces GW signals of uniform morphology (so-called ``type~I'' signals, see, \textit{e.g.,}~ \cite{zwerger_97_a}) that generically show one pronounced spike associated with core bounce with a subsequent ringdown and are similar to the type~III signals found here for AIC. As in the AIC case, the GW signal of ICC has subtypes for slow, moderately rapid, and very rapid rotation. For comparing AIC and ICC GW signals, we choose three representative AIC models and pick three ICC models with similar $\beta_{\mathrm{ic,b}}$ from the study of Dimmelmeier~et~al.~\cite{dimmelmeier_08_a} whose waveforms are freely available from~\cite{wave_catalog}. This ensures a one-to-one comparison of collapse models that are similarly affected by centrifugal effects. However, one should keep in mind that the inner core masses $M_{\mathrm{ic,b}}$ at bounce of ICC models are generally larger by $\sim 0.2 - 0.3\,M_\odot$ than in our AIC models (see Sec.~\ref{sec:nonrotating_collapse_dynamics}). \begin{figure*} \centerline{\includegraphics[width = 150 mm]{Figures/f20.eps}} \caption{Evolution of GW signals in the high-$ T $ AIC models DU2, AU5, AD10 (black solid lines) and three massive star iron core-collapse models s20A1O05, s20A3O07 and s20A3O15 (red dashed lines) from the model set of Dimmelmeier et al.~\cite{dimmelmeier_08_a}. The AIC model DU2 and the iron core collapse model s20A1O05 undergo pressure-dominated bounce with significant prompt convection. Models AU5 and s20A3O07 experience pressure-dominated bounce without significant convection, and models AD10 and s20A3O15 undergo centrifugal bounce. The inner cores of models DU2, AU5 and AD10 (s20A1O05, s20A3O07 and s20A3O15) reach values of $\beta_{\mathrm{ic,b}}$ of about $ 0.4\%, \, 10.2\% $ and $ 21.3\% $ ($ 0.7\%, \, 10.1\%, $ and $ 21.6\% $). Times are given relative to the time of core bounce, which we mark with a vertical line.} \label{fig:gw_strain_ICC_AIC} \end{figure*} In Fig.~\ref{fig:gw_strain_ICC_AIC} we present this comparison and plot the GW signals of the high-$T$ AIC models DU2 (slow rotation, type~IIIa), AU5 (moderately-rapid rotation, type~IIIb), and AD10 (very rapid rotation, type~IIIc). In the same order, we superpose the GW signals of the Dimmelmeier~et~al.~\cite{dimmelmeier_08_a} models s20A1O05, s20A3O07 and s20A3O15.
These models started with the precollapse iron core of a $20\,M_\odot$ star and were run with the same code, EOS, and deleptonization algorithm as our AIC models, though with different, ICC-specific $Y_e (\rho)$ trajectories. The left panel of Fig.~\ref{fig:gw_strain_ICC_AIC} compares the slowly rotating models DU2 and s20A1O05 that undergo pressure-dominated bounce and exhibit strong postbounce convection. As pointed out before, the latter is most likely overestimated in our current approach as well as in Dimmelmeier~et~al.'s. Note that the width of the waveform peaks associated with core bounce is very similar, indicating very similar emission frequencies. Model s20A1O05 exhibits a significantly larger signal amplitude at bounce. This is due to s20A1O05's larger $M_{\mathrm{ic,b}}$ but also to the fact that its $\beta_{\mathrm{ic,b}}$ is $\sim 0.7\%$ compared to the $\sim 0.4\%$ of DU2 (a closer match was not available from \cite{wave_catalog}). The prompt convection in model s20A1O05 is also more vigorous and generates a larger-amplitude GW signal than in model DU2. This is because the much steeper density gradient in the WD core allows the AIC shock to remain stronger out to larger radii; it therefore leaves behind a shallower negative entropy gradient, leading to weaker convection and weaker postbounce GW emission in the AIC model. In the central panel of Fig.~\ref{fig:gw_strain_ICC_AIC} we compare two moderately-rapidly rotating models with nearly identical $\beta_{\mathrm{ic}}$ of $\sim 10\%$. Both models show a pre-bounce rise due to the inner core's accelerated collapse in the plunge phase. The AIC inner core, owing to its lower $Y_e$ and weaker pressure support, experiences greater acceleration and emits a higher-amplitude signal than its ICC counterpart in this phase. At bounce, the stiff nuclear EOS decelerates the inner core, leading to the large negative peak in the GW signal. Because of the more massive inner core in ICC and since the EOS governing the dynamics is identical in both models, the magnitude of this peak is greater in the ICC model. Following bounce, the ICC model's GW signal exhibits a large positive peak of amplitude comparable to or larger than the pre-bounce maximum. This peak is due to the re-contraction of the ICC inner core after the first strong expansion after bounce. With increasing rotation, this re-contraction and the associated feature in the waveform become less pronounced. On the other hand, due to its smaller inertia, the AIC inner core does not significantly overshoot its new postbounce equilibrium during the postbounce expansion. Hence, there is no appreciable postbounce re-contraction and no such large positive postbounce peak in the waveform. Example waveforms of AIC and ICC models experiencing core bounce governed by centrifugal forces are shown in the right panel of Fig.~\ref{fig:gw_strain_ICC_AIC}. In this case, the pre-bounce plunge dynamics are significantly slowed down by centrifugal effects and the GW signal evolution is nearly identical in AIC and ICC. At bounce, the more massive inner core of the ICC model leads to a larger and broader negative peak in the waveform and its ringdown signal exhibits larger amplitudes than in its AIC counterpart. Finally, we consider AIC models with variations in the inner-core $Y_e$ due either to different precollapse WD temperatures or ad-hoc changes of the $\overline{Y_e}(\rho)$ parametrization (see Secs.~\ref{sec:deleptonization} and \ref{sec:tempye}).
Lower-$T$ WDs yield larger inner-core values of $Y_e$ and, in turn, larger $M_\mathrm{ic,b}$ and GW signals that are closer to their iron-core counterparts. The same is true for models in which we impose an increased inner-core $Y_e$: AIC models with inner-core $Y_e$ $10\%$ larger than predicted by~\cite{dessart_06_a} still show clear type~III signal morphology while models with $20\%$ larger $Y_e$ fall in between type~III and type I. To summarize this comparison: rotating AIC and rotating ICC lead to qualitatively and quantitatively fairly similar GW signals that most likely could not be distinguished by only considering general signal characteristics, such as maximum amplitudes, characteristic frequencies and durations. A detailed knowledge of the actual waveform would be necessary, but even in this case, a distinction between AIC and ICC on the basis of the comparison presented here would be difficult. It could only be made for moderately-rapidly spinning cores based on the presence (ICC, type Ib) or absence (AIC, type~IIIb) of a first large positive peak in the waveform, but, again, \emph{only} if AIC inner cores indeed have significantly smaller $Y_e$ than their iron-core counterparts. ICC and AIC waveforms of types Ia/IIIa and Ic/IIIc are very similar. Additional astrophysical information concerning the distance to the source and its orientation as well as knowledge of the neutrino and electromagnetic signatures will most likely be necessary to distinguish between AIC and ICC. \subsection{Detection Prospects for the Gravitational Wave Signal from AIC} \label{sec:detectability} In order to assess the detection prospects for the GW signal from AIC, we evaluate the characteristic signal frequency $ f_{\mathrm c} $ and the dimensionless characteristic GW amplitude $ h_{\mathrm c} $. Both quantities are detector-dependent and are computed using Eq.~(\ref{eq:characteristic_frequency}) and (\ref{eq:characteristic_amplitude}), respectively. \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f21.eps}} \caption{Detector-dependent characteristic amplitudes of the GW signals of all models at an assumed distance of $ 10 $ kpc. The symbol convention for the various sets is described in the caption of Fig.~\ref{fig:collapse_times}. See the text for a discussion of the numbers and arrows.} \label{fig:detectability_ligo} \end{figure} In Fig.~\ref{fig:detectability_ligo}, we show $ h_{\mathrm c} $ for all models as a function of $f_{\mathrm c} $ for an initial 4-km LIGO detector, assuming a source distance of $ 10 $ kpc. For comparison with detector sensitivity, we include initial LIGO's design $h_{\mathrm{rms}}$ curve~\cite{ligo}. The distribution of our set of models in this figure obeys simple systematics. A number of very slowly rotating models that undergo pressure-dominated bounce with prompt convection (type~IIIa) form a cluster in frequency in one region (near arrow 1). These models have the overall lowest values of $ h_{\mathrm c} $ and exhibit low values of $ f_{\mathrm c} $ in the range of $ 130 - 350\,\mathrm{Hz} $. Both $ f_{\mathrm c} $ and $ h_{\mathrm c} $ grow with increasing rotation (along arrow 1). For the pressure-dominated bounce models without significant prompt convection (type~IIIb), $ h_{\mathrm c} $ grows with increasing rotation (along arrow 2), now at practically constant $ f_{\mathrm c}$ of $ \sim 350 $ Hz. 
Even for these models, $ f_{\mathrm c} $ is always lower than the typical peak frequency $ f_{\mathrm{max}} \sim 700-800\,\mathrm{Hz}$ of their spectral GW energy densities. This is due to the specific characteristics of the LIGO detector, whose highest sensitivity is around $100\,\mathrm{Hz}$, thus leading to a systematic shift of $f_\mathrm{c}$ below $f_\mathrm{max}$. In more rapidly rotating models, centrifugal effects become more important, leading to greater rotational deformation of the inner core, but also slowing down the dynamics around core bounce, ultimately limiting $h_{\mathrm c}$ and reducing $f_{\mathrm c}$ (along arrow 3). Models that rotate so rapidly that they undergo centrifugal bounce (type~IIIc) cluster in a separate region in the $ h_{\mathrm c} - f_{\mathrm c}$ plane (along arrow 4), somewhat below the maximum value of $ h_{\mathrm c} $ and at considerably lower $ f_{\mathrm c} $. The systematics for the lower-$ T $ models and for other detectors are very similar. Not surprisingly, given the analogies in the two signals, a similar behavior of $ h_{\mathrm c} $ and $ f_{\mathrm c} $ was observed in the context of rotating iron core collapse~\cite{dimmelmeier_08_a}. \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f22.eps}} \caption{Location of the GW signals from core bounce in the $ h_{\mathrm c} $--$ f_{\mathrm c} $ plane relative to the sensitivity curves of various interferometer detectors (as color-coded) for an extended set of models AD. The sources are at a distance of $ 10 $ kpc for LIGO, $ 0.8 $ Mpc for Advanced LIGO, and $ 5 $ Mpc for the Einstein Telescope.} \label{fig:detectability_ligo_ligoII_et} \end{figure} Figure~\ref{fig:detectability_ligo_ligoII_et} provides the same type of information shown in Fig.~\ref{fig:detectability_ligo} but also for the advanced LIGO detector when the source is at $0.8$ Mpc (\textit{e.g.,}~ within the Andromeda galaxy), or for the proposed Einstein Telescope (ET)~\cite{ET} and a source distance of $5$~Mpc. Initial LIGO is sensitive only to GWs coming from a moderately-rapidly or rapidly rotating AIC event in the Milky Way, but its advanced version will probably be able to detect sources outside the Galaxy as well, although only within the Local Group. Finally, third-generation detectors such as ET may be sensitive enough to detect some AIC events out to $ \sim 5$ Mpc. \begin{figure} \centerline{\includegraphics[width = 86 mm, angle = 0]{Figures/f23.eps}} \caption{The inner core parameter $ \beta_{\mathrm{ic,b}} $ at bounce is plotted as a function of the precollapse parameter $ \beta_{\mathrm i} $ for high-$T$ models. Due to different central densities and rotation profiles of the precollapse WD models, there is no one-to-one correspondence between $ \beta_{\mathrm{ic,b}} $ and $ \beta_{\mathrm i} $. Hence, although one can extract $ \beta_{\mathrm{ic,b}} $ accurately from the bounce AIC GW signal, it is impossible to put strong constraints on $ \beta_{\mathrm i} $ using the GW signal.} \label{fig:betaicb_vs_betai} \end{figure} As pointed out in Secs.~\ref{sec:peak_amplitude} and~\ref{sec:gw_spectrum}, the GW signal amplitudes and the spectral GW energy distribution are determined primarily by $ \beta_{\mathrm{ic,b}} $. Hence, given the systematics shown in Fig.~\ref{fig:detectability_ligo}, one may be optimistic about being able to infer $\beta_{\mathrm{ic,b}}$ to good precision from the observation of GWs from a rotating AIC event.
For example, as demonstrated in Fig.~\ref{fig:fmax_vs_beta}, even the knowledge of only $ f_{\mathrm{max}} $ can put some constraints on $\beta_{\mathrm{ic,b}}$. However, inferring accurately the properties of the progenitor WD using exclusively information provided by GWs may be extremely difficult given the highly degenerate dependence of $\beta_{\mathrm{ic,b}}$ on the various precollapse WD model parameters discussed in Sec.~\ref{sec:rotating_collapse_dynamics}. To elaborate on this point, we show in Fig.~\ref{fig:betaicb_vs_betai} the relation between $\beta_{\mathrm{ic,b}}$ and the precollapse WD parameter $\beta_{\mathrm i}$. Even if GWs can provide good constraints on $\beta_{\mathrm{ic,b}}$, a rather large variety of models with different initial rotational properties would be able to lead to that same $\beta_{\mathrm{ic,b}}$ and additional astrophysical information on the progenitor will be needed to determine the precollapse rotational configuration. The only exception to this is the possibility of ruling out uniform WD progenitor rotation if $\beta_\mathrm{ic,b} \gtrsim 18\% $ (\textit{cf.}~ Sec.~\ref{sec:rotating_collapse_dynamics}). \begin{table} \small \centering \caption{GW signal characteristics for the high-$T$ AIC models: $ E_\mathrm{GW}$ is the total GW energy, $ f_{\mathrm{max}} $ is the peak frequency of the GW energy spectrum, $ \Delta f_{50} $ is the frequency interval around $ f_{\mathrm{max}} $ that emits $ 50 \ \% $ of $ E_\mathrm{GW} $. The nonrotating models are omitted here.} \label{tab:detectability} \begin{tabular}{@{}l@{~~~~}c@{~~~~}c@{~~~~}c@{~~~~}} \hline \\ [-1 em] AIC & $ E_{\mathrm{GW}} $ & $ f_{\mathrm{max}} $ & $ \Delta f_{50} $ \\ model & [$10^{-9} M_\odot c^2 $] & [Hz] & [Hz] \\ \hline \\ [-0.5 em] AU1 & 1.1 & 742.7 & 31\\ AU2 & 5.7 & 782.7 & 28\\ AU3 & 7.8 & 786.7 & 27\\ AU4 & 15.7 & 816.0 & 49\\ AU5 & 17.0 & 831.9 & 120\\ [0.3 em] DU1 & 0.1 & 768.0 & 343\\ DU2 & 0.3 & 770.0 & 556\\ DU3 & 0.4 & 747.0 & 473\\ DU4 & 0.9 & 745.0 & 304\\ DU5 & 2.0 & 765.7 & 26\\ DU6 & 4.6 & 778.3 & 21\\ DU7 & 5.8 & 788.3 & 20\\ [0.3 em] AD1 & 2.2 & 752.6 & 33\\ AD2 & 3.7 & 765.4 & 30\\ AD3 & 8.7 & 790.5 & 37\\ AD4 & 11.8 & 812.5 & 115\\ AD5 & 14.2 & 813.6 & 173\\ AD6 & 15.0 & 815.0 & 201\\ AD7 & 15.5 & 815.1 & 220\\ AD8 & 13.8 & 811.0 & 245\\ AD9 & 1.8 & 806.0 & 413\\ AD10& 1.0 & 304.0 & 126\\ [0.3 em] DD1 & 0.3 & 740.0 & 324\\ DD2 & 1.4 & 746.7 & 62\\ DD3 & 4.6 & 780.1 & 21\\ DD4 & 9.3 & 793.6 & 22\\ DD5 & 15.2 & 813.7 & 21\\ DD6 & 18.5 & 820.2 & 48\\ DD7 & 22.3 & 826.9 & 152\\ [0.3 em] AD1f1 & 1.3 & 745.7 & 54\\ AD1f2 & 0.6 & 731.4 & 69\\ AD1f3 & 0.2 & 726.5 & 315\\ AD1f4 & 0.1 & 737.0 & 476\\ [0.3 em] AD3f1 & 7.7 & 787.0 & 36\\ AD3f2 & 7.0 & 781.2 & 31\\ AD3f3 & 5.5 & 777.2 & 27\\ AD3f4 & 3.7 & 769.8 & 27\\ [0.3 em] AD6f1 & 15.7 & 818.0 & 175\\ AD6f2 & 16.0 & 819.5 & 165\\ AD6f3 & 16.0 & 822.6 & 145\\ AD6f4 & 15.8 & 827.6 & 123\\ [0.3 em] AD9f1 & 5.4 & 805.5 & 323\\ AD9f2 & 11.8 & 813.0 & 273\\ AD9f3 & 15.9 & 833.0 & 263\\ AD9f4 & 19.5 & 844.0 & 254\\ [0.3 em] AD10f1 & 1.9 & 808.0 & 137\\ AD10f2 & 5.8 & 809.0 & 333\\ AD10f3 & 12.1 & 826.6 & 269\\ AD10f4 & 16.5 & 840.0 & 263\\ [0.3 em] AD11f2 & 3.4 & 794.0 & 105\\ [0.3 em] AD12f3 & 0.8 & 165.5 & 44\\ AD12f4 & 1.4 & 231.0 & 73\\ [0.3 em] AD13f4 & 0.07 & 62.5 & 60\\ \hline \end{tabular} \end{table} \section{Prospects for nonaxisymmetric rotational instabilities} \label{sec:rotinst} Nonaxisymmetric rotational instabilities in PNSs formed in AIC or iron core collapse have long been proposed as strong and possibly 
long-lasting sources of GWs (see, \textit{e.g.,}~ \cite{ott:09rev} for a recent review). The postbounce GW emission by nonaxisymmetric deformations of rapidly rotating PNSs could be of similar amplitude as the signal from core bounce and, due to its potentially much longer duration, could exceed it in emitted energy (\textit{e.g.,}~~\cite{ott_07_b, ott_07_a,scheidegger:08}). Moreover, since the characteristic GW amplitude $ h_{\mathrm c} $ scales with the square root of the number of cycles, the persistence of the nonaxisymmetric dynamics for many rotation periods can drastically increase the chances for detection. The simulations presented in this paper impose axisymmetry, hence we are unable to track the formation and evolution of rotationally induced nonaxisymmetric structures. Nonetheless, since the dynamical high-$ \beta $ instability can develop only at $ \beta $ above $ \beta_{\mathrm{dyn}} \simeq 0.25$ \cite{baiotti_07_a,Manca07}, we can still assess the prospects for such instabilities by studying the values of $ \beta $ reached by our AIC models. Moreover, as we shall see below, the analysis of the rotational configuration of the newly formed PNS can give a rough idea about the outlook also for low-$ \beta $ instabilities. As shown in Sec.~\ref{sec:rotating_collapse_dynamics}, for not very rapidly rotating models, the parameter $ \beta_{\mathrm{ic, b}} $ of the inner core at bounce increases with the progenitor rotation and saturates at $ \sim 24.5 \, \% $ (see Fig.~\ref{fig:betaic_vs_omega}). Immediately after bounce, the inner core re-expands and, after undergoing several damped oscillations, settles into a new quasi-equilibrium state with a $ \beta_{\mathrm{ic, pb}} $ typically smaller by $ \sim 3 \, \% $ (in relative value) than that at bounce. The highest value of $ \beta_{\mathrm{ic, pb}} $ of our entire model set is $\sim 24\,\%$ (observed in model AD12f4) and most other rapidly rotating models reach values of $ \beta_{\mathrm{ic, pb}} $ that are well below this value (\textit{cf.}~ Tab.~\ref{tab:collapse_models}). Hence, we do not expect the high-$\beta$ instability to occur immediately after bounce in most AIC events. On the other hand, the matter around the PNS experiences rapid neutrino-cooling (not modeled by our approach) and the PNS contracts significantly already in the early postbounce phase. This results in spin-up and in a substantial increase of $\beta_{\mathrm{ic,pb}}$. Using the VULCAN/2D code, Ott~\cite{ott:06phd} studied the postbounce evolution of the PNS rotation of the Dessart~et~al.\ AIC models \cite{dessart_06_a}. He found that, in the case of the rapidly rotating $ 1.92 M_\odot $ model, the postbounce contraction leads to a growth of $ \beta_{\mathrm{ic,pb}} $ by $ \sim 50 \, \% $ from $ \sim 14 \ \% $ to $ \sim 22 \ \% $ in the initial $ \sim 50 \ {\mathrm{ms}} $ after bounce. We expect that a similar increase of the parameter $\beta_{\mathrm{ic,pb}}$ should take place also for the rapidly rotating AIC models considered here. More specifically, if we assume that $ \beta $ increases by $ \sim 50\ \% $ within $ \sim 50 \ {\mathrm{ms}} $ after bounce, we surmise that AIC models with $ \beta_{\mathrm{ic,b}} \gtrsim 17 \, \% $ at bounce should reach $ \beta_{\mathrm{ic, pb}} \gtrsim \beta_{\mathrm{dyn}} $ within this postbounce interval and thus become subject to the high-$\beta$ dynamical instability. As mentioned in Sec.~\ref{sec:rotating_collapse_dynamics}, uniformly rotating WDs cannot reach $ \beta_\mathrm{ic,pb} $ in excess of $ \sim 10.5\,\% $. 
Hence, they are unlikely to become subject to the high-$\beta$ dynamical nonaxisymmetric instability, but may contract and spin up to $ \beta \ge \beta_\mathrm{sec} \simeq 14\,\% $, at which point they could, in principle, experience a secular nonaxisymmetric instability in the late postbounce phase. However, other processes, e.g., MHD dynamos and instabilities (see, \textit{e.g.,}~ \cite{balbus_91_a, cerda_07_a}) may limit and/or decrease the PNS spin on the long timescale needed by a secular instability to grow. In addition to the prospects for the high-$\beta$ instability, the situation appears favorable for the low-$\beta$ instability as well. The latter can occur at much lower values of $\beta$ as long as the PNS has significant \emph{differential} rotation (see, \textit{e.g.,}~ \cite{shibata:04a,watts:05,saijo:06,cerda_07_b,ott_05_a,scheidegger:08,ott_07_b} and references therein). While this instability's true nature is not yet understood, a necessary condition for its development seems to be the existence of a corotation point inside the star, \textit{i.e.,}~ a point where the mode pattern speed coincides with the local angular velocity~\cite{watts:05, saijo_06_a}. Bearing in mind that the lowest order unstable modes have pattern speeds of the order of the characteristic Keplerian angular velocity $ \mathcal{O}(\Omega_{\mathrm{char}})$~\cite{centrella_01_a}, we can easily verify whether such a criterion is ever satisfied in our models. Assuming a characteristic mass of the early postbounce PNS of $ \sim 0.8 M_\odot $ and a radius of $ \sim 20 $ km, we obtain a characteristic Keplerian angular velocity of $ \Omega_{\mathrm{char}} \sim 4 \ \mathrm{ rad \ ms}^{-1} $. Because most AIC models that reach $\beta_{\mathrm{ic,pb}} \gtrsim 15 \, \% $ have a peak value of $ \Omega \gtrsim 5 \ \mathrm{rad \ ms}^{-1}$, it is straightforward to conclude that these models will have a corotation point and, hence, that the low-$ \beta $ instability may be a generic feature of rapidly rotating AIC. We note that even uniformly rotating precollapse models have strong differential rotation in the postshock region outside the inner core. However, further investigation is needed to infer whether such models may also be subject to the low-$ \beta $ dynamical instability. As a concluding remark we stress that the above discussion is based on simple order-of-magnitude estimates and is therefore necessarily approximate. Reliable estimates can be made only by performing numerical simulations in 3D that adequately treat the postbounce deleptonization and contraction of the PNS and that investigate the dependence of the instability on $ \beta_{\mathrm{ic, pb}} $, on the degree of differential rotation, and on the thermodynamic and MHD properties of the PNS. Finally, these calculations will also establish the effective long-term dynamics of the bar-mode deformation. In simulations of isolated polytropes~\cite{baiotti_07_a,Manca07} and from perturbative calculations~\cite{saijo2008}, it was found that coupling among different modes tends to counteract the bar-mode instability on a dynamical timescale after its development. It is not yet clear whether this behavior is preserved in the AIC scenario, where infalling material with high specific angular momentum may lead to significant changes. This will be the subject of future investigations.
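The order-of-magnitude estimates underlying this section are straightforward to reproduce. The following minimal Python sketch assumes a point-mass Keplerian rate for $\Omega_{\mathrm{char}}$ and adopts the $\sim 50\,\%$ postbounce growth of $\beta$ inferred from \cite{ott:06phd}:
\begin{verbatim}
import numpy as np

G     = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33        # solar mass [g]

# Characteristic Keplerian angular velocity of the early postbounce PNS for
# the mass and radius assumed in the text (~0.8 M_sun, ~20 km).
M, R = 0.8 * M_sun, 20.0e5                    # [g], [cm]
Omega_char = np.sqrt(G * M / R**3)            # [rad/s]
print("Omega_char = %.1f rad/ms" % (Omega_char / 1.0e3))   # ~3.6, i.e. ~4 rad/ms

# High-beta outlook: if beta grows by ~50% within ~50 ms of bounce, models
# with beta_ic,b >~ 17% reach the dynamical threshold beta_dyn ~ 0.25.
beta_icb, growth, beta_dyn = 0.17, 1.5, 0.25
print("beta after contraction: %.3f (threshold %.2f)" % (beta_icb * growth, beta_dyn))
\end{verbatim}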
\section{Summary and Conclusions} \label{sec:summary} In this paper we have presented the first general-relativistic simulations of the axisymmetric AIC of massive white dwarfs to protoneutron stars. Using the general-relativistic hydrodynamics code \textsc{CoCoNuT}, we performed 114\ baseline model calculations, each starting from a 2D equilibrium configuration, using a finite-temperature microphysical EOS, and a simple, yet effective parametrization scheme of the electron fraction $Y_e$ that provides an approximate description of deleptonization valid in the collapse, bounce, and very early postbounce phases. The precollapse structure and rotational configuration of WDs that experience AIC are essentially unconstrained; this prompted us to carry out this work. With our large set of model calculations, we have investigated the effects on the AIC evolution of variations in precollapse central density, temperature, central angular velocity, differential rotation, and deleptonization in collapse. The inclusion of general relativity enabled us to correctly describe the AIC dynamics, and our extended model set allowed us, for the first time, to systematically study GW emission in the AIC context. We find that the overall dynamics in the collapse phase of AIC events is similar to what has long been established for rotating iron core collapse. A universal division into a homologously collapsing inner core and a supersonically infalling outer core obtains, and the self-similarity of the collapse nearly completely washes out any precollapse differences in stellar structure in the limit of slow rotation. Due to the high degeneracy of the electrons in the cores of AIC progenitor WDs, electron capture is predicted to be strong already in early phases of collapse \cite{dessart_06_a}, leading to a low trapped lepton fraction and consequently small inner core masses $M_\mathrm{ic,b}$ at bounce of around $0.3\,M_\odot$, which decrease somewhat with increasing precollapse WD temperature due to the temperature-dependent abundance of free protons. Test calculations motivated by potential systematic biases of the AIC $\overline{Y_e}(\rho)$ trajectories obtained from \cite{dessart_06_a} (see Secs.~\ref{sec:deleptonization} and \ref{sec:tempye}) with inner-core values of $Y_e$ increased by $\sim 10\%$ and $\sim 20\%$ yielded values of $M_\mathrm{ic,b}$ larger by $\sim 11\%$ and $\sim 25\%$. Our simulations show that rotation can have a profound influence on the AIC dynamics, but will \emph{always} stay subdominant in the collapse of uniformly rotating WDs whose initial angular velocity is constrained by the Keplerian limit of surface rotation. In rapidly differentially rotating WDs, on the other hand, centrifugal support can dominate the plunge phase of AIC and lead to core bounce at subnuclear densities. We find that the parameter $\beta_\mathrm{ic,b} = (E_\mathrm{rot}/|W|)_\mathrm{ic,b}$ of the inner core at bounce provides a unique mapping between inner core rotation and late-time collapse and bounce dynamics, but the mapping between precollapse configurations and $\beta_\mathrm{ic,b}$ is highly degenerate, i.e., multiple, in many cases very different, precollapse configurations of varying initial compactness and total angular momentum can yield practically identical $\beta_\mathrm{ic,b}$ and corresponding collapse/bounce dynamics.
Recent phenomenological work presented in \cite{metzger:09,metzger:09b} on the potential EM display of an AIC event has argued for both uniform WD rotation \cite{metzger:09,piro_08_a} and massive quasi-Keplerian accretion disks left behind at low latitudes after AIC shock passage. The analysis of our extensive model set, on the other hand, shows that uniformly rotating WDs produce no disks at all or, in extreme cases that are near mass shedding at the precollapse stage, only very small disks ($M_\mathrm{disk} \lesssim 0.03 M_\odot$). Only rapidly differentially rotating WDs yield the large disk masses needed to produce the enhanced EM signature proposed in \cite{metzger:09,metzger:09b}.

An important focus of this work has been on the GW signature of AIC. GWs, due to their inherently multi-D nature, are ideal messengers for the rotational dynamics of AIC. We find that all AIC models following our standard $\overline{Y_e}(\rho)$ parametrizations yield GW signals of a generic morphology which has been classified previously as \emph{type~III} \cite{zwerger_97_a,dimmelmeier_02_b,ott:06phd}. This signal type is due primarily to the small inner core masses at bounce obtained in these models. We distinguish between three subtypes of AIC GW signals. Type IIIa occurs for $\beta_\mathrm{ic,b} \lesssim 0.7\%$ (slow rotation), is due in part to early postbounce prompt convection and results in peak GW amplitudes $|h_\mathrm{max}| \lesssim 5 \times 10^{-22}$ (at $10\,\mathrm{kpc}$) and emitted energies $E_\mathrm{GW} \lesssim \mathrm{few}\,\times\,10^{-9}\,M_\odot \, c^2$. Most of our AIC models produce type~IIIb GW signals that occur for $0.7\,\% \lesssim \beta_\mathrm{ic,b} \lesssim 18\,\%$ (moderate/moderately rapid rotation) and yield $6\times10^{-22} \lesssim |h_\mathrm{max}|\,(\mathrm{at}\,10\,\mathrm{kpc}) \lesssim 8\times10^{-21}$ and emitted energies of $9\times10^{-10} M_\odot\,c^2 \lesssim E_\mathrm{GW} \lesssim 2\times10^{-8}\,M_\odot\,c^2$. Rotation remains subdominant in type~IIIa and type~IIIb models and we find that there is a monotonic and near-linear relationship between maximum GW amplitude and the rotation of the inner core which is best described by the power law $|h_\mathrm{max}| \simeq 10^{-21}\,(\beta_\mathrm{ic,b}\,[\%])^{0.74}$. Furthermore, we find that the frequencies $f_\mathrm{max}$ at which the GW spectral energy densities of type IIIa and IIIb models peak are in a rather narrow range from $\sim 720\,\mathrm{Hz}$ to $\sim 840\,\mathrm{Hz}$ and exhibit a monotonic growth from the lower to the upper end of this range with increasing rotation. This finding suggests that the GW emission in these models is driven by the fundamental quadrupole (${}^2\!f$) mode of the inner core.

In the dynamics of AIC models that reach $\beta_\mathrm{ic,b} \gtrsim 18\%$, centrifugal effects become dominant and lead to core bounce at subnuclear densities. Such models must be differentially rotating at the onset of collapse and produce type~IIIc GW signals with maximum amplitudes of $4.0\times10^{-22} \lesssim |h_\mathrm{max}|\,\mathrm{(at\, 10\,\mathrm{kpc})} \lesssim 5.5\times10^{-21}$, emitted energies of $ 10^{-10} M_\odot\,c^2 \lesssim E_\mathrm{GW} \lesssim 10^{-8} M_\odot\, c^2$, and peak frequencies of $62\, \mathrm{Hz} \lesssim f_\mathrm{max} \lesssim 800 \, \mathrm{Hz}$. In contrast to type~IIIa and IIIb models, in type~IIIc models, $|h_\mathrm{max}|$, $E_\mathrm{GW}$, and $f_\mathrm{max}$ decrease monotonically with increasing $\beta_\mathrm{ic,b}$.
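For illustration, evaluating the type~IIIb power law quoted above with $\beta_\mathrm{ic,b}$ expressed in per cent (the convention assumed here), a model with $\beta_\mathrm{ic,b} = 10\,\%$ gives
\[
|h_\mathrm{max}| \simeq 10^{-21} \times 10^{0.74} \approx 5.5 \times 10^{-21} \quad \mathrm{(at\ 10\,kpc)},
\]
which lies well within the type~IIIb amplitude range quoted above.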
Combining the information from signal morphology, $|h_\mathrm{max}|$, $E_\mathrm{GW}$ and $f_\mathrm{max}$, we conclude that already first-generation interferometer GW detectors should be able to infer the rotation of the inner core at bounce (as measured by $\beta_\mathrm{ic,b}$) from a Galactic AIC event. Due to the degenerate dependence of $\beta_\mathrm{ic,b}$ on initial model parameters, this can put only loose constraints on the structure and rotational configuration of the progenitor WD. However, the observation of an AIC with $\beta_\mathrm{ic,b} \gtrsim 18\%$ would rule out uniform progenitor rotation. Studying the configurations of the protoneutron stars formed in our AIC models, we find that none of them are likely to experience the high-$\beta$ nonaxisymmetric bar-mode instability at very early postbounce times. We estimate, however, that all models that reach $\beta_\mathrm{ic,b} \gtrsim 17\%$ will contract and reach the instability threshold within $\sim 50\,\mathrm{ms}$ after bounce. Less rapidly spinning models will require more time or will go unstable to the low-$\beta$ instability. The latter requires strong differential rotation which is ubiquitous in the outer PNS and in the postshock region of our AIC models. AIC progenitors, due to their evolution through accretion or formation through merger, are predestined to be rapidly rotating and form PNSs that are likely to become subject to nonaxisymmetric instabilities. This is in contrast to the precollapse iron cores of ordinary massive stars that are expected to be mostly slowly-spinning objects~\cite{heger:05,ott:06spin}. We conclude that the appearance of nonaxisymmetric dynamics driven by either the low-$\beta$ or high-$\beta$ instability and the resulting great enhancement of the GW signature may be a generic aspect of AIC and must be investigated in 3D models. The comparison of the GW signals of our axisymmetric AIC models with the gravitational waveforms of the iron core collapse models of Dimmelmeier~et~al.~\cite{dimmelmeier_08_a} reveals that the overall characteristics of the signals are rather similar. It appears unlikely that AIC and iron core collapse could be distinguished on the basis of the axisymmetric parts of their GW signals alone, unless detailed knowledge of the signal time series as well as of source orientation and distance is available to break observational degeneracies. The results of our AIC simulations presented in this paper and the conclusions that we have drawn on their basis demonstrate the complex and in many cases degenerate dependence of AIC outcomes and observational signatures on initial conditions. The observation of GWs from an AIC event can provide important information on the rotational dynamics of AIC. However, to lift degeneracies in model parameters and gain full insight, GW observations must be complemented by observations of neutrinos and electromagnetic waves. These multi-messenger observations require underpinning by comprehensive and robust computational models that have no symmetry constraints and include all the necessary physics to predict neutrino, electromagnetic, and GW signatures. As a point of caution, we note that the generic type~III GW signal morphology observed in our AIC models is due to the small inner-core values of $Y_e$ and consequently small inner core masses predicted by the $\overline{Y_e}(\rho)$ parametrization obtained from the approximate Newtonian radiation-hydrodynamic simulations of \cite{dessart_06_a}. 
Tests with artificially reduced deleptonization show that the signal shape becomes a mixture of type~III found in our study and type I observed in rotating iron core collapse \cite{dimmelmeier_08_a} if the $Y_e$ in the inner core is larger by $\sim 20\%$. In a follow-up study, we will employ $\overline{Y_e}(\rho)$ data from improved general-relativistic radiation-hydrodynamics simulations \cite{mueller:09phd} to better constrain the present uncertainties of the AIC inner-core electron fraction. Although performed using general-relativistic hydrodynamics, the calculations discussed here are limited to conformally-flat spacetimes and axisymmetry. We ignored postbounce deleptonization, neutrino cooling, and neutrino heating. We also neglected nuclear burning, employed only a single finite-temperature nuclear EOS, and were forced to impose ad-hoc initial temperature and electron fraction distributions onto our precollapse WD models in rotational equilibrium. Future studies must overcome the remaining limitations to build accurate models of AIC. Importantly, extensive future 3D radiation-hydrodynamic simulations are needed to address the range of possible, in many cases probably nonaxisymmetric, postbounce evolutions of AIC and to make detailed predictions of their signatures in GWs, neutrinos, and in the electromagnetic spectrum. \section{Acknowledgements} It is a pleasure to thank Alessandro Bressan, Adam Burrows, Frank L\"offler, John Miller, Stephan Rosswog, Nikolaos Stergioulas, Sung-Chul Yoon, Shin Yoshida, and Burkhard Zink for helpful comments and discussions. This work was supported by the Deutsche Forschungsgemeinschaft through the Transregional Collaborative Research Centers SFB/TR~27 ``Neutrinos and Beyond'', SFB/TR~7 ``Gravitational Wave Astronomy'', and the Cluster of Excellence EXC~153 ``Origin and Structure of the Universe'' (\texttt{http://www.universe-cluster.de}). CDO acknowledges partial support by the National Science Foundation under grant no.\ AST-0855535. The simulations were performed on the compute clusters of the Albert Einstein Institute, on machines of the Louisiana Optical Network Initiative under allocation LONI\_numrel04 and on the NSF Teragrid under allocation TG-MCA02N014.
\section{Introduction} Future wireless networks are expected to face unprecedented demands for continuous and ubiquitous connectivity due to the increasing number of mobile users, the evolving deployment of \ac{IoT} networks, and the growing number of novel use cases. Increasing \ac{BS} density and using relays may be a straightforward solution to cope with this situation, but it comes at the cost of high capital and operational expenditures. Alternatively, a green and energy-efficient solution that utilizes reconfigurable intelligent surfaces (RIS) in terrestrial networks (TNs) has been introduced \cite{huang2019reconfigurable}. An RIS consists of a large number of low-cost, nearly passive elements, which can be configured to reflect the incident \ac{RF} signal toward a desired direction \cite{di2020smart,huang2019reconfigurable}. However, the deployment of RIS in terrestrial environments involves several challenges, such as placement flexibility and channel impairments including excessive path loss and shadowing effects. To overcome these challenges, in \cite{alfattani2021aerial}, we proposed the integration of RIS with non-terrestrial systems and discussed its prospects for wireless systems and services. Since energy consumption is a critical issue in aerial platforms, equipping them with a massive number of active antennas would exacerbate the issue. Alternatively, mounting RIS on aerial platforms can provide an energy-efficient solution \cite{ye2022non,shang2022uav,li2022energy,jeon2022energy, aung2022energy}. In addition, due to the favorable wireless channel conditions in non-terrestrial networks (NTN), \ac{RIS} can support panoramic full-angle reflection serving wider areas with strong line-of-sight (LoS) links \cite{lu2021aerial,ye2022non}. Further, in \cite{alfattani2021link}, we showed that RIS-mounted \ac{HAPS} has the potential to outperform other RIS-mounted aerial platforms, due to the large size of \ac{HAPS}. To reap the benefits of RIS-mounted HAPS, in \cite{alfattani2022beyond}, we proposed a novel \textit{beyond-cell communications} approach. This approach offers service to stranded terrestrial \acp{UE} whose channel conditions are below the required quality-of-service (QoS) or that are located in a cell with a fully loaded \ac{BS}. In particular, the stranded UEs get service from a dedicated ground \ac{CS} through the HAPS-RIS. We showed that HAPS-RIS can work in tandem with legacy TNs to support unserved UEs. We also discussed the optimal power and RIS-unit allocation schemes to maximize the system throughput and the worst UE rate. However, due to practical limitations on system resources, including \ac{CS} power and HAPS size (equivalently, the number of RIS units), it might be infeasible to serve all unsupported UEs by the \ac{CS} through the HAPS-RIS. Accordingly, we formulate a novel optimization problem to maximize the \ac{RE} of the system. The main contributions of this letter are summarized as follows. \begin{itemize} \item A novel resource-efficient optimization problem that maximizes the percentage of connected \acp{UE} while minimizing the usage of RIS units and transmit power is formulated. \item Since the resulting problem is a \ac{MINLP} and hard to solve, a low-complexity two-stage algorithm is proposed to solve it. \item Through numerical results, we study the impact of HAPS size and QoS requirements of \acp{UE} on the percentage of connected UEs and demonstrate significant improvements in the \ac{RE} of the system. \end{itemize} The rest of the letter is organized as follows.
In Section \ref{Sec:model}, the system model is described. Section \ref{Sec:Problem_form} presents the problem formulation. The proposed solution and the algorithm for solving the optimization problem are discussed in Section \ref{Sec:Solution}. Numerical results and discussion are presented in Section \ref{Sec:Results}. Finally, Section \ref{Sec:conclstion} concludes the letter. \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{new_model_HAPS-RIS.pdf} \caption{System model for HAPS-RIS assisting beyond-cell communications.} \label{fig:model} \end{figure} \section{System Model} \label{Sec:model} We consider a typical urban region consisting of $K$ UEs, $L$ terrestrial \acp{BS}, a single HAPS-RIS, and one \ac{CS}, as depicted in Fig.~\ref{fig:model}. The UEs are assumed to suffer from severe shadowing, blockages, and NLoS paths, which are typical characteristics of propagation media in urban regions. Based on the channel conditions between the \acp{BS} and the UEs, and the maximum serving capacity of the \acp{BS}, a set of $\mathcal{K}_1 = \{1,\ldots,K_1\}$ UEs will be supported by direct links from the BSs (referred to as \textit{within-cell} communications \cite{alfattani2022beyond}). The set of remaining UEs, $\mathcal{K}_2 = \{1,\ldots,K_2\}$, which cannot form a direct connection with any terrestrial \ac{BS}, will be served by the \ac{CS} via HAPS-RIS (referred to as \textit{beyond-cell} communications \cite{alfattani2022beyond}). The CS is located somewhere in the HAPS coverage area. Note that $\mathcal{K} = \mathcal{K}_1 \cup \mathcal{K}_2$. We assume that the \ac{CS} serves the stranded UEs in set $\mathcal{K}_2$ using orthogonal subcarriers, and hence, there will be no inter-UE interference. Further, both \textit{within-cell} and \textit{beyond-cell} communications occur in two orthogonal frequency bands, while keeping the subcarrier bandwidth $B_{\rm UE}$ the same for both types of communications. As a result, the signals from \textit{within-cell} UEs will not interfere with the signals from \textit{beyond-cell} UEs and vice versa. We assume that \textit{within-cell} UEs are connected optimally with terrestrial BSs, and hence, our focus in this work is on \textit{beyond-cell} communications. Accordingly, the received signal at UE $k \in \mathcal{K}_2$ on subcarrier $m$ can be expressed as \begin{equation} \label{eq:Rx_signal} y_{k m}=\sqrt{P^{\rm CS}_{k m}} h_{k m} \mathrm{\Phi}_{k} \;x_{{k m}}+w_{km}, \end{equation} where $P^{\rm CS}_{km}$ denotes the \ac{CS} transmit power allocated to UE $k$ on subcarrier $m$, $x_{km}$ is the transmitted symbol intended for UE $k$, and $w_{km}$ denotes the additive white Gaussian noise (AWGN) $\sim \mathcal{CN}(0,N_0B_{\rm UE})$, where $N_0$ is the noise power spectral density. $h_{k m}$ denotes the effective channel gain from the \ac{CS} to the HAPS-RIS and from the HAPS-RIS to UE $k$, and is given by \begin{equation} h_{k m} = \sqrt{G^{\rm CS} G_{r}^{k} (\textsf{PL}^{\rm{CS-HAPS}-\textit{k}})^{-1}}, \end{equation} where $G^{\rm CS}$ denotes the antenna gain of the control station, and $G_{r}^{k}$ is the receiver antenna gain of UE $k$. $\textsf{PL}^{\text{CS-HAPS}-k} = \textsf{PL}^{\rm{CS}-\rm{HAPS}} \textsf{PL}^{\text{HAPS}-k}$ denotes the effective path loss between the \ac{CS} and UE $k$ via HAPS-RIS. Further, $\mathrm{\Phi_{k}}$ represents the reflection gain of the RIS corresponding to UE $k$, and is expressed as \begin{equation} \mathrm{\Phi}_{k}=\sum_{i=1}^{N_{k}} \rho_i e^{-j \left(\phi_i - \theta_{i}-\theta_{k}\right)}, \end{equation} where $\rho_i$ denotes the reflection loss corresponding to RIS unit $i$.
$\theta_{i}$ and $\theta_{k}$ represent the phases of the links between RIS unit $i$ and the control station and between RIS unit $i$ and UE $k$, respectively. $\phi_i$ represents the adjusted phase shift of RIS unit $i$, and $N_{k}$ represents the total number of RIS units allocated to UE $k$. Using \eqref{eq:Rx_signal}, the \ac{SNR} at UE $k$ can be written as \begin{comment} \begin{equation}\label{eq:SNR} \gamma_{k^\prime}= \frac{P_{k^\prime}^{\rm CS} \left| h_{k^\prime} \mathrm{\Phi_{k^\prime}}\right|^2} {\sum_{i=1,i \neq {k^\prime}}^{K_2} P_{i}^{\rm CS}\left| h_{i} \mathrm{\Phi_{i}}\right|^2 + \sigma^2 B} \end{equation} \end{comment} \begin{equation}\label{eq:SNR} \gamma_{k}= \frac{P_{km}^{\rm CS} \left| h_{k m} \mathrm{\Phi}_{k}\right|^2} { N_0 B_{\rm UE}}, \end{equation} and the corresponding achievable rate can be expressed as \begin{equation}\label{eq:R_K2} R_{k} = B_{\rm UE} \log_2(1+\gamma_{k}). \end{equation} \section{Problem Formulation}\label{Sec:Problem_form} Using power efficiently at the \ac{CS} for transmitting signals, and at the HAPS-RIS for activating the RIS units, offers greener operation and longer refueling intervals for the HAPS. Also, in the case of a large UE density or a strict QoS requirement, it might be infeasible to serve all unserved UEs by the \ac{CS} via HAPS-RIS. Therefore, we need to select as many UEs as can be supported by the \ac{CS}, while using as few system resources (transmit power and RIS units) as possible. Accordingly, we define a novel performance metric known as resource efficiency as follows: \begin{definition} Resource efficiency (RE) of the beyond-cell communication system, $\eta$, is defined as the ratio between the percentage of served UEs and the average power consumption in dBm by each supported UE, which includes the consumption towards signal transmission and RIS units configuration: \begin{equation} \eta = \dfrac{\frac{1}{K}\Big(K_1+\sum_{k=1}^{K_2}u_{k}\Big)}{\frac{1}{|\mathcal{U}|}\sum_{k=1}^{K_2} \Big(P_{k m}^{\rm CS}u_k + P_{\rm RIS}N_ku_k\Big)}, \end{equation} where $u_k$ is an indicator variable: UE $k$ is served via HAPS-RIS if $u_k=1$ and is not served if $u_k=0$. In the denominator, the terms $\sum_{k=1}^{K_2} P_{k m}^{\rm CS}u_k$ and $\sum_{k=1}^{K_2}P_{\rm RIS}N_ku_k$ represent the total transmit power consumption at the \ac{CS} and the total power consumed by the RIS units for all supported UEs, respectively. $P_{\rm RIS}$ denotes the power consumed by each RIS unit for phase shifting, which depends on the RIS configuration technology and its resolution. Finally, $|\mathcal{U}|$ denotes the cardinality of the set $\mathcal{U}$ of UEs supported by the \ac{CS} via HAPS-RIS. \end{definition} In the following, we formulate the resource-efficient UEs maximization problem. It can be expressed as \begin{comment} \begin{subequations} \begin{align} \label{eq:P4} & \max _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \dfrac{\mathcal{PC}}{\sum_{k=1}^{K_2} N_{k}} \\ \label{p4:c1} & \text { s.t. } \textcolor{black}{ L \leq L_{\rm max}} \\ \label{p1_throu:c2} &\quad R_{k} \geq R_{th}, \enspace \forall k=1,2, \ldots, K_2 \\ \label{p1_throu:c3} &\quad \sum_{k=1}^{K_2} N_{k} \leq N_{\rm max} \\ \label{p1_throu:c4} &\quad 0\leq \theta_{n} \leq 2\pi, \enspace \forall n=1,2, \ldots, N_{ \rm max} \\ \label{p1_throu:c5} &\quad\sum_{k=1}^{K_2} P_{k m}^{\rm CS} \leq P_{\rm max}^{\rm CS} \\ \label{p1_throu:c6} &\quad N_{k \rm min} \leq N_{k} \leq N_{k \rm max}\\ \label{p1_throu:c7} &\quad P_{k \rm min}^{\rm CS} \leq P_{k m}^{\rm CS} \leq P_{k \rm max}^{\rm CS}, \end{align} \end{subequations} \end{comment} \begin{subequations} \label{eq:RIS-efficiency} \begin{align} \label{eq:P4-2} &\max _{u_{k},\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \eta \\ \label{p4:c1} & \text { s.t. } L \leq L_{\rm max}, \\ \label{p1:c1} &\quad R_{k} \geq u_{k} R_{\rm th}, \enspace \forall k=1,2, \ldots, K_2, \\ \label{p1:c2} &\quad \sum_{k=1}^{K_2} u_{k}N_{k} \leq N_{\rm max}, \\ \label{p1:c3} &\quad 0 \leq \theta_{n} \leq 2\pi, \enspace \forall n=1,2, \ldots, N_{\rm max}, \\ \label{p1:c4} &\quad\sum_{k=1}^{K_2} u_{k}P_{k m}^{\rm CS} \leq P_{\rm max}^{\rm CS}, \\ \label{p1:c5} &\quad N_{k, \rm min} \leq N_{k} \leq N_{k, \rm max} ,\enspace \forall k=1,2, \ldots, K_2, \\ \label{p1:c6} &\quad 0 \leq P_{k m}^{\rm CS} \leq u_{k} P_{k, \rm max}^{\rm CS} ,\enspace \forall k=1,2, \ldots, K_2,\\ \label{p1:c7} &\quad u_{k} \in \{0,1\}, \end{align} \end{subequations} where $P^{\rm CS}_{\rm max}$ denotes the maximum available transmit power at the \ac{CS}. $N_{k,\rm max}$ and $P_{k, \rm max}^{\rm CS}$ denote the maximum number of RIS units and the maximum amount of power that can be allocated to UE $k$, respectively. Constraint (\ref{p4:c1}) limits the number of BSs in the area. Constraint (\ref{p1:c1}) guarantees that each selected UE satisfies the threshold rate, $R_{\rm th}$, requirement. Constraint (\ref{p1:c2}) ensures that the total number of RIS units allocated to the supported UEs does not exceed the maximum number of available RIS units, $N_{\rm max}$, which is limited by the HAPS size. Constraint (\ref{p1:c3}) determines the range of the adjustable phase shift for each RIS unit. Constraint (\ref{p1:c4}) guarantees that the total power allocated by the \ac{CS} to the selected UEs does not exceed the maximum available power. Finally, constraints (\ref{p1:c5}) and (\ref{p1:c6}) ensure feasible and fair allocation of both RIS units and \ac{CS} power to each UE. \begin{comment} \begin{subequations} \label{eq:power-efficiency} \begin{align} \label{eq:P5} & \max _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \dfrac{\mathcal{PC}}{\sum_{k=1}^{K_2} \Big( P^{\rm CS}_{k m}} \\ \label{p5:c1} & \text { s.t. } (\ref{p4:c1})-(\ref{p1:c7}) . \end{align} \end{subequations} \end{comment} \section{Proposed Solution} \label{Sec:Solution} \begin{comment} To maximize the number of \textit{within-cell} UEs connection, we set $L=L_{max}$. Then, for the \textit{beyond-cell} UEs, the problem of maximizing the percentage of connected UEs using minimal number of RIS units can be re-written as: Accordingly, \eqref{eq:RIS-efficiency} is a combinatorial hard problem, which may involve high complex computation. Therefore, we propose a low-complexity practical approach to solve it as described in Algorithm~\ref{Algo1}. \end{comment} Since problem (\ref{eq:RIS-efficiency}) is an \ac{MINLP}, it is hard to solve optimally with low complexity. Therefore, we develop a low-complexity algorithm to solve it suboptimally. The main step of the proposed algorithm involves solving \eqref{eq:RIS-efficiency} in two stages. In the first stage, we maximize the numerator of (\ref{eq:P4-2}) by maximizing the number of UEs that can establish a direct connection with the TN (i.e., by setting $L=L_{\rm max}$), and then finding the largest feasible set of UEs in $\mathcal{K}_2$ that can be supported by the \ac{CS} via HAPS-RIS. To this end, we first sort the UEs according to their channel gains. Then, under the assumptions of equal power allocation to each UE and perfect reflection of each RIS unit, we allocate the minimum required number of RIS units to each UE as follows: \begin{equation}\label{eq:N_k} N_{k} = \Biggl\lceil \sqrt{\frac{N_0 B_{\rm UE} (2^{\frac{R_{\rm min}}{B_{\rm UE}}}-1)}{P_{k m}^{\rm CS} \left| h_{k} \right|^2}}\Biggr\rceil. \end{equation} The initial RIS-unit allocation starts with the UEs with the best channel conditions until all RIS units are utilized. As a result, a set $\mathcal{U}$ with the largest number of feasible UEs is determined. In the second stage, the denominator of (\ref{eq:P4-2}) is minimized by optimally allocating the \ac{CS} power and RIS units to each UE belonging to set $\mathcal{U}$. This is accomplished by solving the following optimization problem: \begin{subequations}\label{eq:min_N_general} \begin{align} \label{eq:min_N} &\min _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \sum_{k=1}^{|\mathcal{U}|} P_{k m}^{\rm CS} + P_{\rm RIS}N_k \\ \label{p3:c1} &\text { s.t. } R_{k} \geq R_{\rm th}, \enspace \forall k=1,2, \ldots, |\mathcal{U}|, \\ \label{p1_throu:c3} &\quad \sum_{k=1}^{K_2} N_{k} \leq N_{\rm max}, \\ \label{p1_throu:c4} &\quad 0 \leq \theta_{n} \leq 2\pi, \enspace \forall n=1,2, \ldots, N_{ \rm max}, \\ \label{p1_throu:c5} &\quad\sum_{k=1}^{K_2} P_{k m}^{\rm CS} \leq P_{\rm max}^{\rm CS}, \\ \label{p1_throu:c6} &\quad N_{k, \rm min} \leq N_{k} \leq N_{k, \rm max}, \enspace \forall k=1,2, \ldots, |\mathcal{U}|,\\ \label{p1_throu:c7} &\quad 0 \leq P_{k m}^{\rm CS} \leq P_{k, \rm max}^{\rm CS}, \enspace \forall k=1,2, \ldots, |\mathcal{U}|. \end{align} \end{subequations} Without loss of generality, problem \eqref{eq:min_N_general} can be re-written as \begin{subequations}\label{eq:min_N_modified} \begin{align} \label{eq:min_N_modified_a} &\min _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \sum_{k=1}^{|\mathcal{U}|} P_{k m}^{\rm CS} + P_{\rm RIS}N_k \\ \label{p3:c1_modified} &\text { s.t. } \frac{1}{\gamma_{k}} \leq \frac{1}{\gamma_{\rm min}}, \enspace \forall k=1,2, \ldots, |\mathcal{U}|, \\ \label{p1_throu:c3_mod} &\quad \quad \eqref{p1_throu:c3} - \eqref{p1_throu:c7}. \end{align} \end{subequations} Problem \eqref{eq:min_N_modified} is a mixed-integer program and, to solve it efficiently, we relax $N_{k}$ and $\theta_n$ to be continuous variables. The final solution can then be obtained as $N_{k} \approx \lceil N^*_{k} \rceil$ and $\theta_{n} \approx \theta^*_{n}$. Consequently, the objective and the constraints of \eqref{eq:min_N_modified} become posynomials, and the relaxed problem can be solved optimally using the geometric programming (GP) technique \cite{boyd2007tutorial}. The pseudo-code of the proposed two-stage algorithm is described in Algorithm~\ref{Algo1}.
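For concreteness, the first stage can be sketched in Python as follows. The routine below only encodes the assumptions stated above (equal power split across the $K_2$ stranded UEs, perfect unit reflection, and the closed-form allocation in \eqref{eq:N_k}); the routine name, the default parameter values, and the toy channel gains are illustrative placeholders rather than part of the evaluated setup.
\begin{verbatim}
import numpy as np

def stage1_select_ues(h, p_cs_max_w, n_max, r_min, b_ue=2e6, n0_dbm_hz=-174.0):
    """Greedy Stage-1 UE selection (a sketch of steps 1--9 of Algorithm 1).

    h           : effective CS-HAPS-UE amplitude channel gains |h_k| of the K_2 UEs
    p_cs_max_w  : total CS transmit power budget in watts (33 dBm is about 2 W)
    n_max       : total number of RIS units available on the HAPS
    r_min       : per-UE rate requirement in bit/s
    """
    h = np.asarray(h, dtype=float)
    k2 = h.size
    n0 = 10.0 ** (n0_dbm_hz / 10.0) * 1e-3      # noise PSD in W/Hz
    p_k = p_cs_max_w / k2                        # equal power split (Stage-1 assumption)
    gamma_min = 2.0 ** (r_min / b_ue) - 1.0      # SNR needed for R_k >= r_min
    # Minimum RIS units per UE under perfect reflection, cf. the closed-form N_k above.
    n_req = np.ceil(np.sqrt(n0 * b_ue * gamma_min / (p_k * h ** 2))).astype(int)
    selected, used = [], 0
    for k in np.argsort(-h):                     # best channel gains first
        if used + n_req[k] > n_max:
            break
        selected.append(int(k))
        used += int(n_req[k])
    return selected, used

# Toy usage with hypothetical channel gains (illustrative values only).
rng = np.random.default_rng(0)
h_toy = 10.0 ** rng.uniform(-11.2, -9.5, size=24)
sel, used = stage1_select_ues(h_toy, p_cs_max_w=2.0, n_max=220_000, r_min=2e6)
print(len(sel), "of", h_toy.size, "stranded UEs selected,", used, "RIS units used")
\end{verbatim}
The greedy order (best channel gains first) mirrors the sorting step of Algorithm~\ref{Algo1}: UEs with stronger effective channels need fewer RIS units, so serving them first fits the largest number of UEs within the RIS-unit budget under the equal-power assumption.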
Then, a similar problem to \eqref{eq:min_N_general} can be formulated with the objective of minimizing the CS power, as follow \begin{subequations}\label{eq:min_PCS_general} \begin{align} \label{eq:min_PCS} &\min _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \sum_{k=1}^{|\mathcal{U}|} P_{k m}^{\rm CS} \\ \label{p4_Pcs:c1} &\text { s.t. } \eqref{p3:c1}-\eqref{p1_throu:c7} \end{align} \end{subequations} Then, by converting the constraints of \eqref{eq:min_PCS_general} to an equivelent or an approximate posynomial one, as in \eqref{eq:min_N_modified}, geometric optimization techniques can be applied to obtain optimal allocation of power and RIS units. \end{comment} \begin{algorithm}[h] \caption{Efficient maximization of connected UEs} \label{Algo1} \begin{algorithmic}[1] \State Set $L= L_{\rm max}$ and obtained set $\mathcal{K}_2$. \State \textbf{Input:} $h_{k}, k\in \{1, 2, \ldots, K_2\}$. \State Sort $K_2$ UEs in descending order based on their channel gains $K_{S2}\leftarrow K_2$ \For{$k = 1 \dots K_{S2}$} \; \While{$\sum_{k=1}^{K_{S2}} N_{k} \leq N_{\rm max}$} \State $P_{k m}^{\rm CS}=P_{\rm max}^{\rm CS}/K_{s2}$ \State Obtain initial $N_{k}$ from (\ref{eq:N_k}). \EndWhile \EndFor \State \textbf{Stage-1 Output:} Selected UEs $\mathcal{U}$. \State Solve optimally \eqref{eq:min_N_modified} for the set $\mathcal{U}$. \State \textbf{Stage-2 Output:} $P_{k m}^{\text{CS}*}$ and $N^*_{k}$ ($\forall k=1,\ldots,|\mathcal{U}|$). \end{algorithmic} \end{algorithm} \section{Numerical Results and Discussion} \label{Sec:Results} In this section, we present and discuss the performance of the proposed algorithm. For he purpose of comparison, we employ a benchmark approach, which first allocates equal power to all UEs and then selects the UEs with the highest channel gains to be served first with the minimum number of RIS units until the QoS requirement is satisfied. In the simulation setup, we consider an urban square area of 10 km by 10 km consisting of $L=4$ terrestrial BSs serving $K=100$ uniformly randomly distributed UEs with a minimum separation distance of 100 m among them. We assume that the terrestrial BSs are optimally placed in the considered region. The channel gains between all the UEs and the terrestrial BSs are obtained by following the 3GPP standards \cite{3gpp38901study}. The carrier frequency is set to $f_c=2$ GHz with shadowing standard deviation $\sigma=8$ dB. Unless stated otherwise, the minimum rate for a direct connection between a UE and a BS is $R_{\rm th} = 2$ Mb/s. The maximum available bandwidth to each BS is $B_{\rm BS}=50$ MHz, and subcarrier bandwidth for each UE is set to $B_{\rm UE}=2$ MHz. Accordingly, a UE will be connected to a terrestrial BS that provides the highest data rate, and the collection of such UEs forms the set $\mathcal{K}_1$. Consequently, the collection of stranded UEs forms the other set $\mathcal{K}_2$. On the other hand, the effective channel gains from the \ac{CS} to all the UEs in set $\mathcal{K}_2$ through HAPS-RIS are obtained using the standardized 3GPP channel model between a HAPS and terrestrial nodes in urban environments \cite{3gpp2017Technical,alfattani2021link}. This model considers dry air atmospheric attenuation, and corresponding atmosphere parameters are selected based on the mean annual global reference atmosphere \cite{itu1999p}. Further, we assume each UE has 0 dB antenna gain, and noise power spectral density $N_0 = -174$ dBm/Hz. 
We further set $P_{\rm max}^{\rm CS} =$ 33 dBm, and $G^{\rm CS}=$ 43.2 dBm \cite{3gpp2017Technical} in all of the simulations, unless stated otherwise. The values of other parameters are as follow: $P_{\rm RIS} =7.8$ mW\cite{huang2019reconfigurable}, $P_{k, \rm max}^{\rm CS}=30$ dBm, $N_{k, \rm min}=1000$ units and $N_{k, \rm max}=50,000$ units. {\color{black} \begin{comment} \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figures/UEs_BSs_connection_diff_freq.pdf} \caption{Relation between BSs densities and the percentage of UEs with direct connection for different frequencies.} \label{fig:connection_diff_freq} \end{figure} \subsection{Relation between BSs densities and UEs direct connection} Note that the percentage of UEs supported by the \ac{CS} via HAPS-RIS depends on the number of UEs that failed to connect with any terrestrial BS directly. The chances of UEs being supported by the BSs directly rely on the BSs and UEs densities and the carrier frequency. For a fixed density of BS, as the density of UEs increases, the percentage of UEs with direct connection drops because BSs cannot serve more users beyond their maximum loading capacities. Fig.~\ref{fig:connection_diff_freq} illustrates the fact that the percentage of UEs with direct connection increases as the density of BSs increases. However, as the carrier frequency increases to provide high data rate communications, the percentage of UEs with direct connection significantly drops even with the large number of BSs. For instance, four BSs communicating at $f_c \leq 2$ GHz are sufficient to support more than 80\% of UEs directly connected to BSs, whereas more than 20 BSs are required to support 80\% of UEs communicating at $f_c \geq 10$ GHz. Therefore, HPAS-RIS may offer a cost-effective solution in such situations than just increasing the density of the BSs. \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figures/BSs_vs_connection_v2.pdf} \caption{Relation between BSs and UEs densities and (BSs-UEs) direct connection. \textcolor{red}{To be corrected}} \label{fig:direct_percent} \end{figure} \subsection{UEs with direct connection vs UEs supported by HAPS-RSS} By neglecting interference effect in (\ref{eq:SNR}), the relation between the number of RIS units and the achievable rate for a UE supported by HAPS-RSS can be obtained. Fig. \ref{fig:N_vs_Rate} shows that the average data rate for K1 UEs with direct connection to BSs is about 23 Mb/s. However, K2 UEs supported by HAPS-RSS can satisfy the minimum rate requirement with about 4,000 RSS units, and are able to achieve the average rate of direct connection with approximately 12,000 units. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/N_vs_Rate.pdf} \caption{The relation between the number of RIS units on HAPS and a UE achievable rate.} \label{fig:N_vs_Rate} \end{figure} \subsection{RSS units allocation schemes} The total amount of RIS units placed on HAPS surface is limited by the HAPS size. Generally, HAPS nodes are giant platforms whose lengths are typically between 100 and 200 m for aerostatic airships, whereas aerodynamic HAPS have wingspans between 35 and 80 m. Since the size of each RIS unit is proportional to the wavelength ($\lambda$), [$0.1\lambda$×$0.1\lambda$ -- $0.2\lambda$×$0.2\lambda$], the surface of HAPS can accommodates large number of reflectors. To maximize the number of supported UEs, efficient allocation schemes of RSS units should be selected. In Fig. 
\ref{fig:schemes}, we compare between different allocation schemes. Initially, the percentage of connected UEs ($R\geq R_{min}$) with direct connection to BSs is around 43\%. However, using HAPS increases the percentage of connected UEs in all the allocation schemes. Equal and proportional schemes have almost the same performance, with slight higher percentage of connected UEs with the proportional scheme. Equal scheme basically serves all the K2 UEs with same amount of the RSS units, whereas the proportional scheme allocate RSS units to each K2 UEs proportionally based on their channel gain. The min scheme serves UEs with the lowest channel gain first until all the RSS units are utilized, whereas the max scheme gives the priority for UEs with the highest channel gain. As seen in Fig. \ref{fig:schemes}, max scheme has the best performance in terms of the connected UEs with the limited number of RSS units. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/allocation_schemes.pdf} \caption{Comparison of different allocation schemes of RSS units.} \label{fig:schemes} \end{figure} \begin{figure*}[h] \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\linewidth]{figures/av_opt_schemes.pdf} \caption{Average UE rate.} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\linewidth]{figures/worstR_opt_schemes.pdf} \caption{Worst UE rate.} \label{fig:sfig2} \end{subfigure} \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\linewidth]{figures/throu_opt_schemes.pdf} \caption{Sum rate of the UEs belong to set $\mathcal{K}_2$.} \label{fig:sfig3} \end{subfigure} \caption{Comparison of different allocation strategies.} \label{fig:opt_comp} \end{figure*} \subsection{Comparison between allocation schemes} \subsubsection{Sum rate and worst UE rate maximization based allocation} In this simulation, we set $N_{k, min}=1000$ , $N_{k, max}=10,000$ RIS units and $P_{k, min}^{\rm CS}=15$ dBm and $P_{k, max}^{\rm CS}=20$ dBm. Fig. \ref{fig:opt_comp} compares the performances of the achievable average UE rate, worst UE rate, and the sum rate of UEs belongs to set $\mathcal{K}_2$ based on the optimized power and RIS units allocation strategy. The optimized allocation is compared to the \textit{proportional} allocation strategy for benchmark purpose. It can be observed that the \textit{max throughput} allocation scheme achieved the best performance in terms of the average UEs rate, and the sum rate of the UEs belongs to set $\mathcal{K}_2$. However, in terms of the worst UE performance, the \textit {max min R} allocation scheme significantly improves the rate of the UE with the weakest channel gain, and it substantially outperforms the \textit{max throughput} and the \textit{proportional allocation} schemes. Note that the improvement in the worst UE rate leads to the degradation of the sum and average UE rates. Since the \textit{max min R} scheme distributes the system resources fairly and maximizes the fairness among all the UEs, it results in performance loss for the whole system. It is worth noting that performance enhancement for the worst UE rate by \textit{max min R} scheme is about 15\%, while the degradation in terms of the average rate or the throughput is less than 1\% to the optimized \textit{max throughput} allocation scheme. \subsubsection{RIS units minimization based allocation} Fig. 
\ref{fig:R_min_vs_N} shows the variation of the minimum number of RIS units required with the different values of the minimum rate requirements of the UE. The number of RIS units and power $P_{k m}^{\rm CS}$ corresponding to all $\mathcal{K}_2$ UEs that satisfy the minimum rate requirements are obtained by solving the problem \eqref{eq:min_N_2}. It can be observed that an almost linear relationship exists between the rate requirement and the required minimum number of RIS units. Moreover, by doubling the rate required for the UEs, the RIS unit requirement is increased by 100\%. Fig. \ref{fig:R_min_vs_N} also shows the relationship between the different values of maximum transmit power available at the \ac{CS} $P_{max}^{\rm CS}$ and the optimized number of RIS units. It can be observed that by increasing $P_{max}^{\rm CS}$ by 1 dB, the minimum required the number of RIS units is reduced by about 11\%. \begin{figure} \centering \includegraphics[scale=0.6]{figures/min_N_allocation_dif_PCS.pdf} \caption{Relation between $R_{min}$ for $\mathcal{K}_2$ UEs and optimized total required RIS units.} \label{fig:R_min_vs_N} \end{figure} \end{comment} \subsection{Resource-efficiency maximization} Fig. \ref{fig:max_UES_minPCS} plots the normalized RE obtained using Algorithm~\ref{Algo1} (on the left-hand side y-axis) and the percentage of connected UEs ( on the right-hand side y-axis) versus different values of minimum rate requirement $R_{\rm min}$. It also compares the performance of Algorithm~\ref{Algo1} with the benchmark approach. The maximum number of RIS units mounted on HAPS is set to $N_{\rm max}=220,000$ units. It can be observed that as the QoS (represented by $R_{\rm min}$) increases, the percentage of connected UEs and the RE drops. However, this performance degradation is more significant in terms of the percentage of connected UEs than the RE. Furthermore, we observed that the RE obtained using Algorithm~\ref{Algo1} significantly outperforms the one obtained using the benchmark approach. This is due to the fact that Algorithm~\ref{Algo1} optimizes allocation of both power and RIS units to UEs. \begin{comment} \begin{figure} \centering \includegraphics[scale=0.6]{figures/max_UEs_connectedRatio_bench_vs_Algo_normalized.pdf} \caption{RIS-efficiency of connected UEs for different $R_{\rm min}$.} \label{fig:max_UEs_connectedRatio_bench_vs_Algo} \end{figure} \end{comment} \begin{comment} Fig.~\ref{fig:max_UES_minPCS} plots and compares normalized efficiency of the power-efficient allocation and the benchmark approach for varying $R_{\rm min}$ requirements. The power-efficient allocation is obtained by solving \eqref{eq:power-efficiency} using Algorithm~\ref{Algo1}. We observe that the proposed approach is always more power-efficient than the benchmark approach However, the percentage of connected UEs and the power efficiency decreases when the QoS requirements is high for all UEs. 
\end{comment} } \begin{figure} \centering \includegraphics[scale=0.6]{max_UES_minP_total_normalized_modified.pdf} \caption{Resource-efficiency performance of connected UEs for different $R_{\rm min}$.} \label{fig:max_UES_minPCS} \end{figure} \subsection{Percentage of connected UEs} \begin{figure} \centering \includegraphics[scale=0.6]{max_UEs_connected_bench_vs_Algo.pdf} \caption{Maximizing connected UEs performance for different $N_{\rm max}$.} \label{fig:max_UES_algo} \end{figure} \begin{figure} \centering \includegraphics[scale=0.6]{max_UEs_connected_bench_vs_Algo_different_PCS_max.pdf} \caption{Maximizing connected UEs performance for different $P_{\rm max}^{\rm CS}$.} \label{fig:max_UES_algo_diff_PCS} \end{figure} Figs. \ref{fig:max_UES_algo} and \ref{fig:max_UES_algo_diff_PCS} plot and compare the percentage of connected UEs obtained through Algorithm~\ref{Algo1}, the benchmark approach, and the \textit{within-cell} communication approach for different values of the number of RIS units $N_{\rm max}$ available at the HAPS and of the maximum power $P_{\rm max}^{\rm CS}$ available at the \ac{CS}, respectively. In Fig. \ref{fig:max_UES_algo}, to study the impact of $N_{\rm max}$, we consider only the second term (i.e., $P_{\rm RIS}N_k$) of the objective function in (\ref{eq:min_N_modified_a}). The selected range of $N_{\rm max}$ is set between 10,000 and 220,000 units. This range corresponds to a total RIS area between 9 $\rm m^2$ and 198 $\rm m^2$ at a carrier frequency of 2 GHz \footnote{This represents a limited area on a typical HAPS surface, as the length of an airship is between 100 and 200 m, whereas aerodynamic HAPS have wingspans between 35 m and 80 m. The size of each RIS unit is $(0.2\lambda)^2$ \cite{kurt2021vision}.}. In Fig. \ref{fig:max_UES_algo_diff_PCS}, only the first term of (\ref{eq:min_N_modified_a}) (i.e., $P_{km}^{\rm CS}$) is considered to study the effect of $P_{\rm max}^{\rm CS}$ on the percentage of connected UEs. The maximum transmit power of the CS is set to vary between 30 dBm and 35 dBm, and $N_{\rm max}$ is set to 150,000 units. It can be observed from the figures that the percentage of connected UEs increases with the maximum power of the \ac{CS} $P_{\rm max}^{\rm CS}$ and the maximum number of RIS units $N_{\rm max}$ (or the size of HAPS). These behaviours are intuitive, as making more system resources ($P_{\rm max}^{\rm CS}$ and $N_{\rm max}$) available allows a larger number of stranded UEs to be served by the \ac{CS} via HAPS-RIS. Figs. \ref{fig:max_UES_algo} and \ref{fig:max_UES_algo_diff_PCS} also show that the performance of the proposed approach is 1--3\% higher than that of the benchmark approach. Moreover, $P_{\rm max}^{\rm CS}$ has a more significant impact than $N_{\rm max}$ on the percentage of connected UEs. By increasing $P_{\rm max}^{\rm CS}$ by 2 dB, the percentage of connected UEs increases by about 4\%, whereas a doubling of the RIS size is needed to achieve the same increase in the percentage of connected UEs. Furthermore, it can be observed from the figures that 76\% of the UEs are served through the \textit{within-cell} communication approach, and the \ac{CS} supports the remaining UEs via HAPS-RIS. Hence, \textit{beyond-cell} communications via HAPS-RIS is able to complement TNs. \section{Conclusion}\label{Sec:conclstion} \textcolor{black}{In this letter, we investigated the resource optimization of \textit{beyond-cell} communications that use HAPS-RIS technology to complement TNs by supporting unserved UEs.
In particular, given the limitations of the CS power and HAPS-RIS size, it might not be feasible to support all unserved UEs. Therefore, we formulated a novel resource-efficient optimization problem that simultaneously maximizes the percentage of connected UEs using minimal \ac{CS} power and RIS units. The results show the capability of the \textit{beyond-cell} communications approach to support a larger number of UEs. Further, the results show the superiority of the proposed solutions over the benchmark approach and demonstrate the impact of the HAPS size and the QoS requirement on the percentage of connected UEs and the efficiency of the system. } \bibliographystyle{IEEEtran}
Accordingly, we formulated a novel optimization problem to maximize the \ac{RE} of the system. \begin{itemize} \item A novel resource-efficient optimization problem that maximizes the percentage of connected \acp{UE} while minimizing the usage of RIS units and transmit power is formulated. \item Since the resulting problem is a \ac{MINLP} and hard to solve, a low complexity two-stage algorithm is proposed to solve it. \item Through numerical results, we study the impact of HAPS size and QoS requirements of \acp{UE} on the percentage of connected UEs and demonstrate significant improvements in \ac{RE} of the system. \end{itemize} The rest of the letter is organized as follows. In Section \ref{Sec:model}, the system model is described. Section \ref{Sec:Problem_form} presents the problem formulation. The proposed solution and the algorithm for solving the optimization problem are discussed in Section \ref{Sec:Solution}. Numerical results and discussion are presented in Section \ref{Sec:Results}. Finally, Section \ref{Sec:conclstion} concludes the letter. \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{new_model_HAPS-RIS.pdf} \caption{System model for HAPS-RIS assisting beyond cell communications.} \label{fig:model} \end{figure} \section{System Model} \label{Sec:model} We consider a typical urban region consists of $K$ UEs, $L$ terrestrial \acp{BS}, a single HAPS-RIS, and one \ac{CS}, as depicted in Fig.~\ref{fig:model}. The UEs are assumed to suffer from severe shadowing and blockages, and NLoS paths, which are typical characteristics of propagation media in urban regions. Based on channel conditions between the \acp{BS} and the UEs, and the maximum serving capacity of the \acp{BS}, a set of $\mathcal{K}_1 = \{1,\ldots,K_1\}$ UEs will be supported by direct links from the BSs (referred to as \textit{within-cell} communications \cite{alfattani2022beyond}). The set of remaining UEs $\mathcal{K}_2 = \{1,\ldots,K_2\}$, which cannot form direct connection with the terrestrial \ac{BS} will be served by the \ac{CS} via HAPS-RIS (referred to as \textit{beyond-cell} communications \cite{alfattani2022beyond}). The CS is located somewhere in the HAPS coverage area. Note that $\mathcal{K} = \mathcal{K}_1 \cup \mathcal{K}_2$. We assume that the \ac{CS} serves the stranded UEs in set $\mathcal{K}_2$ using orthogonal subcarriers, and hence, there will be no inter-UE interference. Further, both \textit{within-cell} and \textit{beyond-cell} communications occur in two orthogonal frequency bands, while keeping the subcarrier bandwidth $B_{\rm UE}$ same for both types of communications. As a result, the signals from \textit{within-cell} UEs will not interfere with the signals from \textit{beyond-cell} UEs and vice versa. We assume that \textit{within-cell} UEs are connected optimally with terrestrial BSs, and hence, our focus in this work is on \textit{beyond-cell} communications. Accordingly, the received signal at UE $k \in \mathcal{K}_2$ on subcarrier $m$ can be expressed as \begin{equation} \label{eq:Rx_signal} y_{k m}=\sqrt{P^{\rm CS}_{k m}} h_{k m} \mathrm{\Phi}_{k} \;x_{{k m}}+w_{km}, \end{equation} where $P^{\rm CS}_{km}$ denotes the transmit power of UE $k$ and $w_{km}$ denotes the additive white Gaussian noise (AWGN) $\sim \mathcal{CN}(0,N_0B_{\rm UE})$, where $N_0$ is the noise power spectral density. 
$h_{k m}$ denotes the effective channel gain from the \ac{CS} to the HAPS-RIS and from the HAPS-RIS to UE $k$, and is given by \begin{equation} h_{k m} = \sqrt{G^{\rm CS} G_{r}^{k} (\textsf{PL}^{\rm{CS-HAPS}-\textit{k}})^{-1}}, \end{equation} where $G^{\rm CS}$ denotes the antenna gain of the control station, and $G_{r}^{k}$ is the receiver antenna gain of UE $k$. $\textsf{PL}^{\text{CS-HAPS}-k} = \textsf{PL}^{\rm{CS}-\rm{HAPS}} \textsf{PL}^{\text{HAPS}-k}$ denotes the effective path loss between the \ac{CS} and UE $k$ via HAPS-RIS. Further, $\mathrm{\Phi_{k}}$ represents the reflection gain of the RIS corresponding to UE $k$, and is expressed as \begin{equation} \mathrm{\Phi}_{k}=\sum_{i=1}^{N_{k}} \rho_i e^{-j \left(\phi_i - \theta_{i}-\theta_{k}\right)}, \end{equation} where $\rho_i$ denotes the reflection loss corresponding to RIS unit $i$. $\theta_{i}$ and $\theta_{k}$ represent the corresponding phases between RIS unit $i$ and both the control station and UE $k$, respectively. $\phi_i$ represents the adjusted phase shift of RIS unit $i$, and $N_{k}$ represents the total number of RIS units allocated to UE $k$. Using \eqref{eq:Rx_signal}, the \ac{SNR} at UE $k$ can be written as \begin{comment} \begin{equation}\label{eq:SNR} \gamma_{k^\prime}= \frac{P_{k^\prime}^{\rm CS} \left| h_{k^\prime} \mathrm{\Phi_{k^\prime}}\right|^2} {\sum_{i=1,i \neq {k^\prime}}^{K_2} P_{i}^{\rm CS}\left| h_{i} \mathrm{\Phi_{i}}\right|^2 + \sigma^2 B} \end{equation} \end{comment} \begin{equation}\label{eq:SNR} \gamma_{k}= \frac{P_{km}^{\rm CS} \left| h_{k m} \mathrm{\Phi}_{k}\right|^2} { N_0 B_{\rm UE}}, \end{equation} and the corresponding achievable rate can be expressed as \begin{equation}\label{eq:R_K2} R_{k} = B_{\rm UE} \log_2(1+\gamma_{k}). \end{equation} \section{Problem Formulation}\label{Sec:Problem_form} Using power efficiently at the \ac{CS} for transmitting signals, and at the HAPS-RIS for activating the RIS units offer greener environment and longer refueling intervals for HAPS. Also, in case of a large UE density or strict QoS requirement, it might infeasible to serve all unserved UEs by the CS via HAPS-RIS. Therefore, we need to select as many UEs that can be supported by the \ac{CS}, while using as minimum system resources (transmit power and RIS units) as possible. Accordingly, we define a novel performance metric known as resource efficiency as follows: \begin{definition} Resource efficiency (RE) of the beyond-cell communication system, $\eta$, is defined as the ratio between the percentage of served UEs and the average power consumption in dBm by each supported UE, which includes the consumption towards signal transmission and RIS units configuration: \begin{equation} \eta = \dfrac{\frac{1}{K}\Big(K_1+\sum_{k=1}^{K_2}u_{k}\Big)}{\frac{1}{|\mathcal{U}|}\sum_{k=1}^{K_2} \Big(P_{k m}^{\rm CS}u_k + P_{\rm RIS}N_ku_k\Big)}, \end{equation} where $u_k$ is the indicator variable, if $u_k=1$, user $k$ will get service by HAPS-RIS, otherwise not if $u_k=0$. In the denominator, the tersm $\sum_{k=1}^{K_2} P_{k m}^{\rm CS}u_k$ and $\sum_{k=1}^{K_2}P_{\rm RIS}N_ku_k$ represent the total transmit power consumption at the \ac{CS} and the total power consumed by the RIS units for all supported UEs, respectively. $P_{\rm RIS}$ denotes the consumed power by each RIS unit for phase shifting, which is dependent on the RIS configuration technology and its resolution. Finally, $|\mathcal{U}|$ denotes the cardinality of the set, which constitutes the UEs supported by the CS via HAPS-RIS. 
\end{definition} In the following, we formulate the resource-efficient UEs maximization problem. It can be expressed as \begin{comment} \begin{subequations} \begin{align} \label{eq:P4} & \max _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \dfrac{\mathcal{PC}}{\sum_{k=1}^{K_2} N_{k}} \\ \label{p4:c1} & \text { s.t. } \textcolor{black}{ L \leq L_{\rm max}} \\ \label{p1_throu:c2} &\quad R_{k} \geq R_{th}, \enspace \forall k=1,2, \ldots, K_2 \\ \label{p1_throu:c3} &\quad \sum_{k=1}^{K_2} N_{k} \leq N_{\rm max} \\ \label{p1_throu:c4} &\quad 0\leq \theta_{n} \leq 2\pi, \enspace \forall n=1,2, \ldots, N_{ \rm max} \\ \label{p1_throu:c5} &\quad\sum_{k=1}^{K_2} P_{k m}^{\rm CS} \leq P_{\rm max}^{\rm CS} \\ \label{p1_throu:c6} &\quad N_{k \rm min} \leq N_{k} \leq N_{k \rm max}\\ \label{p1_throu:c7} &\quad P_{k \rm min}^{\rm CS} \leq P_{k m}^{\rm CS} \leq P_{k \rm max}^{\rm CS}, \end{align} \end{subequations} \end{comment} \begin{subequations} \label{eq:RIS-efficiency} \begin{align} \label{eq:P4-2} &\max _{u_{k},\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \et \\ \label{p4:c1} & \text { s.t. } L \leq L_{\rm max}, \\ \label{p1:c1} &\quad R_{k} \geq u_{k} R_{\rm th}, \enspace \forall k=1,2, \ldots, K_2, \\ \label{p1:c2} &\quad \sum_{k=1}^{K_2} u_{k}N_{k} \leq N_{\rm max}, \\ \label{p1:c3} &\quad \theta_{n} \in \{0, 2\pi\}, \enspace \forall n=1,2, \ldots, N_{\rm max}, \\ \label{p1:c4} &\quad\sum_{k=1}^{K_2} u_{k}P_{k m}^{\rm CS} \leq P_{\rm max}^{\rm CS}, \\ \label{p1:c5} &\quad N_{k} \in \{N_{k, \rm min}, N_{k, \rm{max}}\} ,\enspace \forall k=1,2, \ldots, K_2, \\ \label{p1:c6} &\quad 0 \leq P_{k m}^{\rm CS} \leq u_{k} P_{k, \rm max}^{\rm CS} ,\enspace \forall k=1,2, \ldots, K_2,\\ \label{p1:c7} &\quad u_{k} \in \{0,1\}, \end{align} \end{subequations} where $P^{\rm CS}_{\rm max}$ denotes the maximum available transmit power at the \ac{CS}. $N_{k,\rm max}$ and $P_{k, \rm max}^{\rm CS}$ denote the maximum number of RIS units and the amount of power allocated to UE $k$, respectively. (\ref{p4:c1}) limits the number of BSs in the area. Constraint (\ref{p1:c1}) guarantees that each selected UE satisfies the threshold rate, $R_{\rm th}$, requirement. Constraint (\ref{p1:c2}) ensures the total number of RIS units allocated to the supported UEs does not exceed the maximum number of available RIS units, $N_{\rm max}$, which is limited by the HAPS size. Constraint (\ref{p1:c3}) determines the range of the adjustable phase shift for each RIS unit. Constraint (\ref{p1:c4}) guarantees that the total allocated power by the \ac{CS} to the selected UEs is less than the maximum available power. Finally, constraints (\ref{p1:c5}) and (\ref{p1:c6}) ensure feasible and fair allocation of both RIS units and \ac{CS} power to each UE. \begin{comment} \begin{subequations} \label{eq:power-efficiency} \begin{align} \label{eq:P5} & \max _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \dfrac{\mathcal{PC}}{\sum_{k=1}^{K_2} \Big( P^{\rm CS}_{k m}} \\ \label{p5:c1} & \text { s.t. } (\ref{p4:c1})-(\ref{p1:c7}) . \end{align} \end{subequations} \end{comment} \section{Proposed Solution} \label{Sec:Solution} \begin{comment} To maximize the number of \textit{within-cell} UEs connection, we set $L=L_{max}$. Then, for the \textit{beyond-cell} UEs, the problem of maximizing the percentage of connected UEs using minimal number of RIS units can be re-written as: Accordingly, \eqref{eq:RIS-efficiency} is a combinatorial hard problem, which may involve high complex computation. 
Therefore, we propose a low-complexity practical approach to solve it as described in Algorithm~\ref{Algo1}.
\end{comment}
Since problem (\ref{eq:RIS-efficiency}) is an \ac{MINLP}, it is hard to solve optimally with low complexity. Therefore, we develop a low-complexity algorithm to solve it suboptimally. The proposed algorithm solves \eqref{eq:RIS-efficiency} in two stages. In the first stage, we maximize the numerator of (\ref{eq:P4-2}) by maximizing the number of UEs that can establish a direct connection with TNs (i.e., by setting $L=L_{\rm max}$), and then finding the largest feasible set of UEs in $\mathcal{K}_2$ that can be supported by the CS via HAPS-RIS. To this end, we first sort the UEs according to their channel gains. Second, under the assumptions of equal power allocation to each UE and perfect reflection at each RIS unit, we allocate the minimum required number of RIS units to each UE as follows:
\begin{equation}\label{eq:N_k}
N_{k} = \Biggl\lceil \sqrt{\frac{N_0 B_{\rm UE} (2^{\frac{R_{\rm min}}{B_{\rm UE}}}-1)}{P_{k m}^{\rm CS} \left| h_{k} \right|^2}}\Biggr\rceil.
\end{equation}
The initial RIS unit allocation starts with the UEs having the best channel conditions and continues until all RIS units are utilized. As a result, a set $\mathcal{U}$ with the largest number of feasible UEs is determined. In the second stage, the denominator of (\ref{eq:P4-2}) is minimized by optimally allocating the \ac{CS} power and RIS units to each UE belonging to the set $\mathcal{U}$. This is accomplished by solving the following optimization problem:
\begin{subequations}\label{eq:min_N_general}
\begin{align}
\label{eq:min_N} &\min _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \sum_{k=1}^{|\mathcal{U}|} P_{k m}^{\rm CS} + P_{\rm RIS}N_k \\
\label{p3:c1} &\text { s.t. } R_{k} \geq R_{\rm th}, \enspace \forall k=1,2, \ldots, |\mathcal{U}|, \\
\label{p1_throu:c3} &\quad \sum_{k=1}^{K_2} N_{k} \leq N_{\rm max}, \\
\label{p1_throu:c4} &\quad \theta_{n} \in \{0, 2\pi\}, \enspace \forall n=1,2, \ldots, N_{ \rm max}, \\
\label{p1_throu:c5} &\quad\sum_{k=1}^{K_2} P_{k m}^{\rm CS} \leq P_{\rm max}^{\rm CS}, \\
\label{p1_throu:c6} &\quad N_{k} \in \{N_{k, \rm min}, \ldots, N_{k, \rm max}\}, \enspace \forall k=1,2, \ldots, |\mathcal{U}|,\\
\label{p1_throu:c7} &\quad 0 \leq P_{k m}^{\rm CS} \leq P_{k, \rm max}^{\rm CS}, \enspace \forall k=1,2, \ldots, |\mathcal{U}|.
\end{align}
\end{subequations}
Equivalently, problem \eqref{eq:min_N_general} can be re-written as
\begin{subequations}\label{eq:min_N_modified}
\begin{align}
\label{eq:min_N_modified_a} &\min _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \sum_{k=1}^{|\mathcal{U}|} P_{k m}^{\rm CS} + P_{\rm RIS}N_k \\
\label{p3:c1_modified} &\text { s.t. } \frac{1}{\gamma_{k}} \leq \frac{1}{\gamma_{\rm min}}, \enspace \forall k=1,2, \ldots, |\mathcal{U}|, \\
\label{p1_throu:c3_mod} &\quad \quad \eqref{p1_throu:c3} - \eqref{p1_throu:c7}.
\end{align}
\end{subequations}
Problem \eqref{eq:min_N_modified} is a mixed-integer program, and to solve it efficiently, we relax $N_{k}$ and $\theta_{n}$ to continuous variables. The final solution is then obtained as $N_{k} \approx \lceil N^*_{k} \rceil$ and $\theta_{n} \approx \lceil \theta^*_{n} \rceil$. Consequently, the objective and the constraints of \eqref{eq:min_N_modified} become posynomials, and the relaxed problem can be solved optimally using the geometric programming (GP) technique \cite{boyd2007tutorial}. The pseudo-code of the proposed two-stage algorithm is described in Algorithm~\ref{Algo1}.
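For concreteness, the following minimal Python sketch illustrates Stage~1 only: equal power allocation, the closed-form RIS-unit estimate of \eqref{eq:N_k} under the perfect-reflection assumption (so that $|\mathrm{\Phi}_k| \approx N_k$ in \eqref{eq:SNR}), and greedy admission of the best-channel UEs until $N_{\rm max}$ is exhausted. The function names and the numerical values below, including the effective channel gains, are illustrative assumptions and not part of the system model; Stage~2, i.e., the GP-based refinement of \eqref{eq:min_N_modified}, is omitted.
\begin{verbatim}
import math

def min_ris_units(p_cs, h_gain, r_min, b_ue, n0):
    # Eq. (N_k): smallest RIS-unit count meeting r_min, with |Phi_k| ~ N_k.
    gamma_min = 2 ** (r_min / b_ue) - 1            # required SNR (linear)
    return math.ceil(math.sqrt(gamma_min * n0 * b_ue / (p_cs * h_gain)))

def achievable_rate(p_cs, h_gain, n_k, b_ue, n0):
    # Eqs. (SNR) and (R_k) under the same perfect-reflection assumption.
    gamma = p_cs * h_gain * n_k ** 2 / (n0 * b_ue)
    return b_ue * math.log2(1 + gamma)

def stage1_greedy(h_gains, p_max_cs, n_max, r_min, b_ue, n0):
    # Stage 1: split power equally, then admit best-channel UEs
    # while enough RIS units remain.
    p_equal = p_max_cs / len(h_gains)
    order = sorted(range(len(h_gains)), key=lambda k: h_gains[k], reverse=True)
    served, units_used = [], 0
    for k in order:
        n_k = min_ris_units(p_equal, h_gains[k], r_min, b_ue, n0)
        if units_used + n_k > n_max:
            break
        served.append((k, n_k))
        units_used += n_k
    return served, units_used

if __name__ == "__main__":
    n0 = 10 ** ((-174 - 30) / 10)        # -174 dBm/Hz in W/Hz
    b_ue = 2e6                           # 2 MHz per UE
    p_max = 10 ** ((33 - 30) / 10)       # 33 dBm in W
    h_gains = [2.0e-21, 1.0e-21, 5.0e-22, 2.5e-22]   # assumed |h_km|^2 values
    served, used = stage1_greedy(h_gains, p_max, 220_000, 2e6, b_ue, n0)
    for k, n_k in served:
        r = achievable_rate(p_max / len(h_gains), h_gains[k], n_k, b_ue, n0)
        print(f"UE {k}: N_k = {n_k}, rate = {r / 1e6:.2f} Mb/s")
    print("RIS units used:", used)
\end{verbatim}
The list returned by this sketch plays the role of the Stage-1 output $\mathcal{U}$, which would then be passed to the GP solver of \eqref{eq:min_N_modified} in Stage~2.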
\begin{comment}
\subsubsection{Power-efficient UEs maximization}
{\color{black} By following a similar procedure of the previous section, and the same steps 1--8 of Algorithm \ref{Algo1}, we can obtain the maximum set of UEs to be supported by HAPS-RIS. Then, a similar problem to \eqref{eq:min_N_general} can be formulated with the objective of minimizing the CS power, as follow
\begin{subequations}\label{eq:min_PCS_general}
\begin{align}
\label{eq:min_PCS} &\min _{\mathrm{\Phi}_{k},N_{k},P_{k m}^{\rm CS}} \sum_{k=1}^{|\mathcal{U}|} P_{k m}^{\rm CS} \\
\label{p4_Pcs:c1} &\text { s.t. } \eqref{p3:c1}-\eqref{p1_throu:c7}
\end{align}
\end{subequations}
Then, by converting the constraints of \eqref{eq:min_PCS_general} to an equivelent or an approximate posynomial one, as in \eqref{eq:min_N_modified}, geometric optimization techniques can be applied to obtain optimal allocation of power and RIS units.
\end{comment}
\begin{algorithm}[h]
\caption{Efficient maximization of connected UEs}
\label{Algo1}
\begin{algorithmic}[1]
\State Set $L= L_{\rm max}$ and obtain the set $\mathcal{K}_2$.
\State \textbf{Input:} $h_{k}, k\in \{1, 2, \ldots, K_2\}$.
\State Sort the $K_2$ UEs in descending order of their channel gains; set $K_{S2}\leftarrow K_2$.
\For{$k = 1 \dots K_{S2}$} \;
\While{$\sum_{k=1}^{K_{S2}} N_{k} \leq N_{\rm max}$}
\State $P_{k m}^{\rm CS}=P_{\rm max}^{\rm CS}/K_{S2}$
\State Obtain initial $N_{k}$ from (\ref{eq:N_k}).
\EndWhile
\EndFor
\State \textbf{Stage-1 Output:} Selected UEs $\mathcal{U}$.
\State Solve \eqref{eq:min_N_modified} optimally for the set $\mathcal{U}$.
\State \textbf{Stage-2 Output:} $P_{k m}^{\text{CS}*}$ and $N^*_{k}$ ($\forall k=1,\ldots,|\mathcal{U}|$).
\end{algorithmic}
\end{algorithm}
\section{Numerical Results and Discussion} \label{Sec:Results}
In this section, we present and discuss the performance of the proposed algorithm. For the purpose of comparison, we employ a benchmark approach, which first allocates equal power to all UEs and then serves the UEs with the highest channel gains first, each with the minimum number of RIS units required to satisfy the QoS requirement. In the simulation setup, we consider an urban square area of 10 km by 10 km consisting of $L=4$ terrestrial BSs serving $K=100$ uniformly randomly distributed UEs with a minimum separation distance of 100 m among them. We assume that the terrestrial BSs are optimally placed in the considered region. The channel gains between all the UEs and the terrestrial BSs are obtained by following the 3GPP standards \cite{3gpp38901study}. The carrier frequency is set to $f_c=2$ GHz with shadowing standard deviation $\sigma=8$ dB. Unless stated otherwise, the minimum rate for a direct connection between a UE and a BS is $R_{\rm th} = 2$ Mb/s. The maximum bandwidth available to each BS is $B_{\rm BS}=50$ MHz, and the subcarrier bandwidth for each UE is set to $B_{\rm UE}=2$ MHz. Accordingly, a UE connects to the terrestrial BS that provides the highest data rate, and the collection of such UEs forms the set $\mathcal{K}_1$. Consequently, the collection of stranded UEs forms the other set $\mathcal{K}_2$. On the other hand, the effective channel gains from the \ac{CS} to all the UEs in set $\mathcal{K}_2$ through HAPS-RIS are obtained using the standardized 3GPP channel model between a HAPS and terrestrial nodes in urban environments \cite{3gpp2017Technical,alfattani2021link}.
This model considers dry-air atmospheric attenuation, and the corresponding atmospheric parameters are selected based on the mean annual global reference atmosphere \cite{itu1999p}. Further, we assume each UE has a 0 dBi antenna gain, and a noise power spectral density $N_0 = -174$ dBm/Hz. We further set $P_{\rm max}^{\rm CS} =$ 33 dBm and $G^{\rm CS}=$ 43.2 dBi \cite{3gpp2017Technical} in all of the simulations, unless stated otherwise. The values of the other parameters are as follows: $P_{\rm RIS} =7.8$ mW \cite{huang2019reconfigurable}, $P_{k, \rm max}^{\rm CS}=30$ dBm, $N_{k, \rm min}=1000$ units, and $N_{k, \rm max}=50,000$ units.
{\color{black}
\begin{comment}
\begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figures/UEs_BSs_connection_diff_freq.pdf} \caption{Relation between BSs densities and the percentage of UEs with direct connection for different frequencies.} \label{fig:connection_diff_freq} \end{figure}
\subsection{Relation between BSs densities and UEs direct connection}
Note that the percentage of UEs supported by the \ac{CS} via HAPS-RIS depends on the number of UEs that failed to connect with any terrestrial BS directly. The chances of UEs being supported by the BSs directly rely on the BSs and UEs densities and the carrier frequency. For a fixed density of BS, as the density of UEs increases, the percentage of UEs with direct connection drops because BSs cannot serve more users beyond their maximum loading capacities. Fig.~\ref{fig:connection_diff_freq} illustrates the fact that the percentage of UEs with direct connection increases as the density of BSs increases. However, as the carrier frequency increases to provide high data rate communications, the percentage of UEs with direct connection significantly drops even with the large number of BSs. For instance, four BSs communicating at $f_c \leq 2$ GHz are sufficient to support more than 80\% of UEs directly connected to BSs, whereas more than 20 BSs are required to support 80\% of UEs communicating at $f_c \geq 10$ GHz. Therefore, HPAS-RIS may offer a cost-effective solution in such situations than just increasing the density of the BSs.
\begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figures/BSs_vs_connection_v2.pdf} \caption{Relation between BSs and UEs densities and (BSs-UEs) direct connection. \textcolor{red}{To be corrected}} \label{fig:direct_percent} \end{figure}
\subsection{UEs with direct connection vs UEs supported by HAPS-RSS}
By neglecting interference effect in (\ref{eq:SNR}), the relation between the number of RIS units and the achievable rate for a UE supported by HAPS-RSS can be obtained. Fig. \ref{fig:N_vs_Rate} shows that the average data rate for K1 UEs with direct connection to BSs is about 23 Mb/s. However, K2 UEs supported by HAPS-RSS can satisfy the minimum rate requirement with about 4,000 RSS units, and are able to achieve the average rate of direct connection with approximately 12,000 units.
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/N_vs_Rate.pdf} \caption{The relation between the number of RIS units on HAPS and a UE achievable rate.} \label{fig:N_vs_Rate} \end{figure}
\subsection{RSS units allocation schemes}
The total amount of RIS units placed on HAPS surface is limited by the HAPS size. Generally, HAPS nodes are giant platforms whose lengths are typically between 100 and 200 m for aerostatic airships, whereas aerodynamic HAPS have wingspans between 35 and 80 m.
Since the size of each RIS unit is proportional to the wavelength ($\lambda$), [$0.1\lambda$×$0.1\lambda$ -- $0.2\lambda$×$0.2\lambda$], the surface of HAPS can accommodates large number of reflectors. To maximize the number of supported UEs, efficient allocation schemes of RSS units should be selected. In Fig. \ref{fig:schemes}, we compare between different allocation schemes. Initially, the percentage of connected UEs ($R\geq R_{min}$) with direct connection to BSs is around 43\%. However, using HAPS increases the percentage of connected UEs in all the allocation schemes. Equal and proportional schemes have almost the same performance, with slight higher percentage of connected UEs with the proportional scheme. Equal scheme basically serves all the K2 UEs with same amount of the RSS units, whereas the proportional scheme allocate RSS units to each K2 UEs proportionally based on their channel gain. The min scheme serves UEs with the lowest channel gain first until all the RSS units are utilized, whereas the max scheme gives the priority for UEs with the highest channel gain. As seen in Fig. \ref{fig:schemes}, max scheme has the best performance in terms of the connected UEs with the limited number of RSS units. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/allocation_schemes.pdf} \caption{Comparison of different allocation schemes of RSS units.} \label{fig:schemes} \end{figure} \begin{figure*}[h] \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\linewidth]{figures/av_opt_schemes.pdf} \caption{Average UE rate.} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\linewidth]{figures/worstR_opt_schemes.pdf} \caption{Worst UE rate.} \label{fig:sfig2} \end{subfigure} \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\linewidth]{figures/throu_opt_schemes.pdf} \caption{Sum rate of the UEs belong to set $\mathcal{K}_2$.} \label{fig:sfig3} \end{subfigure} \caption{Comparison of different allocation strategies.} \label{fig:opt_comp} \end{figure*} \subsection{Comparison between allocation schemes} \subsubsection{Sum rate and worst UE rate maximization based allocation} In this simulation, we set $N_{k, min}=1000$ , $N_{k, max}=10,000$ RIS units and $P_{k, min}^{\rm CS}=15$ dBm and $P_{k, max}^{\rm CS}=20$ dBm. Fig. \ref{fig:opt_comp} compares the performances of the achievable average UE rate, worst UE rate, and the sum rate of UEs belongs to set $\mathcal{K}_2$ based on the optimized power and RIS units allocation strategy. The optimized allocation is compared to the \textit{proportional} allocation strategy for benchmark purpose. It can be observed that the \textit{max throughput} allocation scheme achieved the best performance in terms of the average UEs rate, and the sum rate of the UEs belongs to set $\mathcal{K}_2$. However, in terms of the worst UE performance, the \textit {max min R} allocation scheme significantly improves the rate of the UE with the weakest channel gain, and it substantially outperforms the \textit{max throughput} and the \textit{proportional allocation} schemes. Note that the improvement in the worst UE rate leads to the degradation of the sum and average UE rates. Since the \textit{max min R} scheme distributes the system resources fairly and maximizes the fairness among all the UEs, it results in performance loss for the whole system. 
It is worth noting that performance enhancement for the worst UE rate by \textit{max min R} scheme is about 15\%, while the degradation in terms of the average rate or the throughput is less than 1\% to the optimized \textit{max throughput} allocation scheme.
\subsubsection{RIS units minimization based allocation}
Fig. \ref{fig:R_min_vs_N} shows the variation of the minimum number of RIS units required with the different values of the minimum rate requirements of the UE. The number of RIS units and power $P_{k m}^{\rm CS}$ corresponding to all $\mathcal{K}_2$ UEs that satisfy the minimum rate requirements are obtained by solving the problem \eqref{eq:min_N_2}. It can be observed that an almost linear relationship exists between the rate requirement and the required minimum number of RIS units. Moreover, by doubling the rate required for the UEs, the RIS unit requirement is increased by 100\%. Fig. \ref{fig:R_min_vs_N} also shows the relationship between the different values of maximum transmit power available at the \ac{CS} $P_{max}^{\rm CS}$ and the optimized number of RIS units. It can be observed that by increasing $P_{max}^{\rm CS}$ by 1 dB, the minimum required the number of RIS units is reduced by about 11\%.
\begin{figure} \centering \includegraphics[scale=0.6]{figures/min_N_allocation_dif_PCS.pdf} \caption{Relation between $R_{min}$ for $\mathcal{K}_2$ UEs and optimized total required RIS units.} \label{fig:R_min_vs_N} \end{figure}
\end{comment}
\subsection{Resource-efficiency maximization}
Fig. \ref{fig:max_UES_minPCS} plots the normalized RE obtained using Algorithm~\ref{Algo1} (on the left-hand side y-axis) and the percentage of connected UEs (on the right-hand side y-axis) versus different values of the minimum rate requirement $R_{\rm min}$. It also compares the performance of Algorithm~\ref{Algo1} with the benchmark approach. The maximum number of RIS units mounted on the HAPS is set to $N_{\rm max}=220,000$ units. It can be observed that as the QoS requirement (represented by $R_{\rm min}$) increases, the percentage of connected UEs and the RE drop. However, this performance degradation is more significant in terms of the percentage of connected UEs than in terms of the RE. Furthermore, we observe that the RE obtained using Algorithm~\ref{Algo1} significantly outperforms the one obtained using the benchmark approach. This is due to the fact that Algorithm~\ref{Algo1} optimizes the allocation of both power and RIS units to the UEs.
\begin{comment}
\begin{figure} \centering \includegraphics[scale=0.6]{figures/max_UEs_connectedRatio_bench_vs_Algo_normalized.pdf} \caption{RIS-efficiency of connected UEs for different $R_{\rm min}$.} \label{fig:max_UEs_connectedRatio_bench_vs_Algo} \end{figure}
\end{comment}
\begin{comment}
Fig.~\ref{fig:max_UES_minPCS} plots and compares normalized efficiency of the power-efficient allocation and the benchmark approach for varying $R_{\rm min}$ requirements. The power-efficient allocation is obtained by solving \eqref{eq:power-efficiency} using Algorithm~\ref{Algo1}. We observe that the proposed approach is always more power-efficient than the benchmark approach However, the percentage of connected UEs and the power efficiency decreases when the QoS requirements is high for all UEs.
\end{comment}
}
\begin{figure} \centering \includegraphics[scale=0.6]{max_UES_minP_total_normalized_modified.pdf} \caption{Resource-efficiency performance of connected UEs for different $R_{\rm min}$.} \label{fig:max_UES_minPCS} \end{figure}
\subsection{Percentage of connected UEs}
\begin{figure} \centering \includegraphics[scale=0.6]{max_UEs_connected_bench_vs_Algo.pdf} \caption{Maximizing connected UEs performance for different $N_{\rm max}$.} \label{fig:max_UES_algo} \end{figure}
\begin{figure} \centering \includegraphics[scale=0.6]{max_UEs_connected_bench_vs_Algo_different_PCS_max.pdf} \caption{Maximizing connected UEs performance for different $P_{\rm max}^{\rm CS}$.} \label{fig:max_UES_algo_diff_PCS} \end{figure}
Figs. \ref{fig:max_UES_algo} and \ref{fig:max_UES_algo_diff_PCS} plot and compare the percentage of connected UEs obtained through Algorithm~\ref{Algo1}, the benchmark approach, and the \textit{within-cell} communication approach for different values of the number of RIS units $N_{\rm max}$ available at the HAPS, and the maximum power $P_{\rm max}^{\rm CS}$ available at the \ac{CS}, respectively. In Fig. \ref{fig:max_UES_algo}, to study the impact of $N_{\rm max}$, we consider only the second term (i.e., $P_{\rm RIS}N_k$) of the objective function in (\ref{eq:min_N_modified_a}). The range of $N_{\rm max}$ is set between 10,000 and 220,000 units. This range corresponds to a total RIS area between 9 $\rm m^2$ and 198 $\rm m^2$ at a carrier frequency of 2 GHz\footnote{This represents a limited area on a typical HAPS surface, as the length of an airship is between 100 and 200 m, whereas aerodynamic HAPS have wingspans between 35 and 80 m. The size of each RIS unit is $(0.2\lambda)^2$ \cite{kurt2021vision}.}. In Fig. \ref{fig:max_UES_algo_diff_PCS}, only the first term of (\ref{eq:min_N_modified_a}) (i.e., $P_{km}^{\rm CS}$) is considered to study the effect of $P_{\rm max}^{\rm CS}$ on the percentage of connected UEs. The maximum transmit power of the CS is varied between 30 dBm and 35 dBm, and $N_{\rm max}$ is set to 150,000 units. It can be observed from the figures that the percentage of connected UEs increases with the maximum power of the \ac{CS}, $P_{\rm max}^{\rm CS}$, and the maximum number of RIS units, $N_{\rm max}$ (or the size of the HAPS). These behaviours are intuitive, as making more system resources available ($P_{\rm max}^{\rm CS}$ and $N_{\rm max}$) allows more stranded users to be served by the \ac{CS} via HAPS-RIS. Figs. \ref{fig:max_UES_algo} and \ref{fig:max_UES_algo_diff_PCS} also show that the performance of the proposed approach is 1--3\% higher than that of the benchmark approach. Moreover, $P_{\rm max}^{\rm CS}$ has a more significant impact than $N_{\rm max}$ on the percentage of connected UEs. By increasing $P_{\rm max}^{\rm CS}$ by 2 dB, the percentage of connected UEs increases by about 4\%, whereas a doubling of the RIS size is needed to achieve the same increase in the percentage of connected UEs. Furthermore, it can be observed from the figures that 76\% of the UEs are served through the \textit{within-cell} communication approach, and the \ac{CS} supports the remaining UEs via HAPS-RIS. Hence, \textit{beyond-cell} communication via HAPS-RIS is able to complement TNs.
\section{Conclusion}\label{Sec:conclstion}
\textcolor{black}{In this letter, we investigated the resource optimization of \textit{beyond-cell} communications that use HAPS-RIS technology to complement TNs by supporting unserved UEs.
In particular, given the limitations of the CS power and HAPS-RIS size, it might not be feasible to support all unserved UEs. Therefore, we formulated a novel resource-efficiency optimization problem that jointly maximizes the percentage of connected UEs and minimizes the consumed \ac{CS} power and RIS units. The results show the capability of the \textit{beyond-cell} communication approach to support a larger number of UEs. Further, the results show the superiority of the proposed solution over the benchmark approach and demonstrate the impact of the HAPS size and the QoS requirement on the percentage of connected UEs and the efficiency of the system.
}
\bibliographystyle{IEEEtran}
\section{Proof of Proposition~\ref{perp-path0}} \label{multi}
The purpose of this section is to lift the framework to \imp{multi-designs}, in order to prove properties of the path recording the interaction between multi-designs (thus in particular, between designs). We show:
\begin{itemize}
\item the existence and uniqueness of the interaction path between two orthogonal multi-designs (Proposition~\ref{inter-unique}),
\item the equivalence between the existence of such a path and the orthogonality of two multi-designs (Proposition~\ref{perp-path}, a generalisation of Proposition~\ref{perp-path0}),
\item an associativity theorem for paths (Proposition~\ref{asso-path}).
\end{itemize}
These results are needed for the next section. Their proofs require a lot of supplementary formalism, so a reader who is already intuitively convinced may jump directly to the next section.
\subsection{Multi-Designs}
The notion of \imp{multi-design} introduced below generalises that of \imp{anti-design} given by Terui \cite{Terui}; in particular, it generalises designs. Interaction between two \imp{compatible} multi-designs $\design D$ and $\design E$ corresponds to eliminating the cuts in another multi-design $\cut{\design D}{\design E}$. Several well-known notions of Ludics can be extended to this setting.
\begin{definition} \label{multi-des} \
\begin{itemize}
\item A \defined{negative multi-design} is a set $\{(x_1, \design n_1), \dots, (x_n, \design n_n)\}$ where $x_1 , \dots , x_n$ are distinct variables and $\design n_1, \dots, \design n_n$ are negative designs, such that for all ${1 \le i \le n}$, $\mathrm{fv}(\design n_i) \cap \{x_1 , \dots , x_n\} = \emptyset$, and for all $j \neq i$, $\mathrm{fv}(\design n_i) \cap \mathrm{fv}(\design n_j) = \emptyset$.
\item A \defined{positive multi-design} is a set $\{\design p, (x_1, \design n_1), \dots, (x_n, \design n_n)\}$ where $\{(x_1, \design n_1), \dots, (x_n, \design n_n)\}$ is a negative multi-design and $\design p$ is a positive design such that $\mathrm{fv}(\design p) \cap \{x_1, \dots, x_n\} = \emptyset$, and for all $1 \le i \le n$, $\mathrm{fv}(\design p) \cap \mathrm{fv}(\design n_i) = \emptyset$.
\end{itemize}
\end{definition}
We will use $\design D, \design E, \dots$ to denote multi-designs of any polarity, $\design M, \design N, \dots$ for negative ones and $\design P, \design Q, \dots$ for positive ones. A pair $(x, \design n)$ in a multi-design will be denoted by $\design n/x$ or $(\design n/x)$; hence a negative multi-design will be written $\{\design n_1/x_1, \dots, \design n_n/x_n\}$ (or even $\{\vect{\design n/x}\}$), a positive one $\{\design p, \design n_1/x_1, \dots, \design n_n/x_n\}$, and we will write $(\design n/x) \in \design D$ instead of $(x, \design n) \in \design D$. This notation parallels the one for substitution: if $\design N = \{\design n_1/x_1, \dots, \design n_n/x_n\}$ and $\design d$ is a design, then we allow ourselves to write $\design d[\design N]$ for the substitution $\design d[\design n_1/x_1, \dots, \design n_n/x_n]$. By abuse of notation, we may also write $\design n \in \design D$ when the variable associated with $\design n$ in the multi-design $\design D$ does not matter; thus when writing ``let $\design d \in \design D$'', the design $\design d$ can be either positive, or negative and associated with a variable in $\design D$.
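As a schematic illustration of the conditions of Definition~\ref{multi-des} (the designs $\design n_1, \design n_2$ and the variables below are placeholders, not objects used later): if $\design n_1$ and $\design n_2$ are negative designs with $\mathrm{fv}(\design n_1) = \{z\}$ and $\mathrm{fv}(\design n_2) = \emptyset$, then $\design N = \{\design n_1/x_1, \design n_2/x_2\}$ is a negative multi-design, since $z \notin \{x_1, x_2\}$ and $\mathrm{fv}(\design n_1) \cap \mathrm{fv}(\design n_2) = \emptyset$; on the contrary, $\{\design n_1/z, \design n_2/x_2\}$ is not one, because the variable $z$ both receives $\design n_1$ and occurs free in it. Moreover, for a design $\design d$ with $x_1, x_2 \in \mathrm{fv}(\design d)$, the notation above gives $\design d[\design N] = \design d[\design n_1/x_1, \design n_2/x_2]$.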
A design can be viewed as a multi-design: a positive design $\design p$ corresponds to the positive multi-design $\{\design p\}$, and a negative design $\design n$ to the negative multi-design $\{\design n/x_0\}$, where $x_0$ is the same distinguished variable we introduced for atomic designs. Notations $\design p$ and $\design n$ will be used instead of $\{\design p\}$ and $\{\design n/x_0\}$ respectively. Note that if $\design D$ and $\design E$ are multi-designs, $\design D \cup \design E$ is not always a multi-design. \begin{definition} \label{multi-normal} Let $\design D$ be a multi-design. Its \defined{normal form} is the cut-free multi-design defined by \[\normalisation{\design D} = \setst{(\normalisation{\design n}/x)}{(\design n/x) \in \design D} \cup \setst{\normalisation{\design p}}{\design p \in \design D}\] \end{definition} \begin{definition} Let $\design D$ be a multi-design. \begin{itemize} \item The \defined{free variables} of $\design D$ are $\mathrm{fv}(\design D) = \bigcup_{\design d \in \design D}\mathrm{fv}(\design d)$ \item The \defined{negative places} of $\design D$ are $\mathrm{np}(\design D) = \setst{x}{\exists \design n \ (\design n/x) \in \design D}$. \end{itemize} \end{definition} In Definition~\ref{multi-des}, the condition ``for all ${1 \le i \le n}$, $\mathrm{fv}(\design n_i) \cap \{x_1 , \dots , x_n\} = \emptyset$'' (adding the similar condition for $\design p$ in the positive case) can thus be rephrased as ``$\mathrm{fv}(\design D) \cap \mathrm{np}(\design D) = \emptyset$''. When two multi-designs $\design D$ and $\design E$ interact, this condition will ensure that a substitution specified in $\design D$ or in $\design E$ creates a cut between a design from $\design D$ and a design from $\design E$, and never between two designs on the same side. This is exactly the form of interaction we want in the following: an interaction with two distinct sides. But in order to talk about interaction between two multi-designs, we must first determine when two multi-designs are \imp{compatible}, i.e., when we can define substitution between them in a unique way, without ambiguity, which is not the case in general. \begin{definition} Let $\design D$ and $\design E$ be multi-designs. \begin{itemize} \item $\design D$ and $\design E$ are \defined{compatible} if they satisfy the following conditions: \begin{itemize} \item $\mathrm{fv}(\design D) \cap \mathrm{fv}(\design E) = \mathrm{np}(\design D) \cap \mathrm{np}(\design E) = \emptyset$ \item either they are both negative and there exists $x \in \mathrm{np}(\design D) \cup \mathrm{np}(\design E)$ such that $x \notin \mathrm{fv}(\design D) \cup \mathrm{fv}(\design E)$, or they are of opposite polarities \end{itemize} \item $\design D$ and $\design E$ are \defined{closed-compatible} if they are of opposite polarities, compatible, and satisfying $\mathrm{fv}(\design D) = \mathrm{np}(\design E)$ and $\mathrm{fv}(\design E) = \mathrm{np}(\design D)$. \end{itemize} \end{definition} Intuitively, \imp{compatible} means that we are able to define the multi-design of the interaction between $\design D$ and $\design E$, and \imp{closed-compatible} means that this multi-design is a closed design. \begin{definition} \label{cut} Let $\design D$ and $\design E$ be compatible multi-designs. 
$\design{Cut}_{\design D|\design E}$ is a multi-design defined by induction on the number of designs in $\design E$ by: \begin{align} \label{1} \design{Cut}_{\design D|\emptyset} & = \design D \\ \label{2} \design{Cut}_{\design D|\design E' \cup \{\design p\}} & = \design{Cut}_{(\design D \setminus S) \cup \{\design p[S]\}|\design E'} \\ \label{3} \design{Cut}_{\design D|\design E' \cup \{\design n/x\}} & = \design{Cut}_{(\design D \setminus S) \cup \{\design n[S]/x\}|\design E'} & \mbox{ if } x \notin \mathrm{fv}(\design D) \\ \label{4} \design{Cut}_{\design D|\design E' \cup \{\design n/x\}} & = \design{Cut}_{(\design D \setminus S)[\design n[S]/x]|\design E'} & \mbox{ if } x \in \mathrm{fv}(\design D) \end{align} \begin{tabular}{ll} where $S$ & $= \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(\design p)}$ in (\ref{2}) \\ & $= \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(\design n)}$ in (\ref{3}) and (\ref{4}). \end{tabular} \end{definition} The successive pairs of compatible (resp. closed-compatible) multi-designs stay compatible (resp. closed-compatible) after one step of the definition, thus this is well defined. Moreover, if $\design D$ and $\design E$ are closed-compatible then, according to the base case, $\design{Cut}_{\design D|\design E}$ will be a closed design. In particular, if $\design p$ and $\design n$ are atomic designs then $\design{Cut}_{\design p|\design n} = \design p[\design n/x_0]$. In order to prove an \imp{associativity} theorem for multi-designs, recall first the original theorem on designs: \begin{theorem}[Associativity] \label{assoc} Let $\design d$ be a design and $\design n_1, \dots, \design n_k$ be negative designs. \[\normalisation{\design d[\design n_1/y_1, \dots, \design n_k/y_k]} = \normalisation{\normalisation{\design d}[\normalisation{\design n_1}/y_1, \dots, \normalisation{\design n_k}/y_k]}.\] \end{theorem} This result was first established by Girard \cite{Girard1}. The theorem, in the form given above, was proved by Basaldella and Terui \cite{BT}. Associativity naturally extends to multi-designs as follows: \begin{theorem}[Multi-associativity] \label{assoc2} Let $\design D$ be a multi-design and $\design n_1, \dots, \design n_k$ be negative designs. \[\normalisation{\design D[\design n_1/y_1, \dots, \design n_k/y_k]} = \normalisation{\normalisation{\design D}[\normalisation{\design n_1}/y_1, \dots, \normalisation{\design n_k}/y_k]}.\] \end{theorem} \begin{proof} Immediate from the definition of the normal form of a multi-design (Definition~\ref{multi-normal}) and simple associativity (Theorem~\ref{assoc}). \end{proof} \begin{corollary} \label{multi-assoc} \[\normalisation{\cut{\design D}{\design E}} = \normalisation{\cut{\normalisation{\design D}}{\normalisation{\design E}}}\] \end{corollary} \begin{proof} By induction on $\design E$: \begin{itemize} \item If $\design E = \emptyset$ then $\normalisation{\design{Cut}_{\design D|\emptyset}} = \normalisation{\design D} = \normalisation{\cut{\normalisation{\design D}}{\emptyset}} = \normalisation{\cut{\normalisation{\design D}}{\normalisation{\emptyset}}}$. \item If $\design E = \design E' \cup \{\design p\}$, let $S = \{\design m_1/y_1, \dots, \design m_k/y_k\} = \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(\design p)}$. 
By the definitions of the normal form of multi-designs (Definition~\ref{multi-normal}) and of $\cut{.}{.}$ (Definition~\ref{cut}), and using associativity (Theorem~\ref{assoc2}), we have:
\begin{align*}
\normalisation{\design{Cut}_{\design D|\design E}} & = \normalisation{\design{Cut}_{(\design D \setminus S) \cup \{\design p[S]\}|\design E'}} & \mbox{by Def.~\ref{cut}} \\
& = \normalisation{\design{Cut}_{\normalisation{(\design D \setminus S) \cup \{\design p[S]\}}|\normalisation{\design E'}}} & \mbox{ by induction hypothesis}\\
& = \normalisation{\design{Cut}_{\normalisation{\design D \setminus S} \cup \{\normalisation{\normalisation{\design p}[\normalisation{\design m_1}/y_1, \dots, \normalisation{\design m_k}/y_k]}\}|\normalisation{\design E'}}} & \mbox{by Def.~\ref{multi-normal} and Thm.~\ref{assoc2}} \\
& = \normalisation{\design{Cut}_{\normalisation{\normalisation{\design D \setminus S} \cup \{\normalisation{\design p}[\normalisation{\design m_1}/y_1, \dots, \normalisation{\design m_k}/y_k]\}}|\normalisation{\design E'}}} & \mbox{by Def.~\ref{multi-normal}} \\
& = \normalisation{\design{Cut}_{\normalisation{\design D \setminus S} \cup \{\normalisation{\design p}[\normalisation{\design m_1}/y_1, \dots, \normalisation{\design m_k}/y_k]\}|\normalisation{\design E'}}} & \mbox{ by induction hypothesis}\\
& = \normalisation{\cut{\normalisation{\design D}}{\normalisation{\design E}}} & \mbox{by Def.~\ref{multi-normal} and \ref{cut}}
\end{align*}
\item If $\design E = \design E' \cup \{\design n/x\}$ with $x \notin \mathrm{fv}(\design D)$, the reasoning is similar to the case above, with $S = \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(\design n)}$.
\item If $\design E = \design E' \cup \{\design n/x\}$ with $x \in \mathrm{fv}(\design D)$, let $S = \{\design m_1/y_1, \dots, \design m_k/y_k\} = \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(\design n)}$. We have:
\begin{align*}
\normalisation{\design{Cut}_{\design D|\design E}} & = \normalisation{\design{Cut}_{(\design D \setminus S)[\design n[S]/x]|\design E'}} & \mbox{by Def.~\ref{cut}} \\
& = \normalisation{\design{Cut}_{\normalisation{(\design D \setminus S)[\design n[S]/x]}|\normalisation{\design E'}}} & \mbox{ by induction hypothesis}\\
& = \normalisation{\design{Cut}_{\normalisation{\normalisation{\design D \setminus S}[\normalisation{\normalisation{\design n}[\normalisation{\design m_1}/y_1, \dots, \normalisation{\design m_k}/y_k]}/x]}|\normalisation{\design E'}}} & \mbox{by using Thm.~\ref{assoc2} twice} \\
& = \normalisation{\design{Cut}_{\normalisation{\normalisation{\design D \setminus S}[\normalisation{\design n}[\normalisation{\design m_1}/y_1, \dots, \normalisation{\design m_k}/y_k]/x]}|\normalisation{\design E'}}} & \mbox{by Thm.~\ref{assoc2}} \\
& = \normalisation{\design{Cut}_{\normalisation{\design D \setminus S}[\normalisation{\design n}[\normalisation{\design m_1}/y_1, \dots, \normalisation{\design m_k}/y_k]/x]|\normalisation{\design E'}}} & \mbox{ by induction hypothesis}\\
& = \normalisation{\cut{\normalisation{\design D}}{\normalisation{\design E}}} & \mbox{by Def.~\ref{multi-normal} and \ref{cut}}
\end{align*}
\end{itemize}
\end{proof}
\begin{lemma} \label{cut-commut}
$\design{Cut}_{\design D|\design E} = \design{Cut}_{\design E|\design D}$.
\end{lemma}
\begin{proof}
By induction on the number $n$ of variables in $(\mathrm{fv}(\design D) \cap \mathrm{np}(\design E)) \cup (\mathrm{fv}(\design E) \cap \mathrm{np}(\design D))$.
\begin{itemize} \item If $n = 0$ then $\design{Cut}_{\design D|\design E} = \design{Cut}_{\design E|\design D} = \design D \cup \design E$. \item Let $n > 0$ and suppose the property is satisfied for all $k < n$. Without loss of generality suppose there exists $x \in (\mathrm{fv}(\design D) \cap \mathrm{np}(\design E))$. Thus $\design E$ is of the form $\design E = \design E' \cup \{\design n/x\}$. Let $S = \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(\design n)}$. \begin{itemize} \item If $S = \emptyset$, let $\design d \in \design D$ be the design such that $x \in \mathrm{fv}(\design d)$, and let us write $\design D = \design D' \cup \{\design d\}$. If $\design d$ is positive then: \begin{align*} \cut{\design D}{\design E} & = \cut{\design D' \cup \{\design d[\design n/x]\}}{\design E'} & \mbox{ by one step~\ref{4} of Def.~\ref{cut}} \\ & = \cut{\design E'}{\design D' \cup \{\design d[\design n/x]\}} & \mbox{ by induction hypothesis} \\ & = \cut{(\design E'\setminus T') \cup \{\design d[\design n/x, T']\}}{\design D'} & \mbox{ by one step~\ref{2} of Def.~\ref{cut}} \end{align*} where $T' = \setst{(\design m/y) \in \design E'}{y \in \mathrm{fv}(\design d[\design n/x])}$. Let $T = \setst{(\design m/y) \in \design E}{y \in \mathrm{fv}(\design d)}$, we have $T = T' \cup \{\design n/x\}$, indeed: $\mathrm{fv}(\design d[\design n/x]) = (\mathrm{fv}(\design d) \setminus \{x\}) \cup \mathrm{fv}(\design n)$, where $\mathrm{fv}(\design n) \cap \mathrm{np}(\design E) = \emptyset$ by definition of a multi-design, thus also $\mathrm{fv}(\design n) \cap \mathrm{np}(\design E') = \emptyset$. Therefore: \[\cut{(\design E'\setminus T') \cup \{\design d[\design n/x, T']\}}{\design D'} = \cut{(\design E\setminus T) \cup \{\design d[T]\}}{\design D'} = \cut{\design E}{\design D}\] by one step~\ref{2} of Def.~\ref{cut} backwards, hence the result. The reasoning is similar if $\design d$ is negative and $\design D = \design D' \cup \{\design d/y\}$, we just have to distinguish between the cases $y \in \mathrm{fv}(\design E')$ and $y \notin \mathrm{fv}(\design E')$. \item Otherwise, let $S' = \setst{(\design m/y) \in \design E}{y \in \mathrm{fv}(S)}$ and $S'' = \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(S')}$; note that $S' \subseteq \design E'$ and $S'' \subseteq (\design D \setminus S)$. We have: \begin{align*} \cut{\design E}{\design D} & = \cut{(\design E'\setminus S') \cup \{\design n[S[S']]\}}{\design D \setminus S} & \mbox{ by several steps~\ref{4} of Def.~\ref{cut}} \\ & = \cut{\design D \setminus S}{(\design E'\setminus S') \cup \{\design n[S[S']]\}} & \mbox{ by induction hypothesis, since } S \neq \emptyset \\ & = \cut{(\design D \setminus (S \cup S''))[\design n[S[S'[S'']]]/x]}{\design E'\setminus S'} & \mbox{ by one step~\ref{4} of Def.~\ref{cut}} \\ & = \cut{\design D}{\design E} & \mbox{ by steps~\ref{4} of Def.~\ref{cut} backwards} \end{align*} The last equality is obtained by moving successively, from left to right, all the designs from $S'$, and finally the design $\design n$. \end{itemize} \end{itemize} \end{proof} \begin{lemma} \label{cutcut} Let $\design D_1$, $\design D_2$ and $\design E$ be multi-designs such that $\design D_1 \cup \design D_2$ is a multi-design, and $\design E$ is compatible with $\design D_1 \cup \design D_2$. 
We have: \[ \cut{\design D_1 \cup \design D_2}{\design E} = \cut{\design D_1}{\cut{\design E}{\design D_2}} \] \end{lemma} \begin{proof} By induction on $\design D_2$: \begin{itemize} \item If $\design D_2 = \emptyset$ then $\cut{\design E}{\design D_2} = \design E$ hence the result. \item If $\design D_2 = \design D_2' \cup \{\design p\}$ then $\cut{\design E}{\design D_2} = \cut{(\design E \setminus S) \cup \{\design p[S]\}}{\design D_2'}$ where $S = \setst{(\design m/y) \in \design E}{y \in \mathrm{fv}(\design p)}$. Thus by induction hypothesis: \begin{align*} \cut{\design D_1}{\cut{\design E}{\design D_2}} & = \cut{\design D_1 \cup \design D_2'}{(\design E \setminus S) \cup \{\design p[S]\}} & \\ & = \cut{((\design D_1 \cup \design D_2')\setminus S') \cup \{\design p[S[S']]\}}{\design E \setminus S} & \mbox{ by one step~\ref{2} of Def.~\ref{cut}} \\ & = \cut{((\design D_1 \cup \design D_2) \setminus S')[S[S']]}{\design E \setminus S} & \end{align*} where $S' = \setst{(\design m/y) \in (\design D_1 \cup \design D_2)}{y \in \mathrm{fv}(S)}$. Finally, by several steps~\ref{4} backwards of Definition~\ref{cut}, this is equal to $\cut{\design D_1 \cup \design D_2}{\design E}$. \item If $\design D_2 = \design D_2' \cup \{\design n/x\}$ and $x \notin \mathrm{fv}(\design E)$, then similar to previous case. \item If $\design D_2 = \design D_2' \cup \{\design n/x\}$ and $x \in \mathrm{fv}(\design E)$, then $\cut{\design E}{\design D_2} = \cut{(\design E \setminus S)[\design n[S]/x]}{\design D_2'}$ where $S = \setst{(\design m/y) \in \design E}{y \in \mathrm{fv}(\design n)}$. Thus by induction hypothesis: \begin{align*} \cut{\design D_1}{\cut{\design E}{\design D_2}} & = \cut{\design D_1 \cup \design D_2'}{(\design E \setminus S)[\design n[S]/x]} & \\ & = \cut{(\design E \setminus S)[\design n[S]/x]}{\design D_1 \cup \design D_2'} & \mbox{ by Lemma~\ref{cut-commut}} \\ & = \cut{\design E}{\design D_1 \cup \design D_2} & \mbox{ by one step~\ref{4} backwards of Def.~\ref{cut}} \\ & = \cut{\design D_1 \cup \design D_2}{\design E} & \mbox{ by Lemma~\ref{cut-commut}} \end{align*} \end{itemize} \end{proof} We now extend the notion of orthogonality to multi-designs. \begin{definition} \label{ortho} Let $\design D$ and $\design E$ be closed-compatible multi-designs. $\design D$ and $\design E$ are \defined{orthogonal}, written $\design D \perp \design E$, if $\design{Cut}_{\design D|\design E} \Downarrow \maltese$. \end{definition} \subsection{Paths and Multi-Designs} Recall that we write $\epsilon$ for the empty sequence. \begin{definition} Let $\design D$ be a multi-design. \begin{itemize} \item A \defined{view of} $\design D$ is a view of a design in $\design D$. \item A \defined{path of} $\design D$ is a path $\pathLL s$ of same polarity as $\design D$ such that for all prefix $\pathLL s'$ of $\pathLL s$, $\view{\pathLL s'}$ is a view of $\design D$. \end{itemize} \end{definition} We are now interested in a particular form of closed interactions, where we can identify two sides of the multi-design: designs are separated into two groups such that there are no cuts between designs of the same group. This corresponds exactly to the interaction between two closed-compatible multi-designs. \begin{definition} \label{inter-path-multi} Let $\design D$ and $\design E$ be closed-compatible multi-designs such that $\design D \perp \design E$. The \defined{interaction path} of $\design D$ with $\design E$ is the unique path $\pathLL s$ of $\design D$ such that $\dual{\pathLL s}$ is a path of $\design E$. 
\end{definition} But nothing ensures the existence and uniqueness of such a path: this will be proved in the rest of this subsection. We will moreover show that, if $\design D \perp \design E$, this path corresponds to the \imp{interaction sequence} defined below. For the purpose of giving an inductive definition of the interaction sequence, we define it not only for a pair of closed-compatible multi-designs but for a larger class of pairs of multi-designs. \begin{definition} \label{inter-seq} Let $\design D$ and $\design E$ be multi-designs of opposite polarities, compatible, and satisfying $\mathrm{fv}(\design D) \subseteq \mathrm{np}(\design E)$ and $\mathrm{fv}(\design E) \subseteq \mathrm{np}(\design D)$. The \defined{interaction sequence} of $\design D$ with $\design E$, written $\interseq{\design D}{\design E}$, is the sequence of actions followed by interaction on the side of $\design D$. More precisely, if we write $\design p$ for the only positive design of $\design D \cup \design E$, the interaction sequence is defined recursively by: \begin{itemize} \item If $\design p = \maltese$ then: \begin{itemize} \item $\interseq{\design D}{\design E} = \maltese$ if $\maltese \in \design D$ \item $\interseq{\design D}{\design E} = \epsilon$ if $\maltese \in \design E$. \end{itemize} \item If $\design p = \Omega$ then $\interseq{\design D}{\design E} = \epsilon$. \item If $\design p = \posdes{x}{a}{\vect{\design m}}$ then there exists $\design n$ such that $(\design n/x) \in \design E$ if $\design p \in \design D$, $(\design n/x) \in \design D$ otherwise. Let us write $\design n = \negdes{b}{\vect{y^b}}{\design p_b}$. We have $\interseq{\design D}{\design E} = \kappa \interseq{\design D'}{\design E'}$ where: \begin{itemize} \item if $\design p \in \design D$, then $\kappa = \posdes{x}{a}{\vect{y^a}}$, $\design D' = (\design D \setminus \{\design p\}) \cup \{\vect{\design m/y^a}\}$ and $\design E' = {(\design E \setminus \{\design n/x\}) \cup \{\design p_a\}}$. \item otherwise $\kappa = a_x(\vect{y^a})$, $\design D' = (\design D \setminus \{\design n/x\}) \cup \{\design p_a\}$ and $\design E' = (\design E \setminus \{\design p\}) \cup \{\vect{\design m/y^a}\}$. \end{itemize} \end{itemize} \end{definition} Note that this applies in particular to two closed-compatible multi-designs. Remark also that this definition follows exactly the interaction between $\design D$ and $\design E$: indeed, in the inductive case of the definition, the multi-designs $\design D'$ and $\design E'$ are obtained from $\design D$ and $\design E$ similarly to the following lemma. In particular the interaction sequence is finite whenever the interaction between $\design D$ and $\design E$ is finite. \begin{lemma} \label{lem-norm} Let $\design D$ and $\design E$ be closed-compatible multi-designs of opposite polarities. Suppose the only positive design $\design p \in \design D$ is of the form $\design p = \posdes{x}{a}{\vect{\design n}}$, and suppose moreover there exists $\design n_0$ such that $(\design n_0/x) \in \design E$, say $\design n_0 = \negdes{b}{\vect{x^b}}{\design p_b}$. Then: \[\cut{\design D}{\design E} \leadsto \cut{\design D'}{\design E'} \setminus \setst{(\design m/x_i^a)}{x_i^a \notin \mathrm{fv}(\design p_a)} \] where $\design D' = (\design D \setminus \{\design p\}) \cup \{\vect{\design n/x^a}\}$ and $\design E' = (\design E \setminus \{\design n_0/x\}) \cup \{\design p_a\}$. 
\end{lemma} \begin{proof} First notice that as $\design D$ and $\design E$ are closed-compatible, $\cut{\design D}{\design E}$ is a design, and since this design has cuts we can indeed apply one step of reduction to it. Let $S' = \setst{(\design m/x_i^a)}{x_i^a \notin \mathrm{fv}(\design p_a)})$. We have to prove $\cut{\design D}{\design E} \leadsto \cut{\design D'}{\design E'} \setminus S'$. The proof is done by induction on the number of designs in $\design E$. \begin{itemize} \item If $\design E = \{\design n_0/x\}$, then $\design E' = \{\design p_a\}$. In this case let $S= \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(\design n_0)}$, and remark that, as $\design E$ and $\design D$ are closed-compatible, $S = \design D \setminus \{\design p\}$. Thus: \begin{align*} \cut{\design D}{\design E} & = \cut{(\design D \setminus S)[\design n_0[S]/x]}{\emptyset} & \mbox{ by one step~\ref{4} of Def.~\ref{cut}} \\ & = \design p[\design n_0[S]/x] \\ & \leadsto \design p_a[S][\vect{\design n/x^a}] \\ & = \design p_a[\design D'] \\ & = \{\design p_a[\design D' \setminus S'_0]\} \cup S'_0 \setminus S' & \mbox{where } S'_0 = \proj{S'}{\design D'} \\ & = \cut{S'_0 \cup \{\design p_a[\design D' \setminus S'_0]\}}{\emptyset} \setminus S'\\ & = \cut{\design D'}{\design p_a} \setminus S' & \mbox{ by one step~\ref{2} backwards of Def.~\ref{cut}} \\ & = \cut{\design D'}{\design E'} \setminus S' \end{align*} \item Otherwise there exists $(\design n_1/z) \in \design E$ such that $x \neq z$. Suppose $z \notin \mathrm{fv}(\design D)$ (resp. $z \in \mathrm{fv}(\design D)$). Define: \begin{itemize} \item $S = \setst{(\design m/y) \in \design D}{y \in \mathrm{fv}(\design n_1)}$, and remark that $S = \setst{(\design m/y) \in \design D'}{y \in \mathrm{fv}(\design n_1)}$. \item $\design D'' = (\design D' \setminus S)\cup \{(\design n_1[S]/z)\}$ (resp. $\design D'' = (\design D' \setminus S)[\design n_1[S]/z]$) \item $\design E'' = \design E' \setminus \{(\design n_1/z)\}$. \end{itemize} We have: \begin{align*} \cut{\design D}{\design E} & = \cut{(\design D \setminus S)\cup \{(\design n_1[S]/z)\}}{\design E \setminus \{\design n_1/z\}} & \mbox{by one step~\ref{3} of Def.~\ref{cut}} \\ \mbox{( \hspace{.2cm} resp. } & = \cut{(\design D \setminus S)[(\design n_1[S]/z)]}{\design E \setminus \{\design n_1/z\}} & \mbox{by one step~\ref{4} of Def.~\ref{cut} \hspace{.2cm})} \\ & \leadsto \cut{\design D''}{\design E''} \setminus S' & \mbox{by induction hypothesis} & \\ & = \cut{\design D'}{\design E'} \setminus S' & \mbox{by step~\ref{3} (resp. \ref{4}) of Def.~\ref{cut} backwards} \end{align*} \end{itemize} \end{proof} \begin{lemma} \label{inter-dual} If $\maltese \in \normalisation{\cut{\design D}{\design E}}$ (in particular if $\design D \perp \design E$) then $\interseq{\design D}{\design E} = \dual{\interseq{\design E}{\design D}}$. Otherwise $\interseq{\design D}{\design E} = \overline{\interseq{\design E}{\design D}}$. \end{lemma} \begin{proof} It is clear from the definition of the interaction sequence that the proper actions in $\interseq{\design D}{\design E}$ are the opposite of those in $\interseq{\design E}{\design D}$. 
Concerning the daimon: since the interaction sequence follows the interaction between $\design D$ and $\design E$, $\maltese$ appears at the end of one of the sequences $\interseq{\design D}{\design E}$ or $\interseq{\design E}{\design D}$ if and only if $\maltese \in \normalisation{\cut{\design D}{\design E}}$, and in this case $\interseq{\design D}{\design E} = \dual{\interseq{\design E}{\design D}}$. \end{proof} \begin{proposition} \label{pref-inter-path} Every positive-ended prefix of $\interseq{\design D}{\design E}$ is a path of $\design D$. In particular, if $\interseq{\design D}{\design E}$ is finite and positive-ended then it is a path of $\design D$. \end{proposition} \begin{proof} First remark that every (finite) prefix of $\interseq{\design D}{\design E}$ is an aj-sequence. Indeed, since $\design D$ and $\design E$ are well shaped multi-designs the definition of interaction sequence ensures that an action cannot appear before its justification, and all the conditions of the definition of an aj-sequence are satisfied: \imp{Alternation} and \imp{Daimon} are immediate from the definition of interaction sequence, while \imp{Linearity} is indeed satisfied as variables are disjoint in $\design D$ and $\design E$ (by Barendregt's convention). By definition, for every prefix $\pathLL s$ of $\interseq{\design D}{\design E}$, $\view{\pathLL s}$ is a view. We show that it is a view of $\design D$ by induction on the length of $\pathLL s$: \begin{itemize} \item If $\pathLL s = \epsilon$ then $\view{\epsilon} = \epsilon$ is indeed a view of $\design D$. \item If $\pathLL s = \maltese$ then $\interseq{\design D}{\design E} = \maltese$. From the definition of interaction sequence, we know that in this case $\maltese \in \design D$, hence $\view{\maltese} = \maltese$ is a view of $\design D$. \item If $\pathLL s = \kappa\pathLL s'$ where $\kappa$ is proper, then $\interseq{\design D}{\design E} = \kappa\interseq{\design D'}{\design E'}$ where $\design D'$ and $\design E'$ are as in Definition~\ref{inter-seq}, and $\pathLL s'$ is a prefix of $\interseq{\design D'}{\design E'}$. By induction hypothesis, $\view{\pathLL s'}$ is a view of $\design D'$. Two possibilities: \begin{itemize} \item Either $\kappa = \kappa^+$ is positive. From the definition of the interaction sequence, this means $\design p := \posdes{x}{a}{\vect{\design m}} \in \design D$, $\kappa^+ = \posdes{x}{a}{\vect{y^a}}$ and $\design D' = (\design D \setminus \{\design p\}) \cup \{\vect{\design m/y^a}\}$. We have $\view{\pathLL s} = \view{\kappa^+ \pathLL s'}$ and either $\view{\kappa^+ \pathLL s'} = \kappa^+ \view{\pathLL s'}$ if the first negative action of $\view{\pathLL s'}$ is justified by $\kappa^+$ (i.e., $\exists i$ such that $\view{\pathLL s'}$ is a view of $\design m_i/y^a_i$), or $ \view{\kappa^+ \pathLL s'} = \view{\pathLL s'}$ otherwise (i.e., $\view{\pathLL s'}$ is a view of $\design D \setminus \{\design p\}$). In the second case, there is nothing more to show; in the first one, by definition of the views of a design, $\kappa^+ \view{\pathLL s'}$ is a view of $\design p = \posdes{x}{a}{\vect{\design m}}$. \item Or $\kappa = \kappa^-$ is negative. Hence there exists a design $\design n = \negdes{b}{\vect{y^b}}{\design p_b}$ such that $(\design n/x) \in \design D$, $\kappa^- = a_x(\vect{y^a})$, and $\design D' = (\design D \setminus \{\design n/x\}) \cup \{\design p_a\}$. 
We have $\view{\pathLL s} = \view{\kappa^- \pathLL s'}$ and either $\view{\kappa^- \pathLL s'} = \kappa^- \view{\pathLL s'}$ if the first action of $\view{\pathLL s'}$ is positive (i.e., $\view{\pathLL s'}$ is a view of $\design p_a$), or $ \view{\kappa^- \pathLL s'} = \view{\pathLL s'}$ otherwise (i.e., $\view{\pathLL s'}$ is a view of $\design D' \setminus \{\design p_a\} \subseteq \design D$). In the second case, there is nothing to do; in the first one, note that $\kappa^- \view{\pathLL s'}$ is a view of $(\design n/x)$, hence the result. \end{itemize} \end{itemize} We have proved that $\view{\pathLL s}$ is a view of $\design D$. This implies in particular that $\interseq{\design D}{\design E}$ satisfies P-visibility, indeed: given a prefix $\pathLL s \kappa^+$ of $\interseq{\design D}{\design E}$, the action $\kappa^+$ is either initial or it is justified in $\pathLL s$ by the same action that justifies it in $\design D$; since $\view{\pathLL s}$ is a view of $\design D$, the justification of $\kappa^+$ is in it, thus P-visibility is satisfied. Similarly, we can prove that $\view{\pathLL t}$ is a view of $\design E$ whenever $\pathLL t$ is a prefix of $\interseq{\design E}{\design D}$, therefore $\interseq{\design E}{\design D}$ also satisfies P-visibility; by Lemma~\ref{inter-dual} either $\interseq{\design E}{\design D} = \dual{\interseq{\design D}{\design E}}$ or $\interseq{\design E}{\design D} = \overline{\interseq{\design D}{\design E}}$, thus this implies that $\interseq{\design D}{\design E}$ satisfies O-visibility. Hence every positive-ended prefix of $\interseq{\design D}{\design E}$ is a path, and since the views of its prefixes are views of $\design D$, it is a path of $\design D$. \end{proof} \begin{remark}\label{differ-neg} If $\pathLL s\kappa_1^+$ and $\pathLL s\kappa_2^+$ are views (resp. paths) of a multi-design $\design D$ then $\kappa_1^+ = \kappa_2^+$. Indeed, if $\pathLL s\kappa_1^+$ and $\pathLL s\kappa_2^+$ are views of $\design D$, the result is immediate by definition of the views of a design; if they are paths of $\design D$, just remark that $\view{\pathLL s\kappa_1^+} = \view{\pathLL s}\kappa_1^+$ and $\view{\pathLL s\kappa_2^+} = \view{\pathLL s}\kappa_2^+$ are views of $\design D$, hence the conclusion. \end{remark} \begin{proposition} \label{prefix-norm} Suppose $\design D \perp \design E$, $\pathLL s$ is a path of $\design D$ and $\overline{\pathLL s}$ is a path of $\design E$. The path $\pathLL s$ is a prefix of $\interseq{\design D}{\design E}$. \end{proposition} \begin{proof} Suppose $\pathLL s$ is not a prefix of $\interseq{\design D}{\design E}$. Let $\pathLL t$ be the longest common prefix of $\pathLL s$ and $\interseq{\design D}{\design E}$ (possibly $\epsilon$). Without loss of generality, we can assume there exist actions of same polarity $\kappa_1$ and $\kappa_2$ such that $\kappa_1 \neq \kappa_2$, $\pathLL t \kappa_1$ is a prefix of $\pathLL s$ and $\pathLL t \kappa_2$ is a prefix of $\interseq{\design D}{\design E}$: indeed, if there are no such actions, it is because $\interseq{\design D}{\design E}$ is a strict prefix of $\pathLL s$; in this case, it suffices to consider $\interseq{\design E}{\design D}$ and $\overline{\pathLL s}$ instead. \begin{itemize} \item If $\kappa_1$ and $\kappa_2$ are positive, then $\pathLL t \kappa_1$ and $\pathLL t \kappa_2$ are paths of $\design D$, and by previous remark $\kappa_1 = \kappa_2$: contradiction. 
\item If $\kappa_1$ and $\kappa_2$ are negative, a contradiction arises similarly from the fact that $\overline{\pathLL t \kappa_1}$ and $\overline{\pathLL t \kappa_2}$ are paths of $\design E$ where $\overline{\kappa_1}$ and $\overline{\kappa_2}$ are positive. \end{itemize} Hence the result. \end{proof} The following result ensures that the interaction path is well defined. \begin{proposition} \label{inter-unique} If $\design D \perp \design E$, there exists a unique path $\pathLL s$ of $\design D$ such that $\dual{\pathLL s}$ is a path of $\design E$. \end{proposition} \begin{proof} Lemma~\ref{inter-dual} and Proposition~\ref{pref-inter-path} show that such a path exists, namely $\interseq{\design D}{\design E}$. Unicity follows from Proposition~\ref{prefix-norm}. \end{proof} Conversely, we prove that the existence of such a path implies the orthogonality of multi-designs (Proposition~\ref{perp-path}). \begin{proposition} \label{path-perp} Let $\design P$ and $\design N$ be closed-compatible multi-designs such that $\Omega \notin \design P$ and such that their interaction is finite. Suppose that for every path $\pathLL s\kappa^+$ of $\design P$ such that $\kappa^+$ is proper and $\overline{\pathLL s}$ is a path of $\design N$, $\overline{\pathLL s\kappa^+}$ is a path of $\design N$, and suppose also that the same condition is satisfied when reversing $\design P$ and $\design N$. Then $\design P \perp \design N$. \end{proposition} \begin{proof} By induction on the number $n$ of steps of the interaction before divergence/convergence: \begin{itemize} \item If $n = 0$, then we must have $\design P = \maltese$, since $\Omega \notin \design P$. Hence the result. \item If $n > 0$ then $\design p \in \design P$ is of the form $\design p = \posdes{x}{a}{\vect{\design n}}$ and there exists $\design n_0 = \negdes{b}{\vect{x^b}}{\design p_b}$ such that $(\design n_0/x) \in \design N$. Let $\kappa^+ = \posdes{x}{a}{\vect{x^a}}$ and remark that $\kappa^+$ is a path of $\design p$. By hypothesis, $\overline{\kappa^+} = a_x(\vect{x^a})$ is a path of $\design N$, thus a path of $\design n_0$, and this implies $\design p_a \neq \Omega$. By Lemma~\ref{lem-norm}, we have $\cut{\design P}{\design N} \leadsto \cut{\design P'}{\design N'} \setminus \setst{(\design m/x^a_i)}{x_i \notin \mathrm{fv}(\design p_a)}$ where $\design P' = (\design P \setminus \{\design p\}) \cup \{\vect{\design n/x^a}\}$ and $\design N' = (\design N \setminus \{\design n_0/x\}) \cup \{\design p_a\}$. This corresponds to a cut-net between two closed-compatible multi-designs $\design P'' \subseteq \design P'$ (negative) and $\design N'' \subseteq \design N'$ (positive), where: \begin{itemize} \item $\Omega \notin \design N''$ because $\design p_a \neq \Omega$; \item their interaction is finite and takes $n-1$ steps; \item the condition on paths stated in the proposition is satisfied for $\design P''$ and $\design N''$, because it is for $\design P$ and $\design N$: indeed, the paths of $\design P''$ (resp. $\design N''$) are the paths $\pathLL t$ such that $\kappa^+ \pathLL t$ is a path of $\design P$ (resp. $\overline{\kappa^+} \pathLL t$ is a path of $\design N$), unless such a path $\pathLL t$ contains a negative initial action whose address is not the address of a positive action on the other side, but this restriction is harmless with respect to our condition. \end{itemize} We apply the induction hypothesis to get $\design P'' \perp \design N''$. Finally $\design P \perp \design N$. 
\end{itemize} \end{proof} \begin{proposition} \label{perp-path} Let $\design D$ and $\design E$ be closed-compatible multi-designs. $\design D \perp \design E$ if and only if there exists a path $\pathLL s$ of $\design D$ such that $\dual{\pathLL s}$ is a path of $\design E$. \end{proposition} \begin{proof} \noindent $(\Rightarrow)$ If $\design D \perp \design E$ then result follows from Proposition~\ref{inter-unique}. \noindent $(\Leftarrow)$ We will prove that the hypothesis of Proposition~\ref{path-perp} is satisfied. Let us show that every path of $\design D$ (resp. of $\design E$) of the form $\pathLL t\kappa^+$ where $\kappa^+$ is proper and $\overline{\pathLL t}$ is a path of $\design E$ (resp. of $\design D$) is a prefix of $\pathLL s$ (resp. of $\overline{\pathLL s}$). By induction on the length of $\pathLL t$, knowing that it is either empty or negative-ended: \begin{itemize} \item If $\pathLL t$ is empty, $\kappa^+$ is necessarily the first action of the positive design in $\design D$ (resp. in $\design E$), hence the first action of $\pathLL s$ (resp. of $\overline{\pathLL s}$). \item If $\pathLL t = \pathLL t_0\kappa^-$, then $\overline{\pathLL t_0\kappa^-}$ is a path of $\design E$ (resp. of $\design D$) and $\pathLL t_0$ is a path of $\design D$ (resp. of $\design E$). By induction hypothesis, $\overline{\pathLL t} = \overline{\pathLL t_0\kappa^-}$ is a prefix of $\overline{\pathLL s}$ (resp. of $\pathLL s$), thus $\pathLL t$ is a prefix of $\pathLL s$ (resp. of $\overline{\pathLL s}$). The path $\pathLL s$ is of the form $\pathLL s = \pathLL t\kappa^{\prime+}\pathLL s'$. But since $\pathLL s$ and $\pathLL t\kappa^+$ are both paths of $\design D$ (resp. $\design E$), they cannot differ on a positive action, hence $\kappa^+ = \kappa^{\prime+}$. Thus $\pathLL t\kappa^+$ is a prefix of $\pathLL s$. \end{itemize} \end{proof} \subsection{Associativity for Interaction Paths} If $\pathLL s$ is a path of a multi-design $\design D$, and $\design E \subseteq \design D$, then we write $\proj{\pathLL s}{\design E}$ for the longest subsequence of $\pathLL s$ that is a path of $\design E$. Note that this is well defined. \begin{proposition}[Associativity for paths] \label{asso-path} Let $\design D$, $\design E$ and $\design F$ be cut-free multi-designs such that $\design E \cup \design F$ is a multi-design, and suppose $\design D \perp (\design E \cup \design F)$. We have: \[\interseq{\design E}{\normalisation{\cut{\design F}{\design D}}} = \proj{\interseq{\design E \cup \design F}{\design D}}{\design E}\] \end{proposition} \begin{proof} We will prove the result for a larger class of multi-designs. Instead of the assumption $\design D \perp (\design E \cup \design F)$, suppose that $\design D$ and $\design E \cup \design F$ are: \begin{itemize} \item of opposite polarities \item compatible \item satisfying $\mathrm{fv}(\design D) \subseteq \mathrm{np}(\design E \cup \design F)$ and $\mathrm{fv}(\design E \cup \design F) \subseteq \mathrm{np}(\design D)$ \item and such that $\maltese \in \normalisation{\cut{\design E \cup \design F}{\design D}}$ (in particular their interaction is finite). \end{itemize} First remark that $\design F$ and $\design D$ are compatible, hence it is possible to define $\cut{\design F}{\design D}$. 
Then since $\maltese \in \normalisation{\cut{\design E \cup \design F}{\design D}}$, we have $\maltese \in \normalisation{\cut{\design E}{\normalisation{\cut{\design F}{\design D}}}}$, indeed: \begin{align*} \normalisation{\cut{\design E \cup \design F}{\design D}} & = \normalisation{\cut{\design E}{\cut{\design F}{\design D}}} & \mbox{ by Lemmas~\ref{cutcut} and \ref{cut-commut}} \\ & = \normalisation{\cut{\design E}{\normalisation{\cut{\design F}{\design D}}}} & \mbox{ by Corollary~\ref{multi-assoc}} \end{align*} This also shows that $\design E$ and $\normalisation{\cut{\design F}{\design D}}$ are compatible. As they are of opposite polarities and they satisfy the condition on variables, $\langle \design E \leftarrow \normalisation{\cut{\design F}{\design D}} \rangle$ is defined. Let $\pathLL s = \interseq{\design E \cup \design F}{\design D}$, and let us show the result (i.e., $\proj{\pathLL s}{\design E} = \interseq{\design E}{\normalisation{\cut{\design F}{\design D}}}$) by induction on the length of $\pathLL s$, which is finite because the interaction between $\design D$ and $\design E \cup \design F$ is finite. \begin{itemize} \item If $\pathLL s = \epsilon$ then necessarily $\maltese \in \design D$ thus also $\maltese \in \normalisation{\cut{\design F}{\design D}}$. Hence $\proj{\pathLL s}{\design E} = \epsilon = \langle \design E \leftarrow \normalisation{\cut{\design F}{\design D}} \rangle$. \item If $\pathLL s = \maltese$ then $\maltese \in \design E \cup \design F$. If $\maltese \in \design E$ then $\langle \design E \leftarrow \normalisation{\cut{\design F}{\design D}} \rangle = \maltese = \proj{\pathLL s}{\design E}$. Otherwise $\maltese \in \design F$, thus $\maltese \in \normalisation{\cut{\design F}{\design D}}$ and $\interseq{\design E}{\normalisation{\cut{\design F}{\design D}}} = \epsilon = \proj{\pathLL s}{\design E}$. \item If $\pathLL s = \kappa^+ \pathLL s'$ where $\kappa^+ = \posdes{x}{a}{\vect{x^a}}$ is a proper positive action, then $\design E \cup \design F$ is a positive multi-design such that its only positive design is of the form $\design p = \posdes{x}{a}{\vect{\design m}}$. Thus $\design D$ is negative, and there exists $\design n$ such that $(\design n/x) \in \design D$ of the form $\design n = \negdes{b}{\vect{x^b}}{\design p_b}$, where $\design p_a \neq \Omega$ because the interaction converges. Let $\design D' = (\design D \setminus \{\design n/x\}) \cup \{\design p_a\}$. \begin{itemize} \item Either $\design p \in \design F$ [\textbf{reduction step}]. \\ In this case, we have $\proj{\pathLL s}{\design E} = \proj{\pathLL s'}{\design E}$, so let us show that $\proj{\pathLL s'}{\design E} = \langle \design E \leftarrow \normalisation{\cut{\design F}{\design D}} \rangle$. By definition of the interaction sequence, we have $\pathLL s' = \interseq{\design E \cup \design F'}{\design D'}$ where $\design F' = (\design F \setminus \{\design p\}) \cup \{\vect{(\design m/x^a)}\}$. Thus by induction hypothesis $\proj{\pathLL s'}{\design E} = \interseq{\design E}{\normalisation{\cut{\design F'}{\design D'}}}$. But by Lemma~\ref{lem-norm}, $\interseq{\design E}{\normalisation{\cut{\design F'}{\design D'}}} = \interseq{\design E}{\normalisation{\cut{\design F}{\design D}}}$ because the negatives among $\vect{(\design m/x^a)}$ in $\normalisation{\cut{\design F'}{\design D'}}$ will not interfere in the interaction with $\design E$, since the variables $\vect{x^a}$ do not appear in $\design E$. Hence the result. 
\item Or $\design p \in \design E$ [\textbf{commutation step}]. \\ In this case, we have $\proj{\pathLL s}{\design E} = \kappa^+(\proj{\pathLL s'}{\design E})$, and by definition of the interaction sequence $\pathLL s' = \interseq{\design E' \cup \design F}{\design D'}$ where $\design E' = (\design E \setminus \{\design p\}) \cup \{\vect{(\design m/x^a)}\}$. Thus by induction hypothesis $\proj{\pathLL s'}{\design E} = \proj{\pathLL s'}{\design E'} = \interseq{\design E'}{\normalisation{\cut{\design F}{\design D'}}}$. But we have \begin{align*} \interseq{\design E}{\normalisation{\cut{\design F}{\design D}}} &= \interseq{\design E}{\normalisation{\cut{\design F}{\design D' \cup \{(\design n/x)\} \setminus \{\design p_a\}}}} \\ &= \interseq{\design E}{\normalisation{\cut{\design F}{\design D'}}\cup \{(\design n'/x)\} \setminus \{\design p'_a\}} \\ &= \kappa^+\interseq{\design E'}{\normalisation{\cut{\design F}{\design D'}}} \end{align*} where $\design n'$ is the only negative design of $\normalisation{\cut{\design F}{\design D}}$ on variable $x$, and $\design p_a'$ the only positive design of $\normalisation{\cut{\design F}{\design D'}}$. Hence $\interseq{\design E}{\normalisation{\cut{\design F}{\design D}}} = \kappa^+ (\proj{\pathLL s'}{\design E}) = \proj{\pathLL s}{\design E}$. \end{itemize} \item If $\pathLL s = \kappa^- \pathLL s'$ where $\kappa^- = a_x(\vect{x^a})$, then $\design D$ is positive with only positive design of the form $\design p = \posdes{x}{a}{\vect{\design m}}$, and there exists a negative design $\design n$ such that $(\design n/x) \in \design E \cup \design F$, with $\design n$ of the form $\design n = \negdes{b}{\vect{x^b}}{\design p_b}$ where $\design p_a \neq \Omega$. By definition of the interaction sequence, we have $\pathLL s' = \interseq{((\design E \cup \design F) \setminus \{\design n/x\}) \cup \{\design p_a\}}{\design D'}$ where $\design D' = (\design D \setminus \{\design p\}) \cup \{\vect{(\design m/x^a)}\}$. \begin{itemize}\item Either $\design n \in \design F$ [\textbf{reduction step}]. \\ In this case, we have $\proj{\pathLL s}{\design E} = \proj{\pathLL s'}{\design E}$, so let us show that $\proj{\pathLL s'}{\design E} = \langle \design E \leftarrow \normalisation{\cut{\design F}{\design D}} \rangle$. By induction hypothesis $\proj{\pathLL s'}{\design E} = \interseq{\design E}{\normalisation{\cut{\design F'}{\design D'}}}$ where $\design F' = (\design F \setminus \{\design n/x\}) \cup \{\design p_a\}$, and by Lemma~\ref{lem-norm} we deduce $\proj{\pathLL s'}{\design E} = \interseq{\design E}{\normalisation{\cut{\design F}{\design D}}}$, hence the result. \item Or $\design n \in \design E$ [\textbf{commutation step}]. \\ In this case, we have $\proj{\pathLL s}{\design E} = \kappa^-(\proj{\pathLL s'}{\design E})$. By induction hypothesis $\proj{\pathLL s'}{\design E} = \proj{\pathLL s'}{\design E'} = \interseq{\design E'}{\normalisation{\cut{\design F}{\design D'}}}$ where $\design E' = (\design E \setminus \{\design n/x\}) \cup \{\design p_a\}$. 
But we have \begin{align*} \interseq{\design E}{\normalisation{\cut{\design F}{\design D}}} &= \interseq {\design E} {\normalisation {\cut {\design F} {\design D' \cup \{\design p\} \setminus \{\vect{(\design m/x^a)}\}} } } \\ &= \interseq {\design E} {\normalisation {\cut {\design F} {\design D'} } \cup \{\design p'\} \setminus \{\vect{(\design m'/x^a)}\} } \\ &= \kappa^-\interseq {\design E'} { \normalisation {\cut {\design F}{\design D'}}} \end{align*} where $\design p'$ is the only positive design of $\normalisation{\cut{\design F}{\design D}}$, and for each $i \le ar(a)$, $\design m'_i$ is the only negative design of $\normalisation{\cut{\design F}{\design D'}}$ on variable $x^a_i$. Therefore $\interseq{\design E}{\normalisation{\cut{\design F}{\design D}}} = \kappa^- (\proj{\pathLL s'}{\design E}) = \proj{\pathLL s}{\design E}$, which concludes the proof. \end{itemize} \end{itemize} \end{proof} \section{Proofs of Subsection~\ref{sub-regpur}} \label{sec-paths-bis} We now come back to (non ``multi-'') designs, and we prove: \begin{itemize} \item the form of visitable paths for each connective (\textsection~\ref{sec-inc-visit}), which is needed for next point; \item that (some) connectives preserve regularity (Propositions~\ref{reg-sh}, \ref{reg-tensor}, \ref{reg-arrow}, corresponding to Proposition~\ref{prop_reg_stable}), purity (Proposition~\ref{prop_pure_stable}) and quasi-purity (Proposition~\ref{prop_arrow_princ}). \end{itemize} \subsection{Preliminaries} \subsubsection{Observational Ordering and Monotonicity} \label{ord-mono} We consider the \defined{observational ordering} $\preceq$ over designs: $\design d' \preceq \design d$ if $\design d$ can be obtained from $\design d'$ by substituting: \begin{itemize} \item positive subdesigns for some occurrences of $\Omega$. \item $\maltese$ for some positive subdesigns. \end{itemize} Remark in particular that for all positive designs $\design p$ and $\design p'$, we have $\Omega \preceq \design p \preceq \maltese$, and if $\design p \sqsubseteq \design p'$ then $\design p \preceq \design p'$. We can now state the \imp{monotonicity} theorem, an important result of ludics. A proof of the theorem formulated in this form is found in \cite{Terui}. \begin{theorem}[Monotonicity] \label{thm_mono} \begin{itemize} \item If $\design d \preceq \design e$ and $\design m \preceq \design n$, then $\design d[\design m/x] \preceq \design e[\design n/x]$ \item If $\design d \preceq \design e$ then $\normalisation{\design d} \preceq \normalisation{\design e}$ \end{itemize} \end{theorem} This means that the relation $\preceq$ compares the likelihood of convergence: if $\design d \perp \design e$ and $\design d \preceq \design d'$ then $\design d' \perp \design e$. In particular, if $\beh B$ is a behaviour, if $\design d \in \beh B$ and $\design d \preceq \design d'$ then $\design d' \in \beh B$. Remark the following important fact: given a path $\pathLL s$ of some design $\design d$, there is a unique design maximal for $\preceq$ such that $\pathLL s$ is a path of it. Indeed, this design $\completed{\pathLL s}$ is obtained from $\design d$ by replacing all positive subdesigns (possibly $\Omega$) whose first positive action is not in $\pathLL s$ by $\maltese$. Note that, actually, the design $\completed{\pathLL s}$ does not depend on $\design d$ but only on the path $\pathLL s$. 
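To fix ideas, here are two degenerate instances of this construction (given only as an illustration, and not used in the sequel): for the path consisting of the single action $\maltese$ we have $\completed{\maltese} = \maltese$, while for the empty path $\epsilon$ of a negative design we have \[\completed{\epsilon} = \negdes{a}{\vect{x^a}}{\maltese},\] i.e.\ the negative design all of whose branches immediately end with $\maltese$. A richer instance is worked out in the following example.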
\begin{example} Consider design $\design d$ and the path $\pathLL s$ below: \begin{align*} \design d \hspace{.5cm} & = \hspace{.5cm} \posdes{x}{a}{b(y).(\posdes{y}{e}{}),c().\maltese+d(z).(\posdes{z}{e}{})} \\ \pathLL s \hspace{.5cm} & = \hspace{.5cm} \posdes{x}{a}{x_1, x_2} \hspace{1cm} b_{x_1}(y) \hspace{1cm} \posdes{y}{e}{} \hspace{1cm} c_{x_2}() \hspace{1cm} \maltese \end{align*} \begin{center} \begin{tikzpicture}[grow=up, level distance=3cm, sibling distance = 4.5cm, scale=.4] \draw[-] node (a) {$\posdes{x_0}{a}{x_1,x_2}$} child { node (d) {$d_{x_2}(y)$} child { node (e2) {$\posdes{z}{e}{}$} } } child { node (c) {$c_{x_2}()$} child { node (dai) {$\maltese$} } } child { node (b) {$b_{x_1}(y)$} child { node (e1) {$\posdes{y}{e}{}$} } } ; \draw[-, red, thick, rounded corners] ($(a)+(-2.8,-.5)$) -- ($(a)+(-2.8,.5)$) -- ($(b)+(-1.7,-.5)$) -- ($(e1)+(-1.7,0)$) -- ($(e1)+(-1.4,-.3)$); \draw[-,red, dashed, thick] ($(e1)+(-1.4,-.3)$) -- ($(c)+(-1.7,-.2)$); \draw[->, red, thick, rounded corners] ($(c)+(-1.7,-.2)$) -- ($(c)+(-1.4,-.5)$) -- ($(dai)+(-1.4,.5)$) ; \node[red] at ($(b)+(-2.4,1.2)$) {$\pathLL s$}; \end{tikzpicture} \end{center} We have $\completed{\pathLL s} = \posdes{x}{a}{b(y).(\posdes{y}{e}{}) + \sum_{f \neq b}f(\vect{x^f}).\maltese , \negdes{f}{\vect{x^f}}{\maltese}}$ \begin{center} \begin{tikzpicture}[grow=up, level distance=3cm, sibling distance = 3.5cm, scale=.4] \draw[-] node (a) {$\posdes{x_0}{a}{x_1,x_2}$} child { node (d) {$\dots$} edge from parent[draw=none] } child { node (d) {$f_{x_2}(\vect{x^f})$} child { node (e2) {$\maltese$} } } child { node (d) {$\dots$} edge from parent[draw=none] } child { node (d) {$d_{x_2}(y)$} child { node (e2) {$\maltese$} } } child { node (c) {$c_{x_2}()$} child { node (dai) {$\maltese$} } } child { node (b) {$b_{x_1}(y)$} child { node (e1) {$\posdes{y}{e}{}$} } } child { node (d) {$\dots$} edge from parent[draw=none] } child { node (d) {$f_{x_1}(\vect{x^f})$} child { node (e2) {$\maltese$} } } child { node (d) {$\dots$} edge from parent[draw=none] } ; \draw[-, red, thick, rounded corners] ($(a)+(-2.5,-.5)$) -- ($(a)+(-2.5,.5)$) -- ($(b)+(-1.4,-.5)$) -- ($(e1)+(-1.4,0)$) -- ($(e1)+(-1.1,-.3)$); \draw[-,red, dashed, thick] ($(e1)+(-1.1,-.3)$) -- ($(c)+(-1.5,-.2)$); \draw[->, red, thick, rounded corners] ($(c)+(-1.5,-.2)$) -- ($(c)+(-1.2,-.5)$) -- ($(dai)+(-1.2,.5)$) ; \node[red] at ($(b)+(-2.4,1.2)$) {$\pathLL s$}; \end{tikzpicture} \end{center} \end{example} \begin{proposition} \label{cor-mono} If $\pathLL s \in \visit{\beh B}$ then $\completed{\pathLL s} \in \beh B$. \end{proposition} \begin{proof} There exists $\design d \in \beh B$ such that $\pathLL s$ is a path of $\design d$, thus $\design d \preceq \completed{\pathLL s}$. The result then comes from monotonicity (Theorem~\ref{thm_mono}). \end{proof} \subsubsection{More on Paths} Let $\beh B$ be a behaviour. \begin{lemma} \label{lem_visit_path_inc} If $\design d \in \beh B$ and $\pathLL s \in \visit{B}$ is a path of $\design d$, then $\pathLL s$ is a path of $|\design d|$. \end{lemma} \begin{proof} Let $\design e \in \beh B^\perp$ such that $\pathLL s = \interseq{\design d}{\design e}$, and let $\pathLL t = \interseq{|\design d|}{\design e}$. \begin{itemize} \item Since $|\design d| \sqsubseteq \design d$, the path $\pathLL s$ cannot be a strict prefix of $\pathLL t$, and $\pathLL s$ and $\pathLL t$ cannot differ on a positive action. \item If $\pathLL t$ is a strict prefix of $\pathLL s$ then it is positive-ended. 
So $\dual{\pathLL s}$ and $\dual{\pathLL t}$ are paths of $\design e$ differing on a positive action, which is impossible. \item If $\pathLL s$ and $\pathLL t$ differ on a negative action, say $\pathLL u\kappa^-_1$ and $\pathLL u\kappa^-_2$ are respective prefixes of $\pathLL s$ and $\pathLL t$ with $\kappa^-_1 \neq \kappa^-_2$, then $\overline{\pathLL u\kappa^-_1}$ and $\overline{\pathLL u\kappa^-_2}$ are paths of $\design e$ differing on a positive action, which is impossible. \end{itemize} Thus we must have $\pathLL s = \pathLL t$, hence the result. \end{proof} \begin{lemma} \label{daimon_visit} Let $\pathLL s \in \visit B$. For every positive-ended (resp. negative-ended) prefix $\pathLL s'$ of $\pathLL s$, we have $\pathLL s' \in \visit B$ (resp. $\pathLL s'\maltese \in \visit B$). \end{lemma} \begin{proof} Let $\pathLL s = \interseq{\design d}{\design e}$ where $\design d \in \beh B$ and $\design e \in \beh B^\perp$, and let $\pathLL s'$ be a prefix of $\pathLL s$. \begin{itemize} \item If $\pathLL s'$ is negative-ended, let $\kappa^+$ be such that $\pathLL s'\kappa^+$ is a prefix of $\pathLL s$. The action $\kappa^+$ comes from $\design d$. Consider the design $\design d'$ obtained from $\design d$ by replacing the positive subdesign of $\design d$ starting on action $\kappa^+$ with $\maltese$. Since $\design d \preceq \design d'$, by monotonicity $\design d' \in \beh B$. Moreover $\pathLL s' \maltese = \interseq{\design d'}{\design e}$, hence the result. \item If $\pathLL s'$ is positive-ended then either $\pathLL s' = \pathLL s$ and there is nothing to prove, or $\pathLL s'$ is a strict prefix of $\pathLL s$, so assume we are in the second case. $\pathLL s'$ is $\maltese$-free, hence $\overline{\pathLL s'}$ is a negative-ended prefix of $\dual{\pathLL s} \in \visit{B^\perp}$. Using the argument above, we obtain $\dual{\pathLL s'} = \overline{\pathLL s'} \maltese \in \visit{B^\perp}$, thus $\pathLL s' \in \visit B$. \end{itemize} \end{proof} \begin{lemma} \label{nec} Let $\pathLL s \in \visit{B}$. For every prefix $\pathLL s' \kappa^-$ of $\pathLL s$ and every $\design d \in \beh B$ such that $\pathLL s'$ is a path of $\design d$, $\pathLL s'\kappa^-$ is a prefix of a path of $\design d$. \end{lemma} \begin{proof} There exist $\design d_0 \in \beh B$ and $\design e_0 \in \beh B^{\perp}$ such that $\pathLL s = \langle \design d_0 \leftarrow \design e_0 \rangle$. Let $\pathLL s' \kappa^-$ be a prefix of $\pathLL s$, and let $\design d \in \beh B$ be such that $\pathLL s'$ is a path of $\design d$. Since $\overline{\pathLL s'}$ is a prefix of a path of $\design e_0$, $\pathLL s'$ is a prefix of $\langle \design d \leftarrow \design e_0 \rangle$. We cannot have $\pathLL s' = \langle \design d \leftarrow \design e_0 \rangle$, otherwise $\dual{\pathLL s'} = \overline{\pathLL s'}\maltese$ and $\overline{\pathLL s' \kappa^-}$ would be paths of $\design e_0$ differing on a positive action, which is impossible. Thus there exists $\kappa^{\prime-}$ such that $\pathLL s'\kappa^{\prime-}$ is a prefix of $\langle \design d \leftarrow \design e_0 \rangle$, which is a path of $\design d$, and necessarily $\kappa^- = \kappa^{\prime-}$. Finally $\pathLL s'\kappa^-$ is a prefix of a path of $\design d$.
\end{proof} \subsubsection{An Alternative Definition of Regularity} Define the \defined{anti-shuffle} ($\text{\rotatebox[origin=c]{180}{$\shuffle$}}$) as the dual operation of shuffle, that is: \begin{itemize} \item $\pathLL s \text{\rotatebox[origin=c]{180}{$\shuffle$}} \pathLL t = \dual{\dual{\pathLL s} \shuffle \dual{\pathLL t}}$ if $\pathLL s$ and $\pathLL t$ are paths of same polarity; \item $S \text{\rotatebox[origin=c]{180}{$\shuffle$}} T = \dual{\dual{S} \shuffle \dual{T}}$ if $S$ and $T$ are sets of paths of same polarity. \end{itemize} \begin{definition} \begin{itemize} \item A \defined{trivial view} is an aj-sequence such that each proper action except the first one is justified by the immediate previous action. In other words, it is a view such that its dual is a view as well. \item The \defined{trivial view of} an aj-sequence is defined inductively by: \begin{align*} \triv{\epsilon} & =\epsilon && \mbox{ empty sequence} \\ \triv{\pathLL s\maltese} & =\triv{\pathLL s}\maltese && \\ \triv{\pathLL s\kappa} & =\kappa && \mbox{ if } \kappa \neq \maltese \mbox{ initial} \\ \triv{\pathLL s\kappa} & =\triv{\pathLL s_0}\kappa && \mbox{ if } \kappa \neq \maltese \mbox{ justified, where } \pathLL s_0 \mbox{ prefix of } \pathLL s \mbox{ ending on } \mathrm{just}(\kappa) \end{align*} We also write $\trivv{\kappa}{\pathLL s}$ (or even $\triv{\kappa}$) instead of $\triv{\pathLL s' \kappa}$ when $\pathLL s' \kappa$ is a prefix of $\pathLL s$. \item \defined{Trivial views of a design} $\design d$ are the trivial views of its paths (or of its views). In particular, $\epsilon$ is a trivial view of negative designs only. \item Trivial views of designs in $|\beh B|$ are called \defined{trivial views of} $\beh B$. \end{itemize} \end{definition} \begin{lemma} \label{triv-view-path} \begin{enumerate} \item Every view is in the anti-shuffle of trivial views. \item Every path is in the shuffle of views. \end{enumerate} \end{lemma} \begin{proof} \ \begin{enumerate} \item Let $\viewseq v$ be a view, the result is shown by induction on $\viewseq v$: \begin{itemize} \item If $\viewseq v = \epsilon$ or $\viewseq v = \kappa$, it is itself a trivial view, hence the result. \item Suppose $\viewseq v = \viewseq v'\kappa$ with $\viewseq v' \neq \epsilon$ and $\viewseq v' \in \viewseq t_1 \text{\rotatebox[origin=c]{180}{$\shuffle$}} \dots \text{\rotatebox[origin=c]{180}{$\shuffle$}} \viewseq t_n$ where the $\viewseq t_i$ are trivial views. \begin{itemize} \item If $\kappa$ is negative, as $\viewseq v$ is a view, the action $\kappa$ is justified by the last action of $\viewseq v'$, say $\kappa^+$. Hence $\kappa^+$ is the last action of some trivial view $\viewseq t_{i_0}$. Hence $\viewseq v \in \viewseq t_1 \text{\rotatebox[origin=c]{180}{$\shuffle$}} \dots \text{\rotatebox[origin=c]{180}{$\shuffle$}} \viewseq t_{i_0-1} \text{\rotatebox[origin=c]{180}{$\shuffle$}} (\viewseq t_{i_0}\kappa) \text{\rotatebox[origin=c]{180}{$\shuffle$}} \viewseq t_{i_0+1} \text{\rotatebox[origin=c]{180}{$\shuffle$}} \dots \text{\rotatebox[origin=c]{180}{$\shuffle$}} \viewseq t_n$. \item If $\kappa$ is positive, either it is initial and $\viewseq v \in \viewseq t_1 \text{\rotatebox[origin=c]{180}{$\shuffle$}} \dots \text{\rotatebox[origin=c]{180}{$\shuffle$}} \viewseq t_n \text{\rotatebox[origin=c]{180}{$\shuffle$}} \kappa$ with $\kappa$ a trivial view, or it is justified by $\kappa^-$ in $\viewseq v'$. 
In the last case, there exists a unique $i_0$ such that $\kappa^-$ appears in $\viewseq t_{i_0}$, so let $\viewseq t\kappa^-$ be the prefix of $\viewseq t_{i_0}$ ending with $\kappa^-$. We have that $\viewseq v \in \viewseq t_1 \text{\rotatebox[origin=c]{180}{$\shuffle$}} \dots \text{\rotatebox[origin=c]{180}{$\shuffle$}} \viewseq t_n \text{\rotatebox[origin=c]{180}{$\shuffle$}} (\viewseq t\kappa^-\kappa)$ where $\viewseq t\kappa^-\kappa$ is a trivial view. \end{itemize} \end{itemize} \item Similar reasoning as above, but replacing $\text{\rotatebox[origin=c]{180}{$\shuffle$}}$ by $\shuffle$, ``trivial view'' by ``view'', ``view'' by ``path'', and exchanging the role of the polarities of actions. \end{enumerate} \end{proof} \begin{remark} Following previous result, note that every view (resp. path) of a design $\design d$ is in the anti-shuffle of trivial views (resp. in the shuffle of views) of $\design d$. \end{remark} \begin{proposition} \label{reg2} $\beh B$ is regular if and only if the following conditions hold: \begin{itemize} \item the positive-ended trivial views of $\beh B$ are visitable in $\beh B$, \item $\visit{B}$ and $\visit{B^\perp}$ are stable under $\shuffle$ (i.e., $\visit{B}$ is stable under $\shuffle$ and $\text{\rotatebox[origin=c]{180}{$\shuffle$}}$). \end{itemize} \end{proposition} \begin{proof} Let $\beh B$ be a behaviour. \\ \noindent ($\Rightarrow$) Suppose $\beh B$ is regular, and let $\viewseq t$ be a positive-ended trivial view of $\beh B$. There exists a view $\viewseq v$ of a design $\design d \in |\beh B|$ such that $\viewseq t$ is a subsequence of $\viewseq v$, and such that $\viewseq v$ ends with the same action as $\viewseq t$. Since $\viewseq v$ is a view of $\design d$, $\viewseq v$ is in particular a path of $\design d$, hence by regularity $\viewseq v \in \visit{B}$. There exists $\design e \in \beh B^\perp$ such that $\viewseq v = \interseq{\design d}{\design e}$, and by Lemma~\ref{lem_visit_path_inc} we can even take $\design e \in |\beh B^\perp|$. Since $\dual{\viewseq v}$ is a path of $\design e$, $\view{\dual{\viewseq v}}$ is a view of $\design e$. But notice that $\view{\dual{\viewseq v}} = \view{\dual{\viewseq t}} = \dual{\viewseq t}$ by definition of a view and of a trivial view. We deduce that $\dual{\viewseq t}$ is a view (and in particular a path) of $\design e$, hence $\dual{\viewseq t} \in \visit{B^\perp}$ by regularity. Finally, $\viewseq t \in \visit{B}$. \\ \noindent ($\Leftarrow$) Assume the two conditions of the statement. Let $\pathLL s$ be a path of some design of $|\beh B|$. By Lemma~\ref{triv-view-path}, we know that there exist views $\viewseq v_1$, \dots, $\viewseq v_n$ such that $\pathLL s \in \viewseq v_1 \shuffle \dots \shuffle \viewseq v_n$, and for each $\viewseq v_i$ there exist trivial views $\viewseq t_{i,1}$, \dots, $\viewseq t_{i,m_i}$ such that $\viewseq v_i \in \viewseq t_{i,1} \text{\rotatebox[origin=c]{180}{$\shuffle$}} \dots \text{\rotatebox[origin=c]{180}{$\shuffle$}} \viewseq t_{i,m_i}$. By hypothesis each $\viewseq t_{i,j}$ is visitable in $\beh B$, hence as $\visit{B}$ is stable by anti-shuffle, $\viewseq v_i \in \visit{B}$. Thus as $\visit{B}$ is stable by shuffle, $\pathLL s \in \visit{B}$. Similarly the paths of designs of $|\beh B^\perp|$ are visitable in $\beh B^\perp$. Hence the result. 
\end{proof} \subsection{Form of the Visitable Paths} \label{sec-inc-visit} From internal completeness, we can make explicit the form of the visitable paths for behaviours constructed by logical connectives; such results are necessary for proving the stability of regularity and purity (\textsection~\ref{sub-reg} and \ref{sub-pur} respectively). We will use the notations given at the beginning of Subsection~\ref{sub_connectives}, and also the following. Given an action $\kappa$ and a set of sequences $V$, we write $\kappa V$ for $\setst{\kappa\pathLL s}{\pathLL s \in V}$. Let us note $\kappa_\blacktriangledown = x_0|\blacktriangledown\langle x\rangle$, $\kappa_\blacktriangle = \blacktriangle_{x_0}(x)$, $\kappa_\bullet = x_0|\bullet \langle x, y \rangle$ and $\kappa_{\iota_i} = x_0|\iota_i \langle x_i\rangle$ for $i \in \{1, 2\}$. In this section are proved the following results: \begin{itemize} \item $\visit{\shpos N} = \kappa_\blacktriangledown \visit{N}^x \cup \{\maltese\}$ and $\visit{\shneg P} = \kappa_\blacktriangle \visit{P}^x \cup \{\epsilon\}$ (Proposition~\ref{visit-sh}), \item $\visit{M \oplus N} = \kappa_{\iota_1} \visit{M}^{x_1} \cup \kappa_{\iota_2} \visit{N}^{x_2} \cup \{\maltese\}$ (Proposition~\ref{visit-pl}), \item $\visit{M \otimes N} = \kappa_\bullet (\visit{M}^x \shuffle \visit{N}^y) \cup \{\maltese\}$ if $\beh M$ and $\beh N$ are regular (Proposition~\ref{visit-tensor-reg}), \item the general form of the visitable paths of $\beh M \otimes \beh N$, not as simple (Proposition~\ref{visit-tensor}), \item finally, the case of $\multimap$ easily deduced from $\otimes$ (Corollaries~\ref{visit-arrow} and \ref{visit-arrow-reg}). \end{itemize} \subsubsection{Shifts} \begin{lemma} \ \label{lem-inc-sh} \begin{enumerate} \item \label{lem1} $(\blacktriangle(x).(\beh N^\perp)^x)^\perp \subseteq \blacktriangledown\langle\beh N\rangle \cup \{\maltese\}$. \item \label{lem2} $\blacktriangle(x).(\beh N^\perp)^x \subseteq \blacktriangledown\langle\beh N\rangle^\perp$. \end{enumerate} \end{lemma} \begin{proof} Let $E = \blacktriangledown\langle\beh N\rangle$, and let $F = \blacktriangle(x).(\beh N^\perp)^x$. To show the lemma, we must show $F^\perp \subseteq E \cup \{\maltese\}$ and $F \subseteq E^\perp$. \begin{enumerate} \item Let $\design q \in F^\perp$. If $\design q \neq \maltese$, $\design q$ is necessarily of the form $\blacktriangledown\langle\design n\rangle$ where $\design n$ is a negative atomic design. For every design $\design p \in \beh N^\perp$, we have $\blacktriangle(x).\design p^x \in F$ and $\design q[\blacktriangle(x).\design p^x/x_0] \leadsto \design p[\design n/x_0]$, thus $\normalisation{\design q[\blacktriangle(x).\design p^x/x_0]} = \normalisation{\design p[\design n/x_0]} = \maltese$ since $\design q \perp \blacktriangle(x).\design p^x$. We deduce $\design n \in \beh N$, hence $\design q \in E$. \item Let $\design m = \blacktriangle(x).\design p^x \in F$. For every design $\design n \in \beh N$, we have $\blacktriangledown\langle\design n\rangle[\design m/x_0] \leadsto \design p[\design n/x_0]$, thus $\normalisation{\blacktriangledown\langle\design n\rangle[\design m/x_0]} = \normalisation{\design p[\design n/x_0]} = \maltese$ since $\design p \in \beh N^\perp$ and $\design n \in \beh N$. Hence $\design m \in E^\perp$. \end{enumerate} \end{proof} \begin{lemma} \label{lem-inc-shneg} $\shneg \beh P = (\shpos \beh P^\perp)^\perp$. 
\end{lemma} \begin{proof} If we take $\beh N = \beh P^\perp$, Lemma~\ref{lem-inc-sh} gives us: \begin{enumerate} \item \label{lem3} $(\blacktriangle(x).\beh P^x)^\perp \subseteq \blacktriangledown\langle\beh P^\perp\rangle \cup \{\maltese\}$ and \item \label{lem4} $\blacktriangle(x).\beh P^x \subseteq \blacktriangledown\langle\beh P^\perp\rangle^\perp$. \end{enumerate} Let $E = \blacktriangledown\langle\beh P^\perp\rangle$, and let $F = \blacktriangle(x).\beh P^x$. By definition $\shneg \beh P = F^{\perp\perp}$. From (\ref{lem4}) we deduce $F^{\perp\perp} \subseteq E^\perp$, and from (\ref{lem3}) $E^\perp = (E \cup \{\maltese\})^\perp \subseteq F^{\perp\perp}$. Hence $\shneg \beh P = F^{\perp\perp} = E^\perp = (\shpos \beh P^\perp)^\perp$. \end{proof} \begin{proposition}\ \label{visit-sh} \begin{enumerate} \item $\visit{\shpos N} = \kappa_\blacktriangledown \visit{N}^x \cup \{\maltese\}$ \item $\visit{\shneg P} = \kappa_\blacktriangle \visit{P}^x \cup \{\epsilon\}$ \end{enumerate} \end{proposition} \begin{proof} \ \begin{enumerate} \item $(\subseteq)$ Let $\design q \in \shpos \beh N$ and $\design m \in (\shpos \beh N)^\perp$, let us show that $\langle \design q \leftarrow \design m\rangle \in \kappa_\blacktriangledown\visit{N}^x \cup \{\maltese\}$. By Lemma~\ref{lem-inc-shneg}, $\design m \in \shneg \beh N^\perp$. If $\design q = \maltese$ then $\langle \design q \leftarrow \design m \rangle = \maltese$. Otherwise, by Theorem~\ref{thm_intcomp_all}, $\design q = \blacktriangledown\langle\design n\rangle$ with $\design n \in \beh N$. We have $\langle \design q \leftarrow \design m \rangle = \langle \design q \leftarrow |\design m| \rangle$ by Lemma~\ref{lem_visit_path_inc} , where $ |\design m| \in \blacktriangle(x).|(\beh N^\perp)^x|$ by Theorem~\ref{thm_intcomp_all}, hence $|\design m|$ is of the form $|\design m| = \blacktriangle(x).\design p^x$ with $\design p \in \beh N^\perp$. By definition $\langle \design q \leftarrow |\design m| \rangle = \kappa_\blacktriangledown \langle \design n^x \leftarrow \design p^x \rangle$, where $\langle \design n^x \leftarrow \design p^x \rangle \in \visit{N}^x$. $(\supseteq)$ Indeed $\maltese \in \visit{\shpos N}$. Now let $\pathLL s \in \kappa_\blacktriangledown \visit{N}^x$. There exist $\design n \in \beh N$ and $\design p \in \beh N^\perp$ such that $\pathLL s = \kappa_\blacktriangledown \langle \design n^x \leftarrow \design p^x \rangle$. Note that $\blacktriangledown\langle\design n\rangle \in \blacktriangledown\langle\beh N\rangle$ and $\blacktriangle(x).\design p^x \in \blacktriangle(x).(\beh N^\perp)^x$. By Lemma~\ref{lem-inc-shneg}, $\shneg \beh N^\perp = (\shpos \beh N)^\perp$, hence $\blacktriangledown\langle\design n\rangle \perp \blacktriangle(x).\design p^x$. Moreover $\interseq{\blacktriangledown\langle\design n\rangle}{\blacktriangle(x).\design p^x} = \kappa_\blacktriangledown \langle \design n^x \leftarrow \design p^x \rangle = \pathLL s$, therefore $\pathLL s \in \visit{\shpos N}$. \item By Lemma~\ref{lem-inc-shneg} and previous item, and remarking that $\visit B = \dual{\visit{B^\perp}}$ for every behaviour $\beh B$, we have: $\visit{\shneg P} = \dual{\visit{(\shneg P)^\perp}} = \dual{\visit{\shpos P^\perp}} = \dual{(\kappa_\blacktriangledown \visit{ P^\perp}^x \cup \{\maltese\})} = \kappa_\blacktriangle \dual{\visit{P^\perp}^x} \cup \{\epsilon\} = \kappa_\blacktriangle \visit{P}^x \cup \{\epsilon\}$. 
\end{enumerate} \end{proof} \subsubsection{Plus} \begin{proposition}\ \label{visit-pl} $\visit{M \oplus N} = \kappa_{\iota_1} \visit{M}^{x_1} \cup \kappa_{\iota_2} \visit{N}^{x_2} \cup \{\maltese\}$ \end{proposition} \begin{proof} Remark that $\beh M \oplus \beh N = (\iota_1 \langle \beh M \rangle \cup \{\maltese\}) \cup (\iota_2 \langle \beh N \rangle \cup \{\maltese\})$ is the union of behaviours $\oplus_1 \beh M$ and $\oplus_2 \beh N$, which correspond respectively to $\shpos \beh M$ and $\shpos \beh N$ with a different name for the first action. Moreover, $(\beh M \oplus \beh N)^\perp = \setst{\design n}{\proj{\design n}{\pi_1} \in \pi_1(x).(\beh M^\perp)^x \mbox{ and } \proj{\design n}{\pi_2} \in \pi_2(x).(\beh N^\perp)^x} = (\with_1 \beh M^\perp) \cap (\with_2 \beh N^\perp)$, where the behaviours $\with_1 \beh M^\perp$ and $\with_2 \beh N^\perp$ correspond to $\shneg \beh M^\perp$ and $\shneg \beh N^\perp$ with different names; note also that for every $\design d \in |\with_1\beh M^\perp|$ (resp. $|\with_2\beh N^\perp|$) there exists $\design d' \in (\beh M \oplus \beh N)^\perp$ such that $\design d \sqsubseteq \design d'$, in other words such that $\design d = |\design d'|_{\with_1\beh M^\perp}$ (resp. $|\design d'|_{\with_2\beh N^\perp}$). Therefore the proof can be conducted similarly to that of Proposition~\ref{visit-sh}(1). \end{proof} \subsubsection{Tensor and Linear Map} The following proposition is joint work with Fouquer\'e and Quatrini; in \cite{FQ2}, they prove a similar result in the framework of the original Ludics. \begin{proposition} \label{visit-tensor} $\pathLL s \in \visit{M \otimes N}$ if and only if the two conditions below are satisfied: \begin{enumerate} \item $\pathLL s \in \kappa_\bullet(\visit{M}^x \shuffle \visit{N}^y) \cup \{\maltese\}$, \item for all $\pathLL t \in \visit{M}^x \shuffle \visit{N}^y$, for all $\kappa^-$ such that $\overline{\kappa_\bullet \pathLL t \kappa^-}$ is a path of $\completed{\dual{\pathLL s}}$, $\pathLL t \kappa^- \maltese \in \visit{M}^x \shuffle \visit{N}^y$. \end{enumerate} \end{proposition} The proof of this proposition uses some material on multi-designs introduced in Section~\ref{multi}. Note also that for all negative designs $\design m$ and $\design n$, we will write $\design m \otimes \design n$ instead of $x_0|\bullet\langle\design m, \design n\rangle$. \begin{proof} \ $(\Rightarrow)$ Let $\pathLL s \in \visit{M \otimes N}$. If $\pathLL s = \maltese$ then both conditions are trivial, so suppose $\pathLL s \neq \maltese$. By internal completeness (Theorem~\ref{thm_intcomp_all}), there exist $\design m \in \beh M$, $\design n \in \beh N$ and $\design n_0 \in (\beh M \otimes \beh N)^{\perp}$ such that $\pathLL s = \langle \design m \otimes \design n \leftarrow \design n_0\rangle$. Thus $\design n_0$ must be of the form $\design n_0 = \negdes{a}{\vect{z^a}}{\design p_a}$ with $\design p_\wp \neq \Omega$ (remember that $\bullet = \overline \wp$), and we have $\pathLL s = \kappa_\bullet \pathLL s'$ where $\pathLL s' = \langle \{\design m^x, \design n^y\} \leftarrow \{\design p_\wp, \design n_0/x_0\}\rangle = \langle \{\design m^x, \design n^y\} \leftarrow \design p_\wp\rangle$. Let us prove both properties: \begin{enumerate} \item By Proposition~\ref{asso-path}, $\proj{\pathLL s'}{\design m^x} = \langle \design m^x \leftarrow \normalisation{\design p_\wp[\design n/y]} \rangle$, where $\design m^x \in \beh M^x$.
Moreover, $\normalisation{\design p_\wp[\design n/y]} \in \beh M^{x\perp}$, indeed: for any $\design m' \in \beh M$, we have $\normalisation{\normalisation{\design p_\wp[\design n/y]}[\design m'/x]} = \normalisation{\design p_\wp[\design n/y, \design m'/x]} = \normalisation{(\design m' \otimes \design n)[\design n_0/x_0]} = \maltese$ using associativity and one reduction step backwards. Thus $\proj{\pathLL s'}{\design m^x} \in \visit{M}^x$. Likewise, $\proj{\pathLL s'}{\design n^y} = \langle \design n^y \leftarrow \normalisation{\design p_\wp[\design m/x]} \rangle$, so $\proj{\pathLL s'}{\design n^y} \in \visit{N}^y$. Therefore $\pathLL s' \in (\visit{M}^x \shuffle \visit{N}^y)$. \item Now let $\pathLL t_1 \in \visit{M}^x, \pathLL t_2 \in \visit{N}^y$. Suppose $\pathLL t \in (\pathLL t_1 \shuffle \pathLL t_2)$ and $\kappa^-$ is a negative action such that $\overline {\kappa_\bullet \pathLL t\kappa^-}$ is a path of $\completed{\dual{\pathLL s}}$. Without loss of generality, suppose moreover that the action $\kappa^-$ comes from $\design m^x$, and let us show that $\pathLL t_1 \kappa^- \maltese \in \visit{M}^x$. Let $\pathLL t' = \langle \{\completed{\pathLL t_1}/x, \completed{\pathLL t_2}/y\} \leftarrow \completed{\dual{\pathLL s'}} \rangle$. We will show that $\pathLL t_1 \kappa^-$ is a prefix of $\proj{\pathLL t'}{\completed{\pathLL t_1}}$ and that $\proj{\pathLL t'}{\completed{\pathLL t_1}} \in \visit M^x$, leading to the conclusion by Lemma~\ref{daimon_visit}. Note the following facts: \begin{enumerate} \item $\completed{\dual{\pathLL s}} = \wp(x, y).\completed{\dual{\pathLL s'}} + \sum_{a \neq \wp}a(\vect{z^a}).\maltese$, and thus $\completed{\dual{\pathLL s'}} \neq \maltese$ (otherwise a path of the form $\overline {\kappa_\bullet \pathLL t\kappa^-}$ cannot be path of $\completed{\dual{\pathLL s}}$). \item $\pathLL t$ is a path of the multi-design $\{\completed{\pathLL t_1}/x, \completed{\pathLL t_2}/y\}$, and $\overline{\pathLL t}$ is a prefix of a path of $\completed{\dual{\pathLL s'}}$ since $\overline{\kappa_\bullet \pathLL t\kappa^-}$ is a path of $\completed{\dual{\pathLL s}}$, thus $\pathLL t$ is a prefix of $\pathLL t'$ by Proposition~\ref{prefix-norm}. \item Since $\pathLL t$ is a $\maltese$-free positive-ended prefix of $\pathLL t'$, we have that $\overline{\kappa_\bullet \pathLL t}$ is a strict prefix of $\dual{\kappa_\bullet \pathLL t'}$. Thus there exists a positive action $\kappa_0^+$ such that $\overline{\kappa_\bullet \pathLL t}\kappa_0^+$ is a prefix of $\dual{\kappa_\bullet \pathLL t'}$. The paths $\overline{\kappa_\bullet \pathLL t\kappa^-}$ and $\overline{\kappa_\bullet \pathLL t}\kappa^+_0$ are both paths of $\completed{\dual{\pathLL s}}$, hence necessarily $\kappa^+_0 = \overline{\kappa^-}$. We deduce that $\pathLL t \kappa^-$ is a prefix of $\pathLL t'$. \item The sequence $\proj{\pathLL t'}{\completed{\pathLL t_1}}$ therefore starts with $\proj{(\pathLL t \kappa^-)}{\completed{\pathLL t_1}}$. \item We have $\proj{(\pathLL t \kappa^-)}{\completed{\pathLL t_1}} = (\proj{\pathLL t}{\completed{\pathLL t_1}})\kappa^-$ because, since $\kappa^-$ comes from $\design m^x$, it is hereditarily justified by an initial negative action of address $x$, and thus $\kappa^-$ appears in design $\completed{\pathLL t_1}$. We deduce $\proj{(\pathLL t \kappa^-)}{\completed{\pathLL t_1}} = (\proj{\pathLL t}{\completed{\pathLL t_1}})\kappa^- = \pathLL t_1\kappa^-$. 
\item Moreover, by Proposition~\ref{asso-path} $\proj{\pathLL t'}{\completed{\pathLL t_1}} = \langle \completed{\pathLL t_1} \leftarrow \normalisation{\completed{\dual{\pathLL s'}}[\completed{\pathLL t_2}/y]} \rangle$. \end{enumerate} Hence (by d, e, f) the sequence $\pathLL t_1\kappa^-$ is a prefix of $\proj{\pathLL t'}{\completed{\pathLL t_1}} = \langle \completed{\pathLL t_1} \leftarrow \normalisation{\completed{\dual{\pathLL s'}}[\completed{\pathLL t_2}/y]} \rangle$. Since $\completed{\pathLL t_1} \in \beh M^x$ (by Proposition~\ref{cor-mono}) and $\normalisation{\completed{\dual{\pathLL s'}}[\completed{\pathLL t_2}/y]} \in \beh M^{x\perp}$ (by associativity, similar reasoning as item $1$), we deduce $\proj{\pathLL t'}{\completed{\pathLL t_1}} \in \visit{M}^x$. Finally $\pathLL t_1\kappa^-\maltese \in \visit{M}^x$ by Lemma~\ref{daimon_visit}. \end{enumerate} \noindent $(\Leftarrow)$ Let $\pathLL s \in \kappa_\bullet (\visit{M}^x \shuffle \visit{N}^y) \cup \{\maltese\}$ such that the second constraint is also satisfied. If $\pathLL s = \maltese$ then $\pathLL s \in \visit{M \otimes N}$ is immediate, so suppose $\pathLL s = \kappa_\bullet \pathLL s'$ where $\pathLL s' \in (\visit{M}^x \shuffle \visit{N}^y)$. Consider the design $\completed{\dual{\pathLL s}}$, and note that $\completed{\dual{\pathLL s}} = \wp(x, y).\completed{\dual{\pathLL s'}} + \sum_{a \neq \wp}a(\vect{z^a}).\maltese$. We will show by contradiction that $\completed{\dual{\pathLL s}} \in (\beh M \otimes \beh N)^\perp$, leading to the conclusion. Let $\design m \in \beh M$ and $\design n \in \beh N$ such that $\design m \otimes \design n \not\perp \completed{\dual{\pathLL s}}$. By Proposition~\ref{path-perp} and given the form of design $\completed{\dual{\pathLL s}}$, the interaction with $\design m \otimes \design n$ is finite and the cause of divergence is necessarily the existence of a path $\pathLL t$ and an action $\kappa^-$ such that: \begin{enumerate} \item $\pathLL t$ is a path of $\design m \otimes \design n$, \item $\overline{\pathLL t\kappa^-}$ is a path of $\completed{\dual{\pathLL s}}$ \item $\pathLL t\kappa^-$ is not a path of $\design m \otimes \design n$. \end{enumerate} Hence $\pathLL t$ is of the form $\pathLL t = \kappa_\bullet \pathLL t'$. Choose $\design m$ and $\design n$ such that $\pathLL t$ is of minimal length with respect to all such pairs of designs non orthogonal to $\completed{\dual{\pathLL s}}$. Let $\pathLL t_1 = \proj{\pathLL t'}{\design m^x}$ and $\pathLL t_2 = \proj{\pathLL t'}{\design n^y}$, we have $\pathLL t \in \kappa_\bullet (\pathLL t_1 \shuffle \pathLL t_2)$. Consider the design $\completed{\dual{\pathLL t}}$, and note that $\completed{\dual{\pathLL t}} = \wp(x, y).\completed{\dual{\pathLL t'}} + \sum_{a \neq \wp}a(\vect{z^a}).\maltese$. We prove the following: \begin{itemize} \item \underline{$\completed{\dual{\pathLL t}} \in (\beh M \otimes \beh N)^\perp$}: By contradiction. Let $\design m' \in \beh M$ and $\design n' \in \beh N$ such that $\design m' \otimes \design n' \not\perp \completed{\dual{\pathLL t}}$. Again using Proposition~\ref{path-perp}, divergence occurs necessarily because there exists a path $\pathLL v$ and a negative action $\kappa^{\prime-}$ such that: \begin{enumerate} \item $\pathLL v$ is a path of $\design m' \otimes \design n'$, \item $\overline{\pathLL v\kappa^{\prime-}}$ is a path of $\completed{\dual{\pathLL t}}$, \item $\pathLL v\kappa^{\prime-}$ is not a path of $\design m' \otimes \design n'$. 
\end{enumerate} Since the views of $\overline{\pathLL v\kappa^{\prime-}}$ are views of $\overline{\pathLL t}$, $\overline{\pathLL v\kappa^{\prime-}}$ is a path of $\completed{\dual{\pathLL s}}$. Thus $\design m' \otimes \design n' \not\perp \completed{\dual{\pathLL s}}$. Moreover $\pathLL v$ is strictly shorter than $\pathLL t$, indeed: $\pathLL v$ and $\pathLL t$ are $\maltese$-free, and since $\overline{\pathLL v \kappa^{\prime-}}$ is a path of $\completed{\dual{\pathLL t}}$ any action of $\pathLL v \kappa^{\prime-}$ is an action of $\pathLL t$. This contradicts the fact that $\pathLL t$ is of minimum length. We deduce $\completed{\dual{\pathLL t}} \in (\beh M \otimes \beh N)^\perp$. \item \underline{$\pathLL t \in \kappa_\bullet (\visit{M}^x \shuffle \visit{N}^y)$}: We show $\pathLL t_1 \in \visit{M}^x$, the proof of $\pathLL t_2 \in \visit{N}^y$ being similar. Since $\pathLL t$ is a path of $\design m \otimes \design n$ and $\dual{\pathLL t}$ a path of $\completed{\dual{\pathLL t}}$, we have $\pathLL t = \interseq{\design m \otimes \design n}{\completed{\dual{\pathLL t}}} = \kappa_\bullet \interseq{\{\design m^x, \design n^y\}}{\completed{\dual{\pathLL t'}}}$, hence $\pathLL t' = \interseq{\{\design m^x, \design n^y\}}{\completed{\dual{\pathLL t'}}}$. Thus by Proposition~\ref{asso-path} $\pathLL t_1 = \proj{\pathLL t'}{\design m^x} = \interseq{\design m^x}{\normalisation{\completed{\dual{\pathLL t'}}[\design n/y]}}$. Moreover $\normalisation{\completed{\dual{\pathLL t'}}[\design n/y]} \in \beh M^{x\perp}$: for any design $\design m' \in \beh M$ we have $\normalisation{\completed{\dual{\pathLL t'}}[\design n/y]} \perp \design m'^x$ because of the equality $\normalisation{\normalisation{\completed{\dual{\pathLL t'}}[\design n/y]}[\design m'/x]} = \normalisation{\completed{\dual{\pathLL t'}}[\design n/y, \design m'/x]} = \normalisation{(\design m' \otimes \design n)[\completed{\dual{\pathLL t}}/x_0]} = \maltese$, using associativity, one reduction step backwards, and the fact that $\completed{\dual{\pathLL t}} \in (\beh M \otimes \beh N)^\perp$. It follows that $\pathLL t_1 \in \visit{M}^x$. \item \underline{$\pathLL t\kappa^-$ is a path of $\design m \otimes \design n$}: Remember that $\overline{\pathLL t\kappa^-}$ is a path of $\completed{\dual{\pathLL s}}$, and we have just seen that $\pathLL t \in \kappa_\bullet (\visit{M}^x \shuffle \visit{N}^y)$. Using the second constraint of the proposition, we should have $\pathLL t_1 \kappa^-\maltese \in \visit{M}^x$ or $\pathLL t_2 \kappa^-\maltese \in \visit{N}^y$. Without loss of generality suppose $\pathLL t_1 \kappa^-\maltese \in \visit{M}^x$. Since $\design m^x \in \beh M^x$ and $\pathLL t_1$ is a path of $\design m^x$, we should also have that $\pathLL t_1\kappa^-$ is a prefix of a path of $\design m^x$ by Lemma~\ref{nec}, hence $\view{\pathLL t' \kappa^-} = \view{\pathLL t_1 \kappa^-}$ is a view of $\design m^x$. But in this case, knowing that $\pathLL t$ is a path of $\design m \otimes \design n$ and that $\view{\pathLL t \kappa^-} = \kappa_\bullet \view{\pathLL t' \kappa^-}$ is a view of $\design m \otimes \design n$, we deduce that $\pathLL t\kappa^-$ is a path of $\design m \otimes \design n$. \end{itemize} Last point contradicts the cause of divergence between $\design m \otimes \design n$ and $\completed{\dual{\pathLL s}}$. Hence $\completed{\dual{\pathLL s}} \in (\beh M \otimes \beh N)^\perp$. 
Moreover, $\dual{\pathLL s}$ is a path of $\completed{\dual{\pathLL s}}$, and since $\pathLL s \in \kappa_\bullet (\visit{M}^x \shuffle \visit{N}^y)$ there exist $\design m_0 \in \beh M$ and $\design n_0 \in \beh N$ such that $\pathLL s$ is a path of $\design m_0 \otimes \design n_0$ (and $\design m_0 \otimes \design n_0 \in \beh M \otimes \beh N$). We deduce $\pathLL s = \interseq{\design m_0 \otimes \design n_0}{\completed{\dual{\pathLL s}}}$, hence $\pathLL s \in \visit{M \otimes N}$. \end{proof} \begin{corollary} \label{visit-arrow} $\pathLL s \in \visit{N \multimap P}$ if and only if the two conditions below are satisfied: \begin{enumerate} \item $\dual{\pathLL s} \in \kappa_\bullet(\visit{N}^x \shuffle \dual{\visit{P}^y}) \cup \{\maltese\}$ \item for all $\pathLL t \in \visit{N}^x \shuffle \dual{\visit{P}^y}$, for all $\kappa^-$ such that $\overline{\kappa_\bullet \pathLL t \kappa^-}$ is a path of $\completed{\pathLL s}$, $\pathLL t \kappa^- \maltese \in \visit{N}^x \shuffle \dual{\visit{P}^y}$. \end{enumerate} \end{corollary} \subsubsection{Tensor and Linear Map, Regular Case} \begin{proposition} \label{visit-tensor-reg} If $\beh M$ and $\beh N$ are regular then $\visit{M \otimes N} = \kappa_\bullet (\visit{M}^x \shuffle \visit{N}^y) \cup \{\maltese\}$. \end{proposition} \begin{proof} Suppose $\beh M$ and $\beh N$ are regular. Following Proposition~\ref{visit-tensor}, it suffices to show that any path $\pathLL s \in \kappa_\bullet (\visit{M}^x \shuffle \visit{N}^y) \cup \{\maltese\}$ satisfies the following condition: for all $\pathLL t \in \visit{M}^x \shuffle \visit{N}^y$, for every negative action $\kappa^-$ such that $\overline{\kappa_\bullet \pathLL t \kappa^-}$ is a path of $\completed{\dual{\pathLL s}}$, $\pathLL t \kappa^- \maltese \in \visit{M}^x \shuffle \visit{N}^y$. If $\pathLL s = \maltese$, there is nothing to prove, so suppose $\pathLL s = \kappa_\bullet \pathLL s'$ where $\pathLL s' \in \visit{M}^x \shuffle \visit{N}^y$. Let $\pathLL t \in \visit{M}^x \shuffle \visit{N}^y$ and $\kappa^-$ be such that $\overline{\kappa_\bullet \pathLL t \kappa^-}$ is a path of $\completed{\dual{\pathLL s}}$, that is $\overline{\pathLL t \kappa^-}$ is a path of $\completed{\dual{\pathLL s'}}$. Let $\pathLL s_1, \pathLL t_1 \in \visit{M}^x$ and $\pathLL s_2, \pathLL t_2 \in \visit{N}^y$ be such that $\pathLL s' \in \pathLL s_1 \shuffle \pathLL s_2$ and $\pathLL t \in \pathLL t_1 \shuffle \pathLL t_2$. Without loss of generality, suppose $\kappa^-$ is an action in $\pathLL s_1$; thus we must show $\pathLL t_1 \kappa^- \maltese \in \visit{M}^x$. Notice that $\triv{\pathLL t_1\kappa^-} = \triv{\pathLL t\kappa^-} = \trivv{\kappa^-}{\pathLL s'} = \trivv{\kappa^-}{\pathLL s_1}$ (the second equality follows from the fact that $\overline{\pathLL t \kappa^-}$ is a path of $\completed{\,\dual{\pathLL s'}\,}$). Since $\pathLL s_1 \in \visit{M}^x$, the sequence $\trivv{\kappa^-}{\pathLL s_1} = \triv{\pathLL t_1\kappa^-}$ is a trivial view of $\beh M^x$. Let $\pathLL s_1'\kappa^-$ be the prefix of $\pathLL s_1$ ending with $\kappa^-$. By Lemma~\ref{daimon_visit}, $\pathLL s_1'\kappa^-\maltese \in \visit{M}^x$, so $\triv{\pathLL t_1\kappa^- \maltese} = \triv{\pathLL s'_1\kappa^-\maltese}$ is also a trivial view of $\beh M^x$; by regularity of $\beh M$, we deduce $\triv{\pathLL t_1\kappa^- \maltese} \in \visit{M}^x$.
We have $\pathLL t_1 \kappa^- \maltese \in \pathLL t_1 \shuffle \triv{\pathLL t_1\kappa^-\maltese}$, where both $\pathLL t_1$ and $\triv{\pathLL t_1\kappa^- \maltese}$ are in $\visit{M}^x$, hence $\pathLL t_1 \kappa^- \maltese \in \visit{M}^x$ by regularity of $\beh M$. \end{proof} \begin{corollary} \label{visit-arrow-reg} If $\beh N$ and $\beh P$ are regular then $\visit{N \multimap P} = \dual{\kappa_\bullet (\visit{N} \shuffle \dual{\visit{P}})} \cup \{\epsilon\}$. \end{corollary} \subsection{Proof of Proposition~\ref{prop_reg_stable}: Regularity and Connectives} \label{sub-reg} \begin{proposition}\ \label{reg-sh} \begin{enumerate} \item If $\beh N$ is regular then $\shpos \beh N$ is regular. \item If $\beh P$ is regular then $\shneg \beh P$ is regular. \end{enumerate} \end{proposition} \begin{proof}\ \begin{enumerate} \item Following Proposition~\ref{reg2}: \begin{itemize} \item By internal completeness, the trivial views of $\shpos \beh N$ are of the form $\kappa_\blacktriangledown\viewseq t$ where $\viewseq t$ is a trivial view of $\beh N$. Since $\beh N$ is regular $\viewseq t \in \visit{N}$. Hence by Proposition~\ref{visit-sh}, $\kappa_\blacktriangledown\viewseq t \in \visit{\shpos N}$. \item Since $\visit{N}$ is stable by shuffle, so is $\visit{\shpos N} = \kappa_\blacktriangledown \visit{N}^x$ where $\kappa_\blacktriangledown$ is a positive action. \item For all paths $\kappa_\blacktriangle \pathLL s$, $\kappa_\blacktriangle \pathLL t \in \visit{(\shpos N)^\perp} = \kappa_\blacktriangle \visit{N^\perp}^x$ such that $\kappa_\blacktriangle \pathLL s \shuffle \kappa_\blacktriangle \pathLL t$ is defined, $\pathLL s$ and $\pathLL t$ start necessarily by the same positive action and $\pathLL s \shuffle \pathLL t \subseteq \visit{N^\perp}^x$ because $\visit{N^\perp}$ (thus also $\visit{N^\perp}^x$) is stable by $\shuffle$, hence $\kappa_\blacktriangle \pathLL s \shuffle \kappa_\blacktriangle \pathLL t = \kappa_\blacktriangle (\pathLL s \shuffle \pathLL t) \subseteq \visit{(\shpos N)^\perp}$. \end{itemize} \item If $\beh P$ is regular then $\beh P^\perp$ is too. Then by previous point $\shpos \beh P^\perp$ is regular, therefore so is $(\shpos \beh P^\perp)^\perp$. By Lemma~\ref{lem-inc-shneg}, this means that $\shneg \beh P$ is regular. \end{enumerate} \end{proof} \begin{proposition}\ \label{reg-plus} If $\beh M$ and $\beh N$ are regular then $\beh M \oplus \beh N$ is regular. \end{proposition} \begin{proof} Similar to Proposition~\ref{reg-sh} (1), by the same remark as in proof of Proposition~\ref{visit-pl}. \end{proof} In order to show that $\otimes$ preserves regularity, consider first the following definitions and lemma. We call \defined{quasi-path} a positive-ended P-visible aj-sequence. The \defined{shuffle} $\pathLL s \shuffle \pathLL t$ of two negative quasi-paths $\pathLL s$ and $\pathLL t$ is the set of paths $\pathLL u$ formed with actions from $\pathLL s$ and $\pathLL t$ such that $\proj{\pathLL u}{\pathLL s} = \pathLL s$ and $\proj{\pathLL u}{\pathLL t} = \pathLL t$. \begin{lemma}\label{lem:dual_inverse_shuffle} Let $\pathLL s$ and $\pathLL t$ be negative quasi-paths. If $\pathLL s \shuffle \pathLL t \neq \emptyset$ then $\pathLL s$ and $\pathLL t$ are paths. \end{lemma} \begin{proof} We prove the result by contradiction. 
Let us suppose that there exists a triple $(\pathLL s, \pathLL t, \pathLL u)$ such that $\pathLL s$ and $\pathLL t$ are two negative quasi-paths, $\pathLL u \in \pathLL s \shuffle \pathLL t$ is a path, and at least one of $\pathLL s$ or $\pathLL t$ does not satisfy O-visibility, say $\pathLL s$: there exist a negative action $\kappa^-$ and a prefix $\pathLL s_0\kappa^-$ of $\pathLL s$ such that the action $\kappa^-$ is justified in $\pathLL s_0$ but $\mathrm{just}(\kappa^-)$ does not appear in $\antiview{\pathLL s_0}$. We choose the triple $(\pathLL s, \pathLL t, \pathLL u)$ such that the length of $\pathLL u$ is minimal with respect to all such triples. Without loss of generality, we can assume that $\pathLL u$ and $\pathLL s$ are of the form $\pathLL u = \pathLL u_0 \kappa^- \maltese$ and $\pathLL s = \pathLL s_0 \kappa^- \maltese$ respectively. Indeed, if this is not true, $\pathLL u$ has a strict prefix of the form $\pathLL u_0 \kappa^-$; in this case we can replace $(\pathLL s, \pathLL t, \pathLL u)$ by the triple $(\pathLL s_0\kappa^-\maltese, \proj{\pathLL u_0}{\pathLL t}, \pathLL u_0 \kappa^-\maltese)$ which satisfies all the constraints, and where the length of $\pathLL u_0 \kappa^-\maltese$ is less than or equal to the length of $\pathLL u$. Let $\kappa^+ = \mathrm{just}(\kappa^-)$. $\pathLL u$ is necessarily of the form $\pathLL u = \pathLL u_1 \alpha^- \pathLL u_2 \alpha^+ \kappa^- \maltese$ where $\alpha^-$ justifies $\alpha^+$ and $\kappa^+$ appears in $\pathLL u_1$, indeed: \begin{itemize} \item $\kappa^+$ does not appear immediately before $\kappa^-$ in $\pathLL u$, otherwise it would also be the case in $\pathLL s$, contradicting the fact that $\kappa^-$ is not O-visible in $\pathLL s$. \item The action $\alpha^+$ which is immediately before $\kappa^-$ in $\pathLL u$ is justified by an action $\alpha^-$, and $\kappa^+$ appears before $\alpha^-$ in $\pathLL u$, otherwise $\kappa^+$ would not appear in $\antiview{\pathLL u_0}$ and that would contradict O-visibility of $\pathLL u$. \end{itemize} Let us show by contradiction something that will be useful for the rest of this proof: in the path $\pathLL u$, all the actions of $\pathLL u_2$ (which cannot be initial) are justified in $\alpha^-\pathLL u_2$. If this is not the case, let $\pathLL u_1\alpha^-\pathLL u'_2\beta$ be the longest prefix of $\pathLL u$ such that $\beta$ is an action of $\pathLL u_2$ justified in $\pathLL u_1$, and let $\beta'$ be the following action (necessarily in $\pathLL u_2\alpha^+$), thus $\beta'$ is justified in $\alpha^-\pathLL u_2$. If $\beta'$ is positive (resp. negative) then $\beta$ is negative (resp. positive), thus $\view{\pathLL u_1\alpha^-\pathLL u'_2\beta} = \view{\pathLL u'_1}$ (resp. $\antiview{\pathLL u_1\alpha^-\pathLL u'_2\beta} = \antiview{\pathLL u'_1}$) where $\pathLL u'_1$ is the prefix of $\pathLL u_1$ ending on $\mathrm{just}(\beta)$. But then $\view{\pathLL u_1\alpha^-\pathLL u'_2\beta}$ (resp. $\antiview{\pathLL u_1\alpha^-\pathLL u'_2\beta}$) does not contain $\mathrm{just}(\beta')$: this contradicts the fact that $\pathLL u$ is a path, since P-visibility (resp. O-visibility) is not satisfied.
Now define $\pathLL u' = \pathLL u_1\kappa^-\maltese$, $\pathLL s' = \proj{\pathLL u'}{\pathLL s}$ and $\pathLL t' = \proj{\pathLL u'}{\pathLL t}$, and remark that: \begin{itemize} \item \underline{$\pathLL u'$ is a path}, indeed, O-visibility for $\kappa^-$ is still satisfied since $\antiview{\pathLL u_1 \alpha^- \pathLL u_2 \alpha^+ \kappa^-} = \antiview{\pathLL u_1} \alpha^- \alpha^+ \kappa^-$ and $\antiview{\pathLL u_1\kappa^-} = \antiview{\pathLL u_1}\kappa^-$ both contain $\kappa^+$ in $\antiview{\pathLL u_1}$. \item \underline{$\pathLL s'$ and $\pathLL t'$ are quasi-paths}, since $\pathLL s'$ is of the form $\pathLL s' = \pathLL s_1\kappa^-\maltese$ where $\pathLL s_1 = \proj{\pathLL u_1}{\pathLL s}$ is a prefix of $\pathLL s$ containing $\kappa^+ = \mathrm{just}(\kappa^-)$, and $\pathLL t' = \proj{\pathLL u'}{\pathLL t} = \proj{\pathLL u_1}{\pathLL t}$ is a prefix of $\pathLL t$. \item \underline{$\pathLL u' \in \pathLL s' \shuffle \pathLL t'$}. \item \underline{$\pathLL s'$ is not a path}: Note that $\pathLL s$ is of the form $\pathLL s_1 \pathLL s_2 \kappa^- \maltese$ where $\pathLL s_1 = \proj{\pathLL u_1}{\pathLL s}$ and $\pathLL s_2 = \proj{\alpha^- \pathLL u_2 \alpha^+}{\pathLL s}$. By hypothesis, $\pathLL s$ is not a path because $\kappa^+$ does not appear in $\antiview{\pathLL s_1\pathLL s_2}$. But $\antiview{\pathLL s_1\pathLL s_2}$ is of the form $\antiview{\pathLL s_1}\pathLL s_2'$, since all the actions of $\pathLL s_2$ are hereditarily justified by the first (necessarily negative) action of $\pathLL s_2$, indeed: we have proved that, in $\pathLL u$, all the actions of $\pathLL u_2$ (in particular those of $\pathLL s_2$) were justified in $\alpha^-\pathLL u_2$. Thus $\kappa^+$ does not appear in $\antiview{\pathLL s_1}$, which means that O-visibility is not satisfied for $\kappa^-$ in $\pathLL s' = \pathLL s_1\kappa^-\maltese$. \end{itemize} Hence the triple $(\pathLL s', \pathLL t', \pathLL u')$ satisfies all the conditions, and $\pathLL u'$ is strictly shorter than $\pathLL u$ since the non-empty segment $\alpha^-\pathLL u_2\alpha^+$ has been removed. This contradicts the minimality of $\pathLL u$.
Notice first that, from Proposition~\ref{visit-tensor-reg}, there exist paths $\pathLL t_1, \pathLL u_1 \in \visit{M}^x$ and $\pathLL t_2, \pathLL u_2 \in \visit{N}^y$ such that $\pathLL t \in \kappa_\bullet (\pathLL t_1 \shuffle \pathLL t_2)$ and $\pathLL u \in \kappa_\bullet (\pathLL u_1 \shuffle \pathLL u_2)$. In the case $\pathLL s$ of length $1$, either $\pathLL s = \maltese$ or $\pathLL s = \kappa_\bullet$, thus the result is immediate. So suppose $\pathLL s = \pathLL s'\kappa^-\kappa^+$ and by induction hypothesis $\pathLL s' \in \visit{M \otimes N}$. Hence, it follows from Proposition~\ref{visit-tensor-reg} that there exist paths $\pathLL s_1 \in \visit{M}^x$ and $\pathLL s_2 \in \visit{N}^y$ such that $\pathLL s' \in \kappa_\bullet (\pathLL s_1 \shuffle \pathLL s_2)$. Without loss of generality, we can suppose that $\kappa^-$ is an action of $\pathLL t_1$, hence of $\pathLL t$. We study the different cases, proving each time either that $\pathLL s \in \visit{M \otimes N}$ or that the case is impossible. \begin{itemize} \item Either $\kappa^+ = \maltese$. In that case, $\pathLL s_1\kappa^-\maltese$ is a negative quasi-path. As $\pathLL s$ is a path and $\pathLL s \in \kappa_\bullet(\pathLL s_1\kappa^-\maltese \shuffle \pathLL s_2)$, by Lemma~\ref{lem:dual_inverse_shuffle}, we have moreover that $\pathLL s_1\kappa^-\maltese$ is a path. Notice that $\kappa_\bullet\triv{\pathLL s_1\kappa^-} = \trivv{\kappa^-}{\pathLL s} = \trivv{\kappa^-}{\pathLL t} = \kappa_\bullet\trivv{\kappa^-}{\pathLL t_1}$. Hence $\triv{\pathLL s_1\kappa^-} = \trivv{\kappa^-}{\pathLL t_1}$ is a trivial view of $\beh M^x$. Let $\viewseq t\kappa^- = \triv{\pathLL s_1\kappa^-}$. By Lemma~\ref{triv-view-path}, $\pathLL s_1$ is a shuffle of anti-shuffles of trivial views of $\beh M^x$, one of which is the trivial view $\viewseq t$. Then remark that $\pathLL s_1\kappa^-\maltese$ is also a shuffle of anti-shuffles of trivial views of $\beh M^x$, replacing $\viewseq t$ by $\viewseq t\kappa^-\maltese$ (note that $\viewseq t\kappa^-\maltese$ is indeed a trivial view of $\beh M^x$ since $\viewseq t\kappa^-\maltese = \triv{\pathLL t_0\kappa^-\maltese}$ where $\pathLL t_0\kappa^-$ is the prefix of $\pathLL t_1$ ending with $\kappa^-$, and $\pathLL t_0\kappa^-\maltese \in \visit{M}^x$ by Lemma~\ref{daimon_visit}). It follows from Proposition~\ref{reg2} that $\pathLL s_1\kappa^-\maltese \in \visit{M}^x$. Finally, as $\pathLL s \in \kappa_\bullet (\pathLL s_1\kappa^-\maltese \shuffle \pathLL s_2)$ and by Proposition~\ref{visit-tensor-reg}, we have $\pathLL s \in \visit{M \otimes N}$. \item Or $\kappa^+$ is a proper action of $\pathLL t_1$, hence of $\pathLL t$. Remark that $\view{\pathLL s'\kappa^-} = \view{\kappa_\bullet\pathLL s_1\kappa^-} = \kappa_\bullet\view{\pathLL s_1\kappa^-}$, thus $\mathrm{just}(\kappa^+)$ appears in $\view{\pathLL s_1\kappa^-}$ hence $\pathLL s_1\kappa^-\kappa^+$ is a (negative) quasi-path. As $\pathLL s$ is a path and as $\pathLL s \in \kappa_\bullet(\pathLL s_1\kappa^-\kappa^+ \shuffle \pathLL s_2)$, by Lemma~\ref{lem:dual_inverse_shuffle} $\pathLL s_1\kappa^-\kappa^+$ is a path. We already know from previous item that $\pathLL s_1\kappa^-\maltese \in \visit{M}^x$. Notice that $\kappa_\bullet\triv{\pathLL s_1\kappa^-\kappa^+} = \trivv{\kappa^+}{\pathLL s} = \trivv{\kappa^+}{\pathLL t} = \kappa_\bullet\trivv{\kappa^+}{\pathLL t_1}$. Hence $\triv{\pathLL s_1\kappa^-\kappa^+} = \trivv{\kappa^+}{\pathLL t_1}$ is a trivial view of $\beh M^x$. 
Let $\viewseq u\kappa^+ = \triv{\pathLL s_1\kappa^-\kappa^+}$. By Lemma~\ref{triv-view-path}, $\pathLL s_1\kappa^-\maltese$ is a shuffle of anti-shuffles of trivial views of $\beh M^x$, one of which is the trivial view $\viewseq u\maltese$. Remark that $\pathLL s_1\kappa^-\kappa^+$ is also a shuffle of anti-shuffles of trivial views of $\beh M^x$, replacing $\viewseq u\maltese$ by $\viewseq u\kappa^+$. By Proposition~\ref{reg2}, $\pathLL s_1\kappa^-\kappa^+ \in \visit{M}^x$. Finally, as $\pathLL s \in \kappa_\bullet (\pathLL s_1\kappa^-\kappa^+ \shuffle \pathLL s_2)$ and by Proposition~\ref{visit-tensor-reg}, we have $\pathLL s \in \visit{M \otimes N}$. \item Or $\kappa^+$ is a proper action of $\pathLL u_1$, hence of $\pathLL u$. The reasoning is similar to previous item, using $\pathLL u$ and $\pathLL u_1$ instead of $\pathLL t$ and $\pathLL t_1$ respectively. \item Or $\kappa^+$ is a proper action of $\pathLL t_2$, hence of $\pathLL t$. This is impossible, being given the structure of $\pathLL s$: the action $\kappa_0^+$ following the negative action $\kappa^-$ in $\pathLL t$ is necessarily in $\pathLL t_1$ (due to the structure of a shuffle), hence the action following $\kappa^-$ in $\pathLL s$ is necessarily either $\kappa_0^+$ (hence in $\pathLL t_1$) or in $\pathLL u$. \item Or $\kappa^+$ is a proper action of $\pathLL u_2$, hence of $\pathLL u$: this case also leads to a contradiction. We know from previous item that a positive action of $\pathLL t_2$ cannot immediately follow a negative action of $\pathLL t_1$ in $\pathLL s$. Similarly, a positive action of $\pathLL u_2$ (resp.\ $\pathLL t_1$, $\pathLL u_1$) cannot immediately follow a negative action of $\pathLL u_1$ (resp.\ $\pathLL t_2$, $\pathLL u_2$) in $\pathLL s$. Suppose that there exists a positive action $\kappa_0^+$ of $\pathLL u_2$ (or resp.\ $\pathLL t_2$, $\pathLL u_1$, $\pathLL t_1$) which follows immediately a negative action $\kappa_0^-$ of $\pathLL t_1$ (or resp.\ $\pathLL u_1$, $\pathLL t_2$, $\pathLL u_2$). Let $\pathLL s_0\kappa_0^-\kappa_0^+$ be the shortest prefix of $\pathLL s$ satisfying such a property, say $\kappa_0^+$ is an action of $\pathLL u_2$ and $\kappa_0^-$ is an action of $\pathLL t_1$. Then the view $\view{\pathLL s_0\kappa_0^-}$ is necessarily only made of $\kappa_\bullet$ and of actions from $\pathLL t_1$ or $\pathLL u_1$, thus it does not contain $\mathrm{just}(\kappa_0^+)$ (where $\kappa_0^+$ cannot be initial because $\beh N$ is negative), i.e., $\pathLL s$ does not satisfy P-visibility: contradiction. \end{itemize} \end{proof} \begin{corollary} \label{reg-arrow} If $\beh N$ and $\beh P$ are regular, then $\beh N \multimap \beh P$ is regular. \end{corollary} \subsection{Proofs of Propositions~\ref{prop_pure_stable} and \ref{prop_arrow_princ}: Purity and Connectives} \label{sub-pur} \begin{proof}[Proof (Proposition~\ref{prop_pure_stable})] We must prove: \begin{itemize} \item If $\beh N$ is pure then $\shpos \beh N$ is pure. \item If $\beh P$ is pure then $\shneg \beh P$ is pure. \item If $\beh M$ and $\beh N$ are pure then $\beh M \oplus \beh N$ is pure. \item If $\beh M$ and $\beh N$ are pure then $\beh M \otimes \beh N$ is pure. \end{itemize} For the shifts and plus, the result is immediate given the form of visitable paths of $\shpos \beh N$, $\shneg \beh P$ and $\beh M \oplus \beh N$ (Propositions~\ref{visit-sh} and \ref{visit-pl}). Let us prove the result for the tensor. Let $\pathLL s = \pathLL s' \maltese \in V_{\beh M \otimes \beh N}$. 
According to Proposition~\ref{visit-tensor}, either $\pathLL s = \maltese$ or there exist $\pathLL s_1 \in \visit{M}^x$ and $\pathLL s_2 \in \visit{N}^y$ such that $\pathLL s \in \kappa_\bullet(\pathLL s_1 \shuffle \pathLL s_2)$. If $\pathLL s = \maltese$ then it is extensible with $\kappa_\bullet$, so suppose $\pathLL s \in \kappa_\bullet(\pathLL s_1 \shuffle \pathLL s_2)$. Without loss of generality, suppose $\pathLL s_1 = \pathLL s_1'\maltese$. Since $\beh M$ is pure, $\pathLL s_1$ is extensible: there exists a proper positive action $\kappa^+$ such that $\pathLL s_1'\kappa^+ \in \visit{M}^x$. Then, note that $\pathLL s'\kappa^+$ is a path: indeed, since $\pathLL s_1'\kappa^+$ is a path, the justification of $\kappa^+$ appears in $\view{\pathLL s_1'} = \view{\pathLL s'}$. Moreover $\pathLL s'\kappa^+ \in \kappa_\bullet(\visit{M}^x \shuffle \visit{N}^y)$, let us show that $\pathLL s'\kappa^+ \in V_{\beh M\otimes \beh N}$. Let $\pathLL t \in \visit{M}^x \shuffle \visit{N}^y$ and $\kappa^-$ a negative action such that $\overline{\kappa_\bullet\pathLL t \kappa^-}$ is a path of $\completed{\dual{\pathLL s'\kappa^+}}$, and by Proposition~\ref{visit-tensor} it suffices to show that $\pathLL t \kappa^- \maltese \in \visit{M}^x \shuffle \visit{N}^y$. But $\completed{\dual{\pathLL s'\kappa^+}} = \completed{\overline{\pathLL s'\kappa^+}\maltese} = \completed{\overline{\pathLL s'}} = \completed{\dual{\pathLL s}}$, therefore $\overline{\kappa_\bullet\pathLL t \kappa^-}$ is a path of $\completed{\dual{\pathLL s}}$. Since $\pathLL s \in V_{\beh M \otimes \beh N}$, by Proposition~\ref{visit-tensor} we get $\pathLL t \kappa^- \maltese \in \visit{M}^x \shuffle \visit{N}^y$. Finally $\pathLL s'\kappa^+ \in V_{\beh M\otimes \beh N}$, hence $\pathLL s$ is extensible. \end{proof} \begin{proof}[Proof (Proposition~\ref{prop_arrow_princ})] Since $\beh N$ and $\beh P$ are regular, $\visit{(N \multimap P)^\perp} = \kappa_\bullet (\visit{N}^x \shuffle \dual{\visit{P}^y}) \cup \{\maltese\}$ by Corollary~\ref{visit-arrow-reg}. Let $\pathLL s \in V_{({\beh N \multimap \beh P})^\perp}$ and suppose $\dual{\pathLL s}$ is $\maltese$-ended, i.e., $\pathLL s$ is $\maltese$-free. We must show that either $\dual{\pathLL s}$ is extensible or $\dual{\pathLL s}$ is not well-bracketed. The path $\pathLL s$ is of the form $\pathLL s = \kappa_\bullet\pathLL s'$ and there exist $\maltese$-free paths $\pathLL t \in \visit{N}^x$ and $\pathLL u \in \dual{\visit{P}^y}$ such that $\pathLL s' \in \pathLL t \shuffle \pathLL u$. We are in one of the following situations: \begin{itemize} \item Either $\dual{\pathLL u} \in \visit{P}^y$ is not well-bracketed, hence neither is $\dual{\pathLL s}$. \item Otherwise, since $\beh P$ is quasi-pure, $\dual{\pathLL u} = \overline{\pathLL u}\maltese$ is extensible, i.e., there exists a proper positive action $\kappa_{\pathLL u}^+$ such that $\overline{\pathLL u}\kappa_{\pathLL u}^+ \in \visit{P}^y$. If $\overline{\pathLL s}\kappa_{\pathLL u}^+$ is a path, then $\overline{\pathLL s}\kappa_{\pathLL u}^+ \in \visit{N \multimap P}$, hence $\dual{\pathLL s}$ is extensible: indeed, $\dual{\overline{\pathLL s}\kappa_{\pathLL u}^+} = \pathLL s\overline{\kappa_{\pathLL u}^+}\maltese \in \kappa_\bullet(\pathLL t \shuffle \pathLL u \overline{\kappa_{\pathLL u}^+}\maltese)$, thus $\pathLL s\overline{\kappa_{\pathLL u}^+}\maltese \in \kappa_\bullet(\visit{N}^x \shuffle \dual{\visit{P}^y})$. 
In the case $\overline{\pathLL s}\kappa_{\pathLL u}^+$ is not a path, this means that $\kappa_{\pathLL u}^+$ is justified by an action $\kappa_{\pathLL u}^-$ that does not appear in $\view{\overline{\pathLL s}}$, thus we have something of the form: \begin{center} \begin{tikzpicture} \node (a) {$\overline{\pathLL s}\kappa_{\pathLL u}^+ = \hspace{.5cm} \dots \hspace{.5cm} \kappa^+ \hspace{.5cm} \dots \hspace{.5cm} \kappa_{\pathLL u}^- \hspace{.5cm} \dots \hspace{.5cm} \kappa^- \hspace{.5cm} \dots \hspace{.5cm} \kappa_{\pathLL u}^+$} ; \draw[->,blue, thick] ($(a)+(4.3,.3)$) to [ out = 130, in = 50] ($(a)+(.3,.3)$) ; \draw[->,blue, thick] ($(a)+(2.2,.3)$) to [ out = 130, in = 50] ($(a)+(-1.8,.3)$) ; \draw[dotted, orange, thick] ($(a)+(4.3,-.3)$) to [ out = -90, in = -90] ($(a)+(3.4,-.3)$) ; \draw[dotted, orange, thick] ($(a)+(3.4,-.3)$) to [ out = -90, in = -90] ($(a)+(2.9,-.3)$) ; \draw[dotted, orange, thick] ($(a)+(2.9,-.3)$) to [ out = -90, in = -90] ($(a)+(2.2,-.3)$) ; \draw[-, orange, thick] ($(a)+(2.2,-.3)$) to [ out = -130, in = -50] ($(a)+(-1.8,-.3)$) ; \draw[->, dotted, orange, thick] ($(a)+(-1.8,-.3)$) to [ out = -90, in = -90] ($(a)+(-2.8,-.3)$) ; \node[blue] (just1) at ($(a)+(2.9,.8)$) {just.} ; \node[blue] (just2) at ($(a)+(-.4,.8)$) {just.} ; \node[orange] (view) at ($(a)+(2.6,-1)$) {view $\view{\overline{\pathLL s}\kappa_{\pathLL u}^+}$} ; \end{tikzpicture} \end{center} If $\kappa^-$ comes from $\overline{\pathLL t}$, and thus also $\kappa^+$, then $\overline s$ is not well-bracketed, indeed: since $\kappa_{\pathLL u}^-$ is hereditarily justified by $\overline{\kappa_\bullet}$ and by no action from $\overline{\pathLL t}$, we have: \begin{center} \begin{tikzpicture} \node (a) {$\overline{\pathLL s} = \hspace{.5cm} \overline{\kappa_\bullet} \hspace{.5cm} \dots \hspace{.5cm} \kappa^+ \hspace{.5cm} \dots \hspace{.5cm} \kappa_{\pathLL u}^- \hspace{.5cm} \dots \hspace{.5cm} \kappa^- \hspace{.5cm} \dots$} ; \draw[->,blue, thick] ($(a)+(2.8,.3)$) to [out = 130, in = 50] ($(a)+(-.9,.3)$) ; \draw[dotted, blue, thick] ($(a)+(.9,-.3)$) to [ out = -90, in = -90] ($(a)+(.2,-.3)$) ; \draw[dotted, blue, thick] ($(a)+(.2,-.3)$) to [ out = -90, in = -90] ($(a)+(-.3,-.3)$) ; \draw[blue, thick] ($(a)+(-.3,-.3)$) to [ out = -90, in = -90] ($(a)+(-1.7,-.3)$) ; \draw[dotted, blue, thick] ($(a)+(-1.7,-.3)$) to [ out = -90, in = -90] ($(a)+(-2.4,-.3)$) ; \draw[->, dotted, blue, thick] ($(a)+(-2.4,-.3)$) to [out = -90, in = -90] ($(a)+(-2.9,-.3)$) ; \node[blue] (just1) at ($(a)+(.9,.8)$) {just.} ; \node[blue] (just2) at ($(a)+(-.9,-1)$) {just.} ; \end{tikzpicture} \end{center} So suppose now that $\kappa^-$ comes from $\overline{\pathLL u}$, thus also $\kappa^+$. We know that $\view{\overline{\pathLL u}}$ contains $\kappa_{\pathLL u}^- = \mathrm{just}(\kappa_{\pathLL u}^+)$, thus in particular $\view{\overline{\pathLL u}}$ does not contain $\kappa^-$; on the contrary, we have seen that $\view{\overline{\pathLL s}}$ contains $\kappa^-$. By definition of the view of a sequence, this necessarily means that, in $\overline{\pathLL s}$, between the action $\kappa^-$ and the end of the sequence, the following happens: $\view{\overline{\pathLL s}}$ comes across an action $\alpha_{\pathLL t}^-$ from $\overline{\pathLL t}$, justified by an action $\alpha_{\pathLL t}^+$ also from $\overline{\pathLL t}$, making the view miss at least one action $\alpha_{\pathLL u}$ from $\overline{\pathLL u}$ appearing in $\view{\overline{\pathLL u}}$, as depicted below. 
\begin{center} \begin{tikzpicture} \node (a) {$\overline{\pathLL s} = \hspace{.5cm} \overline{\kappa_\bullet} \hspace{.5cm} \dots \hspace{.5cm} \kappa^- \hspace{.5cm} \dots \hspace{.5cm} \alpha_{\pathLL t}^+ \hspace{.5cm} \dots \hspace{.5cm} \alpha_{\pathLL u} \hspace{.5cm} \dots \hspace{.5cm} \alpha_{\pathLL t}^- \hspace{.5cm} \dots$} ; \draw[dotted, blue, thick] ($(a)+(1.8,.3)$) to [out = 130, in = 50] ($(a)+(.8,.3)$) ; \draw[blue, thick] ($(a)+(.8,.3)$) to [ out = 90, in = 90] ($(a)+(-1.1,.3)$) ; \draw[dotted, blue, thick] ($(a)+(-1.1,.3)$) to [out = 90, in = 90] ($(a)+(-2.9,.3)$) ; \draw[->, dotted, blue, thick] ($(a)+(-2.9,.3)$) to [out = 90, in = 90] ($(a)+(-3.9,.3)$) ; \node[blue] (just1) at ($(a)+(1.3,.8)$) {just.} ; \draw[dotted, orange, thick] ($(a)+(4.8,-.3)$) to [ out = -90, in = -90] ($(a)+(3.8,-.3)$) ; \draw[-, orange, thick] ($(a)+(3.8,-.3)$) to [ out = -130, in = -50] ($(a)+(0,-.3)$) ; \draw[dotted, orange, thick] ($(a)+(0,-.3)$) to [ out = -90, in = -90] ($(a)+(-1.4,-.3)$) ; \draw[dotted, orange, thick] ($(a)+(-1.4,-.3)$) to [ out = -90, in = -90] ($(a)+(-2.1,-.3)$) ; \draw[dotted, orange, thick] ($(a)+(-2.1,-.3)$) to [ out = -90, in = -90] ($(a)+(-2.8,-.3)$) ; \draw[->, dotted, orange, thick] ($(a)+(-2.8,-.3)$) to [ out = -90, in = -90] ($(a)+(-3.9,-.3)$) ; \node[orange] (view) at ($(a)+(1.9,-.8)$) {view $\view{\overline{\pathLL s}}$} ; \end{tikzpicture} \end{center} Since $\alpha_{\pathLL u}$ is hereditarily justified by $\overline{\kappa_\bullet}$ and by no action from $\overline{\pathLL t}$, the path $\overline{\pathLL s}$ is not well-bracketed: the justifications of $\alpha_{\pathLL u}$ and of $\alpha_{\pathLL t}^-$ intersect. To sum up, we have proved that in the case when $\dual{\pathLL u} = \overline{\pathLL u}\maltese$ is extensible, either $\dual{\pathLL s}$ is extensible too or it is not well-bracketed. \end{itemize} Hence $\beh N \multimap \beh P$ is quasi-pure. \end{proof} \section{Proofs of Section~\ref{sec-induct}} In this section we prove: \begin{itemize} \item that the functions $\phi^{A}_{\sigma}$ are Scott-continuous (Proposition~\ref{prop_scott_conti}), \item internal completeness for particular infinite unions of behaviours (Theorem~\ref{thm_union_beh}), \item two lemmas of Subsection~\ref{reg_pure_data} (Lemmas~\ref{lem_incarn_hier} and \ref{lem_cb_basis}). \end{itemize} \subsection{Proof of Proposition~\ref{prop_scott_conti}} \begin{lemma} \label{lem_intcomp_set} Let $E, F$ be sets of atomic negative designs and $G$ be a set of atomic positive designs. \begin{enumerate} \item $\shpos (E^{\perp\perp}) = \blacktriangledown\langle E\rangle^{\perp\perp}$ \item $\shneg (G^{\perp\perp}) = \setst{\design n}{\proj{\design n}{\blacktriangle} \in \blacktriangle(x).G^x}^{\perp\perp}$ \item $(E^{\perp\perp}) \oplus (F^{\perp\perp}) = (\iota_1\langle E\rangle \cup \iota_2\langle F\rangle)^{\perp\perp}$ \item $(E^{\perp\perp}) \otimes (F^{\perp\perp}) = \bullet\langle E, F\rangle^{\perp\perp}$ \end{enumerate} \end{lemma} \begin{proof} We prove (1) and (2), the other cases are very similar to (1). 
\begin{enumerate} \item $\blacktriangledown\langle E\rangle^{\perp\perp} = \setst{\design n}{\proj{\design n}{\blacktriangle} \in \blacktriangle(x).(E^\perp)^x}^\perp = (\shneg (E^\perp))^\perp = (\shpos (E^{\perp\perp}))^{\perp\perp} = \shpos (E^{\perp\perp})$, \item $\setst{\design n}{\proj{\design n}{\blacktriangle} \in \blacktriangle(x).G^x}^{\perp\perp} = \setst{\blacktriangledown\langle\design m\rangle}{\design m \in G^\perp}^\perp = (\shpos (G^\perp))^\perp = \shneg (G^{\perp\perp})$, \end{enumerate} using the definition of the orthogonal, internal completeness, and Lemma~\ref{lem-inc-shneg}. \end{proof} \begin{proof}[Proof (Proposition~\ref{prop_scott_conti})] By induction on $A$, we prove that for every $X$ and every $\sigma$ the function $\phi^{A}_{\sigma}$ is continuous. Note that $\phi^{A}_{\sigma}$ is continuous if and only if for every directed subset $\mathbb P \subseteq \mathcal B^+$ we have $\bigvee_{\beh P \in \mathbb P} (\interpret{A}^{\sigma, X \mapsto \beh P}) = \interpret{A}^{\sigma, X \mapsto \bigvee \mathbb P}$. The cases $A = Y \in \mathcal V$ and $A = a \in \mathcal S$ are trivial, and the case $A = A_1 \oplus^+ A_2$ is very similar to the tensor, hence we only treat the two remaining cases. Let $\mathbb P \subseteq \mathcal B^+$ be directed. \begin{itemize} \item Suppose $A = A_1 \otimes^+ A_2$, thus $\interpret{A}^{\sigma, X \mapsto \beh P} = \interpret{A_1}^{\sigma, X \mapsto \beh P} \otimes^+ \interpret{A_2}^{\sigma, X \mapsto \beh P}$, with both functions $\phi^{A_i}_{\sigma}: \beh P \mapsto \interpret{A_i}^{\sigma, X \mapsto \beh P}$ continuous by induction hypothesis. For any positive behaviour $\beh P$, let us write $\sigma_{\beh P}$ instead of $\sigma, X \mapsto \beh P$. We have \[ \bigvee_{\beh P \in \mathbb P} \interpret{A}^{\sigma_{\beh P}} = (\bigcup_{\beh P \in \mathbb P} \interpret{A}^{\sigma_{\beh P}})^{\perp\perp} = (\bigcup_{\beh P \in \mathbb P} (\interpret{A_1}^{\sigma_{\beh P}} \otimes^+ \interpret{A_2}^{\sigma_{\beh P}}))^{\perp\perp} \] Let us show that \[\bigcup_{\beh P \in \mathbb P} (\interpret{A_1}^{\sigma_{\beh P}} \otimes^+ \interpret{A_2}^{\sigma_{\beh P}}) = \bullet \langle \bigcup_{\beh P' \in \mathbb P} \shneg \interpret{A_1}^{\sigma_{\beh P'}}, \bigcup_{\beh P'' \in \mathbb P} \shneg \interpret{A_2}^{\sigma_{\beh P''}} \rangle \cup \{\maltese\} \tag{$*$} \label{star}\] By internal completeness we have $\interpret{A_1}^{\sigma_{\beh P}} \otimes^+ \interpret{A_2}^{\sigma_{\beh P}} = \bullet \langle \shneg \interpret{A_1}^{\sigma_{\beh P}}, \shneg \interpret{A_2}^{\sigma_{\beh P}} \rangle \cup \{\maltese\}$ for every $\beh P \in \mathbb P$. The inclusion $(\subseteq)$ of (\ref{star}) is then immediate, so let us prove $(\supseteq)$. First, indeed, $\maltese$ belongs to the left side. Let $\beh P', \beh P'' \in \mathbb P$, let $\design m \in \shneg \interpret{A_1}^{\sigma_{\beh P'}}$, $\design n \in \shneg \interpret{A_2}^{\sigma_{\beh P''}}$, and let us show that $\bullet \langle \design m, \design n \rangle \in \interpret{A_1}^{\sigma_{\beh P}} \otimes^+ \interpret{A_2}^{\sigma_{\beh P}}$ where $\beh P = \beh P' \vee \beh P''$ (note that $\beh P \in \mathbb P$ since $\mathbb P$ is directed). By induction hypothesis, $\phi^{A_1}_{\sigma}$ is continuous, thus in particular increasing; since $\beh P' \subseteq \beh P$, it follows that $\interpret{A_1}^{\sigma_{\beh P'}} = \phi^{A_1}_{\sigma}(\beh P') \subseteq \phi^{A_1}_{\sigma}(\beh P) = \interpret{A_1}^{\sigma_{\beh P}}$. 
Similarly, $\interpret{A_2}^{\sigma_{\beh P''}} \subseteq \interpret{A_2}^{\sigma_{\beh P}}$. We get $\bullet \langle \design m, \design n \rangle \in \bullet \langle \shneg \interpret{A_1}^{\sigma_{\beh P}}, \shneg \interpret{A_2}^{\sigma_{\beh P}} \rangle \subseteq \interpret{A_1}^{\sigma_{\beh P}} \otimes^+ \interpret{A_2}^{\sigma_{\beh P}}$, using internal completeness for $\shneg$, which proves (\ref{star}). Using internal completeness, Lemma~\ref{lem_intcomp_set} and induction hypothesis, we deduce \begin{align*} (\bigcup_{\beh P \in \mathbb P} (\interpret{A_1}^{\sigma_{\beh P}} \otimes^+ \interpret{A_2}^{\sigma_{\beh P}}))^{\perp\perp} & = \bullet \langle \bigcup_{\beh P' \in \mathbb P} \shneg \interpret{A_1}^{\sigma_{\beh P'}}, \bigcup_{\beh P'' \in \mathbb P} \shneg \interpret{A_2}^{\sigma_{\beh P''}} \rangle^{\perp\perp} \\ & = (\bigcup_{\beh P' \in \mathbb P} \shneg \interpret{A_1}^{\sigma_{\beh P'}})^{\perp\perp} \otimes (\bigcup_{\beh P'' \in \mathbb P} \shneg \interpret{A_2}^{\sigma_{\beh P''}})^{\perp\perp} \\ & = (\bigcup_{\beh P' \in \mathbb P} \interpret{A_1}^{\sigma_{\beh P'}})^{\perp\perp} \otimes^+ (\bigcup_{\beh P'' \in \mathbb P} \interpret{A_2}^{\sigma_{\beh P''}})^{\perp\perp} \\ & = \interpret{{A_1}}^{\sigma, X \mapsto \bigvee \mathbb P} \otimes^+ \interpret{{A_2}}^{\sigma, X \mapsto \bigvee \mathbb P} \\ & = \interpret{A}^{\sigma, X \mapsto \bigvee \mathbb P} \end{align*} Consequently $\phi^{A}_{\sigma}$ is continuous. \item If $A = \mu Y.A_0$, define $f_0 : \beh Q \mapsto \interpret{A_0}^{\sigma, X \mapsto \bigvee \mathbb P, Y \mapsto \beh Q}$ and, for every $\beh P \in \mathcal B^+$, $f_{\beh P} : \beh Q \mapsto \interpret{A_0}^{\sigma, X \mapsto \beh P, Y \mapsto \beh Q}$. Those functions are continuous by induction hypothesis, thus using Kleene fixed point theorem we have $\mathrm{lfp}(f_0) = \bigvee_{n \in \mathbb N}{f_0}^n(\maltese) \mbox{ and } \mathrm{lfp}(f_{\beh P}) = \bigvee_{n \in \mathbb N}{f_{\beh P}}^n(\maltese)$. Therefore $\bigvee_{\beh P \in \mathbb P} (\interpret{A}^{\sigma, X \mapsto \beh P}) = \bigvee_{\beh P \in \mathbb P} (\mathrm{lfp}(f_{\beh P})) = \bigvee_{\beh P \in \mathbb P} (\bigvee_{n \in \mathbb N}{f_{\beh P}}^n(\maltese)) = \bigvee_{n \in \mathbb N} (\bigvee_{\beh P \in \mathbb P} {f_{\beh P}}^n(\maltese))$. For every $\beh Q \in \mathcal B^+$ the function $g_{\beh Q}: \beh P \mapsto f_{\beh P}(\beh Q)$ is continuous by induction hypothesis, hence $f_0(\beh Q) = \bigvee_{\beh P \in \mathbb P} f_{\beh P}(\beh Q)$. From this, we prove easily by induction on $m$ that for every $\beh Q \in \mathcal B^+$ we have ${f_0}^m(\beh Q) = \bigvee_{\beh P \in \mathbb P} {f_{\beh P}}^m(\beh Q)$. Thus $\bigvee_{\beh P \in \mathbb P} (\interpret{A}^{\sigma, X \mapsto \beh P}) = \bigvee_{n \in \mathbb N}{f_0}^n(\maltese) = \mathrm{lfp}(f_0) = \interpret{A}^{\sigma, X \mapsto \bigvee \mathbb P}$. We conclude that the function $\phi^{A}_{\sigma}$ is continuous. \end{itemize} \end{proof} \subsection{Proof of Theorem~\ref{thm_union_beh}} Before proving Theorem~\ref{thm_union_beh} we need some lemmas. Suppose $(\beh A_n)_{n \in \mathbb N}$ is an infinite sequence of regular behaviours such that for all $n \in \mathbb N$, $|\beh A_n| \subseteq |\beh A_{n+1}|$; the simplicity hypothesis is not needed for now. Let us note $\beh A = \bigcup_{n \in \mathbb N} \beh A_n$. 
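For instance, anticipating Subsection~\ref{reg_pure_data} where these results are applied, one may keep in mind the sequence of iterates $\beh A_n = (\phi^{A}_{\sigma})^n(\maltese)$ approximating the interpretation of an inductive type $\mu X.A$: items (2) and (3) of Lemma~\ref{lem_incarn_hier} guarantee precisely that such a sequence satisfies the present hypotheses (and, further on, the simplicity hypothesis required for Theorem~\ref{thm_union_beh}).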
Notice that the definition of visitable paths can harmlessly be extended to any set $E$ of designs of the same polarity, even if it is not a behaviour; the same applies to the definition of incarnation, provided that $E$ satisfies the following: if $\design d, \design e_1, \design e_2 \in E$ are cut-free designs such that $\design e_1 \sqsubseteq \design d$ and $\design e_2 \sqsubseteq \design d$ then there exists $\design e \in E$ cut-free such that $\design e \sqsubseteq \design e_1$ and $\design e \sqsubseteq \design e_2$. In particular, as a union of behaviours, $\beh A$ satisfies this condition. \begin{lemma} \label{0lem_visit_hier} \label{0lem_visit_union} \label{0lem_incarn_union} \begin{enumerate} \item $\forall n \in \mathbb N$, $V_{\beh A_n} \subseteq V_{\beh A_{n+1}}$. \item $V_{\bigcup_{n \in \mathbb N} \beh A_n} = \bigcup_{n \in \mathbb N} V_{\beh A_n}$. \item $|\bigcup_{n \in \mathbb N} \beh A_n| = \bigcup_{n \in \mathbb N} |\beh A_n|$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Fix $n$ and let $\pathLL s \in V_{\beh A_n}$. There exists $\design d \in |\beh A_n|$ such that $\pathLL s$ is a path of $\design d$. Since $|\beh A_n| \subseteq |\beh A_{n+1}|$ we have $\design d \in |\beh A_{n+1}|$, thus by regularity of $\beh A_{n+1}$, $\pathLL s \in V_{\beh A_{n+1}}$. \item $(\subseteq)$ Let $\pathLL s \in \visit A$. There exist $n \in \mathbb N$ and $\design d \in |\beh A_n|$ such that $\pathLL s$ is a path of $\design d$. By regularity of $\beh A_n$ we have $\pathLL s \in V_{\beh A_n}$. \\ $(\supseteq)$ Let $m \in \mathbb N$ and $\pathLL s \in V_{\beh A_m}$. For all $n \ge m$, $V_{\beh A_m} \subseteq V_{\beh A_n}$ by the previous item, thus $\pathLL s \in V_{\beh A_n}$. Hence if we take $\design e = \completed{\dual{\pathLL s}}$, we have $\design e \in {\beh A_n}^\perp$ for all $n \ge m$ by monotonicity. We deduce $\design e \in \bigcap_{n \ge m}{\beh A_n}^\perp = (\bigcup_{n \ge m} \beh A_n)^\perp = (\bigcup_{n \in \mathbb N} \beh A_n)^\perp = \beh A^\perp$. Let $\design d \in \beh A_m$ be such that $\pathLL s$ is a path of $\design d$; we have $\design d \in \beh A$ and $\design e \in \beh A^\perp$, thus $\interseq{\design d}{\design e} = \pathLL s \in \visit A$. \item $(\subseteq)$ Let $\design d$ be cut-free and minimal for $\sqsubseteq$ in $\beh A$. There exists $m \in \mathbb N$ such that $\design d \in \beh A_m$. Thus $\design d$ is minimal for $\sqsubseteq$ in $\beh A_m$, otherwise it would not be minimal in $\beh A$; hence the result. \\ $(\supseteq)$ Let $m \in \mathbb N$, and let $\design d \in |\beh A_m|$. By hypothesis, $\design d \in |\beh A_n|$ for all $n \ge m$. Suppose $\design d$ is not in $|\beh A|$, so there exists $\design d' \in \beh A$ such that $\design d' \sqsubseteq \design d$ and $\design d' \neq \design d$. In this case, there exists $n \ge m$ such that $\design d' \in \beh A_n$, but this contradicts the fact that $\design d \in |\beh A_n|$. \end{enumerate} \end{proof} \begin{lemma} \label{0lem_visit_double} $V_{\bigcup_{n \in \mathbb N} \beh A_n} = \dual{V_{(\bigcup_{n \in \mathbb N} \beh A_n)^\perp}} = V_{(\bigcup_{n \in \mathbb N} \beh A_n)^{\perp\perp}}$. \end{lemma} \begin{proof} In this proof we use the alternative definition of regularity (Proposition~\ref{reg2}). We prove $\visit A = \dual{\visit{A^\perp}}$, and the result will follow from the fact that for any behaviour $\beh B$ (in particular if $\beh B = \beh A^{\perp\perp}$) we have $\dual{\visit{B^\perp}} = \visit B$.
First note that the inclusion $\visit A \subseteq \dual{\visit{A^\perp}}$ is immediate. Let $\pathLL s \in \visit{A^\perp}$ and let us show that $\dual{\pathLL s} \in \visit A$. Let $\design e \in |\beh A^\perp|$ such that $\pathLL s$ is a path of $\design e$. By Lemma~\ref{triv-view-path} and the remark following it, $\pathLL s$ is in the shuffle of anti-shuffles of trivial views $\viewseq t_1, \dots, \viewseq t_k$ of $\beh A^\perp$. For every $i \le k$, suppose $\viewseq t_i = \triv{\kappa_i}$; necessarily, there exists a design $\design d_i \in \beh A$ such that $\kappa_i$ occurs in $\interseq{\design e}{\design d_i}$, i.e., such that $\viewseq t_i$ is a subsequence of $\interseq{\design e}{\design d_i}$, otherwise $\design e$ would not be in the incarnation of $\beh A^\perp$ (it would not be minimal). Let $n$ be big enough such that $\design d_1, \dots, \design d_k \in \beh A_n$, and note that in particular $\design e \in {\beh A_n}^\perp$. For all $i$, $\dual{\viewseq t_i}$ is a trivial view of $|\design d_i|_{\beh A_n}$, thus it is a trivial view of $\beh A_n$. By regularity of $\beh A_n$ we have $\dual{\viewseq t_i} \in V_{\beh A_n}$. Since $\dual{\pathLL s}$ is in the anti-shuffle of shuffles of $\dual{\viewseq t_1}, \dots, \dual{\viewseq t_k}$, we have $\dual{\pathLL s} \in V_{\beh A_n}$ using regularity again. Therefore $\dual{\pathLL s} \in \visit A$ by Lemma~\ref{0lem_visit_union}. \end{proof} \begin{lemma} \label{0lem_union_reg} $(\bigcup_{n \in \mathbb N} \beh A_n)^\perp$ and $(\bigcup_{n \in \mathbb N} \beh A_n)^{\perp\perp}$ are regular. \end{lemma} \begin{proof} Let us show $\beh A^\perp$ is regular using the equivalent definition (Proposition~\ref{reg2}). \begin{itemize} \item Let $\viewseq t$ be a trivial view of $\beh A^\perp$. By a similar argument as in the proof above, there exists $n \in \mathbb N$ such that $\dual{\viewseq t}$ is a trivial view of $\beh A_n$, thus $\dual{\viewseq t} \in V_{\beh A_n} \subseteq \visit A$. By Lemma~\ref{0lem_visit_double} $\viewseq t \in \visit{A^\perp}$. \item Let $\pathLL s, \pathLL t \in \visit{A^\perp}$. By Lemma~\ref{0lem_visit_double}, $\dual{\pathLL s}, \dual{\pathLL t} \in \visit A$. By Lemma~\ref{0lem_visit_union}(2), there exists $n \in \mathbb N$ such that $\dual{\pathLL s}, \dual{\pathLL t} \in V_{\beh A_n}$, thus by regularity of $\beh A_n$ we have $\dual{\pathLL s} \text{\rotatebox[origin=c]{180}{$\shuffle$}} \dual{\pathLL t}$, $\dual{\pathLL s} \shuffle \dual{\pathLL t} \subseteq V_{\beh A_n} \subseteq \visit A$, in other words $\dual{\pathLL s \shuffle \pathLL t}$, $\dual{\pathLL s \text{\rotatebox[origin=c]{180}{$\shuffle$}} \pathLL t} \subseteq \visit A$. By Lemma~\ref{0lem_visit_double} we deduce $\pathLL s \shuffle \pathLL t$, $\pathLL s \text{\rotatebox[origin=c]{180}{$\shuffle$}} \pathLL t \subseteq \visit{A^\perp}$, hence $\visit{A^\perp}$ is stable under shuffle and anti-shuffle. \end{itemize} Finally $\beh A^\perp$ is regular. We deduce that $\beh A^{\perp\perp}$ is regular since regularity is stable under orthogonality. \end{proof} Let us introduce some more notions for next proof. An \defined{\boldmath$\infty$-path} (resp. \defined{\boldmath$\infty$-view}) is a finite or infinite sequence of actions satisfying all the conditions of the definition of path (resp. view) but the requirement of finiteness. In particular, a finite $\infty$-path (resp. $\infty$-view) is a path (resp. a view). An \defined{\boldmath$\infty$-path} (resp. 
\defined{\boldmath$\infty$-view}) \defined{of} a design $\design d$ is such that each of its positive-ended prefixes is a path (resp. a view) of $\design d$. We call \defined{infinite chattering} a closed interaction which diverges because the computation never ends; note that infinite chattering occurs in the interaction between two atomic designs $\design p$ and $\design n$ if and only if there exists an infinite $\infty$-path $\pathLL s$ of $\design p$ such that $\dual{\pathLL s}$ is an $\infty$-path of $\design n$ (where, when $\pathLL s$ is infinite, $\dual{\pathLL s}$ is obtained from $\pathLL s$ by simply reversing the polarities of all the actions). Given an infinite $\infty$-path $\pathLL s$, the design $\completed{\pathLL s}$ is constructed similarly to the case when $\pathLL s$ is finite (see \textsection~\ref{ord-mono}). For the proof of the theorem, suppose now that the behaviours $(\beh A_n)_{n \in \mathbb N}$ are simple. Remark that the second condition of simplicity implies in particular that the dual of a path in a design of a simple behaviour is a view. \begin{proof}[Proof (Theorem~\ref{thm_union_beh})] We must show that $\beh A^{\perp\perp} \subseteq \beh A$, since the other inclusion is trivial. Remark the following: given designs $\design d$ and $\design d'$, if $\design d \in \beh A$ and $\design d \sqsubseteq \design d'$ then $\design d' \in \beh A$. Indeed, if $\design d \in \beh A$ then there exists $n \in \mathbb N$ such that $\design d \in \beh A_n$; if moreover $\design d \sqsubseteq \design d'$ then in particular $\design d \preceq \design d'$, and by monotonicity $\design d' \in \beh A_n$, hence $\design d' \in \beh A$. Thus it is sufficient to show $|\beh A^{\perp\perp}| \subseteq \beh A$ since for every $\design d' \in \beh A^{\perp\perp}$ we have $|\design d'| \in |\beh A^{\perp\perp}|$ and $|\design d'| \sqsubseteq \design d'$. So let $\design d \in |\beh A^{\perp\perp}|$ and suppose $\design d \notin \beh A$. First note the following: by Lemmas~\ref{0lem_visit_double} and \ref{0lem_union_reg}, every path $\pathLL s$ of $\design d$ is in $\visit{A^{\perp\perp}} = \visit A$, thus there exists $\design d' \in |\beh A|$ containing $\pathLL s$. We explore separately the possible cases, and show how they all lead to a contradiction. \\ \textbf{If {\boldmath$\design d$} has an infinite number of maximal slices} then: \begin{itemize} \item Either there exists a negative subdesign $\design n = \negdes{a}{\vect{x^a}}{\design p_a}$ of $\design d$ for which there are infinitely many names $a \in \mathcal A$ such that $\design p_a \neq \Omega$. In this case, let $\viewseq v$ be the view of $\design d$ such that for every action $\kappa^-$ among the first actions of $\design n$, $\viewseq v\kappa^-$ is the prefix of a view of $\design d$. All such sequences $\viewseq v\kappa^-$ being prefixes of paths of $\design d$, we deduce by regularity of $\beh A^{\perp\perp}$ and using Lemma~\ref{daimon_visit} that $\viewseq v\kappa^-\maltese \in \visit{A^{\perp\perp}}$. Let $\design d' \in |\beh A|$ be such that $\viewseq v$ is a view of $\design d'$. Since $\design d'$ is also in $\beh A^{\perp\perp}$, we deduce by Lemma~\ref{nec} that for every action $\kappa^-$ among the first actions of $\design n$, $\viewseq v\kappa^-$ is the prefix of a view of $\design d'$. Thus $\design d'$ has an infinite number of slices: contradiction.
\item Or we can find an infinite $\infty$-view $\viewseq v = (\kappa^-_0)\kappa^+_1\kappa^-_1\kappa^+_2\kappa^-_2\kappa^+_3\kappa^-_3 \dots$ of $\design d$ (the first action $\kappa^-_0$ being optional depending on the polarity of $\design d$) satisfying the following: there are infinitely many $i \in \mathbb N$ such that $\kappa^-_i$ is one of the first actions of a negative subdesign $\negdes{a}{\vect{x^a}}{\design p_a}$ of $\design d$ with at least two names $a \in \mathcal A$ such that $\design p_a \neq \Omega$. Let $\viewseq v_i$ be the prefix of $\viewseq v$ ending on $\kappa^+_i$. There is no design $\design d' \in |\beh A|$ containing $\viewseq v$, indeed: in this case, for all $i$ and every negative action $\kappa^-$ such that $\viewseq v_i\kappa^-$ is a prefix of a view of $\design d$, $\viewseq v_i\kappa^-$ would be a prefix of a view of $\design d'$ by Lemma~\ref{nec}, thus $\design d'$ would have an infinite number of slices, which is impossible since the $\beh A_n$ are simple. Thus consider $\design e = \completed{\dual{\viewseq v}}$: since all the $\viewseq v_i$ are views of designs in $|\beh A| = \bigcup_{n \in \mathbb N} |\beh A_n|$ and since the $\beh A_n$ are simple, the sequences $\dual{\viewseq v_i}$ are views, thus $\dual{\viewseq v}$ is an $\infty$-view. Therefore an interaction between a design $\design d' \in \beh A$ and $\design e$ necessarily eventually converges by reaching a daimon of $\design e$, indeed: infinite chattering is impossible since we cannot follow $\viewseq v$ forever, and interaction cannot fail after following a finite portion of $\viewseq v$ since those finite portions $\viewseq v_i$ are in $\visit A$. Hence $\design e \in \beh A^\perp$. But $\design d \not \perp \design e$, because of infinite chattering following $\viewseq v$. Contradiction. \end{itemize} \textbf{If {\boldmath$\design d$} has a finite number of maximal slices} $\design c_1, \dots, \design c_k$ then for every $i \le k$ there exists an $\infty$-path $\pathLL s_i$ that visits all the positive proper actions of $\design c_i$. Indeed, any (either infinite or positive-ended) sequence $\pathLL s$ of proper actions in a slice $\design c \sqsubseteq \design d$, without repetition, such that polarities alternate and the views of prefixes of $\pathLL s$ are views of $\design c$, is an $\infty$-path: \begin{itemize} \item (Linearity) is ensured by the fact that we are in only one slice; \item (O-visibility) is satisfied since positive actions of $\design d$, thus also of $\design c$, are justified by the immediately preceding negative action (a condition true in $|\beh A|$, thus also satisfied in $\design d$ because all its views are views of designs in $|\beh A|$); \item (P-visibility) is automatically satisfied since $\pathLL s$ is a walk in the tree representing a design. \end{itemize} For example, $\pathLL s$ can travel in the slice $\design c$ as a breadth-first search on pairs of nodes $(\kappa^-,\kappa^+)$ such that $\kappa^+$ is just above $\kappa^-$ in the tree, and $\kappa^+$ is proper. There are then two cases: \begin{itemize} \item Either for all $i$, there exist $n_i \in \mathbb N$ and $\design d_i \in \beh A_{n_i}$ such that $\pathLL s_i$ is an $\infty$-path of $\design d_i$.
Without loss of generality we can even suppose that $\design c_i \sqsubseteq \design d_i$: if this is not the case, replace some positive subdesigns (possibly $\Omega$) of $\design d_i$ by $\maltese$ so as to obtain $\design d'_i$ such that $\design c_i \sqsubseteq \design d'_i$, and note that indeed $\design d'_i \in \beh A_{n_i}$ since $\design d_i \preceq \design d'_i$. Let $N = \mathrm{max}_{1 \le i \le k} (n_i)$. Since $\design d \not \in \beh A$, thus in particular $\design d \not \in \beh A_N$, there exists $\design e \in \beh A_N^\perp$ such that $\design d \not \perp \design e$. The reason for divergence cannot be infinite chattering, otherwise there would exist an infinite $\infty$-path $\pathLL t$ in $\design d$ such that $\dual{\pathLL t}$ is in $\design e$, and $\pathLL t$ is necessarily in a single slice of $\design d$ (say $\design c_i$) to ensure its linearity; but in this case we would also have $\design d_i \not \perp \design e$ where $\design d_i \in \beh A_N$, impossible. Similarly, for every (finite) path $\pathLL s$ of $\design d$, there exists $i$ such that $\pathLL s$ is a path of $\design c_i$ thus of $\design d_i \in \beh A_N$; this ensures that interaction between $\design d$ and $\design e$ cannot diverge after a finite number of steps either, leading to a contradiction. \item Or there is an $i$ such that the (necessarily infinite) $\infty$-path $\pathLL s_i$ is in no design of $\beh A$. In this case, let $\design e = \completed{\dual{\pathLL s_i}}$ (where $\dual{\pathLL s_i}$ is a view since the $\beh A_n$ are simple), and by an argument similar to the previous one we have $\design e \in \beh A^\perp$ but $\design d \not \perp \design e$ by infinite chattering, contradiction. \end{itemize} \end{proof} \subsection{Proofs of Subsection~\ref{reg_pure_data}} \begin{proof}[Proof (Lemma~\ref{lem_incarn_hier})] By induction on $A$, we prove that for all $X \in \mathcal V$ and $\sigma: \mathrm{FV}(A) \setminus \{X\} \to \mathcal B^+$ simple and regular, the induction hypothesis, consisting of all the following statements, holds: \begin{enumerate} \item for all simple regular behaviours $\beh P, \beh P' \in \mathcal B^+$, if $|\beh P| \subseteq |\beh P'|$ then $|{\phi^{A}_{\sigma}}(\beh P)| \subseteq |{\phi^{A}_{\sigma}}(\beh P')|$; \item for all $n \in \mathbb N$, $|(\phi^{A}_{\sigma})^n(\maltese)| \subseteq |(\phi^{A}_{\sigma})^{n+1}(\maltese)|$; \item for every simple regular behaviour $\beh P \in \mathcal B^+$, ${\phi^{A}_{\sigma}}(\beh P)$ is simple and regular; \item $\interpret{\mu X.A}^{\sigma} = \bigcup_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\maltese)$; \item $|\interpret{\mu X.A}^{\sigma}| = \bigcup_{n \in \mathbb N} |(\phi^{A}_{\sigma})^n(\maltese)|$. \end{enumerate} Let us write $\sigma_{\beh P}$ for $\sigma, X\mapsto \beh P$. Note that the base cases are immediate. If $A = A_1 \oplus^+ A_2$ or $A = A_1 \otimes^+ A_2$ then: \begin{enumerate} \item Follows from the incarnated form of internal completeness (in Theorem~\ref{thm_intcomp_all}). \item Easy by induction on $n$, using the previous item. \item Regularity of ${\phi^{A}_{\sigma}}(\beh P)$ comes from Proposition~\ref{prop_reg_stable}, and simplicity is easy since the structure of the designs in $\interpret{A}^{\sigma_{\beh P}}$ is given by internal completeness.
\item By Corollary~\ref{coro_kleene_sup} we have $\interpret{\mu X.A}^{\sigma} = (\bigcup_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\maltese))^{\perp\perp}$, and by Theorem~\ref{thm_union_beh} we have $(\bigcup_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\maltese))^{\perp\perp} = \bigcup_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\maltese)$ since items (2) and (3) guarantee that the hypotheses of the theorem are satisfied. \item By the previous item and Lemma~\ref{0lem_incarn_union}(3). \end{enumerate} If $A = \mu Y.A_0$ then: \begin{enumerate} \item Suppose $|\beh P| \subseteq |\beh P'|$, where $\beh P$ and $\beh P'$ are simple regular. We have $|{\phi^{A}_{\sigma}}(\beh P)| = |\interpret{\mu Y.A_0}^{\sigma_{\beh P}}| = \bigcup_{n \in \mathbb N} |(\phi^{A_0}_{\sigma_{\beh P}})^n(\maltese)|$ by induction hypothesis (5), and similarly for $\beh P'$. By induction on $n$, we prove that \[|(\phi^{A_0}_{\sigma_{\beh P}})^n(\maltese)| \subseteq |(\phi^{A_0}_{\sigma_{\beh P'}})^n(\maltese)| \tag{$\delta$}\] It is immediate for $n = 0$, and the inductive case is: \begin{align*} |(\phi^{A_0}_{\sigma_{\beh P}})^{n+1}(\maltese)| & = |\phi^{A_0}_{\sigma_{\beh P}}((\phi^{A_0}_{\sigma_{\beh P}})^n(\maltese))| & \\ & \subseteq |\phi^{A_0}_{\sigma_{\beh P}}((\phi^{A_0}_{\sigma_{\beh P'}})^n(\maltese))| & \mbox{ by induction hypotheses (1), (3) and ($\delta$)} \\ & = |\phi^{A_0}_{\sigma, Y \mapsto (\phi^{A_0}_{\sigma_{\beh P'}})^n(\maltese)}(\beh P)|\\ & \subseteq |\phi^{A_0}_{\sigma, Y \mapsto (\phi^{A_0}_{\sigma_{\beh P'}})^n(\maltese)}(\beh P')| & \mbox{ by induction hypotheses (1) and (3)} \\ & = |(\phi^{A_0}_{\sigma_{\beh P'}})^{n+1}(\maltese)| \end{align*} \setcounter{enumi}{2} \item By induction hypotheses (2), (3) and (4) respectively, we have \begin{itemize} \item for all $n \in \mathbb N$, $|(\phi^{A_0}_{\sigma})^n(\maltese)| \subseteq |(\phi^{A_0}_{\sigma})^{n+1}(\maltese)|$, \item for all $n \in \mathbb N$, $(\phi^{A_0}_{\sigma})^n(\maltese)$ is simple regular, \item $\interpret{\mu Y.A_0}^{\sigma} = \bigcup_{n \in \mathbb N}(\phi^{A_0}_{\sigma})^n(\maltese)$. \end{itemize} Consequently, by Corollary~\ref{coro_reg_pur}, $\interpret{\mu Y.A_0}^{\sigma}$ is simple regular. \setcounter{enumi}{1} \item Items (2), (4) and (5) are similar to the cases $A = A_1 \oplus^+ A_2$ and $A = A_1 \otimes^+ A_2$. \end{enumerate} \end{proof} \begin{proof}[Proof (Lemma~\ref{lem_cb_basis})] By induction on $A$: \begin{itemize} \item If $A = a$ then it has basis $\interpret{a} = \beh C_a$. \item If $A = A_1 \oplus^+ A_2$, without loss of generality suppose $A_1$ is steady, with basis $\beh B_1$. Take $\otimes_1 \shneg \beh B_1$ as a basis for $A$, where the connective $\otimes_1$ is defined like $\shpos$ with a different action name: $\otimes_1 \beh N := \iota_1\langle\beh N\rangle^{\perp\perp}$, and by internal completeness $\otimes_1 \beh N = \iota_1\langle\beh N\rangle$. \item If $A = A_1 \otimes^+ A_2$ then both $A_1$ and $A_2$ are steady, with respective bases $\beh B_1$ and $\beh B_2$. The behaviour $\beh B = \beh B_1 \otimes^+ \beh B_2$ is a basis for $A$, indeed: since $\beh B_1$ and $\beh B_2$ are regular, Proposition~\ref{visit-tensor-reg} gives $V_{\beh B_1 \otimes^+ \beh B_2} = \kappa_\bullet(V_{\shneg \beh B_1}^x \shuffle V_{\shneg \beh B_2}^y) \cup \{\maltese\}$ where, by Proposition~\ref{visit-sh}, $V_{\shneg \beh B_i} = \kappa_\blacktriangle V_{\beh B_i}^x \cup \{\epsilon\}$ for $i \in \{1, 2\}$; from this, and using internal completeness, we deduce that $\beh B$ satisfies all the conditions.
\item Suppose $A = \mu X.A_0$, where $A_0$ is steady and has a basis $\beh B_0$; let us show that $\beh B_0$ is also a basis for $A$. \begin{itemize} \item By Proposition~\ref{prop_reg_union}, $\interpret{A}^{\sigma} = \bigcup_{n \in \mathbb N}(\phi^{A_0}_{\sigma})^n(\maltese)$, and since $\beh B_0$ is a basis for $A_0$ we have $\beh B_0 \subseteq \interpret{A_0}^{\sigma, X \mapsto \maltese} = (\phi^{A_0}_{\sigma})(\maltese)$, so indeed $\beh B_0 \subseteq \interpret{A}^{\sigma}$. \item By induction hypothesis, we immediately have that for every path $\pathLL s \in V_{\beh B_0}$, there exists a $\maltese$-free path $\pathLL t \in V_{\beh B_0}^{max}$ extending $\pathLL s$. \item By Lemma~\ref{0lem_visit_union}(2) $V_{\interpret{A}^{\sigma}} = \{\maltese\} \cup \bigcup_{n \in \mathbb N} V_{(\phi^{A_0}_{\sigma})^{n+1}(\maltese)} = \{\maltese\} \cup \bigcup_{n \in \mathbb N} V_{\interpret{A_0}^{\sigma_n}}$ where $\sigma_n = \sigma, X \mapsto (\phi^{A_0}_{\sigma})^n(\maltese)$ has a simple regular image. By induction hypothesis, for all $n \in \mathbb N$, $V_{\beh B_0}^{max} \subseteq V_{\interpret{A_0}^{\sigma_n}}^{max}$, therefore $V_{\beh B_0}^{max} \subseteq V_{\interpret{A}^{\sigma}}^{max}$. \end{itemize} \end{itemize} \end{proof} \section{Proof of Proposition~\ref{prop_main}} \label{wpf} In this section, we prove Proposition~\ref{prop_main}, which first requires several lemmas. Let us denote the set of functional behaviours by $\mathcal F$, and recall that $\mathcal D$ stands for the set of data behaviours. \begin{lemma} \label{lem_datafunc_pure} Let $\beh P \in \mathcal D$, and let $\beh Q$ be a pure regular behaviour. The behaviour $\beh P \multimap^+ \beh Q$ is pure. \end{lemma} \begin{proof} By Proposition~\ref{prop_pure_stable} it suffices to show that $(\shneg \beh P) \multimap \beh Q$ is pure. Remark first that, by construction of data behaviours, the following assertion is satisfied in views (thus also in paths) of $\shneg \beh P$: every proper positive action is justified by the negative action preceding it. By regularity and Corollary~\ref{visit-arrow-reg}, we have $\visit{(\shneg P) \multimap Q} = \dual{\kappa_\bullet (\visit{\shneg P} \shuffle \dual{\visit{Q}})} \cup \{\epsilon\}$. Let $\pathLL s\maltese \in \visit{(\shneg P) \multimap Q}$; we will prove that it is extensible. There exist $\pathLL t_1 \in \visit{\shneg P}$ and $\pathLL t_2 \in \visit{Q}$ such that $\dual{\pathLL s\maltese} = \overline{\pathLL s} \in \kappa_\bullet(\pathLL t_1 \shuffle \dual{\pathLL t_2})$. In particular $\pathLL t_1$ is $\maltese$-free and $\pathLL t_2$ is $\maltese$-ended, say $\pathLL t_2 = \pathLL t_2'\maltese$. Since $\beh Q$ is pure, there exists $\kappa^+$ such that $\pathLL t_2'\kappa^+ \in V_{\beh Q}$. Let us show that $\pathLL s\kappa^+$ is a path, i.e., that if $\kappa^+$ is justified then $\mathrm{just}(\kappa^+)$ appears in $\view{\pathLL s}$, by induction on the length of $\pathLL t_1$: \begin{itemize} \item If $\pathLL t_1 = \epsilon$ then $\pathLL s\kappa^+ = \pathLL t_2'\kappa^+$, hence it is a path. \item Suppose $\pathLL t_1 = \pathLL t_1'\kappa_p^-\kappa_p^+$. Since $\pathLL t_1$ is $\maltese$-free, $\kappa_p^+$ is proper. Thus $\pathLL s$ is of the form $\pathLL s = \pathLL s_1 \overline{\kappa_p^-\kappa_p^+}\pathLL s_2$, and by induction hypothesis $\pathLL s_1 \pathLL s_2 \kappa^+$ is a path, i.e., $\mathrm{just}(\kappa^+)$ appears in $\view{\pathLL s_1 \pathLL s_2}$.
\begin{itemize} \item Either $\view{\pathLL s} = \view{\pathLL s_1 \pathLL s_2}$ and indeed $\mathrm{just}(\kappa^+)$ also appears in $\view{\pathLL s}$. \item Or $\view{\pathLL s}$ is of the form $\view{\pathLL s} = \view{\pathLL s_1}\overline{\kappa_p^-}\overline{\kappa_p^+}\pathLL s'_2$ since, by the remark at the beginning of this proof, $\kappa_p^+$ is justified by $\kappa_p^-$. This means in particular that $\pathLL s'_2$ starts with the same positive action as $\pathLL s_2$, thus we have $\view{\pathLL s_1 \pathLL s_2} = \view{\pathLL s_1} \pathLL s'_2$. Since $\mathrm{just}(\kappa^+)$ appears in $\view{\pathLL s_1 \pathLL s_2}$ and it is an action of $\pathLL s_1$, it appears in $\view{\pathLL s_1}$ thus also in $\view{\pathLL s}$. \end{itemize} \end{itemize} Therefore $\pathLL s\kappa^+$ is a path. Since $\pathLL s \kappa^+ \in \dual{\kappa_\bullet (\visit{\shneg P} \shuffle \dual{\visit{Q}})}$ and the behaviours are regular, $\pathLL s\kappa^+ \in \visit{(\shneg P) \multimap Q}$, thus $\pathLL s\maltese$ is extensible. As this is true for every $\maltese$-ended path in $\visit{(\shneg P) \multimap Q}$, the behaviour $(\shneg \beh P) \multimap \beh Q$ is pure, and so is $\beh P \multimap^+ \beh Q$. \end{proof} \begin{lemma}\label{cons-arrow-pure} If $\beh P \in \mathcal F$ and $\beh Q \in \mathrm{Const}$ then $\beh P \multimap^+ \beh Q$ is pure. \end{lemma} \begin{proof} We prove that $(\shneg \beh P) \multimap \beh Q$ is pure, and the conclusion will follow from Proposition~\ref{prop_pure_stable}. Let $\kappa^+ = \posdes{x_0}{a}{\vect{y}}$ where $\beh Q = \beh C_a$, and let $\pathLL s \maltese \in \visit{(\shneg P) \multimap Q}$. As in the proof of Lemma~\ref{lem_datafunc_pure}, there exist $\pathLL t_1 \in \visit{\shneg P}$ and $\pathLL t_2 \in \visit{Q}$ such that $\dual{\pathLL s\maltese} = \overline{\pathLL s} \in \kappa_\bullet(\pathLL t_1 \shuffle \dual{\pathLL t_2})$ with $\pathLL t_2$ $\maltese$-ended. But $\visit{Q} = \{\maltese, \kappa^+\}$, thus $\pathLL t_2 = \maltese$ and $\dual{\pathLL t_2} = \epsilon$. Hence $\pathLL s \maltese = \dual{\kappa_\bullet\pathLL t_1}$, and this path is extensible with action $\kappa^+$, indeed: $\pathLL s\kappa^+$ is a path because $\kappa^+$ is justified by $\kappa_\bullet$, which is the only initial action of $\pathLL s\kappa^+$ thus appearing in $\view{\pathLL s}$; moreover $\dual{\pathLL s\kappa^+} \in \kappa_\bullet(\pathLL t_1 \shuffle \dual{\kappa^+})$ where $\kappa^+ \in \visit Q$, therefore $\pathLL s \kappa^+ \in \visit{(\shneg P) \multimap Q}$. \end{proof} \begin{lemma} \label{arrow-max} Let $\beh P, \beh Q \in \mathcal F$. If there is $\pathLL s \in \visit{Q}$ $\maltese$-free (resp. $\maltese$-ended) and maximal, then there is $\pathLL t \in \visit{P \multimap^+ Q}$ $\maltese$-free (resp. $\maltese$-ended) and maximal. \end{lemma} \begin{proof} Suppose there exists $\pathLL s \in \visit{Q}$ $\maltese$-free (resp. $\maltese$-ended) and maximal. Since $\beh P$ is positive and different from $\maltese$, there exists $\pathLL s' \in \visit{\shneg P}$ $\maltese$-free and non-empty. Let $\pathLL t' = \dual{\kappa_\bullet\pathLL s'\dual{\pathLL s}}$, and remark that $\pathLL t' = \overline{\kappa_\bullet\pathLL s'}\pathLL s$. This is a path (O- and P-visibility are satisfied), it belongs to $\visit{(\shneg P) \multimap Q}$, and it is $\maltese$-free (resp. $\maltese$-ended).
Suppose it is extensible, and consider both the ``$\maltese$-free'' and the ``$\maltese$-ended'' cases: \begin{itemize} \item if $\pathLL s$ and $\pathLL t'$ are $\maltese$-free, then there exists a negative action $\kappa^-$ such that $\pathLL t'\kappa^-\maltese \in \visit{(\shneg P) \multimap Q}$. But $\pathLL t'\kappa^-\maltese = \overline{\kappa_\bullet\pathLL s'}\pathLL s\kappa^-\maltese$, and since it belongs to $\visit{(\shneg P) \multimap Q} = \dual{\kappa_\bullet (\visit{\shneg P} \shuffle \visit{Q^\perp})} \cup \{\epsilon\}$, we necessarily have $\pathLL s\kappa^-\maltese \in \visit{Q}$ -- indeed, the sequence $\overline{\pathLL s'}\kappa^-$ has two adjacent negative actions. This contradicts the maximality of $\pathLL s$ in $\visit{Q}$. \item if $\pathLL s$ and $\pathLL t'$ are $\maltese$-ended, there exists a positive action $\kappa^+$ that extends $\pathLL t'$, and a contradiction arises by a similar reasoning. \end{itemize} Hence $\pathLL t'$ is maximal in $\visit{(\shneg P) \multimap Q}$. Finally, $\pathLL t = \kappa_\blacktriangledown \pathLL t'$ fulfills the requirements.\end{proof} \begin{lemma}\label{max-df} For every behaviour $\beh P \in \mathcal F$, there exists $\pathLL s \in \visit{P}$ maximal and $\maltese$-free. \end{lemma} \begin{proof} By induction on $\beh P$. If $\beh P \in \mathcal D$ then take $\pathLL s \in \visit B$ maximal and $\maltese$-free, where $\beh B$ is a basis of $\beh P$. Use Lemma~\ref{arrow-max} in the case of $\multimap^+$, and the result is easy for $\otimes^+$ and $\oplus^+$. \end{proof} \begin{lemma} \label{contra-pur} Let $\beh P \in \mathcal F$ and let $\mathcal C$ be a context. If $\mathcal C[\beh P]$ is pure then $\beh P$ is pure. \end{lemma} \begin{proof} We prove the contrapositive by induction on $\mathcal C$. Suppose $\beh P$ is impure. \begin{itemize} \item If $\mathcal C = [~]$ then $\mathcal C[\beh P] = \beh P$, thus $\mathcal C[\beh P]$ is impure. \item If $\mathcal C = \mathcal C' \oplus^+ \beh Q$ or $\beh Q \oplus^+ \mathcal C'$ and by induction hypothesis $\mathcal C'[\beh P]$ is impure, i.e., there exists a maximal path $\pathLL s\maltese \in \visit{\mathcal C'[P]}$, then one of $\kappa_{\iota_1}\kappa_\blacktriangle \pathLL s\maltese$ or $\kappa_{\iota_2}\kappa_\blacktriangle \pathLL s\maltese$ is maximal in $\visit{\mathcal C[P]}$, hence the result. \item If $\mathcal C = \mathcal C' \otimes^+ \beh Q$ or $\beh Q \otimes^+ \mathcal C'$ and by induction hypothesis there exists a maximal path $\pathLL s\maltese \in \visit{\mathcal C'[P]}$, then by Lemma~\ref{max-df}, there exists a $\maltese$-free maximal path $\pathLL t \in \visit{Q}$. Consider the path $\pathLL u = \kappa_\bullet \kappa_\blacktriangle^{\pathLL t} \pathLL t \kappa_\blacktriangle^{\pathLL s} \pathLL s\maltese$, where: \begin{itemize} \item $\kappa_\blacktriangle^{\pathLL t}$ justifies the first action of $\pathLL t$, \item $\kappa_\blacktriangle^{\pathLL s}$ justifies the first action of $\pathLL s$, and \item $\kappa_\bullet$ justifies $\kappa_\blacktriangle^{\pathLL t}$ and $\kappa_\blacktriangle^{\pathLL s}$, one in each (1\textsuperscript{st} or 2\textsuperscript{nd}) position, depending on the form of $\mathcal C$. \end{itemize} We have $\pathLL u \in \visit{\mathcal C[\beh P]}$, and $\pathLL u$ is $\maltese$-ended and maximal, hence the result. \item If $\mathcal C = \beh Q \multimap^+ \mathcal C'$ and by induction hypothesis $\mathcal C'[\beh P]$ is impure, then Lemma~\ref{arrow-max} (in its ``$\maltese$-ended'' version) concludes the proof.
\end{itemize} \end{proof} \begin{proof}[Proof (Proposition~\ref{prop_main})] \noindent $(\Rightarrow)$ Suppose $\beh P$ is impure. By induction on the behaviour $\beh P$: \begin{itemize} \item $\beh P \in \mathcal D$ is impossible by Corollary~\ref{coro_data_pur}. \item If $\beh P = \beh P_1 \oplus^+ \beh P_2$ (resp. $\beh P = \beh P_1 \otimes^+ \beh P_2$) then one of $\beh P_1$ or $\beh P_2$ is impure by Proposition~\ref{prop_pure_stable}, say $\beh P_1$. By induction hypothesis, $\beh P_1$ is of the form $\beh P_1 = \mathcal C'_1[~\mathcal C'_2[\beh Q_1 \multimap^+ \beh Q_2] \multimap^+ \beh R~]$. Let $\mathcal C_1 = \mathcal C'_1 \oplus^+ \beh P_2$ (resp. $\mathcal C_1 = \mathcal C'_1 \otimes^+ \beh P_2$) and $\mathcal C_2 = \mathcal C'_2$, in order to get the result for $\beh P$. \item If $\beh P = \beh P_1 \multimap^+ \beh P_2$, then $\beh P_2 \not \in \mathrm{Const}$ by Lemma~\ref{cons-arrow-pure}, and: \begin{itemize} \item If $\beh P_2$ is impure, then by induction hypothesis $\beh P_2$ is of the form $\beh P_2 = \mathcal C'_1[~\mathcal C'_2[\beh Q_1 \multimap^+ \beh Q_2] \multimap^+ \beh R~]$, and it suffices to take $\mathcal C_1 = \beh P_1 \multimap^+ \mathcal C'_1$ and $\mathcal C_2 = \mathcal C'_2$ to get the result for $\beh P$. \item If $\beh P_2$ is pure, since it is also regular the conclusion follows from Lemma~\ref{lem_datafunc_pure}. \end{itemize} \end{itemize} \noindent $(\Leftarrow)$ Let $\mathcal C_1, \mathcal C_2$ be contexts, $\beh Q_1, \beh Q_2, \beh R \in \mathcal P$ with $\beh R \not \in \mathrm{Const}$. Let $\beh P = \mathcal C_1[~\mathcal C_2[\beh Q_1 \multimap^+ \beh Q_2] \multimap^+ \beh R~]$ and $\beh Q = \mathcal C_2[\beh Q_1 \multimap^+ \beh Q_2]$. We prove that $\beh P$ is impure. First suppose that $\beh P = \mathcal C_2[\beh Q_1 \multimap^+ \beh Q_2] \multimap^+ \beh R$, and in this case we show the result by induction on the depth of the context $\mathcal C_2$. The exact induction hypothesis will be: there exists a maximal $\maltese$-ended path in $\visit P$ of the form $\kappa_\blacktriangledown\pathLL s\maltese$ where $\overline{\pathLL s} \in \kappa_\bullet((\kappa_\blacktriangle\visit Q) \shuffle \dual{\visit R})$. \begin{itemize} \item If $\mathcal C_2 = [~]$, then $\beh Q = \beh Q_1 \multimap^+ \beh Q_2 = \shpos(\shneg \beh Q_1 \multimap \beh Q_2)$ and $\beh P = \beh Q \multimap^+ \beh R = \shpos (\shneg \beh Q \multimap \beh R)$. In order to differentiate the actions $\kappa_\blacktriangledown, \kappa_\blacktriangle, \kappa_\bullet$ used to construct $\beh Q$ from those used to construct $\beh P$, we will use corresponding superscripts. Let $\kappa^Q_\blacktriangle\pathLL t_1 \in \visit{\shneg Q_1}$ be $\maltese$-free (and non-empty). Let $\pathLL t_2 \in V_{\beh Q_2}$ be a maximal $\maltese$-free path: its existence is ensured by Lemma~\ref{max-df}, and it has one proper positive initial action $\kappa_2^+$. Now let $\pathLL t = \dual{\kappa^Q_\bullet\kappa^Q_\blacktriangle\pathLL t_1\dual{\pathLL t_2}} = \overline{\kappa^Q_\bullet\kappa^Q_\blacktriangle\pathLL t_1}\pathLL t_2$. Similarly to the path constructed in the proof of Lemma~\ref{arrow-max}, we have that $\pathLL t$ is $\maltese$-free, it is in $\visit{(\shneg Q_1) \multimap Q_2}$, and it is maximal. Thus $\kappa^Q_\blacktriangledown\pathLL t \in \visit{Q}$. Since $\beh R \notin \mathrm{Const}$, there exists a path of the form $\kappa^+\kappa^-\maltese \in \visit{R}$, and thus necessarily $\kappa^+$ justifies $\kappa^-$.
Define the sequence: \[\pathLL s\maltese = \overline{\kappa^P_\bullet\kappa^P_\blacktriangle\kappa^Q_\blacktriangledown}\kappa^Q_\bullet\kappa^Q_\blacktriangle \kappa^+ \kappa^-\pathLL t_1\overline{\pathLL t_2}\maltese\] and notice the following facts: \begin{enumerate} \item \underline{$\pathLL s\maltese$ is a path}: it is a linear aj-sequence. Since $\kappa^-$ is justified by $\kappa^+$, O- and P-visibility are easy to check. \item \underline{$\pathLL s\maltese \in V_{\shneg \beh Q \multimap \beh R}$}: indeed, we have $\dual{\pathLL s\maltese} \in \kappa^P_\bullet(\kappa^P_\blacktriangle\kappa^Q_\blacktriangledown\pathLL t \shuffle \dual{\kappa^+\kappa^-\maltese})$ where $\kappa^P_\blacktriangle\kappa^Q_\blacktriangledown\pathLL t \in \visit{\shneg Q}$ and $\kappa^+\kappa^-\maltese \in \visit R$. \item \underline{$\pathLL s\maltese$ is maximal}: Let us show that $\pathLL s\maltese$ is not extensible. First, it is not possible to extend it with an action from $\beh Q^\perp$, because this would contradict the maximality of $\pathLL t$ in $\visit{Q}$. Suppose it is extensible with an action $\kappa^{+\prime}$ from $\beh R$, that is $\pathLL s \kappa^{+\prime} \in \visit{\shneg Q \multimap R}$ and $\dual{\pathLL s\kappa^{+\prime}} \in \kappa^P_\bullet(\kappa^P_\blacktriangle\kappa^Q_\blacktriangledown\pathLL t \shuffle \dual{\kappa^+\kappa^-\kappa^{+\prime}})$ where $\kappa^+\kappa^-\kappa^{+\prime} \in \visit R$. The action $\kappa^{+\prime}$ (which cannot be initial) is necessarily justified by $\kappa^-$. But $\view{\pathLL s}$ necessarily contains the first negative action of $\overline{\pathLL t_2}$, which is the only initial action in $\overline{\pathLL t_2}$, and this action is justified by $\kappa^Q_\bullet$ in $\pathLL s$. Therefore $\view{\pathLL s}$ does not contain any action from $\pathLL s$ between $\kappa^Q_\bullet$ and $\overline{\pathLL t_2}$, in particular it does not contain $\kappa^- = \mathrm{just}(\kappa^{+\prime})$. Thus $\pathLL s\kappa^{+\prime}$ is not P-visible: contradiction. Hence $\pathLL s\maltese$ is maximal. \end{enumerate} Finally $\kappa^P_\blacktriangledown\pathLL s \maltese \in \visit{P}$ is not extensible, and of the required form. \item If $\mathcal C_2 = \beh Q_0 \multimap^+ \mathcal C$, then $\beh Q$ is of the form $\beh Q = \beh Q_0 \multimap^+ \beh Q'$, thus the previous reasoning applies. \item If $\mathcal C_2 = \mathcal C \otimes^+ \beh Q_0$ or $\beh Q_0 \otimes^+ \mathcal C$, the induction hypothesis gives us the existence of a maximal path in $\visit{\mathcal C[Q_1 \multimap^+ Q_2]\multimap^+ R}$ of the form $\kappa^P_\blacktriangledown\overline{\kappa^P_\bullet\kappa^P_\blacktriangle}\pathLL s'\maltese$ where $\kappa^P_\blacktriangle\overline{\pathLL s'} \in (\kappa^P_\blacktriangle\pathLL t') \shuffle \dual{\pathLL u}$ with $\pathLL t' \in \visit{\mathcal C[Q_1 \multimap^+ Q_2]}$ and $\pathLL u \in \visit R$. Let $\pathLL t_0 \in \visit{Q_0}$ be $\maltese$-free and maximal, using Lemma~\ref{max-df}.
Consider the following sequence: \[\pathLL s \maltese = \overline{\kappa^P_\bullet\kappa^P_\blacktriangle\kappa^Q_\bullet\kappa^0_\blacktriangle\pathLL t_0\kappa^1_\blacktriangle}\pathLL s'\maltese\] where: \begin{itemize} \item $\kappa_\blacktriangle^0$ justifies the first action of $\pathLL t_0$, \item $\kappa_\blacktriangle^1$ justifies the first action of $\overline{\pathLL s'}$, thus the first action of $\pathLL t'$, \item $\kappa^Q_\bullet$ justifies $\kappa_\blacktriangle^0$ and $\kappa_\blacktriangle^1$, \item $\kappa_\blacktriangle^P$ now justifies $\kappa^Q_\bullet$, \item $\kappa_\bullet^P$ justifies the same actions as before. \end{itemize} Notice that: \begin{enumerate} \item \underline{$\pathLL s\maltese$ is a path}: O- and P-visibility are satisfied. \item \underline{$\pathLL s\maltese \in V_{\shneg \beh Q \multimap \beh R}$}: We have $\kappa^Q_\bullet\kappa^0_\blacktriangle\pathLL t_0\kappa^1_\blacktriangle\overline{\pathLL t'} \in \kappa^Q_\bullet(\kappa^0_\blacktriangle\visit{\beh Q_0} \shuffle \kappa^1_\blacktriangle\visit{\mathcal C[Q_1 \multimap^+ Q_2]}) = \visit{Q}$, hence $\dual{\pathLL s\maltese} \in \kappa^P_\bullet(\visit{\shneg Q} \shuffle \dual{\visit R})$. \item \underline{$\pathLL s\maltese$ is maximal}: Indeed, it can be extended neither by an action of $\beh Q_0^\perp$ (this would contradict the maximality of $\pathLL t_0$) nor by an action of $\mathcal C[\beh Q_1 \multimap^+ \beh Q_2]^\perp$ or $\beh R$ (this would contradict the maximality of $\pathLL s'$). \end{enumerate} Finally $\kappa^P_\blacktriangledown\pathLL s\maltese \in \visit{P}$ is a path satisfying the constraints. \item If $\mathcal C_2 = \mathcal C \oplus^+ \beh Q_0$ or $\beh Q_0 \oplus^+ \mathcal C$, by induction hypothesis, there exists a path of the form $\kappa^P_\blacktriangledown\overline{\kappa^P_\bullet\kappa^P_\blacktriangle}\pathLL s'\maltese$ maximal in $\visit{\mathcal C[Q_1 \multimap^+ Q_2]\multimap^+ R}$, where $\kappa^P_\blacktriangle\overline{\pathLL s'} \in (\kappa^P_\blacktriangle\pathLL t') \shuffle \dual{\pathLL u}$ with $\pathLL t' \in \visit{\mathcal C[Q_1 \multimap^+ Q_2]}$ and $\pathLL u \in \visit R$. Reasoning as in the previous item, we see that for one $i \in \{1, 2\}$ (depending on the form of the context $\mathcal C_2$) the path $\kappa^P_\blacktriangledown\overline{\kappa^P_\bullet\kappa^P_\blacktriangle\kappa^Q_{\iota_i}\kappa_\blacktriangle}\pathLL s'\maltese$ (where $\kappa_\blacktriangle^P$ now justifies $\kappa^Q_{\iota_i}$) is in $\visit{P}$, maximal, and of the required form. \end{itemize} The result for the general case, where $\beh P = \mathcal C_1[~\mathcal C_2[\beh Q_1 \multimap^+ \beh Q_2] \multimap^+ \beh R~]$, finally comes from Lemma~\ref{contra-pur}. \end{proof} \section{Introduction} \subsection{Context and Contributions} \subparagraph{Context} \hspace{0cm} \imp{Ludics} was introduced by Girard \cite{Girard1} as a variant of \imp{game semantics} with interactive types. Game semantics has successfully provided fully abstract models for various logical systems and programming languages, among which PCF~\cite{HO}. Although very close to Hyland--Ong (HO) games, ludics reverses the approach: in HO games one first defines the interpretation of a type (an arena) before giving the interpretation of the terms of that type (the strategies), while in ludics the interpretation of terms (the \imp{designs}) is primitive and the types (the \imp{behaviours}) are recovered dynamically as well-behaved sets of terms.
This approach to types is similar to what exists in realisability \cite{Krivine} or geometry of interaction \cite{Girard2}. The motivation for such a framework was to reconstruct logic around the dynamics of proofs. Girard provides a ludics model for (a polarised version of) multiplicative-additive linear logic (MALL); a key role in his interpretation of logical connectives is played by the \imp{internal completeness} results, which allow for a direct description of the behaviours' content. As most behaviours are not the interpretation of MALL formulas, an interesting question, raised from the beginning of ludics, is whether these remaining behaviours can give a logical counterpart to computational phenomena. In particular, data and functions~\cite{Terui, Sironi}, and also fixed points~\cite{BDS} have been studied in the setting of ludics. The present work follows this line of research. Real life (functional) programs usually deal with data, functions over it, functions over functions, etc. \imp{Data types} allow one to present information in a structured way. Some data types are defined \imp{inductively}, for example: \begin{lstlisting}[caption={Example of inductive types in OCaml}] > type nat = Zero | Succ of nat ;; > type 'a list = Nil | Cons of 'a * 'a list ;; > type 'a tree = Empty | Node of 'a * ('a tree) list ;; \end{lstlisting} Upon this basis we can consider \imp{functional types}, which are either first-order -- from data to data -- or higher-order -- i.e., taking functions as arguments or returning functions as a result. This article aims at interpreting constructively the (potentially inductive) data types and the (potentially higher-order) functional types as behaviours of ludics, so as to study their structural properties. Inductive types are defined as (least) \imp{fixed points}. As pointed out by Baelde, Doumane and Saurin~\cite{BDS}, the fact that ludics puts the most constraints on the formation of terms instead of types, conversely to game semantics, makes it a more natural setting for the interpretation of fixed points than HO games \cite{Clairambault}. \subparagraph{Contributions} The main contributions of this article are the following: \begin{itemize} \item We prove that internal completeness holds for infinite unions of behaviours satisfying particular conditions (Theorem~\ref{thm_union_beh}), leading to an explicit construction of the least fixed points in ludics (Proposition~\ref{prop_reg_union}). \item Inductive and functional types are interpreted as behaviours, and we prove that such behaviours are \imp{regular} (Corollary~\ref{coro_data_reg} and Proposition~\ref{prop_func_quasi_pure}). Regularity (that we discuss more in \textsection~\ref{sub_tools}) is a property that could be used to characterise the behaviours corresponding to $\mu$MALL formulas \cite{Baelde,BDS} -- i.e., MALL with fixed points. \item We show that a functional behaviour fails to satisfy \imp{purity}, a property ensuring the safety of all possible executions (further explained in \textsection~\ref{sub_tools}), if and only if it is higher order and takes functions as argument (Proposition~\ref{prop_main}); this is typically the case of $(\beh A \multimap \beh B) \multimap \beh C$. In \textsection~\ref{ex_discuss} we discuss the computational meaning of this result. \end{itemize} The present work is conducted in the term-calculus reformulation of ludics by Terui \cite{Terui} restricted to the linear part -- the idea is that programs call each argument at most once. 
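To make the distinction between first-order and higher-order functional types concrete, here is a small OCaml sketch of our own; it assumes the \texttt{nat} and \texttt{list} declarations from the listing above, and the function names are purely illustrative. The last declaration takes a function as argument, i.e., it has the shape $(\beh A \multimap \beh B) \multimap \beh C$ singled out in the last contribution above.
\begin{lstlisting}[caption={First-order and higher-order functional types (illustrative sketch)}]
(* first-order: from data to data *)
> let rec length (l : 'a list) : nat =
    match l with Nil -> Zero | Cons (_, tl) -> Succ (length tl) ;;

(* higher-order by its result: add : nat -> (nat -> nat) *)
> let rec add (m : nat) : nat -> nat =
    fun n -> match m with Zero -> n | Succ m' -> Succ (add m' n) ;;

(* higher-order by its argument: shape (A -> B) -> C *)
> let is_zero_at_zero (f : nat -> nat) : bool =
    match f Zero with Zero -> true | Succ _ -> false ;;
\end{lstlisting}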
\subparagraph{Related Work} The starting point for our study of inductive types as fixed points in ludics is the work by Baelde, Doumane and Saurin \cite{BDS}. In their article, they provide a ludics model for $\mu$MALL, a variant of multiplicative-additive linear logic with least and greatest fixed points. The existence of fixed points in ludics is ensured by the Knaster-Tarski theorem, but this approach does not provide an explicit way to construct the fixed points; we will consider the Kleene fixed point theorem instead. Let us also mention the work of Melliès and Vouillon \cite{MV}, which introduces a realisability model for recursive (i.e., inductive and coinductive) polymorphic types. The representation of both data and functions in ludics has been studied previously. Terui \cite{Terui} proposes to encode them as designs in order to express computability properties in ludics, but data and functions are not considered at the level of behaviours. Sironi \cite{Sironi} describes the behaviours corresponding to some data types: integers, lists, records, etc., as well as first-order function types; our approach generalises hers by considering generic data types and also higher-order function types. \subsection{Background} \label{sub_tools} \subparagraph{Behaviours and Internal Completeness} A behaviour $\beh B$ is a set of designs which pass the same set of tests $\beh B^\perp$, where tests are also designs. $\beh B^\perp$ is called the \imp{orthogonal} of $\beh B$, and behaviours are closed under bi-orthogonal: $\beh B^{\perp\perp} = \beh B$. New behaviours can be formed from others using various constructors. In this process, internal completeness, which can be seen as a built-in notion of observational equivalence, ensures that two agents reacting the same way to any test are actually equal. From a technical point of view, this means that it is not necessary to apply a $\perp\perp$-closure for the constructed sets to be behaviours. \subparagraph{Paths: Ludics as Game Semantics} This paper makes the most of the resemblance between ludics and HO game semantics. The connections between them have been investigated in many works \cite{BF, Faggian, FQ1} where designs are described as (innocent) strategies, i.e., in terms of the traces of their possible interactions. Following this idea, Fouqueré and Quatrini define \imp{paths} \cite{FQ1}, corresponding to legal plays in HO games, and they characterise a behaviour by its set of \imp{visitable paths}. This is the approach we follow. The definitions of regularity and purity rely on paths, since they are properties of the possible interactions of a behaviour. \subparagraph{Regularity: Towards a Characterisation of {\boldmath$\mu$}MALL?} Our proof that internal completeness holds for an infinite union of increasingly large behaviours (Theorem~\ref{thm_union_beh}) relies in particular on the additional hypothesis of regularity for these behaviours. Intuitively, a behaviour $\beh B$ is regular if every path in a design of $\beh B$ is realised by interacting with a design of $\beh B^\perp$, and vice versa. This property is not actually ad hoc: it was introduced by Fouqueré and Quatrini~\cite{FQ2} to characterise the denotations of MALL formulas as being precisely the regular behaviours satisfying an additional finiteness condition. In this direction, our intuition is that -- forgetting about finiteness -- regularity captures the behaviours corresponding to formulas of $\mu$MALL.
Although such a characterisation is not yet achieved, we provide a first step by showing that the \imp{data patterns}, a subset of positive $\mu$MALL formulas, yield only regular behaviours (Proposition~\ref{prop_interp_reg}). \subparagraph{Purity: Type Safety} Ludics has a special feature for termination which is not present in game semantics: the \imp{daimon} $\maltese$. From a computational point of view, the daimon is commonly interpreted as an error, an exception raised at run-time causing the program to stop (see for example the notes of Curien~\cite{Curien}). Thinking of ludics as a programming language, we would like to guarantee \imp{type safety}, that is, ensure that ``well typed programs cannot go wrong''~\cite{Milner}. This is the purpose of purity, a property of behaviours: in a pure behaviour, maximal interaction traces are $\maltese$-free; in other words, whenever the interaction stops with $\maltese$ it is actually possible to ``ask for more'' and continue the computation. Introduced by Sironi~\cite{Sironi} (and called \imp{principality} in her work), this property is related to the notions of \imp{winning} designs~\cite{Girard1} and \imp{pure} designs~\cite{Terui}, but at the level of a behaviour. As expected, data types are pure (Corollary~\ref{coro_data_pur}), but functional types are not always pure; we identify the precise cases where impurity arises (Proposition~\ref{prop_main}), and explain why some types are not safe. \subsection{Outline} In Section~\ref{sec-designs} we present ludics and we state internal completeness for the logical connectives constructions. In Section~\ref{sec-paths} we recall the notion of path, so as to define formally regularity and purity and prove their stability under the connectives. Section~\ref{sec-induct} studies inductive data types, which we interpret as behaviours; Kleene's fixed point theorem and internal completeness for infinite unions allow us to give an explicit and direct construction of the least fixed point, with no need for bi-orthogonal closure; we deduce that data types are regular and pure. Finally, in Section~\ref{sec-func}, we study functional types, showing in which cases purity fails. \section{Computational Ludics} \label{sec-designs} This section introduces the ludics background necessary for the rest of the paper, in the formalism of Terui~\cite{Terui}. The \imp{designs} are the primary objects of ludics, corresponding to (polarised) proofs or programs in a Curry-Howard perspective. Cuts between designs can occur, and their reduction is called \imp{interaction}. The \imp{behaviours}, corresponding to the types or formulas of ludics, are then defined thanks to interaction. Compound behaviours can be formed with \imp{logical connectives} constructions which satisfy \imp{internal completeness}. \subsection{Designs and Interaction} \label{sub-design} Suppose given a set of variables $\mathcal V_0$ and a set $\mathcal S$, called \defined{signature}, equipped with an arity function $\mathrm{ar}: \mathcal S \to \mathbb N$. Elements $a, b, \dots \in \mathcal S$ are called \defined{names}. A \defined{positive action} is either $\maltese$ (daimon), $\Omega$ (divergence), or $\overline a$ with $a \in \mathcal S$; a \defined{negative action} is $a(x_1, \dots, x_n)$ where $a \in \mathcal S$, $\mathrm{ar}(a)=n$ and $x_1, \dots, x_n \in \mathcal V_0$ distinct. An action is \defined{proper} if it is neither $\maltese$ nor $\Omega$.
\begin{definition} Positive and negative \defined{designs}\footnote{In the following, the symbols $\design d, \design e, \dots$ refer to designs of any polarity, while $\design p, \design q, \dots$ and $\design m, \design n, \dots$ are specifically for positive and negative designs respectively.} are coinductively defined by: \begin{align*} &\design p ~::=~ \maltese ~~~\boldsymbol |~~~ \Omega ~~~\boldsymbol |~~~ \posdesdots{x}{a}{\design n_1}{\design n_{\mathrm{ar}(a)}} ~~~\boldsymbol |~~~ \posdesdots{\design n_0}{a}{\design n_1}{\design n_{\mathrm{ar}(a)}} \\ &\design n ~::=~ \textstyle\negdesdots{a}{x^a_1}{x^a_{\mathrm{ar}(a)}}{\design p_a} \end{align*} \end{definition} Positive designs play the same role as \imp{applications} in $\lambda$-calculus, and negative designs the role of \imp{abstractions}, where each name $a \in \mathcal S$ binds $\mathrm{ar}(a)$ variables. Designs are considered up to $\alpha$-equivalence. We will often write $a(\vect x)$ (resp. $\overline a \langle \vect{\design n} \rangle$) instead of $a(x_1, \dots, x_n)$ (resp. $\overline a \langle \design n_1 \dots \design n_n \rangle$). Negative designs can be written as partial sums, for example $a(x, y).\design p + b().\design q$ instead of $a(x, y).\design p + b().\design q + \sum_{c \neq a, c \neq b} c(\vect{z^c}).\Omega$. Given a design $\design d$, the definitions of the \defined{free variables} of $\design d$, written $\mathrm{fv}(\design d)$, and the (capture-free) \defined{substitution} of $x$ by a negative design $\design n$ in $\design d$, written $\design d[\design n/x]$, can easily be inferred. The design $\design d$ is \defined{closed} if it is positive and it has no free variable. A \defined{subdesign} of $\design d$ is a subterm of $\design d$. A \defined{cut} in $\design d$ is a subdesign of $\design d$ of the form $\posdes{\design n_0}{a}{\vect{\design n}}$, and a design is \defined{cut-free} if it has no cut. In the following, we distinguish a particular variable $x_0$, that cannot be bound. A positive design $\design p$ is \defined{atomic} if $\mathrm{fv}(\design p) \subseteq \{x_0\}$; a negative design $\design n$ is \defined{atomic} if $\mathrm{fv}(\design n) = \emptyset$. A design is \defined{linear} if for every subdesign of the form $\posdes{x}{a}{\vect{\design n}}$ (resp. $\posdes{\design n_0}{a}{\vect{\design n}}$), the sets $\{x\}$, $\mathrm{fv}(\design n_1)$, \dots, $\mathrm{fv}(\design n_{\mathrm{ar}(a)})$ (resp. the sets $\mathrm{fv}(\design n_0)$, $\mathrm{fv}(\design n_1)$, \dots, $\mathrm{fv}(\design n_{\mathrm{ar}(a)})$) are pairwise disjoint. This article focuses on linearity, so in the following when writing ``design'' we mean ``linear design''. \begin{definition} The \defined{interaction} corresponds to reduction steps applied on cuts: \[\textstyle\sum_{a \in \mathcal S} a(x^a_1, \dots, x^a_{\mathrm{ar}(a)}).\design p_a~|~ \overline b \langle \design n_1, \dots, \design n_k\rangle \hspace{.6cm} \leadsto \hspace{.6cm} \design p_b[\design n_1/x^b_1, \dots, \design n_k/x^b_k] \] \end{definition} We will later describe an interaction as a sequence of actions, a path (Definition~\ref{def-path}). Let $\design p$ be a design, and let $\leadsto^*$ denote the reflexive transitive closure of $\leadsto$; if there exists a design $\design q$ which is neither a cut nor $\Omega$ and such that $\design p \leadsto^* \design q$, we write $\design p \Downarrow \design q$; otherwise we write $\design p \Uparrow$. The normal form of a design, defined below, exists and is unique \cite{Terui}. 
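To fix intuitions before defining normalisation, here is a minimal OCaml sketch of our own (not part of Terui's development): a finite fragment of designs as a datatype, together with the reduction step above applied to a cut. Real designs are coinductive and their names carry arities from the signature; both aspects, as well as the subtleties of capture-free substitution, are simplified here under the linearity assumption.
\begin{lstlisting}[caption={A finite fragment of designs and one reduction step (illustrative sketch)}]
type name = string

type pos =                                   (* positive designs *)
  | Daimon                                   (* daimon *)
  | Omega                                    (* divergence *)
  | App of head * name * neg list            (* h | a<n1,...,nk> *)
and head =
  | Var of string                            (* head variable x *)
  | Head of neg                              (* head design n0: the cut case *)
and neg =                                    (* negative designs: sums a(xs).p_a *)
  | Sum of (name * (string list * pos)) list (* missing names stand for Omega *)

(* substitution of negative designs for free variables; linearity lets this
   sketch ignore capture and shadowing *)
let rec subst_pos env p = match p with
  | Daimon | Omega -> p
  | App (Var x, a, ns) ->
      let ns = List.map (subst_neg env) ns in
      (match List.assoc_opt x env with
       | Some n -> App (Head n, a, ns)       (* substituting the head creates a cut *)
       | None -> App (Var x, a, ns))
  | App (Head n0, a, ns) ->
      App (Head (subst_neg env n0), a, List.map (subst_neg env) ns)
and subst_neg env (Sum branches) =
  Sum (List.map (fun (a, (xs, q)) -> (a, (xs, subst_pos env q))) branches)

(* one interaction step on a cut:  (... + b(xs).p) | b<ns>  ~>  p[ns/xs] *)
let step p = match p with
  | App (Head (Sum branches), b, ns) ->
      (match List.assoc_opt b branches with
       | Some (xs, body) when List.length xs = List.length ns ->
           Some (subst_pos (List.combine xs ns) body)
       | _ -> Some Omega)                    (* unspecified branch: divergence *)
  | _ -> None                                (* not a cut at the root: no step *)
\end{lstlisting}
Iterating \texttt{step} until it returns \texttt{None} computes the top-level part of the normal form defined next; full normalisation also recurses under the remaining negative subdesigns.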
\begin{definition} The \defined{normal form} of a design $\design d$, noted $\normalisation{\design d}$, is defined by: \begin{align*} & \normalisation{\design p} = \maltese \mbox{\hspace{.3cm}~if~} \design p \Downarrow \maltese && \normalisation{\design p} = \posdesdots{x}{a}{\normalisation{\design n_1}}{\normalisation{\design n_n}} \mbox{\hspace{.3cm}~if~} \design p \Downarrow \posdesdots{x}{a}{\design n_1}{\design n_n} \\ & \normalisation{\design p} = \Omega \mbox{\hspace{.3cm}~if~} \design p \Uparrow && \normalisation{\textstyle \negdes{a}{\vect{x^a}}{\design p_a}} = \textstyle \negdes{a}{\vect{x^a}}{\normalisation{\design p_a}} \end{align*} \end{definition} Note that the normal form of a closed design is either $\maltese$ (convergence) or $\Omega$ (divergence). Orthogonality expresses the convergence of the interaction between two atomic designs, and behaviours are sets of designs closed by bi-orthogonal. \begin{definition} Two atomic designs $\design p$ and $\design n$ are \defined{orthogonal}, noted $\design p \perp \design n$, if $\normalisation{\design p[\design n/x_0]} = \maltese$. \end{definition} Given an atomic design $\design d$, define $\design d^\perp = \setst{\design e}{\design d \perp \design e}$; if $E$ is a set of atomic designs of same polarity, define $E^\perp = \setst{\design d}{\forall \design e \in E, \design d \perp \design e}$. \begin{definition} A set $\beh B$ of atomic designs of same polarity is a \defined{behaviour}\footnote{Symbols $\beh A, \beh B, \dots$ will designate behaviours of any polarity, while $\beh M, \beh N \dots$ and $\beh P, \beh Q, \dots$ will be for negative and positive behaviours respectively.} if $\beh B^{\perp\perp} = \beh B$. A behaviour is either positive or negative depending on the polarity of its designs. \end{definition} Behaviours could alternatively be defined as the orthogonal of a set $E$ of atomic designs of same polarity -- $E$ corresponds to a set of \imp{tests} or \imp{trials}. Indeed, $E^\perp$ is always a behaviour, and every behaviour $\beh B$ is of this form by taking $E = \beh B^\perp$. The \imp{incarnation} of a behaviour $\beh B$ contains the cut-free designs of $\beh B$ whose actions are all visited during an interaction with a design in $\beh B^\perp$. Those correspond to the cut-free designs that are minimal for the \defined{stable ordering} $\sqsubseteq$, where $\design d' \sqsubseteq \design d$ if $\design d$ can be obtained from $\design d'$ by substituting positive subdesigns for some occurrences of $\Omega$. \begin{definition} \label{def-incarn} Let $\beh B$ be a behaviour and $\design d \in \beh B$ cut-free. \begin{itemize} \item The \defined{incarnation} of $\design d$ in $\beh B$, written $|\design d|_{\beh B}$, is the smallest (for $\sqsubseteq$) cut-free design $\design d'$ such that $\design d' \sqsubseteq \design d$ and $\design d' \in \beh B$. If $|\design d|_{\beh B} = \design d$ we say that $\design d$ is \defined{incarnated} in $\beh B$. \item The \defined{incarnation} $|\beh B|$ of $\beh B$ is the set of the (cut-free) incarnated designs of $\beh B$. \end{itemize} \end{definition} \subsection{Logical Connectives} \label{sub_connectives} Behaviour constructors -- the \imp{logical connectives} -- can be applied so as to form compound behaviours. These connectives, coming from (polarised) linear logic, are used for interpreting formulas as behaviours, and will also indeed play the role of type constructors for the types of data and functions. 
In this subsection, after defining the connectives we consider, we state the \imp{internal completeness} theorem for these connectives. Let us introduce some notations. In the rest of this article, suppose the signature $\mathcal S$ contains distinct unary names $\blacktriangle, \pi_1, \pi_2$ and a binary name $\wp$, and write $\blacktriangledown = \overline \blacktriangle, \iota_1 = \overline \pi_1, \iota_2 = \overline \pi_2$ and $\bullet = \overline \wp$. Given a behaviour $\beh B$ and $x$ fresh, define $\beh B^x = \setst{\design d[x/x_0]}{\design d \in \beh B}$; such a substitution operates a ``delocation'' with no repercussion on the behaviour's inherent properties. Given a $k$-ary name $a \in \mathcal S$, we write $\overline{a}\langle \beh N_1, \dots, \beh N_k \rangle$ or even $\overline{a}\langle \vect{\beh N} \rangle$ for $\setst{\posdes{x_0}{a}{\vect{\design n}}}{\design n_i \in \beh N_i}$, and write $a(\vect x).\beh P$ for $\setst{a(\vect x).\design p}{\design p \in \beh P}$. For a negative design $\design n = \negdes{a}{\vect{x^a}}{\design p_a}$ and a name $a \in \mathcal S$, we denote by $\proj{\design n}{a}$ the design $a(\vect{x^a}).\design p_a$ (that is $a(\vect{x^a}).\design p_a + \sum_{b \neq a} b(\vect{x^b}).\Omega$). \begin{definition}[Logical connectives] \begin{align*} & \shpos \beh N = \blacktriangledown \langle \beh N \rangle^{\perp\perp} && \mbox{ (\defined{positive shift}) } \\ & \shneg \beh P = (\blacktriangle(x).\beh P^x)^{\perp\perp} \mbox{, with $x$ fresh} && \mbox{ (\defined{negative shift}) }\\ & \beh M \oplus \beh N = (\iota_1 \langle \beh M \rangle \cup \iota_2 \langle \beh N \rangle)^{\perp\perp} && \mbox{ (\defined{plus}) }\\ & \beh M \otimes \beh N = \bullet \langle \beh M, \beh N \rangle^{\perp\perp} && \mbox{ (\defined{tensor}) }\\ & \beh N \multimap \beh P = (\beh N \otimes \beh P^\perp)^\perp && \mbox{ (\defined{linear map}) } \end{align*} \end{definition} Our connectives $\shpos$, $\shneg$, $\oplus$ and $\otimes$ match exactly those defined by Terui \cite{Terui}, who also proves the following internal completeness theorem stating that connectives apply on behaviours in a constructive way -- there is no need to close by bi-orthogonal. For each connective, we present two versions of internal completeness: one concerned with the full behaviour, the other with the behaviour's incarnation. \begin{theorem}[Internal completeness for connectives]\label{thm_intcomp_all} \begin{align*} &\shpos \beh N = \blacktriangledown \langle \beh N \rangle \cup \{\maltese\} && |\shpos \beh N| = \blacktriangledown \langle |\beh N| \rangle \cup \{\maltese\} \\ &\shneg \beh P = \setst{\design n}{\proj{\design n}{\blacktriangle} \in \blacktriangle(x).\beh P^x} && |\shneg \beh P| = \blacktriangle(x).|\beh P^x| \\ &\beh M \oplus \beh N = \iota_1 \langle \beh M \rangle \cup \iota_2 \langle \beh N \rangle \cup \{\maltese\} && |\beh M \oplus \beh N| = \iota_1 \langle |\beh M| \rangle \cup \iota_2 \langle |\beh N| \rangle \cup \{\maltese\} \\ &\beh M \otimes \beh N = \bullet \langle \beh M, \beh N \rangle \cup \{\maltese\} && |\beh M \otimes \beh N| = \bullet \langle |\beh M|, |\beh N| \rangle \cup \{\maltese\} \end{align*} \end{theorem} \section{Paths and Interactive Properties of Behaviours} \label{sec-paths} \imp{Paths} are sequences of actions recording the trace of a possible interaction. 
For a behaviour $\beh B$, we can consider the set of its \imp{visitable paths} by gathering all the paths corresponding to an interaction between a design of $\beh B$ and a design of $\beh B^\perp$. This notion is needed for defining \imp{regularity} and \imp{purity} and proving that those two properties of behaviours are stable under (some) connectives constructions. \subsection{Paths} \label{sub-paths} This subsection adapts the definitions of path and visitable path from \cite{FQ1} to the setting of computational ludics. In order to do so, we need first to recover \imp{location} in actions so as to consider sequences of actions. Location is a primitive idea in Girard's ludics \cite{Girard1} in which the places of a design are identified with \imp{loci} or \imp{addresses}, but this concept is not visible in Terui's presentation of designs-as-terms. We overcome this by introducing actions with more information on location, which we call \imp{located actions}, and which are necessary to: \begin{itemize} \item represent cut-free designs as trees -- actually, forests -- in a satisfactory way, \item define views and paths. \end{itemize} \begin{definition} \label{loc-ac} A \defined{located action}\footnote{Located actions will often be denoted by symbol $\kappa$, sometimes with its polarity: $\kappa^+$ or $\kappa^-$.} $\kappa$ is one of: $\maltese ~~~\boldsymbol |~~~ \posdesdots{x}{a}{x_1}{x_{\mathrm{ar}(a)}} ~~~\boldsymbol |~~~ a_x(x_1, \dots, x_{\mathrm{ar}(a)})$ \\ where in the last two cases (\defined{positive proper} and \defined{negative proper} respectively), $a \in \mathcal S$ is the \defined{name} of $\kappa$, the variables $x, x_1, \dots, x_{\mathrm{ar}(a)}$ are distinct, $x$ is the \defined{address} of $\kappa$ and $x_1, \dots, x_{\mathrm{ar}(a)}$ are the \defined{variables bound by} $\kappa$. \end{definition} In the following, ``action'' will always refer to a located action. Similarly to notations for designs, $\posdes{x}{a}{\vect x}$ stands for $\posdesdots{x}{a}{x_1}{x_n}$ and $a_x(\vect x)$ for $a_x(x_1, \dots, x_n)$. \begin{example} \label{ex-tree} We show how cut-free designs can be represented as trees of located actions in this example. Let $a^2, b^2, c^1, d^0 \in \mathcal S$, where exponents stand for arities. The following design is represented by the tree of Fig.~\ref{fig-ex-path}. 
\[\design d = a(x_1,x_2).\textcolor{red}(\posdesblue{x_2}{b}{a(x_3, x_4).\maltese + c(y_1).\textcolor{orange}{(}\posdes{y_1}{d}{}\textcolor{orange}{)}\textcolor{blue}{,} c(y_2).\textcolor{green}{(}\posdes{x_1}{d}{}\textcolor{green}{)}}\textcolor{red})\] \end{example} \begin{figure} \centering \begin{tikzpicture}[grow=up, level distance=1cm, sibling distance = 2.5cm] \draw[-] node (a) {$a_{x_0}(x_1, x_2)$} child{ node (b) {$x_2|\overline b \langle z_1,z_2 \rangle$} child { node (c) {$c_{z_2}(y_2)$} child { node (d) {$x_1|\overline d\langle\rangle$} } } child { node (c') {$c_{z_1}(y_1)$} child { node (d') {$y_1|\overline d \langle\rangle$} } } child { node (a') {$a_{z_1}(x_3,x_4)$} child { node (dai) {$\maltese$} } } }; \draw[->, violet, thick, rounded corners] ($(a)+(-.9,-.2)$) -- ($(b)+(-.9,0)$) -- ($(a')+(-.9,-.2)$) -- ($(dai)+(-.9,.3)$) ; \node[violet] at ($(a)+(-1.7,0)$) {a view}; \draw[-, red, thick, rounded corners] ($(a)+(.9,-.2)$) -- ($(d')+(.9,.3)$) -- ($(d')+(1,.1)$); \draw[-,red, dashed, thick] ($(d')+(1,.1)$) -- ($(c)+(-.8,-.1)$); \draw[->, red, thick, rounded corners] ($(c)+(-.8,-.1)$) -- ($(c)+(-.7,-.2)$) -- ($(d)+(-.7,.4)$) ; \node[red] at ($(a)+(1.6,0)$) {a path}; \end{tikzpicture} \caption{Representation of design $\design d$ from Example~\ref{ex-tree}, with a path and a view of $\design d$.} \label{fig-ex-path} \end{figure} Such a representation is in general a forest: a negative design $\negdes{a}{\vect{x^a}}{\design p_a}$ gives as many trees as there is $a \in \mathcal S$ such that $\design p_a \neq \Omega$. The distinguished variable $x_0$ is given as address to every negative root of a tree, and fresh variables are picked as addresses for negative actions bound by positive ones. This way, negative actions from the same subdesign, i.e., part of the same sum, are given the same address. A tree is indeed to be read bottom-up: a proper action $\kappa$ is \defined{justified} if its address is bound by an action of opposite polarity appearing below $\kappa$ in the tree; otherwise $\kappa$ is called \defined{initial}. Except the root of a tree, which is always initial, every negative action is justified by the only positive action immediately below it. If $\kappa$ and $\kappa'$ are proper, $\kappa$ is \defined{hereditarily justified} by $\kappa'$ if there exist actions $\kappa_1, \dots, \kappa_n$ such that $\kappa = \kappa_1$, $\kappa' = \kappa_n$ and for all $i$ such that $1 \le i < n$, $\kappa_i$ is justified by $\kappa_{i+1}$. Before giving the definitions of \imp{view} and \imp{path}, let us give an intuition. On Fig.~\ref{fig-ex-path} are represented a view and a path of design $\design d$. Views are branches in the tree representing a cut-free design (reading bottom-up), while paths are particular ``promenades'' starting from the root of the tree; not all such promenades are paths, though. Views correspond to \imp{chronicles} in original ludics \cite{Girard1}. For every positive proper action $\kappa^+ = \posdes{x}{a}{\vect{y}}$ define $\overline{\kappa^+} = a_x(\vect{y})$, and similarly if $\kappa^- = a_x(\vect{y})$ define $\overline{\kappa^-} = \posdes{x}{a}{\vect{y}}$. Given a finite sequence of proper actions $\pathLL s = \kappa_1 \dots \kappa_n$, define $\overline{\pathLL s} = \overline{\kappa_1} \dots \overline{\kappa_n}$. 
Suppose now that if $\pathLL s$ contains an occurrence of $\maltese$, it is necessarily in last position; the \defined{dual} of $\pathLL s$, written $\dual{\pathLL s}$, is the sequence defined by: \begin{itemize} \item $\dual{\pathLL s} = \overline{\pathLL s}\maltese$ if $\pathLL s$ does not end with $\maltese$, \item $\dual{\pathLL s} = \overline{\pathLL s'}$ if $\pathLL s = \pathLL s' \maltese$. \end{itemize} Note that $\dual{\dual{\pathLL s}} = \pathLL s$. The notions of \defined{justified}, \defined{hereditarily justified} and \defined{initial} actions also apply in sequences of actions. \begin{definition} \label{aj-seq} An \defined{alternated justified sequence} (or \defined{aj-sequence}) $\pathLL s$ is a finite sequence of actions such that: \begin{itemize} \item (Alternation) Polarities of actions alternate. \item (Daimon) If $\maltese$ appears, it is the last action of $\pathLL s$. \item (Linearity) Each variable is the address of at most one action in $\pathLL s$. \end{itemize} \end{definition} The (unique) justification of a justified action $\kappa$ in an aj-sequence is noted $\mathrm{just}(\kappa)$, when there is no ambiguity on the sequence we consider. \begin{definition} A \defined{view} $\viewseq v$ is an aj-sequence such that each negative action which is not the first action of $\viewseq v$ is justified by the immediate previous action. Given a cut-free design $\design d$, $\viewseq v$ is a \defined{view of} $\design d$ if it is a branch in the representation of $\design d$ as a tree (modulo $\alpha$-equivalence). \end{definition} The way to extract \defined{the view} of an aj-sequence is given inductively by: \begin{itemize} \item $\view{\epsilon} = \epsilon$, where $\epsilon$ is the empty sequence, \item $\view{\pathLL s\kappa^+} = \view{\pathLL s}\kappa^+$, \item $\view{\pathLL s\kappa^-} = \view{\pathLL s_0}\kappa^-$ where $\pathLL s_0$ is the prefix of $\pathLL s$ ending on $\mathrm{just}(\kappa^-)$, or $\pathLL s_0 = \epsilon$ if $\kappa^-$ initial. \end{itemize} The \defined{anti-view} of an aj-sequence, noted $\antiview{\pathLL s}$, is defined symmetrically by reversing the role played by polarities; equivalently $\antiview{\pathLL s} = \dual{\view{\dual{\pathLL s}}}$. \begin{definition} \label{def-path} A \defined{path} $\pathLL s$ is a positive-ended aj-sequence satisfying: \begin{itemize} \item (P-visibility) For all prefix $\pathLL s' \kappa^+$ of $\pathLL s$, $\mathrm{just}(\kappa^+) \in \view{\pathLL s'}$ \item (O-visibility) For all prefix $\pathLL s' \kappa^-$ of $\pathLL s$, $\mathrm{just}(\kappa^-) \in \antiview{\pathLL s'}$ \end{itemize} Given a cut-free design $\design d$, a path $\pathLL s$ is a \defined{path of} $\design d$ if for all prefix $\pathLL s'$ of $\pathLL s$, $\view{\pathLL s'}$ is a view of $\design d$. \end{definition} Remark that the dual of a path is a path. Paths are aimed at describing an interaction between designs. If $\design d$ and $\design e$ are cut-free atomic designs such that $\design d \perp \design e$, there exists a unique path $\pathLL s$ of $\design d$ such that $\dual{\pathLL s}$ is a path of $\design e$. We write this path $\interseq{\design d}{\design e}$, and the good intuition is that it corresponds to the sequence of actions followed by the interaction between $\design d$ and $\design e$ on the side of $\design d$. An alternative way defining orthogonality is then given by the following proposition. 
\begin{proposition} \label{perp-path0} $\design d \perp \design e$ if and only if there exists a path $\pathLL s$ of $\design d$ such that $\dual{\pathLL s}$ is a path of $\design e$. \end{proposition} At the level a behaviour $\beh B$, the set of visitable paths describes all the possible interactions between a design of $\beh B$ and a design of $\beh B^\perp$. \begin{definition} A path $\pathLL s$ is \defined{visitable} in a behaviour $\beh B$ if there exist cut-free designs $\design d \in \beh B$ and $\design e \in \beh B^\perp$ such that $\pathLL s = \interseq{\design d}{\design e}$. The set of visitable paths of $\beh B$ is written $\visit B$. \end{definition} Note that for every behaviour $\beh B$, $\dual{\visit B} = \visit{B^\perp}$. \subsection{Regularity, Purity and Connectives} \label{sub-regpur} The meaning of regularity and purity has been discussed in the introduction. After giving the formal definitions, we prove that regularity is stable under all the connectives constructions. We also show that purity may fail with $\multimap$, and only a weaker form called \imp{quasi-purity} is always preserved. \begin{definition}\label{reg} $\beh B$ is \defined{regular} if the following conditions are satisfied: \begin{itemize} \item for all $\design d \in |B|$ and all path $\pathLL s$ of $\design d$, $\pathLL s \in \visit{B}$, \item for all $\design d \in |B^\perp|$ and all path $\pathLL s$ of $\design d$, $\pathLL s \in \visit{B^\perp}$, \item The sets $\visit{B}$ and $\visit{B^\perp}$ are stable under shuffle. \end{itemize} where the operation of \defined{shuffle} ($\shuffle$) on paths corresponds to an interleaving of actions respecting alternation of polarities, and is defined below. \end{definition} Let $\proj{\pathLL s}{\pathLL s'}$ refer to the subsequence of $\pathLL s$ containing only the actions that occur in $\pathLL s'$. Let $\pathLL s$ and $\pathLL t$ be paths of same polarity, let $S$ and $T$ be sets of paths of same polarity. We define: \begin{itemize} \item $\pathLL s \shuffle \pathLL t = \setst{\pathLL u \mbox{ path formed with actions from } \pathLL s \mbox{ and } \pathLL t}{\proj{\pathLL u}{\pathLL s} = \pathLL s \mbox{ and } \proj{\pathLL u}{\pathLL t} = \pathLL t}$ if $\pathLL s, \pathLL t$ negative, \item $\pathLL s \shuffle \pathLL t = \setst{\kappa^+ \pathLL u \mbox{ path}}{\pathLL u \in \pathLL s' \shuffle \pathLL t'}$ if $\pathLL s = \kappa^+ \pathLL s'$ and $\pathLL t = \kappa^+ \pathLL t'$ positive with same first action, \item $S \shuffle T = \setst{\pathLL u \mbox{ path}}{\exists \pathLL s \in S, \exists \pathLL t \in T \mbox{ such that } \pathLL s \shuffle \pathLL t \mbox{ is defined and } \pathLL u \in \pathLL s \shuffle \pathLL t}$, \end{itemize} In fact, a behaviour $\beh B$ is regular if every path formed with actions of the incarnation of $\beh B$, even mixed up, is a visitable path of $\beh B$, and similarly for $\beh B^\perp$. Remark that regularity is a property of both a behaviour and its orthogonal since the definition is symmetrical: $\beh B$ is regular if and only if $\beh B^\perp$ is regular. \begin{definition} A behaviour $\beh B$ is \defined{pure} if every $\maltese$-ended path $\pathLL s \maltese \in \visit B$ is \defined{extensible}, i.e., there exists a proper positive action $\kappa^+$ such that $\pathLL s \kappa^+ \in \visit{B}$. \end{definition} Purity ensures that when an interaction encounters $\maltese$, this does not correspond to a real error but rather to a partial computation, as it is possible to continue this interaction. 
Note that daimons are necessarily present in all behaviours since the converse property is always true: if $\pathLL s \kappa^+ \in \visit{B}$ then $\pathLL s \maltese \in \visit{B}$. \begin{proposition} \label{prop_reg_stable} Regularity is stable under $\shpos$, $\shneg$, $\oplus$, $\otimes$ and $\multimap$. \end{proposition} \begin{proposition} \label{prop_pure_stable} Purity is stable under $\shpos$, $\shneg$, $\oplus$ and $\otimes$. \end{proposition} Unfortunately, when $\beh N$ and $\beh P$ are pure, $\beh N \multimap \beh P$ is not necessarily pure, even under regularity assumption. However, a weaker form of purity holds for $\beh N \multimap \beh P$. \begin{definition} A behaviour $\beh B$ is \defined{quasi-pure} if all the $\maltese$-ended \imp{well-bracketed} paths in $V_{\beh B}$ are extensible. \end{definition} We recall that a path $\pathLL s$ is \defined{well-bracketed} if, for every justified action $\kappa$ in $\pathLL s$, when we write $\pathLL s = \pathLL s_0 \kappa' \pathLL s_1 \kappa \pathLL s_2$ where $\kappa'$ justifies $\kappa$, all the actions in $\pathLL s_1$ are hereditarily justified by $\kappa'$. \begin{proposition} \label{prop_arrow_princ} If $\beh N$ and $\beh P$ are quasi-pure and regular then $\beh N \multimap \beh P$ is quasi-pure. \end{proposition} \section{Inductive Data Types} \label{sec-induct} Some important contributions are presented in this section. We interpret inductive data types as positive behaviours, and we prove an internal completeness result allowing us to make explicit the structure of fixed points. Regularity and purity of data follows. Abusively, we denote the positive behaviour $\{\maltese\}$ by $\maltese$ all along this section. \subsection{Inductive Data Types as Kleene Fixed Points} \label{data_types_kleene} We define the \imp{data patterns} via a type language and interpret them as behaviours, in particular $\mu$ is interpreted as a least fixed point. \imp{Data behaviours} are the interpretation of \imp{steady} data patterns. Suppose given a countably infinite set $\mathcal V$ of second-order variables: $X, Y, \dots \in \mathcal V$. Let $\mathcal S' = \mathcal S \setminus \{\blacktriangle, \pi_1, \pi_2, \wp\}$ and define the set of \defined{constants} $\mathrm{Const} = \setst{\beh C_a}{a \in \mathcal S'}$ which contains a behaviour $\beh C_a = \{\posdes{x_0}{a}{\vect{\Omega^-}}\}^{\perp\perp}$ (where $\Omega^- := \negdes{a}{\vect{x^a}}{\Omega}$) for each $a \in \mathcal S'$, i.e., such that $a$ is not the name of a connective. Remark that $V_{\beh C_a} = \{\maltese \hspace{.1cm} , \hspace{.1cm} \posdes{x_0}{a}{\vect x}\}$, thus $\beh C_a$ is regular and pure. \begin{definition} The set $\mathcal P$ of \defined{data patterns} is generated by the inductive grammar: \[ A, B ::= X \in \mathcal V ~~~\boldsymbol |~~~ a \in \mathcal S' ~~~\boldsymbol |~~~ A \oplus^+ B ~~~\boldsymbol |~~~ A \otimes^+ B ~~~\boldsymbol |~~~ \mu X. A\] \end{definition} The set of free variables of a data pattern $A \in \mathcal P$ is denoted by $\mathrm{FV}(A)$. \begin{example} \label{ex-patterns} Let $b, n, l, t \in \mathcal S'$ and $X \in \mathcal V$. 
The data types given as examples in the introduction can be written in the language of data patterns as follows: \begin{align*} \mathrm{\mathbb Bool} & = b \oplus^+ b \hspace{1.8cm} \mathrm{\mathbb Nat} = \mu X.(n \oplus^+ X) \hspace{1.8cm} \mathrm{\mathbb List}_{A} = \mu X.(l \oplus^+ (A \otimes^+ X)) \\ & \mathrm{\mathbb Tree}_A = \mu X.(t \oplus^+ (A \otimes^+ \mathrm{\mathbb List}_X)) = \mu X.(t \oplus^+ (A \otimes^+ \mu Y.(l \oplus^+ (X \otimes^+ Y)))) \end{align*} \end{example} Let $\mathcal B^+$ be the set of positive behaviours. Given a data pattern $A \in \mathcal P$ and an environment $\sigma$, i.e., a function that maps free variables to positive behaviours, the interpretation of $A$ in the environment $\sigma$, written $\interpret{A}^{\sigma}$, is the positive behaviour defined by: \begin{align*} & \interpret{X}^{\sigma} = \sigma(X) & \interpret{A \oplus^+ B}^{\sigma} & = (\shneg \interpret{A}^{\sigma}) \oplus (\shneg \interpret{B}^{\sigma}) \\ & \interpret{a}^{\sigma} = \beh C_a & \interpret{A \otimes^+ B}^{\sigma} & = (\shneg \interpret{A}^{\sigma}) \otimes (\shneg \interpret{B}^{\sigma}) \\ & \interpret{\mu X. A}^{\sigma} = \mathrm{lfp}(\phi^A_{\sigma}) \end{align*} where $\mathrm{lfp}$ stands for the least fixed point, and the function $\phi^A_{\sigma}: \mathcal B^+ \to \mathcal B^+, \beh P \mapsto \interpret{A}^{\sigma, X\mapsto \beh P}$ is well defined and has a least fixed point by the Knaster-Tarski fixed point theorem, as shown by Baelde, Doumane and Saurin \cite{BDS}. Abusively, we may write $\oplus^+$ and $\otimes^+$, instead of $(\shneg \cdot) \oplus (\shneg \cdot)$ and $(\shneg \cdot) \otimes (\shneg \cdot)$ respectively, for behaviours. We call an environment $\sigma$ regular (resp. pure) if its image contains only regular (resp. pure) behaviours. The notation $\sigma, X\mapsto \beh P$ stands for the environment $\sigma$ where the image of $X$ has been changed to $\beh P$. In order to understand the structure of the fixed point behaviours that interpret the data patterns of the form $\mu X.A$, we need a constructive approach, thus the Kleene fixed point theorem is better suited than the Knaster-Tarski one. We now prove that we can apply this theorem. Recall the following definitions and theorem. A partial order is a \defined{complete partial order} (CPO) if each directed subset has a supremum, and there exists a smallest element, written $\bot$. A function $f : E \to F$ between two CPOs is \defined{Scott-continuous} (or simply continuous) if for every directed subset $D \subseteq E$ we have $\bigvee_{x \in D} f(x) = f(\bigvee_{x \in D}x)$. \begin{theorem}[Kleene fixed point theorem] Let $L$ be a CPO and let $f:L \to L$ be Scott-continuous. The function $f$ has a least fixed point, defined by \[\mathrm{lfp}(f) = \bigvee_{n \in \mathbb N}f^n(\bot)\] \end{theorem} The set $\mathcal B^+$ ordered by $\subseteq$ is a CPO, with least element $\maltese$; indeed, given a directed subset $\mathbb P \subseteq \mathcal B^+$, we have $\bigvee \mathbb P = (\bigcup \mathbb P)^{\perp\perp}$. Hence the next proposition shows that we can apply the theorem. \begin{proposition} \label{prop_scott_conti} Given a data pattern $A \in \mathcal P$, a variable $X \in \mathcal V$ and an environment $\sigma : \mathrm{FV}(A) \setminus \{X\} \to \mathcal B^+$, the function $\phi^{A}_{\sigma}$ is Scott-continuous.
\end{proposition} \begin{corollary} \label{coro_kleene_sup} For every $A \in \mathcal P$, $X \in \mathcal V$ and $\sigma : \mathrm{FV}(A) \setminus \{X\} \to \mathcal B^+$, \[\interpret{\mu X.A}^{\sigma} = \bigvee_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\maltese) = (\bigcup_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\maltese))^{\perp\perp}\] \end{corollary} This result gives an explicit formulation for least fixed points. However, the $\perp\perp$-closure might add new designs which were not in the union, making it difficult to know the exact content of such a behaviour. The point of next subsection will be to give an internal completeness result proving that the closure is actually not necessary. Let us finish this subsection by defining a restricted set of data patterns so as to exclude the degenerate ones. Consider for example ${\mathbb List_{A}}' = \mu X. (A \otimes^+ X)$, a variant of $\mathbb List_{A}$ (see Example~\ref{ex-patterns}) which misses the base case. It is degenerate in the sense that the base element, here the empty list, is interpreted as the design $\maltese$. This is problematic: an interaction going through a whole list will end with an error, making it impossible to explore a pair of lists for example. The pattern $\mathbb Nat' = \mu X.X$ is even worse since $\interpret{\mathbb Nat'} = \maltese$. The point of steady data patterns is to ensure the existence of a basis; this will be formalised in Lemma~\ref{lem_cb_basis}. \begin{definition} The set of \defined{steady} data patterns is the smallest subset $\mathcal P^s \subseteq \mathcal P$ such that: \begin{itemize} \item $\mathcal S' \subseteq \mathcal P^s$ \item If $A \in \mathcal P^s$ and $B$ is such that $\interpret{B}^\sigma$ is pure if $\sigma$ is pure, then $A \oplus^+ B\in \mathcal P^s$ and $B \oplus^+ A \in \mathcal P^s$ \item If $A \in \mathcal P^s$ and $B\in \mathcal P^s$ then $A \otimes^+ B \in \mathcal P^s$ \item If $A \in \mathcal P^s$ then $\mu X.A \in \mathcal P^s$ \end{itemize} \end{definition} The condition on $B$ in the case of $\oplus^+$ admits data patterns which are not steady, possibly with free variables, but ensuring the preservation of purity, i.e., type safety; the basis will come from side $A$. We will prove (\textsection~\ref{reg_pure_data}) that behaviours interpreting steady data patterns are pure, thus in particular a data pattern of the form $\mu X.A$ is steady if the free variables of $A$ all appear on the same side of a $\oplus^+$ and under the scope of no other $\mu$ (since purity is stable under $\shpos, \shneg, \oplus, \otimes$). We claim that steady data patterns can represent every type of finite data. \begin{definition} A \defined{data behaviour} is the interpretation of a closed steady data pattern. \end{definition} \subsection{Internal Completeness for Infinite Union} Our main result is an internal completeness theorem, stating that an infinite union of \imp{simple} regular behaviours with increasingly large incarnations is a behaviour: $\perp\perp$-closure is useless. \begin{definition} \begin{itemize} \item A \defined{slice} is a design in which all negative subdesigns are either $\Omega^-$ or of the form $a(\vect x).\design p_a$, i.e., at most unary branching. $\design c$ is a \defined{slice of} $\design d$ if $\design c$ is a slice and $\design c \sqsubseteq \design d$. A slice $\design c$ of $\design d$ is \defined{maximal} if for any slice $\design c'$ of $\design d$ such that $\design c \sqsubseteq \design c'$, we have $\design c = \design c'$. 
\item A behaviour $\beh B$ is \defined{simple} if for every design $\design d \in |\beh B|$: \begin{enumerate} \item $\design d$ has a finite number of maximal slices, and \item every positive action of $\design d$ is justified by the immediate previous negative action. \end{enumerate} \end{itemize} \end{definition} Condition (2) of simplicity ensures that, given $\design d \in |\beh B|$ and a slice $\design c \sqsubseteq \design d$, one can find a path of $\design c$ containing all the positive proper actions of $\design c$ until a given depth; thus by condition (1), there exists $k \in \mathbb N$ depending only on $\design d$ such that $k$ paths can do the same in $\design d$. Now suppose $(\beh A_n)_{n \in \mathbb N}$ is an infinite sequence of simple regular behaviours such that for all $n \in \mathbb N$, $|\beh A_n| \subseteq |\beh A_{n+1}|$ (in particular we have $\beh A_n \subseteq \beh A_{n+1}$). \begin{theorem} \label{thm_union_beh} The set $\bigcup_{n \in \mathbb N} \beh A_n$ is a behaviour. \end{theorem} A union of behaviours is not a behaviour in general. In particular, counterexamples are easily found if releasing either the inclusion of incarnations or the simplicity condition. Moreover, our proof for this theorem relies strongly on regularity. Under the same hypotheses we can prove $V_{\bigcup_{n \in \mathbb N} \beh A_n} = \bigcup_{n \in \mathbb N} V_{\beh A_n}$ and $|\bigcup_{n \in \mathbb N} \beh A_n| = \bigcup_{n \in \mathbb N} |\beh A_n|$, hence the following corollary. \begin{corollary} \label{coro_reg_pur} \begin{itemize} \item $\bigcup_{n \in \mathbb N} \beh A_n$ is simple and regular; \item if moreover all the $\beh A_n$ are pure then $\bigcup_{n \in \mathbb N} \beh A_n$ is pure. \end{itemize} \end{corollary} \subsection{Regularity and Purity of Data} \label{reg_pure_data} The goal of this subsection is to show that the interpretation of data patterns of the form $\mu X.A$ can be expressed as an infinite union of behaviours $(\beh A_n)_{n \in \mathbb N}$ satisfying the hypotheses of Theorem~\ref{thm_union_beh}, in order to deduce regularity and purity. We will call an environment $\sigma$ simple if its image contains only simple behaviours. \begin{lemma} \label{lem_incarn_hier} For all $A \in \mathcal P$, $X \in \mathcal V$, $\sigma : \mathrm{FV}(A) \setminus \{X\} \to \mathcal B^+$ simple and regular\footnote{The hypothesis ``simple and regular'' has been added, compared to the CSL version of this article, for correction.}, and $n \in \mathbb N$ we have \[|(\phi^{A}_{\sigma})^n(\maltese)| \subseteq |(\phi^{A}_{\sigma})^{n+1}(\maltese)|\] \end{lemma} \begin{proposition} \label{prop_interp_reg} For all $A \in \mathcal P$ and simple regular environment $\sigma$, $\interpret{A}^{\sigma}$ is simple regular. \end{proposition} \begin{proof} By induction on data patterns. If $A = X$ or $A = a$ the conclusion is immediate. If $A = A_1 \oplus^+ A_2$ or $A = A_1 \otimes^+ A_2$ then regularity comes from Proposition~\ref{prop_reg_stable}, and simplicity is easy since the structure of the designs in $\interpret{A}^{\sigma}$ is given by internal completeness for the logical connectives (Theorem~\ref{thm_intcomp_all}). So suppose $A = \mu X.A_0$. By induction hypothesis, for every simple regular behaviour $\beh P \in \mathcal B^+$ we have $\phi^{A_0}_{\sigma}(\beh P) = \interpret{A_0}^{\sigma, X \mapsto \beh P}$ simple regular. From this, it is straightforward to show by induction that for every $n \in \mathbb N$, $(\phi^{A_0}_{\sigma})^n(\maltese)$ is simple regular. 
Moreover, for every $n \in \mathbb N$ we have $|(\phi^{A_0}_{\sigma})^n(\maltese)| \subseteq |(\phi^{A_0}_{\sigma})^{n+1}(\maltese)|$ by Lemma~\ref{lem_incarn_hier}, thus by Corollary~\ref{coro_kleene_sup} and Theorem~\ref{thm_union_beh}, $\interpret{\mu X.A_0}^{\sigma} = \bigvee_{n \in \mathbb N} (\phi^{A_0}_{\sigma})^n(\maltese) = (\bigcup_{n \in \mathbb N}(\phi^{A_0}_{\sigma})^n(\maltese))^{\perp\perp} = \bigcup_{n \in \mathbb N}(\phi^{A_0}_{\sigma})^n(\maltese)$. Consequently, by Corollary~\ref{coro_reg_pur}, $\interpret{\mu X.A_0}^{\sigma}$ is simple regular. \end{proof} Remark that we have proved at the same time, using Theorem~\ref{thm_union_beh}, that behaviours interpreting data patterns $\mu X.A$ admit an explicit construction: \begin{proposition} \label{prop_reg_union} If $A \in \mathcal P$, $X \in \mathcal V$, and $\sigma : \mathrm{FV}(A)\setminus \{X\} \to \mathcal B^+$ is simple regular, \[\interpret{\mu X.A}^{\sigma} = \bigcup_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\maltese)\] \end{proposition} \begin{corollary} \label{coro_data_reg} Data behaviours are regular. \end{corollary} We now move on to proving purity. The proof that the interpretation of a steady data pattern $A$ is pure relies on the existence of a basis for $A$ (Lemma~\ref{lem_cb_basis}). Let us first widen (to $\maltese$-free paths) and express in a different way (for $\maltese$-ended paths) the notion of extensible visitable path. \begin{definition} Let $\beh B$ be a behaviour. \begin{itemize} \item A $\maltese$-free path $\pathLL s \in \visit B$ is \defined{extensible} if there exists $\pathLL t \in \visit B$ of which $\pathLL s$ is a strict prefix. \item A $\maltese$-ended path $\pathLL s\maltese \in \visit B$ is \defined{extensible} if there exists a positive action $\kappa^+$ and $\pathLL t \in \visit B$ of which $\pathLL s\kappa^+$ is a prefix. \end{itemize} \end{definition} Write $V_{\beh B}^{max}$ for the set of maximal, i.e., non-extensible, visitable paths of $\beh B$. \begin{lemma} \label{lem_cb_basis} Every steady data pattern $A \in \mathcal P^s$ has a basis, i.e., a simple regular behaviour $\beh B$ such that for every simple regular environment $\sigma$ we have \begin{itemize} \item $\beh B \subseteq \interpret{A}^{\sigma}$, \item for every path $\pathLL s \in V_{\beh B}$, there exists a $\maltese$-free path $\pathLL t \in V_{\beh B}^{max}$ extending $\pathLL s$ (in particular $\beh B$ is pure), \item $V_{\beh B}^{max} \subseteq V_{\interpret{A}^{\sigma}}^{max}$. \end{itemize} \end{lemma} \begin{proof}[Proof (idea)] If $A = a$, a basis is $\beh C_a$. If $A = A_1 \oplus^+ A_2$, and $A_i$ is steady with basis $\beh B_i$, then $\oplus_i \shneg \beh B_i:=\iota_i\langle\shneg \beh B_i\rangle$ is a basis for $A$. If $A = A_1 \otimes^+ A_2$, a basis is $\beh B_1 \otimes^+ \beh B_2$ where $\beh B_1$ and $\beh B_2$ are bases of $A_1$ and $A_2$ respectively. If $A = \mu X.A_0$, its basis is the same as that of $A_0$. \end{proof} \begin{proposition} \label{prop_basis} If $A \in \mathcal P^s$ has basis $\beh B$, $X \in \mathcal V$, and $\sigma : \mathrm{FV}(A)\setminus \{X\} \to \mathcal B^+$ is simple regular, \[\interpret{\mu X.A}^{\sigma} = \bigcup_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\beh B)\] \end{proposition} \begin{proof} Since $\beh B$ is a basis for $A$ we have $\maltese \subseteq \beh B \subseteq \interpret{A}^{\sigma, X \mapsto \maltese} = \phi^{A}_{\sigma}(\maltese)$.
The Scott-continuity of the function $\phi^{A}_{\sigma}$ implies that it is increasing, thus $(\phi^{A}_{\sigma})^n(\maltese) \subseteq (\phi^{A}_{\sigma})^n(\beh B) \subseteq (\phi^{A}_{\sigma})^{n+1}(\maltese)$ for all $n \in \mathbb N$. Hence, using Proposition~\ref{prop_reg_union}, $\interpret{\mu X.A}^{\sigma} = \bigcup_{n \in \mathbb N}(\phi^{A}_{\sigma})^n(\maltese) = \bigcup_{n \in \mathbb N} (\phi^{A}_{\sigma})^n(\beh B)$. \end{proof} \begin{proposition} For every $A \in \mathcal P^s$ and every simple, regular and pure environment $\sigma$, $\interpret{A}^\sigma$ is pure. \end{proposition} \begin{proof} By induction on $A$. The base cases are immediate and the connective cases are solved using Proposition~\ref{prop_pure_stable}. Suppose now $A = \mu X.A_0$, where $A_0$ is steady with basis $\beh B_0$. We have $\interpret{A}^{\sigma} = \bigcup_{n \in \mathbb N} (\phi^{A_0}_{\sigma})^n(\beh B_0)$ by Proposition~\ref{prop_basis}; let us prove that it satisfies the hypotheses needed to apply Corollary~\ref{coro_reg_pur}(2). By induction hypothesis and Proposition~\ref{prop_interp_reg}, for every simple, regular and pure behaviour $\beh P \in \mathcal B^+$ we have $\phi^{A_0}_{\sigma}(\beh P) = \interpret{A_0}^{\sigma, X \mapsto \beh P}$ simple, regular and pure, hence it is easy to show by induction that for every $n \in \mathbb N$, $(\phi^{A_0}_{\sigma})^n(\beh B_0)$ is as well. Moreover, for every $n \in \mathbb N$ we prove that $|(\phi^{A_0}_{\sigma})^n(\beh B_0)| \subseteq |(\phi^{A_0}_{\sigma})^{n+1}(\beh B_0)|$ similarly to Lemma~\ref{lem_incarn_hier}, replacing $\maltese$ by the basis $\beh B_0$. Finally, by Corollary~\ref{coro_reg_pur}, $\interpret{A}^{\sigma}$ is pure. \end{proof} \begin{corollary} \label{coro_data_pur} Data behaviours are pure. \end{corollary} \begin{remark} Although here the focus is on the interpretation of data patterns, we should say a word about the interpretation of (polarised) $\mu$MALL formulas, which are a bit more general. These formulas are generated by: \begin{align*} P, Q & \hspace{.3cm} ::= \hspace{.3cm} X_P ~~~\boldsymbol |~~~ X_N^\perp ~~~\boldsymbol |~~~ 1 ~~~\boldsymbol |~~~ 0 ~~~\boldsymbol |~~~ M \oplus N ~~~\boldsymbol |~~~ M \otimes N ~~~\boldsymbol |~~~ \shpos N ~~~\boldsymbol |~~~ \mu X.P \\ M, N & \hspace{.3cm} ::= \hspace{.3cm} P^\perp \end{align*} where the usual involutive negation hides the negative connectives and constants, through the dualities $1/\bot$, $0/\top$, $\oplus/\with$, $\otimes/\parr$, $\shpos/\shneg$, $\mu/\nu$. The interpretation as ludics behaviours, given in~\cite{BDS}, is as follows: $1$ is interpreted as a constant behaviour $\beh C_a$, $0$ is the daimon $\maltese$, the positive connectives match their ludics counterparts, $\mu$ is interpreted as the least fixed point of a function $\phi^A_\sigma$ similarly to data patterns, and the negation corresponds to the orthogonal. Since in ludics constants and $\maltese$ are regular, and since regularity is preserved by the connectives (Proposition~\ref{prop_reg_stable}) and by orthogonality, the only thing we need in order to prove that all the behaviours interpreting $\mu$MALL formulas are regular is a generalisation of regularity stability under fixed points (for now we only have it in our particular case: Corollary~\ref{coro_reg_pur} together with Proposition~\ref{prop_reg_union}). Note however that interpretations of $\mu$MALL formulas are not all pure. Indeed, as we will see in the next section, orthogonality (introduced through the connective $\multimap$) does not preserve purity in general.
\end{remark} \section{Functional Types} \label{sec-func} In this section we define \imp{functional behaviours} which combine data behaviours with the connective $\multimap$. A behaviour of the form $\beh N \multimap \beh P$ is the set of designs which, when interacting with a design of type $\beh N$, output a design of type $\beh P$; this is exactly the meaning of its definition $\beh N \multimap \beh P := (\beh N \otimes \beh P^\perp)^\perp$. We prove that some particular higher-order functional types -- where functions are taken as arguments, typically $(A \multimap B) \multimap C$ -- are exactly those which fail to be pure, and we interpret this result from a computational point of view. \subsection{Where Impurity Arises} \label{sub_func_impur} We have proved that data behaviours are regular and pure. However, if we introduce functional behaviours with the connective $\multimap$, purity does not hold in general. Proposition~\ref{prop_func_quasi_pure} indicates that a weaker property, quasi-purity, holds for functional types, and Proposition~\ref{prop_main} identifies exactly the cases where purity fails. Let us write $\mathcal D$ for the set of data behaviours. \begin{definition} A \defined{functional behaviour} is a behaviour inductively generated by the grammar below, where $\beh P \multimap^+ \beh Q$ stands for $\shpos((\shneg \beh P) \multimap \beh Q)$. \[ \beh P, \beh Q ::= \beh P_0 \in \mathcal D ~~~\boldsymbol |~~~ \beh P \oplus^+ \beh Q ~~~\boldsymbol |~~~ \beh P \otimes^+ \beh Q ~~~\boldsymbol |~~~ \beh P \multimap^+ \beh Q \] \end{definition} From Propositions~\ref{prop_reg_stable}, \ref{prop_pure_stable} and \ref{prop_arrow_princ} we easily deduce the following result. \begin{proposition} \label{prop_func_quasi_pure} Functional behaviours are regular and quasi-pure. \end{proposition} For the next proposition, consider \defined{contexts} defined inductively as follows (where $\beh P$ is a functional behaviour): \[\mathcal C ::= [~] ~~~\boldsymbol |~~~ \mathcal C \oplus^+ \beh P ~~~\boldsymbol |~~~ \beh P \oplus^+ \mathcal C ~~~\boldsymbol |~~~ \mathcal C \otimes^+ \beh P ~~~\boldsymbol |~~~ \beh P \otimes^+ \mathcal C ~~~\boldsymbol |~~~ \beh P \multimap^+ \mathcal C\] \begin{proposition} \label{prop_main} A functional behaviour $\beh P$ is impure if and only if there exist contexts $\mathcal C_1, \mathcal C_2$ and functional behaviours $\beh Q_1, \beh Q_2, \beh R$ with $\beh R \notin \mathrm{Const}$ such that \[\beh P = \mathcal C_1[~\mathcal C_2[\beh Q_1 \multimap^+ \beh Q_2] \multimap^+ \beh R~]\] \end{proposition} \subsection{Example and Discussion} \label{ex_discuss} Proposition~\ref{prop_main} states that a functional behaviour which takes functions as arguments is not pure: some of its visitable paths end with a daimon $\maltese$, and there is no way to extend them. In terms of proof-search, playing the daimon is like giving up; from a computational point of view, the daimon appearing at the end of an interaction expresses the sudden interruption of the computation. In order to understand why such an interruption can occur in the specific case of higher-order functions, consider the following example which illustrates the proposition. \begin{example} \label{ex-final} Let $\beh Q_1, \beh Q_2, \beh 1$ be functional behaviours, with $\beh 1 \in \mathrm{Const}$.
Define $\beh{Bool} = \beh 1 \oplus^+ \beh 1$ and consider the behaviour $\beh P = (\beh Q_1 \multimap^+ \beh Q_2) \multimap^+ \beh{Bool}$: this is a type of functions which take a function as argument and output a boolean. Let $\alpha_1, \alpha_2, \beta$ be respectively the first positive actions of the designs of $\beh Q_1, \beh Q_2, \beh 1$. It is possible to exhibit a design $\design p \in \beh P$ and a design $\design n \in \beh P^\perp$ such that the visitable path $\pathLL s = \interseq{\design p}{\design n}$ is $\maltese$-ended and maximal in $\visit P$; in other words, $\pathLL s$ is a witness of the impurity of $\beh P$. The path $\pathLL s$ contains the actions $\alpha_1$ and $\overline{\alpha_2}$ in such a way that it cannot be extended with $\beta$ without breaking the P-visibility condition, and there is no other available action in designs of $\beh P$ to extend it. Reproducing the designs $\design p$ and $\design n$ and the path $\pathLL s$ here would be of little interest since those objects are too large to be easily readable ($\pathLL s$ visits the entire design $\design p$, which contains 11 actions). We however give an intuition in the style of game semantics: Fig.~\ref{ho-play} represents $\pathLL s$ as a legal play in a strategy of type $\beh P = (\beh Q_1 \multimap^+ \beh Q_2) \multimap^+ \beh{Bool}$ (note that only one ``side'' $\oplus_1 \shneg \beh 1$ of $\beh{Bool}$ is represented, corresponding for example to \texttt{True}, because we cannot play on both sides). This analogy is informal: it should stand as an intuition rather than as a precise correspondence with ludics; for instance, and contrary to the way it is presented in game semantics, the questions are asked on the connectives, while the answers are given in the sub-types of $\beh P$. On the right are given the actions in $\pathLL s$ corresponding to the moves played. The important thing to remark is the following: if a move $b$ corresponding to action $\beta$ were played instead of $\maltese$ at the end of this play, it would break the P-visibility of the strategy, since this move would be justified by move $q_{\shneg}$. \begin{figure} \centering \includegraphics[scale=1]{play_like.pdf} \caption{Representation of path $\pathLL s$ from Example~\ref{ex-final} in the style of a legal play} \label{ho-play} \end{figure} The computational interpretation of the $\maltese$-ended interaction between $\design p$ and $\design n$ is the following: a program $p$ of type $\beh P$ launches a child process $p'$ to compute the argument of type $\beh Q_1 \to \beh Q_2$, but $p$ starts to give a result in $\beh{Bool}$ before the execution of $p'$ terminates, leading to a situation where $p$ cannot compute the whole data in $\beh{Bool}$. The interaction outputs $\maltese$, i.e., the answer given in $\beh{Bool}$ by $p$ is incomplete. Moreover, by Proposition~\ref{prop_func_quasi_pure}, functional behaviours are quasi-pure, therefore the maximal $\maltese$-ended visitable paths are necessarily not well-bracketed. This is indeed the case of $\pathLL s$: remark for example that the move $q_{\oplus_1}$ appears between $a_1$ and its justification $q_{\shneg}$ in the sequence, but $q_{\oplus_1}$ is not hereditarily justified by $q_{\shneg}$. In HO games, well-bracketedness is a well-studied notion, and relaxing it introduces control operators in programs.
If we extend such an argument to ludics, this would mean that the appearance of $\maltese$ in the execution of higher-order functions can only happen in the case of programs with control operators such as \imp{jumps}, i.e. programs which are not purely functional. \end{example} \section{Conclusion} This article is a contribution to the exploration of the behaviours of linear ludics from a computational perspective. Our focus is on the behaviours representing data types and functional types. Inductive data types are interpreted using the constructions for the logical connectives and a least fixed point operation. Adopting a constructive approach, we provide an internal completeness result for fixed points, which unveils the structure of data behaviours. This leads us to proving that such behaviours are regular -- the key notion for the characterisation of MALL in ludics -- and pure -- that is, type safe. But behaviours interpreting types of functions taking functions as arguments are impure; for well-bracketed interactions, corresponding to the evaluation of purely functional programs, safety is however guaranteed. \subparagraph{Further Work} Two directions for future research arise naturally: \begin{itemize} \item Extending our study to greatest fixed points $\nu X.A$, i.e., coinduction, is the next objective. The Knaster--Tarski theorem ensures that such greatest fixed point behaviours exist \cite{BDS}, but the Kleene fixed point theorem does not apply here, hence we cannot find an explicit form for coinductive behaviours in the same way as we did for the inductive ones. However it is intuitively clear that, compared to least fixed points, greatest ones add the infinite ``limit'' designs to (the incarnation of) behaviours. For example, if $\mathbb Nat_{\omega} = \nu X. (1 \oplus X)$ then we should have $|\interpret{\mathbb Nat_{\omega}}| = |\interpret{\mathbb Nat}| \cup \{\design d_\omega\}$ where $\design d_\omega = \mathrm{succ}(\design d_\omega) = x_0|\iota_2\langle\shneg(x).{\design d_\omega}^{x}\rangle$. \item Another direction would be to get a complete characterisation of $\mu$MALL in ludics, by proving that a behaviour is regular -- and possibly satisfying a supplementary condition -- if and only if it is the denotation of a $\mu$MALL formula. \end{itemize} \subparagraph{Acknowledgement} I thank Claudia Faggian, Christophe Fouqueré, Thomas Seiller and the anonymous referees for their wise and helpful comments. \bibliographystyle{plainurl}
\section{Introduction} \label{sec:intro} The two main goals of non-equilibrium statistical mechanics are the derivation of macroscopic equations from microscopic dynamics and the calculation of transport coefficients~\cite{Balescu1975, balescu97, ResibDeLeen77}. The starting point is a kinetic equation that governs the evolution of the distribution function, such as the Boltzmann, the Vlasov, the Lenard-Balescu or the Fokker-Planck equation. Although most kinetic equations involve a (quadratically) nonlinear collision operator, it is quite often the case that for the calculation of transport coefficients it is sufficient to consider a linearized collision operator and, consequently, a linearized kinetic equation. In this case, it is well known that the transport coefficients are related to the eigenvalues of the linearized collision operator~\cite[Ch. 13]{Balescu1975}, \cite[Ch. 10]{balescu97}. The goal of this article is to present some results on the analysis of transport coefficients for a particularly simple class of kinetic equations describing the problem of self-diffusion. The distribution function $f(q,p,t)$ of a tagged particle satisfies the kinetic equation \begin{equation}\label{e:kinetic} \frac{\partial f}{\partial t} + p \cdot \nabla_q f = Q f, \end{equation} where $q, \, p$ are the position and momentum of the tagged particle and $Q$ is a linear collision operator. $Q$ is a dissipative operator which acts only on the momenta and has only one collision invariant, corresponding to the conservation of the particle density. The macroscopic equation for this problem is simply the diffusion equation for the particle density $\rho(q,t) = \int f(q,p,t) \, dp$~\cite{Schroter77} \begin{equation}\label{e:diffusion} \frac{\partial \rho}{\partial t} = \sum_{i,j=1}^d D_{ij} \frac{\partial^2 \rho}{\partial x_i \partial x_j}, \end{equation} where the components of the diffusion tensor $D$ are the transport coefficients that have to be calculated from the microscopic dynamics. At least two different techniques have been developed for the calculation of transport coefficients. The first technique is based on the analysis of the kinetic equation~\eqref{e:kinetic} and, in particular, on the expansion of the distribution function in an appropriate orthonormal basis, the basis consisting of the eigenfunctions of the linear collision operator $Q$~\cite[Ch. 13]{Balescu1975}, \cite[Ch. 10]{balescu97}. Transport coefficients are then related to the eigenvalues of the collision operator. The second technique is based on the Green-Kubo formalism~\cite{KuboTodaHashitsume91}. This formalism enables us to express transport coefficients in terms of time integrals of appropriate autocorrelation functions. In particular, the diffusion coefficient is expressed in terms of the time integral of the velocity autocorrelation function \begin{equation}\label{e:Green_Kubo} D = \int_0^{+\infty} \langle p(t) \otimes p(0) \rangle \, dt. \end{equation} The equivalence between the two approaches for the calculation of transport coefficients, the one based on the analysis of the kinetic equation and the other based on the Green-Kubo formalism, has been studied in~\cite{Resibois1964}. The Green-Kubo formalism has been compared with other techniques based on the perturbative analysis of the kinetic equations, e.g.~\cite{petrosky99b,petrosky99a, brilliantov05} and the references therein.
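To fix ideas, the following short numerical sketch (ours, not part of the original analysis; all parameter values are arbitrary) estimates the Green-Kubo integral~\eqref{e:Green_Kubo} for a one-dimensional Ornstein-Uhlenbeck velocity process by averaging the empirical velocity autocorrelation function over many sample paths; the exact value $1/(\gamma \beta)$, recovered in Section~\ref{sec:Green_Kubo} below, serves as a reference.
\begin{verbatim}
import numpy as np

# Illustration only: estimate the self-diffusion coefficient of a 1D
# Ornstein-Uhlenbeck velocity process, dp = -gamma*p*dt + sqrt(2*gamma/beta)*dW,
# from the Green-Kubo formula D = int_0^infty <p(t) p(0)> dt, and compare it
# with the exact value 1/(gamma*beta) (Einstein's formula).
gamma, beta = 1.0, 2.0                 # friction and inverse temperature (arbitrary)
dt, n_steps, n_paths = 1e-3, 20000, 2000
rng = np.random.default_rng(0)

# initial velocities sampled from the Maxwellian equilibrium N(0, 1/beta)
p = rng.normal(0.0, np.sqrt(1.0 / beta), size=n_paths)
p0 = p.copy()
acf = np.empty(n_steps)
for k in range(n_steps):
    acf[k] = np.mean(p * p0)           # empirical autocorrelation <p(t_k) p(0)>
    p += -gamma * p * dt + np.sqrt(2.0 * gamma / beta * dt) * rng.normal(size=n_paths)

D_green_kubo = np.sum(acf) * dt        # Riemann sum approximating the time integral
print(D_green_kubo, 1.0 / (gamma * beta))   # close, up to Monte Carlo error
\end{verbatim}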
Many works also exist on the rigorous justification of the validity of the Green-Kubo formula for the self-diffusion coefficient~\cite{JiangZhang03, Durr_al90, Chenal06, Spohn91}. Usually the linearized collision operator is taken to be a symmetric operator in some appropriate Hilbert space. When the collision operator is the (adjoint of the) generator of a Markov process (which is the case that we will consider in this paper), the assumption of the symmetry of $Q$ is equivalent to the reversibility of the microscopic dynamics~\cite{qian02}. There are various cases, however, where the linearized collision operator is not symmetric. As examples we mention the linearized Vlasov-Landau operator in plasma physics~\cite[Eq. 13.6.2]{Balescu1975} or the motion of a charged particle in a constant magnetic field undergoing collisions with the surrounding medium~\cite[Sec. 11.3]{balescu97}. It is one of the main objectives of this paper to study the effect of the antisymmetric part of the collision operator on the diffusion tensor. In most cases (i.e. for most choices of the collision kernel) it is impossible to obtain explicit formulas for transport coefficients. The best one can hope for is the derivation of estimates on transport coefficients as functions of the parameters of the microscopic dynamics. The derivation of such estimates is quite hard when using formulas of the form~\eqref{e:Green_Kubo}. In this paper we show that the Green-Kubo formalism is equivalent to a formulation based on the solution of a Poisson equation associated to the collision operator $Q$. Furthermore, we show that this formalism is a much more convenient starting point for rigorous and perturbative analysis of the diffusion tensor. The Poisson equation (cell problem) is the standard tool for calculating homogenized coefficients in the theory of homogenization for stochastic differential equations and partial differential equations~\cite{PavlSt08}. The problem of obtaining estimates on the diffusion coefficient has been studied quite extensively in the theory of turbulent diffusion--the motion of a particle in a random, divergence-free velocity field~\cite{kramer}. In particular, the dependence of the diffusion coefficient (eddy diffusivity) on the Peclet number has been investigated. For this purpose, a very interesting theory has been developed by Avellaneda and Majda~\cite{ma:mmert, AvelMajda91}, see also~\cite{golden2, bhatta_1}. This theory is based on the introduction of an appropriate bounded (and sometimes compact) antisymmetric operator and it leads to a very systematic and rigorous perturbative analysis of the eddy diffusivity. This theory has been extended to time-dependent flows~\cite{avellanada2}. In this paper we apply the Majda-Avellaneda theory to the problem of the derivation of rigorous estimates for the diffusion tensor of a tagged particle whose distribution function satisfies a kinetic equation of the form~\eqref{e:kinetic}. We study this problem when the collision operator is the Fokker-Planck operator (i.e. the $L^2$-adjoint of the generator) of an ergodic Markov process. This assumption is not very restrictive when studying the problem of self-diffusion of a tagged particle since many dissipative integrodifferential operators are generators of Markov processes~\cite{KikuchiNegoro96}. We obtain formulas for both the symmetric and the antisymmetric parts of the diffusion tensor and we use these formulas in order to study various asymptotic limits of physical interest.
The rest of the paper is organized as follows. In Section~\ref{sec:Green_Kubo} we obtain an alternative representation for the diffusion tensor based on the solution of a Poisson equation and we present two elementary examples. In Section~\ref{sec:Diff} we apply the Majda-Avellaneda theory to the problem of self-diffusion and we study rigorously the weak and strong coupling limits for the diffusion tensor. Examples are presented in Section~\ref{sec:examples}. Conclusions and open problems are discussed in Section~\ref{sec:conclusions}. \section{The Green-Kubo Formula} \label{sec:Green_Kubo} In this section we show that we can rewrite the Green-Kubo formula for the diffusion coefficient in terms of the solution of an appropriate Poisson equation. We will consider a slight generalization of~\eqref{e:kinetic}, namely we will consider the long-time dynamics of the dynamical system \begin{equation}\label{e:x} \frac{d x}{d t} = V(z), \end{equation} where $z$ is an ergodic Markov process with state space $\mathcal Z$, generator $\mathcal L$ and invariant measure $\mu(dz)$.\footnote{We remark that the process $z$ can be $x$ itself, or the restriction of $x$ to the unit torus. This is precisely the case in turbulent diffusion and in the Langevin equation in a periodic potential.} The kinetic equation for the distribution function is \begin{equation}\label{e:kinetic_gen} \frac{\partial f}{\partial t} + V(z) \cdot \nabla_x f = \mathcal L^* f, \end{equation} where $\mathcal L^*$ denotes the $L^2(\mathcal Z)$-adjoint of the generator $\mathcal L$. The kinetic equation~\eqref{e:kinetic} is of the form~\eqref{e:kinetic_gen} for $V(p) = p$, where we assume that the collision operator (which acts only on the velocities) is the Fokker-Planck operator of an ergodic Markov process, which can be a diffusion process (e.g. the Ornstein-Uhlenbeck process, in which case~\eqref{e:kinetic_gen} becomes the Fokker-Planck equation), a jump process (as in the model studied in~\cite{Ellis73}), or a L\'{e}vy process. \begin{prop}\label{prop:green_kubo} Let $x(t)$ be the solution of~\eqref{e:x}, let $z(t)$ be an ergodic Markov process with state space $\mathcal Z$, generator $\mathcal L$ and invariant measure $\mu(dz)$ and assume that $V(z)$ is centered with respect to $\mu(dz)$, $$ \int_{\mathcal Z} V(z) \, \mu(dz) = 0. $$ Then the diffusion tensor~\eqref{e:Green_Kubo} is given by \begin{equation}\label{e:deff} D = \int_{\mathcal Z} V(z) \otimes \phi(z) \, \mu(dz), \end{equation} where $\phi$ is the solution of the Poisson equation \begin{equation}\label{e:poisson} - \mathcal L \phi = V(z). \end{equation} \end{prop} \begin{proof} Let $e$ be an arbitrary unit vector. We will use the notation $D^e = D e \cdot e, \, x^e = x \cdot e$. The Green-Kubo formula for the diffusion coefficient along the direction $e$ is \begin{eqnarray} D^e &=& \int_0^{+\infty} \langle \dot{x}^{e} (t) \dot{x}^e (0) \rangle \, dt \nonumber \\ & = & \int_0^{+\infty} \langle V^e (z(t)) V^e (z(0)) \rangle \, dt. \label{e:green_kubo1} \end{eqnarray} We now calculate the correlation function in~\eqref{e:green_kubo1}. We will use the notation $z=z(t ; p)$ with $z(0;p) = p$.
We have \begin{equation}\label{e:corel} \langle V^e (z(t;p)) V^e (z(0;p)) \rangle = \int_{\mathcal Z} \int_{\mathcal Z} V^e (z) V^e (p) \rho(z,t ;p) \mu(d p) dz, \end{equation} where $\rho(z,t ; p)$ is the transition probability density of the Markov process $z$ which is the solution of the Fokker-Planck equation \begin{equation}\label{e:fokker_planck} \frac{\partial \rho}{ \partial t} = \mathcal L^* \rho, \quad \rho(z,0;p) = \delta(z-p). \end{equation} We introduce the function \begin{equation*} \overline{V}^e(t,p) := {\mathbb E} V^e(z) = \int_{\mathcal Z} V^e (z) \rho(z,t ; p) dz, \end{equation*} which is the solution of the backward Kolmogorov equation \begin{equation}\label{e:kolmogorov} \frac{\partial \overline{V}^e}{\partial t} = \mathcal L \overline{V}^e, \quad \overline{V}^e (0,p) = V^e(p). \end{equation} We can write formally the solution of this equation in the form $$ \overline{V}^e = e^{\mathcal L t} V^e(p). $$ We substitute this into~\eqref{e:corel} to obtain $$ \langle V^e (z(t;p)) V^e (z(0;p)) \rangle = \int_{\mathcal Z} \left( e^{\mathcal L t} V^e(p) \right) V^e (p) \, \mu(dp). $$ We use this now in the Green-Kubo formula~\eqref{e:green_kubo1} and, assuming that we can interchange the order of integration, we calculate \begin{eqnarray*} D^e & = & \int_0^{+\infty} \int_{\mathcal Z} \left( e^{\mathcal L t} V^e(p) \right) V^e (p) \, \mu(dp) \, dt \\ & = & \int_{\mathcal Z} \left( \int_0^{+\infty} e^{\mathcal L t} V^e(p) \, dt \right) V^e (p) \, \mu(dp) \\ & = & \int_{\mathcal Z} \Big( (-\mathcal L)^{-1} V^e (p) \Big) V^e (p) \mu(d p) \\ & = & \int_{\mathcal Z} \phi^e V^e \, \mu(dp), \end{eqnarray*} where $\phi^e$ is the solution of the Poisson equation $-\mathcal L \phi^e = V^e$. In the above calculation we used the identity $(-\mathcal L)^{-1} \cdot = \int_0^{+\infty} e^{\mathcal L t} \cdot \, dt$~\cite[Ch. 11]{PavlSt08},~\cite[Ch. 7]{evans}. \end{proof} From~\eqref{e:deff} it immediately follows that the diffusion tensor is nonnegative definite: \begin{eqnarray*} D^e:=e \cdot D e &=& \int_{\mathcal Z} V^e \, (\phi \cdot e) \, \mu(dz) = \int_{\mathcal Z} (-\mathcal L) \phi^e \phi^e \, \mu(dz) \\ & \geq & 0, \end{eqnarray*} since, by definition, the collision operator is dissipative. When the generator $\mathcal L$ is a symmetric operator in $L^2(\mathcal Z;\mu(dz))$, i.e. the Markov process $z$ is reversible~\cite{qian02}, the diffusion tensor is symmetric: \begin{eqnarray*} D_{ij} &=& \int_{\mathcal Z} V_i(z) \phi_j(z) \, \mu(dz) = \int_{\mathcal Z} (-\mathcal L) \phi_i(z) \phi_j(z) \, \mu(dz) \\ & = & \int_{\mathcal Z} \phi_i(z) (-\mathcal L) \phi_j(z) \, \mu(dz) = D_{ji}. \end{eqnarray*} Green-Kubo formulas for reversible Markov processes have already been studied, since in this case the symmetry of the generator of the Markov process implies that the spectral theorem for self-adjoint operators can be used~\cite{kipnis,JiangZhang03}. Much less is known about Green-Kubo formulas for non-reversible Markov processes. One of the consequences of non-reversibility, i.e. when the generator of the Markov process $z$ is not symmetric in $L^2(\mathcal Z;\mu(dz))$, is that the diffusion tensor is not symmetric, unless additional symmetries are present. The symmetry properties of the diffusion tensor in anisotropic porous media have been studied in~\cite{KochBrady88}, see also~\cite{thesis}. A general representation formula for the antisymmetric part of the diffusion tensor will be given in the next section.
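Before turning to the elementary examples, here is a small self-contained numerical illustration of Proposition~\ref{prop:green_kubo} (our own sketch; the three-state generator and the observable are chosen arbitrarily). For a finite-state Markov chain both routes to the diffusion coefficient can be evaluated directly: the Green-Kubo side by quadrature of the correlation function, and the Poisson-equation side by solving a (singular but compatible) linear system.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Illustration only: a 3-state continuous-time Markov chain with an arbitrary
# generator L (rows sum to zero), acting on functions as (Lf)(i) = sum_j L[i,j] f(j).
L = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])

# stationary distribution mu: normalised left null vector of L
w, vl = np.linalg.eig(L.T)
mu = np.real(vl[:, np.argmin(np.abs(w))])
mu = mu / mu.sum()

V = np.array([1.0, -0.5, 2.0])
V = V - (mu @ V)                       # centre the observable with respect to mu

# Poisson-equation route: solve -L phi = V (phi is defined up to a constant)
phi = np.linalg.lstsq(-L, V, rcond=None)[0]
D_poisson = mu @ (V * phi)

# Green-Kubo route: D = int_0^T <V(z_t) V(z_0)> dt, where
# <V(z_t) V(z_0)> = sum_i mu_i V_i (e^{tL} V)_i
ts = np.linspace(0.0, 30.0, 3001)
acf = np.array([mu @ (V * (expm(t * L) @ V)) for t in ts])
dt = ts[1] - ts[0]
D_green_kubo = dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))   # trapezoidal rule

print(D_poisson, D_green_kubo)         # the two values agree closely
\end{verbatim}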
\subsection{Elementary Examples} \paragraph{The Ornstein-Uhlenbeck process.} The equations of motion are \begin{eqnarray} \dot{q} & = & p, \\ \dot{p} & = & -\gamma p + \sqrt{2 \gamma \beta^{-1}} \dot{W}. \end{eqnarray} The equilibrium distribution of the velocity process is $$ \mu(dp) = \sqrt{\frac{\beta}{2 \pi}} e^{-\frac{\beta}{2} p^2} \, dp. $$ The Poisson equation is $$ - \mathcal L \phi = p, \quad \mathcal L = - \gamma p \partial_p + \gamma \beta^{-1} \partial_p^2. $$ The mean zero solution is $$ \phi = \frac{1}{\gamma} p. $$ The diffusion coefficient is $$ D = \int \phi p \mu (dp) = \frac{1}{\gamma \beta}, $$ which is, of course, Einstein's formula. \paragraph{A charged particle in a constant magnetic field.} We consider the motion of a charged particle in the presence of a constant magnetic field in the $z$ direction, ${\bf B} = B e_3$, while the collisions are modeled as white noise~\cite[Ch. 11]{balescu97}. The equations of motion are \begin{eqnarray} \frac{d {\bf q}}{d t} & = & {\bf p}, \\ \frac{d {\bf p}}{d t} & = & \Omega \, {\bf p} \times e_3 - \nu {\bf p} + \sqrt{2 \beta^{-1} \nu} \dot{{\bf W}}, \end{eqnarray} where ${\bf W}$ denotes standard Brownian motion in ${\mathbb R }^3$, $\nu$ is the collision frequency and $$ \Omega = \frac{e B}{m c} $$ is the Larmor frequency of the test particle. The velocity is a Markov process with generator \begin{equation}\label{e:magnetic} \mathcal L = \Omega (p_2 \partial_{p_1} - p_1 \partial_{p_2}) + \nu (- p \cdot \nabla_p + \beta^{-1} \Delta_p). \end{equation} The invariant distribution of the velocity process is the Maxwellian $$ \mu(d {\bf p}) = \left(\frac{\beta}{2 \pi} \right)^{\frac{3}{2}} e^{-\frac{\beta}{2} | \bf p|^2} \, d {\bf p}. $$ The vector valued Poisson equation is $$ - \mathcal L {\bf \phi} = {\bf p}. $$ The solution is $$ {\bf \phi} = \left(\frac{\nu}{\nu^2 + \Omega^2} p_1 + \frac{\Omega}{\nu^2 + \Omega^2} p_2, \, -\frac{\Omega}{\nu^2 + \Omega^2} p_1 + \frac{\nu}{\nu^2 + \Omega^2} p_2, \, \frac{1}{\nu} p_3 \right). $$ The diffusion tensor is \begin{eqnarray} D & = & \int {\bf p} \otimes {\bf \phi} \mu(d {\bf p}) = \beta^{-1} \left( \begin{array}{ccc} \frac{\nu}{\nu^2 + \Omega^2} \; & \; \frac{\Omega}{\nu^2 + \Omega^2} \; & \; 0 \\ - \frac{\Omega}{\nu^2 + \Omega^2} \; & \; \frac{\nu}{\nu^2 + \Omega^2} \; & \; 0 \\ 0 \; & \; 0 \; & \; \frac{1}{\nu} \end{array} \right) . \end{eqnarray} Notice that the diffusion tensor is not symmetric. This is to be expected, since the generator of the Markov process~\eqref{e:magnetic} is not symmetric. \section{Stieltjes Integral Representation and Bounds on the Diffusion Tensor} \label{sec:Diff} When the Markov process $z$ is reversible, it is straightforward to obtain an integral representation formula for the diffusion tensor using the spectral theorem for the self-adjoint operators~\cite{kipnis}. It is not possible, in general, to do the same when $z$ is a nonreversible ergodic Markov process. This problem was solved by Avellaneda and Majda~\cite{AvelMajda91} in the context of the theory of turbulent diffusion by introducing an appropriate bounded, antisymmetric operator. In this section we apply the Avellaneda-Majda theory in order to study the diffusion tensor~\eqref{e:deff} when $z$ is an ergodic Markov process in $\mathcal Z$. We will use the notation $L^2_{\mu} :=L^2 (\mathcal Z ; \mu(dz))$. 
We decompose the collision operator $\mathcal L$ into its symmetric and antisymmetric parts with respect to the $L^2_{\mu}$ inner product: $$ \mathcal L = \mathcal A + \gamma \mathcal S, $$ where $\mathcal A = -\mathcal A^*$ and $\mathcal S = \mathcal S^*$. The parameter $\gamma$ measures the strength of the symmetric part, relative to the antisymmetric part. The Poisson equation~\eqref{e:poisson}, along the direction $e$, can be written as \begin{equation}\label{e:poisson2} - (\mathcal A + \gamma \mathcal S) \phi^e = V^e. \end{equation} Our goal is to study the dependence of the diffusion tensor on $\gamma$, in particular in the physically interesting regime $\gamma \ll 1$. Let $(\cdot, \cdot)_{\mu}$ denote the inner product in $L^2_{\mu}$. We introduce the family of seminorms $$ \|f \|_k^2 := (f, (-\mathcal S)^k f)_{\mu}. $$ Define the function spaces $H^k:= \{f \in L^2_\mu \, : \, \|f \|_k < +\infty \}$ and set $k=1$. Then $\| \cdot \|_1$ satisfies the parallelogram identity and, consequently, the completion of $H^1$ with respect to the norm $\| \cdot \|_1$, which is denoted by ${\mathcal H}$, is a Hilbert space. The inner product $\langle \cdot , \cdot \rangle$ in ${\mathcal H}$ is defined through polarization and it is easy to check that, for $f, \, h \in {\mathcal H}$, $$ \langle f,h \rangle = (f, (-\mathcal S) h)_{\mu}. $$ A careful analysis of the function space ${\mathcal H}$ and of its dual is presented in~\cite{LanOll05}. Motivated by~\cite{AvelMajda91}, see also~\cite{bhatta_1, GoldenPapanicolaou83}, we apply the operator $(-\mathcal S)^{-1}$ to the Poisson equation~\eqref{e:poisson2} to obtain \begin{equation}\label{e:poisson3} (-\mathcal G + \gamma I) \phi^e = \widehat{V}^e, \end{equation} where we have defined the operator $\mathcal G:=(-\mathcal S)^{-1} \mathcal A$ and we have set $\widehat{V}^e := (-\mathcal S)^{-1} V^e$. This operator is antisymmetric in ${\mathcal H}$: \begin{lemma}\label{e:antisym} The operator $\mathcal G: {\mathcal H} \rightarrow {\mathcal H}$ is antisymmetric. \end{lemma} \begin{proof} We calculate \begin{eqnarray*} \langle \mathcal G f, h \rangle & = & \int (-\mathcal S)^{-1} \mathcal A f (- \mathcal S) h \, \mu (dz) = \int \mathcal A f h \, \mu(dz) \\ & = & - \int (-\mathcal S)^{-1} (-\mathcal S) f \mathcal A h \, \mu(dz) = - \int f (-\mathcal S) \mathcal G h \, \mu(dz) \\ & = & - \langle f, \mathcal G h \rangle. \end{eqnarray*} \end{proof} We remark that, unlike the problem of turbulent diffusion~\cite{AvelMajda91, bhatta_1, MajMcL93}, the operator $\mathcal G$ is not necessarily bounded, let alone compact. Under the assumption that $\mathcal G$ is bounded as an operator from ${\mathcal H}$ to ${\mathcal H}$ we can develop a theory similar to the one developed in~\cite{AvelMajda91}. The boundedness of the operator $\mathcal G$ needs to be checked for each specific example. Using the definitions of the space ${\mathcal H}$, the operator $\mathcal G$ and the vector $\widehat{V}$ we obtain \begin{equation}\label{e:deff_alt} D_{ij} = \langle \phi_i, \widehat{V}_j \rangle. \end{equation} We will use the notation $\| \cdot \|_{{\mathcal H}}$ for the norm in ${\mathcal H}$. It is straightforward to analyse the overdamped limit $\gamma \rightarrow +\infty$. \begin{proposition}\label{prop:large_gamma_exp} Assume that $\mathcal G: {\mathcal H} \rightarrow {\mathcal H}$ is a bounded operator.
Then, for $\gamma$ such that $\|\mathcal G \|_{{\mathcal H} \rightarrow {\mathcal H}} < \gamma$, the diffusion coefficient admits the following asymptotic expansion \begin{equation}\label{e:deff_exp} D^e = \frac{1}{\gamma} \|\widehat{V}^e \|_{{\mathcal H}}^2 + \sum_{k=1}^{\infty} \frac{(-1)^k}{\gamma^{2k+1}} \|\mathcal G^k \widehat{V}^e \|^2_{{\mathcal H}}. \end{equation} In particular, \begin{equation}\label{e:large_gamma} \lim_{\gamma \rightarrow +\infty} \gamma D^e = \|\widehat{V}^e \|_{{\mathcal H}}^2. \end{equation} \end{proposition} \begin{proof} We use~\eqref{e:poisson3}, the definition of the space ${\mathcal H}$, and the boundedness and antisymmetry of the operator $\mathcal G$ to calculate \begin{eqnarray*} D^e & = & \frac{1}{\gamma} \left\langle \left(I - \frac{1}{\gamma} \mathcal G \right)^{-1} \widehat{V}^e, \widehat{V}^e \right\rangle \\ & = & \frac{1}{\gamma} \sum_{k=0}^{+\infty} \frac{1}{\gamma^k} \left\langle \mathcal G^k \widehat{V}^e , \widehat{V}^e \right\rangle \\ & = & \frac{1}{\gamma} \|\widehat{V}^e \|_{{\mathcal H}}^2 + \sum_{k=1}^{+\infty} \frac{1}{\gamma^{2 k +1}} \left\langle \mathcal G^{2 k} \widehat{V}^e , \widehat{V}^e \right\rangle \\ & = & \frac{1}{\gamma} \|\widehat{V}^e \|_{{\mathcal H}}^2 + \sum_{k=1}^{+\infty} \frac{(-1)^k}{\gamma^{2 k +1}} \left\| \mathcal G^{ k} \widehat{V}^e \right\|_{{\mathcal H}}^2, \end{eqnarray*} since $\left\langle \mathcal G^{2k} f, f \right\rangle = (-1)^k \|\mathcal G^{k} f\|_{{\mathcal H}}^2$ for an antisymmetric operator. \end{proof} From~\eqref{e:large_gamma} we conclude that the large $\gamma$ asymptotics of the diffusion coefficient is universal: the scaling $D^e \sim \frac{1}{\gamma}$ is independent of the specific properties of $\mathcal A, \, \mathcal S$ or $\widehat{V}^e$. This is also the case in problems where the operator $\mathcal G$ is not bounded, such as the Langevin equation in a periodic potential~\cite{HP07}. Of course, the expansion~\eqref{e:deff_exp} is of limited applicability, since it has a very small radius of convergence. This expansion cannot be used to study the small $\gamma$ asymptotics of the diffusion coefficient. The analysis of this limit requires the study of a weakly dissipative system, since the antisymmetric part of the generator $\mathcal A$ represents the deterministic part of the dynamics, whereas the symmetric part $\mathcal S$ represents the noisy, dissipative dynamics. It is well known that the dynamics of such a system in the limit $\gamma \rightarrow 0$ depends crucially on the properties of the unperturbed deterministic system~\cite{freidlin5, FreidWentz84, ConstKiselRyzhZl06}. The properties of this system can be analyzed by studying the operator $\mathcal A$. For the asymptotics of the diffusion coefficient, the null space of this operator has to be characterized. This fact has been recognized in the theory of turbulent diffusion~\cite{AvelMajda91,MajMcL93, kramer}. We will show that a theory similar to the one developed in these papers can be developed in the abstract framework adopted in this paper. Assume that $\mathcal G: {\mathcal H} \rightarrow {\mathcal H}$ is bounded. Let $\mathcal N = \{f \in {\mathcal H} \, : \, \mathcal G f = 0 \}$ denote the null space of $\mathcal G$. We have ${\mathcal H} = \mathcal N \oplus \mathcal N^{\bot}$. We take the projections on $\mathcal N$ and $\mathcal N^\bot$ to rewrite~\eqref{e:poisson3} as \begin{equation}\label{e:poisson_4} \gamma \phi_N = \widehat{V}_N, \quad (-\mathcal G +\gamma I) \phi_{N^{\bot}} = \widehat{V}_{N^\bot}. \end{equation} We can now write $$ D^e = \frac{1}{\gamma} \|\widehat{V}_N^e \|_{{\mathcal H}}^2 + \langle \phi_{N^\bot}, \widehat{V}^e_{N^{\bot}} \rangle.
$$ \begin{prop} Assume that there exists a function $p \in {\mathcal H}$ such that $$ - \mathcal G p = \widehat{V}^e_{N^\bot}. $$ Then \begin{equation}\label{e:gamma_lim} \lim_{\gamma \rightarrow 0} \gamma D^e = \|\widehat{V}^e_N \|_{{\mathcal H}}^2. \end{equation} In particular, $D^e = o(1/\gamma)$ when $\widehat{V}^e_N = 0$. \end{prop} \begin{proof} We write $\phi_{N^{\bot}} = p + \psi$ where $\psi$ solves the equation $$ (-\mathcal G + \gamma I) \psi = - \gamma p. $$ We use $\psi$ as a test function and use the antisymmetry of $\mathcal G$ in ${\mathcal H}$ to obtain the estimate $$ \|\psi \|_{{\mathcal H}} \leq C, $$ from which we deduce that $\| \phi_{N^{\bot}} \|_{{\mathcal H}} \leq C$ and~\eqref{e:gamma_lim} follows. \end{proof} Let $\mathcal G$ be a bounded operator. Since it is also skew-symmetric, we can write $\mathcal G = i \Gamma$ where $\Gamma$ is a self-adjoint operator in ${\mathcal H}$. From the spectral theorem for bounded self-adjoint operators we know that there exists a one-parameter family of projection operators $P(\lambda)$ which is right-continuous, and $P(\lambda) \leq P(\mu)$ when $\lambda \leq \mu$ and $P(-\infty)= 0, \, P(+\infty) = I$, so that $$ f(\Gamma) = \int_{{\mathbb R }} f(\lambda) \, d P(\lambda) $$ for all bounded continuous functions $f$. Using the spectral resolution of $\Gamma$ we can obtain an integral representation formula for the diffusion coefficient~\cite{AvelMajda91}: \begin{equation}\label{e:avell_majda} D^e = \frac{1}{\gamma} \|\widehat{V}_N^e \|_{{\mathcal H}}^2 + 2 \gamma \int_0^{+\infty} \frac{ d \mu_e}{\gamma^2 + \lambda^2} \end{equation} where $d\mu_e = \langle d P(\lambda) \widehat{V}_{N^\bot}^e, \widehat{V}_{N^\bot}^e \rangle$. We can obtain a similar formula for the antisymmetric part of the diffusion tensor $$ A = \frac{1}{2} (D - D^T). $$ In particular, we have the following. \begin{prop}\label{prop:antisymm} Assume that the operator $\mathcal G : {\mathcal H} \rightarrow {\mathcal H}$ is bounded. Then the antisymmetric part of the diffusion tensor admits the representation \begin{equation}\label{e:avell_majda_antisym} A_{ij} = \frac{1}{2} \int_{{\mathbb R }} \frac{\lambda d \mu_{ij}(\lambda) }{ \lambda^2 + \gamma^2} \end{equation} where $$ d \mu_{ij} = \langle d P (\lambda) \widehat{V}^i_{N^\bot} , \widehat{V}^j_{N^\bot}\rangle. $$ \end{prop} \begin{proof} We calculate \begin{eqnarray*} A_{ij} & = & \frac{1}{2} (D_{ij} - D_{ji}) \\ & = & \frac{1}{2} \big( \langle \phi_i , \widehat{V}^j \rangle - \langle \phi_j , \widehat{V}^i \rangle \big) \\ & = & \frac{1}{2 } \big( \langle \phi_i , \widehat{V}^j_{\bot} \rangle - \langle \phi_j , \widehat{V}^i_{\bot} \rangle \big) \\ & = & \frac{1}{2 } \big( \langle {\mathcal R}_\gamma \widehat{V}^i_{\bot} , \widehat{V}^j_{\bot}\rangle - \langle {\mathcal R}_\gamma \widehat{V}^j_{\bot} , \widehat{V}^i_{\bot}\rangle \big) \\ & = & \frac{1}{2 } \left\langle \big( {\mathcal R}_\gamma -{\mathcal R}_\gamma^* \big) \widehat{V}^i_{\bot} , \widehat{V}^j_{\bot} \right\rangle, \end{eqnarray*} where we have used the notation ${\mathcal R}_\gamma = (-i\Gamma + \gamma I)^{-1}$. From the symmetry of $\Gamma$ we deduce that ${\mathcal R}^*_\gamma = (i\Gamma +\gamma I)^{-1}$.
Now we use the representation formula $$ {\mathcal R}_\gamma = \int_0^{+\infty} e^{-\gamma t} e^{i \Gamma t} dt $$ to obtain \begin{eqnarray*} A_{ij} & = & \frac{1}{2 } \left\langle ({\mathcal R}_\gamma -{\mathcal R}_\gamma^*) \widehat{V}^i_{\bot} , \widehat{V}^j_{\bot} \right\rangle \\ & = & \frac{1}{2 } \left\langle \int_0^{+\infty} e^{-\gamma t} \left( e^{i\Gamma t} - e^{- i\Gamma t} \right) dt \widehat{V}^i_{\bot} , \widehat{V}^j_{\bot} \right\rangle \\ & = & \frac{1}{2} \left\langle \int_0^{+\infty} e^{-\gamma t} \sin (\Gamma t) \, dt \widehat{V}^i_{\bot} , \widehat{V}^j_{\bot} \right\rangle \\ & = & \frac{1}{2} \int_{{\mathbb R }} \int_0^{+\infty} e^{-\gamma t} \sin(t \lambda) \, dt \, d \mu_{ij} (\lambda) \\ & = & \frac{1}{2} \int_R \frac{\lambda \, d \mu_{ij}(\lambda)} {\gamma^2 + \lambda^2}. \end{eqnarray*} \end{proof} \begin{remark} The antisymmetric part of the diffusion tensor is independent of the projection of $\widehat{V}$ onto the null space of $\mathcal G$. \end{remark} When the operator $\mathcal G : {\mathcal H} \rightarrow {\mathcal H}$ is compact we can use the spectral theorem for the compact, self-adjoint operator $\Gamma = i \mathcal G$ to obtain an orthonormal basis for the space $\mathcal N^{\bot} $. In this case the integrals in~\eqref{e:avell_majda} and~\eqref{e:avell_majda_antisym} reduce to sums and the analysis of the weak noise limit $\gamma \rightarrow 0$ becomes rather straightforward. The weak noise (large Peclet number) asymptotics for the symmetric part of the diffusion tensor for the advection-diffusion problem with periodic coefficients were studied in~\cite{bhatta_1, MajMcL93}. The asymptotics of the antisymmetric part of the diffusion tensor for the advection-diffusion problem were studied in~\cite{thesis}. \begin{comment} Assume that $\mathcal G:{\mathcal H} \rightarrow {\mathcal H}$ is a compact operator. Under this assumption, and in view of Lemma~\ref{e:antisym}, we can define the operator $\Gamma = i \mathcal G$, which is selfadjoint and compact. The spectral theorem for compact selfadjoint operators implies that $\Gamma$ real, discrete spectrum and that the eigenfunctions of the operator form an orthonormal basis in $\mathcal N$, $\Gamma f_n = \lambda_n f_n$. We can then "solve" the Poisson equation~\eqref{e:poisson_2} by expanding the solution in terms of the eigenfunctions of $\Gamma$: \begin{equation*} \phi = \phi_N + \sum_{j=1}^{\infty} \phi_j f_j, \end{equation*} where $\phi_N \in \mathcal N^{\bot}$ and $\phi_j = (\phi,f_j)_1$. Since $V \in L^2(\pi)$, we have that $\hat{V} \in {\mathcal H}$. Hence, it has the Fourier series expansion \begin{equation*} \hat{V} = \hat{V}_N + \sum_{j=1}^{\infty} \hat{V}_j f_j. \end{equation*} We substitute the above expansions into equation~\eqref{e:poisson_2} and take the ${\mathcal H}$ inner product to obtain the system of equations $$ \gamma \phi_N = \hat{V}_N, \quad (\gamma + i \lambda_n) \phi_n = \hat{V}_n. $$ Hence, the Fourier expansion for the solution of the Poisson equation is \begin{equation}\label{e:phi_expand} \phi = \frac{1}{\gamma} \hat{V}_N + \sum_{j=1}^{\infty} \frac{\hat{V}_j}{\gamma + i \lambda_j } f_j, \end{equation} The diffusion coefficient is \begin{equation*} D = (\phi, (-{\mathcal L}) \phi)_{\pi} = \gamma \| \phi \|_1^2. 
\end{equation*} We substitute the expansion~\eqref{e:phi_expand} into this formula to obtain \begin{eqnarray} D & = & \frac{1}{\gamma} \|\phi_N \|_1^2 + \gamma \sum_{j=1}^{\infty} |\phi_j|^2 \nonumber \\ & = & \frac{1}{\gamma} \|\phi_N \|_1^2 + \gamma \sum_{j=1}^{\infty} \frac{|\hat{V}_j|^2}{\gamma^2 + \lambda_j^2}. \label{e:deff_expansion} \end{eqnarray} From this formula we can immediately deduce that \begin{itemize} \item Assume that the projection of the right hand side of the Poisson equation onto the null space of $\mathcal G$ vanishes, $\hat{V} = 0$. Then $$ \lim_{\gamma \rightarrow 0} \gamma D = 0 \quad \mbox{and} \quad \lim_{\gamma \rightarrow +\infty} \gamma D =\sum_{j=1}^{\infty} |\hat{V}_j|^2 = \|\hat{V} \|_1^2. $$ \item Assume that the projection of the right hand side of the Poisson equation onto the null space of $\mathcal G$ does not vanish, $\hat{V} \neq 0$. Then $$ \lim_{\gamma \rightarrow 0} \gamma D = \| \hat{V}_N\|_1^2 \quad \mbox{and} \quad \lim_{\gamma \rightarrow +\infty} \gamma D = \| \hat{V}_N\|_1^2 + \sum_{j=1}^{\infty} |\hat{V}_j|^2 = \|\hat{V} \|_1^2. $$ \end{itemize} \begin{prop} \end{prop} \begin{center} {\bf Examples-Counterexamples} \end{center} \begin{enumerate} \item Motion in a periodic, divergence free field: $$ d x = v(x) \, dt + \sqrt{2 \sigma} \, d W, $$ where $v(x)$ is smooth, periodic, divergence-free. In this case the invariant measure of diffusion process on ${\mathbb T}^d$ is the Lebesgue measure. The method outlined in this section was applied to this problem in~\cite{MajMcL93, bhatta_1, papan2}, see also~\cite{wiggins} for a similar analysis in the case of time dependent flows. In this case it is easy to check that the operator $G = (-\Delta)^{1} v \cdot \nabla$ is antisymmetric and compact in $H^1({\mathbb T}^d)$ and Theorem applies. \item Motion in a periodic, divergence free flow driven by colored noise: $$ dx = (v(x) + \eta )\, dt, \quad d \eta = - \gamma \eta + \sqrt{2 \gamma} \, d W. $$ The generator of this process is $$ \mathcal L = (v(x) + \eta) \cdot \nabla_x + \gamma (- \eta \cdot \nabla_\eta + \Delta_\eta). $$ The invariant measure of this process on ${\mathbb T}^d \times {\mathbb R }^d$ is $$ \pi (dx d \eta) = (2 \pi)^{-d/2} e^{-\frac{|\eta|^2}{2}} \, dx d \eta. $$ The symmetric and antisymmetric parts of the generator of the $\{ x, \, eta \}$ process in $L^2({\mathbb T}^d \times {\mathbb R }^d;\pi(dx d \eta))$ are, respectively: $$ \mathcal S = (- \eta \cdot \nabla_\eta + \Delta_\eta), \quad \mathcal A = (v(x) + \eta) \cdot \nabla_x, $$ and, hence, the generator is of the form $\mathcal L = \mathcal A + \gamma \mathcal S$. Let $\phi \in L^2({\mathbb T}^d \times {\mathbb R }^d;\pi(dx d \eta))$ with periodic boundary conditions in $x$ be the unique (up to constants) solution of the Poisson equation $$ - \mathcal L \phi = v(x) + \eta. $$ The formula for the diffusion coefficient along an arbitrary unit vector $\xi$ is \begin{eqnarray*} \xi \cdot D \xi := D^\xi & = & \int_{{\mathbb T}^d \times {\mathbb R }^d} (-\mathcal L \phi^\xi) \phi^\xi \, \pi(dx d \eta) = \gamma \int_{{\mathbb T}^d \times {\mathbb R }^d} (-\mathcal S \phi^\xi) \phi^\xi \, \pi(dx d \eta) \\ &=& \gamma \int_{{\mathbb T}^d \times {\mathbb R }^d} |\nabla_\eta \phi^\xi|^2 \, \pi(dx d \eta). \end{eqnarray*} \item The Langevin equation in a periodic potential which was discussed in Section~\ref{sec:intro}. In this case the result presented in this section does not apply since $\mathcal S^{-1} \mathcal A$ is not compact. 
\item Estimates on the diffusion coefficient for problems for which we don't know what the invariant measure is and, consequently, we don't know the form of the symmetric and antisymmetric parts of the generator in weighted $L^2$ space. As an example, consider the non-gradient second order dynamics considered in~\cite{HairPavl04} $$ \tau \ddot{x} = v(x) - \dot{x} + \sqrt{2 \sigma} \dot{W}, $$ where $v(x)$ is a smooth, periodic vector field. The existence and uniqueness of an invariant measure for this process on ${\mathbb T}^d \times {\mathbb R }^d$ is known, but not is form. In~\cite{HairPavl04} estimates on the invariant measure were obtained. One possibility would be to consider the weak formulation of the Poisson equation for this problem in an approximate function space with a weight which is close to the density of the actual invariant measure. \end{enumerate} \end{comment} \section{Examples} \label{sec:examples} \paragraph{The generalized Langevin equation.} The generalized Langevin equation (gLE) in the absence of external forces reads \begin{equation}\label{e:gLE} \ddot{q} = - \int_0^t \gamma(t-s) \dot{q}(s) \, ds + F(t), \end{equation} where the memory kernel $\gamma(t)$ and noise $F(t)$ (which is a mean zero stationary Gaussian process) are related through the fluctuation-dissipation theorem \begin{equation}\label{e:fluct_dissip} \langle F(t) F(s) \rangle = \beta^{-1} \gamma(t-s). \end{equation} We approximate the memory kernel by a sum of exponentials~\cite{Kup03}, \begin{equation} \gamma(t) = \sum_{j=1}^{N} \lambda^2_j e^{-\alpha_j |t|}. \end{equation} Under this assumption, the non-Markovian gLE~\eqref{e:gLE} can be rewritten as a Markovian system of equations in an extended state space: \begin{subequations} \begin{eqnarray} \dot{q} &=& p, \label{e:q} \\ \dot{p} &=& \sum_{j=1}^N \lambda_j u_j, \label{e:p} \\ \dot{u}_j &=& -\alpha_j u_j - \lambda_j p + \sqrt{2 \beta^{-1} \alpha_j } \,\dot{W}_j,\quad j=1,\dots N. \label{e:u} \end{eqnarray} \end{subequations} This example is of the form~\eqref{e:x} with the driving Markov process being $\{p, u_1,\dots u_N \}$. The generator of this process is \begin{eqnarray*} \mathcal L = \Big(\sum_{j=1}^N \lambda_j u_j \Big) \frac{\partial}{\partial p} + \sum_{j=1}^N \Big( -\alpha_j u_j \frac{\partial}{\partial u_j} -\lambda_j p \frac{\partial}{\partial u_j} +\beta^{-1} \alpha_j \frac{\partial^2}{\partial u_j^2} \Big). \end{eqnarray*} This is an ergodic Markov process with invariant measure \begin{equation}\label{e:invmeas_gle} \rho(p, u) = \frac{1}{\mathcal Z} e^{- \beta \big(\frac{p^2}{2} + \sum_{j=1}^N \frac{u_j^2}{2} \big)}, \end{equation} where $\mathcal Z = \big(2 \pi \beta^{-1} \big)^{(N+1)/2} $. The symmetric and antisymmetric parts of the generator $\mathcal L$ in $L^2({\mathbb R }^{N+1}; \rho(p, {\bf u}) dp d {\bf u})$ are, respectively: $$ \mathcal S = \sum_{j=1}^N \Big( -\alpha_j u_j \frac{\partial}{\partial u_j} +\beta^{-1} \alpha_j \frac{\partial^2}{\partial u_j^2} \Big) $$ and $$ \mathcal A = \Big(\sum_{j=1}^N \lambda_j u_j \Big) \frac{\partial}{\partial p} + \sum_{j=1}^N \Big( -\lambda_j p \frac{\partial}{\partial u_j} \Big). $$ It is possible to study the spectral properties of $(-\mathcal S)^{-1} \mathcal A$. However, it is easier to solve the Poisson equation $$ - \mathcal L \phi =p $$ and to calculate the diffusion coefficient. 
The solution of this equation is $$ \phi = \sum_{k=1}^N \frac{\lambda_k}{\alpha_k} \frac{1}{\sum_{k=1}^N \frac{\lambda^2_k}{\alpha_k} } u_k + p \frac{1}{\sum_{k=1}^N \frac{\lambda^2_k}{\alpha_k} }. $$ The diffusion coefficient is $$ D = \int_{{\mathbb R }^{N+1}} p \phi \rho(p,u) \, dp du = \beta^{-1} \frac{1}{\sum_{k=1}^N \frac{\lambda^2_k}{\alpha_k} }. $$ We remark that, in the limit as $N \rightarrow +\infty$, the diffusion coefficient can become $0$. Indeed, \begin{eqnarray*} \lim_{N \rightarrow +\infty} D = \left\{ \begin{array}{cc} \beta^{-1} C \; & \; \sum_{k=1}^{+\infty} \frac{\lambda^2_k}{\alpha_k} = C^{-1}, \\ 0 \; & \; \sum_{k=1}^{+\infty} \frac{\lambda^2_k}{\alpha_k} = +\infty. \end{array} \right. \end{eqnarray*} Thus, phenomena of anomalous diffusion, in particular of subdiffusion, can appear in this simple model. The rigorous analysis of this problem, in the presence of interactions, will be presented elsewhere~\cite{Ottobrepavliotis09}. \paragraph{The Generalized Ornstein-Uhlenbeck Process.} We consider the following SDE \begin{subequations}\label{e:OUgen} \begin{eqnarray} \dot{{\bf q}} & = & {\bf p}, \\ \dot{{\bf p}} & = & (\alpha J -\gamma I ) {\bf p} + \sqrt{2 \gamma \beta^{-1}} \, \dot{{\bf W}}, \end{eqnarray} \end{subequations} where ${{\bf q}}, \, {{\bf p}} \in {\mathbb R }^d,$ $J=-J^T$, $ \alpha, \, \gamma >0$ and ${\bf W}$ is a standard Brownian motion on ${\mathbb R }^d$. The presence of the antisymmetric term $\alpha J {\bf p}$ in the equation for ${\bf p}$ implies that the velocity is an {\bf irreversible} Markov process. The generator of the Markov process ${\bf p}$ is \begin{equation}\label{e;gen_antisymm} \mathcal L = (\alpha J -\gamma I) p \cdot \nabla_p + \gamma \beta^{-1} \Delta_p. \end{equation} It is easy to check that $\nabla_p \cdot(J p e^{-\frac{\beta}{2} |p|^2}) = 0.$ Hence, the equilibrium distribution of the velocity process is the same as in the reversible case: $$ \mu_{\beta}(dp) = \left( \frac{\beta}{2 \pi} \right)^\frac{d}{2} e^{-\frac{\beta}{2} |p|^2} \, dp. $$ We can decompose the generator $\mathcal L$ into its $L^2({\mathbb R }^d ; \mu_{\beta}(dp))$-symmetric and antisymmetric parts: $$ \mathcal L =\alpha \mathcal A + \gamma \mathcal S, $$ where $\mathcal A = J p \cdot \nabla_p$ and $\mathcal S = -p \cdot \nabla_p +\beta^{-1} \Delta_p.$ Using the results from~\cite{Lunardi1997} (or, equivalently, the fact that the eigenfunctions and eigenvalues of $\mathcal S$ are known) it is possible to show that the operator $\mathcal G = (-\mathcal S)^{-1} \mathcal A$ is bounded from ${\mathcal H} :=H^1({\mathbb R }^d ; \mu_{\beta}(dp))$ to ${\mathcal H}$ and the results obtained in Section~\ref{sec:Diff} apply.\footnote{Note, however, that this operator is not compact from ${\mathcal H}$ to ${\mathcal H}$.} For this problem we can also obtain an explicit formula for the diffusion tensor. \begin{prop}\label{prop:deff} The diffusion tensor is given by the formula \begin{equation}\label{e:deff_j} D = \beta^{-1} (-\alpha J^T + \gamma I)^{-1}. \end{equation} \end{prop} \begin{proof} The Poisson equation is $$ -\mathcal L {\bf \phi} = {\bf p}, $$ where the boundary condition is that ${\bf \phi} \in (L^2({\mathbb R }^d ; \mu_{\beta}(dp)))^d$ and we take the vector field ${\bf \phi}$ to be mean zero. The solution to this equation is linear in ${\bf p}$: $$ {\bf \phi} = C {\bf p}, $$ for some matrix $C \in {\mathbb R }^{d \times d}$ to be calculated.
Substituting this formula in the Poisson equation we obtain (componentwise) $$ \sum_{k, \ell} Q_{k \ell} p_\ell C_{i k} = p_i, $$ where the notation $Q = -\alpha J + \gamma I$ was introduced. Notice that, since $Q$ is positive definite, it is invertible. We take now the $(L^2({\mathbb R }^d ; \mu_{\beta}(dp)))^d$-inner product with $p_m$ (denoted by $\langle \cdot, \cdot \rangle_{\beta}$) and use the fact that $\langle p_{\ell} , p_m \rangle_\beta = \beta^{-1} \delta_{\ell m}$ to deduce $$ \sum_k Q_{km} C_{ik} =\delta_{im}, \quad i,m=1,\dots, d. $$ Or, $$ Q^T C = I, $$ and, consequently, $C = (Q^T)^{-1} = (-\alpha J^T + \gamma I)^{-1}$. Furthermore, \begin{eqnarray*} D_{ij} & = & \left\langle \phi_i , p_j \right\rangle_{\beta} = \sum_k \left\langle C_{ik} p_k , p_j \right\rangle_{\beta} \\ & = & \beta^{-1} \sum_k C_{i k} \delta_{j k} = \beta^{-1} C_{ij}, \end{eqnarray*} from which~\eqref{e:deff_j} follows. \end{proof} The small $\gamma$-asymptotics of $D$ depends on the properties of the null space of $\mathcal G:=(-\mathcal S)^{-1}\mathcal A$ or, equivalently, of $\mathcal A$. For the problem at hand, it is sufficient to consider the restriction of $\mathcal N(\mathcal G)$ (or $\mathcal N(\mathcal A)$) to linear functions in ${\bf p}$. Consequently, in order to calculate $\mathcal N(\mathcal A)$ we need to calculate the null space of $J$, $\mathcal N(J) = \big\{ {\bf b} \in {\mathbb R }^d \; : \; J {\bf b} = 0 \big\}$. As an example, consider the case $d=3$ and set \begin{eqnarray} J = \left( \begin{array}{ccc} 0 \; & \; 1 \; & \; 1 \\ -1 \; & \; 0 \; & \; 1 \\ -1 \; & \; -1 \; & \; 0 \end{array} \right). \end{eqnarray} In this case it is straightforward to calculate the diffusion tensor (we set $\beta = 1$ for simplicity): $$ D = \frac{1}{\gamma \, \left( 3\,{\alpha}^{2}+{\gamma}^{2} \right)} \left[ \begin {array}{ccc} {\gamma}^{2}+{\alpha}^{2} & -\alpha\, \left( \gamma+\alpha \right) & -\alpha\, \left( \gamma-\alpha \right) \\\noalign{\medskip} \alpha\, \left( \gamma-\alpha \right) & \gamma^{2}+\alpha^{2} & - \alpha\, \left( \gamma+\alpha \right) \\\noalign{\medskip} \alpha\, \left( \gamma+\alpha \right) & \alpha\, \left( \gamma - \alpha \right) & \gamma^{2}+\alpha^{2} \end {array} \right]. $$ \begin{figure} \centerline{ \begin{tabular}{c@{\hspace{2pc}}c} \includegraphics[width=2.8in, height = 2.8in]{deff1} & \includegraphics[width=2.8in, height = 2.8in]{deff2} \\ a.~~ ${\bf e} \cdot \xi = 0$ & b.~~ ${\bf e} \cdot \xi \neq 0$ \end{tabular}} \begin{center} \caption{Diffusion coefficient~\eqref{e:deff_e}} \label{fig:deff} \end{center} \end{figure} The null space of $J$ is one-dimensional and consists of vectors parallel to ${\bf \xi} = (1, \, -1, \, 1).$ From the analysis presented in the previous section it is expected that the diffusion coefficient $D^e$ vanishes in the limit as $\gamma \rightarrow 0$ along directions ${\bf e} \bot {\bf \xi}$. Indeed, from the above formula for the diffusion tensor we get that (with $|{\bf e}| = 1$) \begin{equation}\label{e:deff_e} D^e = \frac{1}{\gamma (3 \alpha^2 +\gamma^2)} \Big( \gamma^2 + |{\bf e} \cdot {\bf \xi}|^2 \alpha^2 \Big). \end{equation} Clearly, when ${\bf e} \cdot {\bf \xi} = 0$ we have $$ \lim_{\gamma \rightarrow 0} D^e = 0, $$ whereas when ${\bf e} \cdot {\bf \xi} \neq 0$ we obtain $$ \lim_{\gamma \rightarrow 0} \gamma \, D^e = \frac{|{\bf e} \cdot {\bf \xi}|^2}{3}. $$ The diffusion coefficient, as a function of $\gamma$ and for $\alpha =1$, is plotted in Figure~\ref{fig:deff}.
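The explicit formulas above are easy to check numerically. The following sketch (ours; it simply re-evaluates the expressions of this section with $\beta = 1$, and the value $\alpha = 1$ used in the limits is only one possible choice) verifies that Proposition~\ref{prop:deff} reproduces the $3\times 3$ matrix displayed above and illustrates the two small-$\gamma$ regimes of~\eqref{e:deff_e}.
\begin{verbatim}
import numpy as np

alpha, beta = 1.0, 1.0
J = np.array([[ 0.0,  1.0,  1.0],
              [-1.0,  0.0,  1.0],
              [-1.0, -1.0,  0.0]])
xi = np.array([1.0, -1.0, 1.0])                  # spans the null space of J

def D(gamma):
    # formula of the proposition: D = beta^{-1} (-alpha J^T + gamma I)^{-1}
    return np.linalg.inv(-alpha * J.T + gamma * np.eye(3)) / beta

def D_closed(gamma):
    # closed-form 3x3 matrix displayed in the text
    g, a = gamma, alpha
    M = np.array([[g**2 + a**2, -a*(g + a), -a*(g - a)],
                  [ a*(g - a),  g**2 + a**2, -a*(g + a)],
                  [ a*(g + a),   a*(g - a),  g**2 + a**2]])
    return M / (g * (3.0*a**2 + g**2)) / beta

print(np.allclose(D(0.3), D_closed(0.3)))        # True: the two expressions agree

# small-gamma behaviour of D^e = e . D e along two directions
e_perp = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)    # e . xi = 0
e_par = xi / np.linalg.norm(xi)                      # e . xi != 0
for gamma in (1e-1, 1e-2, 1e-3):
    Dg = D(gamma)
    print(gamma,
          gamma * e_perp @ Dg @ e_perp,          # tends to 0
          gamma * e_par @ Dg @ e_par)            # tends to |e . xi|^2 / 3 = 1
\end{verbatim}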
\section{Conclusions}\label{sec:conclusions} The Green-Kubo formula for the self-diffusion coefficient was studied in this paper. It was shown that the Green-Kubo formula can be rewritten in terms of the solution of a Poisson equation when the collision operator is linear and is the generator of an ergodic Markov process. Furthermore, the effect of irreversibility in the microscopic dynamics on the diffusion coefficient was investigated, and the Majda-Avellaneda theory was used in order to study various asymptotic limits of the diffusion tensor. Several examples were also presented. There are several directions in which the work reported in this paper can be extended. First, a similar analysis can be applied to the linear Boltzmann equation (i.e. for a collision operator that has five collision invariants), in order to obtain alternative representation formulas for other transport coefficients, in addition to the self-diffusion coefficient. In this way, it should be possible to obtain rigorous estimates on other transport coefficients. Second, the effect of external forces on the scaling of transport coefficients with respect to the various parameters of the problem can be studied: the techniques presented in this paper are applicable to a kinetic equation of the form \begin{equation}\label{e:kinetic_V} \frac{\partial f}{\partial t} + F(q) \cdot \nabla_p f + {\bf p} \cdot \nabla_q f = Q f, \end{equation} where $F(q)$ is an external force. Finally, phenomena of subdiffusion (i.e. the limit $D \rightarrow 0$) and superdiffusion (i.e. $D \rightarrow \infty$) can also be analyzed within the framework developed in this paper. A simple example was given in Section~\ref{sec:examples}. All these problems are currently under investigation. \bigskip \paragraph{Acknowledgments.} The author thanks P.R. Kramer for many useful discussions and comments and for an extremely careful reading of an earlier version of this paper.
\section{Introduction} For large-scale inverse problems, which often arise in real-life applications, the solution of the corresponding forward and adjoint problems is generally computed using an iterative solver, such as (preconditioned) fixed point or Krylov subspace methods. Indeed, the corresponding linear systems could be too large to be handled with direct solvers (e.g.~LU-type solvers), and iterative solvers are easier to parallelize on many cores. Naturally this leads to the idea of \emph{one-step one-shot methods}, which iterate at the same time on the forward problem solution (the state variable), the adjoint problem solution (the adjoint state), and the inverse problem unknown (the parameter or design variable). If two or more inner iterations are performed on the state and adjoint state before updating the parameter (by starting from the previous iterates as initial guess for the state and adjoint state), we speak of \emph{multi-step one-shot methods}. Our goal is to rigorously analyze the convergence of such inversion methods. In particular, we are interested in those schemes where the inner iterations on the direct and adjoint problems are incomplete, i.e.~stopped before achieving convergence. Indeed, solving the forward and adjoint problems exactly by direct solvers or very accurately by iterative solvers could be very time-consuming with little improvement in the accuracy of the inverse problem solution. The concept of one-shot methods was first introduced by Ta'asan \cite{taasan91} for optimal control problems. Based on this idea, a variety of related methods, such as the all-at-once methods, where the state equation is included in the misfit functional, were developed for aerodynamic shape optimization; see for instance \cite{taasan92, shenoy97, hazra05, schulz09, gauger09} and the literature review in the introduction of \cite{schulz09}. All-at-once approaches to inverse problems for parameter identification were studied in, e.g., \cite{haber01, burger02, kaltenbacher14}. An alternative method, called Wavefield Reconstruction Inversion (WRI), was introduced for seismic imaging in \cite{leeuwen13}, as an improvement of the classical Full Waveform Inversion (FWI) \cite{tarantola82}. WRI is a penalty method which combines the advantages of the all-at-once approach with those of the reduced approach (where the state equation represents a constraint and is enforced at each iteration, as in FWI), and was extended to more general inverse problems in \cite{leeuwen15}. Few convergence proofs, especially for the multi-step one-shot methods, are available in the literature. In particular, for non-linear design optimization problems, Griewank \cite{griewank06} proposed a version of one-step one-shot methods where a Hessian-based preconditioner is used in the design variable iteration. The author proved conditions to ensure that the real eigenvalues of the Jacobian of the coupled iterations are smaller than $1$, but these are just necessary and not sufficient conditions to exclude real eigenvalues smaller than $-1$. In addition, no condition to also bound complex eigenvalues below $1$ in modulus was found, and multi-step methods were not investigated. In \cite{hamdi09, hamdi10, gauger12} an exact penalty function of doubly augmented Lagrangian type was introduced to coordinate the coupled iterations, and global convergence of the proposed optimization approach was proved under some assumptions.
In \cite{guenther16} this particular one-step one-shot approach was extended to time-dependent problems. In this work, we consider two variants of multi-step one-shot methods where the forward and adjoint problems are solved using fixed point methods and the inverse problem is solved using gradient descent methods. This is a preparatory work where we focus on (discretized) linear inverse problems. Note that the present analysis in the linear case implies also local convergence in the non-linear case. The only basic assumptions we require are the inverse problem uniqueness and the convergence of the fixed point iteration for the forward problem. To analyze the convergence of the coupled iterations we study the real and complex eigenvalues of the block iteration matrices. We prove that if the descent step is small enough then the considered multi-step one-shot methods converge. Moreover, the upper bounds for the descent step in these sufficient conditions are explicit in the number of inner iterations and in the norms of the operators involved in the problem. In the particular scalar case (Appendix~\ref{app:1D}), we establish sufficient and also necessary convergence conditions on the descent step. This paper is structured as follows. In Section~\ref{sec:intro-k-shot}, we introduce the principle of multi-step one-shot methods and define two variants of these algorithms. Then, in Section~\ref{sec:one-step}, respectively Section~\ref{sec:multi-step}, we analyze the convergence of one-step one-shot methods, respectively multi-step one-shot methods: first, we establish eigenvalue equations for the block matrices of the coupled iterations, then we derive sufficient convergence conditions on the descent step by studying both real and complex eigenvalues. In Section~\ref{sec:complex_extension} we show that the previous analysis can be extended to the case where the state variable is complex. Finally, in Section~\ref{sec:num-exp} we test numerically the performance of the different algorithms on a toy 2D Helmholtz inverse problem. Throughout this work, $\scalar{\cdot,\cdot}$ indicates the usual Hermitian scalar product in $\mathbb{C}^n$, that is $\scalar{x,y} \coloneqq \overline{y}^\intercal x, \forall x,y\in\mathbb{C}^n$, and $\norm{\cdot}$ the vector/matrix norms induced by $\scalar{\cdot,\cdot}$. We denote by $A^*=\overline{A}^\intercal$ the adjoint operator of a matrix $A\in\mathbb{C}^{m\times n}$, and likewise by $z^*=\overline{z}$ the conjugate of a complex number $z$. The identity matrix is always denoted by $I$, whose size is understood from context. Finally, for a matrix $T\in\mathbb{C}^{n\times n}$ with $\rho(T)<1$, we define \[ s(T) \coloneqq \sup_{z\in\mathbb{C}, |z|\ge 1}\norm{\left(I-T/z\right)^{-1}} \] which is further studied in Appendix \ref{app:lems}. \section{Multi-step one-shot inversion methods} \label{sec:intro-k-shot} We focus on (discretized) linear inverse problems, which correspond to a \emph{direct (or forward) problem} of the form: find $u \equiv u(\sigma)$ such that \begin{equation} \label{pbdirect} u=Bu+M\sigma+F \end{equation} where $u\in\R^{n_u}$, $\sigma\in\R^{n_\sigma}$, $B\in\R^{n_u \times n_u}$, $M\in\R^{n_u \times n_\sigma}$ and $F\in\R^{n_u}$. Here $I-B$ is the invertible matrix of the direct problem, obtained after discretization, with parameter $\sigma$. Note that in the non-linear case $B$ would be a function of $\sigma$. Equation~\eqref{pbdirect} is also called \emph{state equation} and $u$ is called \emph{state}. 
Given $\sigma$, we can solve for $u$ by a fixed point iteration \begin{equation} \label{iterpbdirect} u_{\ell+1}=Bu_\ell+M\sigma+F, \quad \ell=0,1,\dots, \end{equation} which converges for any initial guess $u_0$ if and only if the spectral radius $\rho(B)$ is strictly less than $1$ (see e.g. \cite[Theorem 2.1.1]{greenbaum97}). Hence we assume $\rho(B)<1$. Now, we measure $f=Hu(\sigma)$, where $H\in\R^{n_f\times n_u}$, and we are interested in the \emph{linear inverse problem} of finding $\sigma$ from $f$. In order to guarantee the uniqueness of the inverse problem, we assume that $H(I-B)^{-1}M$ is injective. In summary, we set \begin{equation} \label{direct-inv-prb} \begin{array}{lc} \mbox{direct problem:} & u=Bu+M\sigma+F,\\ \mbox{inverse problem:} & \mbox{measure }f=Hu(\sigma),\mbox{ find }\sigma \end{array} \end{equation} with the assumptions: \begin{equation} \label{hypo} \rho(B)<1, \quad H(I-B)^{-1}M \mbox{ is injective}. \end{equation} To solve the inverse problem we write its least squares formulation: given $\sigma^\text{ex}$ the exact solution of the inverse problem and $f \coloneqq Hu(\sigma^\text{ex})$, $$\sigma^\text{ex} = \mbox{argmin}_{\sigma\in\R^{n_\sigma}} J(\sigma) \quad \mbox{ where } J(\sigma) \coloneqq \frac{1}{2}\norm{Hu(\sigma)-f}^2.$$ Using the classical Lagrangian technique with real scalar products, we introduce the \emph{adjoint state} $p \equiv p(\sigma)$, which is the solution of $$p=B^*p+H^*(Hu-f)$$ and allows us to compute the gradient of the cost functional $$\nabla J(\sigma)=M^*p(\sigma).$$ The classical gradient descent algorithm then reads \begin{equation} \label{usualgd} \mbox{\textbf{usual gradient descent:}}\quad \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n, \\ u^n=Bu^n+M\sigma^n+F,\\ p^n=B^*p^n+H^*(Hu^n-f), \end{cases} \end{equation} where $\tau>0$ is the descent step size, and the state and adjoint state equations are solved exactly by a direct solver. Here $\sigma^{n+1}=\sigma^n-\tau \nabla J(\sigma^{n})$; if instead we update $\sigma^{n+1}=\sigma^n-\tau \nabla J(\sigma^{n-1})$, we obtain the \begin{equation} \label{shifted-gd} \mbox{\textbf{shifted gradient descent:}}\quad \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n, \\ u^{n+1}=Bu^{n+1}+M\sigma^n+F,\\ p^{n+1}=B^*p^{n+1}+H^*(Hu^{n+1}-f). \end{cases} \end{equation} Both algorithms converge for sufficiently small $\tau$ (see e.g.~Appendix~\ref{app:convGD}): for any initial guess, \eqref{usualgd} converges if \begin{equation} \label{best-tau-usualgd} \tau<\frac{2}{\norm{H(I-B)^{-1}M}^2}, \end{equation} and \eqref{shifted-gd} converges if \begin{equation} \label{best-tau-shifted-gd} \tau<\frac{1}{\norm{H(I-B)^{-1}M}^2}. \end{equation} Here, we are interested in methods where the direct and adjoint problems are instead solved iteratively as in \eqref{iterpbdirect}, and where we iterate at the same time on the forward problem solution and the inverse problem unknown: such methods are called \emph{one-shot methods}. More precisely, we are interested in two variants of \emph{multi-step one-shot methods}, defined as follows. Let $n$ be the index of the (outer) iteration on $\sigma$, the solution to the inverse problem.
We update $\sigma^{n+1}=\sigma^n-\tau M^*p^n$ as in gradient descent methods, but the state and adjoint state equations are now solved by a fixed point iteration method, using just \emph{$k$ inner iterations}, and \emph{coupled}: $$\begin{cases} u^{n+1}_{\ell+1}=Bu^{n+1}_\ell+M\sigma+F,\\ p^{n+1}_{\ell+1}=B^*p^{n+1}_\ell+H^*(Hu^{n+1}_\ell-f), \end{cases} \quad\ell=0,1,\dots,k-1, \quad\begin{cases} u^{n+1}=u^{n+1}_k,\\ p^{n+1}=p^{n+1}_k \end{cases}$$ where $\sigma$ depends on the considered variant ($\sigma=\sigma^{n+1}$ or, for the shifted methods, $\sigma=\sigma^n$). As initial guess we naturally choose $u^{n+1}_0=u^n$ and $p^{n+1}_0=p^n$, the information from the previous (outer) step. In summary, we have two multi-step one-shot algorithms \begin{equation} \label{alg:k-shot n+1} k\mbox{\textbf{-step one-shot:}}\quad\begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n, \\ u^{n+1}_0 = u^n, p^{n+1}_{0} = p^n,\\ \quad\left| \begin{array}{l} u^{n+1}_{\ell+1}=Bu^{n+1}_\ell+M {\sigma^{n+1}}+F,\\ p^{n+1}_{\ell+1}=B^*p^{n+1}_\ell+H^*(Hu^{n+1}_\ell -f), \end{array}\right. \\ u^{n+1} = u^{n+1}_k, p^{n+1} = p^{n+1}_k \end{cases} \end{equation} and \begin{equation} \label{alg:k-shot n} \mbox{\textbf{shifted} }k\mbox{\textbf{-step one-shot:}}\quad\begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n, \\ u^{n+1}_0 = u^n, p^{n+1}_{0} = p^n,\\ \quad\left| \begin{array}{l} u^{n+1}_{\ell+1}=Bu^{n+1}_\ell+M {\sigma^{n}}+F,\\ p^{n+1}_{\ell+1}=B^*p^{n+1}_\ell+H^*(Hu^{n+1}_\ell -f), \end{array}\right. \\ u^{n+1} = u^{n+1}_k, p^{n+1} = p^{n+1}_k, \end{cases} \end{equation} and in particular, when $k=1$, we obtain the following two algorithms \begin{equation} \label{alg:1-shot n+1} \mbox{\textbf{one-step one-shot:}}\quad\begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n, \\ u^{n+1} = Bu^n+M\sigma^{n+1}+F\\ p^{n+1} = B^*p^n+H^*(Hu^n-f) \end{cases} \end{equation} and \begin{equation} \label{alg:1-shot n} \mbox{\textbf{shifted} }\mbox{\textbf{one-step one-shot:}}\quad\begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n, \\ u^{n+1} = Bu^n+M\sigma^n+F\\ p^{n+1} = B^*p^n+H^*(Hu^n-f). \end{cases} \end{equation} The only difference for the shifted versions lies in the fact that $\sigma^{n}$ is used in \eqref{alg:k-shot n} and \eqref{alg:1-shot n}, instead of $\sigma^{n+1}$ in \eqref{alg:k-shot n+1} and \eqref{alg:1-shot n+1}, so that in \eqref{alg:k-shot n+1} and \eqref{alg:1-shot n+1} we need to wait for $\sigma^{n+1}$ before updating $u$ and $p$, while in \eqref{alg:k-shot n} and \eqref{alg:1-shot n} we can update $\sigma, u, p$ at the same time. Also note that when $k\rightarrow\infty$, the $k$-step one-shot method \eqref{alg:k-shot n+1} formally converges to the usual gradient descent \eqref{usualgd}, while the shifted $k$-step one-shot method \eqref{alg:k-shot n} formally converges to the shifted gradient descent \eqref{shifted-gd}. We first analyze the one-step one-shot methods ($k=1$) in Section~\ref{sec:one-step} and then the multi-step one-shot methods ($k \ge 2$) in Section~\ref{sec:multi-step}.
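Before turning to the analysis, the algorithms can be illustrated on a toy example. The following sketch (assuming Python with NumPy; the random matrices, the number of inner iterations $k$ and the descent step $\tau$ are placeholder choices, not prescribed by the analysis) implements the usual gradient descent \eqref{usualgd} and the shifted $k$-step one-shot method \eqref{alg:k-shot n} for a small instance of \eqref{direct-inv-prb} satisfying \eqref{hypo}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_u, n_sigma, n_f, k = 20, 5, 8, 3

# Random data with ||B|| < 1 (hence rho(B) < 1); H(I-B)^{-1}M is generically injective
B = rng.standard_normal((n_u, n_u)); B *= 0.5 / np.linalg.norm(B, 2)
M = rng.standard_normal((n_u, n_sigma))
H = rng.standard_normal((n_f, n_u))
F = rng.standard_normal(n_u)
sigma_ex = rng.standard_normal(n_sigma)

I = np.eye(n_u)
u_ex = np.linalg.solve(I - B, M @ sigma_ex + F)
f = H @ u_ex

# Heuristic descent step, smaller than the bound (best-tau-usualgd); the provably
# sufficient values given by the analysis below are smaller still
tau = 0.5 / np.linalg.norm(H @ np.linalg.solve(I - B, M), 2) ** 2

# Usual gradient descent: state and adjoint state solved exactly
sigma = np.zeros(n_sigma)
for n in range(500):
    u = np.linalg.solve(I - B, M @ sigma + F)
    p = np.linalg.solve(I - B.T, H.T @ (H @ u - f))
    sigma = sigma - tau * M.T @ p
print("usual GD, final error:", np.linalg.norm(sigma - sigma_ex))

# Shifted k-step one-shot: k coupled inner fixed point iterations per outer step
sigma = np.zeros(n_sigma); u = np.zeros(n_u); p = np.zeros(n_u)
for n in range(500):
    sigma_new = sigma - tau * M.T @ p
    for _ in range(k):  # inner updates use the old sigma^n, as in (alg:k-shot n)
        u, p = B @ u + M @ sigma + F, B.T @ p + H.T @ (H @ u - f)
    sigma = sigma_new
print("shifted %d-step one-shot, final error:" % k, np.linalg.norm(sigma - sigma_ex))
\end{verbatim}
Both iterations are observed to converge when $\tau$ is small enough; quantifying how small is precisely the object of the convergence analysis in the following sections.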
\section{Convergence of one-step one-shot methods ($k=1$)} \label{sec:one-step} \subsection{Block iteration matrices and eigenvalue equations} \label{sec:eigen-eq-k=1} To analyze the convergence of these methods, first we express $(\sigma^{n+1},u^{n+1},p^{n+1})$ in terms of $(\sigma^n,u^n,p^n)$, by inserting the expression for $\sigma^{n+1}$ into the iteration for $u^{n+1}$ in \eqref{alg:1-shot n+1}, so that system \eqref{alg:1-shot n+1} is rewritten as \begin{equation} \label{1-shot expl n+1} \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n\\ u^{n+1}=Bu^n+M\sigma^n-\tau MM^*p^n+F\\ p^{n+1}=B^*p^n+H^*Hu^n-H^*f.\\ \end{cases} \end{equation} System \eqref{alg:1-shot n} is already in the form we need. In what follows we first study the shifted $1$-step one-shot method, then the $1$-step one-shot method. Now, we consider the errors $(\sigma^n-\sigma^\text{ex},u^n-u(\sigma^\text{ex}),p^n-p(\sigma^\text{ex}))$ with respect to the exact solution at the $n$-th iteration, and, by abuse of notation, we designate them by $(\sigma^n,u^n,p^n)$. We obtain that the errors satisfy: for the shifted algorithm \eqref{alg:1-shot n} \begin{equation} \label{1-shot-err expl n} \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n\\ u^{n+1}=Bu^n+M\sigma^n\\ p^{n+1}=B^*p^n+H^*Hu^n\\ \end{cases} \end{equation} and for algorithm \eqref{1-shot expl n+1} \begin{equation} \label{1-shot-err expl n+1} \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n\\ u^{n+1}=Bu^n+M\sigma^n-\tau MM^*p^n\\ p^{n+1}=B^*p^n+H^*Hu^n,\\ \end{cases} \end{equation} or equivalently, by putting in evidence the block iteration matrices \begin{equation} \label{1-shot-itermat n} \begin{bmatrix} p^{n+1}\\ u^{n+1}\\ \sigma^{n+1} \end{bmatrix}=\begin{bmatrix} B^* & H^*H & 0 \\ 0 & B & M \\ -\tau M^* & 0 & I \end{bmatrix} \begin{bmatrix} p^{n}\\ u^{n}\\ \sigma^{n} \end{bmatrix} \end{equation} and \begin{equation} \label{1-shot-itermat n+1} \begin{bmatrix} p^{n+1}\\ u^{n+1}\\ \sigma^{n+1} \end{bmatrix}=\begin{bmatrix} B^* & H^*H & 0\\ -\tau MM^* & B & M \\ -\tau M^* & 0 & I \end{bmatrix} \begin{bmatrix} p^{n}\\ u^{n}\\ \sigma^{n} \end{bmatrix}. \end{equation} Now recall that a fixed point iteration converges if and only if the spectral radius of its iteration matrix is strictly less than $1$. Therefore in the following propositions we establish eigenvalue equations for the iteration matrix of the two methods. \begin{proposition}[Eigenvalue equation for the shifted $1$-step one-shot method]\label{prop:eq-eigen shift-1-shot} Assume that $\lambda\in\mathbb{C}$ is an eigenvalue of the iteration matrix in \eqref{1-shot-itermat n}. \begin{enumerate}[label=(\roman*)] \item If $\lambda\in\mathbb{C}$, $\lambda\notin\mathrm{Spec}(B)$, then $\exists \, y\in\mathbb{C}^{n_\sigma}, y\neq 0$ such that \begin{equation}\label{ori-eq-eigen-1-shot n} (\lambda-1)\norm{y}^2+\tau\scalar{M^*(\lambda I-B^*)^{-1}H^*H(\lambda I-B)^{-1}My,y}=0. \end{equation} \item $\lambda=1$ is not an eigenvalue of the iteration matrix. \end{enumerate} \end{proposition} \begin{remark} Since $\rho(B)$ is strictly less than $1$, so is $\rho(B^*)$. \end{remark} \begin{proof} Since $\lambda\in\mathbb{C}$ is an eigenvalue of the iteration matrix in \eqref{1-shot-itermat n}, there exists a non-zero vector $(\tilde{p},\tilde{u},y)\in\mathbb{C}^{n_u+n_u+n_\sigma}$ such that \begin{equation} \label{eq:eigenvec-1-shot n} \begin{cases} \lambda y = y-\tau M^*\tilde{p} \\ \lambda\tilde{u}= B\tilde{u}+My \\ \lambda\tilde{p}=B^*\tilde{p}+H^*H\tilde{u}. 
\end{cases} \end{equation} By the second equation in \eqref{eq:eigenvec-1-shot n} $\tilde{u}=(\lambda I-B)^{-1}My$, so together with the third equation $$ \tilde{p}=(\lambda I-B^*)^{-1}H^*H\tilde{u}=(\lambda I-B^*)^{-1}H^*H(\lambda I-B)^{-1}My, $$ and by inserting this result into the first equation we obtain \begin{equation}\label{eq:pre-eq-eigen-1-shot n} (\lambda-1)y=-\tau M^*(\lambda I-B^*)^{-1}H^*H(\lambda I-B)^{-1}My, \end{equation} that gives \eqref{ori-eq-eigen-1-shot n} by taking the scalar product with $y$. We also see that if $y=0$ then the above formulas for $\tilde{u},\tilde{p}$ immediately give $\tilde{u}=\tilde{p}=0$, that is a contradiction. \noindent (ii) Assume that $\lambda=1$ is an eigenvalue of the iteration matrix, then \eqref{eq:pre-eq-eigen-1-shot n} gives us $$M^*(I-B^*)^{-1}H^*H(I-B)^{-1}My=0,$$ but this cannot happen for $y \ne 0$ due to the injectivity of $H(I-B)^{-1}M$. \end{proof} \begin{proposition}[Eigenvalue equation for the $1$-step one-shot method] \label{prop:eq-eigen 1-shot} Assume that $\lambda\in\mathbb{C}$ is an eigenvalue of the iteration matrix in \eqref{1-shot-itermat n+1}. \begin{enumerate}[label=(\roman*)] \item If $\lambda\in\mathbb{C}$, $\lambda\notin\mathrm{Spec}(B)$ then $\exists \, y\in\mathbb{C}^{n_\sigma}, y\neq 0$ such that: \begin{equation}\label{ori-eq-eigen-1-shot n+1} (\lambda-1)\norm{y}^2+\tau\lambda\scalar{M^*(\lambda I-B^*)^{-1}H^*H(\lambda I-B)^{-1}My,y}=0. \end{equation} \item $\lambda=1$ is not an eigenvalue of the iteration matrix. \end{enumerate} \end{proposition} \begin{proof} Since $\lambda\in\mathbb{C}$ is an eigenvalue of the iteration matrix in \eqref{1-shot-itermat n+1}, there exists a non-zero vector $(\tilde{p},\tilde{u},y)\in\mathbb{C}^{n_u+n_u+n_\sigma}$ such that \begin{equation} \label{eq:eigenvec-1-shot n+1} \begin{cases} \lambda y = y-\tau M^*\tilde{p} \\ \lambda\tilde{u}= B\tilde{u}+My-\tau MM^*\tilde{p} \\ \lambda\tilde{p}=B^*\tilde{p}+H^*H\tilde{u}. \end{cases} \end{equation} By the third equation in \eqref{eq:eigenvec-1-shot n+1} $\tilde{p}=(\lambda I-B^*)^{-1}H^*H\tilde{u}$, and inserting this result into the second equation we obtain $$ \lambda\tilde{u}=B\tilde{u}+My-\tau MM^*(\lambda I-B^*)^{-1}H^*H\tilde{u}, $$ or equivalently, $$[I+\tau MM^*A](\lambda I-B)\tilde{u}=My$$ where $A=(\lambda I-B^*)^{-1}H^*H(\lambda I-B)^{-1}$. Since $\tau>0$, $I+\tau MM^*A$ is a positive definite matrix. Therefore $$\tilde{u}=(\lambda I-B)^{-1}[I+\tau MM^*A]^{-1}My$$ and $$\tilde{p}=(\lambda I-B^*)^{-1}H^*H\tilde{u}=A[I+\tau MM^*A]^{-1}My.$$ By inserting this result into the first equation in \eqref{eq:eigenvec-1-shot n+1} we obtain $$(\lambda-1)y=-\tau M^*A[I+\tau MM^*A]^{-1}My.$$ Thanks to the fact that $[I+\tau MM^*A]^{-1}$ and $MM^*A$ commute, we have $$(\lambda-1)My=-\tau MM^*A[I+\tau MM^*A]^{-1}My=-\tau[I+\tau MM^*A]^{-1}MM^*AMy$$ then $$(\lambda-1)[I+\tau MM^*A]My=-\tau MM^*AMy,$$ that leads to $$(\lambda-1)My+\tau\lambda MM^*AMy=0.$$ Since $H(I-B)^{-1}M$ is injective, so is $M$. Therefore \begin{equation}\label{eq:pre-eq-eigen-1-shot n+1} (\lambda-1)y+\tau\lambda M^*AMy=0, \end{equation} that gives \eqref{ori-eq-eigen-1-shot n+1} by taking scalar product with $y$. We also see that if $y=0$ then the above formulas for $\tilde{u},\tilde{p}$ immediately give $\tilde{u}=\tilde{p}=0$, that is a contradiction. 
\noindent (ii) Assume that $\lambda=1$ is an eigenvalue of the iteration matrix, then \eqref{eq:pre-eq-eigen-1-shot n+1} gives us $$M^*(I-B^*)^{-1}H^*H(I-B)^{-1}My=0,$$ but this cannot happen for $y \ne 0$ due to the injectivity of $H(I-B)^{-1}M$. \end{proof} In the following sections we will show that, for sufficiently small $\tau$, equations \eqref{ori-eq-eigen-1-shot n} and \eqref{ori-eq-eigen-1-shot n+1} admit no solution with $|\lambda|\ge 1$, thus algorithms \eqref{alg:1-shot n} and \eqref{alg:1-shot n+1} converge. When $\lambda\neq 0$, it is convenient to rewrite \eqref{ori-eq-eigen-1-shot n} and \eqref{ori-eq-eigen-1-shot n+1} respectively as \begin{equation} \label{eq-eigen-1-shot n} \lambda^2(\lambda-1)\norm{y}^2+\tau\scalar{M^*\left(I-B^*/\lambda\right)^{-1}H^*H\left(I-B/\lambda\right)^{-1}My,y}=0 \end{equation} and \begin{equation} \label{eq-eigen-1-shot n+1} \lambda(\lambda-1)\norm{y}^2+\tau\scalar{M^*\left(I-B^*/\lambda\right)^{-1}H^*H\left(I-B/\lambda\right)^{-1}My,y}=0. \end{equation} For the analysis we use auxiliary results proved in Appendix~\ref{app:lems}. \medskip First, we study separately the very particular case where $B=0$. \begin{proposition}[shifted $1$-step one-shot method]\label{tau-1-shot-B=0 n} When $B=0$, the eigenvalue equation \eqref{eq-eigen-1-shot n} admits no solution $\lambda\in\mathbb{C}, |\lambda|\ge 1$ if $\tau<\frac{-1+\sqrt{5}}{2\norm{H}^2\norm{M}^2}$. \end{proposition} \begin{proof} When $B=0$, equation \eqref{eq-eigen-1-shot n} becomes $\lambda^2(\lambda-1)\norm{y}^2+\tau\norm{HMy}^2=0$ which is equivalent to $\lambda^3-\lambda^2+\frac{\norm{HMy}^2}{\norm{y}^2}\tau=0$. Then, the conclusion can be obtained by Lemma \ref{marden-ord-3}. \end{proof} \begin{proposition}[$1$-step one-shot method]\label{tau-1-shot-B=0 n+1} When $B=0$, the eigenvalue equation \eqref{eq-eigen-1-shot n+1} admits no solution $\lambda\in\mathbb{C}, |\lambda|\ge 1$ if $\tau<\frac{1}{\norm{H}^2\norm{M}^2}$. \end{proposition} \begin{proof} When $B=0$, equation \eqref{eq-eigen-1-shot n+1} becomes $\lambda(\lambda-1)\norm{y}^2+\tau\norm{HMy}^2=0$ which yields $\lambda^3-\lambda^2+\frac{\norm{HMy}^2}{\norm{y}^2}\tau\lambda=0$. Then, the conclusion can be obtained by Lemma \ref{marden-ord-3}. \end{proof} \subsection{Real eigenvalues} We now find conditions on the descent step $\tau$ such that the real eigenvalues stay inside the unit disk. Recall that we have already proved that $\lambda=1$ is not an eigenvalue for either method. \begin{proposition}[shifted $1$-step one-shot method]\label{tau n k=1 real} Equation \eqref{eq-eigen-1-shot n} \begin{enumerate}[label=(\roman*)] \item admits no solution $\lambda\in\R, \lambda>1$ for all $\tau>0$; \item admits no solution $\lambda\in\R, \lambda\le -1$ if we take $$\tau<\frac{2}{\norm{H}^2\norm{M}^2s(B)^2},$$ where $s(B)$ is defined in Lemma~\ref{inv(I-T/z)}; moreover if $0<\norm{B}<1$, we can take \begin{equation}\label{eq:tau n k=1 real} \tau<\frac{\chi_0(1,\norm{B})}{\norm{H}^2\norm{M}^2}, \quad \text{where } \; \chi_0(1,b) = 2(1-b)^2 \end{equation} (here in the notation $\chi_0(1,b)$, $1$ refers to $k=1$).
\end{enumerate} \end{proposition} \begin{proof} When $\lambda\in\R\backslash\{0\}$ equation \eqref{eq-eigen-1-shot n} becomes $$\lambda^2(\lambda-1)\norm{y}^2+\tau\norm{H(I-B/\lambda)^{-1}My}^2=0.$$ The left-hand side of the above equation is strictly positive for any $\tau>0$ if $\lambda>1$; it is strictly negative for $\tau$ satisfying the inequality in (ii) if $\lambda\le -1$, noting that $\lambda \mapsto \lambda^2(\lambda-1)$ is increasing for $\lambda\le-1$. \end{proof} \begin{proposition}[$1$-step one-shot method]\label{tau n+1 k=1 real} Equation \eqref{eq-eigen-1-shot n+1} admits no solution $\lambda\in\R, \lambda\neq 1, |\lambda|\ge 1$ for all $\tau>0$. \end{proposition} \begin{proof} When $\lambda\in\R\backslash\{0\}$ equation \eqref{eq-eigen-1-shot n+1} becomes $$\lambda(\lambda-1)\norm{y}^2+\tau\norm{H(I-B/\lambda)^{-1}My}^2=0.$$ If $\lambda\in\R,\lambda\neq 1,|\lambda|\ge 1$ then $\lambda(\lambda-1)>0$, thus the left-hand side of the above equation is strictly positive for any $\tau>0$. \end{proof} \subsection{Complex eigenvalues} \label{subsec:complex-1-step} We now look for conditions on the descent step $\tau$ such that also the complex eigenvalues stay inside the unit disk. We first deal with the shifted $1$-step one-shot method. \begin{proposition}[shifted $1$-step one-shot method]\label{tau n k=1 cpl} If $B\neq 0$, $\exists\tau>0$ sufficiently small such that equation \eqref{eq-eigen-1-shot n} admits no solution $\lambda\in\mathbb{C}\backslash\R, |\lambda|\ge 1$. In particular, if $0<\norm{B}<1$, given any $\delta_0>0$ and $0<\theta_0\le\frac{\pi}{6}$, take \[ \tau < \frac{\min \{ \chi_1(1,\norm{B}),\; \chi_2(1,\norm{B}),\; \chi_3(1,\norm{B}),\; \chi_4(1,\norm{B}) \} }{\norm{H}^2\norm{M}^2}, \] where $$\chi_1(1,b)=\frac{(1-b)^4}{4b^2},\quad\chi_2(1,b)=\cfrac{2\sin\frac{\theta_0}{2}(1-b)^2}{(1+b)^2},$$ $$\chi_3(1,b)=\cfrac{\delta_0\cos^2\frac{5\theta_0}{2}}{2\left(1+2\delta_0\sin\frac{5\theta_0}{2}+\delta_0^2\right)}\cdot\cfrac{(1-b)^4}{b^2},\quad\chi_4(1,b)=\left[\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos2\theta_0\right](1-b)^2$$ (here in the notation $\chi_i(1,b), i=1,\dots,4$, $1$ refers to $k=1$). \end{proposition} \begin{proof} \textbf{Step 1. Rewrite equation \eqref{eq-eigen-1-shot n} so that we can study its real and imaginary parts.} Let $\lambda=R(\cos\theta+\ic\sin\theta)$ in polar form where $R=|\lambda| \ge 1$ and $\theta\in(-\pi,\pi)$. Write ${1}/{\lambda}=r(\cos\phi+\ic\sin\phi)$ in polar form where $r={1}/{|\lambda|}={1}/{R}\le 1$ and $\phi=-\theta\in(-\pi,\pi)$. By Lemma \ref{decompq}, we have $$\left(I-\cfrac{B}{\lambda}\right)^{-1}=P(\lambda)+\ic Q(\lambda),\quad \left(I-\cfrac{B^*}{\lambda}\right)^{-1}=P(\lambda)^*+\ic Q(\lambda)^*$$ where $P(\lambda)$ and $Q(\lambda)$ are $\mathbb{C}^{n_u\times n_u}$-valued functions, and, by omitting the dependence on $\lambda$, \begin{equation}\label{eq:pB} \norm{P}\le p \coloneqq \left\{\begin{array}{cl} (1+\norm{B})s(B)^2 &\text{ for general }B\neq 0,\\ \cfrac{1}{1-\norm{B}} &\text{ when }\norm{B}<1; \end{array}\right. \end{equation} \begin{equation}\label{eq:q1B} \norm{Q}\le q_1 \coloneqq \left\{\begin{array}{cl} \norm{B}s(B)^2 &\text{ for general }B\neq 0,\\ \cfrac{\norm{B}}{1-\norm{B}} &\text{ when }0<\norm{B}<1; \end{array}\right. \end{equation} \begin{equation}\label{eq:q2B} \norm{Q}\le |\sin\theta|q_2,\quad q_2 \coloneqq \left\{\begin{array}{cl} \norm{B}s(B)^2 &\text{ for general }B\neq 0,\\ \cfrac{\norm{B}}{(1-\norm{B})^2} &\text{ when }0<\norm{B}<1. \end{array}\right. 
\end{equation} Now we rewrite \eqref{eq-eigen-1-shot n} as \begin{equation} \label{ready-split-k=1} \lambda^2(\lambda-1)\norm{y}^2+\tau G(P^*+\ic Q^*,P+\ic Q)=0 \end{equation} where $$G(X,Y)=\scalar{M^*XH^*HYMy,y}\in\mathbb{C}, \quad X,Y\in\mathbb{C}^{n_u\times n_u}.$$ $G$ satisfies the following properties: \begin{itemize} \item $\forall X,Y_1,Y_2\in\mathbb{C}^{n_u\times n_u}, \forall z_1,z_2\in\mathbb{C}$: \quad $G(X, z_1Y_1+z_2Y_2)=z_1G(X,Y_1)+z_2G(X,Y_2).$ \item $\forall X_1,X_2,Y\in\mathbb{C}^{n_u\times n_u}, \forall z_1,z_2\in\mathbb{C}$: \quad $G(z_1X_1+z_2X_2,Y)=z_1G(X_1,Y)+z_2G(X_2,Y).$ \item $\forall X\in\mathbb{C}^{n_u\times n_u}$: \quad $0\le G(X^*,X)=\norm{HXMy}^2\le (\norm{H}\norm{M}\norm{X})^2 \norm{y}^2.$ \item $\forall X,Y\in\mathbb{C}^{n_u\times n_u}$: \quad $G(X,Y)+G(Y^*,X^*)\in\R$, indeed $$\begin{array}{ll} G(X,Y)&=\scalar{M^*XH^*HYMy,y}=\scalar{y,M^*Y^*H^*HX^*My}\\ &=\scalar{M^*Y^*H^*HX^*My,y}^*=G(Y^*,X^*)^*. \end{array}$$ \end{itemize} With these properties of $G$, we expand \eqref{ready-split-k=1} and take its real and imaginary parts, so we respectively obtain: \begin{equation}\label{re-eq k=1} \Re(\lambda^3-\lambda^2)\norm{y}^2+\tau [G(P^*,P)- G(Q^*,Q)]=0 \end{equation} and \begin{equation}\label{im-eq k=1} \Im(\lambda^3-\lambda^2)\norm{y}^2+\tau [G(P^*,Q)+ G(Q^*,P)]=0 \end{equation} \noindent\textbf{Step 2. Find a suitable combination of equations \eqref{re-eq k=1} and \eqref{im-eq k=1}, choose $\tau$ so that we obtain a new equation with a left-hand side which is strictly positive/negative.} Let $\gamma=\gamma(\lambda)\in\R$, defined by cases as in Lemma~\ref{gamma123 n}. Multiplying equation \eqref{im-eq k=1} with $\gamma$ then summing it with equation \eqref{re-eq k=1}, we obtain: $$ [\Re(\lambda^3-\lambda^2)+\gamma\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau[G(P^*,P)-G(Q^*,Q)+\gamma G(P^*,Q)+\gamma G(Q^*,P)]=0, $$ or equivalently, \begin{equation}\label{im-gamma-re k=1} [\Re(\lambda^3-\lambda^2)+\gamma\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau G(P^*+\gamma Q^*,P+\gamma Q)-(1+\gamma^2)\tau G(Q^*,Q)=0. \end{equation} Now we consider four cases of $\lambda$ as in Lemma~\ref{gamma123 n}: \begin{itemize} \item\textit{Case 1.} $\Re(\lambda^3-\lambda^2)\ge 0$; \item\textit{Case 2.} $\Re(\lambda^3-\lambda^2)<0$ and $\theta\in [\theta_0,\pi-\theta_0]\cup[-\pi+\theta_0,-\theta_0]$ for fixed $0<\theta_0\le\frac{\pi}{6}$; \item\textit{Case 3.} $\Re(\lambda^3-\lambda^2)<0$ and $\theta \in (-\theta_0,\theta_0)$ for fixed $0<\theta_0\le \frac{\pi}{6}$; \item\textit{Case 4.} $\Re(\lambda^3-\lambda^2)<0$ and $\theta \in (\pi-\theta_0,\pi)\cup(-\pi,-\pi+\theta_0)$ for fixed $0<\theta_0\le \frac{\pi}{6}$. \end{itemize} The four cases will be treated in the following four lemmas (Lemmas \ref{k=1 n case1}--\ref{k=1 n case4}), which together give the statement of this proposition. \end{proof} \begin{lemma}[Case 1] \label{k=1 n case1} Equation \eqref{eq-eigen-1-shot n} admits no solutions $\lambda$ in Case 1 if we take $$\tau<\frac{1}{4\norm{H}^2\norm{M}^2\norm{B}^2s(B)^4}.$$ Moreover, if $0<\norm{B}<1$, we can take $$\tau<\frac{(1-\norm{B})^4}{4\norm{H}^2\norm{M}^2\norm{B}^2}.$$ \end{lemma} \begin{proof} Writing \eqref{im-gamma-re k=1} for $\gamma=\gamma_1$ as in Lemma \ref{gamma123 n} (i) (in particular $\gamma_1^2=1$), we have \begin{equation}\label{case1 k=1} [\Re(\lambda^3-\lambda^2)+\gamma_1\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau G(P^*+\gamma_1 Q^*,P+\gamma_1 Q)-2\tau G(Q^*,Q)=0. 
\end{equation} By the properties of $G$ we have $$G(P^*+\gamma_1 Q^*,P+\gamma_1 Q) \ge 0$$ and $$G(Q^*,Q)\le(\norm{H}\norm{M}\norm{Q})^2\norm{y}^2\le (\norm{H}\norm{M}|\sin\theta|q_2)^2\norm{y}^2,$$ therefore the left-hand side of \eqref{case1 k=1} will be strictly positive if $\tau$ satisfies $$\tau<\frac{\Re(\lambda^3-\lambda^2)+\gamma_1\Im(\lambda^3-\lambda^2)}{2\left(\norm{H}\norm{M}|\sin\theta|q_2\right)^2}.$$ Since $\Re(\lambda^3-\lambda^2)+\gamma_1\Im(\lambda^3-\lambda^2)\ge 2|\sin(\theta/2)|$ by Lemma \ref{gamma123 n} (i), it is enough to choose $$\tau<\cfrac{1}{4\left|\sin\frac{\theta}{2}\right|\cos^2\frac{\theta}{2}\norm{H}^2\norm{M}^2q_2^2}.$$ Since $\left|\sin\frac{\theta}{2}\right|\cos^2\frac{\theta}{2}\le 1$, it is sufficient to choose $\tau<\frac{1}{4\norm{H}^2\norm{M}^2q_2^2}$ and we use definition \eqref{eq:q2B} of $q_2$. \end{proof} \begin{lemma}[Case 2] \label{k=1 n case2} Equation \eqref{eq-eigen-1-shot n} admits no solutions $\lambda$ in Case 2 if we take $$\tau<\cfrac{2\sin\frac{\theta_0}{2}}{\norm{H}^2\norm{M}^2(1+2\norm{B})^2s(B)^4}.$$ Moreover, if $0<\norm{B}<1$, we can take $$\tau<\cfrac{2\sin\frac{\theta_0}{2}(1-\norm{B})^2}{\norm{H}^2\norm{M}^2(1+\norm{B})^2}.$$ \end{lemma} \begin{proof} Writing \eqref{im-gamma-re k=1} for $\gamma=\gamma_2$ as in Lemma \ref{gamma123 n} (ii) (in particular $\gamma_2^2=1$), we have \begin{equation}\label{case2 k=1} [\Re(\lambda^3-\lambda^2)+\gamma_2\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau G(P^*+\gamma_2 Q^*,P+\gamma_2 Q)-2\tau G(Q^*,Q)=0. \end{equation} By the properties of $G$ $$G(Q^*,Q)\ge 0,\quad G(P^*+\gamma_2 Q^*,P+\gamma_2 Q)\le(\norm{H}\norm{M}\norm{P+\gamma_2 Q})^2\norm{y}^2$$ and the estimate $\norm{P+\gamma_2 Q}\le \norm{P}+|\gamma_2|\norm{Q}=\norm{P}+\norm{Q}\le p+q_1,$ the left-hand side of \eqref{case2 k=1} will be strictly negative if $\tau$ satisfies: $$\tau<\frac{-\Re(\lambda^3-\lambda^2)-\gamma_2\Im(\lambda^3-\lambda^2)}{\left[\norm{H}\norm{M}(p+q_1)\right]^2}.$$ Thanks to Lemma \ref{gamma123 n} (ii), it is sufficient to choose $$\tau<\cfrac{2\sin\frac{\theta_0}{2}}{\norm{H}^2\norm{M}^2(p+q_1)^2}$$ and we use definitions \eqref{eq:pB} and \eqref{eq:q1B} of $p$ and $q_1$. \end{proof} \begin{lemma}[Case 3] \label{k=1 n case3} Let $\delta_0>0$ be fixed. Equation \eqref{eq-eigen-1-shot n} admits no solutions $\lambda$ in Case 3 if we take $$\tau<\cfrac{\delta_0\cos^2\frac{5\theta_0}{2}}{2\left(1+2\delta_0\sin\frac{5\theta_0}{2}+\delta_0^2\right)}\cdot\cfrac{1}{\norm{H}^2\norm{M}^2\norm{B}^2s(B)^4}.$$ Moreover, if $0<\norm{B}<1$, we can take $$\tau<\cfrac{\delta_0\cos^2\frac{5\theta_0}{2}}{2\left(1+2\delta_0\sin\frac{5\theta_0}{2}+\delta_0^2\right)}\cdot\cfrac{(1-\norm{B})^4}{\norm{H}^2\norm{M}^2\norm{B}^2}.$$ \end{lemma} \begin{proof} Writing \eqref{im-gamma-re k=1} for $\gamma=\gamma_3$ as in Lemma \ref{gamma123 n} (iii), we have \begin{equation}\label{case3 k=1} [\Re(\lambda^3-\lambda^2)+\gamma_3\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau G(P^*+\gamma_3 Q^*,P+\gamma_3 Q)-(1+\gamma_3^2)\tau G(Q^*,Q)=0. 
\end{equation} By the properties of $G$ $$G(P^*+\gamma_3 Q^*,P+\gamma_3 Q) \ge 0,\quad G(Q^*,Q)\le(\norm{H}\norm{M}\norm{Q})^2\norm{y}^2$$ and by the estimate $\norm{Q}\le|\sin\theta|q_2$, the left-hand side of \eqref{case3 k=1} will be strictly positive if $\tau$ satisfies: $$\tau<\frac{\Re(\lambda^3-\lambda^2)+\gamma_3\Im(\lambda^3-\lambda^2)}{(1+\gamma_3^2)\left(\norm{H}\norm{M}|\sin\theta|q_2\right)^2}.$$ Since by Lemma \ref{gamma123 n} (iii) $\Re(\lambda^3-\lambda^2)+\gamma_3\Im(\lambda^3-\lambda^2)>2\delta_0\left|\sin\frac{\theta}{2}\right|$, it is sufficient to choose $$\tau<\cfrac{\delta_0}{2(1+\gamma_3^2)\norm{H}^2\norm{M}^2q_2^2}=\cfrac{1}{2\norm{H}^2\norm{M}^2q_2^2}\cdot\cfrac{\delta_0\cos^2\frac{5\theta_0}{2}}{1+2\delta_0\sin\frac{5\theta_0}{2}+\delta_0^2},$$ where we have used the definition of $\gamma_3$. To conclude we use definition \eqref{eq:q2B} of $q_2$. \end{proof} \begin{lemma}[Case 4] \label{k=1 n case4} Equation \eqref{eq-eigen-1-shot n} admits no solutions $\lambda$ in Case 4 if we take $$\tau<\cfrac{\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos2\theta_0}{\norm{H}^2\norm{M}^2(1+\norm{B})^2s(B)^4}.$$ Moreover, if $0<\norm{B}<1$, we can take $$\tau<\left[\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos2\theta_0\right]\cfrac{(1-\norm{B})^2}{\norm{H}^2\norm{M}^2}.$$ \end{lemma} \begin{proof} Here it is enough to consider \eqref{re-eq k=1}. By the properties of $G$ $$G(Q^*,Q)\ge 0,\quad G(P^*,P)\le(\norm{H}\norm{M}p)^2\norm{y}^2$$ we see that the left-hand side of \eqref{re-eq k=1} will be strictly negative if $\tau$ satisfies $$\tau<\frac{-\Re(\lambda^3-\lambda^2)}{\left(\norm{H}\norm{M}p\right)^2}.$$ Thanks to Lemma \ref{gamma123 n} (iv), it is sufficient to choose $$\tau<\cfrac{\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos2\theta_0}{\norm{H}^2\norm{M}^2p^2},$$ and definition \eqref{eq:pB} of $p$ leads to the conclusion. \end{proof} Similarly, with the help of Lemma \ref{gamma123 n+1}, we prove for the $1$-step one-shot method the analogue of Proposition~\ref{tau n k=1 cpl}. In particular, note that here just three cases of $\lambda$ need to be considered, because the analogue of the fourth one is excluded by Lemma \ref{gamma123 n+1} (iv). \begin{proposition}[$1$-step one-shot method]\label{tau n+1 k=1 cpl} If $B\neq 0$, $\exists\tau>0$ sufficiently small such that equation \eqref{eq-eigen-1-shot n+1} admits no solution $\lambda\in\mathbb{C}\backslash\R, |\lambda|\ge 1$. In particular, if $0<\norm{B}<1$, given any $\delta_0>0$ and $0<\theta_0\le\frac{\pi}{4}$, take \[ \tau < \frac{\min \{ \psi_1(1,\norm{B}),\; \psi_2(1,\norm{B}),\; \psi_3(1,\norm{B}) \} }{\norm{H}^2\norm{M}^2}, \] where $$\psi_1(1,b)=\frac{(1-b)^4}{4b^2},\quad\psi_2(1,b)=\cfrac{2\sin\frac{\theta_0}{2}(1-b)^2}{(1+b)^2},\quad\psi_3(1,b)=\cfrac{\delta_0\cos^2\frac{3\theta_0}{2}(1-b)^4}{2\left(1+2\delta_0\sin\frac{3\theta_0}{2}+\delta_0^2\right)b^2}$$ (here in the notation $\psi_i(1,b), i=1,2,3$, $1$ refers to $k=1$). \end{proposition} \subsection{Final result ($k=1$)} Considering Proposition~\ref{tau-1-shot-B=0 n}, and taking the minimum between the bound \eqref{eq:tau n k=1 real} in Proposition~\ref{tau n k=1 real} for real eigenvalues and the bound in Proposition~\ref{tau n k=1 cpl} for complex eigenvalues, we obtain a sufficient condition on the descent step $\tau$ to ensure convergence of the shifted $1$-step one-shot method.
\begin{theorem}[Convergence of shifted $1$-step one-shot] \label{th:tau n k=1 all} Under assumption \eqref{hypo}, the shifted $1$-step one-shot method \eqref{alg:1-shot n} converges for sufficiently small $\tau$. In particular, for $\norm{B}<1$, it is enough to take $$\tau<\frac{\chi(1,\norm{B})}{\norm{H}^2\norm{M}^2},$$ where $\chi(1,\norm{B})$ is an explicit function of $\norm{B}$ (in this notation $1$ refers to $k=1$). \end{theorem} \begin{remark} Set $b=\norm{B}$. For $0<b<1$, a practical (but not optimal) bound for $\tau$ is $$\tau< \frac{1}{\norm{H}^2\norm{M}^2} \cdot \min\left\{\frac{1}{2}\cdot\frac{(1-b)^2}{(1+b)^2}, \; \frac{1-\sin\frac{5\pi}{12}}{4}\cdot\frac{(1-b)^4}{b^2}\right\}.$$ Indeed, using the notation in Proposition~\ref{tau n k=1 real} and \ref{tau n k=1 cpl}, it is easy to show that $\chi_2(1,b)\le\chi_0(1,b)$ and $\chi_3(1,b)\le\chi_1(1,b)$. By studying $\chi_3(1,b)$ and noting that $\delta_0^2+1\ge 2\delta_0$, we see that we should take $\delta_0=1$. Finally, we can take for instance $\theta_0=\frac{\pi}{6}$, then compare $\chi_2(1,b)$, $\chi_3(1,b)$ and $\chi_4(1,b)$. \end{remark} Putting together Propositions~\ref{tau-1-shot-B=0 n+1}, \ref{tau n+1 k=1 real}, \ref{tau n+1 k=1 cpl}, we obtain a sufficient condition on the descent step $\tau$ to ensure convergence of the $1$-step one-shot method. \begin{theorem}[Convergence of $1$-step one-shot] \label{th:tau n+1 k=1 all} Under assumption \eqref{hypo}, the $1$-step one-shot method \eqref{alg:1-shot n+1} converges for sufficiently small $\tau$. In particular, for $\norm{B}<1$, it is enough to take $$\tau<\frac{\psi(1,\norm{B})}{\norm{H}^2\norm{M}^2},$$ where $\psi(1,\norm{B})$ is an explicit function of $\norm{B}$ (in this notation $1$ refers to $k=1$). \end{theorem} \begin{remark} Similarly as above, for $0<b<1$, a practical (but not optimal) bound for $\tau$ is $$\tau< \frac{1}{\norm{H}^2\norm{M}^2} \cdot \min\left\{2\sin\frac{\pi}{8}\cdot\frac{(1-b)^2}{(1+b)^2}, \;\frac{1-\sin\frac{3\pi}{8}}{4}\cdot\frac{(1-b)^4}{b^2}\right\}.$$ \end{remark} \section{Convergence of multi-step one-shot methods ($k\ge 2$)} \label{sec:multi-step} We now tackle the multi-step case, that is the $k$-step one-shot methods with $k\ge 2$. \subsection{Block iteration matrices and eigenvalue equations} \label{sec:eigen-eq} Once again, to analyze the convergence of these methods, first we express $(\sigma^{n+1},u^{n+1},p^{n+1})$ in terms of $(\sigma^n,u^n,p^n)$, by rewriting the recursions for $u$ and $p$: systems \eqref{alg:k-shot n+1} and \eqref{alg:k-shot n} are respectively rewritten as \begin{equation} \label{k-shot expl n+1} \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n\\ u^{n+1}=B^ku^n+T_kM\sigma^n-\tau T_kMM^*p^n+T_kF\\ p^{n+1}=[(B^*)^k-\tau X_kMM^*]p^n+U_ku^n+X_kM\sigma^n+X_kF-T_k^*H^*f\\ \end{cases} \end{equation} and \begin{equation} \label{k-shot expl n} \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n\\ u^{n+1}=B^ku^n+T_kM\sigma^n+T_kF\\ p^{n+1}=(B^*)^kp^n+U_ku^n+X_kM\sigma^n+X_kF-T_k^*H^*f\\ \end{cases} \end{equation} where \begin{equation}\label{eq:T} T_k=I+B+...+B^{k-1}=(I-B)^{-1}(I-B^k), \quad k\ge 1, \end{equation} $$U_k=(B^*)^{k-1}H^*H+(B^*)^{k-2}H^*HB+...+H^*HB^{k-1}, \quad k\ge 1,$$ \begin{equation}\label{eq:X} X_k=\left\{\begin{array}{cl} (B^*)^{k-2}H^*HT_1+(B^*)^{k-3}H^*HT_2+...+H^*HT_{k-1} & \text{if }k\ge 2,\\ 0 & \text{if }k=1. \end{array}\right. 
\end{equation} Note that \eqref{k-shot expl n+1} ($k$-step one-shot) can be obtained from \eqref{k-shot expl n} (shifted $k$-step one-shot) by replacing $\sigma^n$ with $\sigma^{n+1}=\sigma^n-\tau M^*p^n$ in the equations for $u$ and $p$, which yields two extra terms in \eqref{k-shot expl n+1}. In what follows we first study the shifted $k$-step one-shot method then the $k$-step one-shot method. The following lemma gathers some useful properties of $T_k, U_k$ and $X_k$. \begin{lemma}\label{lem:propTUX} \begin{enumerate}[label=(\roman*)] \item The matrices $U_k$ and $X_k$ can be rewritten as \begin{equation*} \begin{split} & U_k=\sum_{i+j=k-1} (B^*)^iH^*HB^j \quad \text{for } k\ge 1, \\ & X_k=\sum_{l=0}^{k-2}\sum_{i+j=l} (B^*)^iH^*HB^j=\sum_{l=1}^{k-1}U_l \quad \text{for } k\ge 2. \end{split} \end{equation*} \item The matrices $U_k$ and $X_k$ are self-adjoint: $U_k^*=U_k$, $X_k^*=X_k$. \item We have the relation \begin{equation}\label{uxt} U_kT_k-X_kB^k+X_k=T_k^*H^*HT_k, \quad\forall k\ge 1. \end{equation} \end{enumerate} \end{lemma} \begin{proof} (i) is easy to check by the definitions. (ii) follows from (i). \noindent (iii) For $k=1$, we have $U_1=H^*H$, $T_1=I$ and $X_1=0$, hence the identity is verified. For $k\ge 2$, note that $X_{k+1}=B^*X_k+H^*HT_k$, then by (ii) $X_{k+1}=X_{k+1}^*=X_kB+T_k^*H^*H$. On the other hand, from (i) we get that $X_{k+1}=X_k+U_k$. Thus, $$X_k+U_k=X_kB+T_k^*H^*H,\quad\mbox{ or equivalently, }\quad U_k=X_k(B-I)+T_k^*H^*H.$$ Finally, $$U_kT_k=X_k(B-I)T_k+T_k^*H^*HT_k=X_k(B^k-I)+T_k^*H^*HT_k.$$ \end{proof} Now, we consider the errors $(\sigma^n-\sigma^\text{ex},u^n-u(\sigma^\text{ex}),p^n-p(\sigma^\text{ex}))$ with respect to the exact solution at the $n$-th iteration, and, by abuse of notation, we designate them by $(\sigma^n,u^n,p^n)$. We obtain that the errors satisfy: for the shifted algorithm \eqref{k-shot expl n} \begin{equation} \label{k-shot-err expl n} \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n\\ u^{n+1}=B^ku^n+T_kM\sigma^n\\ p^{n+1}=(B^*)^kp^n+U_ku^n+X_kM\sigma^n\\ \end{cases} \end{equation} and for algorithm \eqref{k-shot expl n+1} \begin{equation} \label{k-shot-err expl n+1} \begin{cases} \sigma^{n+1}=\sigma^n-\tau M^*p^n\\ u^{n+1}=B^ku^n+T_kM\sigma^n-\tau T_kMM^*p^n\\ p^{n+1}=[(B^*)^k-\tau X_kMM^*]p^n+U_ku^n+X_kM\sigma^n,\\ \end{cases} \end{equation} or equivalently, by putting in evidence the block iteration matrices \begin{equation} \label{itermat n} \begin{bmatrix} p^{n+1}\\ u^{n+1}\\ \sigma^{n+1} \end{bmatrix}=\begin{bmatrix} (B^*)^k & U_k & X_kM \\ 0 & B^k & T_kM \\ -\tau M^* & 0 & I \end{bmatrix} \begin{bmatrix} p^{n}\\ u^{n}\\ \sigma^{n} \end{bmatrix} \end{equation} and \begin{equation} \label{itermat n+1} \begin{bmatrix} p^{n+1}\\ u^{n+1}\\ \sigma^{n+1} \end{bmatrix}=\begin{bmatrix} (B^*)^k-\tau X_kMM^* & U_k & X_kM \\ -\tau T_kMM^* & B^k & T_kM \\ -\tau M^* & 0 & I \end{bmatrix} \begin{bmatrix} p^{n}\\ u^{n}\\ \sigma^{n} \end{bmatrix}. \end{equation} Now recall that a fixed point iteration converges if and only if the spectral radius of its iteration matrix is strictly less than $1$. Therefore in the following propositions we establish eigenvalue equations for the iteration matrix of the two methods. \begin{proposition}[Eigenvalue equation for the shifted $k$-step one-shot method]\label{prop:eq-eigen shift-k-shot} Assume that $\lambda\in\mathbb{C}$ is an eigenvalue of the iteration matrix in \eqref{itermat n}. 
\begin{enumerate}[label=(\roman*)] \item If $\lambda\in\mathbb{C}$, $\lambda\notin\mathrm{Spec}(B^k)$, then $\exists \, y\in\mathbb{C}^{n_\sigma}, y\neq 0$ such that \begin{equation}\label{ori-eq-eigen n} (\lambda-1)\norm{y}^2+\tau\scalar{M^*[\lambda I-(B^*)^k]^{-1}[(\lambda-1)X_k+T_k^*H^*HT_k](\lambda I-B^k)^{-1}My,y}=0. \end{equation} \item $\lambda=1$ is not an eigenvalue of the iteration matrix. \end{enumerate} \end{proposition} \begin{proposition}[Eigenvalue equation for the $k$-step one-shot method] \label{prop:eq-eigen k-shot} Assume that $\lambda\in\mathbb{C}$ is an eigenvalue of the iteration matrix in \eqref{itermat n+1}. \begin{enumerate}[label=(\roman*)] \item If $\lambda\in\mathbb{C}$, $\lambda\notin\mathrm{Spec}(B^k)$ then $\exists \, y\in\mathbb{C}^{n_\sigma}, y\neq 0$ such that: \begin{equation}\label{ori-eq-eigen n+1} (\lambda-1)\norm{y}^2+\tau\lambda\scalar{M^*[\lambda I-(B^*)^k]^{-1}[(\lambda-1)X_k+T_k^*H^*HT_k](\lambda I-B^k)^{-1}My,y}=0. \end{equation} \item $\lambda=1$ is not an eigenvalue of the iteration matrix. \end{enumerate} \end{proposition} \begin{remark} Since $\rho(B)$ is strictly less than $1$, so are $\rho(B^*), \rho(B^k)$ and $\rho((B^*)^k)$. \end{remark} \noindent The proofs for Propositions~\ref{prop:eq-eigen shift-k-shot} and \ref{prop:eq-eigen k-shot} are respectively similar to the ones of Propositions~\ref{prop:eq-eigen shift-1-shot} and \ref{prop:eq-eigen 1-shot}, the slight difference is that in the calculation we use \eqref{uxt} to simplify some terms. In the following sections we will show that, for sufficiently small $\tau$, equations \eqref{ori-eq-eigen n} and \eqref{ori-eq-eigen n+1} admit no solution $|\lambda|\ge 1$, thus algorithms \eqref{alg:k-shot n} and \eqref{alg:k-shot n+1} converge. When $\lambda\neq 0$, it is convenient to rewrite \eqref{ori-eq-eigen n} and \eqref{ori-eq-eigen n+1} respectively as \begin{equation}\label{eq-eigen n} \lambda^2(\lambda-1)\norm{y}^2+\tau\scalar{M^*\left[I-(B^*)^k/\lambda\right]^{-1}[(\lambda-1)X_k+T_k^*H^*HT_k]\left(I-B^k/\lambda\right)^{-1}My,y}=0 \end{equation} and \begin{equation}\label{eq-eigen n+1} \lambda(\lambda-1)\norm{y}^2+\tau\scalar{M^*\left[I-(B^*)^k/\lambda\right]^{-1}[(\lambda-1)X_k+T_k^*H^*HT_k]\left(I-B^k/\lambda\right)^{-1}My,y}=0 \end{equation} The scalar case where $n_u, n_\sigma, n_f =1$ is analyzed in Appendix~\ref{app:1D}. \begin{remark}\label{rmk:B=0,k>1} Note that when $B=0$ and $k \ge 2$, the shifted $k$-step one-shot and $k$-step one-shot are respectively equivalent to the shifted and usual gradient descent methods, therefore we retrieve the same bounds \eqref{best-tau-shifted-gd}--\eqref{best-tau-usualgd} for the descent step $\tau$ as for those methods. \end{remark} For the analysis we use auxiliary results proved in Appendix~\ref{app:lems}, and the following bounds for $s(B^k), T_k, X_k$. \begin{lemma}\label{lem:boundsSTXk} If $\norm{B}<1$, \[ s(B^k)\le\frac{1}{1-\norm{B}^k}, \quad \norm{T_k} \le \frac{1-\norm{B}^k}{1-\norm{B}}, \quad \norm{X_k} \le \frac{\norm{H}^2(1-k\norm{B}^{k-1}+(k-1)\norm{B}^k)}{(1-\norm{B})^2}. \] \end{lemma} \begin{proof} The bound for $s(B^k)$ is proved using Lemma~\ref{inv(I-T/z)} and $\norm{B^k} \le \norm{B}^k$. 
Next, from \eqref{eq:T} we have $$\norm{T_k}\le 1+\norm{B}+...+\norm{B}^{k-1}=\frac{1-\norm{B}^k}{1-\norm{B}}.$$ From \eqref{eq:X}, if $k\ge 2$ we have $$\begin{array}{cl} \norm{X_k}&\le\norm{H}^2 \bigl(\norm{B}^{k-2}+\norm{B}^{k-3}(1+\norm{B})+...+(1+\norm{B}+...+\norm{B}^{k-2})\bigr)\\ &=\displaystyle\norm{H}^2(1+2\norm{B}+...+(k-1)\norm{B}^{k-2})=\frac{\norm{H}^2(1-k\norm{B}^{k-1}+(k-1)\norm{B}^k)}{(1-\norm{B})^2}. \end{array} $$ \end{proof} \subsection{Real eigenvalues} We first find conditions on the descent step $\tau$ such that the real eigenvalues stay inside the unit disk. Recall that we have already proved that $\lambda=1$ is not an eigenvalue for any $k$. \begin{proposition}[shifted $k$-step one-shot method]\label{tau n k>1 real} When $k\ge 2$, $\exists\tau>0$ sufficiently small such that equation \eqref{eq-eigen n} admits no solution $\lambda\in\R, \lambda\neq 1, |\lambda|\ge 1$. More precisely, take \begin{itemize} \item $\tau<\frac{2}{\norm{M}^2\left(\norm{H}^2\norm{T_k}^2+2\norm{X_k}\right)s(B^k)^2}$ if the denominator of the right-hand side is not $0$; \item any $\tau>0$ otherwise. \end{itemize} Moreover, if $\norm{B}<1$, we can take $$\tau<\frac{(1-\norm{B})^2}{\norm{H}^2\norm{M}^2}\cdot\frac{2(1-\norm{B}^k)^2}{(1-\norm{B}^k)^2+2(1-k\norm{B}^{k-1}+(k-1)\norm{B}^k)}.$$ \end{proposition} \begin{proof} When $\lambda\in\R$ equation \eqref{eq-eigen n} is rewritten as $$ \begin{array}{ll} \lambda^2(\lambda-1)\norm{y}^2+\tau\norm{HT_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My}^2&\\ +\tau(\lambda-1)\scalar{M^*\left[I-\frac{(B^*)^k}{\lambda}\right]^{-1}X_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My,y}&=0. \end{array} $$ We show that if $\lambda>1$ (or respectively $\lambda\le-1$) we can choose $\tau$ so that the left-hand side of the above equation is strictly positive (or respectively negative). Indeed, if $\lambda>1$, we choose $\tau$ such that \[ \lambda^2\norm{y}^2-\tau\left|\scalar{M^*\left[I-\frac{(B^*)^k}{\lambda}\right]^{-1}X_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My,y}\right|>0 \] and this can be done by taking $\tau$ such that $$[\norm{X_k}\norm{M}^2s(B^k)^2]\tau<1.$$ If $\lambda\le-1$, we choose $\tau$ such that $$\begin{array}{ll} \lambda^2(\lambda-1)\norm{y}^2+\tau\norm{HT_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My}^2&\\ +\tau(1-\lambda)\left|\scalar{M^*\left[I-\frac{(B^*)^k}{\lambda}\right]^{-1}X_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My,y}\right|&<0 \end{array}$$ and this can be done by taking $\tau$ such that $$\left[\frac{\norm{H}^2\norm{T_k}^2\norm{M}^2s(B^k)^2}{2}+\norm{X_k}\norm{M}^2s(B^k)^2\right]\tau<1,$$ so we obtain the first conclusion. Finally, the second conclusion in the case $\norm{B}<1$ can be obtained by Lemma \ref{lem:boundsSTXk}. \end{proof} \begin{proposition}[$k$-step one-shot method]\label{tau n+1 k>1 real} When $k\ge 2$, $\exists\tau>0$ sufficiently small such that equation \eqref{eq-eigen n+1} admits no solution $\lambda\in\R, \lambda\neq 1, |\lambda|\ge 1$. More precisely, take \begin{itemize} \item $\tau<\cfrac{1}{\norm{X_k}\norm{M}^2s(B^k)^2}$ if the denominator of the right-hand side is not $0$; \item any $\tau>0$ otherwise. 
\end{itemize} Moreover, if $\norm{B}<1$, we can take $$\tau<\frac{(1-\norm{B})^2}{\norm{H}^2\norm{M}^2}\cdot\frac{(1-\norm{B}^k)^2}{1-k\norm{B}^{k-1}+(k-1)\norm{B}^k}.$$ \end{proposition} \begin{proof} When $\lambda\in\R$ equation \eqref{eq-eigen n+1} is rewritten as $$ \begin{array}{ll} \lambda(\lambda-1)\norm{y}^2+\tau\norm{HT_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My}^2&\\ +\tau(\lambda-1)\scalar{M^*\left[I-\frac{(B^*)^k}{\lambda}\right]^{-1}X_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My,y}&=0. \end{array} $$ We show that we can choose $\tau$ so that the left-hand side of the above equation is strictly positive. Indeed, if $\lambda>1$, we choose $\tau$ such that \[ \lambda\norm{y}^2-\tau\left|\scalar{M^*\left[I-\frac{(B^*)^k}{\lambda}\right]^{-1}X_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My,y}\right|>0 \] and this can be done by taking $\tau$ such that $$\norm{X_k}\norm{M}^2s(B^k)^2\tau<1.$$ If $\lambda\le-1$, we choose $\tau$ such that \[ \lambda\norm{y}^2+\tau\left|\scalar{M^*\left[I-\frac{(B^*)^k}{\lambda}\right]^{-1}X_k\left(I-\frac{B^k}{\lambda}\right)^{-1}My,y}\right|<0 \] and this is also done by taking $\tau$ such that $$\norm{X_k}\norm{M}^2s(B^k)^2\tau<1.$$ so we obtain the first conclusion. Finally, the conclusion in the case $\norm{B}<1$ can be obtained by Lemma \ref{lem:boundsSTXk}. \end{proof} \subsection{Complex eigenvalues} \label{subsec:complex-k-step} We now look for conditions on the descent step $\tau$ such that also the complex eigenvalues stay inside the unit disk. We first deal with the shifted $k$-step one-shot method. \begin{proposition}[shifted $k$-step one-shot method]\label{tau n k>1 cpl} When $k\ge 2$, $\exists\tau>0$ sufficiently small such that equation \eqref{eq-eigen n} admits no solution $\lambda\in\mathbb{C}\backslash\R$, $|\lambda|\ge 1$. In particular, if $\norm{B}<1$, given any $\delta_0>0$ and $0<\theta_0<\frac{\pi}{6}$, take \[ \tau < \frac{\min \{ \chi_1(k,\norm{B}),\; \chi_2(k,\norm{B}),\; \chi_3(k,\norm{B}),\; \chi_4(k,\norm{B}) \} }{\norm{H}^2\norm{M}^2} \] where \[ \chi_1(k,b)=\frac{(1-b)^2(1-b^k)^2}{4b^{2k} + \sqrt{2}(1-k b^{k-1}+(k-1)b^k)(1+b^k)^2} \] \[ \chi_2(k,b)=\frac{(1-b)^2(1-b^k)^2}{\bigl[\frac{1}{2\sin(\theta_0/2)}(1-b^k)^2+\sqrt{2}(1-k b^{k-1}+(k-1)b^k)\bigr](1+b^k)^2} \] \[ \chi_3(k,b)=\frac{(1-b)^2(1-b^k)^2}{ \frac{2c\sin(\theta_0/2)}{\delta_0}b^{2k} + (1-kb^{k-1}+(k-1)b^k) \Bigl[ \frac{\sqrt{c}}{\delta_0}(1+b^{2k}) + 2\max\Bigl(\frac{\sqrt{c}}{\delta_0},\frac{\sqrt{c}}{\cos3\theta_0}\Bigr)b^k \Bigr] } \] \[ \chi_4(k,b)=\frac{\left[\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos2\theta_0\right](1-b)^2(1-b^k)^2}{(1-b^k)^2+2(1-kb^{k-1}+(k-1)b^k)(1+b^k)^2} \] and $c=\frac{1+2\delta_0\sin\frac{5\theta_0}{2}+\delta_0^2}{\cos^2\frac{5\theta_0}{2}}$. \end{proposition} \begin{proof} \textbf{Step 1. Rewrite equation \eqref{eq-eigen n} so that we can study its real and imaginary parts.} Let $\lambda=R(\cos\theta+\ic\sin\theta)$ in polar form where $R=|\lambda| \ge 1$ and $\theta\in(-\pi,\pi)$. Write ${1}/{\lambda}=r(\cos\phi+\ic\sin\phi)$ in polar form where $r={1}/{|\lambda|}={1}/{R}\le 1$ and $\phi=-\theta\in(-\pi,\pi)$. 
By Lemma \ref{decompq} applied to $T=B^k$, we have $$\left(I-\cfrac{B^k}{\lambda}\right)^{-1}=P(\lambda)+\ic Q(\lambda),\quad \left(I-\cfrac{(B^*)^k}{\lambda}\right)^{-1}=P(\lambda)^*+\ic Q(\lambda)^*$$ where $P(\lambda)$ and $Q(\lambda)$ are $\mathbb{C}^{n_u\times n_u}$-valued functions, and, by omitting the dependence on $\lambda$, \begin{equation}\label{eq:pBk} \norm{P}\le p \coloneqq \left\{\begin{array}{cl} (1+\norm{B^k})s(B^k)^2 &\text{ for general }B,\\ \cfrac{1}{1-\norm{B}^k} &\text{ when }\norm{B}<1; \end{array}\right. \end{equation} \begin{equation}\label{eq:q1Bk} \norm{Q}\le q_1 \coloneqq \left\{\begin{array}{cl} \norm{B^k}s(B^k)^2 &\text{ for general }B,\\ \cfrac{\norm{B}^k}{1-\norm{B}^k} &\text{ when }\norm{B}<1; \end{array}\right. \end{equation} \begin{equation}\label{eq:q2Bk} \norm{Q}\le q_2|\sin\theta|,\quad q_2 \coloneqq \left\{\begin{array}{cl} \norm{B^k}s(B^k)^2 &\text{ for general }B,\\ \cfrac{\norm{B}^k}{(1-\norm{B}^k)^2} &\text{ when }\norm{B}<1. \end{array}\right. \end{equation} \noindent Now we rewrite \eqref{eq-eigen n} as \begin{equation} \label{ready-split n k>1} \lambda^2(\lambda-1)\norm{y}^2+\tau G(P^*+\ic Q^*,P+\ic Q)+\tau(\lambda-1)L(P^*+\ic Q^*,P+\ic Q)=0. \end{equation} where $$G(X,Y)=\scalar{M^*XT_k^*H^*HT_kYMy,y},\quad L(X,Y)=\scalar{M^*XX_kYMy,y}$$ for $X,Y\in\mathbb{C}^{n_u\times n_u}$. $G$ satisfies the following properties: \begin{itemize} \item $\forall X,Y_1,Y_2\in\mathbb{C}^{n_u\times n_u}, \forall z_1,z_2\in\mathbb{C}$: \quad $G(X, z_1Y_1+z_2Y_2)=z_1G(X,Y_1)+z_2G(X,Y_2).$ \item $\forall X_1,X_2,Y\in\mathbb{C}^{n_u\times n_u}, \forall z_1,z_2\in\mathbb{C}$: \quad $G(z_1X_1+z_2X_2,Y)=z_1G(X_1,Y)+z_2G(X_2,Y).$ \item $\forall X\in\mathbb{C}^{n_u\times n_u}$: \quad $G(X^*,X)\in \R$. \item $\forall X,Y\in\mathbb{C}^{n_u\times n_u}$: \quad $G(X,Y)+G(Y^*,X^*)\in\R$, indeed $$\begin{array}{ll} G(X,Y)&=\scalar{M^*XT_k^*H^*HT_kYMy,y}=\scalar{y,M^*Y^*T_k^*H^*HT_kX^*My}\\ &=\scalar{M^*Y^*T_k^*H^*HT_kX^*My,y}^*=G(Y^*,X^*)^*. \end{array}$$ \end{itemize} Similarly, $L$ has the same properties as $G$ (note that $X_k^*=X_k$ by Lemma \ref{lem:propTUX}). With these properties of $G$ and $L$, we expand \eqref{ready-split n k>1} and take its real and imaginary parts, so we respectively obtain: \begin{equation}\label{re-eq k>1} \Re(\lambda^3-\lambda^2)\norm{y}^2+\tau G_1+\tau [\Re(\lambda-1)L_1-\Im(\lambda-1)L_2]=0 \end{equation} and \begin{equation}\label{im-eq k>1} \Im(\lambda^3-\lambda^2)\norm{y}^2+\tau G_2+\tau [\Im(\lambda-1)L_1+\Re(\lambda-1)L_2]=0 \end{equation} where $$G_1=G(P^*,P)-G(Q^*,Q),\quad G_2=G(P^*,Q)+G(Q^*,P),$$ $$L_1=L(P^*,P)-L(Q^*,Q),\quad L_2=L(P^*,Q)+L(Q^*,P).$$ \noindent\textbf{Step 2. Find a suitable combination of equations \eqref{re-eq k>1} and \eqref{im-eq k>1}, choose $\tau$ so that we obtain a new equation with a left-hand side which is strictly positive/negative.} Let $\gamma=\gamma(\lambda)\in\R$, defined by cases as in Lemma~\ref{gamma123 n}. Multiplying equation \eqref{im-eq k>1} with $\gamma$ then summing it with equation \eqref{re-eq k>1}, we obtain: \begin{equation}\label{im-gamma-re k>1} \begin{array}{ll} [\Re(\lambda^3-\lambda^2)+\gamma\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau G(P^*+\gamma Q^*,P+\gamma Q)-(1+\gamma^2)\tau G(Q^*,Q)&\\ +\tau\left([\Re(\lambda-1)+\gamma\Im(\lambda-1)]L_1+[\gamma\Re(\lambda-1)-\Im(\lambda-1)]L_2\right) &=0. \end{array} \end{equation} Now we prepare some useful estimates. 
\begin{itemize} \item $\forall X\in\mathbb{C}^{n_u\times n_u}$: \quad $0\le G(X^*,X)=\norm{HT_kXMy}^2\le (\norm{H}\norm{T_k}\norm{M}\norm{X})^2 \norm{y}^2.$ Since $\norm{Q}\le q_1$ and $\norm{Q}\le q_2|\sin\theta|$, we have $$G(Q^*,Q)\le (\norm{H}\norm{T_k}\norm{M}q_1)^2 \norm{y}^2 \text{ and } G(Q^*,Q)\le (\norm{H}\norm{T_k}\norm{M}q_2\sin|\theta|)^2 \norm{y}^2.$$ \item By Cauchy-Schwarz inequality we have $$|\Re(\lambda-1)+\gamma\Im(\lambda-1)|\le \sqrt{1+\gamma^2}|\lambda-1|; \quad |\gamma\Re(\lambda-1)-\Im(\lambda-1)|\le \sqrt{1+\gamma^2}|\lambda-1|.$$ \item $\forall X,Y\in\mathbb{C}^{n_u\times n_u}$: \quad $|L(X,Y)|=|\scalar{M^*XX_kYMy,y}|\le\norm{X_k}\norm{M}^2\norm{X}\norm{Y}\norm{y}^2.$ Hence $$\begin{array}{ll} |L_1|&=|L(P^*,P)-L(Q^*,Q)| \le |L(P^*,P)|+|L(Q^*,Q)|\\ &\le \norm{X_k}\norm{M}^2(\norm{P}^2+\norm{Q}^2)\norm{y}^2 \le \norm{X_k}\norm{M}^2(p^2+q_1^2)\norm{y}^2, \end{array}$$ $$\begin{array}{ll} |L_2|&=|L(P^*,Q)+L(Q^*,P)| \le |L(P^*,Q)|+|L(Q^*,P)|\\ &\le 2\norm{X_k}\norm{M}^2\norm{P}\norm{Q}\norm{y}^2 \le 2\norm{X_k}\norm{M}^2pq_1\norm{y}^2, \end{array}$$ and then $$\begin{array}{ll} &|[\Re(\lambda-1)+\gamma\Im(\lambda-1)]L_1+[\gamma\Re(\lambda-1)-\Im(\lambda-1)]L_2|\\ \le&|\Re(\lambda-1)+\gamma\Im(\lambda-1)||L_1|+|\gamma\Re(\lambda-1)-\Im(\lambda-1)||L_2|\\ \le&\sqrt{1+\gamma^2}|\lambda-1|\norm{X_k}\norm{M}^2(p^2+q_1^2+2pq_1)\norm{y}^2\\ =&\sqrt{1+\gamma^2}|\lambda-1|\norm{X_k}\norm{M}^2(p+q_1)^2\norm{y}^2. \end{array} $$ \end{itemize} Now we consider four cases of $\lambda$ as in Lemma~\ref{gamma123 n}: \begin{itemize} \item\textit{Case 1.} $\Re(\lambda^3-\lambda^2)\ge 0$; \item\textit{Case 2.} $\Re(\lambda^3-\lambda^2)<0$ and $\theta\in [\theta_0,\pi-\theta_0]\cup[-\pi+\theta_0,-\theta_0]$ for fixed $0<\theta_0<\frac{\pi}{6}$; \item\textit{Case 3.} $\Re(\lambda^3-\lambda^2)<0$ and $\theta \in (-\theta_0,\theta_0)$ for fixed $0<\theta_0<\frac{\pi}{6}$; \item\textit{Case 4.} $\Re(\lambda^3-\lambda^2)<0$ and $\theta \in (\pi-\theta_0,\pi)\cup(-\pi,-\pi+\theta_0)$ for fixed $0<\theta_0<\frac{\pi}{6}$. \end{itemize} The four cases will be treated in the following four lemmas (Lemmas \ref{k>1 n case1}--\ref{k>1 n case4}), which together give the statement of this proposition. \end{proof} \begin{lemma}[Case 1] \label{k>1 n case1} For $k\ge 2$, equation \eqref{eq-eigen n} admits no solutions $\lambda$ in Case 1 if we take \begin{itemize} \item $\tau<\cfrac{s(B^k)^{-4}}{4\norm{H}^2\norm{M}^2\norm{T_k}^2\norm{B^k}^2+\sqrt{2}\norm{M}^2\norm{X_k}(1+2\norm{B^k})^2}$ if the denominator of the right-hand side is not $0$; \item any $\tau>0$ otherwise. \end{itemize} Moreover, if $\norm{B}<1$, we can take $$\tau<\frac{(1-\norm{B})^2}{\norm{H}^2\norm{M}^2}\cdot\frac{(1-\norm{B}^k)^2}{4\norm{B}^{2k}+\sqrt{2}(1-k\norm{B}^{k-1}+(k-1)\norm{B}^k)(1+\norm{B}^k)^2}.$$ \end{lemma} \begin{proof} Writing \eqref{im-gamma-re k>1} for $\gamma=\gamma_1$ as in Lemma \ref{gamma123 n} (i) (in particular $\gamma_1^2=1$), we have \begin{equation}\label{case1 k>1} \begin{array}{ll} [\Re(\lambda^3-\lambda^2)+\gamma_1\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau G(P^*+\gamma_1 Q^*,P+\gamma_1 Q)-2\tau G(Q^*,Q)&\\ +\tau\left([\Re(\lambda-1)+\gamma_1\Im(\lambda-1)]L_1+[\gamma_1\Re(\lambda-1)-\Im(\lambda-1)]L_2\right) &=0. 
\end{array} \end{equation} Since $G(P^*+\gamma_1 Q^*,P+\gamma_1 Q)\ge 0$, and by estimating \[ G(Q^*,Q)\le(\norm{H}\norm{T_k}\norm{M}q_2\sin|\theta|)^2 \norm{y}^2, \] and \begin{multline*} [\Re(\lambda-1)+\gamma_1\Im(\lambda-1)]L_1+[\gamma_1\Re(\lambda-1)-\Im(\lambda-1)]L_2\\ \ge-\sqrt{2}|\lambda-1|\norm{X_k}\norm{M}^2(p+q_1)^2\norm{y}^2, \end{multline*} by Lemma \ref{gamma123 n} (i), the left-hand side of \eqref{case1 k>1} will be strictly positive if $\tau$ satisfies: $$\left(2\left(\norm{H}\norm{T_k}\norm{M}q_2\right)^2\frac{|\sin\theta|^2}{|\lambda-1|}+\sqrt{2}\norm{X_k}\norm{M}^2(p+q_1)^2\right)\tau<1.$$ Since $\frac{|\sin\theta|^2}{|\lambda-1|}\le\frac{|\sin\theta|^2}{2|\sin(\theta/2)|}=2\left| \sin\frac{\theta}{2}\right|\cos^2\frac{\theta}{2}\le 2$, we have the first part of the conclusion using definitions \eqref{eq:pBk}, \eqref{eq:q1Bk}, \eqref{eq:q2Bk} of $p, q_1, q_2$. Finally, the conclusion in the case $\norm{B}<1$ can be obtained by Lemma \ref{lem:boundsSTXk}. \end{proof} \begin{lemma}[Case 2] \label{k>1 n case2} For $k\ge 2$, equation \eqref{eq-eigen n} admits no solutions $\lambda$ in Case 2 if we take \begin{itemize} \item $\tau<\cfrac{s(B^k)^{-4}}{\left(\frac{1}{2\sin(\theta_0/2)}\norm{H}^2\norm{M}^2\norm{T_k}^2+\sqrt{2}\norm{M}^2\norm{X_k}\right)(1+2\norm{B^k})^2}$ if the denominator of the right-hand side is not $0$; \item any $\tau>0$ otherwise. \end{itemize} Moreover, if $\norm{B}<1$, we can take $$\tau<\frac{(1-\norm{B})^2}{\norm{H}^2\norm{M}^2}\cdot\frac{(1-\norm{B}^k)^2}{\left[ \frac{1}{2\sin(\theta_0/2)}(1-\norm{B}^k)^2+\sqrt{2}(1-k\norm{B}^{k-1}+(k-1)\norm{B}^k)\right](1+\norm{B}^k)^2}.$$ \end{lemma} \begin{proof} Writing \eqref{im-gamma-re k>1} for $\gamma=\gamma_2$ as in Lemma \ref{gamma123 n} (ii) (in particular $\gamma_2^2=1$), we have \begin{equation}\label{case2 k>1} \begin{array}{ll} [\Re(\lambda^3-\lambda^2)+\gamma_2\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau G(P^*+\gamma_2 Q^*,P+\gamma_2 Q)-2\tau G(Q^*,Q)&\\ +\tau\left([\Re(\lambda-1)+\gamma_2\Im(\lambda-1)]L_1+[\gamma_2\Re(\lambda-1)-\Im(\lambda-1)]L_2\right) &=0. \end{array} \end{equation} Since $G(Q^*,Q)\ge 0$, and by estimating $\norm{P+\gamma_2 Q}\le \norm{P}+|\gamma_2|\norm{Q}=\norm{P}+\norm{Q}\le p+q_1$, so that \[ G(P^*+\gamma_2 Q^*,P+\gamma_2 Q)\le[\norm{H}\norm{T_k}\norm{M}(p+q_1)]^2 \norm{y}^2, \] and \begin{multline*} [\Re(\lambda-1)+\gamma_2\Im(\lambda-1)]L_1+[\gamma_2\Re(\lambda-1)-\Im(\lambda-1)]L_2\\ \le\sqrt{2}|\lambda-1|\norm{X_k}\norm{M}^2(p+q_1)^2\norm{y}^2, \end{multline*} by Lemma \ref{gamma123 n} (ii), the left-hand side of \eqref{case2 k>1} will be strictly negative if $\tau$ satisfies: $$\left(\left[\norm{H}\norm{T_k}\norm{M}(p+q_1)\right]^2\frac{1}{|\lambda-1|}+\sqrt{2}\norm{X_k}\norm{M}^2(p+q_1)^2\right)\tau<1.$$ Since $\frac{1}{|\lambda-1|}\le\frac{1}{2\sin(\theta_0/2)}$, we have the first part of the conclusion using definitions \eqref{eq:pBk}, \eqref{eq:q1Bk} of $p, q_1$. Finally, the conclusion in the case $\norm{B}<1$ can be obtained by Lemma \ref{lem:boundsSTXk}. \end{proof} \begin{lemma}[Case 3] \label{k>1 n case3} Let $\delta_0>0$ be fixed and $c \coloneqq \frac{1+2\delta_0\sin\frac{5\theta_0}{2}+\delta_0^2}{\cos^2\frac{5\theta_0}{2}}$. 
For $k\ge 2$, equation \eqref{eq-eigen n} admits no solutions $\lambda$ in Case 3 if we take \begin{itemize} \item $ \tau<s(B^k)^{-4}\bigg/\left[\frac{2c\sin\frac{\theta_0}{2} }{\delta_0}\norm{H}^2\norm{M}^2\norm{T_k}^2\norm{B^k}^2+\frac{\sqrt{c}}{\delta_0}\norm{M}^2\norm{X_k}(1+2\norm{B^k}+2\norm{B^k}^2)\right.$ $\left.+2\max\left(\frac{\sqrt{c}}{\delta_0},\frac{\sqrt{c}}{\cos3\theta_0}\right)\norm{M}^2\norm{X_k}(\norm{B^k}+\norm{B^k}^2) \right] $ if the denominator of the right-hand side is not $0$; \item any $\tau>0$ otherwise. \end{itemize} Moreover, if $\norm{B}<1$, we can take $$\begin{array}{ll} \tau<&\frac{(1-\norm{B})^2}{\norm{H}^2\norm{M}^2}(1-\norm{B}^k)^2\left[\frac{2c\sin\frac{\theta_0}{2}}{\delta_0}\norm{B}^{2k}+\frac{\sqrt{c}}{\delta_0}(1-k\norm{B}^{k-1}+(k-1)\norm{B}^k)(1+\norm{B}^{2k})\right.\\ &\left. +2\max\left(\frac{\sqrt{c}}{\delta_0},\frac{\sqrt{c}}{\cos3\theta_0}\right)(1-k\norm{B}^{k-1}+(k-1)\norm{B}^k)\norm{B}^k\right]^{-1}.\\ \end{array} $$ \end{lemma} \begin{proof} Writing \eqref{im-gamma-re k>1} for $\gamma=\gamma_3$ as in Lemma \ref{gamma123 n} (iii), we have \begin{equation}\label{case3 k>1} \begin{array}{ll} [\Re(\lambda^3-\lambda^2)+\gamma_3\Im(\lambda^3-\lambda^2)]\norm{y}^2+\tau G(P^*+\gamma_3 Q^*,P+\gamma_3 Q)-(1+\gamma_3^2)\tau G(Q^*,Q)&\\ +\tau\left([\Re(\lambda-1)+\gamma_3\Im(\lambda-1)]L_1+[\gamma_3\Re(\lambda-1)-\Im(\lambda-1)]L_2\right) &=0. \end{array} \end{equation} Since $G(P^*+\gamma_3 Q^*,P+\gamma_3 Q)\ge 0$, the left-hand side of \eqref{case3 k>1} will be strictly positive if $\tau$ satisfies: $$\begin{array}{ll} \tau<&\norm{y}^2\left[(1+\gamma_3^2)\cfrac{G(Q^*,Q)}{\Re(\lambda^3-\lambda^2)+\gamma_3\Im(\lambda^3-\lambda^2)}\right.\\ &\left.+|L_1|\cfrac{|\Re(\lambda-1)+\gamma_3\Im(\lambda-1)|}{\Re(\lambda^3-\lambda^2)+\gamma_3\Im(\lambda^3-\lambda^2)} +|L_2|\cfrac{|\gamma_3\Re(\lambda-1)-\Im(\lambda-1)|}{\Re(\lambda^3-\lambda^2)+\gamma_3\Im(\lambda^3-\lambda^2)}\right]^{-1}. \end{array} $$ By estimating \begin{itemize} \item $G(Q^*,Q)\le(\norm{H}\norm{T_k}\norm{M}q_2|\sin\theta|)^2 \norm{y}^2$; \item $|L_1|\le \norm{X_k}\norm{M}^2(p^2+q_1^2)\norm{y}^2$; \item $|L_2|\le 2\norm{X_k}\norm{M}^2pq_1\norm{y}^2$ \end{itemize} and using Lemma \ref{gamma123 n} (iii), it suffices to choose $$\begin{array}{ll} \left[(1+\gamma_3^2)\left(\norm{H}\norm{T_k}\norm{M}q_2\right)^2\frac{2|\sin\frac{\theta}{2}|\cos^2\frac{\theta}{2}}{\delta_0}+\norm{X_k}\norm{M}^2(p^2+q_1^2)\frac{\sqrt{1+\gamma_3^2}}{\delta_0}\right.&\\ \left. +2\norm{X_k}\norm{M}^2pq_1\max\left(\frac{\sqrt{1+\gamma_3^2}}{\delta_0},\frac{\sqrt{1+\gamma_3^2}}{\cos3\theta_0}\right)\right]\tau&<1. \end{array} $$ Noting that $c=1+\gamma_3^2$, the final result is obtained using definitions \eqref{eq:pBk}, \eqref{eq:q1Bk}, \eqref{eq:q2Bk} of $p,q_1,q_2$. Finally, the conclusion in the case $\norm{B}<1$ can be obtained by Lemma \ref{lem:boundsSTXk}. \end{proof} \begin{lemma}[Case 4] \label{k>1 n case4} For $k\ge 2$, equation \eqref{eq-eigen n} admits no solutions $\lambda$ in Case 4 if we take \begin{itemize} \item $\tau<\cfrac{\left[\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos2\theta_0\right]s(B^k)^{-4}}{\norm{H}^2\norm{M}^2\norm{T_k}^2(1+\norm{B^k})^2+2\norm{M}^2\norm{X_k}(1+2\norm{B^k})^2}$ if the denominator of the right-hand side is not $0$; \item any $\tau>0$ otherwise. 
\end{itemize} Moreover, if $\norm{B}<1$, we can take $$\tau<\frac{(1-\norm{B})^2}{\norm{H}^2\norm{M}^2}\cdot\frac{\left[\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos2\theta_0\right](1-\norm{B}^k)^2}{ (1-\norm{B}^k)^2+2(1-k\norm{B}^{k-1}+(k-1)\norm{B}^k)(1+\norm{B}^k)^2}.$$ \end{lemma} \begin{proof} Here it is enough to consider \eqref{re-eq k>1}. By the properties of $G$ $$G(Q^*,Q)\ge 0,\quad G(P^*,P)\le(\norm{H}\norm{T_k}\norm{M}p)^2\norm{y}^2$$ and Lemma \ref{gamma123 n} (iv), we see that the left-hand side of \eqref{re-eq k>1} will be strictly negative if $\tau$ satisfies: $$\begin{array}{ll} \left[\left(\norm{H}\norm{T_k}\norm{M}p\right)^2\frac{1}{\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos 2\theta_0} + \norm{X_k}\norm{M}^2(p+q_1)^2\frac{2}{\sin\left(\frac{\pi}{2}-3\theta_0\right)+\cos2\theta_0}\right]\tau&<1. \end{array} $$ Definitions \eqref{eq:pBk}, \eqref{eq:q1Bk} of $p, q_1$ lead to the final result. Finally, the conclusion in the case $\norm{B}<1$ can be obtained by Lemma \ref{lem:boundsSTXk}. \end{proof} Similarly, with the help of Lemma \ref{gamma123 n+1}, we prove for the $k$-step one-shot method the analogue of Proposition~\ref{tau n k>1 cpl}. In particular, note that here just three cases of $\lambda$ need to be considered, because the analogue of the fourth one is excluded by Lemma \ref{gamma123 n+1} (iv). \begin{proposition}[$k$-step one-shot method]\label{tau n+1 k>1 cpl} $\exists\tau>0$ sufficiently small such that equation \eqref{eq-eigen n+1} admits no solution $\lambda\in\mathbb{C}\backslash\R$ with $|\lambda|\ge 1$. In particular, if $\norm{B}<1$, given any $\delta_0>0$ and $0<\theta_0<\frac{\pi}{4}$, take \[ \tau < \frac{\min\{\psi_1(k,\norm{B}), \; \psi_2(k,\norm{B}), \; \psi_3(k,\norm{B})\}}{\norm{H}^2\norm{M}^2} \] where \[ \psi_1(k,b)=\frac{(1-b)^2(1-b^k)^2}{4b^{2k}+\sqrt{2}(1-kb^{k-1}+(k-1)b^k)(1+b^k)^2} \] \[ \psi_2(k,b)=\frac{(1-b)^2(1-b^k)^2}{\Bigl[ \frac{1}{2\sin(\theta_0/2)}(1-b^k)^2+\sqrt{2}(1-kb^{k-1}+(k-1)b^k)\Bigr](1+b^k)^2} \] \[ \psi_3(k,b)= \frac{(1-b)^2(1-b^k)^2}{ \frac{2c\sin(\theta_0/2)}{\delta_0}b^{2k} + (1-kb^{k-1}+(k-1)b^k) \Bigl[ \frac{\sqrt{c}}{\delta_0}(1+b^{2k}) +2\max\Bigl(\frac{\sqrt{c}}{\delta_0},\frac{\sqrt{c}}{\cos2\theta_0}\Bigr)b^k \Bigr] } \] and $c=\frac{1+2\delta_0\sin\frac{3\theta_0}{2}+\delta_0^2}{\cos^2\frac{3\theta_0}{2}}$. \end{proposition} \subsection{Final result ($k\ge 2$)} Considering Remark~\ref{rmk:B=0,k>1}, and taking the minimum between the bound in Proposition~\ref{tau n k>1 real} for real eigenvalues and the bound in Proposition~\ref{tau n k>1 cpl} for complex eigenvalues, we finally obtain a sufficient condition on the descent step $\tau$ to ensure convergence of the shifted multi-step one-shot method. \begin{theorem} [Convergence of shifted $k$-step one-shot, $k\ge 2$] \label{th:tau n k>1 all} Under assumption \eqref{hypo}, the shifted $k$-step one-shot method, $k\ge 2$, converges for sufficiently small $\tau$. In particular, for $\norm{B}<1$, it is enough to take $$\tau<\frac{\chi(k,\norm{B})}{\norm{H}^2\norm{M}^2},$$ where $\chi(k,\norm{B})$ is an explicit function of $k$ and $\norm{B}$. \end{theorem} Similarly, by combining Remark \ref{rmk:B=0,k>1}, Propositions \ref{tau n+1 k>1 real} and \ref{tau n+1 k>1 cpl}, we obtain a sufficient condition on the descent step $\tau$ to ensure convergence of the multi-step one-shot method. \begin{theorem}[Convergence of $k$-step one-shot, $k\ge 2$] \label{th:tau n+1 k>1 all} Under assumption \eqref{hypo}, the $k$-step one-shot method, $k\ge 2$, converges for sufficiently small $\tau$. 
In particular, for $\norm{B}<1$, it is enough to take $$\tau<\frac{\psi(k,\norm{B})}{\norm{H}^2\norm{M}^2},$$ where $\psi(k,\norm{B})$ is an explicit function of $k$ and $\norm{B}$. \end{theorem} \section{Inverse problem with complex forward problem and real parameter} \label{sec:complex_extension} In this section we show that a linear inverse problem with associated complex forward problem and real parameter can be transformed into a linear inverse problem which matches the real model introduced at the beginning of Section~\ref{sec:intro-k-shot}, so that the previous theory applies. More precisely, here we study the state equation $$u=Bu+M\sigma+F$$ where $u\in\mathbb{C}^{n_u}$, $\sigma\in\R^{n_\sigma}$, $B\in\mathbb{C}^{n_u\times n_u}$, $M\in\mathbb{C}^{n_u\times n_\sigma}$ and $F\in\mathbb{C}^{n_u}$. We measure $Hu(\sigma)=f$ where $H\in\mathbb{C}^{n_f\times n_u}$ and we want to recover $\sigma$ from $f$. Using the method of least squares, we consider the cost functional $$J(\sigma) \coloneqq \frac{1}{2}\norm{Hu(\sigma)-f}^2,$$ then by the Lagrangian technique with $$\mathcal{L}(u,v,\sigma)=\frac{1}{2}\norm{Hu-f}^2+\Re\scalar{Bu+M\sigma+F-u,v},$$ we can define the adjoint state $p=p(\sigma)$ such that $$p=B^*p+H^*(Hu(\sigma)-f),$$ which allows us to compute $$\nabla J(\sigma)=\Re(M^*p).$$ By separating the real and imaginary parts of all vectors and matrices $u=u_1+\ic u_2$, $p=p_1+\ic p_2$, $B=B_1+\ic B_2$, $M=M_1+\ic M_2$, $F=F_1+\ic F_2$, $H=H_1+\ic H_2$, $f=f_1+\ic f_2$, we can transform this inverse problem with complex forward problem into the inverse problem with real forward problem introduced at the beginning of Section \ref{sec:intro-k-shot}. Indeed, note that $B^*=B_1^*-\ic B_2^*$, $M^*=M_1^*-\ic M_2^*$, $H^*=H_1^*-\ic H_2^*$, so we have $$\begin{cases} u_1+\ic u_2=(B_1+\ic B_2)(u_1+\ic u_2)+(M_1+\ic M_2)\sigma+(F_1+\ic F_2)\\ p_1+\ic p_2=(B_1^*-\ic B_2^*)(p_1+\ic p_2)+ (H_1^*-\ic H_2^*)[(H_1+\ic H_2)(u_1+\ic u_2)-(f_1+\ic f_2)]\\ \nabla J(\sigma)=\Re[(M_1^*-\ic M_2^*)(p_1+\ic p_2)], \end{cases}$$ which implies $$\begin{cases} u_1=B_1u_1-B_2u_2+M_1\sigma+F_1 \\ u_2=B_2u_1+B_1u_2+M_2\sigma+F_2 \\ p_1=B_1^*p_1+B_2^*p_2+(H_1^*H_1+H_2^*H_2)u_1+(H_2^*H_1-H_1^*H_2)u_2-(H_1^*f_1+H_2^*f_2) \\ p_2=-B_2^*p_1+B_1^*p_2-(H_2^*H_1-H_1^*H_2)u_1+(H_1^*H_1+H_2^*H_2)u_2-(-H_2^*f_1+H_1^*f_2) \\ \nabla J(\sigma)=M_1^*p_1+M_2^*p_2. \end{cases}$$ By setting $$ \tilde{u}=\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}, \tilde{p}=\begin{bmatrix} p_1 \\ p_2 \end{bmatrix}, \tilde{B}=\begin{bmatrix} B_1 & -B_2 \\ B_2 & B_1 \end{bmatrix}, \tilde{M}=\begin{bmatrix} M_1 \\ M_2 \end{bmatrix}, \tilde{F}=\begin{bmatrix} F_1 \\ F_2 \end{bmatrix}, \tilde{H}=\begin{bmatrix} H_1 & -H_2\\ H_2 & H_1 \end{bmatrix}, \tilde{f}=\begin{bmatrix} f_1 \\ f_2 \end{bmatrix} $$ we have $$\begin{cases} \tilde{u}=\tilde{B}\tilde{u}+\tilde{M}\sigma+\tilde{F}\\ \tilde{p}=\tilde{B}^*\tilde{p}+\tilde{H}^*(\tilde{H}\tilde{u}-\tilde{f})\\ \nabla J(\sigma)=\tilde{M}^*\tilde{p}, \end{cases}$$ that has the same structure as the inverse problem at the beginning of Section \ref{sec:intro-k-shot}. Finally, we close this section with two lemmas that relate the assumptions on the inverse problem with complex state variable to the corresponding assumptions on the transformed inverse problem with real state variable. \begin{lemma} $\mathrm{Spec}(\tilde{B})=\mathrm{Spec}(B)\cup\overline{\mathrm{Spec}(B)}$. 
\end{lemma} \begin{proof} By writing \begin{equation}\label{eq:B1B2} \tilde{B}=\begin{bmatrix} B_1 & -B_2 \\ B_2 & B_1 \end{bmatrix} =\underbrace{ \begin{bmatrix} I & I \\ \ic I & -\ic I \end{bmatrix}}_{C^{-1}} \begin{bmatrix} \overline{B} & 0 \\ 0 & B \end{bmatrix} \underbrace{ \begin{bmatrix} \frac{1}{2}I & -\frac{\ic}{2}I \\ \frac{1}{2}I & \frac{\ic}{2}I \end{bmatrix}}_{C}, \end{equation} we find that $\det(\tilde{B}-\lambda I)=\det(\overline{B}-\lambda I)\det(B-\lambda I)$. The conclusion is then deduced thanks to the fact that $\mathrm{Spec}(\overline{B})=\overline{\mathrm{Spec}(B)}$. \end{proof} \begin{lemma} Assume that $\rho(B)<1$ and that $H(I-B)^{-1}M$ is injective. Then $\rho(\tilde{B})<1$, and $\tilde{H}(\tilde{I}-\tilde{B})^{-1}\tilde{M}$ is injective, where $\tilde{I}\in\R^{2n_u\times 2n_u}$ is the identity matrix. \end{lemma} \begin{proof} The previous lemma says that $\rho(\tilde{B})=\rho(B)<1$. Therefore $(\tilde{I}-\tilde{B})^{-1}$ is well-defined and, thanks to \eqref{eq:B1B2}, $$\begin{array}{ll} (\tilde{I}-\tilde{B})^{-1}&=\underbrace{ \begin{bmatrix} I & I \\ \ic I & -\ic I \end{bmatrix}}_{C^{-1}}\begin{bmatrix} (I-\overline{B})^{-1} & 0 \\ 0 & (I-B)^{-1} \end{bmatrix} \underbrace{\begin{bmatrix} \frac{1}{2}I & -\frac{\ic}{2}I \\ \frac{1}{2}I & \frac{\ic}{2}I \end{bmatrix}}_{C} \\ &=\frac{1}{2}\begin{bmatrix} (I-\overline{B})^{-1}+(I-B)^{-1} & -\ic(I-\overline{B})^{-1}+\ic (I-B)^{-1} \\ \ic(I-\overline{B})^{-1}-\ic(I-B)^{-1} & (I-\overline{B})^{-1}+(I-B)^{-1} \end{bmatrix}. \end{array}$$ Now we have $$\begin{array}{ll} \tilde{H}(\tilde{I}-\tilde{B})^{-1}\tilde{M} &=\frac{1}{2}\begin{bmatrix} H_1 & -H_2\\ H_2 & H_1 \end{bmatrix} \begin{bmatrix} (I-\overline{B})^{-1}+(I-B)^{-1} & -\ic(I-\overline{B})^{-1}+\ic (I-B)^{-1} \\ \ic(I-\overline{B})^{-1}-\ic(I-B)^{-1} & (I-\overline{B})^{-1}+(I-B)^{-1} \end{bmatrix} \begin{bmatrix} M_1 \\ M_2 \end{bmatrix} \\ &=\frac{1}{2}\begin{bmatrix} \overline{H}(I-\overline{B})^{-1}+H(I-B)^{-1} & -\ic\overline{H}(I-\overline{B})^{-1}+\ic H(I-B)^{-1} \\ \ic\overline{H}(I-\overline{B})^{-1}-\ic H(I-B)^{-1} & \overline{H}(I-\overline{B})^{-1}+H(I-B)^{-1} \end{bmatrix} \begin{bmatrix} M_1 \\ M_2 \end{bmatrix} \\ &=\frac{1}{2}\begin{bmatrix} \overline{H}(I-\overline{B})^{-1}\overline{M}+H(I-B)^{-1}M \\ \ic \overline{H}(I-\overline{B})^{-1}\overline{M}-\ic H(I-B)^{-1}M \end{bmatrix}. \end{array} $$ Now assume that there exists $x\in\mathbb{C}^{n_\sigma}$ such that $\tilde{H}(\tilde{I}-\tilde{B})^{-1}\tilde{M}x=0$, then $$ \begin{cases} [\overline{H}(I-\overline{B})^{-1}\overline{M}+H(I-B)^{-1}M]x=0\\ [\ic \overline{H}(I-\overline{B})^{-1}\overline{M}-\ic H(I-B)^{-1}M]x=0 \end{cases} $$ or, equivalently, $$ \begin{cases} [\overline{H}(I-\overline{B})^{-1}\overline{M}+H(I-B)^{-1}M]x=0\\ [-\overline{H}(I-\overline{B})^{-1}\overline{M}+H(I-B)^{-1}M]x=0. \end{cases} $$ By summing these two equations we deduce that $H(I-B)^{-1}Mx=0$, then $x=0$ thanks to the injectivity of $H(I-B)^{-1}M$. \end{proof} \section{Numerical experiments} \label{sec:num-exp} Let us introduce a toy model to illustrate numerically the performance of the different methods. Given $\Omega\subset\R^n$ an open bounded Lipschitz domain, we consider the direct problem for the linearized scattered field $u \in\mathbb{H}^2(\Omega)$ given by the Helmholtz equation \begin{equation} \label{u-toymodel} \left\{ \begin{array}{ll} \dive(\tilde{\sigma}_0\nabla u)+\tilde{k}^2u=\dive(\sigma\nabla u_0), &\text{in }\Omega, \\ u=0, &\text{on }\pa\Omega, \end{array} \right. 
\end{equation} where the incident field $u_0:\Omega\to\R$ satisfies \begin{equation} \label{u0-toymodel} \left\{ \begin{array}{ll} \dive(\tilde{\sigma}_0\nabla u_0)+\tilde{k}^2u_0=0, &\text{in }\Omega, \\ u_0=f, &\text{on }\pa \Omega \end{array} \right. \end{equation} with the datum $f:\partial\Omega\to\R$. Here $\sigma:\overline{\Omega}\to\R$ is such that $\sigma\big|_{\pa\Omega}=0$; $\tilde{\sigma}_0=\sigma_0+\delta\sigma_{\mathrm{r}}$ is a given function with $\delta\ge 0$ and random $\sigma_{\mathrm{r}}$. More precisely, given $\tilde{\sigma}_0$ and $f$, we solve for $u_0=u_0(f)$ in \eqref{u0-toymodel}, then insert $u_0$ into \eqref{u-toymodel} to solve for $u=u(\sigma)$. The variational formulations for $u$ and $u_0$ are respectively \begin{equation} \label{vf-u} \int_{\Omega}\tilde{\sigma}_0\nabla u\cdot\nabla v-\int_\Omega \tilde{k}^2uv=\int_{\Omega}\sigma\nabla u_0\cdot\nabla v, \quad\forall v\in\mathbb{H}^1_0(\Omega)\quad\text{and }u=0 \text{ on }\pa\Omega, \end{equation} \begin{equation} \label{vf-u0} \int_{\Omega}\tilde{\sigma}_0\nabla u_{0}\cdot\nabla v-\int_\Omega \tilde{k}^2u_0v=0, \quad\forall v\in\mathbb{H}^1_0(\Omega)\quad\text{and }u_0=f\text{ on }\pa\Omega. \end{equation} We are interested in the inverse problem of finding $\sigma$ from the measurement $Hu(\sigma)$ where $Hu \coloneqq \tilde{\sigma}_0\frac{\pa u}{\pa\nu}\big|_{\pa\Omega}$. To solve this inverse problem we use the method of least squares. Denoting by $\sigma^\text{ex}$ the exact $\sigma$ and $g=\tilde{\sigma}_0\frac{\pa u(\sigma^\text{ex})}{\pa\nu}\big|_{\pa\Omega}$ the corresponding measurement, we consider the cost functional $J(\sigma)=\frac{1}{2}\norm{Hu(\sigma)-g}^2_{\mathcal{L}^2(\pa\Omega)}=\frac{1}{2}\int_{\pa\Omega}(\tilde{\sigma}_0\frac{\pa u(\sigma)}{\pa\nu}-g)^2$. The Lagrangian technique allows us to compute the gradient $\nabla_\sigma J(\sigma)=-\nabla u_0\cdot\nabla p(\sigma)$, where the adjoint state $p=p(\sigma)$ satisfies \begin{equation} \label{vf-p} \int_{\Omega}\tilde{\sigma}_0\nabla p\cdot\nabla v-\int_\Omega\tilde{k}^2pv=0, \quad\forall v\in\mathbb{H}^1_0(\Omega)\quad\text{and }p=\left(\tilde{\sigma}_0\frac{\pa u(\sigma)}{\pa\nu}\bigg|_{\pa\Omega}-g\right)\text{ on }\pa\Omega. \end{equation} By discretizing $u$ by $\mathbb{P}^1$ finite elements on a mesh $\mathcal{T}_h^u$ of $\Omega$, and $\sigma$ by $\mathbb{P}^0$ finite elements on a coarser mesh $\mathcal{T}_h^\sigma$ of $\Omega$, the discretization of \eqref{vf-u} can be written as the linear system $A_1\vec u=A_2\vec\sigma$, where $\vec u\in\R^{n_u}$, $\vec \sigma\in\R^{n_\sigma}$. More precisely, $A_1$ and $A_2$ are respectively issued from the discretization of $\int_{\Omega}\tilde{\sigma}_0\nabla u\cdot\nabla v-\int_\Omega \tilde{k}^2uv$ and $\int_{\Omega}\sigma\nabla u_0\cdot\nabla v$, where the Dirichlet boundary conditions are imposed by the penalty method. To rewrite the system in the form \eqref{pbdirect}, we consider the naive splitting $A_1=A_{11}+\delta A_{12}$, where $A_{11}$ and $A_{12}$ are respectively issued from the discretization of $\int_{\Omega}\sigma_0\nabla u\cdot\nabla v-\int_\Omega \tilde{k}^2uv$ and $\int_{\Omega}\sigma_{\mathrm{r}}\nabla u\cdot\nabla v$. Then we get $$\vec{u}=A_{11}^{-1}(-\delta A_{12}\vec{u}+A_2\vec{\sigma})\quad\text{ and } \vec{u}=0\text{ on }\pa\Omega$$ and $$\vec{p}=A_{11}^{-1}\left(-\delta A_{12}\vec{p}\right)\quad\text{ and } \vec{p}=H\vec{u}-\vec{g}\text{ on }\pa\Omega$$ where $H\in\R^{n_f\times n_u}$ is the discretization of the above operator $H$ by abuse of notation. 
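For the reader's convenience, a minimal NumPy sketch of this discrete fixed-point structure is given below; the matrices $A_{11}$, $A_{12}$, $A_2$ are small random stand-ins for the finite element blocks described above (names and sizes are purely illustrative, not the actual assembled matrices), and the assembled operators correspond to the choice $B=-\delta A_{11}^{-1}A_{12}$, $M=A_{11}^{-1}A_2$ made next.
\begin{verbatim}
# Minimal sketch (illustrative sizes; in practice A11, A12, A2 come from
# the finite element assembly described above). We form the operators
#   B = -delta * A11^{-1} A12,   M = A11^{-1} A2,   F = 0,
# and check the contraction condition delta * ||A11^{-1} A12||_2 < 1.
import numpy as np

rng = np.random.default_rng(0)
n_u, n_sigma = 40, 6     # illustrative sizes (in the experiments n_u = 5853)
delta = 0.01             # value used in the experiments below

A11 = np.eye(n_u) + 0.1 * rng.standard_normal((n_u, n_u))  # stand-in blocks
A12 = rng.standard_normal((n_u, n_u))
A2  = rng.standard_normal((n_u, n_sigma))

A11_inv_A12 = np.linalg.solve(A11, A12)  # direct solve, as in the experiments
B = -delta * A11_inv_A12
M = np.linalg.solve(A11, A2)

print("delta * ||A11^{-1} A12||_2 =",
      delta * np.linalg.norm(A11_inv_A12, 2))
print("||B||_2 =", np.linalg.norm(B, 2),
      "(needed < 1 for the bounds above)")
\end{verbatim}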
Choosing $\delta$ such that $\delta\norm{A_{11}^{-1}A_{12}}_2<1$, we consider \eqref{direct-inv-prb} with $B=-\delta A_{11}^{-1}A_{12}$, $M=A_{11}^{-1}A_2$, $F=0$. The application of $A_{11}^{-1}$, a matrix of the same size as $A_1$, is done by a direct solver; more practical fixed point iterations will be investigated in the future. \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{domain.png} \caption{Domain with six source points for the numerical experiments. The unknown $\sigma$ is supported on the three squares.} \label{fig:domain} \end{figure} We then perform some numerical experiments in FreeFEM \cite{FreeFEM} with the following setting: \begin{itemize} \item Wavenumber $\tilde{k}=2\pi$, $\sigma_0=1$, $\delta=0.01$, $\sigma_{\mathrm{r}}$ is a random real function with range in the interval $[1,2]$. \item Wavelength $\lambda=\frac{2\pi}{\tilde{k}}\sqrt{\sigma_0}=1$, mesh size $h=\frac{\lambda}{20}=0.05$. The domain $\Omega$ is the disk shown in Figure \ref{fig:domain}, where the squares are the support of the function $\sigma$. Here $n_u=5853$, $n_\sigma=6$. \item We test with $6$ data $f$ given by the zero-order Bessel function of the second kind centered at the points shown in Figure~\ref{fig:domain}, and the cost functional is the normalized sum of the contributions corresponding to the different data. \item We take $\sigma^\text{ex}=10$ in every square and $0$ otherwise. The initial guess for the inverse problem is $12$ in every square and $0$ otherwise. \item For the first iteration, we perform a line search to adapt the descent step $\tau$, using a direct solver for the forward and adjoint problems. \item The stopping rule for the outer iteration is based on the relative value of the cost functional and on the relative norm of the gradient with a tolerance of $10^{-5}$. \end{itemize} \noindent Recall that $k$ is the number of inner iterations on the direct and adjoint problems. We are interested in two experiments. 
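Before discussing the experiments, we note that for $\norm{B}<1$ the explicit bounds of Propositions~\ref{tau n k>1 cpl} and~\ref{tau n+1 k>1 cpl} are straightforward to evaluate. The following sketch is a direct transcription of the formulas for $\chi_1,\dots,\chi_4$ and $\psi_1,\psi_2,\psi_3$; the free parameters $\delta_0$ and $\theta_0$ are set to arbitrary admissible values for illustration (they may be tuned), and we recall that the final sufficient conditions of Theorems~\ref{th:tau n k>1 all} and~\ref{th:tau n+1 k>1 all} also take into account the bounds for real eigenvalues.
\begin{verbatim}
# Transcription of the step-size bounds of the propositions above (case
# ||B|| < 1). delta0 and theta0 are free parameters, set here to arbitrary
# admissible values; k is the number of inner iterations and b = ||B||.
import math

def chi_bound(k, b, delta0=1.0, theta0=math.pi/12):
    # shifted k-step: chi(k,b) = min(chi_1,...,chi_4), 0 < theta0 < pi/6
    c = (1 + 2*delta0*math.sin(5*theta0/2) + delta0**2) / math.cos(5*theta0/2)**2
    w = 1 - k*b**(k-1) + (k-1)*b**k            # recurring factor
    num = (1 - b)**2 * (1 - b**k)**2
    chi1 = num / (4*b**(2*k) + math.sqrt(2)*w*(1 + b**k)**2)
    chi2 = num / (((1 - b**k)**2 / (2*math.sin(theta0/2)) + math.sqrt(2)*w)
                  * (1 + b**k)**2)
    chi3 = num / (2*c*math.sin(theta0/2)/delta0 * b**(2*k)
                  + w*(math.sqrt(c)/delta0 * (1 + b**(2*k))
                       + 2*max(math.sqrt(c)/delta0,
                               math.sqrt(c)/math.cos(3*theta0)) * b**k))
    chi4 = ((math.sin(math.pi/2 - 3*theta0) + math.cos(2*theta0)) * num
            / ((1 - b**k)**2 + 2*w*(1 + b**k)**2))
    return min(chi1, chi2, chi3, chi4)

def psi_bound(k, b, delta0=1.0, theta0=math.pi/8):
    # k-step: psi(k,b) = min(psi_1, psi_2, psi_3), 0 < theta0 < pi/4
    c = (1 + 2*delta0*math.sin(3*theta0/2) + delta0**2) / math.cos(3*theta0/2)**2
    w = 1 - k*b**(k-1) + (k-1)*b**k
    num = (1 - b)**2 * (1 - b**k)**2
    psi1 = num / (4*b**(2*k) + math.sqrt(2)*w*(1 + b**k)**2)
    psi2 = num / (((1 - b**k)**2 / (2*math.sin(theta0/2)) + math.sqrt(2)*w)
                  * (1 + b**k)**2)
    psi3 = num / (2*c*math.sin(theta0/2)/delta0 * b**(2*k)
                  + w*(math.sqrt(c)/delta0 * (1 + b**(2*k))
                       + 2*max(math.sqrt(c)/delta0,
                               math.sqrt(c)/math.cos(2*theta0)) * b**k))
    return min(psi1, psi2, psi3)

# Example: sufficient descent steps for illustrative values of k, ||B||, ||H||, ||M||.
k, normB, normH, normM = 3, 0.5, 1.0, 1.0
print("shifted k-step: tau <", chi_bound(k, normB) / (normH**2 * normM**2))
print("k-step:         tau <", psi_bound(k, normB) / (normH**2 * normM**2))
\end{verbatim}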
\begin{figure}[htbp] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{usual_fix_k_1.png} \caption{Convergence curves of usual gradient descent and $1$-step one-shot for different descent steps $\tau$.} \label{fig:usual_fix_k_1} \end{subfigure} \hfill \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{usual_fix_k_2.png} \caption{Convergence curves of usual gradient descent and $2$-step one-shot for different descent steps $\tau$.} \label{fig:usual_fix_k_2} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{usual_fix_tau_2.png} \caption{Convergence curves of usual gradient descent and $k$-step one-shot for different $k$ with $\tau=2$.} \label{fig:usual_fix_tau_2} \end{subfigure} \hfill \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{usual_fix_tau_2-5.png} \caption{Convergence curves of usual gradient descent and $k$-step one-shot for different $k$ with $\tau=2.5$.} \label{fig:usual_fix_tau_2.5} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{v2_usual_fix_tau_2.png} \caption{Convergence curves of $k$-step one-shot for different $k$ with $\tau=2$.} \label{fig:v2_usual_fix_tau_2} \end{subfigure} \hfill \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{v2_usual_fix_tau_2-5.png} \caption{Convergence curves of $k$-step one-shot for different $k$ with $\tau=2.5$.} \label{fig:v2_usual_fix_tau_2.5} \end{subfigure} \caption{Convergence curves of usual gradient descent and $k$-step one-shot.} \label{fig:usual_fix_k,tau} \end{figure} In the first experiment, we study the dependence on the descent step $\tau$. In Figures \ref{fig:usual_fix_k_1} and \ref{fig:usual_fix_k_2} we respectively fix $k=1$ and $k=2$ and compare $k$-step one-shot methods with the usual gradient descent method. On the horizontal axis we indicate the (outer) iteration number $n$ in \eqref{usualgd} and \eqref{alg:k-shot n+1}. We can verify that for sufficiently small $\tau$, both one-shot methods converge. In particular, for $\tau=2$, while gradient descent and $2$-step one-shot converge, $1$-step one-shot diverges. Oscillations may appear on the convergence curve for certain values of $\tau$, but they gradually vanish as $\tau$ gets smaller. For sufficiently small $\tau$, the convergence curves of both one-shot methods are comparable to the one of gradient descent. In the second experiment, we study the dependence on the number of inner iterations $k$, for fixed $\tau$. First (Figures~\ref{fig:usual_fix_tau_2}--\ref{fig:usual_fix_tau_2.5}), we investigate for which $k$ the convergence curve of $k$-step one-shot is comparable with the one of usual gradient descent. As in the previous pictures, on the horizontal axis we indicate the (outer) iteration number $n$ in \eqref{usualgd} and \eqref{alg:k-shot n+1}. For $\tau=2$ (see Figure \ref{fig:usual_fix_tau_2}), we observe that for $k=3,4$ the convergence curves of $k$-step one-shot are close to the one of usual gradient descent. Note that with $3$ inner iterations the $\mathcal{L}^2$ error between $u^n$ and the exact solution to the forward problem ranges between $4.3\cdot10^{-6}$ and $0.0136$ for different $n$ in \eqref{alg:k-shot n+1}; in fact this error is rather significant at the beginning, and then it tends to decrease as we get closer to convergence for the parameter $\sigma$. Therefore, incomplete inner iterations on the forward problem are enough to obtain good precision on the solution of the inverse problem. 
In the very particular case $\tau =2.5$ (see Figure \ref{fig:usual_fix_tau_2.5}), we observe an interesting phenomenon: when $k=3,5,10$, with $k$-step one-shot the cost functional decreases even faster than with usual gradient descent. For bigger $k$, for example $k=14$, the convergence curve of one-shot is close to the one of usual gradient descent, as expected. Next (Figures~\ref{fig:v2_usual_fix_tau_2}--\ref{fig:v2_usual_fix_tau_2.5}), since the overall cost of the $k$-step one-shot method increases with $k$, we indicate on the horizontal axis the accumulated inner iteration number, which adds up the $k$ inner iterations performed from one outer iteration to the next. More precisely, because at the first outer iteration we perform a line search using a direct solver, we set the first accumulated inner iteration number to $1$; for the following outer iterations $n\ge 2$, the accumulated inner iteration number is set to $1+(n-1)k$. In Figures~\ref{fig:v2_usual_fix_tau_2}--\ref{fig:v2_usual_fix_tau_2.5} we replot the results for the converging $k$-step one-shot methods of Figures~\ref{fig:usual_fix_tau_2}--\ref{fig:usual_fix_tau_2.5} with respect to the accumulated inner iteration number. For $\tau=2$ (see Figure \ref{fig:v2_usual_fix_tau_2}), while $k=2$ presents some oscillations, quite interestingly it appears that $k=3$ gives a faster decrease of the cost functional compared to $k=4$, at least after the first iterations. For $\tau=2.5$ (see Figure \ref{fig:v2_usual_fix_tau_2.5}) we observe that $k=3$ is enough for the decrease of the cost functional, but with some oscillations, and the higher values of $k$ considered again appear to give a slower decrease. \begin{figure}[htbp] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{shift_fix_k_1.png} \caption{Convergence curves of shifted gradient descent and shifted $1$-step one-shot for different descent steps $\tau$.} \label{fig:shift_fix_k_1} \end{subfigure} \hfill \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{shift_fix_k_2.png} \caption{Convergence curves of shifted gradient descent and shifted $2$-step one-shot for different descent steps $\tau$.} \label{fig:shift_fix_k_2} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{shift_fix_tau_0-25.png} \caption{Convergence curves of shifted gradient descent and shifted $k$-step one-shot for different $k$ with $\tau=0.25$.} \label{fig:shift_fix_tau_0.25} \end{subfigure} \hfill \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{shift_fix_tau_0-5.png} \caption{Convergence curves of shifted gradient descent and shifted $k$-step one-shot for different $k$ with $\tau=0.5$.} \label{fig:shift_fix_tau_0.5} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{v2_shift_fix_tau_0-25.png} \caption{Convergence curves of shifted $k$-step one-shot for different $k$ with $\tau=0.25$.} \label{fig:v2_shift_fix_tau_0.25} \end{subfigure} \hfill \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{v2_shift_fix_tau_0-5.png} \caption{Convergence curves of shifted $k$-step one-shot for different $k$ with $\tau=0.5$.} \label{fig:v2_shift_fix_tau_0.5} \end{subfigure} \caption{Convergence curves of shifted gradient descent and shifted $k$-step one-shot.} \label{fig:shift_fix_k,tau} \end{figure} A similar behavior can be observed for the shifted methods in Figure \ref{fig:shift_fix_k,tau}. 
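For reference, the accumulated inner iteration numbers used on the horizontal axis of Figures~\ref{fig:v2_usual_fix_tau_2} and~\ref{fig:v2_usual_fix_tau_2.5} (and analogously for the shifted methods) are obtained from the rule described above, as in the following short sketch.
\begin{verbatim}
# Accumulated inner iteration number: 1 at the first outer iteration (direct
# solves are used for the line search there), then 1 + (n-1)*k for n >= 2.
def accumulated_inner_iterations(n_outer, k):
    return [1 + (n - 1) * k for n in range(1, n_outer + 1)]

print(accumulated_inner_iterations(5, 3))   # [1, 4, 7, 10, 13]
\end{verbatim}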
\begin{figure}[htbp] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{all_fix_tau_0-5.png} \caption{Convergence curves with $\tau=0.5$.} \label{fig:all_fix_tau_0.5} \end{subfigure} \hfill \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\linewidth]{all_fix_tau_1-3.png} \caption{Convergence curves with $\tau=1.3$.} \label{fig:all_fix_tau_1.3} \end{subfigure} \caption{Comparison of usual gradient descent and $k$-step one-shot with shifted gradient descent and shifted $k$-step one-shot.} \label{fig:all_fix_tau} \end{figure} Finally, we fix two particular values of $\tau$ and compare all considered methods in Figure \ref{fig:all_fix_tau}. We note that shifted methods present more oscillations compared to non-shifted ones, especially for larger $\tau$. \section{Conclusion} We have proved sufficient conditions on the descent step for the convergence of two variants of multi-step one-shot methods. Although these bounds on the descent step are not optimal, to our knowledge no other bounds, explicit in the number of inner iterations, are available in the literature for multi-step one-shot methods. Furthermore, we have shown in the numerical experiments that very few inner iterations on the forward and adjoint problems are enough to guarantee good convergence of the inversion algorithm. These encouraging numerical results are preliminary in the sense that the considered fixed point iteration is not a practical one, since it involves a direct solve of a problem of the same size as the original forward problem. In the future we will investigate iterative solvers based on domain decomposition methods (see e.g.~\cite{dolean-ddm15}), which are well adapted to large-scale problems. In addition, fixed point iterations could be replaced by more efficient Krylov subspace methods, such as conjugate gradient or GMRES. Another interesting issue is how to adapt the number of inner iterations in the course of the outer iterations. Moreover, based on this linear inverse problem study, we plan to tackle non-linear and time-dependent inverse problems. \bibliographystyle{plain}